
Intel® Open Network Platform Server Reference Architecture (Release 1.2)
NFV/SDN Solutions with Intel® Open Network Platform Server

Intel® ONP Server Reference Architecture Solutions Guide

Document Revision 1.2, December 2014


Revision History

Revision  Date                Comments
1.2       December 15, 2014   Document prepared for release 1.2 of Intel® Open Network Platform Server.
1.1.1     October 29, 2014    Changed two links to the following:
                              https://01.org/sites/default/files/page/vbng-scripts.tgz
                              https://01.org/sites/default/files/page/qat_patches_netkeyshim.zip
1.1       September 18, 2014  Minor edits throughout document.
1.0       August 21, 2014     Initial document for release of Intel® Open Network Platform Server 1.1.


Contents

1.0 Audience and Purpose
2.0 Summary
    2.1 Network Services Examples
        2.1.1 Suricata (Next Generation IDS/IPS Engine)
        2.1.2 vBNG (Broadband Network Gateway)
3.0 Hardware Components
4.0 Software Versions
    4.1 Obtaining Software Ingredients
5.0 Installation and Configuration Guide
    5.1 Instructions Common to Compute and Controller Nodes
        5.1.1 BIOS Settings
        5.1.2 Operating System Installation and Configuration
    5.2 Controller Node Setup
        5.2.1 OpenStack (Juno)
    5.3 Compute Node Setup
        5.3.1 Host Configuration
    5.4 vIPS
        5.4.1 Network Configuration for non-vIPS Guests
6.0 Testing the Setup
    6.1 Preparation with OpenStack
        6.1.1 Deploying Virtual Machines
        6.1.2 Non-Uniform Memory Access (NUMA) Placement and SR-IOV Pass-through for OpenStack
    6.2 Using OpenDaylight
        6.2.1 Preparing the OpenDaylight Controller
    6.3 Border Network Gateway
        6.3.1 Installation and Configuration Inside the VM
        6.3.2 Installation and Configuration of the Back-to-Back Host (Packet Generator)
        6.3.3 Extra Preparations on the Compute Node
Appendix A Additional OpenDaylight Information
    A.1 Create VMs Using DevStack Horizon GUI
Appendix B BNG as an Appliance
Appendix C Glossary
Appendix D References


1.0 Audience and Purpose

The primary audiences for this document are architects and engineers implementing the Intel® Open Network Platform Server Reference Architecture using open source software. Software ingredients include:

• DevStack

• OpenStack

• OpenDaylight

• Data Plane Development Kit (DPDK)

• Intel® DPDK Accelerated vSwitch

• Open vSwitch

• Fedora 20

This document provides a guide for integration and performance characterization using the Intel® Open Network Platform Server (Intel ONP Server). Content includes high-level architecture, setup and configuration procedures, integration learnings, and a set of baseline performance data. This information is intended to help architects and engineers evaluate Network Function Virtualization (NFV) and Software Defined Network (SDN) solutions.

An understanding of system performance is required to develop solutions that meet the demanding requirements of the telecom industry and transform telecom networks. Workload examples are described and are useful for evaluating other NFV workloads.

Ingredient versions, integration procedures, configuration parameters, and test methodologies all influence performance. The performance data provided here does not represent best possible performance, but rather provides a baseline of what is possible using "out-of-box" open source software ingredients.

The purpose of documenting configurations is not to imply any preferred methods. However, providing a baseline configuration of well-tested procedures can help to achieve optimal system performance when developing an NFV/SDN solution.


2.0 Summary

The Intel ONP Server uses open source software to help accelerate SDN and NFV commercialization with the latest Intel architecture communications platform.

This document describes how to set up and configure the controller and compute nodes for evaluating and developing NFV/SDN solutions using the Intel® Open Network Platform ingredients.

Platform hardware is based on an Intel® Xeon® DP server with the following:

• Intel® Xeon® Processor E5-2697 v3

• Intel® 82599 10 GbE Controller

The host operating system is Fedora 20 with Qemu-kvm virtualization technology. Software ingredients include Data Plane Development Kit (DPDK), Open vSwitch, Intel® DPDK Accelerated vSwitch, OpenStack, and OpenDaylight.

Figure 2-1 Intel ONP Server - Hardware and Software Ingredients


Figure 2-2 shows a generic SDN/NFV setup. In this configuration, the Orchestrator and Controller (management and control plane) and the compute node (data plane) run on different server nodes. Note that many variations of this setup can be deployed.

The test cases described in this document were designed to illustrate certain baseline performance and functionality using the specified ingredients, configurations, and specific test methodology. A simple network topology was used, as shown in Figure 2-2.

Test cases are designed to:

• Baseline packet processing (such as data plane) performance with host and VM configurations

• Verify communication between controller and compute nodes

• Validate basic controller functionality

Figure 2-2 Generic Setup with Controller and Two Compute Nodes


2.1 Network Services Examples

The following examples of network services are included as use cases that have been tested with the Intel® Open Network Platform Server Reference Architecture.

2.1.1 Suricata (Next Generation IDS/IPS Engine)

Suricata is a high-performance network IDS, IPS, and network security monitoring engine developed by the OISF, its supporting vendors, and the community.

http://suricata-ids.org

2.1.2 vBNG (Broadband Network Gateway)

Intel Data Plane Performance Demonstrators - Border Network Gateway (BNG) using DPDK:

https://01.org/intel-data-plane-performance-demonstrators/downloads/bng-application-v013

A Broadband (or Border) Network Gateway may also be known as a Broadband Remote Access Server (BRAS); it routes traffic to and from broadband remote access devices such as digital subscriber line access multiplexers (DSLAM). This network function is included as an example of a workload that can be virtualized on the Intel ONP Server.

Additional information on the performance characterization of this vBNG implementation can be found at:

http://networkbuilders.intel.com/docs/Network_Builders_RA_vBRAS_Final.pdf

Refer to Section 6.3, Border Network Gateway, for information on setting up and testing the vBNG application with the Intel® DPDK Accelerated vSwitch, or to Appendix B for more information on running the BNG as an appliance.


3.0 Hardware Components

Table 3-1 Hardware Ingredients (Grizzly Pass)

Platform: Intel® Server Board 2U, 8x3.5 SATA, 2x750 W, 2x HS rails, Intel R2308GZ4GC; 240 GB SSD, 2.5in SATA 6 Gb/s, Intel Wolfsville SSDSC2BB240G401, DC S3500 Series. Notes: Grizzly Pass Xeon DP server (2 CPU sockets).

Processors: Intel® Xeon® Processor E5-2680 v2, LGA2011, 2.8 GHz, 25 MB, 115 W, 10 cores. Notes: Ivy Bridge Socket-R (EP), 10 cores, 2.8 GHz, 115 W, 2.5 MB per-core LLC, 8.0 GT/s QPI, DDR3-1867, HT, turbo. Long product availability.

Cores: 10 physical cores per CPU. Notes: 20 hyper-threaded cores per CPU, for 40 total cores.

Memory: 8 GB 1600 Reg ECC 1.5 V DDR3, Kingston KVR16R11S4/8I, Romley. Notes: 64 GB RAM (8x 8 GB).

NICs (82599): 2x Intel® 82599 10 GbE Controller (Niantic). Notes: NICs are on socket zero (3 PCIe slots available on socket 0).

BIOS: SE5C600.86B.02.01.0002.082220131453, release date 08/22/2013, BIOS revision 46. Notes: Intel® Virtualization Technology for Directed I/O (Intel® VT-d) and Hyper-Threading enabled.

Table 3-2 Hardware Ingredients (Wildcat Pass)

Platform: Intel® Server Board S2600WTT, 1100 W power supply; 120 GB SSD, 2.5in SATA 6 Gb/s, Intel Wolfsville SSDSC2BB120G4. Notes: Wildcat Pass Xeon DP server (2 CPU sockets).

Processors: Intel® Xeon® Processor E5-2697 v3, 2.6 GHz, 35 MB, 145 W, 14 cores. Notes: Haswell, 14 cores, 2.6 GHz, 145 W, 35 MB total cache per processor, 9.6 GT/s QPI, DDR4-1600/1866/2133.

Cores: 14 physical cores per CPU. Notes: 28 hyper-threaded cores per CPU, for 56 total cores.

Memory: 8 GB DDR4 RDIMM, Crucial CT8G4RFS423. Notes: 64 GB RAM (8x 8 GB).

NICs (82599): 2x Intel® 82599 10 GbE Controller (Niantic). Notes: NICs are on socket zero.

BIOS: GRNDSDP1.86B.0038.R01.1409040644, release date 09/04/2014. Notes: Intel® Virtualization Technology for Directed I/O (Intel® VT-d) enabled only for SR-IOV PCI pass-through tests; Hyper-Threading enabled, but disabled for benchmark testing.


4.0 Software Versions

Table 4-1 Software Versions

Fedora 20 x86_64: Host OS; 3.15.6-200.fc20.x86_64

Qemu-kvm: Virtualization technology; modified QEMU 1.6.2 (bundled with Intel® DPDK Accelerated vSwitch)

Data Plane Development Kit (DPDK): Network stack bypass and libraries for packet processing, includes user space poll mode drivers; 1.7.1

Intel® DPDK Accelerated vSwitch: vSwitch; v1.2.0, commit id 6210bb0a6139b20283de115f87aa7a381b04670f

Open vSwitch: vSwitch; v2.3, commit id b35839f3855e3b812709c6ad1c9278f498aa9935

OpenStack: SDN orchestrator; Juno release + Intel patches (openstack_ovdk.l02-907.zip)

DevStack: Tool for OpenStack deployment; https://github.com/openstack-dev/devstack.git, commit id d6f700db33aeab68916156a98971aef8cfa53a2e

OpenDaylight: SDN controller; Helium-SR1

Suricata: IPS application; Suricata v2.0.4 (current Fedora 20 package)

BNG DPPD: Broadband Network Gateway DPDK Performance Demonstrator application; DPPD v0.13, https://01.org/intel-data-plane-performance-demonstrators/downloads

PktGen: Software network packet generator; v2.7.7


4.1 Obtaining Software Ingredients

Table 4-2 Software Ingredients

Fedora 20: http://download.fedoraproject.org/pub/fedora/linux/releases/20/Fedora/x86_64/iso/Fedora-20-x86_64-DVD.iso. Standard Fedora 20 ISO image.

Data Plane Development Kit (DPDK) (DPDK poll mode driver, sample apps; bundled): http://dpdk.org/git/dpdk, commit id 99213f3827bad956d74e2259d06844012ba287a4. All sub-components in one zip file.

Intel® DPDK Accelerated vSwitch (OVDK) (dpdk-ovs, qemu, ovs-db, vswitchd, ovs_client; bundled): https://github.com/01org/dpdk-ovs.git, commit id 6210bb0a6139b20283de115f87aa7a381b04670f. v1.2.0.

Open vSwitch: https://github.com/openvswitch/ovs.git, commit id b35839f3855e3b812709c6ad1c9278f498aa9935.

OpenStack: Juno release, to be deployed using DevStack (see following row). Three patches downloaded as one tarball; then follow the instructions to deploy the nodes.

DevStack (patches for DevStack and Nova): https://github.com/openstack-dev/devstack.git, commit id d6f700db33aeab68916156a98971aef8cfa53a2e. Then apply to that commit the patches in https://download.01.org/packet-processing/ONPS1.2/openstack_ovdk.l02-907.zip. Two patches downloaded as one tarball; then follow the instructions to deploy.

OpenDaylight: http://nexus.opendaylight.org/content/repositories/opendaylight.release/org/opendaylight/integration/distribution-karaf/0.2.1-Helium-SR1/distribution-karaf-0.2.1-Helium-SR1.tar.gz

Intel® ONP Server Release 1.2 Script (helper scripts to set up SRT 1.2 using DevStack): https://download.01.org/packet-processing/ONPS1.2/onps_server_1_2.tar.gz

BNG DPPD (Broadband Network Gateway DPDK Performance demonstrator): https://01.org/intel-data-plane-performance-demonstrators/dppd-bng-v013.zip

PktGen (software network packet generator): https://github.com/Pktgen/Pktgen-DPDK.git, commit id 5e8633c99e9771467dc26b64a4ff232c7e9fba2a

BNG helper scripts (Intel® ONP for Server configuration scripts for vBNG): https://01.org/sites/default/files/page/vbng-scripts.tgz

Suricata: package from Fedora 20; yum install suricata


5.0 Installation and Configuration Guide

This section describes the installation and configuration instructions to prepare the controller and compute nodes.

5.1 Instructions Common to Compute and Controller Nodes

This section describes how to prepare both the controller and compute nodes with the right BIOS settings and operating system installation. The preferred operating system is Fedora 20, although it is considered relatively easy to use this solutions guide with other Linux distributions.

5.1.1 BIOS Settings

Table 5-1 BIOS Settings

Configuration                                                      Controller Node   Compute Node
Enhanced Intel SpeedStep                                           Enabled           Disabled
Processor C3                                                       Disabled          Disabled
Processor C6                                                       Disabled          Disabled
Intel® Virtualization Technology for Directed I/O (Intel® VT-d)    Disabled          Enabled (OpenStack NUMA placement only)
Intel Hyper-Threading Technology (HTT)                             Enabled           Disabled
MLC Streamer                                                       Enabled           Enabled
MLC Spatial Prefetcher                                             Enabled           Enabled
DCU Instruction Prefetcher                                         Enabled           Enabled
Direct Cache Access (DCA)                                          Enabled           Enabled
CPU Power and Performance Policy                                   Performance       Performance
Intel Turbo Boost                                                  Enabled           Off
Memory RAS and Performance Configuration -> NUMA Optimized         Enabled           Enabled


5.1.2 Operating System Installation and Configuration

Following are some generic instructions for installing and configuring the operating system. Other ways of installing the operating system, such as network installation, PXE boot installation, or USB key installation, are not described in this solutions guide.

5.1.2.1 Getting the Fedora 20 DVD

1. Download the 64-bit Fedora 20 DVD (not Fedora 20 Live Media) from the following site:

http://fedoraproject.org/en/get-fedora#formats

or from the direct URL:

http://download.fedoraproject.org/pub/fedora/linux/releases/20/Fedora/x86_64/iso/Fedora-20-x86_64-DVD.iso

2. Burn the ISO file to DVD and create an installation disk.

5.1.2.2 Fedora 20 Installation

Use the DVD to install Fedora 20. During the installation, click Software selection, then choose the following:

1. C Development Tool and Libraries

2. Development Tools

Also create a user stack and check the box Make this user administrator during the installation. The user stack is used in the OpenStack installation.

Note: Please make sure to download and use the onps_server_1_2.tar.gz tarball. Start with the README file. You'll get instructions on how to use Intel's scripts to automate most of the installation steps described in this section, and this saves you time. When using Intel's scripts, you can jump to Section 5.4 after installing OpenDaylight based on the instructions described in Section 6.2.1.

5.1.2.3 Additional Packages Installation and Upgrade

Some packages are not installed with the standard Fedora 20 installation but are required by Intel® Open Network Platform Software (ONPS) components. These packages should be installed by the user:

git ntp patch socat python-passlib libxslt-devel libffi-devel fuse-devel gluster python-cliff
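If they are not already present, the packages can be installed in one pass, for example (a minimal sketch using yum and the package names listed above; adjust the list if a package name differs on your system):

yum install -y git ntp patch socat python-passlib libxslt-devel libffi-devel fuse-devel gluster python-cliff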

ONPS supports Fedora kernel 3.15.6, which is newer than the native Fedora 20 kernel 3.11.10. To upgrade to 3.15.6, follow these steps:

1. Download the kernel packages:

wget https://kojipkgs.fedoraproject.org/packages/kernel/3.15.6/200.fc20/x86_64/kernel-3.15.6-200.fc20.x86_64.rpm

wget https://kojipkgs.fedoraproject.org/packages/kernel/3.15.6/200.fc20/x86_64/kernel-devel-3.15.6-200.fc20.x86_64.rpm

wget https://kojipkgs.fedoraproject.org/packages/kernel/3.15.6/200.fc20/x86_64/kernel-modules-extra-3.15.6-200.fc20.x86_64.rpm


2. Install the kernel packages:

rpm -i kernel-3.15.6-200.fc20.x86_64.rpm
rpm -i kernel-devel-3.15.6-200.fc20.x86_64.rpm
rpm -i kernel-modules-extra-3.15.6-200.fc20.x86_64.rpm

3. Reboot the system to allow booting into the 3.15.6 kernel.

Note: ONPS depends on libraries provided by your Linux distribution. As such, it is recommended that you regularly update your Linux distribution with the latest bug fixes and security patches to reduce the risk of security vulnerabilities in your systems.

After installing the required packages, the operating system should be updated with the following command:

yum update -y

This command upgrades to the latest kernel that Fedora supports. In order to maintain the kernel version (3.15.6), the yum configuration file needs to be modified with the following command before running yum update:

echo "exclude=kernel" >> /etc/yum.conf

After the update completes, the system needs to be rebooted.

5.1.2.4 Disable and Enable Services

For OpenStack, the following services were disabled: SELinux, firewalld, and NetworkManager. Run the following commands:

sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
systemctl disable firewalld.service
systemctl disable NetworkManager.service

The following services should be enabled: ntp, sshd, and network. Run the following commands:

systemctl enable ntpd.service
systemctl enable ntpdate.service
systemctl enable sshd.service
chkconfig network on

It is important to keep the timing synchronized between all nodes, and it is necessary to use a known NTP server for all nodes. Users can edit /etc/ntp.conf to add a new server and remove default servers. The following example replaces a default NTP server with a local NTP server 10.0.0.12 and comments out the other default servers:

sed -i 's/server 0.fedora.pool.ntp.org iburst/server 10.0.0.12/g' /etc/ntp.conf
sed -i 's/server 1.fedora.pool.ntp.org iburst/# server 1.fedora.pool.ntp.org iburst/g' /etc/ntp.conf
sed -i 's/server 2.fedora.pool.ntp.org iburst/# server 2.fedora.pool.ntp.org iburst/g' /etc/ntp.conf
sed -i 's/server 3.fedora.pool.ntp.org iburst/# server 3.fedora.pool.ntp.org iburst/g' /etc/ntp.conf


5.2 Controller Node Setup

This section describes the controller node setup. It is assumed that the user successfully followed the operating system installation and configuration sections.

Note: Make sure to download and use the onps_server_1_2.tar.gz tarball. Start with the README file. You'll get instructions on how to use Intel's scripts to automate most of the installation steps described in this section, and this saves you time.

5.2.1 OpenStack (Juno)

This section documents features and limitations that are supported with the Intel® DPDK Accelerated vSwitch and OpenStack Juno.

5.2.1.1 Network Requirements

General

At least two networks are required to build the OpenStack infrastructure in a lab environment. One network is used to connect all nodes for OpenStack management (management network), and the other one is a private network exclusively for an OpenStack internal connection (tenant network) between instances (or virtual machines).

One additional network is required for Internet connectivity, because installing OpenStack requires pulling packages from various sources/repositories on the Internet.

Some users might want to have Internet and/or external connectivity for OpenStack instances (virtual machines). In this case, an optional network can be used.

The assumption is that the target OpenStack infrastructure contains multiple nodes: one is the controller node, and one or more are compute node(s).

Network Configuration Example

The following is an example of how to configure networks for the OpenStack infrastructure. The example uses four network interfaces as follows:

• ens2f1: For Internet network - used to pull all necessary packages/patches from repositories on the Internet; configured to obtain a DHCP address.

• ens2f0: For management network - used to connect all nodes for OpenStack management; configured to use network 10.11.0.0/16.

• p1p1: For tenant network - used for OpenStack internal connections for virtual machines; configured with no IP address.

• p1p2: For optional external network - used for virtual machine Internet/external connectivity; configured with no IP address. This interface is only in the controller node if the external network is configured. For the compute node, this interface is not needed.

Note that, among these interfaces, the interface for the virtual network (in this example p1p1) must be an 82599 port, because it is used for DPDK and the Intel® DPDK Accelerated vSwitch. Also note that a static IP address should be used for the interface of the management network.

In Fedora 20, the network configuration files are located at:

/etc/sysconfig/network-scripts/


To configure a network on the host system, edit the following network configuration files:

ifcfg-ens2f1:
DEVICE=ens2f1
TYPE=Ethernet
ONBOOT=yes
BOOTPROTO=dhcp

ifcfg-ens2f0:
DEVICE=ens2f0
TYPE=Ethernet
ONBOOT=yes
BOOTPROTO=static
IPADDR=10.11.12.11
NETMASK=255.255.0.0

ifcfg-p1p1:
DEVICE=p1p1
TYPE=Ethernet
ONBOOT=yes
BOOTPROTO=none

ifcfg-p1p2:
DEVICE=p1p2
TYPE=Ethernet
ONBOOT=yes
BOOTPROTO=none

Note: Do not configure the IP address for p1p1 (the 10 Gb/s interface); otherwise, DPDK does not work when binding the driver during the OpenStack Neutron installation.

Note: 10.11.12.11 and 255.255.0.0 are the static IP address and netmask for the management network. It is necessary to have a static IP address on this subnet. The IP address 10.11.12.11 is just an example.

5.2.1.2 Storage Requirements

By default, DevStack uses block storage (Cinder) with a volume group stack-volumes. If not specified, stack-volumes is created with 10 GB of space from a local file system. Note that stack-volumes is the name of the volume group, not of a volume.

The following example shows how to use spare local disks /dev/sdb and /dev/sdc to form stack-volumes on a controller node by running the following commands:

pvcreate /dev/sdb
pvcreate /dev/sdc
vgcreate stack-volumes /dev/sdb /dev/sdc

5.2.1.3 OpenStack Installation Procedures

General

DevStack is used to deploy OpenStack in this example. The following procedure uses an actual example of an installation performed in an Intel test lab, consisting of one controller node (controller) and one compute node (compute).

Controller Node Installation Procedures

The following example uses a host for controller node installation with the following:

• Hostname: sdnlab-k01

• Internet network IP address: obtained from DHCP server


• OpenStack management IP address: 10.11.12.1

• User/password: stack/stack

Root User Actions

Log in as the root user (or use su) and perform the following:

1. Add the stack user to the sudoer list:

echo "stack ALL=(ALL) NOPASSWD: ALL" >> /etc/sudoers

2. Edit /etc/libvirt/qemu.conf, adding or modifying the following lines:

cgroup_controllers = [ "cpu", "devices", "memory", "blkio", "cpuset", "cpuacct" ]

cgroup_device_acl = ["/dev/null", "/dev/full", "/dev/zero", "/dev/random", "/dev/urandom", "/dev/ptmx", "/dev/kvm", "/dev/kqemu", "/dev/rtc", "/dev/hpet", "/dev/net/tun", "/mnt/huge", "/dev/vhost-net"]

hugetlbfs_mount = "/mnt/huge"

3. Restart the libvirt service and make sure libvirtd is active:

systemctl restart libvirtd.service
systemctl status libvirtd.service

Stack User Actions

1. Log in as the stack user.

2. Configure the appropriate proxies (yum, http, https, and git) for package installation, and make sure these proxies are functional. Note that on the controller node, localhost and its IP address should be included in the no_proxy setup (for example, export no_proxy=localhost,10.11.12.1).
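As an illustration only, the proxy setup might look like the following sketch; the proxy host and port are placeholders that must be replaced with values for your environment:

export http_proxy=http://<proxy-host>:<proxy-port>
export https_proxy=http://<proxy-host>:<proxy-port>
export no_proxy=localhost,10.11.12.1
git config --global http.proxy http://<proxy-host>:<proxy-port>
echo "proxy=http://<proxy-host>:<proxy-port>" | sudo tee -a /etc/yum.conf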

3. Intel® DPDK Accelerated vSwitch patches for OpenStack:

The file openstack_ovdk.l02-907.zip contains the necessary patches for OpenStack; currently they are not native to OpenStack. The file can be downloaded from:

https://01.org/sites/default/files/page/openstack_ovdk.l02-907.zip

Place the file in the /home/stack directory and unzip it. Three patch files, devstack.patch, nova.patch, and neutron.patch, will be present after the unzip:

cd /home/stack
wget https://01.org/sites/default/files/page/openstack_ovdk.l02-907.zip
unzip openstack_ovdk.l02-907.zip

4. Download the DevStack source:

git clone https://github.com/openstack-dev/devstack.git

5. Check out DevStack and apply the Intel® DPDK Accelerated vSwitch patch:

cd /home/stack/devstack
git checkout d6f700db33aeab68916156a98971aef8cfa53a2e
patch -p1 < /home/stack/devstack.patch


6. Download and patch Nova and Neutron:

sudo mkdir /opt/stack
sudo chown stack:stack /opt/stack
cd /opt/stack
git clone https://github.com/openstack/nova.git
git clone https://github.com/openstack/neutron.git
cd /opt/stack/nova
git checkout b7738bfb6c2f271d047e8f20c0b74ef647367111
patch -p1 < /home/stack/nova.patch

7. Create the local.conf file in /home/stack/devstack.

8. Pay attention to the following in the local.conf file:

a. Use Rabbit for messaging services (Rabbit is on by default). In the past, Fedora only supported QPID for OpenStack; now it only supports Rabbit.

b. Explicitly disable the Nova compute service on the controller, because by default the Nova compute service is enabled:

disable_service n-cpu

c. To use Open vSwitch, specify it in the configuration for the ML2 plug-in:

Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch

d. Explicitly disable tenant tunneling and enable tenant VLAN, because by default tunneling is used:

ENABLE_TENANT_TUNNELS=False
ENABLE_TENANT_VLANS=True

e. A sample local.conf file for the controller node follows:

# Controller node
[[local|localrc]]

FORCE=yes
ADMIN_PASSWORD=password
MYSQL_PASSWORD=password
DATABASE_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=no-token-password
HORIZON_PASSWORD=password
RABBIT_PASSWORD=password

disable_service n-net
disable_service n-cpu
enable_service q-svc
enable_service q-agt
enable_service q-dhcp
enable_service q-l3
enable_service q-meta
enable_service neutron
enable_service horizon

DEST=/opt/stack
LOGFILE=$DEST/stack.sh.log
SCREEN_LOGDIR=$DEST/screen
SYSLOG=True
LOGDAYS=1

HOST_IP_IFACE=ens2f0
PUBLIC_INTERFACE=p1p2
VLAN_INTERFACE=p1p1
FLAT_INTERFACE=p1p1


Q_AGENT=openvswitch
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch
Q_ML2_PLUGIN_TYPE_DRIVERS=vlan,flat,local

ENABLE_TENANT_TUNNELS=False
ENABLE_TENANT_VLANS=True
PHYSICAL_NETWORK=physnet1
ML2_VLAN_RANGES=physnet1:1000:1010
OVS_PHYSICAL_BRIDGE=br-p1p1
MULTI_HOST=True

[[post-config|$NOVA_CONF]]
[DEFAULT]
firewall_driver=nova.virt.firewall.NoopFirewallDriver
novncproxy_host=0.0.0.0
novncproxy_port=6080

9. Install DevStack:

cd /home/stack/devstack
./stack.sh

10. For a successful installation, the following shows at the end of the screen output:

stack.sh completed in XXX seconds

where XXX is the number of seconds.

11. For the controller node only - add physical port(s) to the bridge(s) created by the DevStack installation. The following example can be used to configure the two bridges br-p1p1 (for the virtual network) and br-ex (for the external network):

sudo ovs-vsctl add-port br-p1p1 p1p1
sudo ovs-vsctl add-port br-ex p1p2

12. Make sure proper VLANs are created in the switch connecting physical port p1p1. For example, the previous local.conf specifies a VLAN range of 1000-1010; therefore, matching VLANs 1000 to 1010 should be configured in the switch.


5.3 Compute Node Setup

This section describes how to complete the setup of the compute nodes. It is assumed that the user has successfully completed the BIOS settings and operating system installation and configuration sections.

Note: Please make sure to download and use the onps_server_1_2.tar.gz tarball. Start with the README file. You'll get instructions on how to use Intel's scripts to automate most of the installation steps described in this section, and this saves you time.

5.3.1 Host Configuration

5.3.1.1 Using DevStack to Deploy vSwitch and OpenStack Components

General

Deploying OpenStack and the Intel® DPDK Accelerated vSwitch using DevStack on a compute node follows the same procedures as on the controller node. Differences include:

• Required services are Nova compute, Neutron agent, and Rabbit.

• The Intel® DPDK Accelerated vSwitch is used in place of Open vSwitch for the Neutron agent.

Compute Node Installation Example

The following example uses a host for compute node installation with the following:

• Hostname: sdnlab-k02

• Lab network IP address: obtained from DHCP server

• OpenStack management IP address: 10.11.12.2

• User/password: stack/stack

Note the following:

• no_proxy setup: localhost and its IP address should be included in the no_proxy setup. In addition, the hostname and IP address of the controller node should also be included. For example:

export no_proxy=localhost,10.11.12.2,sdnlab-k01,10.11.12.1

• Differences in the local.conf file:

- The service host is the controller, as are other OpenStack servers such as MySQL, Rabbit, Keystone, and Image; therefore, they should be spelled out. Using the controller node example in the previous section, the service host and its IP address should be:

SERVICE_HOST_NAME=sdnlab-k01
SERVICE_HOST=10.11.12.1

- The only OpenStack services required on compute nodes are messaging, Nova compute, and Neutron agent, so the local.conf might look like:

disable_all_services
enable_service rabbit
enable_service n-cpu
enable_service q-agt


- The user has the option to use ovdk or openvswitch for the Neutron agent:

Q_AGENT=ovdk

or

Q_AGENT=openvswitch

Note: For openvswitch, the user can specify regular or accelerated Open vSwitch (accelerated OVS). If accelerated OVS is used, the following setup should be added:

OVS_DATAPATH_TYPE=netdev

Note: If both are specified in the same local.conf file, the latter one overwrites the former.

- For the OVDK and accelerated OVS huge pages setting, specify the number of huge pages to be allocated and the mounting point (default is /mnt/huge):

OVDK_NUM_HUGEPAGES=8192

or

OVS_NUM_HUGEPAGES=8192

- For this version, Intel uses specific versions of OVDK or accelerated OVS from their respective repositories. Specify the following in the local.conf file if OVDK or accelerated OVS is used:

OVDK_GIT_TAG=6210bb0a6139b20283de115f87aa7a381b04670f
OVS_GIT_TAG=b35839f3855e3b812709c6ad1c9278f498aa9935

- Binding the physical port to the bridge is done through the following line in local.conf. For example, to bind port p1p1 to bridge br-p1p1, use:

OVS_PHYSICAL_BRIDGE=br-p1p1

- A sample local.conf file for a compute node with the ovdk agent follows:

# Compute node
[[local|localrc]]

FORCE=yes
MULTI_HOST=True

HOST_NAME=$(hostname)
HOST_IP=10.11.12.2
HOST_IP_IFACE=ens2f0
SERVICE_HOST_NAME=10.11.12.1
SERVICE_HOST=10.11.12.1

MYSQL_HOST=$SERVICE_HOST
RABBIT_HOST=$SERVICE_HOST

GLANCE_HOST=$SERVICE_HOST
GLANCE_HOSTPORT=$SERVICE_HOST:9292
KEYSTONE_AUTH_HOST=$SERVICE_HOST
KEYSTONE_SERVICE_HOST=$SERVICE_HOST

ADMIN_PASSWORD=password
MYSQL_PASSWORD=password
DATABASE_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=no-token-password
HORIZON_PASSWORD=password
RABBIT_PASSWORD=password

disable_all_services
enable_service rabbit


enable_service n-cpu
enable_service q-agt

DEST=/opt/stack
LOGFILE=$DEST/stack.sh.log
SCREEN_LOGDIR=$DEST/screen
SYSLOG=True
LOGDAYS=1

Q_AGENT=ovdk
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch
Q_ML2_PLUGIN_TYPE_DRIVERS=vlan
OVDK_NUM_HUGEPAGES=8192
OVDK_GIT_TAG=6210bb0a6139b20283de115f87aa7a381b04670f

ENABLE_TENANT_TUNNELS=False
ENABLE_TENANT_VLANS=True
ML2_VLAN_RANGES=physnet1:1000:1010
PHYSICAL_NETWORK=physnet1
OVS_PHYSICAL_BRIDGE=br-p1p1

[[post-config|$NOVA_CONF]]
[DEFAULT]
firewall_driver=nova.virt.firewall.NoopFirewallDriver
vnc_enabled=True
vncserver_listen=0.0.0.0
vncserver_proxyclient_address=$HOST_IP

- A sample local.conf file for a compute node with the accelerated OVS agent follows:

# Compute node
[[local|localrc]]

FORCE=yes
MULTI_HOST=True

HOST_NAME=$(hostname)
HOST_IP=10.11.12.2
HOST_IP_IFACE=ens2f0

SERVICE_HOST_NAME=sdnlab-k01
SERVICE_HOST=10.11.12.1

MYSQL_HOST=$SERVICE_HOST
RABBIT_HOST=$SERVICE_HOST

GLANCE_HOST=$SERVICE_HOST
GLANCE_HOSTPORT=$SERVICE_HOST:9292
KEYSTONE_AUTH_HOST=$SERVICE_HOST
KEYSTONE_SERVICE_HOST=$SERVICE_HOST

ADMIN_PASSWORD=password
MYSQL_PASSWORD=password
DATABASE_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=no-token-password
HORIZON_PASSWORD=password
RABBIT_PASSWORD=password

disable_all_services
enable_service rabbit
enable_service n-cpu
enable_service q-agt

DEST=/opt/stack


LOGFILE=$DEST/stack.sh.log
SCREEN_LOGDIR=$DEST/screen
SYSLOG=True
LOGDAYS=1

Q_AGENT=openvswitch
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch
Q_ML2_PLUGIN_TYPE_DRIVERS=vlan
OVS_NUM_HUGEPAGES=8192
OVS_DATAPATH_TYPE=netdev
OVS_GIT_TAG=b35839f3855e3b812709c6ad1c9278f498aa9935

ENABLE_TENANT_TUNNELS=False
ENABLE_TENANT_VLANS=True
ML2_VLAN_RANGES=physnet1:1000:1010
PHYSICAL_NETWORK=physnet1
OVS_PHYSICAL_BRIDGE=br-p1p1

[[post-config|$NOVA_CONF]]
[DEFAULT]
firewall_driver=nova.virt.firewall.NoopFirewallDriver
vnc_enabled=True
vncserver_listen=0.0.0.0
vncserver_proxyclient_address=$HOST_IP

5.4 vIPS

The vIPS used is Suricata, which should be installed as an rpm package in a VM, as previously described. In order to configure it to run in inline mode (IPS), use the following:

1. Turn on IP forwarding:

sysctl -w net.ipv4.ip_forward=1

2. Mangle all traffic from one vPort to the other using a netfilter queue:

iptables -I FORWARD -i eth1 -o eth2 -j NFQUEUE
iptables -I FORWARD -i eth2 -o eth1 -j NFQUEUE

3. Have Suricata run in inline mode using the netfilter queue:

suricata -c /etc/suricata/suricata.yaml -q 0

4. Enable ARP proxying:

echo 1 > /proc/sys/net/ipv4/conf/eth1/proxy_arp
echo 1 > /proc/sys/net/ipv4/conf/eth2/proxy_arp

5.4.1 Network Configuration for non-vIPS Guests

1. Turn on IP forwarding:

sysctl -w net.ipv4.ip_forward=1

2. In the source, add the route to the sink:

route add -net 192.168.200.0/24 eth1

3. At the sink, add the route to the source:

route add -net 192.168.100.0/24 eth1


6.0 Testing the Setup

This section describes how to bring up the VMs in a compute node, connect them to the virtual network(s), and verify the functionality.

Note: Currently it is not possible to have more than one virtual network in a multi-compute node setup, although it is possible to have more than one virtual network in a single compute node setup.

6.1 Preparation with OpenStack

6.1.1 Deploying Virtual Machines

6.1.1.1 Default Settings

OpenStack comes with the following default settings:

• Tenant (Project): admin, demo

• Network:

- Private network (virtual network): 10.0.0.0/24

- Public network (external network): 172.24.4.0/24

• Image: cirros-0.3.1-x86_64

• Flavor: nano, micro, tiny, small, medium, large, xlarge

To deploy new instances (VMs) with different setups (such as a different VM image, flavor, or network), users must create their own. See below for details of how to create them.

To access the OpenStack dashboard, use a web browser (Firefox, Internet Explorer, or others) and the controller's IP address (management network), for example:

http://10.11.12.1

Login information is defined in the local.conf file. In the examples that follow, password is the password for both the admin and demo users.


6.1.1.2 Custom Settings

The following examples describe how to create a custom VM image, flavor, and aggregate/availability zone using OpenStack commands. The examples assume the IP address of the controller is 10.11.12.1.

1. Create a credential file admin-cred for the admin user. The file contains the following lines:

export OS_USERNAME=admin
export OS_TENANT_NAME=admin
export OS_PASSWORD=password
export OS_AUTH_URL=http://10.11.12.1:35357/v2.0

2. Source admin-cred into the shell environment for the actions of creating the glance image, aggregate/availability zone, and flavor:

source admin-cred

3. Create an OpenStack glance image. A VM image file should be ready in a location accessible by OpenStack:

glance image-create --name <image-name-to-create> --is-public=true --container-format=bare --disk-format=<format> --file=<image-file-path-name>

The following example shows the image file fedora20-x86_64-basic.qcow2 located in an NFS share and mounted at /mnt/nfs/openstack/images on the controller host. The following command creates a glance image named fedora-basic with qcow2 format for public use (such that any tenant can use this glance image):

glance image-create --name fedora-basic --is-public=true --container-format=bare --disk-format=qcow2 --file=/mnt/nfs/openstack/images/fedora20-x86_64-basic.qcow2

4. Create a host aggregate and availability zone.

First find out the available hypervisors, and then use that information to create an aggregate/availability zone:

nova hypervisor-list
nova aggregate-create <aggregate-name> <zone-name>
nova aggregate-add-host <aggregate-name> <hypervisor-name>

The following example creates an aggregate named aggr-g06 with one availability zone named zone-g06, and the aggregate contains one hypervisor named sdnlab-g06:

nova aggregate-create aggr-g06 zone-g06
nova aggregate-add-host aggr-g06 sdnlab-g06

5. Create a flavor. A flavor is a virtual hardware configuration for the VMs; it defines the number of virtual CPUs, the size of the virtual memory, and the disk space, among others.

The following command creates a flavor named onps-flavor with an ID of 1001, 1024 MB virtual memory, 4 GB virtual disk space, and 1 virtual CPU:

nova flavor-create onps-flavor 1001 1024 4 1


6.1.1.3 Example: VM Deployment

The following example describes how to use a custom VM image, flavor, and aggregate to launch a VM for the demo tenant using OpenStack commands. Again, the example assumes the IP address of the controller is 10.11.12.1.

1. Create a credential file demo-cred for the demo user. The file contains the following lines:

export OS_USERNAME=demo
export OS_TENANT_NAME=demo
export OS_PASSWORD=password
export OS_AUTH_URL=http://10.11.12.1:35357/v2.0

2. Source demo-cred into the shell environment for the actions of creating the tenant network and instance (VM):

source demo-cred

3. Create a network for the demo tenant. Take the following steps:

a. Get the demo tenant:

keystone tenant-list | grep -Fw demo

The following creates a network named net-demo for the tenant with ID 10618268adb64f17b266fd8fb83c960d:

neutron net-create --tenant-id 10618268adb64f17b266fd8fb83c960d net-demo

b. Create a subnet:

neutron subnet-create --tenant-id <demo-tenant-id> --name <subnet_name> <network-name> <net-ip-range>

The following creates a subnet named sub-demo with CIDR address 192.168.2.0/24 for network net-demo:

neutron subnet-create --tenant-id 10618268adb64f17b266fd8fb83c960d --name sub-demo net-demo 192.168.2.0/24

4. Create an instance (VM) for the demo tenant. Take the following steps:

a. Get the name and/or ID of the image, flavor, and availability zone to be used for creating the instance:

glance image-list
nova flavor-list
nova aggregate-list
neutron net-list

b. Launch an instance (VM) using the information obtained from the previous step:

nova boot --image <image-id> --flavor <flavor-id> --availability-zone <zone-name> --nic net-id=<network-id> <instance-name>

c. The new VM should be up and running in a few minutes.

5. Log into the OpenStack dashboard using the demo user credential and click Instances under Project in the left pane; the new VM should show in the right pane. Click the instance name to open the Instance Details view, then click Console in the top menu to access the VM.


6.1.1.4 Local vIPS

Configuration

1. OpenStack brings up the VMs and connects them to the vSwitch.

2. IP addresses of the VMs get configured using the DHCP server. VM1 belongs to one subnet and VM3 to a different one. VM2 has ports on both subnets.

3. Flows get programmed to the vSwitch by the OpenDaylight controller (Section 6.2).

Data Path (Numbers Matching Red Circles)

1. VM1 sends a flow to VM3 through the vSwitch.

2. The vSwitch forwards the flow to the first vPort of VM2 (active IPS).

3. The IPS receives the flow, inspects it, and (if not malicious) sends it out through its second vPort.

4. The vSwitch forwards it to VM3.

Figure 6-1 Local vIPS


6.1.1.5 Remote vIPS

Configuration

1. OpenStack brings up the VMs and connects them to the vSwitch.

2. The IP addresses of the VMs get configured using the DHCP server.

Data Path (Numbers Matching Red Circles)

1. VM1 sends a flow to VM3 through the vSwitch inside compute node 1.

2. The vSwitch forwards the flow out of the first 82599 port to the first 82599 port of compute node 2.

3. The vSwitch of compute node 2 forwards the flow to the first port of the vHost, where the traffic gets consumed by VM1.

4. The IPS receives the flow, inspects it, and (provided it is not malicious) sends it out through its second port of the vHost into the vSwitch of compute node 2.

5. The vSwitch forwards the flow out of the second 82599 port of compute node 2 into the second port of the 82599 in compute node 1.

6. The vSwitch of compute node 1 forwards the flow into the port of the vHost of VM3, where the flow gets terminated.

Figure 6-2 Remote vIPS


6.1.2 Non-Uniform Memory Access (NUMA) Placement and SR-IOV Pass-through for OpenStack

NUMA placement was implemented as a new feature in the OpenStack Juno release. NUMA placement enables an OpenStack administrator to pin guest systems to particular NUMA nodes for optimization. With an SR-IOV-enabled network interface card, each SR-IOV port is associated with a virtual function (VF). OpenStack SR-IOV pass-through enables a guest to access a VF directly.

6.1.2.1 Prepare Compute Node for SR-IOV Pass-through

To enable the previous features, follow these steps to configure the compute node.

1. The server hardware must support IOMMU (Intel VT-d). To check whether IOMMU is supported, run the following command; the output should show IOMMU entries:

dmesg | grep -e IOMMU

Note: IOMMU can be enabled/disabled through a BIOS setting under Advanced and then Processor.

2. Enable kernel IOMMU in grub. For Fedora 20, run the commands:

sed -i 's/rhgb quiet/rhgb quiet intel_iommu=on/g' /etc/default/grub
grub2-mkconfig -o /boot/grub2/grub.cfg

3. Install the necessary packages:

yum install -y yajl-devel device-mapper-devel libpciaccess-devel libnl-devel dbus-devel numactl-devel python-devel

4. Install libvirt v1.2.8 or newer. The following example uses v1.2.9:

systemctl stop libvirtd

yum remove libvirt
yum remove libvirtd

wget http://libvirt.org/sources/libvirt-1.2.9.tar.gz
tar zxvf libvirt-1.2.9.tar.gz

cd libvirt-1.2.9
./autogen.sh --system --with-dbus
make
make install

systemctl start libvirtd

Make sure libvirtd is running v1.2.9:

libvirtd --version

5. Install libvirt-python. The example below uses v1.2.9 to match the libvirt version:

yum remove libvirt-python

wget https://pypi.python.org/packages/source/l/libvirt-python/libvirt-python-1.2.9.tar.gz
tar zxvf libvirt-python-1.2.9.tar.gz

cd libvirt-python-1.2.9
python setup.py install


6. Modify /etc/libvirt/qemu.conf, adding

"/dev/vfio/vfio"

to the

cgroup_device_acl list.

An example follows:

cgroup_device_acl = ["/dev/null", "/dev/full", "/dev/zero", "/dev/random", "/dev/urandom", "/dev/ptmx", "/dev/kvm", "/dev/kqemu", "/dev/rtc", "/dev/hpet", "/dev/net/tun", "/dev/vfio/vfio"]

7. Enable the SR-IOV virtual functions for an 82599 interface. The following example enables 2 VFs for interface p1p1:

echo 2 > /sys/class/net/p1p1/device/sriov_numvfs

To check that the virtual functions are enabled:

lspci -nn | grep 82599

The screen output should display the physical function and two virtual functions.

6.1.2.2 DevStack Configurations

In the following text, the example uses a controller with IP address 10.11.12.1 and a compute node with 10.11.12.4. The PCI device vendor ID (8086) and the product IDs of the 82599 can be obtained from the output (10fb for the physical function and 10ed for the VF):

lspci -nn | grep 82599

On the controller node:

1. Edit the controller local.conf. Note that the same local.conf file of Section 5.2.1.3 is used here, but adding the following:

Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch,sriovnicswitch

[[post-config|$NOVA_CONF]]
[DEFAULT]
scheduler_default_filters=RamFilter,ComputeFilter,AvailabilityZoneFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,PciPassthroughFilter,NUMATopologyFilter
pci_alias={"name":"niantic","product_id":"10ed","vendor_id":"8086"}

[[post-config|$Q_PLUGIN_CONF_FILE]]
[ml2_sriov]
supported_pci_vendor_devs = 8086:10fb 8086:10ed

2. Run stack.sh.

On the compute node:

1. Edit /opt/stack/nova/requirements.txt, adding "libvirt-python>=1.2.8":

echo "libvirt-python>=1.2.8" >> /opt/stack/nova/requirements.txt

2. Edit the compute local.conf for accelerated OVS. Note that the same local.conf file of Section 5.3.1.1 is used here.


3. Add the following:

[[post-config|$NOVA_CONF]]
[DEFAULT]
pci_passthrough_whitelist={"address":"0000:08:00.0","vendor_id":"8086","physical_network":"physnet1"}

pci_passthrough_whitelist={"address":"0000:08:10.0","vendor_id":"8086","physical_network":"physnet1"}

pci_passthrough_whitelist={"address":"0000:08:10.2","vendor_id":"8086","physical_network":"physnet1"}

4. Remove (or comment out) the following. Note that currently SR-IOV pass-through is only supported with a standard OVS:

OVS_NUM_HUGEPAGES=8192
OVS_DATAPATH_TYPE=netdev
OVS_GIT_TAG=b35839f3855e3b812709c6ad1c9278f498aa9935

Run stack.sh for both the controller and compute nodes to complete the DevStack installation.

6.1.2.3 Create VM with NUMA Placement and SR-IOV

1. After stacking is successful on both the controller and compute nodes, verify the PCI pass-through device(s) are in the OpenStack database:

mysql -uroot -ppassword -h 10.11.12.1 nova -e 'select * from pci_devices'

2. The output should show entry(ies) of PCI device(s) similar to the following:

| 2014-11-18 19:41:14 | NULL | NULL | 0 | 1 | 3 | 0000:08:10.0 | 10ed | 8086 | type-VF | pci_0000_08_10_0 | label_8086_10ed | available | phys_function: 0000:08:00.0 | NULL | NULL | 0 |

3. Next, create a flavor, for example:

nova flavor-create numa-flavor 1001 1024 4 1

where:

flavor name = numa-flavor
id = 1001
virtual memory = 1024 MB
virtual disk size = 4 GB
number of virtual CPUs = 1

4. Modify the flavor for NUMA placement with PCI pass-through:

nova flavor-key 1001 set pci_passthrough:alias=niantic:1 hw:numa_nodes=1 hw:numa_cpus.0=0 hw:numa_mem.0=1024

5. To show detailed information about the flavor:

nova flavor-show 1001

6. Create a VM numa-vm1 with the flavor numa-flavor under the default project demo. Note that the following example assumes an image fedora-basic and an availability zone zone-04 are already in place (see Section 6.1.1.2) and that private is the default network for the demo project:

nova boot --image <image-id> --flavor <flavor-id> --availability-zone <zone-name> --nic net-id=<network-id> numa-vm1

where numa-vm1 is the name of the VM instance to be booted.


Access the VM from the OpenStack Horizon dashboard; the new VM shows two virtual network interfaces. The interface with an SR-IOV VF should show a name of ensX, where X is a number (for example, ens5). If a DHCP server is available for the physical interface (p1p1 in this example), the VF gets an IP address automatically; otherwise, users can assign an IP address to the interface the same way as for a standard network interface.

To verify network connectivity through a VF, users can set up two compute hosts and create a VM on each node. After obtaining IP addresses, the VMs should communicate with each other as with a normal network.
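As an illustration only, a quick check from inside the two VMs might look like the following sketch; the interface name ens5 and the addresses 192.168.1.10 and 192.168.1.11 are placeholders:

# In the first VM (only if no DHCP server is available on the VF network)
ip addr add 192.168.1.10/24 dev ens5
ip link set ens5 up

# In the second VM
ip addr add 192.168.1.11/24 dev ens5
ip link set ens5 up

# From the first VM, verify connectivity to the second VM
ping -c 4 192.168.1.11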

6.2 Using OpenDaylight

This section describes how to download, install, and set up an OpenDaylight controller.

6.2.1 Preparing the OpenDaylight Controller

1. Download the pre-built OpenDaylight Helium-SR1 distribution:

wget http://nexus.opendaylight.org/content/repositories/opendaylight.release/org/opendaylight/integration/distribution-karaf/0.2.1-Helium-SR1/distribution-karaf-0.2.1-Helium-SR1.tar.gz

2. Extract the archive and cd into it:

tar xf distribution-karaf-0.2.1-Helium-SR1.tar.gz
cd distribution-karaf-0.2.1-Helium-SR1

3. Use the bin/karaf executable to start the Karaf shell.


4. Install the required features.

Karaf might take a long time to start, or feature installation might fail, if the host does not have network access. You'll need to set up the appropriate proxy settings.
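One way to do this, shown only as an illustration (the proxy host and port are placeholders, and EXTRA_JAVA_OPTS is assumed to be honored by the Karaf start script), is to export the proxy settings before starting bin/karaf:

export http_proxy=http://<proxy-host>:<proxy-port>
export https_proxy=http://<proxy-host>:<proxy-port>
export EXTRA_JAVA_OPTS="-Dhttp.proxyHost=<proxy-host> -Dhttp.proxyPort=<proxy-port> -Dhttps.proxyHost=<proxy-host> -Dhttps.proxyPort=<proxy-port>"
./bin/karaf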

6.3 Border Network Gateway

This section describes how to install and run a Border Network Gateway on a compute node that is prepared as described in Section 5.1 and Section 5.3. The example interface names from those sections have been maintained in this section too. Also, for simplicity, the BNG is using the handle_none configuration mode, which makes it work as an L2 forwarding engine. The BNG is more complex than this, and users who are interested in exploring more of its capabilities should read https://01.org/intel-data-plane-performance-demonstrators/quick-overview.

The setup to test the functionality of the vBNG follows.


6.3.1 Installation and Configuration Inside the VM

1. Execute the following command:

yum -y update

2. Disable SELinux:

setenforce 0
vi /etc/selinux/config

and change it so that SELINUX=disabled.

3. Disable the firewall:

systemctl disable firewalld.service
reboot

4. Edit the grub default configuration:

vi /etc/default/grub

Add hugepages to it:

... noirqbalance intel_idle.max_cstate=0 processor.max_cstate=0 ipv6.disable=1 default_hugepagesz=1G hugepagesz=1G hugepages=2 isolcpus=1,2,3,4

5. Rebuild the grub config and reboot the system:

grub2-mkconfig -o /boot/grub2/grub.cfg
reboot

6. Verify that hugepages are available in the VM:

cat /proc/meminfo
HugePages_Total: 2
HugePages_Free: 2
Hugepagesize: 1048576 kB

7. Add the following to the end of the ~/.bashrc file:

# ---------------------------------------------
export RTE_SDK=/root/dpdk
export RTE_TARGET=x86_64-native-linuxapp-gcc
export OVS_DIR=/root/ovs

export RTE_UNBIND=$RTE_SDK/tools/dpdk_nic_bind.py
export DPDK_DIR=$RTE_SDK
export DPDK_BUILD=$DPDK_DIR/$RTE_TARGET
# ---------------------------------------------

8. Log in again or source that file:

. ~/.bashrc

9. Install DPDK:

git clone http://dpdk.org/git/dpdk
cd dpdk
git checkout v1.7.1
make install T=$RTE_TARGET
modprobe uio
insmod $RTE_SDK/$RTE_TARGET/kmod/igb_uio.ko


10. Check the PCI addresses of the 82599 cards:

lspci | grep Network
00:04.0 Ethernet controller: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection (rev 01)
00:05.0 Ethernet controller: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection (rev 01)
00:06.0 Ethernet controller: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection (rev 01)
00:07.0 Ethernet controller: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection (rev 01)

11. Make sure that the correct PCI addresses are listed in the script bind_to_igb_uio.sh.

12. Download the BNG packages:

wget https://01.org/sites/default/files/downloads/intel-data-plane-performance-demonstrators/dppd-bng-v013.zip

13. Extract the DPPD BNG sources:

unzip dppd-bng-v013.zip

14. Build the BNG DPPD application:

yum -y install ncurses-devel
cd dppd-BNG-v013
make

15. Refer to Section 6.3.3, Extra Preparations on the Compute Node, before running the BNG application in the VM inside the compute node.

16. Make sure that the application starts:

./build/dppd -f config/handle_none.cfg

The handle_none configuration should pass all traffic through between ports, which is essentially similar to the L2 forwarding test. The config directory contains additional, more complex BNG configurations and Pktgen scripts. Additional BNG-specific workloads can be found in the dppd-BNG-v013/pktgen-scripts directory.

Following is a sample graphic of the BNG running in a VM with 2 ports.


Exit the application by pressing ESC or CTRL-C.

Refer to Section 6.3.2 regarding installing and running the software traffic generator.

For the sanity check test, users can use the Pktgen wrapper script onps_pktgen-64bytes-UDP-2ports.sh for running PktGen (on its dedicated server) in order to test the handle-none throughput for two physical and two virtual ports. You'll need to update PKTGEN_DIR at the top of the file to point to the right directory, which, referring to Section 6.3.2, is the following:

PKTGEN_DIR=/home/stack/git/Pktgen-DPDK
./pktgen-64bytes.sh

6.3.2 Installation and Configuration of the Back-to-Back Host (Packet Generator)

The back-to-back host can be any Intel® Xeon® processor-based system, or it can be any compute node that has been prepared using the instructions in Section 5.1 and Section 5.3. For simplicity, Intel assumes the latter was the case. Also assume that the git directory for the stack user is /home/stack/git.

1. In the git directory, get the source from GitHub:

git clone https://github.com/Pktgen/Pktgen-DPDK.git
cd Pktgen-DPDK

2. An extra package must be installed for Pktgen to compile correctly:

yum -y install libpcap-devel

Pktgen comes with its own distribution of DPDK sources. This bundled version of DPDK must be used. Note that it contains some Wind River-specific helper libraries that are not in the default DPDK distribution and that Pktgen depends on.

3. The $RTE_TARGET variable must be set to a specific value; otherwise, these libraries will not build:

cd
vi .bashrc

Add the following three lines to the end:

export RTE_SDK=$HOME/Pktgen-DPDK/dpdk
export RTE_TARGET=x86_64-pktgen-linuxapp-gcc
export PKTGEN_DIR=$HOME/Pktgen-DPDK

4. Log in again or execute the following command:

. ~/.bashrc

5. Build the basic DPDK libraries and extra helpers:

cd $RTE_SDK
make install T=$RTE_TARGET

6. Build Pktgen:

cd examples/pktgen
make

7. Adapt the dpdk_nic_bind.py script according to the actual NICs in use, so that both interfaces are bound to igb_uio and DPDK can use them. See the details with the command that follows:

./tools/dpdk_nic_bind.py --status
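For example, assuming the two 10 GbE ports show up at the PCI addresses below (placeholders; take the real addresses from the --status output), they could be bound to igb_uio with something like the following sketch:

modprobe uio
insmod $RTE_SDK/$RTE_TARGET/kmod/igb_uio.ko
$RTE_SDK/tools/dpdk_nic_bind.py --bind=igb_uio 0000:04:00.0 0000:05:00.0
$RTE_SDK/tools/dpdk_nic_bind.py --status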

8. Use onps_pktgen-64-bytes-UDP-2ports.sh from onps_server_1_2.tar.gz.


9. Now run the script as root, after the compute node has been set up as in Section 6.3.3, the VM of the BNG has been prepared as in Section 6.3.1, and the BNG has been run inside the VM.

6.3.3 Extra Preparations on the Compute Node

1. Do the following as the stack user:

cd /home/stack/devstack
vi local.conf

2. Comment out the following:

PHYSICAL_NETWORK=physnet1
OVS_PHYSICAL_BRIDGE=br-p1p1

and, at the same time, add the following line right below the previously commented ones:

OVS_BRIDGE_MAPPINGS=default:br-p1p1,physnet1:br-p1p2

3. Run again as the stack user:

./unstack.sh

./stack.sh

This causes both physical interfaces to come up and get bound to DPDK. Also, a bridge is created on top of each of these interfaces:

ovs-vsctl show
b52bd3ed-0f6c-45b9-ace1-846d901bed64
    Bridge "br-p1p1"
        Port "br-p1p1"
            Interface "br-p1p1"
                type: internal
        Port "p1p1"
            Interface "p1p1"
                type: dpdkphy
                options: {port="0"}
        Port "phy-br-p1p1"
            Interface "phy-br-p1p1"
                type: patch
                options: {peer="int-br-p1p1"}
    Bridge br-int
        fail_mode: secure
        Port "int-br-p1p2"
            Interface "int-br-p1p2"
                type: patch
                options: {peer="phy-br-p1p2"}
        Port "int-br-p1p1"
            Interface "int-br-p1p1"
                type: patch
                options: {peer="phy-br-p1p1"}
        Port br-int
            Interface br-int
                type: internal
    Bridge "br-p1p2"
        Port "phy-br-p1p2"
            Interface "phy-br-p1p2"
                type: patch
                options: {peer="int-br-p1p2"}
        Port "p1p2"
            Interface "p1p2"
                type: dpdkphy
                options: {port="1"}
        Port "br-p1p2"
            Interface "br-p1p2"
                type: internal

4 Move the p1p2 physical port under the same bridge as p1p1

ovs-vsctl del-port p1p2ovs-vsctl add-port br-p1p1 p1p2 -- set Interface p1p2 type=dpdkphy optionport=1

5 Delete the agent of OpenStack

rejoin-stackshctrl-a 1ctrl-cctrl-ad

6 Add the dpdkvhost interfaces for the VM

ovs-vsctl --no-wait add-port br-p1p1 port3 -- set Interface port3 type=dpdkvhost ofport_request=3ovs-vsctl --no-wait add-port br-p1p1 port4 -- set Interface port4 type=dpdkvhost ofport_request=4

7 Find out the port number of the obstructed interfaces

ovs-ofctl show br-p1p1

The output should be similar to the following Note the number on the left of the interface because its the obstructed port number

OFPT_FEATURES_REPLY (xid=0x2) dpid0000286031010000n_tables254 n_buffers256capabilities FLOW_STATS TABLE_STATS QUEUE_STATS ARP_MATCH_IPactions OUTPUT SET_VLAN_VID SET_VLAN_PCP STRIP_VLAN SET_DL_SRC SET_DL_DST SET_NW_SRC SET_NW_DST SET_TP_SRC SET_TP_DST 1(phy-br-p1p1) addr9eae92253cc1 config 0 state 0 speed 0 Mbps now 0 Mbps max 2(p1p2) addr9eae92253cc1 config 0 state 0 speed 0 Mbps now 0 Mbps max 3(port3) addr9eae92253cc1 config 0 state 0 speed 0 Mbps now 0 Mbps max 4(port4) addr4904ff7f0000 config 0 state 0 speed 0 Mbps now 0 Mbps max 16(p1p1) addr4904ff7f0000 config 0 state 0 speed 0 Mbps now 0 Mbps max LOCAL(br-p1p1) addr9eae92253cc1 config 0 state 0 speed 0 Mbps now 0 Mbps maxOFPT_GET_CONFIG_REPLY (xid=0x4) frags=normal miss_send_len=0

8 Clean up the flow table of the bridge

ovs-ofctl del-flows br-p1p1

9. Program the flows so that each physical interface forwards packets to a dpdkvhost interface and vice versa:

ovs-ofctl add-flow br-p1p1 in_port=16,dl_type=0x0800,idle_timeout=0,action=output:3
ovs-ofctl add-flow br-p1p1 in_port=3,dl_type=0x0800,idle_timeout=0,action=output:16
ovs-ofctl add-flow br-p1p1 in_port=4,dl_type=0x0800,idle_timeout=0,action=output:2
ovs-ofctl add-flow br-p1p1 in_port=2,dl_type=0x0800,idle_timeout=0,action=output:4
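To confirm that the four flows were installed, the flow table can be dumped (a quick verification step, not in the original procedure):

ovs-ofctl dump-flows br-p1p1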


10. Users can now spawn their vBNG:

qemu-kvm -cpu host -enable-kvm -m 4096 -smp 4,cores=4,threads=1,sockets=1 -name VM1 \
  -hda <path to the VM image file> -mem-path /dev/hugepages -mem-prealloc -vnc :2 -daemonize \
  -net nic,model=virtio,macaddr=00:1e:77:68:09:fd \
  -net tap,ifname=tap1,script=no,downscript=no \
  -netdev type=tap,id=net1,script=no,downscript=no,ifname=port3,vhost=on \
  -device virtio-net-pci,netdev=net1,mac=00:00:01:00:00:01,csum=off,gso=off,guest_tso4=off,guest_tso6=off,guest_ecn=off \
  -netdev type=tap,id=net2,script=no,downscript=no,ifname=port4,vhost=on \
  -device virtio-net-pci,netdev=net2,mac=00:00:01:00:00:02,csum=off,gso=off,guest_tso4=off,guest_tso6=off,guest_ecn=off
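Once the VM is up, its console is reachable over the VNC display exposed by the -vnc :2 option above (TCP port 5902 on the compute node), for example:

# connect to the VM console on VNC display :2 of the compute node
vncviewer <compute node ip>:2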


Appendix A Additional OpenDaylight Information

This section describes how OpenDaylight can be used in a multi-node setup. Two hosts are used: one runs OpenDaylight, the OpenStack controller + compute services, and OVS; the second host is a compute node. This section describes how to create a VXLAN tunnel, create VMs, and ping from one VM to another.

Note: Due to a known defect in ODL (https://bugs.opendaylight.org/show_bug.cgi?id=2469), the multi-node setup could not be verified.

Following is a sample local.conf for the OpenDaylight host:

[[local|localrc]]
FORCE=yes

HOST_NAME=<name of this machine>
HOST_IP=<ip of this machine>
HOST_IP_IFACE=<mgmt ip isolated from internet>

PUBLIC_INTERFACE=<isolated IP, could be same as HOST_IP_IFACE>
VLAN_INTERFACE=
FLAT_INTERFACE=

ADMIN_PASSWORD=password
MYSQL_PASSWORD=password
DATABASE_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=no-token-password
HORIZON_PASSWORD=password
RABBIT_PASSWORD=password

disable_service n-net
disable_service n-cpu

enable_service q-svc
enable_service q-agt
enable_service q-dhcp
enable_service q-l3
enable_service q-meta
enable_service neutron
enable_service horizon

LOGFILE=$DEST/stack.sh.log
SCREEN_LOGDIR=$DEST/screen
SYSLOG=True
LOGDAYS=1

# ODL start
Q_HOST=$HOST_IP
enable_service odl-server
enable_service odl-compute
ODL_MGR_IP=10.11.10.7
ENABLED_SERVICES+=,n-api,n-crt,n-obj,n-cpu,n-cond,n-sch,n-novnc,n-cauth,n-cauth,nova
ENABLED_SERVICES+=,cinder,c-api,c-vol,c-sch,c-bak

Q_PLUGIN=ml2
Q_AGENT=openvswitch
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch,opendaylight
Q_ML2_PLUGIN_TYPE_DRIVERS=vlan,flat,local
Q_ML2_TENANT_NETWORK_TYPE=vxlan

ENABLE_TENANT_TUNNELS=True
ENABLE_TENANT_VLANS=True

PHYSICAL_NETWORK=physnet1
ML2_VLAN_RANGES=physnet1:1000:1010
OVS_PHYSICAL_BRIDGE=br-

MULTI_HOST=True

[[post-config|$NOVA_CONF]]
# disable nova security groups
[DEFAULT]
firewall_driver=nova.virt.firewall.NoopFirewallDriver
novncproxy_host=0.0.0.0
novncproxy_port=6080
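With this local.conf in place on the OpenDaylight host, DevStack is started in the usual way, as in the node setups described earlier in this guide:

cd /home/stack/devstack
./stack.sh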

Here is a sample local.conf for the compute node:

[[local|localrc]]

FORCE=yes
MULTI_HOST=True

HOST_NAME=<name of this machine>
HOST_IP=<ip of this machine>
HOST_IP_IFACE=<isolated interface>
SERVICE_HOST_NAME=<name of the controller machine>
SERVICE_HOST=<ip of controller machine>
Q_HOST=$SERVICE_HOST

MYSQL_HOST=$SERVICE_HOST
RABBIT_HOST=$SERVICE_HOST

GLANCE_HOST=$SERVICE_HOST
GLANCE_HOSTPORT=$SERVICE_HOST:9292
KEYSTONE_AUTH_HOST=$SERVICE_HOST
KEYSTONE_SERVICE_HOST=<ip of controller machine>

ADMIN_PASSWORD=password
MYSQL_PASSWORD=password
DATABASE_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=no-token-password
HORIZON_PASSWORD=password
RABBIT_PASSWORD=password

disable_all_services

enable_service rabbit
enable_service n-cpu
enable_service q-agt
enable_service odl-compute

DEST=/opt/stack
LOGFILE=$DEST/stack.sh.log
SCREEN_LOGDIR=$DEST/screen
SYSLOG=True
LOGDAYS=1

ODL_MGR_IP=<ip of controller machine>

Q_PLUGIN=ml2
Q_AGENT=openvswitch
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch,opendaylight
Q_ML2_PLUGIN_TYPE_DRIVERS=vxlan
OVS_NUM_HUGEPAGES=8192
OVS_DATAPATH_TYPE=netdev

OVDK_OVS_GIT_TAG=

ENABLE_TENANT_TUNNELS=True
ENABLE_TENANT_VLANS=True

Q_ML2_TENANT_NETWORK_TYPE=vxlan
ML2_VLAN_RANGES=physnet1:1000:1010
PHYSICAL_NETWORK=physnet1
OVS_PHYSICAL_BRIDGE=br-p1p1

[[post-config|$NOVA_CONF]]
[DEFAULT]
firewall_driver=nova.virt.firewall.NoopFirewallDriver
vnc_enabled=True
vncserver_listen=0.0.0.0
vncserver_proxyclient_address=$HOST_IP
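After stack.sh completes on the compute node, a quick check from the controller (using the admin credential file created earlier) confirms that the new hypervisor and its neutron agent have registered; this is a sanity check, not a required step:

source admin-cred
nova hypervisor-list
neutron agent-list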

A.1 Create VMs using DevStack Horizon GUI

After starting the OpenDaylight controller as described in Section 6.2.1, run stack.sh on the controller and compute nodes.

Log in to http://<control node ip address>:8080 to start the Horizon GUI.

Verify that the node shows up in the GUI.

Create a new VXLAN network:

1. Click the Networks tab.

2. Click the Create Network button.

3. Enter the network name, then click Next.


4. Enter the subnet information, then click Next.


5. Add additional information, then click Next.

6. Click the Create button.

7. Create a VM instance by clicking the Launch Instances button.


8. Click the Details tab to enter the VM details.


9. Click the Networking tab, then enter the network information.

The VMs will now be created.
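The same result can be verified from the controller command line using the demo credentials described earlier in this guide:

source demo-cred
nova list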

Instead of disabling the OVSDB neutron service by removing the OVSDB neutron bundle file, it is possible to disable the bundle from the OSGi console. However, there does not appear to be a way to make this persistent, so it must be done each time the controller restarts.


Once the controller is up and running, connect to the OSGi console. The ss command displays all of the installed bundles and their status; adding a string filters the list of bundles. List the OVSDB bundles:

osgi> ss ovs
Framework is launched.

id      State       Bundle
106     ACTIVE      org.opendaylight.ovsdb.northbound_0.5.0
112     ACTIVE      org.opendaylight.ovsdb_0.5.0
262     ACTIVE      org.opendaylight.ovsdb.neutron_0.5.0

Note that there are three OVSDB bundles running (the OVSDB neutron bundle has not been removed in this case).

Disable the OVSDB neutron bundle and then list the OVSDB bundles again:

osgi> stop 262
osgi> ss ovs
Framework is launched.

id      State       Bundle
106     ACTIVE      org.opendaylight.ovsdb.northbound_0.5.0
112     ACTIVE      org.opendaylight.ovsdb_0.5.0
262     RESOLVED    org.opendaylight.ovsdb.neutron_0.5.0

Now the OVSDB neutron bundle is in the RESOLVED state, which means that it is not active.
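If the bundle needs to be re-enabled later, it can be started again from the same console using its bundle ID:

osgi> start 262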


Appendix B BNG as an Appliance

Please download the latest BNG application from https://01.org/intel-data-plane-performance-demonstrators/downloads. More details about how the BNG works can be found at https://01.org/intel-data-plane-performance-demonstrators/quick-overview.
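As an illustration (the file and directory names below correspond to the DPPD v0.13 package listed in Table 4-2 and may differ for newer releases), the downloaded archive is simply unpacked on the target host before building and running it:

# unpack the DPPD BNG package downloaded from 01.org
unzip dppd-bng-v013.zip
# the extracted directory name may differ depending on the release
cd dppd-bng-v013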


Appendix C Glossary

Acronym Description

ATR Application Targeted Routing

COTS Commercial Off‐The-Shelf

DPI Deep Packet Inspection

FCS Frame Check Sequence

GRE Generic Routing Encapsulation

GRO Generic Receive Offload

IOMMU InputOutput Memory Management Unit

Kpps Kilo packets per second

KVM Kernel-based Virtual Machine

LRO Large Receive Offload

MSI Message Signaled Interrupt

MPLS Multi-protocol Label Switching

Mpps Million packets per second

NIC Network Interface Card

pps Packets per second

QAT Quick Assist Technology

QinQ VLAN stacking (802.1ad)

RA Reference Architecture

RSC Receive Side Coalescing

RSS Receive Side Scaling

SP Service Provider

SR-IOV Single root IO Virtualization

TCO Total Cost of Ownership

TSO TCP Segmentation Offload


Appendix D References

Document Name Source

Internet Protocol version 4 http://www.ietf.org/rfc/rfc791.txt

Internet Protocol version 6 http://www.faqs.org/rfc/rfc2460.txt

Intel® 82599 10 Gigabit Ethernet Controller Datasheet http://www.intel.com/content/www/us/en/ethernet-controllers/82599-10-gbe-controller-datasheet.html

Intel DDIO https://www-ssl.intel.com/content/www/us/en/io/direct-data-i-o.html

Bandwidth Sharing Fairness http://www.intel.com/content/www/us/en/network-adapters/10-gigabit-network-adapters/10-gbe-ethernet-flexible-port-partitioning-brief.html

Design Considerations for efficient network applications with Intel® multi-core processor-based systems on Linux http://download.intel.com/design/intarch/papers/324176.pdf

OpenFlow with Intel 82599 http://ftp.sunet.se/pub/Linux/distributions/bifrost/seminars/workshop-2011-03-31/Openflow_1103031.pdf

Wu, W., DeMar, P. & Crawford, M. (2012). A Transport-Friendly NIC for Multicore/Multiprocessor Systems. IEEE Transactions on Parallel and Distributed Systems, vol. 23, no. 4, April 2012. http://lss.fnal.gov/archive/2010/pub/fermilab-pub-10-327-cd.pdf

Why does Flow Director Cause Packet Reordering? http://arxiv.org/ftp/arxiv/papers/1106/1106.0443.pdf

IA packet processing http://www.intel.com/p/en_US/embedded/hwsw/technology/packet-processing

High Performance Packet Processing on Cloud Platforms using Linux with Intel® Architecture http://networkbuilders.intel.com/docs/network_builders_RA_packet_processing.pdf

Packet Processing Performance of Virtualized Platforms with Linux and Intel® Architecture http://networkbuilders.intel.com/docs/network_builders_RA_NFV.pdf

DPDK http://www.intel.com/go/dpdk

Intel® DPDK Accelerated vSwitch https://01.org/packet-processing


LEGAL

By using this document, in addition to any agreements you have with Intel, you accept the terms set forth below.

You may not use or facilitate the use of this document in connection with any infringement or other legal analysis concerning Intel products described herein. You agree to grant Intel a non-exclusive, royalty-free license to any patent claim thereafter drafted which includes subject matter disclosed herein.

INFORMATION IN THIS DOCUMENT IS PROVIDED IN CONNECTION WITH INTEL PRODUCTS. NO LICENSE, EXPRESS OR IMPLIED, BY ESTOPPEL OR OTHERWISE, TO ANY INTELLECTUAL PROPERTY RIGHTS IS GRANTED BY THIS DOCUMENT, EXCEPT AS PROVIDED IN INTEL'S TERMS AND CONDITIONS OF SALE FOR SUCH PRODUCTS. INTEL ASSUMES NO LIABILITY WHATSOEVER, AND INTEL DISCLAIMS ANY EXPRESS OR IMPLIED WARRANTY RELATING TO SALE AND/OR USE OF INTEL PRODUCTS, INCLUDING LIABILITY OR WARRANTIES RELATING TO FITNESS FOR A PARTICULAR PURPOSE, MERCHANTABILITY, OR INFRINGEMENT OF ANY PATENT, COPYRIGHT, OR OTHER INTELLECTUAL PROPERTY RIGHT.

Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors.

Performance tests, such as SYSmark and MobileMark, are measured using specific computer systems, components, software, operations and functions. Any change to any of those factors may cause the results to vary. You should consult other information and performance tests to assist you in fully evaluating your contemplated purchases, including the performance of that product when combined with other products.

The products described in this document may contain design defects or errors known as errata, which may cause the product to deviate from published specifications. Current characterized errata are available on request. Contact your local Intel sales office or your distributor to obtain the latest specifications and before placing your product order.

Intel technologies may require enabled hardware, specific software, or services activation. Check with your system manufacturer or retailer. Tests document performance of components on a particular test, in specific systems. Differences in hardware, software, or configuration will affect actual performance. Consult other sources of information to evaluate performance as you consider your purchase. For more complete information about performance and benchmark results, visit http://www.intel.com/performance.

All products, computer systems, dates and figures specified are preliminary, based on current expectations, and are subject to change without notice. Results have been estimated or simulated using internal Intel analysis or architecture simulation or modeling, and provided to you for informational purposes. Any differences in your system hardware, software, or configuration may affect your actual performance.

No computer system can be absolutely secure. Intel does not assume any liability for lost or stolen data or systems or any damages resulting from such losses.

Intel does not control or audit third-party web sites referenced in this document. You should visit the referenced web site and confirm whether referenced data are accurate.

Intel Corporation may have patents or pending patent applications, trademarks, copyrights, or other intellectual property rights that relate to the presented subject matter. The furnishing of documents and other materials and information does not provide any license, express or implied, by estoppel or otherwise, to any such patents, trademarks, copyrights, or other intellectual property rights.

© 2014 Intel Corporation. All rights reserved. Intel, the Intel logo, Core, Xeon, and others are trademarks of Intel Corporation in the U.S. and/or other countries. Other names and brands may be claimed as the property of others.

  • Intelreg Open Network Platform Server Reference Architecture (Release 12)
    • Revision History
    • Contents
    • 10 Audience and Purpose
    • 20 Summary
      • 21 Network Services Examples
        • 211 Suricata (Next Generation IDSIPS engine)
        • 212 vBNG (Broadband Network Gateway)
            • 30 Hardware Components
            • 40 Software Versions
              • 41 Obtaining Software Ingredients
                • 50 Installation and Configuration Guide
                  • 51 Instructions Common to Compute and Controller Nodes
                    • 511 BIOS Settings
                    • 512 Operating System Installation and Configuration
                      • 5121 Getting the Fedora 20 DVD
                      • 5122 Fedora 20 Installation
                      • 5123 Additional Packages Installation and Upgrade
                      • 5124 Disable and Enable Services
                          • 52 Controller Node Setup
                            • 521 OpenStack (Juno)
                              • 5211 Network Requirements
                              • 5212 Storage Requirements
                              • 5213 OpenStack Installation Procedures
                                  • 53 Compute Node Setup
                                    • 531 Host Configuration
                                      • 5311 Using DevStack to Deploy vSwitch and OpenStack Components
                                          • 54 vIPS
                                            • 541 Network Configuration for non-vIPS Guests
                                                • 60 Testing the Setup
                                                  • 61 Preparation with OpenStack
                                                    • 611 Deploying Virtual Machines
                                                      • 6111 Default Settings
                                                      • 6112 Customer Settings
                                                      • 6113 Example mdash VM Deployment
                                                      • 6114 Local vIPS
                                                      • 6115 Remote vIPS
                                                        • 612 Non-uniform Memory Access (Numa) Placement and SR-IOV Pass-through for OpenStack
                                                          • 6121 Prepare Compute Node for SR-IOV Pass-through
                                                          • 6122 Devstack Configurations
                                                          • 6123 Create VM with Numa Placement and SR-IOV
                                                              • 62 Using OpenDaylight
                                                                • 621 Preparing the OpenDaylightController
                                                                  • 63 Border Network Gateway
                                                                    • 631 Installation and Configuration Inside the VM
                                                                    • 632 Installation and Configuration of the Back-to-Back Host (Packet Generator)
                                                                    • 633 Extra Preparations on the Compute Node
                                                                        • Appendix A Additional OpenDaylight Information
                                                                          • A1 Create VMs using DevStack Horizon GUI
                                                                            • Appendix B BNG as an Appliance
                                                                            • Appendix C Glossary
                                                                            • Appendix D References
                                                                            • LEGAL
Page 2: Intel Open Network Platform Server Reference Architecture ... · Intel® ONP Server Reference Architecture Solutions Guide 1.0 Audience and Purpose The primary audiences for this

Intelreg ONP Server Reference ArchitectureSolutions Guide

2

Revision History

Revision Date Comments

12 December 15 2014 Document prepared for release 12 of Intelreg Open Network Platform Server 12

111 October 29 2014Changed two links to the following bull https01orgsitesdefaultfilespagevbng-scriptstgzbull https01orgsitesdefaultfilespageqat_patches_netkeyshimzip

11 September 18 2014 Minor edits throughout document

10 August 21 2014 Initial document for release of Intelreg Open Network Platform Server 11

3

Intelreg ONP Server Reference ArchitectureSolutions Guide

Contents

10 Audience and Purpose 520 Summary 7

21 Network Services Examples 9211 Suricata (Next Generation IDSIPS engine) 9212 vBNG (Broadband Network Gateway) 9

30 Hardware Components 1140 Software Versions 13

41 Obtaining Software Ingredients 14

50 Installation and Configuration Guide 1551 Instructions Common to Compute and Controller Nodes 15

511 BIOS Settings 15512 Operating System Installation and Configuration16

52 Controller Node Setup 18521 OpenStack (Juno)18

53 Compute Node Setup 23531 Host Configuration23

54 vIPS 26541 Network Configuration for non-vIPS Guests26

60 Testing the Setup 2761 Preparation with OpenStack 27

611 Deploying Virtual Machines 27612 Non-uniform Memory Access (Numa) Placement and SR-IOV Pass-through for OpenStack32

62 Using OpenDaylight 35621 Preparing the OpenDaylightController35

63 Border Network Gateway 36631 Installation and Configuration Inside the VM37632 Installation and Configuration of the Back-to-Back Host (Packet Generator) 39633 Extra Preparations on the Compute Node40

Appendix A Additional OpenDaylight Information43A1 Create VMs using DevStack Horizon GUI 45

Appendix B BNG as an Appliance 51

Appendix C Glossary53

Appendix D References 55

Intelreg ONP Server Reference ArchitectureSolutions Guide

4

NOTE This page intentionally left blank

5

Intelreg ONP Server Reference ArchitectureSolutions Guide

10 Audience and Purpose

The primary audiences for this document are architects and engineers implementing the Intelreg Open Network Platform Server Reference Architecture using Open Source software Software ingredients include

bull DevStack

bull OpenStack

bull OpenDaylight

bull Data Plane Development Kit (DPDK)

bull Intelreg DPDK Accelerated vSwitch

bull Open vSwitch

bull Fedora 20

This document provides a guide for integration and performance characterization using the Intelreg Open Network Platform Server (Intel ONP Server) Content includes high-level architecture setup and configuration procedures integration learnings and a set of baseline performance data This information is intended to help architects and engineers evaluate Network Function Virtualization (NFV) and Software Defined Network (SDN) solutions

An understanding of system performance is required to develop solutions that meet the demanding requirements of the telecom industry and transform telecom networks Workload examples are described and are useful for evaluating other NFV workloads

Ingredient versions integration procedures configuration parameters and test methodologies all influence performance The performance data provided here does not represent best possible performance but rather provides a baseline of what is possible using ldquoout-of-boxrdquo open source software ingredients

The purpose of documenting configurations is not to imply any preferred methods However providing a baseline configuration of well tested procedures can help to achieve optimal system performance when developing an NFVSDN solution

Intelreg ONP Server Reference ArchitectureSolutions Guide

6

NOTE This page intentionally left blank

7

Intelreg ONP Server Reference ArchitectureSolutions Guide

20 Summary

The Intel ONP Server uses Open Source software to help accelerate SDN and NFV commercialization with the latest Intel Architecture Communications Platform

This document describes how to setup and configure controller and compute nodes for evaluating and developing NFVSDN solutions using the Intelreg Open Network Platform ingredients

Platform hardware is based on a Intelreg Xeonreg DP Server with the following

bull Intelreg Xeonreg Processor Series E5-2697 V3

bull Intelreg 82599 10 GbE Controller

The host operating system is Fedora 20 with Qemu-kvm virtualization technology Software ingredients include Data Plane Development Kit (DPDK) Open vSwitch Intelreg DPDK Accelerated vSwitch OpenStack and OpenDaylight

Figure 2-1 Intel ONP Server - Hardware and Software Ingredients

Intelreg ONP Server Reference ArchitectureSolutions Guide

8

Figure 2-2 shows a generic SDNNFV setup In this configuration Orchestrator and Controller (management and control plane) and compute node (data plane) run on different server nodes Note that many variations of this setup can be deployed

The test cases described in this document were designed to illustrate certain baseline performance and functionality using the specified ingredients configurations and specific test methodology A simple network topology was used as shown in Figure 2-2

Test cases are designed to

bull Baseline packet processing (such as data plane) performance with host and VM configurations

bull Verify communication between controller and compute nodes

bull Validate basic controller functionality

Figure 2-2 Generic Setup with Controller and Two Compute Nodes

9

Intelreg ONP Server Reference ArchitectureSolutions Guide

21 Network Services ExamplesThe following examples of network services are included as use-cases that have been tested with the Intelreg Open Network Platform Server Reference Architecture

211 Suricata (Next Generation IDSIPS engine)Suricata is a high performance Network IDS IPS and Network Security Monitoring engine developed by the OISF its supporting vendors and the community

httpsuricata-idsorg

212 vBNG (Broadband Network Gateway)Intel Data Plane Performance Demonstrators ndash Border Network Gateway (BNG) using DPDK

https01orgintel-data-plane-performance-demonstratorsdownloadsbng-application-v013

A Broadband (or Border) Network Gateway may also be known as a Broadband Remote Access Server (BRAS) and routes traffic to and from broadband remote access devices such as digital subscriber line access multiplexers (DSLAM) This network function is included as an example of a workload that can be virtualized on the Intel ONP Server

Additional information on the performance characterization of this vBNG implementation can be found at

httpnetworkbuildersintelcomdocsNetwork_Builders_RA_vBRAS_Finalpdf

Refer to Border Network Gateway for information on setting up and testing the vBNG application with Intelreg DPDK Accelerated vSwitch or to Appendix B for more information on running the BNG as an appliance

Intelreg ONP Server Reference ArchitectureSolutions Guide

10

NOTE This page intentionally left blank

11

Intelreg ONP Server Reference ArchitectureSolutions Guide

30 Hardware Components

Table 3-1 Hardware Ingredients (Grizzly Pass)

Item Description Notes

Platform Intelreg Server Board 2U 8x35 SATA 2x750W 2xHS Rails Intel R2308GZ4GC

Grizzly Pass Xeon DP Server (2 CPU sockets) 240GB SSD 25in SATA 6Gbs Intel Wolfsville SSDSC2BB240G401 DC S3500 Series

Processors Intelreg Xeonreg Processor Series E5-2680 v2 LGA2011 28GHz 25MB 115W 10 cores

Ivy Bridge Socket-R (EP) 10 Core 28GHz 115W 25M per core LLC 80 GTs QPI DDR3-1867 HT turboLong product availability

Cores 10 physical coresCPU 20 Hyper-threaded cores per CPU for 40 total cores

Memory 8 GB 1600 Reg ECC 15 V DDR3 Kingston KVR16R11S48I Romley

64 GB RAM (8x 8 GB)

NICs (82599) 2x Intelreg 82599 10 GbE Controller (Niantic) NICs are on socket zero (3 PCIe slots available on socket 0)

BIOS SE5C60086B02010002082220131453Release Date 08222013BIOS Revision 46

Intelreg Virtualization Technology for Directed IO (Intelreg VT-d)Hyper-threading enabled

Table 3-2 Hardware Ingredients (Wildcat Pass)

Item Description Notes

Platform Intelreg Server Board S2600WTT 1100W power supply Wildcat Pass Xeon DP Server (2 CPU sockets) 120 GB SSD 25in SATA 6GBs Intel Wolfsville SSDSC2BB120G4

Processors Intelreg Xeonreg Processor Series E5-2697 v3 26GHz 25MB 145W 14 cores

Haswell 14 Core 26GHz 145W 35M total cache per processor 96 GTs QPI DDR4-160018662133

Cores 14 physical coresCPU 28 Hyper-threaded cores per CPU for 56 total cores

Memory 8 GB DDR4 RDIMM Crucial CT8G4RFS423 64 GB RAM (8x 8 GB)

NICs (82599) 2x Intelreg 82599 10 GbE Controller (Niantic) NICs are on socket zero

BIOS GRNDSDP186B0038R011409040644 Release Date 09042014

Intelreg Virtualization Technology for Directed IO (Intelreg VT-d) enabled only for SR-IOV PCI pass-through testsHyper-threading enabled but disabled for benchmark testing

Intelreg ONP Server Reference ArchitectureSolutions Guide

12

NOTE This page intentionally left blank

13

Intelreg ONP Server Reference ArchitectureSolutions Guide

40 Software Versions

Table 4-1 Software Versions

Software Component Function VersionConfiguration

Fedora 20 x86_64 Host OS 3156-200fc20x86_64

Qemu‐kvm Virtualization technology Modified QEMU 162 (bundled with Intelreg DPDK Accelerated vSwitch)

Data Plane Development Kit (DPDK)

Network Stack bypass and libraries for packet processing Includes user space poll mode drivers

171

Intelreg DPDK Accelerated vSwitch

vSwitch v120commit id 6210bb0a6139b20283de115f87aa7a381b04670f

Open vSwitch vSwitch Open vSwitch V 23Commit id b35839f3855e3b812709c6ad1c9278f498aa9935

OpenStack SDN Orchestrator Juno Release + Intel patches (openstack_ovdkl02-907zip)

DevStack Tool for Open Stack deployment

httpsgithubcomopenstack-devdevstackgit Commit id d6f700db33aeab68916156a98971aef8cfa53a2e

OpenDaylight SDN Controller HeliumSR1

Suricata IPS application Suricata v204 (current Fedora 20 package)

BNG DPPD Broadband Network Gateway DPDK Performance Demonstrator Application

DPPD v013https01orgintel-data-plane-performance-demonstratorsdownloads

PktGen Software Network Package Generator v277

Intelreg ONP Server Reference ArchitectureSolutions Guide

14

41 Obtaining Software IngredientsTable 4-2 Software Ingredients

Software Component

Software Sub-components Patches Location Comments

Fedora 20 httpdownloadfedoraprojectorgpubfedoralinuxreleases20Fedorax86_64isoFedora-20-x86_64-DVDiso

Standard Fedora 20 iso image

Data Plane Development Kit (DPDK)

DPDK poll mode driver sample apps (bundled)

httpdpdkorggitdpdkCommit id 99213f3827bad956d74e2259d06844012ba287a4

All sub-components in one zip file

Intelreg DPDK Accelerated vSwitch (OVDK)

dpdk-ovs qemu ovs-db vswitchd ovs_client (bundled)

httpsgithubcom01orgdpdk-ovsgitCommit id 6210bb0a6139b20283de115f87aa7a381b04670f

v120

Open vSwitch httpsgithubcomopenvswitchovsgitCommit id b35839f3855e3b812709c6ad1c9278f498aa9935

OpenStack Juno release To be deployed using DevStack(see following row)

Three patches downloaded as one tarball Then follow the instructions to deploy the Nodes

DevStack Patches for DevStack and Nova

httpsgithubcomopenstack-devdevstackgitCommit id d6f700db33aeab68916156a98971aef8cfa53a2eThen apply to that commit the patches inhttpsdownload01orgpacket-processingONPS12openstack_ovdkl02-907zip

Two patches downloaded as one tarball Then follow the instructions to deploy

OpenDaylight httpnexusopendaylightorgcontentrepositoriesopendaylightreleaseorgopendaylightintegrationdistribution-karaf021-Helium-SR1distribution-karaf-021-Helium-SR1targz

Intelreg ONP Server Release 12 Script

Helper scripts to setup SRT 12 using DevStack

httpsdownload01orgpacket-processingONPS12onps_server_1_2targz

BNG DPPD Broadband Network Gateway DPDK Performance

https01orgintel-data-plane- performance-demonstratorsdppd-bng- v013zip

PktGen Software Network Package Generator

httpsgithubcomPktgenPktgen-DPDKgitcommit id 5e8633c99e9771467dc26b64a4ff232c7e9fba2a

BNG Helper scripts

Intelreg ONP for Server Configuration Scripts for vBNG

https01orgsitesdefaultfilespagevbng-scriptstgz

Suricata Package from Fedora 20 yum install suricata

15

Intelreg ONP Server Reference ArchitectureSolutions Guide

50 Installation and Configuration Guide

This section describes the installation and configuration instructions to prepare the controller and compute nodes

51 Instructions Common to Compute and Controller Nodes

This section describes how to prepare both the controller and compute nodes with the right BIOS settings and operating system installation The preferred operating system is Fedora 20 although it is considered relatively easy to use this solutions guide for other Linux distributions

511 BIOS SettingsTable 5-1 BIOS Settings

Configuration Setting forController Node

Setting forCompute Node

Enhanced Intel SpeedStep Enabled Disabled

Processor C3 Disabled Disabled

Processor C6 Disabled Disabled

Intelreg Virtualization Technology for Directed IO (Intelreg Vt-d) Disabled Enabled(OpenStack Numa Placement only)

Intel Hyper-Threading Technology (HTT) Enabled Disabled

MLC Streamer Enabled Enabled

MLC Spatial Prefetcher Enabled Enabled

DCU Instruction Prefetcher Enabled Enabled

Direct Cache Access (DCA) Enabled Enabled

CPU Power and Performance Policy Performance Performance

Intel Turbo boost Enabled Off

Memory RAS and Performance Configuration -gt Numa Optimized Enabled Enabled

Intelreg ONP Server Reference ArchitectureSolutions Guide

16

512 Operating System Installation and ConfigurationFollowing are some generic instructions for installing and configuring the operating system Other ways of installing the operating system are not described in this solutions guide such as network installation PXE boot installation USB key installation etc

5121 Getting the Fedora 20 DVD

1 Download the 64-bit Fedora 20 DVD (not Fedora 20 Live Media) from the following site

httpfedoraprojectorgenget-fedoraformats

or from direct URL

httpdownloadfedoraprojectorgpubfedoralinuxreleases20Fedorax86_64isoFedora-20-x86_64-DVDiso

2 Burn the ISO file to DVD and create an installation disk

5122 Fedora 20 Installation

Use the DVD to install Fedora 20 During the installation click Software selection then choose the following

1 C Development Tool and Libraries

2 Development Tools

Also create a user stack and check the box Make this user administrator during the installation The user stack is used in OpenStack installation

Note Please make sure to download and use the onps_server_1_2targz tarball Start with the README file Yoursquoll get instructions on how to use Intelrsquos scripts to automate most of the installation steps described in this section and this saves you time When using Intelrsquos scripts you can jump to Section 54 after installing OpenDaylight based on the instructions described in Section 621

5123 Additional Packages Installation and Upgrade

Some packages are not installed with the standard Fedora 20 installation but are required by Intelreg Open Network Platform Software (ONPS) components These packages should be installed by the user

git ntp patch socat python-passlib libxslt-devel libffi-devel fuse-devel gluster python-cliff

ONPS supports Fedora kernel 3156 which is newer than native Fedora 20 kernel 31110 To upgrade to 3156 follow these steps

1 Download kernel packages

wget httpskojipkgsfedoraprojectorgpackageskernel3156200fc20x86_64 kernel-3156-200fc20x86_64rpm

wget httpskojipkgsfedoraprojectorgpackageskernel3156200fc20x86_64 kernel-devel-3156-200fc20x86_64rpm

wget httpskojipkgsfedoraprojectorgpackageskernel3156200fc20x86_64 kernel-modules-extra-3156-200fc20x86_64rpm

17

Intelreg ONP Server Reference ArchitectureSolutions Guide

2 Install kernel packages

rpm -i kernel-3156-200fc20x86_64rpmrpm -i kernel-devel-3156-200fc20x86_64rpmrpm -i kernel-modules-extra-3156-200fc20x86_64rpm

3 Reboot system to allow booting into 3156 kernel

Note ONPS depends on libraries provided by your Linux distribution As such it is recommended that you regularly update your Linux distribution with the latest bug fixes and security patches to reduce the risk of security vulnerabilities in your systems

After installing the required packages the operating system should be updated with the following command

yum update -y

This command upgrades to the latest kernel that Fedora supports In order to maintain kernel version (3156) the yum configuration file needs modified with this command

echo exclude=kernel gtgt etcyumconf

before running yum update

After the update completes the system needs to be rebooted

5124 Disable and Enable Services

For OpenStack the following services were disabled selinux firewall and NetworkManager Run the following commands

sed -i sSELINUX=enforcingSELINUX=disabledg etcselinuxconfig systemctl disable firewalldservicesystemctl disable NetworkManagerservice

The following services should be enabled ntp sshd and network Run the following commands

systemctl enable ntpdservicesystemctl enable ntpdateservicesystemctl enable sshdservicechkconfig network on

It is important to keep the timing synchronized between all nodes It is also necessary to use a known NTP server for all nodes Users can edit etcntpconf to add a new server and remove default servers The following example replaces a default NTP server with a local NTP server 100012 and comments out other default servers

sed -i sserver 0fedorapoolntporg iburstserver 100012g etcntpconfsed -i sserver 1fedorapoolntporg iburst server 1fedorapoolntporg iburst g etcntpconfsed -i sserver 2fedorapoolntporg iburst server 2fedorapoolntporg iburst g etcntpconfsed -i sserver 3fedorapoolntporg iburst server 3fedorapoolntporg iburst g etcntpconf

Intelreg ONP Server Reference ArchitectureSolutions Guide

18

52 Controller Node SetupThis section describes the controller node setup It is assumed that the user successfully followed the operating system installation and configuration sections

Note Make sure to download and use the onps_server_1_2targz tarball Start with the README file Yoursquoll get instructions on how to use Intelrsquos scripts to automate most of the installation steps described in this section and this saves you time

521 OpenStack (Juno)This section documents features and limitations that are supported with the Intelreg DPDK Accelerated vSwitch and OpenStack Juno

5211 Network Requirements

General

At least two networks are required to build OpenStack infrastructure in a lab environment One network is used to connect all nodes for OpenStack management (management network) and the other one is a private network exclusively for an OpenStack internal connection (tenant network) between instances (or virtual machines)

One additional network is required for Internet connectivity as installing OpenStack requires pulling packages from various sourcesrepositories on the Internet

Some users might want to have Internet andor external connectivity for OpenStack instances (virtual machines) In this case an optional network can be used

The assumption is that the targeting OpenStack infrastructure contains multiple nodes one is controller node and one or more are compute node(s)

Network Configuration Example

The following is an example of how to configure networks for OpenStack infrastructure The example uses four network interfaces as follows

bull ens2f1 For Internet network - Used to pull all necessary packagespatches from repositories on the Internet configured to obtain a DHCP address

bull ens2f0 For Management network - Used to connect all nodes for OpenStack management configured to use network 10110016

bull p1p1 For Tenant network - Used for OpenStack internal connections for virtual machines configured with no IP address

bull p1p2 For Optional External network - Used for virtual machine Internetexternal connectivity configured with no IP address This interface is only in the Controller node if external network is configured For Compute node this interface is not needed

Note that among these interfaces interface for virtual network (in this example p1p1) must be an 82599 port because it is used for DPDK and Intelreg DPDK Accelerated vSwitch Also note that a static IP address should be used for interface of management network

In Fedora 20 the network configuration files are located at

etcsysconfignetwork-scripts

19

Intelreg ONP Server Reference ArchitectureSolutions Guide

To configure a network on the host system edit the following network configuration files

ifcfg-ens2f1 DEVICE=ens2f1TYPE=Ethernet ONBOOT=yes BOOTPROTO=dhcp

ifcfg-ens2f0DEVICE=ens2f0TYPE=EthernetONBOOT=yesBOOTPROTO=staticIPADDR=10111211NETMASK=25525500

ifcfg-p1p1DEVICE=p1p1TYPE=Ethernet ONBOOT=yes BOOTPROTO=none

ifcfg-p1p2DEVICE=p1p2TYPE=Ethernet ONBOOT=yes BOOTPROTO=none

Note Do not configure the IP address for p1p1 (10 Gbs interface) otherwise DPDK does not work when binding the driver during OpenStack Neutron installation

Note 10111211 and 25525500 are static IP address and net mask to the management network It is necessary to have static IP address on this subnet The IP address 10111211 is just an example

5212 Storage Requirements

By default DevStack uses blocked storage (Cinder) with a volume group stack-volumes If not specified stack-volumes is created with 10 Gbs space from a local file system Note that stack-volumes is the name for the volume group not more than 1 volume

The following example shows how to use spare local disks devsdb and devsdc to form stack-volumes on a controller node by running the following commands

pvcreate devsdbpvcreate devsdcvgcreate stack-volumes devsdb devsdc

5213 OpenStack Installation Procedures

General

DevStack is used to deploy OpenStack in this example The following procedure uses an actual example of an installation performed in an Intel test lab consisting of one controller node (controller) and one compute node (compute)

Controller Node Installation Procedures

The following example uses a host for controller node installation with the following

bull Hostname sdnlab-k01

bull Internet network IP address Obtained from DHCP server

Intelreg ONP Server Reference ArchitectureSolutions Guide

20

bull OpenStack Management IP address 1011121

bull Userpassword stackstack

Root User Actions

Login as su or root user and perform the following

1 Add stack user to sudoer list

echo stack ALL=(ALL) NOPASSWD ALL gtgt etcsudoers

2 Edit etclibvirtqemuconf add or modify with the following lines

cgroup_controllers = [ cpu devices memory blkio cpusetcpuacct ]

cgroup_device_acl = [devnull devfull devzero devrandom devurandom devptmx devkvm devkqemu devrtc devhpet devnettun mnthuge devvhost-net]

hugetlbs_mount = mnthuge

3 Restart libvirt service and make sure libvird is active

systemctl restart libvirtdservicesystemctl status libvirtdservice

Stack User Actions

1 Login as a stack user

2 Configure the appropriate proxies (yum http https and git) for package installation and make sure these proxies are functional Note that on controller node localhost and its IP address should be included in no_proxy setup (for example export no_proxy=localhost1011121)

3 Intelreg DPDK Accelerated vSwitch patches for OpenStack

The tar file openstack_ovdkl02-907zip contains necessary patches for OpenStack Currently it is not native to the OpenStack The file can be downloaded from

https01orgsitesdefaultfilespageopenstack_ovdkl02-907zip

Place the file in the homestack directory and unzip Three patch files devstackpatch novapatch and neutronpatch will be present after unzip

cd homestack wget https01orgsitesdefaultfilespageopenstack_ovdkl02-907zip unzip openstack_ovdkl02-907zip

4 Download DevStack source

git_clone httpsgithubcomopenstack-devdevstackgit

5 Check out DevStack with Intelreg DPDK Accelerated vSwitch and patch

cd homestackdevstack git checkout d6f700db33aeab68916156a98971aef8cfa53a2e patch -p1 lt homestackdevstackpatch

21

Intelreg ONP Server Reference ArchitectureSolutions Guide

6 Download and patch Nova and Neutron

sudo_mkdir optstacksudo_chown stackstack optstack cd optstackgit_clone httpsgithubcomopenstacknovagitgit_clone httpsgithubcomopenstackneutrongitcd optstacknovagit checkout b7738bfb6c2f271d047e8f20c0b74ef647367111patch -p1 lt homestacknovapatch

7 Create localconf file in homestackdevstack

8 Pay attention to the following in the localconf file

a Use Rabbit for messaging services (Rabbit is on by default) In the past Fedora only supported QPID for OpenStack Now it only supports Rabbit

b Explicitly disable Nova compute service on the controller This is because by default Nova compute service is enabled

disable_service n-cpu

c To use Open vSwitch specify in configuration for ML2 plug-in

Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch

d Explicitly disable tenant tunneling and enable tenant VLAN This is because by default tunneling is used

ENABLE_TENANT_TUNNELS=FalseENABLE_TENANT_VLANS=True

e A sample localconf files for controller node is as follows

Controller_node[[local|localrc]]

FORCE=yes ADMIN_PASSWORD=password MYSQL_PASSWORD=password DATABASE_PASSWORD=password SERVICE_PASSWORD=password SERVICE_TOKEN=no-token-passwordHORIZON_PASSWORD=passwordRABBIT_PASSWORD=password

disable_service n-net disable_service n-cpu enable_service q-svc enable_service q-agt enable_service q-dhcp enable_service q-l3 enable_service q-meta enable_service neutron enable_service horizon

DEST=optstackLOGFILE=$DESTstackshlogSCREEN_LOGDIR=$DESTscreenSYSLOG=TrueLOGDAYS=1

HOST_IP_IFACE=ens2f0PUBLIC_INTERFACE=p1p2VLAN_INTERFACE=p1p1FLAT_INTERFACE=p1p1

Intelreg ONP Server Reference ArchitectureSolutions Guide

22

Q_AGENT=openvswitch Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch Q_ML2_PLUGIN_TYPE_DRIVERS=vlanflatlocal

ENABLE_TENANT_TUNNELS=FalseENABLE_TENANT_VLANS=TruePHYSICAL_NETWORK=physnet1ML2_VLAN_RANGES=physnet110001010OVS_PHYSICAL_BRIDGE=br-p1p1MULTI_HOST=True

[[post-config|$NOVA_CONF]][DEFAULT]firewall_driver=novavirtfirewallNoopFirewallDrivernovncproxy_host=0000novncproxy_port=6080

9 Install DevStack

cd homestackdevstackstacksh

10 For a successful installation the following shows at the end of screen output

stacksh completed in XXX seconds

where XXX is the number of seconds

11 For controller node only mdash Add physical port(s) to the bridge(s) created by the DevStack installation The following example can be used to configure the two bridges br-p1p1 (for virtual network) and br-ex (for external network)

sudo ovs-vsctl add-port br-p1p1 p1p1sudo ovs-vsctl add-port br-ex p1p2

12 Make sure proper VLANs are created in the switch connecting physical port p1p1 For example the previous localconf specifies VLAN range of 1000-1010 therefore matching VLANs 1000 to 1010 should be configured in the switch

23

Intelreg ONP Server Reference ArchitectureSolutions Guide

53 Compute Node SetupThis section describes how to complete the setup of the compute nodes It is assumed that the user has successfully completed the BIOS settings and operating system installation and configuration sections

Note Please make sure to download and use the onps_server_1_2targz tarball Start with the README file Yoursquoll get instructions on how to use Intelrsquos scripts to automate most of the installation steps described in this section and this saves you time

531 Host Configuration

5311 Using DevStack to Deploy vSwitch and OpenStack Components

General

Deploying OpenStack and Intelreg DPDK Accelerated vSwitch using DevStack on a compute node follows the same procedures as on the controller node Differences include

bull Required services are nova compute neutron agent and Rabbit

bull Intelreg DPDK Accelerated vSwitch is used in place of Open vSwitch for neutron agent

Compute Node Installation Example

The following example uses a host for compute node installation with the following

bull Hostname sdnlab-k02

bull Lab network IP address Obtained from DHCP server

bull OpenStack Management IP address 1011122

bull Userpassword stackstack

Note the following

bull No_proxy setup Localhost and its IP address should be included in the no_proxy setup In addition hostname and IP address of the controller node should also be included For example

export no_proxy=localhost1011122sdnlab-k011011121

bull Differences in the localconf file

mdash The service host is the controller as well as other OpenStack servers such as MySQL Rabbit Keystone and Image Therefore they should be spelled out Using the controller node example in the previous section the service host and its IP address should be

SERVICE_HOST_NAME=sdnlab-k01SERVICE_HOST=1011121

mdash The only OpenStack services required in compute nodes are messaging nova compute and neutron agent so the localconf might look like

disable_all_services enable_service rabbitenable_service n-cpu enable_service q-agt

Intelreg ONP Server Reference ArchitectureSolutions Guide

24

mdash The user has option to use ovdk or openvswitch for neutron agent

Q_AGENT=ovdk

or

Q_AGENT=openvswitch

Note For openvswitch the user can specify regular or accelerated openvswitch (accelerated OVS) If accelerated OVS is use the following setup should be added

OVS_DATAPATH_TYPE=netdev

Note If both are specified in the same localconf file the later one overwrites the previous one

mdash For the OVDK and accelerated OVS huge pages setting specify number of huge pages to be allocated and mounting point (default is mnthuge)

OVDK_NUM_HUGEPAGES=8192

or

OVS_NUM_HUGEPAGES=8192

mdash For this version Intel uses specific versions for OVDK or Accelerated OVS from their respective repositories Specify the following in the localconf file if OVDK or accelerated OVS is used

OVDK_GIT_TAG=6210bb0a6139b20283de115f87aa7a381b04670fOVS_GIT_TAG=b35839f3855e3b812709c6ad1c9278f498aa9935

mdash Binding the physical port to the bridge is through the following line in localconf For example to bind port p1p1 to bridge br-p1p1 use

OVS_PHYSICAL_BRIDGE=br-p1p1

mdash A sample localconf file for compute node with ovdk agent follows

Compute node[[local|localrc]]

FORCE=yesMULTI_HOST=True

HOST_NAME=$(hostname)HOST_IP=1011122HOST_IP_IFACE=ens2f0SERVICE_HOST_NAME=1011121SERVICE_HOST=1011121

MYSQL_HOST=$SERVICE_HOSTRABBIT_HOST=$SERVICE_HOST

GLANCE_HOST=$SERVICE_HOSTGLANCE_HOSTPORT=$SERVICE_HOST9292KEYSTONE_AUTH_HOST=$SERVICE_HOSTKEYSTONE_SERVICE_HOST=$SERVICE_HOST

ADMIN_PASSWORD=passwordMYSQL_PASSWORD=passwordDATABASE_PASSWORD=passwordSERVICE_PASSWORD=passwordSERVICE_TOKEN=no-token-passwordHORIZON_PASSWORD=passwordRABBIT_PASSWORD=password

disable_all_servicesenable_service rabbit

25

Intelreg ONP Server Reference ArchitectureSolutions Guide

enable_service n-cpuenable_service q-agt

DEST=optstack_LOGFILE=$DESTstackshlogSCREEN_LOGDIR=$DESTscreenSYSLOG=TrueLOGDAYS=1

Q_AGENT=ovdkQ_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitchQ_ML2_PLUGIN_TYPE_DRIVERS=vlanOVDK_NUM_HUGEPAGES=8192OVDK_GIT_TAG=6210bb0a6139b20283de115f87aa7a381b04670f

ENABLE_TENANT_TUNNELS=FalseENABLE_TENANT_VLANS=TrueML2_VLAN_RANGES=physnet110001010PHYSICAL_NETWORK=physnet1OVS_PHYSICAL_BRIDGE=br-p1p1

[[post-config|$NOVA_CONF]][DEFAULT]firewall_driver=novavirtfirewallNoopFirewallDrivervnc_enabled=Truevncserver_listen=0000vncserver_proxyclient_address=$HOST_IP

mdash A sample localconf file for compute node with accelerated ovs agent follows

Compute node[[local|localrc]]

FORCE=yesMULTI_HOST=True

HOST_NAME=$(hostname)HOST_IP=1011122HOST_IP_IFACE=ens2f0

SERVICE_HOST_NAME=sdnlab-k01SERVICE_HOST=1011121

MYSQL_HOST=$SERVICE_HOSTRABBIT_HOST=$SERVICE_HOST

GLANCE_HOST=$SERVICE_HOSTGLANCE_HOSTPORT=$SERVICE_HOST9292KEYSTONE_AUTH_HOST=$SERVICE_HOST

KEYSTONE_SERVICE_HOST=$SERVICE_HOST

ADMIN_PASSWORD=passwordMYSQL_PASSWORD=passwordDATABASE_PASSWORD=passwordSERVICE_PASSWORD=passwordSERVICE_TOKEN=no-token-passwordHORIZON_PASSWORD=passwordRABBIT_PASSWORD=password

disable_all_servicesenable_service rabbitenable_service n-cpuenable_service q-agt

DEST=optstack

Intelreg ONP Server Reference ArchitectureSolutions Guide

26

LOGFILE=$DESTstackshlogSCREEN_LOGDIR=$DESTscreenSYSLOG=TrueLOGDAYS=1

Q_AGENT=openvswitchQ_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitchQ_ML2_PLUGIN_TYPE_DRIVERS=vlanOVS_NUM_HUGEPAGES=8192OVS_DATAPATH_TYPE=netdevOVS_GIT_TAG=b35839f3855e3b812709c6ad1c9278f498aa9935

ENABLE_TENANT_TUNNELS=FalseENABLE_TENANT_VLANS=TrueML2_VLAN_RANGES=physnet110001010PHYSICAL_NETWORK=physnet1OVS_PHYSICAL_BRIDGE=br-p1p1

[[post-config|$NOVA_CONF]][DEFAULT]firewall_driver=novavirtfirewallNoopFirewallDrivervnc_enabled=Truevncserver_listen=0000vncserver_proxyclient_address=$HOST_IP

54 vIPSThe vIPS used is Suricata which should be installed as an rpm package as previously described in a VM In order to configure it to run in inline mode (IPS) use the following

1 Turn on IP forwarding

sysctl -w netipv4ip_forward=1

2 Mangle all traffic from one vPort to the other using a netfilter queue

iptables -I FORWARD -i eth1 -o eth2 -j NFQUEUE iptables -I FORWARD -i eth2 -o eth1 -j NFQUEUE

3 Have Suricata run in inline mode using the netfilter queue

suricata -c etcsuricatasuricatayaml -q 0

4 Enable ARP proxying

echo 1 gt procsysnetipv4confeth1proxy_arp echo 1 gt procsysnetipv4confeth2proxy_arp

541 Network Configuration for non-vIPS Guests1 Turn on IP forwarding

sysctl -w netipv4ip_forward=1

2 In the source add the route to the sink

route add -net 192168200024 eth1

3 At the sink add the route to the source

route add -net 192168100024 eth1

27

Intelreg ONP Server Reference ArchitectureSolutions Guide

60 Testing the Setup

This section describes how to bring up the VMs in a compute node connect them to the virtual network(s) verify the functionality

Note Currently it is not possible to have more than one virtual network in a multi-compute node setup Although it is possible to have more than one virtual network in a single compute node setup

61 Preparation with OpenStack

611 Deploying Virtual Machines

6111 Default Settings

OpenStack comes with the following default settings

bull Tenant (Project) admin demo

bull Network

mdash Private network (virtual network) 1000024

mdash Public network (external network) 172244024

bull Image cirros-031-x86_64

bull Flavor nano micro tiny small medium large xlarge

To deploy new instances (VMs) with different setups (such as a different VM image flavor or network) users must create their own See below for details of how to create them

To access the OpenStack dashboard use a web browser (Firefox Internet Explorer or others) and the controllers IP address (management network) For example

http1011121

Login information is defined in the localconf file In the examples that follow password is the password for both admin and demo users

Intelreg ONP Server Reference ArchitectureSolutions Guide

28

6112 Customer Settings

The following examples describe how to create a custom VM image flavor and aggregateavailability zone using OpenStack commands The examples assume the IP address of the controller is 1011121

1 Create a credential file admin-cred for admin user The file contains the following lines

export OS_USERNAME=adminexport OS_TENANT_NAME=adminexport OS_PASSWORD=passwordexport OS_AUTH_URL=http101112135357v20

2 Source admin-cred to the shell environment for actions of creating glance image aggregateavailability zone and flavor

source admin-cred

3 Create an OpenStack glance image A VM image file should be ready in a location accessible by OpenStack

glance image-create --name ltimage-name-to-creategt --is-public=true --container-format=bare --disk-format=ltformatgt --file=ltimage-file-path-namegt

The following example shows the image file fedora20-x86_64-basicqcow2 is located in a NFS share and mounted at mntnfsopenstackimages to the controller host The following command creates a glance image named fedora-basic with qcow2 format for public use (such as any tenant can use this glance image)

glance image-create --name fedora-basic --is-public=true --container-format=bare --disk-format=qcow2 --file=mntnfsopenstackimagesfedora20-x86_64-basicqcow2

4 Create host aggregate and availability zone

First find out the available hypervisors and then use the information for creating aggregateavailability zone

nova hypervisor-listnova aggregate-create ltaggregate-namegt ltzone-namegtnova aggregate-add-host ltaggregate-namegt lthypervisor-namegt

The following example creates an aggregate named aggr-g06 with one availability zone named zone-g06 and the aggregate contains one hypervisor named sdnlab-g06

nova aggregate-create aggr-g06 zone-g06nova aggregate-add-host aggr-g06 sdnlab-g06

5 Create flavor Flavor is a virtual hardware configuration for the VMs it defines the number of virtual CPUs size of virtual memory and disk space among others

The following command creates a flavor named onps-flavor with an ID of 1001 1024 Mb virtual memory 4 Gb virtual disk space and 1 virtual CPU

nova flavor-create onps-flavor 1001 1024 4 1

29

Intelreg ONP Server Reference ArchitectureSolutions Guide

6113 Example mdash VM Deployment

The following example describes how to use a customer VM image flavor and aggregate to launch a VM for a demo Tenant using OpenStack commands Again the example assumes the IP address of the controller is 1011121

1 Create a credential file demo-cred for a demo user The file contains the following lines

export OS_USERNAME=demoexport OS_TENANT_NAME=demoexport OS_PASSWORD=passwordexport OS_AUTH_URL=http101112135357v20

2 Source demo-cred to the shell environment for actions of creating tenant network and instance (VM)

source demo-cred

3 Create network for tenant demo Take the following steps

a Get tenant demo

keystone tenant-list | grep -Fw demo

The following creates a network with a name of net-demo for tenant with ID 10618268adb64f17b266fd8fb83c960d

neutron net-create --tenant-id 10618268adb64f17b266fd8fb83c960d net-demo

b Create subnet

neutron subnet-create --tenant-id ltdemo-tenant-idgt --name ltsubnet_namegt ltnetwork-namegt ltnet-ip-rangegt

The following creates a subnet with a name of sub-demo and CIDR address 1921682024for network net-demo

neutron subnet-create --tenant-id 10618268adb64f17b266fd8fb83c960d --name sub-demo net-demo 1921682024

4 Create instance (VM) for tenant demo Take the following steps

a Get the name andor ID of the image flavor and availability zone to be used for creating instance

glance image-listnova flavor-listnova aggregate-listneutron net-list

b Launch an instance (VM) using information obtained from previous step

nova boot --image ltimage-idgt --flavor ltflavor-idgt --availability-zone ltzone-namegt --nic net-id=ltnetwork-idgt ltinstance-namegt

c The new VM should be up and running in a few minutes

5 Log into the OpenStack dashboard using the demo user credential click Instances under Project in the left pane the new VM should show in the right pane Click instance name to open Instance Details view then click Console in the top menu to access the VM as follows

Intelreg ONP Server Reference ArchitectureSolutions Guide

30

6114 Local vIPS

Configuration

1 OpenStack brings up the VMs and connects them to the vSwitch

2 IP addresses of the VMs get configured using the DHCP server VM1 belongs to one subnet and VM3 to a different one VM2 has ports on both subnets

3 Flows get programmed to the vSwitch by the OpenDaylight controller (Section 6.2)

Data Path (Numbers Matching Red Circles)

1 VM1 sends a flow to VM3 through the vSwitch

2 The vSwitch forwards the flow to the first vPort of VM2 (active IPS)

3 The IPS receives the flow inspects it and (if not malicious) sends it out through its second vPort

4 The vSwitch forwards it to VM3
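For reference, the steering described above corresponds conceptually to OpenFlow rules of the following form. These rules are illustrative only: in this setup they are installed by the OpenDaylight controller (Section 6.2), and the bridge name br-int and the port numbers are assumptions, not values taken from the original configuration.

ovs-ofctl add-flow br-int in_port=1,actions=output:2    # VM1 -> first vPort of VM2 (IPS)
ovs-ofctl add-flow br-int in_port=3,actions=output:4    # second vPort of VM2 -> VM3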

Figure 6-1 Local vIPS


6.1.1.5 Remote vIPS

Configuration

1 OpenStack brings up the VMs and connects them to the vSwitch

2 The IP addresses of the VMs get configured using the DHCP server

Data Path (Numbers Matching Red Circles)

1 VM1 sends a flow to VM3 through the vSwitch inside compute node 1

2 The vSwitch forwards the flow out of the first 82599 port to the first 82599 port of compute node 2

3 The vSwitch of compute node 2 forwards the flow to the first port of the vHost, where the traffic gets consumed by the IPS VM

4 The IPS receives the flow inspects it and (provided it is not malicious) sends it out through its second port of the vHost into the vSwitch of compute node 2

5 The vSwitch forwards the flow out of the second 82599 port of compute node 2 into the second port of the 82599 in compute node 1

6 The vSwitch of compute node 1 forwards the flow into the port of the vHost of VM3 where the flow gets terminated

Figure 6-2 Remote vIPS


6.1.2 Non-uniform Memory Access (NUMA) Placement and SR-IOV Pass-through for OpenStack

NUMA placement was introduced as a new feature in the OpenStack Juno release. It enables an OpenStack administrator to pin guests to particular NUMA nodes for optimization. With an SR-IOV enabled network interface card, each SR-IOV port is associated with a Virtual Function (VF). OpenStack SR-IOV pass-through enables a guest to access a VF directly.

6.1.2.1 Prepare Compute Node for SR-IOV Pass-through

To enable the previous features, follow these steps to configure the compute node:

1 The server hardware must support IOMMU (Intel VT-d). To check whether IOMMU is supported, run the following command; the output should show IOMMU entries:

dmesg | grep -e IOMMU

Note IOMMU can be enabled/disabled through a BIOS setting under Advanced and then Processor.

2 Enable kernel IOMMU in grub For Fedora 20 run commands

sed -i 's/rhgb quiet/rhgb quiet intel_iommu=on/g' /etc/default/grub
grub2-mkconfig -o /boot/grub2/grub.cfg
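After the next reboot (the grub change only takes effect after rebooting), an optional way to confirm that the kernel picked up the parameter is to check the kernel command line:

cat /proc/cmdline | grep intel_iommu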

3 Install necessary packages

yum install -y yajl-devel device-mapper-devel libpciaccess-devel libnl-devel dbus-devel numactl-devel python-devel

4 Install libvirt v1.2.8 or newer. The following example uses v1.2.9.

systemctl stop libvirtd

yum remove libvirt
yum remove libvirtd

wget http://libvirt.org/sources/libvirt-1.2.9.tar.gz
tar zxvf libvirt-1.2.9.tar.gz

cd libvirt-1.2.9
./autogen.sh --system --with-dbus
make
make install

systemctl start libvirtd

Make sure libvirtd is running v1.2.9:

libvirtd --version

5 Install libvirt-python. The example below uses v1.2.9 to match the libvirt version.

yum remove libvirt-python

wget https://pypi.python.org/packages/source/l/libvirt-python/libvirt-python-1.2.9.tar.gz
tar zxvf libvirt-python-1.2.9.tar.gz

cd libvirt-python-1.2.9
python setup.py install


6 Modify /etc/libvirt/qemu.conf, add

/dev/vfio/vfio

to

cgroup_device_acl list

An example follows

cgroup_device_acl = ["/dev/null", "/dev/full", "/dev/zero",
                     "/dev/random", "/dev/urandom",
                     "/dev/ptmx", "/dev/kvm", "/dev/kqemu",
                     "/dev/rtc", "/dev/hpet", "/dev/net/tun",
                     "/dev/vfio/vfio"]

7 Enable the SR-IOV virtual function for an 82599 interface The following example enables 2 VFs for interface p1p1

echo 2 > /sys/class/net/p1p1/device/sriov_numvfs

To check that virtual functions are enabled

lspci -nn | grep 82599

The screen output should display the physical function and two virtual functions
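For illustration only, with two VFs enabled the output might look similar to the following; the PCI bus addresses are system-specific and are shown here purely as an example:

08:00.0 Ethernet controller [0200]: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection [8086:10fb] (rev 01)
08:10.0 Ethernet controller [0200]: Intel Corporation 82599 Ethernet Controller Virtual Function [8086:10ed] (rev 01)
08:10.2 Ethernet controller [0200]: Intel Corporation 82599 Ethernet Controller Virtual Function [8086:10ed] (rev 01)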

6.1.2.2 DevStack Configurations

In the following text, the example uses a controller with IP address 10.11.12.1 and a compute node with 10.11.12.4. The PCI device vendor ID (8086) and the product IDs of the 82599 can be obtained from the output (10fb for the physical function and 10ed for the VF):

lspci -nn | grep 82599

On Controller node

1 Edit the controller local.conf. Note that the same local.conf file of Section 5.2.1.3 is used here, adding the following:

Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitchsriovnicswitch

[[post-config|$NOVA_CONF]]
[DEFAULT]
scheduler_default_filters=RamFilter,ComputeFilter,AvailabilityZoneFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,PciPassthroughFilter,NUMATopologyFilter
pci_alias={"name":"niantic","product_id":"10ed","vendor_id":"8086"}

[[post-config|$Q_PLUGIN_CONF_FILE]]
[ml2_sriov]
supported_pci_vendor_devs = 8086:10fb 8086:10ed

2 Run ./stack.sh

On Compute node

1 Edit /opt/stack/nova/requirements.txt, add "libvirt-python>=1.2.8":

echo "libvirt-python>=1.2.8" >> /opt/stack/nova/requirements.txt

2 Edit the compute local.conf for accelerated OVS. Note that the same local.conf file of Section 5.3.1.1 is used here.


3 Add the following:

[[post-config|$NOVA_CONF]]
[DEFAULT]
pci_passthrough_whitelist={"address":"0000:08:00.0","vendor_id":"8086","physical_network":"physnet1"}

pci_passthrough_whitelist={"address":"0000:08:10.0","vendor_id":"8086","physical_network":"physnet1"}

pci_passthrough_whitelist={"address":"0000:08:10.2","vendor_id":"8086","physical_network":"physnet1"}

4 Remove (or comment out) the following. Note that currently SR-IOV pass-through is only supported with standard OVS:

OVS_NUM_HUGEPAGES=8192
OVS_DATAPATH_TYPE=netdev
OVS_GIT_TAG=b35839f3855e3b812709c6ad1c9278f498aa9935

Run ./stack.sh on both the controller and compute nodes to complete the DevStack installation.

6.1.2.3 Create VM with NUMA Placement and SR-IOV

1 After stacking is successful on both the controller and compute nodes, verify that the PCI pass-through device(s) are in the OpenStack database:

mysql -uroot -ppassword -h 10.11.12.1 nova -e 'select * from pci_devices'

2 The output should show entry(ies) of PCI device(s) similar to the following

| 2014-11-18 194114 | NULL | NULL | 0 | 1 | 3 | 000008100 | 10ed | 8086 | type-VF | pci_0000_08_10_0 | label_8086_10ed | available | phys_function 000008000 | NULL | NULL | 0 |

3 Next, create a flavor, for example:

nova flavor-create numa-flavor 1001 1024 4 1

where

flavor name = numa-flavor
id = 1001
virtual memory = 1024 MB
virtual disk size = 4 GB
number of virtual CPUs = 1

4 Modify flavor for numa placement with PCI pass-through

nova flavor-key 1001 set pci_passthrough:alias=niantic:1 hw:numa_nodes=1 hw:numa_cpus.0=0 hw:numa_mem.0=1024

5 To show detailed information of the flavor

nova flavor-show 1001

6 Create a VM numa-vm1 with the flavor numa-flavor under the default project demo. Note that the following example assumes an image fedora-basic and an availability zone zone-04 are already in place (see Section 6.1.1.2) and that private is the default network for the demo project.

nova boot --image <image-id> --flavor <flavor-id> --availability-zone <zone-name> --nic net-id=<network-id> numa-vm1

where numa-vm1 is the name of instance of the VM to be booted


Access the VM from OpenStack Horizon; the new VM shows two virtual network interfaces. The interface with an SR-IOV VF should show a name of ensX, where X is a number (for example, ens5). If a DHCP server is available for the physical interface (p1p1 in this example), the VF gets an IP address automatically; otherwise, users can assign an IP address to the interface the same way as with a standard network interface.
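For example, assuming the VF shows up as ens5 inside the VM and 192.168.2.0/24 is the subnet reachable through the physical interface (both values are examples, not taken from the original setup), a static address can be assigned as follows:

ip addr add 192.168.2.101/24 dev ens5
ip link set dev ens5 up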

To verify network connectivity through a VF, users can set up two compute hosts and create a VM on each node. After obtaining IP addresses, the VMs should communicate with each other as over a normal network.

6.2 Using OpenDaylight

This section describes how to download, install, and set up an OpenDaylight controller.

6.2.1 Preparing the OpenDaylight Controller

1 Download the pre-built OpenDaylight Helium-SR1 distribution:

wget http://nexus.opendaylight.org/content/repositories/opendaylight.release/org/opendaylight/integration/distribution-karaf/0.2.1-Helium-SR1/distribution-karaf-0.2.1-Helium-SR1.tar.gz

2 Extract the archive and cd into it

tar xf distribution-karaf-0.2.1-Helium-SR1.tar.gz
cd distribution-karaf-0.2.1-Helium-SR1

3 Use the ./bin/karaf executable to start the Karaf shell:
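For example, from the directory extracted in the previous step:

./bin/karaf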


4 Install the required features

Karaf might take a long time to start, or feature installation might fail, if the host does not have network access. You'll need to set up the appropriate proxy settings.
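The original document does not list the features at this point; the exact set depends on how the controller is used. For OVSDB/OpenStack integration with Helium, a commonly used set is the following (verify the feature names against the OpenDaylight documentation for your release). If the host is behind a proxy, Karaf resolves features through Maven, so a Maven settings.xml containing the proxy should be referenced from etc/org.ops4j.pax.url.mvn.cfg before starting Karaf.

feature:install odl-base-all odl-aaa-authn odl-restconf odl-nsf-all odl-adsal-northbound odl-mdsal-apidocs odl-ovsdb-openstack odl-ovsdb-northbound odl-dlux-core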

6.3 Border Network Gateway

This section describes how to install and run a Border Network Gateway on a compute node that is prepared as described in Section 5.1 and Section 5.3. The example interface names from those sections have been maintained in this section too. Also, for simplicity, the BNG uses the handle_none configuration mode, which makes it work as an L2 forwarding engine. The BNG is more complex than this, and users interested in exploring more of its capabilities should read https://01.org/intel-data-plane-performance-demonstrators/quick-overview

The setup to test the functionality of the vBNG follows


6.3.1 Installation and Configuration Inside the VM

1 Execute the following command:

yum -y update

2 Disable SELinux

setenforce 0
vi /etc/selinux/config

And change so SELINUX=disabled

3 Disable the firewall

systemctl disable firewalld.service
reboot

4 Edit grub default configuration

vi /etc/default/grub

Add hugepages to it

... noirqbalance intel_idle.max_cstate=0 processor.max_cstate=0 ipv6.disable=1 default_hugepagesz=1G hugepagesz=1G hugepages=2 isolcpus=1,2,3,4

5 Rebuild grub config and reboot the system

grub2-mkconfig -o /boot/grub2/grub.cfg
reboot

6 Verify that hugepages are available in the VM

cat /proc/meminfo
HugePages_Total:       2
HugePages_Free:        2
Hugepagesize:    1048576 kB

7 Add the following to the end of the ~/.bashrc file:

---------------------------------------------
export RTE_SDK=/root/dpdk
export RTE_TARGET=x86_64-native-linuxapp-gcc
export OVS_DIR=/root/ovs

export RTE_UNBIND=$RTE_SDK/tools/dpdk_nic_bind.py
export DPDK_DIR=$RTE_SDK
export DPDK_BUILD=$DPDK_DIR/$RTE_TARGET
---------------------------------------------

8 Re-login or source that file

source ~/.bashrc

9 Install DPDK

git clone http://dpdk.org/git/dpdk
cd dpdk
git checkout v1.7.1
make install T=$RTE_TARGET
modprobe uio
insmod $RTE_SDK/$RTE_TARGET/kmod/igb_uio.ko


10 Check the PCI addresses of the 82599 cards

lspci | grep Network
00:04.0 Ethernet controller: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection (rev 01)
00:05.0 Ethernet controller: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection (rev 01)
00:06.0 Ethernet controller: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection (rev 01)
00:07.0 Ethernet controller: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection (rev 01)

11 Make sure that the correct PCI addresses are listed in the script bind_to_igb_uio.sh.
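As a sketch of what the script is expected to do (the script itself ships with the ONP scripts, so this is only an illustration), binding the four ports listed in the previous step to igb_uio by hand would look like:

$RTE_SDK/tools/dpdk_nic_bind.py -b igb_uio 00:04.0 00:05.0 00:06.0 00:07.0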

12 Download BNG packages

wget https://01.org/sites/default/files/downloads/intel-data-plane-performance-demonstrators/dppd-bng-v013.zip

13 Extract DPPD BNG sources

unzip dppd-bng-v013.zip

14 Build BNG DPPD application

yum -y install ncurses-devel
cd dppd-BNG-v013
make

15 Refer to Section 6.3.3, "Extra Preparations on the Compute Node," before running the BNG application in the VM inside the compute node.

16 Make sure that the application starts

./build/dppd -f config/handle_none.cfg

The handle_none configuration passes all traffic straight through between ports, which is essentially similar to the L2 forwarding test. The config directory contains additional, more complex BNG configurations and Pktgen scripts. Additional BNG-specific workloads can be found in the dppd-BNG-v013/pktgen-scripts directory.

Following is a sample graphic of the BNG running in a VM with 2 ports


Exit the application by pressing ESC or CTRL-C

Refer to Section 6.3.2 regarding installing and running the software traffic generator.

For the sanity check test, users can use the Pktgen wrapper script onps_pktgen-64bytes-UDP-2ports.sh for running Pktgen (on its dedicated server) in order to test the handle-none throughput for two physical and two virtual ports. You'll need to update the PKTGEN_DIR variable at the top of the file to point to the right directory, which, referring to Section 6.3.2, is the following:

PKTGEN_DIR=/home/stack/git/Pktgen-DPDK

6.3.2 Installation and Configuration of the Back-to-Back Host (Packet Generator)

The back-to-back host can be any Intel® Xeon® processor-based system, or it can be any compute node that has been prepared using the instructions in Section 5.1 and Section 5.3. For simplicity, Intel assumes the latter was the case. Also assume that the git directory for the stack user is /home/stack/git.

1 In the git directory get the source from Github

git clone https://github.com/Pktgen/Pktgen-DPDK.git
cd Pktgen-DPDK

2 An extra package must be installed for Pktgen to compile correctly

yum -y install libpcap-devel

Pktgen comes with its own distribution of DPDK sources, and this bundled version of DPDK must be used. Note that it contains some Wind River-specific helper libraries, which Pktgen depends on, that are not in the default DPDK distribution.

3 The $RTE_TARGET variable must be set to a specific value Otherwise these libraries will not build

cd
vi .bashrc

Add the following three lines to the end

export RTE_SDK=$HOME/Pktgen-DPDK/dpdk
export RTE_TARGET=x86_64-pktgen-linuxapp-gcc
export PKTGEN_DIR=$HOME/Pktgen-DPDK

4 Re-login or execute the following command

source ~/.bashrc

5 Build the basic DPDK libraries and extra helpers

cd $RTE_SDK
make install T=$RTE_TARGET

6 Build Pktgen

cd examples/pktgen
make

7 Adapt the dpdk_nic_bind.py script according to the actual NICs in use, so that both interfaces are bound to igb_uio and DPDK can use them. See the details of the command that follows:

./tools/dpdk_nic_bind.py --status
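For example, if the status output shows the two 82599 ports at 0000:04:00.0 and 0000:04:00.1 (hypothetical addresses), they can be bound with:

./tools/dpdk_nic_bind.py -b igb_uio 0000:04:00.0 0000:04:00.1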

8 Use onps_pktgen-64bytes-UDP-2ports.sh from onps_server_1_2.tar.gz.


9 Now run the script as root, after the compute node has been set up as in Section 6.3.3, the BNG VM has been prepared as in Section 6.3.1, and the BNG has been started inside the VM.

6.3.3 Extra Preparations on the Compute Node

1 Do the following as the stack user:

cd /home/stack/devstack
vi local.conf

2 Comment out the following

PHYSICAL_NETWORK=physnet1
OVS_PHYSICAL_BRIDGE=br-p1p1

At the same time, add the following line right below the previously commented ones:

OVS_BRIDGE_MAPPINGS=default:br-p1p1,physnet1:br-p1p2

3 Run again as stack user

./unstack.sh

./stack.sh

This causes both physical interfaces to come up and get bound to DPDK. Also, a bridge is created on top of each of these interfaces.

ovs-vsctl show
b52bd3ed-0f6c-45b9-ace1-846d901bed64
    Bridge br-p1p1
        Port br-p1p1
            Interface br-p1p1
                type: internal
        Port p1p1
            Interface p1p1
                type: dpdkphy
                options: {port="0"}
        Port phy-br-p1p1
            Interface phy-br-p1p1
                type: patch
                options: {peer=int-br-p1p1}
    Bridge br-int
        fail_mode: secure
        Port int-br-p1p2
            Interface int-br-p1p2
                type: patch
                options: {peer=phy-br-p1p2}
        Port int-br-p1p1
            Interface int-br-p1p1
                type: patch
                options: {peer=phy-br-p1p1}
        Port br-int
            Interface br-int
                type: internal
    Bridge br-p1p2
        Port phy-br-p1p2
            Interface phy-br-p1p2
                type: patch
                options: {peer=int-br-p1p2}
        Port p1p2
            Interface p1p2
                type: dpdkphy
                options: {port="1"}
        Port br-p1p2
            Interface br-p1p2
                type: internal

4 Move the p1p2 physical port under the same bridge as p1p1

ovs-vsctl del-port p1p2
ovs-vsctl add-port br-p1p1 p1p2 -- set Interface p1p2 type=dpdkphy options:port=1

5 Stop the OpenStack agent:

./rejoin-stack.sh
ctrl-a 1
ctrl-c
ctrl-a d

6 Add the dpdkvhost interfaces for the VM

ovs-vsctl --no-wait add-port br-p1p1 port3 -- set Interface port3 type=dpdkvhost ofport_request=3
ovs-vsctl --no-wait add-port br-p1p1 port4 -- set Interface port4 type=dpdkvhost ofport_request=4

7 Find out the OpenFlow port numbers of the interfaces:

ovs-ofctl show br-p1p1

The output should be similar to the following. Note the number on the left of each interface, because it is the OpenFlow port number:

OFPT_FEATURES_REPLY (xid=0x2): dpid:0000286031010000
n_tables:254, n_buffers:256
capabilities: FLOW_STATS TABLE_STATS QUEUE_STATS ARP_MATCH_IP
actions: OUTPUT SET_VLAN_VID SET_VLAN_PCP STRIP_VLAN SET_DL_SRC SET_DL_DST SET_NW_SRC SET_NW_DST SET_TP_SRC SET_TP_DST
 1(phy-br-p1p1): addr:9e:ae:92:25:3c:c1
     config: 0
     state: 0
     speed: 0 Mbps now, 0 Mbps max
 2(p1p2): addr:9e:ae:92:25:3c:c1
     config: 0
     state: 0
     speed: 0 Mbps now, 0 Mbps max
 3(port3): addr:9e:ae:92:25:3c:c1
     config: 0
     state: 0
     speed: 0 Mbps now, 0 Mbps max
 4(port4): addr:49:04:ff:7f:00:00
     config: 0
     state: 0
     speed: 0 Mbps now, 0 Mbps max
 16(p1p1): addr:49:04:ff:7f:00:00
     config: 0
     state: 0
     speed: 0 Mbps now, 0 Mbps max
 LOCAL(br-p1p1): addr:9e:ae:92:25:3c:c1
     config: 0
     state: 0
     speed: 0 Mbps now, 0 Mbps max
OFPT_GET_CONFIG_REPLY (xid=0x4): frags=normal miss_send_len=0

8 Clean up the flow table of the bridge

ovs-ofctl del-flows br-p1p1

9 Program the flows so each physical interface forwards the packets to a dpdkvhost interface and the other way round

ovs-ofctl add-flow br-p1p1 in_port=16,dl_type=0x0800,idle_timeout=0,action=output:3
ovs-ofctl add-flow br-p1p1 in_port=3,dl_type=0x0800,idle_timeout=0,action=output:16
ovs-ofctl add-flow br-p1p1 in_port=4,dl_type=0x0800,idle_timeout=0,action=output:2
ovs-ofctl add-flow br-p1p1 in_port=2,dl_type=0x0800,idle_timeout=0,action=output:4
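As an optional check, dump the flow table again to confirm that the four rules are in place:

ovs-ofctl dump-flows br-p1p1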


10 Users can now spawn their vBNG

qemu-kvm -cpu host -enable-kvm -m 4096 -smp 4,cores=4,threads=1,sockets=1 -name VM1 -hda <path to the VM image file> -mem-path /dev/hugepages -mem-prealloc -vnc :2 -daemonize -net nic,model=virtio,macaddr=00:1e:77:68:09:fd -net tap,ifname=tap1,script=no,downscript=no -netdev type=tap,id=net1,script=no,downscript=no,ifname=port3,vhost=on -device virtio-net-pci,netdev=net1,mac=00:00:01:00:00:01,csum=off,gso=off,guest_tso4=off,guest_tso6=off,guest_ecn=off -netdev type=tap,id=net2,script=no,downscript=no,ifname=port4,vhost=on -device virtio-net-pci,netdev=net2,mac=00:00:01:00:00:02,csum=off,gso=off,guest_tso4=off,guest_tso6=off,guest_ecn=off


Appendix A Additional OpenDaylight Information

This section describes how OpenDaylight can be used in a multi-node setup. Two hosts are used: one running OpenDaylight, the OpenStack controller + compute, and OVS; the second host is the compute node. This section describes how to create a VXLAN tunnel, create VMs, and ping from one VM to another.

Note Due to a known defect in ODL (https://bugs.opendaylight.org/show_bug.cgi?id=2469), the multi-node setup could not be verified.

Following is a sample local.conf for the OpenDaylight host:

[[local|localrc]]
FORCE=yes

HOST_NAME=<name of this machine>
HOST_IP=<ip of this machine>
HOST_IP_IFACE=<mgmt ip isolated from internet>

PUBLIC_INTERFACE=<isolated IP could be same as HOST_IP_IFACE>
VLAN_INTERFACE=
FLAT_INTERFACE=

ADMIN_PASSWORD=password
MYSQL_PASSWORD=password
DATABASE_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=no-token-password
HORIZON_PASSWORD=password
RABBIT_PASSWORD=password

disable_service n-net
disable_service n-cpu

enable_service q-svc
enable_service q-agt
enable_service q-dhcp
enable_service q-l3
enable_service q-meta
enable_service neutron
enable_service horizon

LOGFILE=$DEST/stack.sh.log
SCREEN_LOGDIR=$DEST/screen
SYSLOG=True
LOGDAYS=1

# ODL start
Q_HOST=$HOST_IP
enable_service odl-server
enable_service odl-compute
ODL_MGR_IP=10.11.10.7
ENABLED_SERVICES+=,n-api,n-crt,n-obj,n-cpu,n-cond,n-sch,n-novnc,n-cauth,n-cauth,nova
ENABLED_SERVICES+=,cinder,c-api,c-vol,c-sch,c-bak


Q_PLUGIN=ml2

Q_AGENT=openvswitch
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch,opendaylight
Q_ML2_PLUGIN_TYPE_DRIVERS=vlan,flat,local
Q_ML2_TENANT_NETWORK_TYPE=vxlan

ENABLE_TENANT_TUNNELS=True
ENABLE_TENANT_VLANS=True

PHYSICAL_NETWORK=physnet1
ML2_VLAN_RANGES=physnet1:1000:1010
OVS_PHYSICAL_BRIDGE=br-

MULTI_HOST=True

[[post-config|$NOVA_CONF]]

# disable nova security groups
[DEFAULT]
firewall_driver=nova.virt.firewall.NoopFirewallDriver
novncproxy_host=0.0.0.0
novncproxy_port=6080

Here is a sample local.conf for the compute node:

[[local|localrc]]

FORCE=yes
MULTI_HOST=True

HOST_NAME=<name of this machine>
HOST_IP=<ip of this machine>
HOST_IP_IFACE=<isolated interface>
SERVICE_HOST_NAME=<name of the controller machine>
SERVICE_HOST=<ip of controller machine>
Q_HOST=$SERVICE_HOST

MYSQL_HOST=$SERVICE_HOST
RABBIT_HOST=$SERVICE_HOST

GLANCE_HOST=$SERVICE_HOST
GLANCE_HOSTPORT=$SERVICE_HOST:9292
KEYSTONE_AUTH_HOST=$SERVICE_HOST
KEYSTONE_SERVICE_HOST=<ip of controller machine>

ADMIN_PASSWORD=password
MYSQL_PASSWORD=password
DATABASE_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=no-token-password
HORIZON_PASSWORD=password
RABBIT_PASSWORD=password

disable_all_services

enable_service rabbit
enable_service n-cpu
enable_service q-agt
enable_service odl-compute

DEST=/opt/stack
LOGFILE=$DEST/stack.sh.log
SCREEN_LOGDIR=$DEST/screen
SYSLOG=True
LOGDAYS=1


ODL_MGR_IP=<ip of controller machine>

Q_PLUGIN=ml2
Q_AGENT=openvswitch
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch,opendaylight
Q_ML2_PLUGIN_TYPE_DRIVERS=vxlan
OVS_NUM_HUGEPAGES=8192
OVS_DATAPATH_TYPE=netdev

OVDK_OVS_GIT_TAG=

ENABLE_TENANT_TUNNELS=True
ENABLE_TENANT_VLANS=True

Q_ML2_TENANT_NETWORK_TYPE=vxlan
ML2_VLAN_RANGES=physnet1:1000:1010
PHYSICAL_NETWORK=physnet1
OVS_PHYSICAL_BRIDGE=br-p1p1

[[post-config|$NOVA_CONF]]
[DEFAULT]
firewall_driver=nova.virt.firewall.NoopFirewallDriver
vnc_enabled=True
vncserver_listen=0.0.0.0
vncserver_proxyclient_address=$HOST_IP

A.1 Create VMs using DevStack Horizon GUI

After starting the OpenDaylight controller as described in Section 6.1, run stack.sh on the controller and compute nodes.

Log in to http://<control node IP address>:8080 to start the Horizon GUI.

Verify that the node shows up in the following GUI

Create a new Vxlan network

1 Click on the Networks tab

2 Click on the Create Network button

3 Enter the Network name then click Next


4 Enter the subnet information then click Next


5 Add additional information then click Next

6 Click the Create button

7 Create a VM instance by clicking the Launch Instances button


8 Click on the Details tab to enter VM details


9 Click on the Networking tab then enter network information

VMS will now be created

Instead of disabling the OVSDB neutron service by removing the OVSDB neutron bundle file, it is possible to disable the bundle from the OSGi console. However, there does not appear to be a way to make this persistent, so it must be done each time the controller restarts.


Once the controller is up and running, connect to the OSGi console. The ss command displays all of the bundles that are installed and their status; adding a string filters the list of bundles. List the OVSDB bundles:

osgi> ss ovs
Framework is launched.

id    State       Bundle
106   ACTIVE      org.opendaylight.ovsdb.northbound_0.5.0
112   ACTIVE      org.opendaylight.ovsdb_0.5.0
262   ACTIVE      org.opendaylight.ovsdb.neutron_0.5.0

Note that there are three OVSDB bundles running (the OVSDB neutron bundle has not been removed in this case)

Disable the OVSDB neutron bundle and then list the OVSDB bundles again

osgi> stop 262
osgi> ss ovs
Framework is launched.

id    State       Bundle
106   ACTIVE      org.opendaylight.ovsdb.northbound_0.5.0
112   ACTIVE      org.opendaylight.ovsdb_0.5.0
262   RESOLVED    org.opendaylight.ovsdb.neutron_0.5.0

Now the OVSDB neutron bundle is in the RESOLVED state which means that it is not active


Appendix B BNG as an Appliance

Please download the latest BNG application from https://01.org/intel-data-plane-performance-demonstrators/downloads. More details about how the BNG works can be found at https://01.org/intel-data-plane-performance-demonstrators/quick-overview


Appendix C Glossary

Acronym Description

ATR Application Targeted Routing

COTS Commercial Off‐The-Shelf

DPI Deep Packet Inspection

FCS Frame Check Sequence

GRE Generic Routing Encapsulation

GRO Generic Receive Offload

IOMMU InputOutput Memory Management Unit

Kpps Kilo packets per second

KVM Kernel-based Virtual Machine

LRO Large Receive Offload

MSI Message Signaling Interrupt

MPLS Multi-protocol Label Switching

Mpps Million packets per second

NIC Network Interface Card

pps Packets per second

QAT Quick Assist Technology

QinQ VLAN stacking (802.1ad)

RA Reference Architecture

RSC Receive Side Coalescing

RSS Receive Side Scaling

SP Service Provider

SR-IOV Single root IO Virtualization

TCO Total Cost of Ownership

TSO TCP Segmentation Offload


Appendix D References

Document Name Source

Internet Protocol version 4 http://www.ietf.org/rfc/rfc791.txt

Internet Protocol version 6 http://www.faqs.org/rfc/rfc2460.txt

Intel® 82599 10 Gigabit Ethernet Controller Datasheet http://www.intel.com/content/www/us/en/ethernet-controllers/82599-10-gbe-controller-datasheet.html

Intel DDIO https://www-ssl.intel.com/content/www/us/en/io/direct-data-i-o.html

Bandwidth Sharing Fairness http://www.intel.com/content/www/us/en/network-adapters/10-gigabit-network-adapters/10-gbe-ethernet-flexible-port-partitioning-brief.html

Design Considerations for efficient network applications with Intel® multi-core processor-based systems on Linux http://download.intel.com/design/intarch/papers/324176.pdf

OpenFlow with Intel 82599 http://ftp.sunet.se/pub/Linux/distributions/bifrost/seminars/workshop-2011-03-31/Openflow_1103031.pdf

Wu, W., DeMar, P. & Crawford, M. (2012). A Transport-Friendly NIC for Multicore/Multiprocessor Systems. IEEE Transactions on Parallel and Distributed Systems, vol. 23, no. 4, April 2012. http://lss.fnal.gov/archive/2010/pub/fermilab-pub-10-327-cd.pdf

Why does Flow Director Cause Packet Reordering? http://arxiv.org/ftp/arxiv/papers/1106/1106.0443.pdf

IA packet processing http://www.intel.com/p/en_US/embedded/hwsw/technology/packet-processing

High Performance Packet Processing on Cloud Platforms using Linux with Intel® Architecture http://networkbuilders.intel.com/docs/network_builders_RA_packet_processing.pdf

Packet Processing Performance of Virtualized Platforms with Linux and Intel® Architecture http://networkbuilders.intel.com/docs/network_builders_RA_NFV.pdf

DPDK http://www.intel.com/go/dpdk

Intel® DPDK Accelerated vSwitch https://01.org/packet-processing


LEGAL

By using this document in addition to any agreements you have with Intel you accept the terms set forth below

You may not use or facilitate the use of this document in connection with any infringement or other legal analysis concerning Intel products described herein You agree to grant Intel a non-exclusive royalty-free license to any patent claim thereafter drafted which includes subject matter disclosed herein

INFORMATION IN THIS DOCUMENT IS PROVIDED IN CONNECTION WITH INTEL PRODUCTS NO LICENSE EXPRESS OR IMPLIED BY ESTOPPEL OR OTHERWISE TO ANY INTELLECTUAL PROPERTY RIGHTS IS GRANTED BY THIS DOCUMENT EXCEPT AS PROVIDED IN INTELS TERMS AND CONDITIONS OF SALE FOR SUCH PRODUCTS INTEL ASSUMES NO LIABILITY WHATSOEVER AND INTEL DISCLAIMS ANY EXPRESS OR IMPLIED WARRANTY RELATING TO SALE ANDOR USE OF INTEL PRODUCTS INCLUDING LIABILITY OR WARRANTIES RELATING TO FITNESS FOR A PARTICULAR PURPOSE MERCHANTABILITY OR INFRINGEMENT OF ANY PATENT COPYRIGHT OR OTHER INTELLECTUAL PROPERTY RIGHT

Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors

Performance tests such as SYSmark and MobileMark are measured using specific computer systems components software operations and functions Any change to any of those factors may cause the results to vary You should consult other information and performance tests to assist you in fully evaluating your contemplated purchases including the performance of that product when combined with other products

The products described in this document may contain design defects or errors known as errata which may cause the product to deviate from published specifications Current characterized errata are available on request Contact your local Intel sales office or your distributor to obtain the latest specifications and before placing your product order

Intel technologies may require enabled hardware specific software or services activation Check with your system manufacturer or retailer Tests document performance of components on a particular test in specific systems Differences in hardware software or configuration will affect actual performance Consult other sources of information to evaluate performance as you consider your purchase For more complete information about performance and benchmark results visit httpwwwintelcomperformance

All products computer systems dates and figures specified are preliminary based on current expectations and are subject to change without notice Results have been estimated or simulated using internal Intel analysis or architecture simulation or modeling and provided to you for informational purposes Any differences in your system hardware software or configuration may affect your actual performance

No computer system can be absolutely secure Intel does not assume any liability for lost or stolen data or systems or any damages resulting from such losses

Intel does not control or audit third-party web sites referenced in this document You should visit the referenced web site and confirm whether referenced data are accurate

Intel Corporation may have patents or pending patent applications trademarks copyrights or other intellectual property rights that relate to the presented subject matter The furnishing of documents and other materials and information does not provide any license express or implied by estoppel or otherwise to any such patents trademarks copyrights or other intellectual property rights

copy 2014 Intel Corporation All rights reserved Intel the Intel logo Core Xeon and others are trademarks of Intel Corporation in the US andor other countries Other names and brands may be claimed as the property of others

Ingredient versions integration procedures configuration parameters and test methodologies all influence performance The performance data provided here does not represent best possible performance but rather provides a baseline of what is possible using ldquoout-of-boxrdquo open source software ingredients

The purpose of documenting configurations is not to imply any preferred methods However providing a baseline configuration of well tested procedures can help to achieve optimal system performance when developing an NFVSDN solution

Intelreg ONP Server Reference ArchitectureSolutions Guide

6

NOTE This page intentionally left blank

7

Intelreg ONP Server Reference ArchitectureSolutions Guide

20 Summary

The Intel ONP Server uses Open Source software to help accelerate SDN and NFV commercialization with the latest Intel Architecture Communications Platform

This document describes how to setup and configure controller and compute nodes for evaluating and developing NFVSDN solutions using the Intelreg Open Network Platform ingredients

Platform hardware is based on a Intelreg Xeonreg DP Server with the following

bull Intelreg Xeonreg Processor Series E5-2697 V3

bull Intelreg 82599 10 GbE Controller

The host operating system is Fedora 20 with Qemu-kvm virtualization technology Software ingredients include Data Plane Development Kit (DPDK) Open vSwitch Intelreg DPDK Accelerated vSwitch OpenStack and OpenDaylight

Figure 2-1 Intel ONP Server - Hardware and Software Ingredients

Intelreg ONP Server Reference ArchitectureSolutions Guide

8

Figure 2-2 shows a generic SDNNFV setup In this configuration Orchestrator and Controller (management and control plane) and compute node (data plane) run on different server nodes Note that many variations of this setup can be deployed

The test cases described in this document were designed to illustrate certain baseline performance and functionality using the specified ingredients configurations and specific test methodology A simple network topology was used as shown in Figure 2-2

Test cases are designed to

bull Baseline packet processing (such as data plane) performance with host and VM configurations

bull Verify communication between controller and compute nodes

bull Validate basic controller functionality

Figure 2-2 Generic Setup with Controller and Two Compute Nodes

9

Intelreg ONP Server Reference ArchitectureSolutions Guide

21 Network Services ExamplesThe following examples of network services are included as use-cases that have been tested with the Intelreg Open Network Platform Server Reference Architecture

211 Suricata (Next Generation IDSIPS engine)Suricata is a high performance Network IDS IPS and Network Security Monitoring engine developed by the OISF its supporting vendors and the community

httpsuricata-idsorg

212 vBNG (Broadband Network Gateway)Intel Data Plane Performance Demonstrators ndash Border Network Gateway (BNG) using DPDK

https01orgintel-data-plane-performance-demonstratorsdownloadsbng-application-v013

A Broadband (or Border) Network Gateway may also be known as a Broadband Remote Access Server (BRAS) and routes traffic to and from broadband remote access devices such as digital subscriber line access multiplexers (DSLAM) This network function is included as an example of a workload that can be virtualized on the Intel ONP Server

Additional information on the performance characterization of this vBNG implementation can be found at

httpnetworkbuildersintelcomdocsNetwork_Builders_RA_vBRAS_Finalpdf

Refer to Border Network Gateway for information on setting up and testing the vBNG application with Intelreg DPDK Accelerated vSwitch or to Appendix B for more information on running the BNG as an appliance

Intelreg ONP Server Reference ArchitectureSolutions Guide

10

NOTE This page intentionally left blank

11

Intelreg ONP Server Reference ArchitectureSolutions Guide

30 Hardware Components

Table 3-1 Hardware Ingredients (Grizzly Pass)

Item Description Notes

Platform Intelreg Server Board 2U 8x35 SATA 2x750W 2xHS Rails Intel R2308GZ4GC

Grizzly Pass Xeon DP Server (2 CPU sockets) 240GB SSD 25in SATA 6Gbs Intel Wolfsville SSDSC2BB240G401 DC S3500 Series

Processors Intelreg Xeonreg Processor Series E5-2680 v2 LGA2011 28GHz 25MB 115W 10 cores

Ivy Bridge Socket-R (EP) 10 Core 28GHz 115W 25M per core LLC 80 GTs QPI DDR3-1867 HT turboLong product availability

Cores 10 physical coresCPU 20 Hyper-threaded cores per CPU for 40 total cores

Memory 8 GB 1600 Reg ECC 15 V DDR3 Kingston KVR16R11S48I Romley

64 GB RAM (8x 8 GB)

NICs (82599) 2x Intelreg 82599 10 GbE Controller (Niantic) NICs are on socket zero (3 PCIe slots available on socket 0)

BIOS SE5C60086B02010002082220131453Release Date 08222013BIOS Revision 46

Intelreg Virtualization Technology for Directed IO (Intelreg VT-d)Hyper-threading enabled

Table 3-2 Hardware Ingredients (Wildcat Pass)

Item Description Notes

Platform Intelreg Server Board S2600WTT 1100W power supply Wildcat Pass Xeon DP Server (2 CPU sockets) 120 GB SSD 25in SATA 6GBs Intel Wolfsville SSDSC2BB120G4

Processors Intelreg Xeonreg Processor Series E5-2697 v3 26GHz 25MB 145W 14 cores

Haswell 14 Core 26GHz 145W 35M total cache per processor 96 GTs QPI DDR4-160018662133

Cores 14 physical coresCPU 28 Hyper-threaded cores per CPU for 56 total cores

Memory 8 GB DDR4 RDIMM Crucial CT8G4RFS423 64 GB RAM (8x 8 GB)

NICs (82599) 2x Intelreg 82599 10 GbE Controller (Niantic) NICs are on socket zero

BIOS GRNDSDP186B0038R011409040644 Release Date 09042014

Intelreg Virtualization Technology for Directed IO (Intelreg VT-d) enabled only for SR-IOV PCI pass-through testsHyper-threading enabled but disabled for benchmark testing

Intelreg ONP Server Reference ArchitectureSolutions Guide

12

NOTE This page intentionally left blank

13

Intelreg ONP Server Reference ArchitectureSolutions Guide

40 Software Versions

Table 4-1 Software Versions

Software Component Function VersionConfiguration

Fedora 20 x86_64 Host OS 3156-200fc20x86_64

Qemu‐kvm Virtualization technology Modified QEMU 162 (bundled with Intelreg DPDK Accelerated vSwitch)

Data Plane Development Kit (DPDK)

Network Stack bypass and libraries for packet processing Includes user space poll mode drivers

171

Intelreg DPDK Accelerated vSwitch

vSwitch v120commit id 6210bb0a6139b20283de115f87aa7a381b04670f

Open vSwitch vSwitch Open vSwitch V 23Commit id b35839f3855e3b812709c6ad1c9278f498aa9935

OpenStack SDN Orchestrator Juno Release + Intel patches (openstack_ovdkl02-907zip)

DevStack Tool for Open Stack deployment

httpsgithubcomopenstack-devdevstackgit Commit id d6f700db33aeab68916156a98971aef8cfa53a2e

OpenDaylight SDN Controller HeliumSR1

Suricata IPS application Suricata v204 (current Fedora 20 package)

BNG DPPD Broadband Network Gateway DPDK Performance Demonstrator Application

DPPD v013https01orgintel-data-plane-performance-demonstratorsdownloads

PktGen Software Network Package Generator v277

Intelreg ONP Server Reference ArchitectureSolutions Guide

14

41 Obtaining Software IngredientsTable 4-2 Software Ingredients

Software Component

Software Sub-components Patches Location Comments

Fedora 20 httpdownloadfedoraprojectorgpubfedoralinuxreleases20Fedorax86_64isoFedora-20-x86_64-DVDiso

Standard Fedora 20 iso image

Data Plane Development Kit (DPDK)

DPDK poll mode driver sample apps (bundled)

httpdpdkorggitdpdkCommit id 99213f3827bad956d74e2259d06844012ba287a4

All sub-components in one zip file

Intelreg DPDK Accelerated vSwitch (OVDK)

dpdk-ovs qemu ovs-db vswitchd ovs_client (bundled)

httpsgithubcom01orgdpdk-ovsgitCommit id 6210bb0a6139b20283de115f87aa7a381b04670f

v120

Open vSwitch httpsgithubcomopenvswitchovsgitCommit id b35839f3855e3b812709c6ad1c9278f498aa9935

OpenStack Juno release To be deployed using DevStack(see following row)

Three patches downloaded as one tarball Then follow the instructions to deploy the Nodes

DevStack Patches for DevStack and Nova

httpsgithubcomopenstack-devdevstackgitCommit id d6f700db33aeab68916156a98971aef8cfa53a2eThen apply to that commit the patches inhttpsdownload01orgpacket-processingONPS12openstack_ovdkl02-907zip

Two patches downloaded as one tarball Then follow the instructions to deploy

OpenDaylight httpnexusopendaylightorgcontentrepositoriesopendaylightreleaseorgopendaylightintegrationdistribution-karaf021-Helium-SR1distribution-karaf-021-Helium-SR1targz

Intelreg ONP Server Release 12 Script

Helper scripts to setup SRT 12 using DevStack

httpsdownload01orgpacket-processingONPS12onps_server_1_2targz

BNG DPPD Broadband Network Gateway DPDK Performance

https01orgintel-data-plane- performance-demonstratorsdppd-bng- v013zip

PktGen Software Network Package Generator

httpsgithubcomPktgenPktgen-DPDKgitcommit id 5e8633c99e9771467dc26b64a4ff232c7e9fba2a

BNG Helper scripts

Intelreg ONP for Server Configuration Scripts for vBNG

https01orgsitesdefaultfilespagevbng-scriptstgz

Suricata Package from Fedora 20 yum install suricata

15

Intelreg ONP Server Reference ArchitectureSolutions Guide

50 Installation and Configuration Guide

This section describes the installation and configuration instructions to prepare the controller and compute nodes

51 Instructions Common to Compute and Controller Nodes

This section describes how to prepare both the controller and compute nodes with the right BIOS settings and operating system installation The preferred operating system is Fedora 20 although it is considered relatively easy to use this solutions guide for other Linux distributions

511 BIOS SettingsTable 5-1 BIOS Settings

Configuration Setting forController Node

Setting forCompute Node

Enhanced Intel SpeedStep Enabled Disabled

Processor C3 Disabled Disabled

Processor C6 Disabled Disabled

Intelreg Virtualization Technology for Directed IO (Intelreg Vt-d) Disabled Enabled(OpenStack Numa Placement only)

Intel Hyper-Threading Technology (HTT) Enabled Disabled

MLC Streamer Enabled Enabled

MLC Spatial Prefetcher Enabled Enabled

DCU Instruction Prefetcher Enabled Enabled

Direct Cache Access (DCA) Enabled Enabled

CPU Power and Performance Policy Performance Performance

Intel Turbo boost Enabled Off

Memory RAS and Performance Configuration -gt Numa Optimized Enabled Enabled

Intelreg ONP Server Reference ArchitectureSolutions Guide

16

512 Operating System Installation and ConfigurationFollowing are some generic instructions for installing and configuring the operating system Other ways of installing the operating system are not described in this solutions guide such as network installation PXE boot installation USB key installation etc

5121 Getting the Fedora 20 DVD

1 Download the 64-bit Fedora 20 DVD (not Fedora 20 Live Media) from the following site

httpfedoraprojectorgenget-fedoraformats

or from direct URL

httpdownloadfedoraprojectorgpubfedoralinuxreleases20Fedorax86_64isoFedora-20-x86_64-DVDiso

2 Burn the ISO file to DVD and create an installation disk

5122 Fedora 20 Installation

Use the DVD to install Fedora 20 During the installation click Software selection then choose the following

1 C Development Tool and Libraries

2 Development Tools

Also create a user stack and check the box Make this user administrator during the installation The user stack is used in OpenStack installation

Note Please make sure to download and use the onps_server_1_2targz tarball Start with the README file Yoursquoll get instructions on how to use Intelrsquos scripts to automate most of the installation steps described in this section and this saves you time When using Intelrsquos scripts you can jump to Section 54 after installing OpenDaylight based on the instructions described in Section 621

5123 Additional Packages Installation and Upgrade

Some packages are not installed with the standard Fedora 20 installation but are required by Intelreg Open Network Platform Software (ONPS) components These packages should be installed by the user

git ntp patch socat python-passlib libxslt-devel libffi-devel fuse-devel gluster python-cliff

ONPS supports Fedora kernel 3156 which is newer than native Fedora 20 kernel 31110 To upgrade to 3156 follow these steps

1 Download kernel packages

wget httpskojipkgsfedoraprojectorgpackageskernel3156200fc20x86_64 kernel-3156-200fc20x86_64rpm

wget httpskojipkgsfedoraprojectorgpackageskernel3156200fc20x86_64 kernel-devel-3156-200fc20x86_64rpm

wget httpskojipkgsfedoraprojectorgpackageskernel3156200fc20x86_64 kernel-modules-extra-3156-200fc20x86_64rpm

17

Intelreg ONP Server Reference ArchitectureSolutions Guide

2 Install kernel packages

rpm -i kernel-3156-200fc20x86_64rpmrpm -i kernel-devel-3156-200fc20x86_64rpmrpm -i kernel-modules-extra-3156-200fc20x86_64rpm

3 Reboot system to allow booting into 3156 kernel

Note ONPS depends on libraries provided by your Linux distribution As such it is recommended that you regularly update your Linux distribution with the latest bug fixes and security patches to reduce the risk of security vulnerabilities in your systems

After installing the required packages the operating system should be updated with the following command

yum update -y

This command upgrades to the latest kernel that Fedora supports In order to maintain kernel version (3156) the yum configuration file needs modified with this command

echo exclude=kernel gtgt etcyumconf

before running yum update

After the update completes the system needs to be rebooted

5124 Disable and Enable Services

For OpenStack the following services were disabled selinux firewall and NetworkManager Run the following commands

sed -i sSELINUX=enforcingSELINUX=disabledg etcselinuxconfig systemctl disable firewalldservicesystemctl disable NetworkManagerservice

The following services should be enabled ntp sshd and network Run the following commands

systemctl enable ntpdservicesystemctl enable ntpdateservicesystemctl enable sshdservicechkconfig network on

It is important to keep the timing synchronized between all nodes It is also necessary to use a known NTP server for all nodes Users can edit etcntpconf to add a new server and remove default servers The following example replaces a default NTP server with a local NTP server 100012 and comments out other default servers

sed -i sserver 0fedorapoolntporg iburstserver 100012g etcntpconfsed -i sserver 1fedorapoolntporg iburst server 1fedorapoolntporg iburst g etcntpconfsed -i sserver 2fedorapoolntporg iburst server 2fedorapoolntporg iburst g etcntpconfsed -i sserver 3fedorapoolntporg iburst server 3fedorapoolntporg iburst g etcntpconf

Intelreg ONP Server Reference ArchitectureSolutions Guide

18

52 Controller Node SetupThis section describes the controller node setup It is assumed that the user successfully followed the operating system installation and configuration sections

Note Make sure to download and use the onps_server_1_2targz tarball Start with the README file Yoursquoll get instructions on how to use Intelrsquos scripts to automate most of the installation steps described in this section and this saves you time

521 OpenStack (Juno)This section documents features and limitations that are supported with the Intelreg DPDK Accelerated vSwitch and OpenStack Juno

5211 Network Requirements

General

At least two networks are required to build OpenStack infrastructure in a lab environment One network is used to connect all nodes for OpenStack management (management network) and the other one is a private network exclusively for an OpenStack internal connection (tenant network) between instances (or virtual machines)

One additional network is required for Internet connectivity as installing OpenStack requires pulling packages from various sourcesrepositories on the Internet

Some users might want to have Internet andor external connectivity for OpenStack instances (virtual machines) In this case an optional network can be used

The assumption is that the targeting OpenStack infrastructure contains multiple nodes one is controller node and one or more are compute node(s)

Network Configuration Example

The following is an example of how to configure networks for OpenStack infrastructure The example uses four network interfaces as follows

bull ens2f1 For Internet network - Used to pull all necessary packagespatches from repositories on the Internet configured to obtain a DHCP address

bull ens2f0 For Management network - Used to connect all nodes for OpenStack management configured to use network 10110016

bull p1p1 For Tenant network - Used for OpenStack internal connections for virtual machines configured with no IP address

bull p1p2 For Optional External network - Used for virtual machine Internetexternal connectivity configured with no IP address This interface is only in the Controller node if external network is configured For Compute node this interface is not needed

Note that among these interfaces interface for virtual network (in this example p1p1) must be an 82599 port because it is used for DPDK and Intelreg DPDK Accelerated vSwitch Also note that a static IP address should be used for interface of management network

In Fedora 20 the network configuration files are located at

etcsysconfignetwork-scripts

19

Intelreg ONP Server Reference ArchitectureSolutions Guide

To configure a network on the host system edit the following network configuration files

ifcfg-ens2f1 DEVICE=ens2f1TYPE=Ethernet ONBOOT=yes BOOTPROTO=dhcp

ifcfg-ens2f0DEVICE=ens2f0TYPE=EthernetONBOOT=yesBOOTPROTO=staticIPADDR=10111211NETMASK=25525500

ifcfg-p1p1DEVICE=p1p1TYPE=Ethernet ONBOOT=yes BOOTPROTO=none

ifcfg-p1p2DEVICE=p1p2TYPE=Ethernet ONBOOT=yes BOOTPROTO=none

Note Do not configure the IP address for p1p1 (10 Gbs interface) otherwise DPDK does not work when binding the driver during OpenStack Neutron installation

Note 10111211 and 25525500 are static IP address and net mask to the management network It is necessary to have static IP address on this subnet The IP address 10111211 is just an example

5212 Storage Requirements

By default DevStack uses blocked storage (Cinder) with a volume group stack-volumes If not specified stack-volumes is created with 10 Gbs space from a local file system Note that stack-volumes is the name for the volume group not more than 1 volume

The following example shows how to use spare local disks devsdb and devsdc to form stack-volumes on a controller node by running the following commands

pvcreate devsdbpvcreate devsdcvgcreate stack-volumes devsdb devsdc

5213 OpenStack Installation Procedures

General

DevStack is used to deploy OpenStack in this example The following procedure uses an actual example of an installation performed in an Intel test lab consisting of one controller node (controller) and one compute node (compute)

Controller Node Installation Procedures

The following example uses a host for controller node installation with the following

bull Hostname sdnlab-k01

bull Internet network IP address Obtained from DHCP server

Intelreg ONP Server Reference ArchitectureSolutions Guide

20

bull OpenStack Management IP address 1011121

bull Userpassword stackstack

Root User Actions

Login as su or root user and perform the following

1 Add stack user to sudoer list

echo stack ALL=(ALL) NOPASSWD ALL gtgt etcsudoers

2 Edit etclibvirtqemuconf add or modify with the following lines

cgroup_controllers = [ cpu devices memory blkio cpusetcpuacct ]

cgroup_device_acl = [devnull devfull devzero devrandom devurandom devptmx devkvm devkqemu devrtc devhpet devnettun mnthuge devvhost-net]

hugetlbs_mount = mnthuge

3 Restart libvirt service and make sure libvird is active

systemctl restart libvirtdservicesystemctl status libvirtdservice

Stack User Actions

1. Log in as the stack user.

2. Configure the appropriate proxies (yum, http, https, and git) for package installation, and make sure these proxies are functional. Note that on the controller node, localhost and its IP address should be included in the no_proxy setup (for example, export no_proxy=localhost,10.11.12.1). An illustrative proxy setup is shown below.
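A minimal proxy setup for the stack user might look like the following; the proxy host and port are placeholders and must be replaced with the values for your environment:

# Illustrative proxy settings (replace <proxy-host>:<port> with your site proxy)
export http_proxy=http://<proxy-host>:<port>
export https_proxy=http://<proxy-host>:<port>
export no_proxy=localhost,10.11.12.1
# Point yum and git at the same proxy
echo "proxy=http://<proxy-host>:<port>" | sudo tee -a /etc/yum.conf
git config --global http.proxy http://<proxy-host>:<port>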

3. Download the Intel® DPDK Accelerated vSwitch patches for OpenStack.

The file openstack_ovdk.l.0.2-907.zip contains the necessary patches for OpenStack; they are currently not native to OpenStack. The file can be downloaded from:

https://01.org/sites/default/files/page/openstack_ovdk.l.0.2-907.zip

Place the file in the /home/stack directory and unzip it. Three patch files, devstack.patch, nova.patch, and neutron.patch, will be present after unzipping:

cd /home/stack
wget https://01.org/sites/default/files/page/openstack_ovdk.l.0.2-907.zip
unzip openstack_ovdk.l.0.2-907.zip

4. Download the DevStack source:

git clone https://github.com/openstack-dev/devstack.git

5. Check out the tested DevStack commit and apply the Intel® DPDK Accelerated vSwitch patch:

cd /home/stack/devstack
git checkout d6f700db33aeab68916156a98971aef8cfa53a2e
patch -p1 < /home/stack/devstack.patch


6. Download and patch Nova and Neutron:

sudo mkdir /opt/stack
sudo chown stack:stack /opt/stack
cd /opt/stack
git clone https://github.com/openstack/nova.git
git clone https://github.com/openstack/neutron.git
cd /opt/stack/nova
git checkout b7738bfb6c2f271d047e8f20c0b74ef647367111
patch -p1 < /home/stack/nova.patch

7. Create the local.conf file in /home/stack/devstack.

8. Pay attention to the following in the local.conf file:

a. Use Rabbit for messaging services (Rabbit is on by default). In the past, Fedora only supported QPID for OpenStack; now it only supports Rabbit.

b. Explicitly disable the Nova compute service on the controller, because it is enabled by default:

disable_service n-cpu

c. To use Open vSwitch, specify it in the ML2 plug-in configuration:

Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch

d. Explicitly disable tenant tunneling and enable tenant VLANs, because tunneling is used by default:

ENABLE_TENANT_TUNNELS=False
ENABLE_TENANT_VLANS=True

e. A sample local.conf file for the controller node follows:

# Controller node
[[local|localrc]]

FORCE=yes
ADMIN_PASSWORD=password
MYSQL_PASSWORD=password
DATABASE_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=no-token-password
HORIZON_PASSWORD=password
RABBIT_PASSWORD=password

disable_service n-net
disable_service n-cpu
enable_service q-svc
enable_service q-agt
enable_service q-dhcp
enable_service q-l3
enable_service q-meta
enable_service neutron
enable_service horizon

DEST=/opt/stack
LOGFILE=$DEST/stack.sh.log
SCREEN_LOGDIR=$DEST/screen
SYSLOG=True
LOGDAYS=1

HOST_IP_IFACE=ens2f0
PUBLIC_INTERFACE=p1p2
VLAN_INTERFACE=p1p1
FLAT_INTERFACE=p1p1


Q_AGENT=openvswitch
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch
Q_ML2_PLUGIN_TYPE_DRIVERS=vlan,flat,local

ENABLE_TENANT_TUNNELS=False
ENABLE_TENANT_VLANS=True
PHYSICAL_NETWORK=physnet1
ML2_VLAN_RANGES=physnet1:1000:1010
OVS_PHYSICAL_BRIDGE=br-p1p1
MULTI_HOST=True

[[post-config|$NOVA_CONF]]
[DEFAULT]
firewall_driver=nova.virt.firewall.NoopFirewallDriver
novncproxy_host=0.0.0.0
novncproxy_port=6080

9. Install DevStack:

cd /home/stack/devstack
./stack.sh

10. For a successful installation, the following appears at the end of the screen output:

stack.sh completed in XXX seconds

where XXX is the number of seconds.

11. For the controller node only: add the physical port(s) to the bridge(s) created by the DevStack installation. The following example configures the two bridges br-p1p1 (for the virtual network) and br-ex (for the external network):

sudo ovs-vsctl add-port br-p1p1 p1p1
sudo ovs-vsctl add-port br-ex p1p2

12. Make sure the proper VLANs are created in the switch connecting physical port p1p1. For example, the previous local.conf specifies a VLAN range of 1000-1010, so matching VLANs 1000 to 1010 should be configured in that switch.


5.3 Compute Node Setup

This section describes how to complete the setup of the compute nodes. It is assumed that the user has successfully completed the BIOS settings and operating system installation and configuration sections.

Note: Make sure to download and use the onps_server_1_2.tar.gz tarball. Start with the README file, which gives instructions on how to use Intel's scripts to automate most of the installation steps described in this section, saving you time.

5.3.1 Host Configuration

5.3.1.1 Using DevStack to Deploy vSwitch and OpenStack Components

General

Deploying OpenStack and the Intel® DPDK Accelerated vSwitch using DevStack on a compute node follows the same procedures as on the controller node. Differences include:

• Required services are nova compute, neutron agent, and Rabbit.

• Intel® DPDK Accelerated vSwitch is used in place of Open vSwitch for the neutron agent.

Compute Node Installation Example

The following example uses a host for the compute node installation with the following:

• Hostname: sdnlab-k02

• Lab network IP address: obtained from DHCP server

• OpenStack Management IP address: 10.11.12.2

• User/password: stack/stack

Note the following:

• no_proxy setup: localhost and its IP address should be included in the no_proxy setup. In addition, the hostname and IP address of the controller node should also be included. For example:

export no_proxy=localhost,10.11.12.2,sdnlab-k01,10.11.12.1

• Differences in the local.conf file:

- The service host is the controller, as are the other OpenStack servers such as MySQL, Rabbit, Keystone, and Image. Therefore they should be spelled out. Using the controller node example from the previous section, the service host and its IP address should be:

SERVICE_HOST_NAME=sdnlab-k01
SERVICE_HOST=10.11.12.1

- The only OpenStack services required on compute nodes are messaging, nova compute, and neutron agent, so the local.conf might look like:

disable_all_services
enable_service rabbit
enable_service n-cpu
enable_service q-agt


- The user has the option to use ovdk or openvswitch for the neutron agent:

Q_AGENT=ovdk

or

Q_AGENT=openvswitch

Note: For openvswitch, the user can specify regular or accelerated Open vSwitch (accelerated OVS). If accelerated OVS is used, the following setting should be added:

OVS_DATAPATH_TYPE=netdev

Note: If both are specified in the same local.conf file, the later one overwrites the previous one.

- For the OVDK and accelerated OVS huge pages settings, specify the number of huge pages to be allocated and the mounting point (the default is /mnt/huge):

OVDK_NUM_HUGEPAGES=8192

or

OVS_NUM_HUGEPAGES=8192

- For this release, Intel uses specific versions of OVDK and accelerated OVS from their respective repositories. Specify the following in the local.conf file if OVDK or accelerated OVS is used:

OVDK_GIT_TAG=6210bb0a6139b20283de115f87aa7a381b04670f
OVS_GIT_TAG=b35839f3855e3b812709c6ad1c9278f498aa9935

- Binding the physical port to the bridge is done through the following line in local.conf. For example, to bind port p1p1 to bridge br-p1p1, use:

OVS_PHYSICAL_BRIDGE=br-p1p1

- A sample local.conf file for a compute node with the ovdk agent follows:

# Compute node
[[local|localrc]]

FORCE=yes
MULTI_HOST=True

HOST_NAME=$(hostname)
HOST_IP=10.11.12.2
HOST_IP_IFACE=ens2f0
SERVICE_HOST_NAME=10.11.12.1
SERVICE_HOST=10.11.12.1

MYSQL_HOST=$SERVICE_HOST
RABBIT_HOST=$SERVICE_HOST

GLANCE_HOST=$SERVICE_HOST
GLANCE_HOSTPORT=$SERVICE_HOST:9292
KEYSTONE_AUTH_HOST=$SERVICE_HOST
KEYSTONE_SERVICE_HOST=$SERVICE_HOST

ADMIN_PASSWORD=password
MYSQL_PASSWORD=password
DATABASE_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=no-token-password
HORIZON_PASSWORD=password
RABBIT_PASSWORD=password

disable_all_services
enable_service rabbit


enable_service n-cpu
enable_service q-agt

DEST=/opt/stack
LOGFILE=$DEST/stack.sh.log
SCREEN_LOGDIR=$DEST/screen
SYSLOG=True
LOGDAYS=1

Q_AGENT=ovdk
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch
Q_ML2_PLUGIN_TYPE_DRIVERS=vlan
OVDK_NUM_HUGEPAGES=8192
OVDK_GIT_TAG=6210bb0a6139b20283de115f87aa7a381b04670f

ENABLE_TENANT_TUNNELS=False
ENABLE_TENANT_VLANS=True
ML2_VLAN_RANGES=physnet1:1000:1010
PHYSICAL_NETWORK=physnet1
OVS_PHYSICAL_BRIDGE=br-p1p1

[[post-config|$NOVA_CONF]]
[DEFAULT]
firewall_driver=nova.virt.firewall.NoopFirewallDriver
vnc_enabled=True
vncserver_listen=0.0.0.0
vncserver_proxyclient_address=$HOST_IP

- A sample local.conf file for a compute node with the accelerated OVS agent follows:

# Compute node
[[local|localrc]]

FORCE=yes
MULTI_HOST=True

HOST_NAME=$(hostname)
HOST_IP=10.11.12.2
HOST_IP_IFACE=ens2f0

SERVICE_HOST_NAME=sdnlab-k01
SERVICE_HOST=10.11.12.1

MYSQL_HOST=$SERVICE_HOST
RABBIT_HOST=$SERVICE_HOST

GLANCE_HOST=$SERVICE_HOST
GLANCE_HOSTPORT=$SERVICE_HOST:9292
KEYSTONE_AUTH_HOST=$SERVICE_HOST
KEYSTONE_SERVICE_HOST=$SERVICE_HOST

ADMIN_PASSWORD=password
MYSQL_PASSWORD=password
DATABASE_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=no-token-password
HORIZON_PASSWORD=password
RABBIT_PASSWORD=password

disable_all_services
enable_service rabbit
enable_service n-cpu
enable_service q-agt

DEST=/opt/stack


LOGFILE=$DEST/stack.sh.log
SCREEN_LOGDIR=$DEST/screen
SYSLOG=True
LOGDAYS=1

Q_AGENT=openvswitch
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch
Q_ML2_PLUGIN_TYPE_DRIVERS=vlan
OVS_NUM_HUGEPAGES=8192
OVS_DATAPATH_TYPE=netdev
OVS_GIT_TAG=b35839f3855e3b812709c6ad1c9278f498aa9935

ENABLE_TENANT_TUNNELS=False
ENABLE_TENANT_VLANS=True
ML2_VLAN_RANGES=physnet1:1000:1010
PHYSICAL_NETWORK=physnet1
OVS_PHYSICAL_BRIDGE=br-p1p1

[[post-config|$NOVA_CONF]]
[DEFAULT]
firewall_driver=nova.virt.firewall.NoopFirewallDriver
vnc_enabled=True
vncserver_listen=0.0.0.0
vncserver_proxyclient_address=$HOST_IP

5.4 vIPS

The vIPS used is Suricata, which should be installed in a VM as an RPM package, as previously described. To configure it to run in inline mode (IPS), do the following:

1. Turn on IP forwarding:

sysctl -w net.ipv4.ip_forward=1

2. Direct all traffic from one vPort to the other through a netfilter queue:

iptables -I FORWARD -i eth1 -o eth2 -j NFQUEUE
iptables -I FORWARD -i eth2 -o eth1 -j NFQUEUE

3. Run Suricata in inline mode using the netfilter queue:

suricata -c /etc/suricata/suricata.yaml -q 0

4. Enable ARP proxying:

echo 1 > /proc/sys/net/ipv4/conf/eth1/proxy_arp
echo 1 > /proc/sys/net/ipv4/conf/eth2/proxy_arp

5.4.1 Network Configuration for non-vIPS Guests

1. Turn on IP forwarding:

sysctl -w net.ipv4.ip_forward=1

2. In the source, add the route to the sink:

route add -net 192.168.200.0/24 eth1

3. At the sink, add the route to the source:

route add -net 192.168.100.0/24 eth1


6.0 Testing the Setup

This section describes how to bring up the VMs in a compute node, connect them to the virtual network(s), and verify their functionality.

Note: Currently it is not possible to have more than one virtual network in a multi-compute-node setup, although it is possible in a single-compute-node setup.

6.1 Preparation with OpenStack

6.1.1 Deploying Virtual Machines

6.1.1.1 Default Settings

OpenStack comes with the following default settings:

• Tenant (Project): admin, demo

• Network:

- Private network (virtual network): 10.0.0.0/24

- Public network (external network): 172.24.4.0/24

• Image: cirros-0.3.1-x86_64

• Flavors: nano, micro, tiny, small, medium, large, xlarge

To deploy new instances (VMs) with different setups (such as a different VM image, flavor, or network), users must create their own. See below for details of how to create them.

To access the OpenStack dashboard, use a web browser (Firefox, Internet Explorer, or others) and the controller's IP address on the management network. For example:

http://10.11.12.1

Login information is defined in the local.conf file. In the examples that follow, password is the password for both the admin and demo users.


6.1.1.2 Customer Settings

The following examples describe how to create a custom VM image, flavor, and aggregate/availability zone using OpenStack commands. The examples assume the IP address of the controller is 10.11.12.1.

1. Create a credential file admin-cred for the admin user. The file contains the following lines:

export OS_USERNAME=admin
export OS_TENANT_NAME=admin
export OS_PASSWORD=password
export OS_AUTH_URL=http://10.11.12.1:35357/v2.0

2. Source admin-cred into the shell environment before creating the glance image, aggregate/availability zone, and flavor:

source admin-cred

3. Create an OpenStack glance image. A VM image file should be available in a location accessible by OpenStack:

glance image-create --name <image-name-to-create> --is-public=true --container-format=bare --disk-format=<format> --file=<image-file-path-name>

In the following example, the image file fedora20-x86_64-basic.qcow2 is located on an NFS share mounted at /mnt/nfs/openstack/images on the controller host. The command creates a glance image named fedora-basic in qcow2 format for public use (that is, any tenant can use this glance image):

glance image-create --name fedora-basic --is-public=true --container-format=bare --disk-format=qcow2 --file=/mnt/nfs/openstack/images/fedora20-x86_64-basic.qcow2

4. Create a host aggregate and availability zone.

First find the available hypervisors, and then use that information to create the aggregate/availability zone:

nova hypervisor-list
nova aggregate-create <aggregate-name> <zone-name>
nova aggregate-add-host <aggregate-name> <hypervisor-name>

The following example creates an aggregate named aggr-g06 with one availability zone named zone-g06; the aggregate contains one hypervisor named sdnlab-g06:

nova aggregate-create aggr-g06 zone-g06
nova aggregate-add-host aggr-g06 sdnlab-g06

5. Create a flavor. A flavor is a virtual hardware configuration for the VMs; it defines the number of virtual CPUs, the size of virtual memory, and the disk space, among other things.

The following command creates a flavor named onps-flavor with an ID of 1001, 1024 MB of virtual memory, 4 GB of virtual disk space, and 1 virtual CPU:

nova flavor-create onps-flavor 1001 1024 4 1


6.1.1.3 Example - VM Deployment

The following example describes how to use a custom VM image, flavor, and aggregate to launch a VM for the demo tenant using OpenStack commands. Again, the example assumes the IP address of the controller is 10.11.12.1.

1. Create a credential file demo-cred for the demo user. The file contains the following lines:

export OS_USERNAME=demo
export OS_TENANT_NAME=demo
export OS_PASSWORD=password
export OS_AUTH_URL=http://10.11.12.1:35357/v2.0

2. Source demo-cred into the shell environment before creating the tenant network and instance (VM):

source demo-cred

3. Create a network for the tenant demo with the following steps:

a. Get the tenant demo ID:

keystone tenant-list | grep -Fw demo

The following creates a network named net-demo for the tenant with ID 10618268adb64f17b266fd8fb83c960d:

neutron net-create --tenant-id 10618268adb64f17b266fd8fb83c960d net-demo

b. Create a subnet:

neutron subnet-create --tenant-id <demo-tenant-id> --name <subnet_name> <network-name> <net-ip-range>

The following creates a subnet named sub-demo with CIDR address 192.168.2.0/24 for network net-demo:

neutron subnet-create --tenant-id 10618268adb64f17b266fd8fb83c960d --name sub-demo net-demo 192.168.2.0/24

4. Create an instance (VM) for tenant demo with the following steps:

a. Get the name and/or ID of the image, flavor, and availability zone to be used for creating the instance:

glance image-list
nova flavor-list
nova aggregate-list
neutron net-list

b. Launch an instance (VM) using the information obtained from the previous step (a filled-in example follows step c):

nova boot --image <image-id> --flavor <flavor-id> --availability-zone <zone-name> --nic net-id=<network-id> <instance-name>

c. The new VM should be up and running in a few minutes.
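As a concrete illustration using the image, flavor, and zone created in Section 6.1.1.2 (the network ID is hypothetical and would be taken from the neutron net-list output):

nova boot --image fedora-basic --flavor onps-flavor --availability-zone zone-g06 --nic net-id=<net-demo-id> vm-demo01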

5. Log into the OpenStack dashboard using the demo user credentials, click Instances under Project in the left pane, and the new VM should show in the right pane. Click the instance name to open the Instance Details view, then click Console in the top menu to access the VM.


6.1.1.4 Local vIPS

Configuration

1. OpenStack brings up the VMs and connects them to the vSwitch.

2. The IP addresses of the VMs get configured using the DHCP server. VM1 belongs to one subnet and VM3 to a different one; VM2 has ports on both subnets.

3. Flows get programmed into the vSwitch by the OpenDaylight controller (Section 6.2).

Data Path (Numbers Matching Red Circles)

1. VM1 sends a flow to VM3 through the vSwitch.

2. The vSwitch forwards the flow to the first vPort of VM2 (the active IPS).

3. The IPS receives the flow, inspects it, and (if it is not malicious) sends it out through its second vPort.

4. The vSwitch forwards it to VM3.

Figure 6-1 Local vIPS


6.1.1.5 Remote vIPS

Configuration

1. OpenStack brings up the VMs and connects them to the vSwitch.

2. The IP addresses of the VMs get configured using the DHCP server.

Data Path (Numbers Matching Red Circles)

1. VM1 sends a flow to VM3 through the vSwitch inside compute node 1.

2. The vSwitch forwards the flow out of the first 82599 port to the first 82599 port of compute node 2.

3. The vSwitch of compute node 2 forwards the flow to the first port of the vHost, where the traffic gets consumed by VM1.

4. The IPS receives the flow, inspects it, and (provided it is not malicious) sends it out through its second vHost port into the vSwitch of compute node 2.

5. The vSwitch forwards the flow out of the second 82599 port of compute node 2 into the second 82599 port of compute node 1.

6. The vSwitch of compute node 1 forwards the flow into the vHost port of VM3, where the flow gets terminated.

Figure 6-2 Remote vIPS


6.1.2 Non-Uniform Memory Access (NUMA) Placement and SR-IOV Pass-through for OpenStack

NUMA placement was implemented as a new feature in the OpenStack Juno release. It enables an OpenStack administrator to pin guests to particular NUMA nodes for optimization. With an SR-IOV-enabled network interface card, each SR-IOV port is associated with a Virtual Function (VF); OpenStack SR-IOV pass-through enables a guest to access a VF directly.

6.1.2.1 Prepare Compute Node for SR-IOV Pass-through

To enable these features, follow these steps to configure the compute node:

1. The server hardware must support IOMMU (Intel VT-d). To check whether IOMMU is supported, run the following command; the output should show IOMMU entries:

dmesg | grep -e IOMMU

Note: IOMMU can be enabled/disabled through a BIOS setting under Advanced and then Processor.

2. Enable kernel IOMMU in grub. For Fedora 20, run these commands:

sed -i 's/rhgb quiet/rhgb quiet intel_iommu=on/g' /etc/default/grub
grub2-mkconfig -o /boot/grub2/grub.cfg

3. Install the necessary packages:

yum install -y yajl-devel device-mapper-devel libpciaccess-devel libnl-devel dbus-devel numactl-devel python-devel

4. Install libvirt v1.2.8 or newer. The following example uses v1.2.9:

systemctl stop libvirtd

yum remove libvirt
yum remove libvirtd

wget http://libvirt.org/sources/libvirt-1.2.9.tar.gz
tar zxvf libvirt-1.2.9.tar.gz

cd libvirt-1.2.9
./autogen.sh --system --with-dbus
make
make install

systemctl start libvirtd

Make sure libvirtd is running v1.2.9:

libvirtd --version

5. Install libvirt-python. The example below uses v1.2.9 to match the libvirt version:

yum remove libvirt-python

wget https://pypi.python.org/packages/source/l/libvirt-python/libvirt-python-1.2.9.tar.gz
tar zxvf libvirt-python-1.2.9.tar.gz

cd libvirt-python-1.2.9
python setup.py install


6. Modify /etc/libvirt/qemu.conf and add

/dev/vfio/vfio

to the

cgroup_device_acl list.

An example follows:

cgroup_device_acl = ["/dev/null", "/dev/full", "/dev/zero", "/dev/random", "/dev/urandom", "/dev/ptmx", "/dev/kvm", "/dev/kqemu", "/dev/rtc", "/dev/hpet", "/dev/net/tun", "/dev/vfio/vfio"]

7. Enable the SR-IOV virtual functions for an 82599 interface. The following example enables 2 VFs for interface p1p1:

echo 2 > /sys/class/net/p1p1/device/sriov_numvfs

To check that the virtual functions are enabled:

lspci -nn | grep 82599

The screen output should display the physical function and two virtual functions, similar to the illustrative sample below.
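The PCI addresses in this sample are examples only and depend on the slot in which the NIC is installed:

08:00.0 Ethernet controller [0200]: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection [8086:10fb] (rev 01)
08:10.0 Ethernet controller [0200]: Intel Corporation 82599 Ethernet Controller Virtual Function [8086:10ed] (rev 01)
08:10.2 Ethernet controller [0200]: Intel Corporation 82599 Ethernet Controller Virtual Function [8086:10ed] (rev 01)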

6.1.2.2 DevStack Configurations

In the following text, the example uses a controller with IP address 10.11.12.1 and a compute node with 10.11.12.4. The PCI device vendor ID (8086) and the product IDs of the 82599 (10fb for the physical function and 10ed for the VF) can be obtained from the output of:

lspci -nn | grep 82599

On the Controller node:

1. Edit the controller local.conf. Note that the same local.conf file of Section 5.2.1.3 is used here, adding the following:

Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch,sriovnicswitch

[[post-config|$NOVA_CONF]]
[DEFAULT]
scheduler_default_filters=RamFilter,ComputeFilter,AvailabilityZoneFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,PciPassthroughFilter,NUMATopologyFilter
pci_alias={"name":"niantic","product_id":"10ed","vendor_id":"8086"}

[[post-config|$Q_PLUGIN_CONF_FILE]]
[ml2_sriov]
supported_pci_vendor_devs = 8086:10fb 8086:10ed

2. Run stack.sh.

On the Compute node:

1. Edit /opt/stack/nova/requirements.txt and add "libvirt-python>=1.2.8":

echo "libvirt-python>=1.2.8" >> /opt/stack/nova/requirements.txt

2. Edit the compute local.conf for accelerated OVS. Note that the same local.conf file of Section 5.3.1.1 is used here.


3. Add the following:

[[post-config|$NOVA_CONF]]
[DEFAULT]
pci_passthrough_whitelist={"address":"0000:08:00.0","vendor_id":"8086","physical_network":"physnet1"}

pci_passthrough_whitelist={"address":"0000:08:10.0","vendor_id":"8086","physical_network":"physnet1"}

pci_passthrough_whitelist={"address":"0000:08:10.2","vendor_id":"8086","physical_network":"physnet1"}

4. Remove (or comment out) the following. Note that currently SR-IOV pass-through is only supported with standard OVS:

OVS_NUM_HUGEPAGES=8192
OVS_DATAPATH_TYPE=netdev
OVS_GIT_TAG=b35839f3855e3b812709c6ad1c9278f498aa9935

Run stack.sh on both the controller and compute nodes to complete the DevStack installation.

6.1.2.3 Create VM with NUMA Placement and SR-IOV

1. After stacking succeeds on both the controller and compute nodes, verify that the PCI pass-through device(s) are in the OpenStack database:

mysql -uroot -ppassword -h 10.11.12.1 nova -e 'select * from pci_devices'

2. The output should show one or more PCI device entries similar to the following:

| 2014-11-18 19:41:14 | NULL | NULL | 0 | 1 | 3 | 0000:08:10.0 | 10ed | 8086 | type-VF | pci_0000_08_10_0 | label_8086_10ed | available | phys_function: 0000:08:00.0 | NULL | NULL | 0 |

3. Next, create a flavor, for example:

nova flavor-create numa-flavor 1001 1024 4 1

where:

flavor name = numa-flavor
ID = 1001
virtual memory = 1024 MB
virtual disk size = 4 GB
number of virtual CPUs = 1

4. Modify the flavor for NUMA placement with PCI pass-through:

nova flavor-key 1001 set pci_passthrough:alias=niantic:1 hw:numa_nodes=1 hw:numa_cpus.0=0 hw:numa_mem.0=1024

5. To show detailed information about the flavor:

nova flavor-show 1001

6. Create a VM named numa-vm1 with the flavor numa-flavor under the default project demo. Note that the following example assumes an image fedora-basic and an availability zone zone-04 are already in place (see Section 6.1.1.2) and that private is the default network for the demo project:

nova boot --image <image-id> --flavor <flavor-id> --availability-zone <zone-name> --nic net-id=<network-id> numa-vm1

where numa-vm1 is the name of the VM instance to be booted.
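A filled-in invocation using the names from this section (the network ID is hypothetical and would be taken from the neutron net-list output):

nova boot --image fedora-basic --flavor numa-flavor --availability-zone zone-04 --nic net-id=<private-net-id> numa-vm1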


Access the VM from OpenStack Horizon; the new VM shows two virtual network interfaces. The interface backed by an SR-IOV VF should show a name of ensX, where X is a number (for example, ens5). If a DHCP server is available for the physical interface (p1p1 in this example), the VF gets an IP address automatically; otherwise, users can assign an IP address to the interface in the same way as for a standard network interface.

To verify network connectivity through a VF, users can set up two compute hosts and create a VM on each node. After obtaining IP addresses, the VMs should communicate with each other as over a normal network; a minimal check is sketched below.
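From inside one of the VMs, the check might look like the following (the interface name and the peer VM's IP address are examples only):

# Confirm the SR-IOV-backed interface is present and has an address
ip addr show ens5
# Ping the VM on the other compute host across the VF-attached network
ping -c 4 192.168.2.11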

6.2 Using OpenDaylight

This section describes how to download, install, and set up an OpenDaylight controller.

6.2.1 Preparing the OpenDaylight Controller

1. Download the pre-built OpenDaylight Helium-SR1 distribution:

wget http://nexus.opendaylight.org/content/repositories/opendaylight.release/org/opendaylight/integration/distribution-karaf/0.2.1-Helium-SR1/distribution-karaf-0.2.1-Helium-SR1.tar.gz

2. Extract the archive and cd into it:

tar xf distribution-karaf-0.2.1-Helium-SR1.tar.gz
cd distribution-karaf-0.2.1-Helium-SR1

3. Use the bin/karaf executable to start the Karaf shell, as shown below.
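From the distribution directory:

./bin/karaf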


4. Install the required features from the Karaf console (an illustrative feature set is shown after the note below).

Note: Karaf might take a long time to start, or the feature installation might fail, if the host does not have network access. You'll need to set up the appropriate proxy settings.
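A commonly used feature set for OVSDB/OpenStack integration on Helium is shown below as an illustration only; the exact set depends on the use case and ODL release:

opendaylight-user@root> feature:install odl-base-all odl-aaa-authn odl-restconf odl-nsf-all odl-adsal-northbound odl-mdsal-apidocs odl-ovsdb-openstack odl-ovsdb-northbound odl-dlux-core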

6.3 Border Network Gateway

This section describes how to install and run a Border Network Gateway (BNG) on a compute node that has been prepared as described in Section 5.1 and Section 5.3. The example interface names from those sections are also used here. For simplicity, the BNG uses the handle_none configuration mode, which makes it work as an L2 forwarding engine. The BNG is more complex than this; users interested in exploring more of its capabilities should read https://01.org/intel-data-plane-performance-demonstrators/quick-overview.

The setup to test the functionality of the vBNG follows.


6.3.1 Installation and Configuration Inside the VM

1. Execute the following command:

yum -y update

2. Disable SELinux:

setenforce 0
vi /etc/selinux/config

and change the setting to SELINUX=disabled.

3. Disable the firewall:

systemctl disable firewalld.service
reboot

4. Edit the grub default configuration:

vi /etc/default/grub

Add hugepages to it:

... noirqbalance intel_idle.max_cstate=0 processor.max_cstate=0 ipv6.disable=1 default_hugepagesz=1G hugepagesz=1G hugepages=2 isolcpus=1,2,3,4

5. Rebuild the grub configuration and reboot the system:

grub2-mkconfig -o /boot/grub2/grub.cfg
reboot

6. Verify that hugepages are available in the VM:

cat /proc/meminfo
HugePages_Total:    2
HugePages_Free:     2
Hugepagesize:       1048576 kB

7. Add the following to the end of the ~/.bashrc file:

export RTE_SDK=/root/dpdk
export RTE_TARGET=x86_64-native-linuxapp-gcc
export OVS_DIR=/root/ovs
export RTE_UNBIND=$RTE_SDK/tools/dpdk_nic_bind.py
export DPDK_DIR=$RTE_SDK
export DPDK_BUILD=$DPDK_DIR/$RTE_TARGET

8. Log in again or source that file:

source ~/.bashrc

9. Install DPDK:

git clone http://dpdk.org/git/dpdk
cd dpdk
git checkout v1.7.1
make install T=$RTE_TARGET
modprobe uio
insmod $RTE_SDK/$RTE_TARGET/kmod/igb_uio.ko


10. Check the PCI addresses of the 82599 cards:

lspci | grep Network
00:04.0 Ethernet controller: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection (rev 01)
00:05.0 Ethernet controller: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection (rev 01)
00:06.0 Ethernet controller: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection (rev 01)
00:07.0 Ethernet controller: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection (rev 01)

11. Make sure that the correct PCI addresses are listed in the script bind_to_igb_uio.sh; a minimal sketch of such a script is shown below.
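This sketch assumes the four PCI addresses reported above and the RTE_SDK/RTE_TARGET variables from step 7; adjust the addresses to your own lspci output:

#!/bin/bash
# bind_to_igb_uio.sh - bind the 82599 ports to the DPDK igb_uio driver
$RTE_SDK/tools/dpdk_nic_bind.py -b igb_uio 00:04.0 00:05.0 00:06.0 00:07.0
# Show which devices are now using a DPDK-compatible driver
$RTE_SDK/tools/dpdk_nic_bind.py --status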

12. Download the BNG package:

wget https://01.org/sites/default/files/downloads/intel-data-plane-performance-demonstrators/dppd-bng-v013.zip

13. Extract the DPPD BNG sources:

unzip dppd-bng-v013.zip

14. Build the BNG DPPD application:

yum -y install ncurses-devel
cd dppd-BNG-v013
make

15. Refer to Section 6.3.3, "Extra Preparations on the Compute Node," before running the BNG application in the VM inside the compute node.

16. Make sure that the application starts:

./build/dppd -f config/handle_none.cfg

The handle_none configuration passes all traffic straight through between the ports, which is essentially similar to the L2 forwarding test. The config directory contains additional, more complex BNG configurations and Pktgen scripts. Additional BNG-specific workloads can be found in the dppd-BNG-v013/pktgen-scripts directory.

Following is a sample graphic of the BNG running in a VM with 2 ports


Exit the application by pressing ESC or Ctrl-C.

Refer to Section 6.3.2 regarding installing and running the software traffic generator.

For a sanity check, users can use the pktgen wrapper script onps_pktgen-64bytes-UDP-2ports.sh to run Pktgen (on its dedicated server) in order to test the handle_none throughput for two physical and two virtual ports. You'll need to update the PKTGEN_DIR variable at the top of the file to point to the right directory which, referring to Section 6.3.2, is the following:

PKTGEN_DIR=/home/stack/git/Pktgen-DPDK
pktgen-64bytes.sh

6.3.2 Installation and Configuration of the Back-to-Back Host (Packet Generator)

The back-to-back host can be any Intel® Xeon® processor-based system, or it can be any compute node that has been prepared using the instructions in Section 5.1 and Section 5.3. For simplicity, Intel assumes the latter is the case. Also assume that the git directory for the stack user is /home/stack/git.

1. In the git directory, get the source from GitHub:

git clone https://github.com/Pktgen/Pktgen-DPDK.git
cd Pktgen-DPDK

2. An extra package must be installed for Pktgen to compile correctly:

yum -y install libpcap-devel

Pktgen comes with its own distribution of the DPDK sources, and this bundled version of DPDK must be used. Note that it contains some Wind River-specific helper libraries, which Pktgen depends on, that are not in the default DPDK distribution.

3. The $RTE_TARGET variable must be set to a specific value; otherwise these libraries will not build:

cd
vi .bashrc

Add the following three lines to the end:

export RTE_SDK=$HOME/Pktgen-DPDK/dpdk
export RTE_TARGET=x86_64-pktgen-linuxapp-gcc
export PKTGEN_DIR=$HOME/Pktgen-DPDK

4. Log in again or execute the following command:

source ~/.bashrc

5. Build the basic DPDK libraries and extra helpers:

cd $RTE_SDK
make install T=$RTE_TARGET

6. Build Pktgen:

cd examples/pktgen
make

7. Adapt the dpdk_nic_bind.py invocation to the actual NICs in use, so that both interfaces are bound to igb_uio and DPDK can use them. See the details of the command that follows, and the illustrative bind example after it:

./tools/dpdk_nic_bind.py --status
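For example, if the --status output shows the two 82599 ports at 0000:04:00.0 and 0000:04:00.1 (addresses are illustrative), they could be bound with:

# Load the DPDK userspace I/O driver, then bind both ports to it
modprobe uio
insmod $RTE_SDK/$RTE_TARGET/kmod/igb_uio.ko
./tools/dpdk_nic_bind.py -b igb_uio 0000:04:00.0 0000:04:00.1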

8. Use onps_pktgen-64bytes-UDP-2ports.sh from onps_server_1_2.tar.gz.


9. Now run the script as root, after the compute node has been set up as in Section 6.3.3, the BNG VM has been prepared as in Section 6.3.1, and the BNG has been started inside the VM.

6.3.3 Extra Preparations on the Compute Node

1. Do the following as the stack user:

cd /home/stack/devstack
vi local.conf

2. Comment out the following:

PHYSICAL_NETWORK=physnet1
OVS_PHYSICAL_BRIDGE=br-p1p1

and, at the same time, add the following line right below the commented-out ones:

OVS_BRIDGE_MAPPINGS=default:br-p1p1,physnet1:br-p1p2

3. Run the following again as the stack user:

./unstack.sh

./stack.sh

This causes both physical interfaces to come up and get bound to DPDK. Also, a bridge is created on top of each of these interfaces:

ovs-vsctl show
b52bd3ed-0f6c-45b9-ace1-846d901bed64
    Bridge br-p1p1
        Port br-p1p1
            Interface br-p1p1
                type: internal
        Port p1p1
            Interface p1p1
                type: dpdkphy
                options: {port=0}
        Port phy-br-p1p1
            Interface phy-br-p1p1
                type: patch
                options: {peer=int-br-p1p1}
    Bridge br-int
        fail_mode: secure
        Port int-br-p1p2
            Interface int-br-p1p2
                type: patch
                options: {peer=phy-br-p1p2}
        Port int-br-p1p1
            Interface int-br-p1p1
                type: patch
                options: {peer=phy-br-p1p1}
        Port br-int
            Interface br-int
                type: internal
    Bridge br-p1p2
        Port phy-br-p1p2
            Interface phy-br-p1p2
                type: patch
                options: {peer=int-br-p1p2}
        Port p1p2
            Interface p1p2
                type: dpdkphy
                options: {port=1}
        Port br-p1p2
            Interface br-p1p2
                type: internal

4. Move the p1p2 physical port under the same bridge as p1p1:

ovs-vsctl del-port p1p2
ovs-vsctl add-port br-p1p1 p1p2 -- set Interface p1p2 type=dpdkphy option:port=1

5. Stop the OpenStack agent:

./rejoin-stack.sh
ctrl-a 1
ctrl-c
ctrl-a d

6. Add the dpdkvhost interfaces for the VM:

ovs-vsctl --no-wait add-port br-p1p1 port3 -- set Interface port3 type=dpdkvhost ofport_request=3
ovs-vsctl --no-wait add-port br-p1p1 port4 -- set Interface port4 type=dpdkvhost ofport_request=4

7. Find the OpenFlow port numbers of the interfaces:

ovs-ofctl show br-p1p1

The output should be similar to the following. Note the number on the left of each interface; it is the OpenFlow port number used in the flows programmed below.

OFPT_FEATURES_REPLY (xid=0x2): dpid:0000286031010000
n_tables:254, n_buffers:256
capabilities: FLOW_STATS TABLE_STATS QUEUE_STATS ARP_MATCH_IP
actions: OUTPUT SET_VLAN_VID SET_VLAN_PCP STRIP_VLAN SET_DL_SRC SET_DL_DST SET_NW_SRC SET_NW_DST SET_TP_SRC SET_TP_DST
 1(phy-br-p1p1): addr:9e:ae:92:25:3c:c1  config: 0  state: 0  speed: 0 Mbps now, 0 Mbps max
 2(p1p2): addr:9e:ae:92:25:3c:c1  config: 0  state: 0  speed: 0 Mbps now, 0 Mbps max
 3(port3): addr:9e:ae:92:25:3c:c1  config: 0  state: 0  speed: 0 Mbps now, 0 Mbps max
 4(port4): addr:49:04:ff:7f:00:00  config: 0  state: 0  speed: 0 Mbps now, 0 Mbps max
 16(p1p1): addr:49:04:ff:7f:00:00  config: 0  state: 0  speed: 0 Mbps now, 0 Mbps max
 LOCAL(br-p1p1): addr:9e:ae:92:25:3c:c1  config: 0  state: 0  speed: 0 Mbps now, 0 Mbps max
OFPT_GET_CONFIG_REPLY (xid=0x4): frags=normal miss_send_len=0

8. Clean up the flow table of the bridge:

ovs-ofctl del-flows br-p1p1

9. Program the flows so that each physical interface forwards packets to a dpdkvhost interface and vice versa:

ovs-ofctl add-flow br-p1p1 in_port=16,dl_type=0x0800,idle_timeout=0,action=output:3
ovs-ofctl add-flow br-p1p1 in_port=3,dl_type=0x0800,idle_timeout=0,action=output:16
ovs-ofctl add-flow br-p1p1 in_port=4,dl_type=0x0800,idle_timeout=0,action=output:2
ovs-ofctl add-flow br-p1p1 in_port=2,dl_type=0x0800,idle_timeout=0,action=output:4


10. Users can now spawn their vBNG:

qemu-kvm -cpu host -enable-kvm -m 4096 -smp 4,cores=4,threads=1,sockets=1 -name VM1 -hda <path to the VM image file> -mem-path /dev/hugepages -mem-prealloc -vnc :2 -daemonize -net nic,model=virtio,macaddr=00:1e:77:68:09:fd -net tap,ifname=tap1,script=no,downscript=no -netdev type=tap,id=net1,script=no,downscript=no,ifname=port3,vhost=on -device virtio-net-pci,netdev=net1,mac=00:00:01:00:00:01,csum=off,gso=off,guest_tso4=off,guest_tso6=off,guest_ecn=off -netdev type=tap,id=net2,script=no,downscript=no,ifname=port4,vhost=on -device virtio-net-pci,netdev=net2,mac=00:00:01:00:00:02,csum=off,gso=off,guest_tso4=off,guest_tso6=off,guest_ecn=off


Appendix A Additional OpenDaylight Information

This section describes how OpenDaylight can be used in a multi-node setup. Two hosts are used: one running OpenDaylight, the OpenStack controller + compute, and OVS; the second host is a compute node. This section describes how to create a VXLAN tunnel, create VMs, and ping from one VM to another.

Note: Due to a known defect in ODL (https://bugs.opendaylight.org/show_bug.cgi?id=2469), the multi-node setup could not be verified.

Following is a sample local.conf for the OpenDaylight host:

[[local|localrc]]
FORCE=yes

HOST_NAME=<name of this machine>
HOST_IP=<ip of this machine>
HOST_IP_IFACE=<mgmt ip isolated from internet>

PUBLIC_INTERFACE=<isolated IP could be same as HOST_IP_IFACE>
VLAN_INTERFACE=
FLAT_INTERFACE=

ADMIN_PASSWORD=password
MYSQL_PASSWORD=password
DATABASE_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=no-token-password
HORIZON_PASSWORD=password
RABBIT_PASSWORD=password

disable_service n-net
disable_service n-cpu

enable_service q-svc
enable_service q-agt
enable_service q-dhcp
enable_service q-l3
enable_service q-meta
enable_service neutron
enable_service horizon

LOGFILE=$DEST/stack.sh.log
SCREEN_LOGDIR=$DEST/screen
SYSLOG=True
LOGDAYS=1

# ODL start
Q_HOST=$HOST_IP
enable_service odl-server
enable_service odl-compute
ODL_MGR_IP=10.11.10.7
ENABLED_SERVICES+=,n-api,n-crt,n-obj,n-cpu,n-cond,n-sch,n-novnc,n-cauth,n-cauth,nova
ENABLED_SERVICES+=,cinder,c-api,c-vol,c-sch,c-bak


Q_PLUGIN=ml2

Q_AGENT=openvswitch
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch,opendaylight
Q_ML2_PLUGIN_TYPE_DRIVERS=vlan,flat,local
Q_ML2_TENANT_NETWORK_TYPE=vxlan

ENABLE_TENANT_TUNNELS=True
ENABLE_TENANT_VLANS=True

PHYSICAL_NETWORK=physnet1
ML2_VLAN_RANGES=physnet1:1000:1010
OVS_PHYSICAL_BRIDGE=br-

MULTI_HOST=True

[[post-config|$NOVA_CONF]]

# disable nova security groups
[DEFAULT]
firewall_driver=nova.virt.firewall.NoopFirewallDriver
novncproxy_host=0.0.0.0
novncproxy_port=6080

Here is a sample local.conf for the Compute Node:

[[local|localrc]]

FORCE=yes
MULTI_HOST=True

HOST_NAME=<name of this machine>
HOST_IP=<ip of this machine>
HOST_IP_IFACE=<isolated interface>
SERVICE_HOST_NAME=<name of the controller machine>
SERVICE_HOST=<ip of controller machine>
Q_HOST=$SERVICE_HOST

MYSQL_HOST=$SERVICE_HOST
RABBIT_HOST=$SERVICE_HOST

GLANCE_HOST=$SERVICE_HOST
GLANCE_HOSTPORT=$SERVICE_HOST:9292
KEYSTONE_AUTH_HOST=$SERVICE_HOST
KEYSTONE_SERVICE_HOST=<ip of controller machine>

ADMIN_PASSWORD=password
MYSQL_PASSWORD=password
DATABASE_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=no-token-password
HORIZON_PASSWORD=password
RABBIT_PASSWORD=password

disable_all_services

enable_service rabbit
enable_service n-cpu
enable_service q-agt
enable_service odl-compute

DEST=/opt/stack
LOGFILE=$DEST/stack.sh.log
SCREEN_LOGDIR=$DEST/screen
SYSLOG=True
LOGDAYS=1


ODL_MGR_IP=<ip of controller machine>

Q_PLUGIN=ml2
Q_AGENT=openvswitch
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch,opendaylight
Q_ML2_PLUGIN_TYPE_DRIVERS=vxlan
OVS_NUM_HUGEPAGES=8192
OVS_DATAPATH_TYPE=netdev

OVDK_OVS_GIT_TAG=

ENABLE_TENANT_TUNNELS=True
ENABLE_TENANT_VLANS=True

Q_ML2_TENANT_NETWORK_TYPE=vxlan
ML2_VLAN_RANGES=physnet1:1000:1010
PHYSICAL_NETWORK=physnet1
OVS_PHYSICAL_BRIDGE=br-p1p1

[[post-config|$NOVA_CONF]]
[DEFAULT]
firewall_driver=nova.virt.firewall.NoopFirewallDriver
vnc_enabled=True
vncserver_listen=0.0.0.0
vncserver_proxyclient_address=$HOST_IP

A.1 Create VMs using DevStack Horizon GUI

After starting the OpenDaylight controller as described in Section 6.2, run stack.sh on the controller and compute nodes.

Log in to http://<control node ip address>:8080 to start the Horizon GUI.

Verify that the node shows up in the GUI.

Create a new VXLAN network:

1. Click the Networks tab.

2. Click the Create Network button.

3. Enter the network name, then click Next.


4. Enter the subnet information, then click Next.


5. Add additional information, then click Next.

6. Click the Create button.

7. Create a VM instance by clicking the Launch Instances button.


8. Click the Details tab to enter the VM details.


9. Click the Networking tab, then enter the network information.

The VMs will now be created.

Instead of disabling the OVSDB neutron service by removing the OVSDB neutron bundle file, it is possible to disable the bundle from the OSGi console. However, there does not appear to be a way to make this persistent, so it must be done each time the controller restarts.


Once the controller is up and running, connect to the OSGi console. The ss command displays all of the installed bundles and their status; adding a string filters the list of bundles. List the OVSDB bundles:

osgi> ss ovs
Framework is launched.

id      State       Bundle
106     ACTIVE      org.opendaylight.ovsdb.northbound_0.5.0
112     ACTIVE      org.opendaylight.ovsdb_0.5.0
262     ACTIVE      org.opendaylight.ovsdb.neutron_0.5.0

Note that there are three OVSDB bundles running (the OVSDB neutron bundle has not been removed in this case).

Disable the OVSDB neutron bundle and then list the OVSDB bundles again:

osgi> stop 262
osgi> ss ovs
Framework is launched.

id      State       Bundle
106     ACTIVE      org.opendaylight.ovsdb.northbound_0.5.0
112     ACTIVE      org.opendaylight.ovsdb_0.5.0
262     RESOLVED    org.opendaylight.ovsdb.neutron_0.5.0

Now the OVSDB neutron bundle is in the RESOLVED state, which means that it is not active.


Appendix B BNG as an Appliance

Download the latest BNG application from https://01.org/intel-data-plane-performance-demonstrators/downloads. More details about how the BNG works can be found at https://01.org/intel-data-plane-performance-demonstrators/quick-overview.


Appendix C Glossary

Acronym Description

ATR Application Targeted Routing

COTS Commercial Off‐The-Shelf

DPI Deep Packet Inspection

FCS Frame Check Sequence

GRE Generic Routing Encapsulation

GRO Generic Receive Offload

IOMMU InputOutput Memory Management Unit

Kpps Kilo packets per second

KVM Kernel-based Virtual Machine

LRO Large Receive Offload

MSI Message Signaled Interrupt

MPLS Multi-protocol Label Switching

Mpps Million packets per second

NIC Network Interface Card

pps Packets per second

QAT Quick Assist Technology

QinQ VLAN stacking (802.1ad)

RA Reference Architecture

RSC Receive Side Coalescing

RSS Receive Side Scaling

SP Service Provider

SR-IOV Single Root I/O Virtualization

TCO Total Cost of Ownership

TSO TCP Segmentation Offload


Appendix D References

Document Name Source

Internet Protocol version 4: http://www.ietf.org/rfc/rfc791.txt

Internet Protocol version 6: http://www.faqs.org/rfc/rfc2460.txt

Intel® 82599 10 Gigabit Ethernet Controller Datasheet: http://www.intel.com/content/www/us/en/ethernet-controllers/82599-10-gbe-controller-datasheet.html

Intel DDIO: https://www-ssl.intel.com/content/www/us/en/io/direct-data-i-o.html

Bandwidth Sharing Fairness: http://www.intel.com/content/www/us/en/network-adapters/10-gigabit-network-adapters/10-gbe-ethernet-flexible-port-partitioning-brief.html

Design Considerations for efficient network applications with Intel® multi-core processor-based systems on Linux: http://download.intel.com/design/intarch/papers/324176.pdf

OpenFlow with Intel 82599: http://ftp.sunet.se/pub/Linux/distributions/bifrost/seminars/workshop-2011-03-31/Openflow_1103031.pdf

Wu, W., DeMar, P., & Crawford, M. (2012). A Transport-Friendly NIC for Multicore/Multiprocessor Systems. IEEE Transactions on Parallel and Distributed Systems, vol. 23, no. 4, April 2012. http://lss.fnal.gov/archive/2010/pub/fermilab-pub-10-327-cd.pdf

Why Does Flow Director Cause Packet Reordering?: http://arxiv.org/ftp/arxiv/papers/1106/1106.0443.pdf

IA packet processing: http://www.intel.com/p/en_US/embedded/hwsw/technology/packet-processing

High Performance Packet Processing on Cloud Platforms using Linux with Intel® Architecture: http://networkbuilders.intel.com/docs/network_builders_RA_packet_processing.pdf

Packet Processing Performance of Virtualized Platforms with Linux and Intel® Architecture: http://networkbuilders.intel.com/docs/network_builders_RA_NFV.pdf

DPDK: http://www.intel.com/go/dpdk

Intel® DPDK Accelerated vSwitch: https://01.org/packet-processing


LEGAL

By using this document in addition to any agreements you have with Intel you accept the terms set forth below

You may not use or facilitate the use of this document in connection with any infringement or other legal analysis concerning Intel products described herein You agree to grant Intel a non-exclusive royalty-free license to any patent claim thereafter drafted which includes subject matter disclosed herein

INFORMATION IN THIS DOCUMENT IS PROVIDED IN CONNECTION WITH INTEL PRODUCTS NO LICENSE EXPRESS OR IMPLIED BY ESTOPPEL OR OTHERWISE TO ANY INTELLECTUAL PROPERTY RIGHTS IS GRANTED BY THIS DOCUMENT EXCEPT AS PROVIDED IN INTELS TERMS AND CONDITIONS OF SALE FOR SUCH PRODUCTS INTEL ASSUMES NO LIABILITY WHATSOEVER AND INTEL DISCLAIMS ANY EXPRESS OR IMPLIED WARRANTY RELATING TO SALE ANDOR USE OF INTEL PRODUCTS INCLUDING LIABILITY OR WARRANTIES RELATING TO FITNESS FOR A PARTICULAR PURPOSE MERCHANTABILITY OR INFRINGEMENT OF ANY PATENT COPYRIGHT OR OTHER INTELLECTUAL PROPERTY RIGHT

Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors

Performance tests such as SYSmark and MobileMark are measured using specific computer systems components software operations and functions Any change to any of those factors may cause the results to vary You should consult other information and performance tests to assist you in fully evaluating your contemplated purchases including the performance of that product when combined with other products

The products described in this document may contain design defects or errors known as errata which may cause the product to deviate from published specifications Current characterized errata are available on request Contact your local Intel sales office or your distributor to obtain the latest specifications and before placing your product order

Intel technologies may require enabled hardware specific software or services activation Check with your system manufacturer or retailer Tests document performance of components on a particular test in specific systems Differences in hardware software or configuration will affect actual performance Consult other sources of information to evaluate performance as you consider your purchase For more complete information about performance and benchmark results visit httpwwwintelcomperformance

All products computer systems dates and figures specified are preliminary based on current expectations and are subject to change without notice Results have been estimated or simulated using internal Intel analysis or architecture simulation or modeling and provided to you for informational purposes Any differences in your system hardware software or configuration may affect your actual performance

No computer system can be absolutely secure Intel does not assume any liability for lost or stolen data or systems or any damages resulting from such losses

Intel does not control or audit third-party web sites referenced in this document You should visit the referenced web site and confirm whether referenced data are accurate

Intel Corporation may have patents or pending patent applications trademarks copyrights or other intellectual property rights that relate to the presented subject matter The furnishing of documents and other materials and information does not provide any license express or implied by estoppel or otherwise to any such patents trademarks copyrights or other intellectual property rights

copy 2014 Intel Corporation All rights reserved Intel the Intel logo Core Xeon and others are trademarks of Intel Corporation in the US andor other countries Other names and brands may be claimed as the property of others

  • Intelreg Open Network Platform Server Reference Architecture (Release 12)
    • Revision History
    • Contents
    • 10 Audience and Purpose
    • 20 Summary
      • 21 Network Services Examples
        • 211 Suricata (Next Generation IDSIPS engine)
        • 212 vBNG (Broadband Network Gateway)
            • 30 Hardware Components
            • 40 Software Versions
              • 41 Obtaining Software Ingredients
                • 50 Installation and Configuration Guide
                  • 51 Instructions Common to Compute and Controller Nodes
                    • 511 BIOS Settings
                    • 512 Operating System Installation and Configuration
                      • 5121 Getting the Fedora 20 DVD
                      • 5122 Fedora 20 Installation
                      • 5123 Additional Packages Installation and Upgrade
                      • 5124 Disable and Enable Services
                          • 52 Controller Node Setup
                            • 521 OpenStack (Juno)
                              • 5211 Network Requirements
                              • 5212 Storage Requirements
                              • 5213 OpenStack Installation Procedures
                                  • 53 Compute Node Setup
                                    • 531 Host Configuration
                                      • 5311 Using DevStack to Deploy vSwitch and OpenStack Components
                                          • 54 vIPS
                                            • 541 Network Configuration for non-vIPS Guests
                                                • 60 Testing the Setup
                                                  • 61 Preparation with OpenStack
                                                    • 611 Deploying Virtual Machines
                                                      • 6111 Default Settings
                                                      • 6112 Customer Settings
                                                      • 6113 Example mdash VM Deployment
                                                      • 6114 Local vIPS
                                                      • 6115 Remote vIPS
                                                        • 612 Non-uniform Memory Access (Numa) Placement and SR-IOV Pass-through for OpenStack
                                                          • 6121 Prepare Compute Node for SR-IOV Pass-through
                                                          • 6122 Devstack Configurations
                                                          • 6123 Create VM with Numa Placement and SR-IOV
                                                              • 62 Using OpenDaylight
                                                                • 621 Preparing the OpenDaylightController
                                                                  • 63 Border Network Gateway
                                                                    • 631 Installation and Configuration Inside the VM
                                                                    • 632 Installation and Configuration of the Back-to-Back Host (Packet Generator)
                                                                    • 633 Extra Preparations on the Compute Node
                                                                        • Appendix A Additional OpenDaylight Information
                                                                          • A1 Create VMs using DevStack Horizon GUI
                                                                            • Appendix B BNG as an Appliance
                                                                            • Appendix C Glossary
                                                                            • Appendix D References
                                                                            • LEGAL
Page 4: Intel Open Network Platform Server Reference Architecture ... · Intel® ONP Server Reference Architecture Solutions Guide 1.0 Audience and Purpose The primary audiences for this

Intelreg ONP Server Reference ArchitectureSolutions Guide

4

NOTE This page intentionally left blank

5

Intelreg ONP Server Reference ArchitectureSolutions Guide

10 Audience and Purpose

The primary audiences for this document are architects and engineers implementing the Intelreg Open Network Platform Server Reference Architecture using Open Source software Software ingredients include

bull DevStack

bull OpenStack

bull OpenDaylight

bull Data Plane Development Kit (DPDK)

bull Intelreg DPDK Accelerated vSwitch

bull Open vSwitch

bull Fedora 20

This document provides a guide for integration and performance characterization using the Intelreg Open Network Platform Server (Intel ONP Server) Content includes high-level architecture setup and configuration procedures integration learnings and a set of baseline performance data This information is intended to help architects and engineers evaluate Network Function Virtualization (NFV) and Software Defined Network (SDN) solutions

An understanding of system performance is required to develop solutions that meet the demanding requirements of the telecom industry and transform telecom networks Workload examples are described and are useful for evaluating other NFV workloads

Ingredient versions integration procedures configuration parameters and test methodologies all influence performance The performance data provided here does not represent best possible performance but rather provides a baseline of what is possible using ldquoout-of-boxrdquo open source software ingredients

The purpose of documenting configurations is not to imply any preferred methods However providing a baseline configuration of well tested procedures can help to achieve optimal system performance when developing an NFVSDN solution

Intelreg ONP Server Reference ArchitectureSolutions Guide

6

NOTE This page intentionally left blank

7

Intelreg ONP Server Reference ArchitectureSolutions Guide

20 Summary

The Intel ONP Server uses Open Source software to help accelerate SDN and NFV commercialization with the latest Intel Architecture Communications Platform

This document describes how to setup and configure controller and compute nodes for evaluating and developing NFVSDN solutions using the Intelreg Open Network Platform ingredients

Platform hardware is based on a Intelreg Xeonreg DP Server with the following

bull Intelreg Xeonreg Processor Series E5-2697 V3

bull Intelreg 82599 10 GbE Controller

The host operating system is Fedora 20 with Qemu-kvm virtualization technology Software ingredients include Data Plane Development Kit (DPDK) Open vSwitch Intelreg DPDK Accelerated vSwitch OpenStack and OpenDaylight

Figure 2-1 Intel ONP Server - Hardware and Software Ingredients

Intelreg ONP Server Reference ArchitectureSolutions Guide

8

Figure 2-2 shows a generic SDNNFV setup In this configuration Orchestrator and Controller (management and control plane) and compute node (data plane) run on different server nodes Note that many variations of this setup can be deployed

The test cases described in this document were designed to illustrate certain baseline performance and functionality using the specified ingredients configurations and specific test methodology A simple network topology was used as shown in Figure 2-2

Test cases are designed to

bull Baseline packet processing (such as data plane) performance with host and VM configurations

bull Verify communication between controller and compute nodes

bull Validate basic controller functionality

Figure 2-2 Generic Setup with Controller and Two Compute Nodes

9

Intelreg ONP Server Reference ArchitectureSolutions Guide

21 Network Services ExamplesThe following examples of network services are included as use-cases that have been tested with the Intelreg Open Network Platform Server Reference Architecture

211 Suricata (Next Generation IDSIPS engine)Suricata is a high performance Network IDS IPS and Network Security Monitoring engine developed by the OISF its supporting vendors and the community

httpsuricata-idsorg

212 vBNG (Broadband Network Gateway)Intel Data Plane Performance Demonstrators ndash Border Network Gateway (BNG) using DPDK

https01orgintel-data-plane-performance-demonstratorsdownloadsbng-application-v013

A Broadband (or Border) Network Gateway may also be known as a Broadband Remote Access Server (BRAS) and routes traffic to and from broadband remote access devices such as digital subscriber line access multiplexers (DSLAM) This network function is included as an example of a workload that can be virtualized on the Intel ONP Server

Additional information on the performance characterization of this vBNG implementation can be found at

httpnetworkbuildersintelcomdocsNetwork_Builders_RA_vBRAS_Finalpdf

Refer to Border Network Gateway for information on setting up and testing the vBNG application with Intelreg DPDK Accelerated vSwitch or to Appendix B for more information on running the BNG as an appliance

Intelreg ONP Server Reference ArchitectureSolutions Guide

10

NOTE This page intentionally left blank

11

Intelreg ONP Server Reference ArchitectureSolutions Guide

30 Hardware Components

Table 3-1 Hardware Ingredients (Grizzly Pass)

Platform: Intel® Server Board 2U, 8x3.5 SATA, 2x750W, 2xHS Rails, Intel R2308GZ4GC. Notes: Grizzly Pass Xeon DP Server (2 CPU sockets); 240 GB SSD 2.5in SATA 6 Gb/s, Intel Wolfsville SSDSC2BB240G401, DC S3500 Series.

Processors: Intel® Xeon® Processor E5-2680 v2, LGA2011, 2.8 GHz, 25 MB, 115 W, 10 cores. Notes: Ivy Bridge Socket-R (EP), 10 core, 2.8 GHz, 115 W, 2.5 MB per core LLC, 8.0 GT/s QPI, DDR3-1867, HT, turbo; long product availability.

Cores: 10 physical cores/CPU. Notes: 20 hyper-threaded cores per CPU, for 40 total cores.

Memory: 8 GB 1600 Reg ECC 1.5 V DDR3, Kingston KVR16R11S4/8I, Romley. Notes: 64 GB RAM (8x 8 GB).

NICs (82599): 2x Intel® 82599 10 GbE Controller (Niantic). Notes: NICs are on socket zero (3 PCIe slots available on socket 0).

BIOS: SE5C600.86B.02.01.0002.082220131453, Release Date 08/22/2013, BIOS Revision 4.6. Notes: Intel® Virtualization Technology for Directed I/O (Intel® VT-d) and hyper-threading enabled.

Table 3-2 Hardware Ingredients (Wildcat Pass)

Platform: Intel® Server Board S2600WTT, 1100W power supply. Notes: Wildcat Pass Xeon DP Server (2 CPU sockets); 120 GB SSD 2.5in SATA 6 Gb/s, Intel Wolfsville SSDSC2BB120G4.

Processors: Intel® Xeon® Processor E5-2697 v3, 2.6 GHz, 35 MB, 145 W, 14 cores. Notes: Haswell, 14 core, 2.6 GHz, 145 W, 35 MB total cache per processor, 9.6 GT/s QPI, DDR4-1600/1866/2133.

Cores: 14 physical cores/CPU. Notes: 28 hyper-threaded cores per CPU, for 56 total cores.

Memory: 8 GB DDR4 RDIMM, Crucial CT8G4RFS423. Notes: 64 GB RAM (8x 8 GB).

NICs (82599): 2x Intel® 82599 10 GbE Controller (Niantic). Notes: NICs are on socket zero.

BIOS: GRNDSDP1.86B.0038.R01.1409040644, Release Date 09/04/2014. Notes: Intel® Virtualization Technology for Directed I/O (Intel® VT-d) enabled only for SR-IOV PCI pass-through tests; hyper-threading enabled, but disabled for benchmark testing.


4.0 Software Versions

Table 4-1 Software Versions

Software Component | Function | Version/Configuration

Fedora 20 x86_64 | Host OS | 3.15.6-200.fc20.x86_64

QEMU-KVM | Virtualization technology | Modified QEMU 1.6.2 (bundled with Intel® DPDK Accelerated vSwitch)

Data Plane Development Kit (DPDK) | Network stack bypass and libraries for packet processing; includes user space poll mode drivers | 1.7.1

Intel® DPDK Accelerated vSwitch | vSwitch | v1.2.0, commit id 6210bb0a6139b20283de115f87aa7a381b04670f

Open vSwitch | vSwitch | Open vSwitch 2.3, commit id b35839f3855e3b812709c6ad1c9278f498aa9935

OpenStack | SDN orchestrator | Juno release + Intel patches (openstack_ovdkl02-907zip)

DevStack | Tool for OpenStack deployment | https://github.com/openstack-dev/devstack.git, commit id d6f700db33aeab68916156a98971aef8cfa53a2e

OpenDaylight | SDN controller | Helium-SR1

Suricata | IPS application | Suricata v2.0.4 (current Fedora 20 package)

BNG (DPPD) | Broadband Network Gateway DPDK Performance Demonstrator application | DPPD v0.13, https://01.org/intel-data-plane-performance-demonstrators/downloads

Pktgen | Software network packet generator | v2.7.7


4.1 Obtaining Software Ingredients

Table 4-2 Software Ingredients

Fedora 20
Location: http://download.fedoraproject.org/pub/fedora/linux/releases/20/Fedora/x86_64/iso/Fedora-20-x86_64-DVD.iso
Comments: Standard Fedora 20 iso image

Data Plane Development Kit (DPDK)
Sub-components: DPDK poll mode driver, sample apps (bundled)
Location: http://dpdk.org/git/dpdk, commit id 99213f3827bad956d74e2259d06844012ba287a4
Comments: All sub-components in one zip file

Intel® DPDK Accelerated vSwitch (OVDK)
Sub-components: dpdk-ovs, qemu, ovs-db, vswitchd, ovs_client (bundled)
Location: https://github.com/01org/dpdk-ovs.git, commit id 6210bb0a6139b20283de115f87aa7a381b04670f
Comments: v1.2.0

Open vSwitch
Location: https://github.com/openvswitch/ovs.git, commit id b35839f3855e3b812709c6ad1c9278f498aa9935

OpenStack
Sub-components: Juno release; to be deployed using DevStack (see following row)
Comments: Three patches downloaded as one tarball; then follow the instructions to deploy the nodes

DevStack
Sub-components: Patches for DevStack and Nova
Location: https://github.com/openstack-dev/devstack.git, commit id d6f700db33aeab68916156a98971aef8cfa53a2e; then apply to that commit the patches in https://download.01.org/packet-processing/ONPS1.2/openstack_ovdkl02-907zip
Comments: Two patches downloaded as one tarball; then follow the instructions to deploy

OpenDaylight
Location: http://nexus.opendaylight.org/content/repositories/opendaylight.release/org/opendaylight/integration/distribution-karaf/0.2.1-Helium-SR1/distribution-karaf-0.2.1-Helium-SR1.tar.gz

Intel® ONP Server Release 1.2 Script
Sub-components: Helper scripts to set up SRT 1.2 using DevStack
Location: https://download.01.org/packet-processing/ONPS1.2/onps_server_1_2.tar.gz

BNG DPPD
Sub-components: Broadband Network Gateway DPDK Performance Demonstrator
Location: https://01.org/intel-data-plane-performance-demonstrators/dppd-bng-v013.zip

PktGen
Sub-components: Software network packet generator
Location: https://github.com/Pktgen/Pktgen-DPDK.git, commit id 5e8633c99e9771467dc26b64a4ff232c7e9fba2a

BNG Helper scripts
Sub-components: Intel® ONP for Server configuration scripts for vBNG
Location: https://01.org/sites/default/files/page/vbng-scripts.tgz

Suricata
Comments: Package from Fedora 20: yum install suricata


5.0 Installation and Configuration Guide

This section describes the installation and configuration instructions to prepare the controller and compute nodes

5.1 Instructions Common to Compute and Controller Nodes

This section describes how to prepare both the controller and compute nodes with the correct BIOS settings and operating system installation. The preferred operating system is Fedora 20, although it should be relatively easy to adapt this solutions guide to other Linux distributions.

5.1.1 BIOS Settings

Table 5-1 BIOS Settings

Configuration | Setting for Controller Node | Setting for Compute Node

Enhanced Intel SpeedStep Enabled Disabled

Processor C3 Disabled Disabled

Processor C6 Disabled Disabled

Intel® Virtualization Technology for Directed I/O (Intel® VT-d) Disabled Enabled (OpenStack NUMA placement only)

Intel Hyper-Threading Technology (HTT) Enabled Disabled

MLC Streamer Enabled Enabled

MLC Spatial Prefetcher Enabled Enabled

DCU Instruction Prefetcher Enabled Enabled

Direct Cache Access (DCA) Enabled Enabled

CPU Power and Performance Policy Performance Performance

Intel Turbo boost Enabled Off

Memory RAS and Performance Configuration -> NUMA Optimized Enabled Enabled


5.1.2 Operating System Installation and Configuration

Following are some generic instructions for installing and configuring the operating system. Other ways of installing the operating system, such as network installation, PXE boot installation, or USB key installation, are not described in this solutions guide.

5.1.2.1 Getting the Fedora 20 DVD

1 Download the 64-bit Fedora 20 DVD (not Fedora 20 Live Media) from the following site

http://fedoraproject.org/en/get-fedora#formats

or from direct URL

http://download.fedoraproject.org/pub/fedora/linux/releases/20/Fedora/x86_64/iso/Fedora-20-x86_64-DVD.iso

2 Burn the ISO file to DVD and create an installation disk
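Optionally, verify the integrity of the downloaded image before burning it; for example, compare the SHA-256 checksum against the value published on the Fedora site:

sha256sum Fedora-20-x86_64-DVD.iso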

5.1.2.2 Fedora 20 Installation

Use the DVD to install Fedora 20. During the installation, click Software selection, then choose the following:

1 C Development Tool and Libraries

2 Development Tools

Also create a user named stack and check the box Make this user administrator during the installation. The stack user is used in the OpenStack installation.
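If the stack user was not created during installation, it can be added afterward; a minimal sketch (assuming the wheel group grants administrator rights, as in a default Fedora 20 install):

useradd stack
passwd stack
usermod -aG wheel stack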

Note: Please make sure to download and use the onps_server_1_2.tar.gz tarball. Start with the README file, which gives instructions on how to use Intel's scripts to automate most of the installation steps described in this section and save time. When using Intel's scripts, you can jump to Section 5.4 after installing OpenDaylight based on the instructions described in Section 6.2.1.

5.1.2.3 Additional Packages Installation and Upgrade

Some packages are not installed with the standard Fedora 20 installation but are required by Intel® Open Network Platform Software (ONPS) components. These packages should be installed by the user:

git ntp patch socat python-passlib libxslt-devel libffi-devel fuse-devel gluster python-cliff
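These can be installed in one pass with yum; the package names below are exactly those listed above (availability may vary slightly between Fedora repositories):

yum install -y git ntp patch socat python-passlib libxslt-devel libffi-devel fuse-devel gluster python-cliff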

ONPS supports Fedora kernel 3.15.6, which is newer than the native Fedora 20 kernel 3.11.10. To upgrade to 3.15.6, follow these steps:

1 Download kernel packages

wget https://kojipkgs.fedoraproject.org/packages/kernel/3.15.6/200.fc20/x86_64/kernel-3.15.6-200.fc20.x86_64.rpm

wget https://kojipkgs.fedoraproject.org/packages/kernel/3.15.6/200.fc20/x86_64/kernel-devel-3.15.6-200.fc20.x86_64.rpm

wget https://kojipkgs.fedoraproject.org/packages/kernel/3.15.6/200.fc20/x86_64/kernel-modules-extra-3.15.6-200.fc20.x86_64.rpm


2 Install kernel packages

rpm -i kernel-3.15.6-200.fc20.x86_64.rpm
rpm -i kernel-devel-3.15.6-200.fc20.x86_64.rpm
rpm -i kernel-modules-extra-3.15.6-200.fc20.x86_64.rpm

3 Reboot the system to boot into the 3.15.6 kernel
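After the reboot, confirm that the new kernel is running:

uname -r
3.15.6-200.fc20.x86_64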

Note: ONPS depends on libraries provided by your Linux distribution. As such, it is recommended that you regularly update your Linux distribution with the latest bug fixes and security patches to reduce the risk of security vulnerabilities in your systems.

After installing the required packages the operating system should be updated with the following command

yum update -y

This command upgrades to the latest kernel that Fedora supports. In order to keep the kernel at version 3.15.6, the yum configuration file needs to be modified with this command

echo exclude=kernel gtgt etcyumconf

before running yum update

After the update completes the system needs to be rebooted

5.1.2.4 Disable and Enable Services

For OpenStack, the following services should be disabled: SELinux, firewall, and NetworkManager. Run the following commands:

sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
systemctl disable firewalld.service
systemctl disable NetworkManager.service

The following services should be enabled: ntp, sshd, and network. Run the following commands:

systemctl enable ntpd.service
systemctl enable ntpdate.service
systemctl enable sshd.service
chkconfig network on

It is important to keep the time synchronized between all nodes, and to use a known NTP server for all of them. Users can edit /etc/ntp.conf to add a new server and remove default servers. The following example replaces a default NTP server with a local NTP server 100012 and comments out other default servers:

sed -i 's/server 0.fedora.pool.ntp.org iburst/server 100012/g' /etc/ntp.conf
sed -i 's/server 1.fedora.pool.ntp.org iburst/#server 1.fedora.pool.ntp.org iburst/g' /etc/ntp.conf
sed -i 's/server 2.fedora.pool.ntp.org iburst/#server 2.fedora.pool.ntp.org iburst/g' /etc/ntp.conf
sed -i 's/server 3.fedora.pool.ntp.org iburst/#server 3.fedora.pool.ntp.org iburst/g' /etc/ntp.conf


5.2 Controller Node Setup

This section describes the controller node setup. It is assumed that the user successfully followed the operating system installation and configuration sections.

Note: Make sure to download and use the onps_server_1_2.tar.gz tarball. Start with the README file, which gives instructions on how to use Intel's scripts to automate most of the installation steps described in this section and save time.

5.2.1 OpenStack (Juno)

This section documents features and limitations that are supported with the Intel® DPDK Accelerated vSwitch and OpenStack Juno.

5.2.1.1 Network Requirements

General

At least two networks are required to build an OpenStack infrastructure in a lab environment. One network is used to connect all nodes for OpenStack management (management network), and the other is a private network used exclusively for OpenStack internal connections (tenant network) between instances (or virtual machines).

One additional network is required for Internet connectivity, because installing OpenStack requires pulling packages from various sources/repositories on the Internet.

Some users might want to have Internet and/or external connectivity for OpenStack instances (virtual machines). In this case, an optional network can be used.

The assumption is that the target OpenStack infrastructure contains multiple nodes: one controller node and one or more compute nodes.

Network Configuration Example

The following is an example of how to configure networks for OpenStack infrastructure The example uses four network interfaces as follows

• ens2f1 (Internet network): used to pull all necessary packages/patches from repositories on the Internet; configured to obtain a DHCP address.

• ens2f0 (Management network): used to connect all nodes for OpenStack management; configured to use network 10.11.0.0/16.

• p1p1 (Tenant network): used for OpenStack internal connections between virtual machines; configured with no IP address.

• p1p2 (Optional external network): used for virtual machine Internet/external connectivity; configured with no IP address. This interface is only needed on the controller node if an external network is configured; it is not needed on the compute node.

Note that among these interfaces, the interface for the virtual network (in this example p1p1) must be an 82599 port, because it is used for DPDK and the Intel® DPDK Accelerated vSwitch. Also note that a static IP address should be used for the management network interface.

In Fedora 20 the network configuration files are located at

/etc/sysconfig/network-scripts


To configure a network on the host system edit the following network configuration files

ifcfg-ens2f1:
DEVICE=ens2f1
TYPE=Ethernet
ONBOOT=yes
BOOTPROTO=dhcp

ifcfg-ens2f0:
DEVICE=ens2f0
TYPE=Ethernet
ONBOOT=yes
BOOTPROTO=static
IPADDR=10.11.12.11
NETMASK=255.255.0.0

ifcfg-p1p1:
DEVICE=p1p1
TYPE=Ethernet
ONBOOT=yes
BOOTPROTO=none

ifcfg-p1p2:
DEVICE=p1p2
TYPE=Ethernet
ONBOOT=yes
BOOTPROTO=none

Note: Do not configure an IP address for p1p1 (the 10 Gb/s interface); otherwise DPDK does not work when binding the driver during the OpenStack Neutron installation.

Note: 10.11.12.11 and 255.255.0.0 are the static IP address and netmask for the management network. It is necessary to have a static IP address on this subnet. The IP address 10.11.12.11 is just an example.

5.2.1.2 Storage Requirements

By default, DevStack uses block storage (Cinder) with a volume group named stack-volumes. If not specified, stack-volumes is created with 10 GB of space from a local file system. Note that stack-volumes is the name of the volume group; it can be built from more than one physical volume.

The following example shows how to use spare local disks /dev/sdb and /dev/sdc to form stack-volumes on a controller node by running the following commands:

pvcreate /dev/sdb
pvcreate /dev/sdc
vgcreate stack-volumes /dev/sdb /dev/sdc
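The resulting volume group can be checked with the standard LVM tools, for example:

vgdisplay stack-volumes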

5.2.1.3 OpenStack Installation Procedures

General

DevStack is used to deploy OpenStack in this example. The following procedure is based on an actual installation performed in an Intel test lab, consisting of one controller node (controller) and one compute node (compute).

Controller Node Installation Procedures

The following example uses a host for controller node installation with the following

• Hostname: sdnlab-k01

• Internet network IP address: obtained from DHCP server


• OpenStack management IP address: 10.11.12.1

• User/password: stack/stack

Root User Actions

Log in as root (or use su) and perform the following:

1 Add stack user to sudoer list

echo "stack ALL=(ALL) NOPASSWD: ALL" >> /etc/sudoers

2 Edit /etc/libvirt/qemu.conf; add or modify the following lines:

cgroup_controllers = [ "cpu", "devices", "memory", "blkio", "cpuset", "cpuacct" ]

cgroup_device_acl = [
    "/dev/null", "/dev/full", "/dev/zero",
    "/dev/random", "/dev/urandom",
    "/dev/ptmx", "/dev/kvm", "/dev/kqemu",
    "/dev/rtc", "/dev/hpet", "/dev/net/tun",
    "/mnt/huge", "/dev/vhost-net"
]

hugetlbfs_mount = "/mnt/huge"

3 Restart the libvirt service and make sure libvirtd is active:

systemctl restart libvirtd.service
systemctl status libvirtd.service

Stack User Actions

1 Login as a stack user

2 Configure the appropriate proxies (yum, http, https, and git) for package installation and make sure these proxies are functional. Note that on the controller node, localhost and its IP address should be included in the no_proxy setup (for example, export no_proxy=localhost,10.11.12.1).
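A minimal sketch of the proxy setup, assuming a hypothetical proxy at http://proxy.example.com:8080 (adjust to your environment; yum reads its proxy from /etc/yum.conf):

export http_proxy=http://proxy.example.com:8080
export https_proxy=http://proxy.example.com:8080
export no_proxy=localhost,10.11.12.1
git config --global http.proxy http://proxy.example.com:8080
echo "proxy=http://proxy.example.com:8080" | sudo tee -a /etc/yum.conf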

3 Intel® DPDK Accelerated vSwitch patches for OpenStack

The file openstack_ovdkl02-907zip contains the necessary patches for OpenStack; these patches are currently not part of upstream OpenStack. The file can be downloaded from

https://01.org/sites/default/files/page/openstack_ovdkl02-907zip

Place the file in the /home/stack directory and unzip it. Three patch files (devstack.patch, nova.patch, and neutron.patch) will be present after unzipping.

cd /home/stack
wget https://01.org/sites/default/files/page/openstack_ovdkl02-907zip
unzip openstack_ovdkl02-907zip

4 Download DevStack source

git clone https://github.com/openstack-dev/devstack.git

5 Check out DevStack and apply the Intel® DPDK Accelerated vSwitch patch

cd /home/stack/devstack
git checkout d6f700db33aeab68916156a98971aef8cfa53a2e
patch -p1 < /home/stack/devstack.patch


6 Download and patch Nova and Neutron

sudo mkdir /opt/stack
sudo chown stack:stack /opt/stack
cd /opt/stack
git clone https://github.com/openstack/nova.git
git clone https://github.com/openstack/neutron.git
cd /opt/stack/nova
git checkout b7738bfb6c2f271d047e8f20c0b74ef647367111
patch -p1 < /home/stack/nova.patch

7 Create the local.conf file in /home/stack/devstack

8 Pay attention to the following in the local.conf file

a Use Rabbit for messaging services (Rabbit is on by default). In the past, Fedora only supported QPID for OpenStack; now it only supports Rabbit.

b Explicitly disable the Nova compute service on the controller, because it is enabled by default:

disable_service n-cpu

c To use Open vSwitch, specify it in the ML2 plug-in configuration:

Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch

d Explicitly disable tenant tunneling and enable tenant VLANs, because tunneling is used by default:

ENABLE_TENANT_TUNNELS=False
ENABLE_TENANT_VLANS=True

e A sample local.conf file for the controller node follows:

Controller node:
[[local|localrc]]

FORCE=yes
ADMIN_PASSWORD=password
MYSQL_PASSWORD=password
DATABASE_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=no-token-password
HORIZON_PASSWORD=password
RABBIT_PASSWORD=password

disable_service n-net
disable_service n-cpu
enable_service q-svc
enable_service q-agt
enable_service q-dhcp
enable_service q-l3
enable_service q-meta
enable_service neutron
enable_service horizon

DEST=/opt/stack
LOGFILE=$DEST/stack.sh.log
SCREEN_LOGDIR=$DEST/screen
SYSLOG=True
LOGDAYS=1

HOST_IP_IFACE=ens2f0
PUBLIC_INTERFACE=p1p2
VLAN_INTERFACE=p1p1
FLAT_INTERFACE=p1p1

Q_AGENT=openvswitch
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch
Q_ML2_PLUGIN_TYPE_DRIVERS=vlan,flat,local

ENABLE_TENANT_TUNNELS=False
ENABLE_TENANT_VLANS=True
PHYSICAL_NETWORK=physnet1
ML2_VLAN_RANGES=physnet1:1000:1010
OVS_PHYSICAL_BRIDGE=br-p1p1
MULTI_HOST=True

[[post-config|$NOVA_CONF]]
[DEFAULT]
firewall_driver=nova.virt.firewall.NoopFirewallDriver
novncproxy_host=0.0.0.0
novncproxy_port=6080

9 Install DevStack

cd /home/stack/devstack
./stack.sh

10 For a successful installation, the following displays at the end of the screen output:

stack.sh completed in XXX seconds

where XXX is the number of seconds

11 For the controller node only: add the physical port(s) to the bridge(s) created by the DevStack installation. The following example configures the two bridges br-p1p1 (for the virtual network) and br-ex (for the external network):

sudo ovs-vsctl add-port br-p1p1 p1p1
sudo ovs-vsctl add-port br-ex p1p2

12 Make sure the proper VLANs are created in the switch connecting physical port p1p1. For example, the previous local.conf specifies a VLAN range of 1000-1010, so matching VLANs 1000 to 1010 should be configured in the switch.


5.3 Compute Node Setup

This section describes how to complete the setup of the compute nodes. It is assumed that the user has successfully completed the BIOS settings and operating system installation and configuration sections.

Note: Please make sure to download and use the onps_server_1_2.tar.gz tarball. Start with the README file, which gives instructions on how to use Intel's scripts to automate most of the installation steps described in this section and save time.

5.3.1 Host Configuration

5.3.1.1 Using DevStack to Deploy vSwitch and OpenStack Components

General

Deploying OpenStack and the Intel® DPDK Accelerated vSwitch using DevStack on a compute node follows the same procedures as on the controller node. Differences include:

• Required services are nova compute, neutron agent, and Rabbit

• Intel® DPDK Accelerated vSwitch is used in place of Open vSwitch for the neutron agent

Compute Node Installation Example

The following example uses a host for compute node installation with the following

• Hostname: sdnlab-k02

• Lab network IP address: obtained from DHCP server

• OpenStack management IP address: 10.11.12.2

• User/password: stack/stack

Note the following

• no_proxy setup: localhost and its IP address should be included in the no_proxy setup. In addition, the hostname and IP address of the controller node should also be included. For example:

export no_proxy=localhost,10.11.12.2,sdnlab-k01,10.11.12.1

• Differences in the local.conf file:

- The service host is the controller, as are other OpenStack servers such as MySQL, Rabbit, Keystone, and Image. Therefore, they should be spelled out. Using the controller node example in the previous section, the service host and its IP address should be:

SERVICE_HOST_NAME=sdnlab-k01
SERVICE_HOST=10.11.12.1

- The only OpenStack services required on compute nodes are messaging, nova compute, and neutron agent, so the local.conf might look like:

disable_all_services
enable_service rabbit
enable_service n-cpu
enable_service q-agt


- The user has the option to use ovdk or openvswitch for the neutron agent:

Q_AGENT=ovdk

or

Q_AGENT=openvswitch

Note: For openvswitch, the user can specify regular or accelerated Open vSwitch (accelerated OVS). If accelerated OVS is used, the following setting should be added:

OVS_DATAPATH_TYPE=netdev

Note: If both are specified in the same local.conf file, the later one overwrites the previous one.

- For the OVDK and accelerated OVS huge pages setting, specify the number of huge pages to be allocated and the mounting point (default is /mnt/huge):

OVDK_NUM_HUGEPAGES=8192

or

OVS_NUM_HUGEPAGES=8192

- For this version, Intel uses specific versions of OVDK or accelerated OVS from their respective repositories. Specify the following in the local.conf file if OVDK or accelerated OVS is used:

OVDK_GIT_TAG=6210bb0a6139b20283de115f87aa7a381b04670f
OVS_GIT_TAG=b35839f3855e3b812709c6ad1c9278f498aa9935

- Binding the physical port to the bridge is done through the following line in local.conf. For example, to bind port p1p1 to bridge br-p1p1, use:

OVS_PHYSICAL_BRIDGE=br-p1p1

- A sample local.conf file for a compute node with the ovdk agent follows:

Compute node:
[[local|localrc]]

FORCE=yes
MULTI_HOST=True

HOST_NAME=$(hostname)
HOST_IP=10.11.12.2
HOST_IP_IFACE=ens2f0
SERVICE_HOST_NAME=10.11.12.1
SERVICE_HOST=10.11.12.1

MYSQL_HOST=$SERVICE_HOST
RABBIT_HOST=$SERVICE_HOST
GLANCE_HOST=$SERVICE_HOST
GLANCE_HOSTPORT=$SERVICE_HOST:9292
KEYSTONE_AUTH_HOST=$SERVICE_HOST
KEYSTONE_SERVICE_HOST=$SERVICE_HOST

ADMIN_PASSWORD=password
MYSQL_PASSWORD=password
DATABASE_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=no-token-password
HORIZON_PASSWORD=password
RABBIT_PASSWORD=password

disable_all_services
enable_service rabbit
enable_service n-cpu
enable_service q-agt

DEST=/opt/stack
LOGFILE=$DEST/stack.sh.log
SCREEN_LOGDIR=$DEST/screen
SYSLOG=True
LOGDAYS=1

Q_AGENT=ovdk
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch
Q_ML2_PLUGIN_TYPE_DRIVERS=vlan
OVDK_NUM_HUGEPAGES=8192
OVDK_GIT_TAG=6210bb0a6139b20283de115f87aa7a381b04670f

ENABLE_TENANT_TUNNELS=False
ENABLE_TENANT_VLANS=True
ML2_VLAN_RANGES=physnet1:1000:1010
PHYSICAL_NETWORK=physnet1
OVS_PHYSICAL_BRIDGE=br-p1p1

[[post-config|$NOVA_CONF]]
[DEFAULT]
firewall_driver=nova.virt.firewall.NoopFirewallDriver
vnc_enabled=True
vncserver_listen=0.0.0.0
vncserver_proxyclient_address=$HOST_IP

- A sample local.conf file for a compute node with the accelerated OVS agent follows:

Compute node:
[[local|localrc]]

FORCE=yes
MULTI_HOST=True

HOST_NAME=$(hostname)
HOST_IP=10.11.12.2
HOST_IP_IFACE=ens2f0

SERVICE_HOST_NAME=sdnlab-k01
SERVICE_HOST=10.11.12.1

MYSQL_HOST=$SERVICE_HOST
RABBIT_HOST=$SERVICE_HOST
GLANCE_HOST=$SERVICE_HOST
GLANCE_HOSTPORT=$SERVICE_HOST:9292
KEYSTONE_AUTH_HOST=$SERVICE_HOST
KEYSTONE_SERVICE_HOST=$SERVICE_HOST

ADMIN_PASSWORD=password
MYSQL_PASSWORD=password
DATABASE_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=no-token-password
HORIZON_PASSWORD=password
RABBIT_PASSWORD=password

disable_all_services
enable_service rabbit
enable_service n-cpu
enable_service q-agt

DEST=/opt/stack
LOGFILE=$DEST/stack.sh.log
SCREEN_LOGDIR=$DEST/screen
SYSLOG=True
LOGDAYS=1

Q_AGENT=openvswitch
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch
Q_ML2_PLUGIN_TYPE_DRIVERS=vlan
OVS_NUM_HUGEPAGES=8192
OVS_DATAPATH_TYPE=netdev
OVS_GIT_TAG=b35839f3855e3b812709c6ad1c9278f498aa9935

ENABLE_TENANT_TUNNELS=False
ENABLE_TENANT_VLANS=True
ML2_VLAN_RANGES=physnet1:1000:1010
PHYSICAL_NETWORK=physnet1
OVS_PHYSICAL_BRIDGE=br-p1p1

[[post-config|$NOVA_CONF]]
[DEFAULT]
firewall_driver=nova.virt.firewall.NoopFirewallDriver
vnc_enabled=True
vncserver_listen=0.0.0.0
vncserver_proxyclient_address=$HOST_IP

5.4 vIPS

The vIPS used is Suricata, which should be installed in a VM as an rpm package, as previously described. To configure it to run in inline mode (IPS), use the following:

1 Turn on IP forwarding

sysctl -w net.ipv4.ip_forward=1

2 Mangle all traffic from one vPort to the other using a netfilter queue

iptables -I FORWARD -i eth1 -o eth2 -j NFQUEUE
iptables -I FORWARD -i eth2 -o eth1 -j NFQUEUE

3 Have Suricata run in inline mode using the netfilter queue

suricata -c /etc/suricata/suricata.yaml -q 0

4 Enable ARP proxying

echo 1 > /proc/sys/net/ipv4/conf/eth1/proxy_arp
echo 1 > /proc/sys/net/ipv4/conf/eth2/proxy_arp

5.4.1 Network Configuration for Non-vIPS Guests

1 Turn on IP forwarding:

sysctl -w net.ipv4.ip_forward=1

2 In the source add the route to the sink

route add -net 192.168.200.0/24 dev eth1

3 At the sink add the route to the source

route add -net 192.168.100.0/24 dev eth1


6.0 Testing the Setup

This section describes how to bring up the VMs in a compute node, connect them to the virtual network(s), and verify functionality.

Note: Currently it is not possible to have more than one virtual network in a multi-compute-node setup, although it is possible to have more than one virtual network in a single-compute-node setup.

6.1 Preparation with OpenStack

6.1.1 Deploying Virtual Machines

6.1.1.1 Default Settings

OpenStack comes with the following default settings

• Tenant (Project): admin, demo

• Network:

- Private network (virtual network): 10.0.0.0/24

- Public network (external network): 172.24.4.0/24

• Image: cirros-0.3.1-x86_64

• Flavor: nano, micro, tiny, small, medium, large, xlarge

To deploy new instances (VMs) with different setups (such as a different VM image flavor or network) users must create their own See below for details of how to create them

To access the OpenStack dashboard, use a web browser (Firefox, Internet Explorer, or others) and the controller's IP address (management network). For example:

http://10.11.12.1

Login information is defined in the local.conf file. In the examples that follow, password is the password for both the admin and demo users.


6.1.1.2 Custom Settings

The following examples describe how to create a custom VM image, flavor, and aggregate/availability zone using OpenStack commands. The examples assume the IP address of the controller is 10.11.12.1.

1 Create a credential file admin-cred for admin user The file contains the following lines

export OS_USERNAME=admin
export OS_TENANT_NAME=admin
export OS_PASSWORD=password
export OS_AUTH_URL=http://10.11.12.1:35357/v2.0

2 Source admin-cred into the shell environment for the actions of creating the glance image, aggregate/availability zone, and flavor:

source admin-cred

3 Create an OpenStack glance image A VM image file should be ready in a location accessible by OpenStack

glance image-create --name <image-name-to-create> --is-public=true --container-format=bare --disk-format=<format> --file=<image-file-path-name>

The following example assumes the image file fedora20-x86_64-basic.qcow2 is located on an NFS share mounted at /mnt/nfs/openstack/images on the controller host. The following command creates a glance image named fedora-basic in qcow2 format for public use (that is, any tenant can use this glance image):

glance image-create --name fedora-basic --is-public=true --container-format=bare --disk-format=qcow2 --file=/mnt/nfs/openstack/images/fedora20-x86_64-basic.qcow2

4 Create host aggregate and availability zone

First find the available hypervisors, and then use that information to create the aggregate/availability zone:

nova hypervisor-list
nova aggregate-create <aggregate-name> <zone-name>
nova aggregate-add-host <aggregate-name> <hypervisor-name>

The following example creates an aggregate named aggr-g06 with one availability zone named zone-g06 and the aggregate contains one hypervisor named sdnlab-g06

nova aggregate-create aggr-g06 zone-g06
nova aggregate-add-host aggr-g06 sdnlab-g06

5 Create a flavor. A flavor is a virtual hardware configuration for the VMs; it defines the number of virtual CPUs and the size of virtual memory and disk space, among others.

The following command creates a flavor named onps-flavor with an ID of 1001, 1024 MB of virtual memory, 4 GB of virtual disk space, and 1 virtual CPU:

nova flavor-create onps-flavor 1001 1024 4 1


6.1.1.3 Example: VM Deployment

The following example describes how to use a custom VM image, flavor, and aggregate to launch a VM for the demo tenant using OpenStack commands. Again, the example assumes the IP address of the controller is 10.11.12.1.

1 Create a credential file demo-cred for a demo user The file contains the following lines

export OS_USERNAME=demo
export OS_TENANT_NAME=demo
export OS_PASSWORD=password
export OS_AUTH_URL=http://10.11.12.1:35357/v2.0

2 Source demo-cred to the shell environment for actions of creating tenant network and instance (VM)

source demo-cred

3 Create network for tenant demo Take the following steps

a Get tenant demo

keystone tenant-list | grep -Fw demo

The following creates a network with a name of net-demo for tenant with ID 10618268adb64f17b266fd8fb83c960d

neutron net-create --tenant-id 10618268adb64f17b266fd8fb83c960d net-demo

b Create subnet

neutron subnet-create --tenant-id <demo-tenant-id> --name <subnet_name> <network-name> <net-ip-range>

The following creates a subnet named sub-demo with CIDR address 192.168.2.0/24 for network net-demo:

neutron subnet-create --tenant-id 10618268adb64f17b266fd8fb83c960d --name sub-demo net-demo 192.168.2.0/24

4 Create instance (VM) for tenant demo Take the following steps

a Get the name and/or ID of the image, flavor, and availability zone to be used for creating the instance:

glance image-list
nova flavor-list
nova aggregate-list
neutron net-list

b Launch an instance (VM) using the information obtained in the previous step (a complete example command is shown after these steps):

nova boot --image <image-id> --flavor <flavor-id> --availability-zone <zone-name> --nic net-id=<network-id> <instance-name>

c The new VM should be up and running in a few minutes

5 Log into the OpenStack dashboard using the demo user credentials and click Instances under Project in the left pane; the new VM should show in the right pane. Click the instance name to open the Instance Details view, then click Console in the top menu to access the VM.
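As a concrete illustration of the launch step, the command below combines the example image, flavor, and availability zone created in Section 6.1.1.2 with the net-demo network created above; the instance name vm-demo1 and the network ID placeholder are hypothetical and should be adjusted to your environment:

nova boot --image fedora-basic --flavor onps-flavor --availability-zone zone-g06 --nic net-id=<net-demo-id> vm-demo1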


6.1.1.4 Local vIPS

Configuration

1 OpenStack brings up the VMs and connects them to the vSwitch

2 IP addresses of the VMs get configured using the DHCP server. VM1 belongs to one subnet and VM3 to a different one; VM2 has ports on both subnets.

3 Flows get programmed to the vSwitch by the OpenDaylight controller (Section 6.2).

Data Path (Numbers Matching Red Circles)

1 VM1 sends a flow to VM3 through the vSwitch

2 The vSwitch forwards the flow to the first vPort of VM2 (active IPS)

3 The IPS receives the flow inspects it and (if not malicious) sends it out through its second vPort

4 The vSwitch forwards it to VM3

Figure 6-1 Local vIPS


6.1.1.5 Remote vIPS

Configuration

1 OpenStack brings up the VMs and connects them to the vSwitch

2 The IP addresses of the VMs get configured using the DHCP server

Data Path (Numbers Matching Red Circles)

1 VM1 sends a flow to VM3 through the vSwitch inside compute node 1

2 The vSwitch forwards the flow out of the first 82599 port to the first 82599 port of compute node 2

3 The vSwitch of compute node 2 forwards the flow to the first port of the vHost where the traffic gets consumed by VM1

4 The IPS receives the flow inspects it and (provided it is not malicious) sends it out through its second port of the vHost into the vSwitch of compute node 2

5 The vSwitch forwards the flow out of the second 82599 port of compute node 2 into the second port of the 82599 in compute node 1

6 The vSwitch of compute node 1 forwards the flow into the port of the vHost of VM3 where the flow gets terminated

Figure 6-2 Remote vIPS


6.1.2 Non-Uniform Memory Access (NUMA) Placement and SR-IOV Pass-through for OpenStack

NUMA support was implemented as a new feature in the OpenStack Juno release. NUMA placement enables an OpenStack administrator to pin guests to particular NUMA nodes for optimization. With an SR-IOV enabled network interface card, each SR-IOV port is associated with a Virtual Function (VF). OpenStack SR-IOV pass-through enables a guest to access a VF directly.

6.1.2.1 Prepare Compute Node for SR-IOV Pass-through

To enable the previous features, follow these steps to configure the compute node:

1 The server hardware must support IOMMU (Intel VT-d). To check whether IOMMU is supported, run the following command; the output should show IOMMU entries:

dmesg | grep -e IOMMU

Note: IOMMU can be enabled/disabled through a BIOS setting under Advanced and then Processor.

2 Enable kernel IOMMU in grub. For Fedora 20, run the following commands:

sed -i 's/rhgb quiet/rhgb quiet intel_iommu=on/g' /etc/default/grub
grub2-mkconfig -o /boot/grub2/grub.cfg

3 Install necessary packages

yum install -y yajl-devel device-mapper-devel libpciaccess-devel libnl-devel dbus-devel numactl-devel python-devel

4 Install libvirt v1.2.8 or newer. The following example uses v1.2.9:

systemctl stop libvirtd

yum remove libvirt
yum remove libvirtd

wget http://libvirt.org/sources/libvirt-1.2.9.tar.gz
tar zxvf libvirt-1.2.9.tar.gz

cd libvirt-1.2.9
./autogen.sh --system --with-dbus
make
make install

systemctl start libvirtd

Make sure libvirtd is running v1.2.9:

libvirtd --version

5 Install libvirt-python. The example below uses v1.2.9 to match the libvirt version:

yum remove libvirt-python

wget https://pypi.python.org/packages/source/l/libvirt-python/libvirt-python-1.2.9.tar.gz
tar zxvf libvirt-python-1.2.9.tar.gz

cd libvirt-python-1.2.9
python setup.py install


6 Modify /etc/libvirt/qemu.conf to add

/dev/vfio/vfio

to

cgroup_device_acl list

An example follows

cgroup_device_acl = [
    "/dev/null", "/dev/full", "/dev/zero",
    "/dev/random", "/dev/urandom",
    "/dev/ptmx", "/dev/kvm", "/dev/kqemu",
    "/dev/rtc", "/dev/hpet", "/dev/net/tun",
    "/dev/vfio/vfio"
]

7 Enable SR-IOV virtual functions for an 82599 interface. The following example enables 2 VFs for interface p1p1:

echo 2 > /sys/class/net/p1p1/device/sriov_numvfs

To check that virtual functions are enabled

lspci -nn | grep 82599

The screen output should display the physical function and two virtual functions

6.1.2.2 DevStack Configurations

In the following text, the example uses a controller with IP address 10.11.12.1 and a compute node with IP address 10.11.12.4. The PCI device vendor ID (8086) and the product IDs of the 82599 (10fb for the physical function and 10ed for the VF) can be obtained from the output of:

lspci -nn | grep 82599

On Controller node

1 Edit the controller local.conf. Note that the same local.conf file of Section 5.2.1.3 is used here, adding the following:

Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch,sriovnicswitch

[[post-config|$NOVA_CONF]]
[DEFAULT]
scheduler_default_filters=RamFilter,ComputeFilter,AvailabilityZoneFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,PciPassthroughFilter,NUMATopologyFilter
pci_alias={"name":"niantic","product_id":"10ed","vendor_id":"8086"}

[[post-config|$Q_PLUGIN_CONF_FILE]]
[ml2_sriov]
supported_pci_vendor_devs = 8086:10fb 8086:10ed

2 Run stack.sh

On Compute node

1 Edit /opt/stack/nova/requirements.txt to add "libvirt-python>=1.2.8":

echo "libvirt-python>=1.2.8" >> /opt/stack/nova/requirements.txt

2 Edit the compute local.conf for accelerated OVS. Note that the same local.conf file of Section 5.3.1.1 is used here.


3 Add the following:

[[post-config|$NOVA_CONF]]
[DEFAULT]
pci_passthrough_whitelist={"address":"0000:08:00.0","vendor_id":"8086","physical_network":"physnet1"}
pci_passthrough_whitelist={"address":"0000:08:10.0","vendor_id":"8086","physical_network":"physnet1"}
pci_passthrough_whitelist={"address":"0000:08:10.2","vendor_id":"8086","physical_network":"physnet1"}

4 Remove (or comment out) the following. Note that currently SR-IOV pass-through is only supported with standard OVS:

OVS_NUM_HUGEPAGES=8192
OVS_DATAPATH_TYPE=netdev
OVS_GIT_TAG=b35839f3855e3b812709c6ad1c9278f498aa9935

Run stack.sh on both the controller and compute nodes to complete the DevStack installation.

6.1.2.3 Create VM with NUMA Placement and SR-IOV

1 After stacking is successful on both the controller and compute nodes, verify the PCI pass-through device(s) are in the OpenStack database:

mysql -uroot -ppassword -h 10.11.12.1 nova -e 'select * from pci_devices'

2 The output should show entry(ies) of PCI device(s) similar to the following

| 2014-11-18 194114 | NULL | NULL | 0 | 1 | 3 | 000008100 | 10ed | 8086 | type-VF | pci_0000_08_10_0 | label_8086_10ed | available | phys_function 000008000 | NULL | NULL | 0 |

3 Next, create a flavor, for example:

nova flavor-create numa-flavor 1001 1024 4 1

where

flavor name = numa-flavor
id = 1001
virtual memory = 1024 MB
virtual disk size = 4 GB
number of virtual CPUs = 1

4 Modify flavor for numa placement with PCI pass-through

nova flavor-key 1001 set pci_passthrough:alias=niantic:1 hw:numa_nodes=1 hw:numa_cpus.0=0 hw:numa_mem.0=1024

5 To show detailed information of the flavor

nova flavor-show 1001

6 Create a VM named numa-vm1 with the flavor numa-flavor under the default project demo. Note that the following example assumes an image fedora-basic and an availability zone zone-04 are already in place (see Section 6.1.1.2) and that private is the default network for the demo project.

nova boot --image <image-id> --flavor <flavor-id> --availability-zone <zone-name> --nic net-id=<network-id> numa-vm1

where numa-vm1 is the name of instance of the VM to be booted


Access the VM from OpenStack Horizon; the new VM shows two virtual network interfaces. The interface with an SR-IOV VF should show a name of ensX, where X is a number, for example ens5. If a DHCP server is available for the physical interface (p1p1 in this example), the VF gets an IP address automatically; otherwise, users can assign an IP address to the interface the same way as for a standard network interface.

To verify network connectivity through a VF, users can set up two compute hosts and create a VM on each node. After obtaining IP addresses, the VMs should communicate with each other as over a normal network.
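If no DHCP server is available, a static address can be assigned to the VF interface inside each VM and connectivity verified with ping; a minimal sketch, with example addresses only:

ip addr add 192.168.1.10/24 dev ens5
ip link set ens5 up
ping -c 3 192.168.1.11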

6.2 Using OpenDaylight

This section describes how to download, install, and set up an OpenDaylight controller.

6.2.1 Preparing the OpenDaylight Controller

1 Download the pre-built OpenDaylight Helium-SR1 distribution:

wget http://nexus.opendaylight.org/content/repositories/opendaylight.release/org/opendaylight/integration/distribution-karaf/0.2.1-Helium-SR1/distribution-karaf-0.2.1-Helium-SR1.tar.gz

2 Extract the archive and cd into it

tar xf distribution-karaf-0.2.1-Helium-SR1.tar.gz
cd distribution-karaf-0.2.1-Helium-SR1

3 Use the bin/karaf executable to start the Karaf shell


4 Install the required features (an example feature:install command is shown below)

Karaf might take a long time to start, or feature installation might fail, if the host does not have network access. You'll need to set up the appropriate proxy settings.
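The exact features to install depend on the intended use. As an example only (these feature names are the ones commonly used for OVSDB/OpenStack integration with Helium and are an assumption, not taken from this guide), features can be installed from the Karaf shell with:

feature:install odl-base-all odl-aaa-authn odl-restconf odl-nsf-all odl-adsal-northbound odl-mdsal-apidocs odl-ovsdb-openstack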

6.3 Border Network Gateway

This section describes how to install and run a Border Network Gateway (BNG) on a compute node that is prepared as described in Section 5.1 and Section 5.3. The example interface names from those sections are used here as well. For simplicity, the BNG uses the handle_none configuration mode, which makes it work as an L2 forwarding engine. The BNG is more complex than this; users interested in exploring more of its capabilities should read https://01.org/intel-data-plane-performance-demonstrators/quick-overview.

The setup to test the functionality of the vBNG follows


6.3.1 Installation and Configuration Inside the VM

1 Execute the following command:

yum -y update

2 Disable SELinux

setenforce 0
vi /etc/selinux/config

and change it so that SELINUX=disabled.

3 Disable the firewall

systemctl disable firewalld.service
reboot

4 Edit grub default configuration

vi /etc/default/grub

Add hugepages to it

... noirqbalance intel_idle.max_cstate=0 processor.max_cstate=0 ipv6.disable=1 default_hugepagesz=1G hugepagesz=1G hugepages=2 isolcpus=1,2,3,4

5 Rebuild grub config and reboot the system

grub2-mkconfig -o /boot/grub2/grub.cfg
reboot

6 Verify that hugepages are available in the VM

cat /proc/meminfo
HugePages_Total: 2
HugePages_Free: 2
Hugepagesize: 1048576 kB

7 Add the following to the end of the ~/.bashrc file:

---------------------------------------------
export RTE_SDK=/root/dpdk
export RTE_TARGET=x86_64-native-linuxapp-gcc
export OVS_DIR=/root/ovs

export RTE_UNBIND=$RTE_SDK/tools/dpdk_nic_bind.py
export DPDK_DIR=$RTE_SDK
export DPDK_BUILD=$DPDK_DIR/$RTE_TARGET
---------------------------------------------

8 Re-login or source that file

source ~/.bashrc

9 Install DPDK

git clone http://dpdk.org/git/dpdk
cd dpdk
git checkout v1.7.1
make install T=$RTE_TARGET
modprobe uio
insmod $RTE_SDK/$RTE_TARGET/kmod/igb_uio.ko


10 Check the PCI addresses of the 82599 cards

lspci | grep Network
00:04.0 Ethernet controller: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection (rev 01)
00:05.0 Ethernet controller: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection (rev 01)
00:06.0 Ethernet controller: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection (rev 01)
00:07.0 Ethernet controller: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection (rev 01)

11 Make sure that the correct PCI addresses are listed in the script bind_to_igb_uio.sh

12 Download BNG packages

wget https://01.org/sites/default/files/downloads/intel-data-plane-performance-demonstrators/dppd-bng-v013.zip

13 Extract DPPD BNG sources

unzip dppd-bng-v013.zip

14 Build BNG DPPD application

yum -y install ncurses-devel
cd dppd-BNG-v013
make

15 Refer to Section 6.3.3, "Extra Preparations on the Compute Node," before running the BNG application in the VM inside the compute node.

16 Make sure that the application starts

./build/dppd -f config/handle_none.cfg

The handle_none configuration passes all traffic through between ports, which is essentially similar to the L2 forwarding test. The config directory contains additional, more complex BNG configurations and Pktgen scripts. Additional BNG-specific workloads can be found in the dppd-BNG-v013/pktgen-scripts directory.

Following is a sample graphic of the BNG running in a VM with 2 ports


Exit the application by pressing ESC or CTRL-C

Refer to Section 6.3.2 regarding installing and running the software traffic generator.

For the sanity check test, users can use the pktgen wrapper script onps_pktgen-64bytes-UDP-2ports.sh to run Pktgen (on its dedicated server) in order to test the handle_none throughput for two physical and two virtual ports. You'll need to update the PKTGEN_DIR at the top of the file to point to the right directory, which is the following (referring to Section 6.3.2):

PKTGEN_DIR=/home/stack/git/Pktgen-DPDK

6.3.2 Installation and Configuration of the Back-to-Back Host (Packet Generator)

The back-to-back host can be any Intel® Xeon® processor-based system, or it can be any compute node that has been prepared using the instructions in Section 5.1 and Section 5.3. For simplicity, Intel assumes the latter is the case. Also assume that the git directory for the stack user is /home/stack/git.

1 In the git directory get the source from Github

git clone https://github.com/Pktgen/Pktgen-DPDK.git
cd Pktgen-DPDK

2 An extra package must be installed for Pktgen to compile correctly

yum -y install libpcap-devel

Pktgen comes with its own distribution of the DPDK sources. This bundled version of DPDK must be used; note that it contains some Wind River specific helper libraries that are not in the default DPDK distribution and that Pktgen depends on.

3 The $RTE_TARGET variable must be set to a specific value; otherwise these libraries will not build.

cd
vi .bashrc

Add the following three lines to the end

export RTE_SDK=$HOME/Pktgen-DPDK/dpdk
export RTE_TARGET=x86_64-pktgen-linuxapp-gcc
export PKTGEN_DIR=$HOME/Pktgen-DPDK

4 Re-login or execute the following command

source ~/.bashrc

5 Build the basic DPDK libraries and extra helpers

cd $RTE_SDK
make install T=$RTE_TARGET

6 Build Pktgen

cd examples/pktgen
make

7 Adapt the dpdk_nic_bind.py invocation to the actual NICs in use, so that both interfaces are bound to igb_uio and DPDK can use them. Check the current state with the command that follows; an example bind command is shown after it.

./tools/dpdk_nic_bind.py --status
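Once the target ports are identified in the --status output, they can be bound explicitly; the PCI addresses below are placeholders and must be replaced with the actual addresses of the two interfaces on the packet-generator host:

./tools/dpdk_nic_bind.py -b igb_uio 0000:04:00.0 0000:04:00.1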

8 Use onps_pktgen-64-bytes-UDP-2ports.sh from onps_server_1_2.tar.gz


9 Now run the script as root, after the compute node has been set up as in Section 6.3.3, the BNG VM has been prepared as in Section 6.3.1, and the BNG has been started inside the VM.

6.3.3 Extra Preparations on the Compute Node

1 Do the following as the stack user:

cd /home/stack/devstack
vi local.conf

2 Comment out the following

PHYSICAL_NETWORK=physnet1
OVS_PHYSICAL_BRIDGE=br-p1p1

And at the same time add the following line right below the previous commented ones

OVS_BRIDGE_MAPPINGS=default:br-p1p1,physnet1:br-p1p2

3 Run again as stack user

./unstack.sh
./stack.sh

This causes both physical interfaces to come up and get bound to DPDK. Also, a bridge is created on top of each of these interfaces:

ovs-vsctl show
b52bd3ed-0f6c-45b9-ace1-846d901bed64
    Bridge br-p1p1
        Port br-p1p1
            Interface br-p1p1
                type: internal
        Port p1p1
            Interface p1p1
                type: dpdkphy
                options: {port=0}
        Port phy-br-p1p1
            Interface phy-br-p1p1
                type: patch
                options: {peer=int-br-p1p1}
    Bridge br-int
        fail_mode: secure
        Port int-br-p1p2
            Interface int-br-p1p2
                type: patch
                options: {peer=phy-br-p1p2}
        Port int-br-p1p1
            Interface int-br-p1p1
                type: patch
                options: {peer=phy-br-p1p1}
        Port br-int
            Interface br-int
                type: internal
    Bridge br-p1p2
        Port phy-br-p1p2
            Interface phy-br-p1p2
                type: patch
                options: {peer=int-br-p1p2}
        Port p1p2
            Interface p1p2
                type: dpdkphy
                options: {port=1}
        Port br-p1p2
            Interface br-p1p2
                type: internal

4 Move the p1p2 physical port under the same bridge as p1p1

ovs-vsctl del-port p1p2
ovs-vsctl add-port br-p1p1 p1p2 -- set Interface p1p2 type=dpdkphy option:port=1

5 Stop the OpenStack agent:

./rejoin-stack.sh
ctrl-a 1
ctrl-c
ctrl-a d

6 Add the dpdkvhost interfaces for the VM

ovs-vsctl --no-wait add-port br-p1p1 port3 -- set Interface port3 type=dpdkvhost ofport_request=3
ovs-vsctl --no-wait add-port br-p1p1 port4 -- set Interface port4 type=dpdkvhost ofport_request=4

7 Find the OpenFlow port numbers of the interfaces:

ovs-ofctl show br-p1p1

The output should be similar to the following. Note the number to the left of each interface name, because it is the OpenFlow port number used in the flow rules below.

OFPT_FEATURES_REPLY (xid=0x2): dpid:0000286031010000
n_tables:254, n_buffers:256
capabilities: FLOW_STATS TABLE_STATS QUEUE_STATS ARP_MATCH_IP
actions: OUTPUT SET_VLAN_VID SET_VLAN_PCP STRIP_VLAN SET_DL_SRC SET_DL_DST SET_NW_SRC SET_NW_DST SET_TP_SRC SET_TP_DST
 1(phy-br-p1p1): addr:9e:ae:92:25:3c:c1  config: 0  state: 0  speed: 0 Mbps now, 0 Mbps max
 2(p1p2): addr:9e:ae:92:25:3c:c1  config: 0  state: 0  speed: 0 Mbps now, 0 Mbps max
 3(port3): addr:9e:ae:92:25:3c:c1  config: 0  state: 0  speed: 0 Mbps now, 0 Mbps max
 4(port4): addr:49:04:ff:7f:00:00  config: 0  state: 0  speed: 0 Mbps now, 0 Mbps max
 16(p1p1): addr:49:04:ff:7f:00:00  config: 0  state: 0  speed: 0 Mbps now, 0 Mbps max
 LOCAL(br-p1p1): addr:9e:ae:92:25:3c:c1  config: 0  state: 0  speed: 0 Mbps now, 0 Mbps max
OFPT_GET_CONFIG_REPLY (xid=0x4): frags=normal miss_send_len=0

8 Clean up the flow table of the bridge

ovs-ofctl del-flows br-p1p1

9 Program the flows so each physical interface forwards the packets to a dpdkvhost interface and the other way round

ovs-ofctl add-flow br-p1p1 in_port=16,dl_type=0x0800,idle_timeout=0,action=output:3
ovs-ofctl add-flow br-p1p1 in_port=3,dl_type=0x0800,idle_timeout=0,action=output:16
ovs-ofctl add-flow br-p1p1 in_port=4,dl_type=0x0800,idle_timeout=0,action=output:2
ovs-ofctl add-flow br-p1p1 in_port=2,dl_type=0x0800,idle_timeout=0,action=output:4


10 Users can now spawn their vBNG

qemu-kvm -cpu host -enable-kvm -m 4096 -smp 4,cores=4,threads=1,sockets=1 -name VM1 \
  -hda <path to the VM image file> -mem-path /dev/hugepages -mem-prealloc -vnc :2 -daemonize \
  -net nic,model=virtio,macaddr=00:1e:77:68:09:fd \
  -net tap,ifname=tap1,script=no,downscript=no \
  -netdev type=tap,id=net1,script=no,downscript=no,ifname=port3,vhost=on \
  -device virtio-net-pci,netdev=net1,mac=00:00:01:00:00:01,csum=off,gso=off,guest_tso4=off,guest_tso6=off,guest_ecn=off \
  -netdev type=tap,id=net2,script=no,downscript=no,ifname=port4,vhost=on \
  -device virtio-net-pci,netdev=net2,mac=00:00:01:00:00:02,csum=off,gso=off,guest_tso4=off,guest_tso6=off,guest_ecn=off


Appendix A Additional OpenDaylight Information

This section describes how OpenDaylight can be used in a multi-node setup. Two hosts are used: one running OpenDaylight, the OpenStack controller + compute, and OVS; the second host is a compute node. This section describes how to create a VXLAN tunnel, create VMs, and ping from one VM to another.

Note: Due to a known defect in ODL (https://bugs.opendaylight.org/show_bug.cgi?id=2469), the multi-node setup could not be verified.

Following is a sample local.conf for the OpenDaylight host:

[[local|localrc]]
FORCE=yes

HOST_NAME=<name of this machine>
HOST_IP=<ip of this machine>
HOST_IP_IFACE=<mgmt ip isolated from internet>

PUBLIC_INTERFACE=<isolated IP could be same as HOST_IP_IFACE>
VLAN_INTERFACE=
FLAT_INTERFACE=

ADMIN_PASSWORD=password
MYSQL_PASSWORD=password
DATABASE_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=no-token-password
HORIZON_PASSWORD=password
RABBIT_PASSWORD=password

disable_service n-net
disable_service n-cpu

enable_service q-svc
enable_service q-agt
enable_service q-dhcp
enable_service q-l3
enable_service q-meta
enable_service neutron
enable_service horizon

LOGFILE=$DEST/stack.sh.log
SCREEN_LOGDIR=$DEST/screen
SYSLOG=True
LOGDAYS=1

# ODL start
Q_HOST=$HOST_IP
enable_service odl-server
enable_service odl-compute
ODL_MGR_IP=10.11.10.7
ENABLED_SERVICES+=,n-api,n-crt,n-obj,n-cpu,n-cond,n-sch,n-novnc,n-cauth,n-cauth,nova
ENABLED_SERVICES+=,cinder,c-api,c-vol,c-sch,c-bak

Q_PLUGIN=ml2

Q_AGENT=openvswitch
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch,opendaylight
Q_ML2_PLUGIN_TYPE_DRIVERS=vlan,flat,local
Q_ML2_TENANT_NETWORK_TYPE=vxlan

ENABLE_TENANT_TUNNELS=True
ENABLE_TENANT_VLANS=True

PHYSICAL_NETWORK=physnet1
ML2_VLAN_RANGES=physnet1:1000:1010
OVS_PHYSICAL_BRIDGE=br-

MULTI_HOST=True

[[post-config|$NOVA_CONF]]

# disable nova security groups
[DEFAULT]
firewall_driver=nova.virt.firewall.NoopFirewallDriver
novncproxy_host=0.0.0.0
novncproxy_port=6080

Here is a sample local.conf for the compute node:

[[local|localrc]]

FORCE=yes
MULTI_HOST=True

HOST_NAME=<name of this machine>
HOST_IP=<ip of this machine>
HOST_IP_IFACE=<isolated interface>
SERVICE_HOST_NAME=<name of the controller machine>
SERVICE_HOST=<ip of controller machine>
Q_HOST=$SERVICE_HOST

MYSQL_HOST=$SERVICE_HOST
RABBIT_HOST=$SERVICE_HOST

GLANCE_HOST=$SERVICE_HOST
GLANCE_HOSTPORT=$SERVICE_HOST:9292
KEYSTONE_AUTH_HOST=$SERVICE_HOST
KEYSTONE_SERVICE_HOST=<ip of controller machine>

ADMIN_PASSWORD=password
MYSQL_PASSWORD=password
DATABASE_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=no-token-password
HORIZON_PASSWORD=password
RABBIT_PASSWORD=password

disable_all_services

enable_service rabbit
enable_service n-cpu
enable_service q-agt
enable_service odl-compute

DEST=/opt/stack
LOGFILE=$DEST/stack.sh.log
SCREEN_LOGDIR=$DEST/screen
SYSLOG=True
LOGDAYS=1

ODL_MGR_IP=<ip of controller machine>

Q_PLUGIN=ml2
Q_AGENT=openvswitch
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch,opendaylight
Q_ML2_PLUGIN_TYPE_DRIVERS=vxlan
OVS_NUM_HUGEPAGES=8192
OVS_DATAPATH_TYPE=netdev

OVDK_OVS_GIT_TAG=

ENABLE_TENANT_TUNNELS=True
ENABLE_TENANT_VLANS=True

Q_ML2_TENANT_NETWORK_TYPE=vxlan
ML2_VLAN_RANGES=physnet1:1000:1010
PHYSICAL_NETWORK=physnet1
OVS_PHYSICAL_BRIDGE=br-p1p1

[[post-config|$NOVA_CONF]]
[DEFAULT]
firewall_driver=nova.virt.firewall.NoopFirewallDriver
vnc_enabled=True
vncserver_listen=0.0.0.0
vncserver_proxyclient_address=$HOST_IP

A.1 Create VMs Using the DevStack Horizon GUI

After starting the OpenDaylight controller as described in Section 6.2, run stack.sh on the controller and compute nodes.

Log in to http://<control node ip address>:8080 to start the Horizon GUI.

Verify that the node shows up in the following GUI

Create a new VXLAN network:

1 Click on the Networks tab

2 Click on the Create Network button

3 Enter the Network name then click Next


4 Enter the subnet information then click Next


5 Add additional information then click Next

6 Click the Create button

7 Create a VM instance by clicking the Launch Instances button


8 Click on the Details tab to enter VM details


9 Click on the Networking tab then enter network information

VMs will now be created.

Instead of disabling the OVSDB neutron service by removing the OVSDB neutron bundle file, it is possible to disable the bundle from the OSGi console. However, there does not appear to be a way to make this persistent, so it must be done each time the controller restarts.


Once the controller is up and running, connect to the OSGi console. The ss command displays all of the bundles that are installed and their status; adding a string filters the list of bundles. List the OVSDB bundles:

osgi> ss ovs
Framework is launched.

id    State       Bundle
106   ACTIVE      org.opendaylight.ovsdb.northbound_0.5.0
112   ACTIVE      org.opendaylight.ovsdb_0.5.0
262   ACTIVE      org.opendaylight.ovsdb.neutron_0.5.0

Note that there are three OVSDB bundles running (the OVSDB neutron bundle has not been removed in this case)

Disable the OVSDB neutron bundle and then list the OVSDB bundles again

osgi> stop 262
osgi> ss ovs
Framework is launched.

id    State       Bundle
106   ACTIVE      org.opendaylight.ovsdb.northbound_0.5.0
112   ACTIVE      org.opendaylight.ovsdb_0.5.0
262   RESOLVED    org.opendaylight.ovsdb.neutron_0.5.0

Now the OVSDB neutron bundle is in the RESOLVED state which means that it is not active


Appendix B BNG as an Appliance

Please download the latest BNG application from https://01.org/intel-data-plane-performance-demonstrators/downloads. More details about how the BNG works can be found at https://01.org/intel-data-plane-performance-demonstrators/quick-overview.


Appendix C Glossary

Acronym Description

ATR Application Targeted Routing

COTS Commercial Off‐The-Shelf

DPI Deep Packet Inspection

FCS Frame Check Sequence

GRE Generic Routing Encapsulation

GRO Generic Receive Offload

IOMMU InputOutput Memory Management Unit

Kpps Kilo packets per second

KVM Kernel-based Virtual Machine

LRO Large Receive Offload

MSI Message Signaling Interrupt

MPLS Multi-protocol Label Switching

Mpps Millions of packets per second

NIC Network Interface Card

pps Packets per second

QAT Quick Assist Technology

QinQ VLAN stacking (802.1ad)

RA Reference Architecture

RSC Receive Side Coalescing

RSS Receive Side Scaling

SP Service Provider

SR-IOV Single root IO Virtualization

TCO Total Cost of Ownership

TSO TCP Segmentation Offload


Appendix D References

Document Name Source

Internet Protocol version 4: http://www.ietf.org/rfc/rfc791.txt

Internet Protocol version 6: http://www.faqs.org/rfc/rfc2460.txt

Intel® 82599 10 Gigabit Ethernet Controller Datasheet: http://www.intel.com/content/www/us/en/ethernet-controllers/82599-10-gbe-controller-datasheet.html

Intel DDIO: https://www-ssl.intel.com/content/www/us/en/io/direct-data-i-o.html

Bandwidth Sharing Fairness: http://www.intel.com/content/www/us/en/network-adapters/10-gigabit-network-adapters/10-gbe-ethernet-flexible-port-partitioning-brief.html

Design Considerations for Efficient Network Applications with Intel® Multi-core Processor-based Systems on Linux: http://download.intel.com/design/intarch/papers/324176.pdf

OpenFlow with Intel 82599: http://ftp.sunet.se/pub/Linux/distributions/bifrost/seminars/workshop-2011-03-31/Openflow_1103031.pdf

Wu, W., DeMar, P., & Crawford, M. (2012). A Transport-Friendly NIC for Multicore/Multiprocessor Systems. IEEE Transactions on Parallel and Distributed Systems, vol. 23, no. 4, April 2012. http://lss.fnal.gov/archive/2010/pub/fermilab-pub-10-327-cd.pdf

Why Does Flow Director Cause Packet Reordering?: http://arxiv.org/ftp/arxiv/papers/1106/1106.0443.pdf

IA packet processing: http://www.intel.com/p/en_US/embedded/hwsw/technology/packet-processing

High Performance Packet Processing on Cloud Platforms using Linux with Intel® Architecture: http://networkbuilders.intel.com/docs/network_builders_RA_packet_processing.pdf

Packet Processing Performance of Virtualized Platforms with Linux and Intel® Architecture: http://networkbuilders.intel.com/docs/network_builders_RA_NFV.pdf

DPDK: http://www.intel.com/go/dpdk

Intel® DPDK Accelerated vSwitch: https://01.org/packet-processing


LEGAL

By using this document in addition to any agreements you have with Intel you accept the terms set forth below

You may not use or facilitate the use of this document in connection with any infringement or other legal analysis concerning Intel products described herein You agree to grant Intel a non-exclusive royalty-free license to any patent claim thereafter drafted which includes subject matter disclosed herein

INFORMATION IN THIS DOCUMENT IS PROVIDED IN CONNECTION WITH INTEL PRODUCTS NO LICENSE EXPRESS OR IMPLIED BY ESTOPPEL OR OTHERWISE TO ANY INTELLECTUAL PROPERTY RIGHTS IS GRANTED BY THIS DOCUMENT EXCEPT AS PROVIDED IN INTELS TERMS AND CONDITIONS OF SALE FOR SUCH PRODUCTS INTEL ASSUMES NO LIABILITY WHATSOEVER AND INTEL DISCLAIMS ANY EXPRESS OR IMPLIED WARRANTY RELATING TO SALE ANDOR USE OF INTEL PRODUCTS INCLUDING LIABILITY OR WARRANTIES RELATING TO FITNESS FOR A PARTICULAR PURPOSE MERCHANTABILITY OR INFRINGEMENT OF ANY PATENT COPYRIGHT OR OTHER INTELLECTUAL PROPERTY RIGHT

Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors

Performance tests such as SYSmark and MobileMark are measured using specific computer systems components software operations and functions Any change to any of those factors may cause the results to vary You should consult other information and performance tests to assist you in fully evaluating your contemplated purchases including the performance of that product when combined with other products

The products described in this document may contain design defects or errors known as errata which may cause the product to deviate from published specifications Current characterized errata are available on request Contact your local Intel sales office or your distributor to obtain the latest specifications and before placing your product order

Intel technologies may require enabled hardware specific software or services activation Check with your system manufacturer or retailer Tests document performance of components on a particular test in specific systems Differences in hardware software or configuration will affect actual performance Consult other sources of information to evaluate performance as you consider your purchase For more complete information about performance and benchmark results visit httpwwwintelcomperformance

All products computer systems dates and figures specified are preliminary based on current expectations and are subject to change without notice Results have been estimated or simulated using internal Intel analysis or architecture simulation or modeling and provided to you for informational purposes Any differences in your system hardware software or configuration may affect your actual performance

No computer system can be absolutely secure Intel does not assume any liability for lost or stolen data or systems or any damages resulting from such losses

Intel does not control or audit third-party web sites referenced in this document You should visit the referenced web site and confirm whether referenced data are accurate

Intel Corporation may have patents or pending patent applications trademarks copyrights or other intellectual property rights that relate to the presented subject matter The furnishing of documents and other materials and information does not provide any license express or implied by estoppel or otherwise to any such patents trademarks copyrights or other intellectual property rights

copy 2014 Intel Corporation All rights reserved Intel the Intel logo Core Xeon and others are trademarks of Intel Corporation in the US andor other countries Other names and brands may be claimed as the property of others


10 Audience and Purpose

The primary audiences for this document are architects and engineers implementing the Intelreg Open Network Platform Server Reference Architecture using Open Source software Software ingredients include

bull DevStack

bull OpenStack

bull OpenDaylight

bull Data Plane Development Kit (DPDK)

bull Intelreg DPDK Accelerated vSwitch

bull Open vSwitch

bull Fedora 20

This document provides a guide for integration and performance characterization using the Intelreg Open Network Platform Server (Intel ONP Server) Content includes high-level architecture setup and configuration procedures integration learnings and a set of baseline performance data This information is intended to help architects and engineers evaluate Network Function Virtualization (NFV) and Software Defined Network (SDN) solutions

An understanding of system performance is required to develop solutions that meet the demanding requirements of the telecom industry and transform telecom networks Workload examples are described and are useful for evaluating other NFV workloads

Ingredient versions integration procedures configuration parameters and test methodologies all influence performance The performance data provided here does not represent best possible performance but rather provides a baseline of what is possible using ldquoout-of-boxrdquo open source software ingredients

The purpose of documenting configurations is not to imply any preferred methods However providing a baseline configuration of well tested procedures can help to achieve optimal system performance when developing an NFVSDN solution


20 Summary

The Intel ONP Server uses Open Source software to help accelerate SDN and NFV commercialization with the latest Intel Architecture Communications Platform

This document describes how to setup and configure controller and compute nodes for evaluating and developing NFVSDN solutions using the Intelreg Open Network Platform ingredients

Platform hardware is based on a Intelreg Xeonreg DP Server with the following

bull Intelreg Xeonreg Processor Series E5-2697 V3

bull Intelreg 82599 10 GbE Controller

The host operating system is Fedora 20 with Qemu-kvm virtualization technology Software ingredients include Data Plane Development Kit (DPDK) Open vSwitch Intelreg DPDK Accelerated vSwitch OpenStack and OpenDaylight

Figure 2-1 Intel ONP Server - Hardware and Software Ingredients


Figure 2-2 shows a generic SDNNFV setup In this configuration Orchestrator and Controller (management and control plane) and compute node (data plane) run on different server nodes Note that many variations of this setup can be deployed

The test cases described in this document were designed to illustrate certain baseline performance and functionality using the specified ingredients configurations and specific test methodology A simple network topology was used as shown in Figure 2-2

Test cases are designed to

bull Baseline packet processing (such as data plane) performance with host and VM configurations

bull Verify communication between controller and compute nodes

bull Validate basic controller functionality

Figure 2-2 Generic Setup with Controller and Two Compute Nodes


21 Network Services Examples

The following examples of network services are included as use cases that have been tested with the Intel® Open Network Platform Server Reference Architecture.

211 Suricata (Next Generation IDS/IPS engine)

Suricata is a high-performance Network IDS, IPS, and Network Security Monitoring engine developed by the OISF, its supporting vendors, and the community.

http://suricata-ids.org

212 vBNG (Broadband Network Gateway)

Intel Data Plane Performance Demonstrators - Border Network Gateway (BNG) using DPDK.

https://01.org/intel-data-plane-performance-demonstrators/downloads/bng-application-v013

A Broadband (or Border) Network Gateway may also be known as a Broadband Remote Access Server (BRAS) and routes traffic to and from broadband remote access devices such as digital subscriber line access multiplexers (DSLAM) This network function is included as an example of a workload that can be virtualized on the Intel ONP Server

Additional information on the performance characterization of this vBNG implementation can be found at

http://networkbuilders.intel.com/docs/Network_Builders_RA_vBRAS_Final.pdf

Refer to Border Network Gateway for information on setting up and testing the vBNG application with Intelreg DPDK Accelerated vSwitch or to Appendix B for more information on running the BNG as an appliance


30 Hardware Components

Table 3-1 Hardware Ingredients (Grizzly Pass)

Item Description Notes

Platform Intelreg Server Board 2U 8x35 SATA 2x750W 2xHS Rails Intel R2308GZ4GC

Grizzly Pass Xeon DP Server (2 CPU sockets) 240GB SSD 25in SATA 6Gbs Intel Wolfsville SSDSC2BB240G401 DC S3500 Series

Processors Intelreg Xeonreg Processor Series E5-2680 v2 LGA2011 28GHz 25MB 115W 10 cores

Ivy Bridge Socket-R (EP) 10 Core 28GHz 115W 25M per core LLC 80 GTs QPI DDR3-1867 HT turboLong product availability

Cores 10 physical coresCPU 20 Hyper-threaded cores per CPU for 40 total cores

Memory 8 GB 1600 Reg ECC 15 V DDR3 Kingston KVR16R11S48I Romley

64 GB RAM (8x 8 GB)

NICs (82599) 2x Intelreg 82599 10 GbE Controller (Niantic) NICs are on socket zero (3 PCIe slots available on socket 0)

BIOS SE5C60086B02010002082220131453Release Date 08222013BIOS Revision 46

Intelreg Virtualization Technology for Directed IO (Intelreg VT-d)Hyper-threading enabled

Table 3-2 Hardware Ingredients (Wildcat Pass)

Item Description Notes

Platform Intelreg Server Board S2600WTT 1100W power supply Wildcat Pass Xeon DP Server (2 CPU sockets) 120 GB SSD 25in SATA 6GBs Intel Wolfsville SSDSC2BB120G4

Processors Intelreg Xeonreg Processor Series E5-2697 v3 26GHz 25MB 145W 14 cores

Haswell 14 Core 26GHz 145W 35M total cache per processor 96 GTs QPI DDR4-160018662133

Cores 14 physical coresCPU 28 Hyper-threaded cores per CPU for 56 total cores

Memory 8 GB DDR4 RDIMM Crucial CT8G4RFS423 64 GB RAM (8x 8 GB)

NICs (82599) 2x Intelreg 82599 10 GbE Controller (Niantic) NICs are on socket zero

BIOS GRNDSDP186B0038R011409040644 Release Date 09042014

Intelreg Virtualization Technology for Directed IO (Intelreg VT-d) enabled only for SR-IOV PCI pass-through testsHyper-threading enabled but disabled for benchmark testing


40 Software Versions

Table 4-1 Software Versions

Software Component Function VersionConfiguration

Fedora 20 x86_64 Host OS 3156-200fc20x86_64

Qemu‐kvm Virtualization technology Modified QEMU 162 (bundled with Intelreg DPDK Accelerated vSwitch)

Data Plane Development Kit (DPDK)

Network Stack bypass and libraries for packet processing Includes user space poll mode drivers

171

Intelreg DPDK Accelerated vSwitch

vSwitch v120commit id 6210bb0a6139b20283de115f87aa7a381b04670f

Open vSwitch vSwitch Open vSwitch V 23Commit id b35839f3855e3b812709c6ad1c9278f498aa9935

OpenStack SDN Orchestrator Juno Release + Intel patches (openstack_ovdkl02-907zip)

DevStack Tool for Open Stack deployment

httpsgithubcomopenstack-devdevstackgit Commit id d6f700db33aeab68916156a98971aef8cfa53a2e

OpenDaylight SDN Controller HeliumSR1

Suricata IPS application Suricata v204 (current Fedora 20 package)

BNG DPPD Broadband Network Gateway DPDK Performance Demonstrator Application

DPPD v013https01orgintel-data-plane-performance-demonstratorsdownloads

PktGen Software Network Package Generator v277


41 Obtaining Software Ingredients

Table 4-2 Software Ingredients

Software Component

Software Sub-components Patches Location Comments

Fedora 20 httpdownloadfedoraprojectorgpubfedoralinuxreleases20Fedorax86_64isoFedora-20-x86_64-DVDiso

Standard Fedora 20 iso image

Data Plane Development Kit (DPDK)

DPDK poll mode driver sample apps (bundled)

httpdpdkorggitdpdkCommit id 99213f3827bad956d74e2259d06844012ba287a4

All sub-components in one zip file

Intelreg DPDK Accelerated vSwitch (OVDK)

dpdk-ovs qemu ovs-db vswitchd ovs_client (bundled)

httpsgithubcom01orgdpdk-ovsgitCommit id 6210bb0a6139b20283de115f87aa7a381b04670f

v120

Open vSwitch httpsgithubcomopenvswitchovsgitCommit id b35839f3855e3b812709c6ad1c9278f498aa9935

OpenStack Juno release To be deployed using DevStack(see following row)

Three patches downloaded as one tarball Then follow the instructions to deploy the Nodes

DevStack Patches for DevStack and Nova

httpsgithubcomopenstack-devdevstackgitCommit id d6f700db33aeab68916156a98971aef8cfa53a2eThen apply to that commit the patches inhttpsdownload01orgpacket-processingONPS12openstack_ovdkl02-907zip

Two patches downloaded as one tarball Then follow the instructions to deploy

OpenDaylight httpnexusopendaylightorgcontentrepositoriesopendaylightreleaseorgopendaylightintegrationdistribution-karaf021-Helium-SR1distribution-karaf-021-Helium-SR1targz

Intelreg ONP Server Release 12 Script

Helper scripts to setup SRT 12 using DevStack

httpsdownload01orgpacket-processingONPS12onps_server_1_2targz

BNG DPPD Broadband Network Gateway DPDK Performance

https01orgintel-data-plane- performance-demonstratorsdppd-bng- v013zip

PktGen Software Network Package Generator

httpsgithubcomPktgenPktgen-DPDKgitcommit id 5e8633c99e9771467dc26b64a4ff232c7e9fba2a

BNG Helper scripts

Intelreg ONP for Server Configuration Scripts for vBNG

https01orgsitesdefaultfilespagevbng-scriptstgz

Suricata Package from Fedora 20 yum install suricata


50 Installation and Configuration Guide

This section describes the installation and configuration instructions to prepare the controller and compute nodes

51 Instructions Common to Compute and Controller Nodes

This section describes how to prepare both the controller and compute nodes with the right BIOS settings and operating system installation The preferred operating system is Fedora 20 although it is considered relatively easy to use this solutions guide for other Linux distributions

511 BIOS Settings

Table 5-1 BIOS Settings

Configuration Setting forController Node

Setting forCompute Node

Enhanced Intel SpeedStep Enabled Disabled

Processor C3 Disabled Disabled

Processor C6 Disabled Disabled

Intelreg Virtualization Technology for Directed IO (Intelreg Vt-d) Disabled Enabled(OpenStack Numa Placement only)

Intel Hyper-Threading Technology (HTT) Enabled Disabled

MLC Streamer Enabled Enabled

MLC Spatial Prefetcher Enabled Enabled

DCU Instruction Prefetcher Enabled Enabled

Direct Cache Access (DCA) Enabled Enabled

CPU Power and Performance Policy Performance Performance

Intel Turbo boost Enabled Off

Memory RAS and Performance Configuration -gt Numa Optimized Enabled Enabled


512 Operating System Installation and Configuration

Following are some generic instructions for installing and configuring the operating system. Other ways of installing the operating system, such as network installation, PXE boot installation, or USB key installation, are not described in this solutions guide.

5121 Getting the Fedora 20 DVD

1 Download the 64-bit Fedora 20 DVD (not Fedora 20 Live Media) from the following site

http://fedoraproject.org/en/get-fedora#formats

or from the direct URL

http://download.fedoraproject.org/pub/fedora/linux/releases/20/Fedora/x86_64/iso/Fedora-20-x86_64-DVD.iso

2 Burn the ISO file to DVD and create an installation disk

5122 Fedora 20 Installation

Use the DVD to install Fedora 20 During the installation click Software selection then choose the following

1 C Development Tool and Libraries

2 Development Tools

Also create a user stack and check the box Make this user administrator during the installation The user stack is used in OpenStack installation

Note Please make sure to download and use the onps_server_1_2.tar.gz tarball. Start with the README file: it explains how to use Intel's scripts to automate most of the installation steps described in this section, which saves you time. When using Intel's scripts, you can jump to Section 54 after installing OpenDaylight based on the instructions described in Section 621.

5123 Additional Packages Installation and Upgrade

Some packages are not installed with the standard Fedora 20 installation but are required by Intelreg Open Network Platform Software (ONPS) components These packages should be installed by the user

git ntp patch socat python-passlib libxslt-devel libffi-devel fuse-devel gluster python-cliff

ONPS supports Fedora kernel 3.15.6, which is newer than the native Fedora 20 kernel 3.11.10. To upgrade to 3.15.6, follow these steps:

1 Download kernel packages

wget https://kojipkgs.fedoraproject.org/packages/kernel/3.15.6/200.fc20/x86_64/kernel-3.15.6-200.fc20.x86_64.rpm

wget https://kojipkgs.fedoraproject.org/packages/kernel/3.15.6/200.fc20/x86_64/kernel-devel-3.15.6-200.fc20.x86_64.rpm

wget https://kojipkgs.fedoraproject.org/packages/kernel/3.15.6/200.fc20/x86_64/kernel-modules-extra-3.15.6-200.fc20.x86_64.rpm


2 Install kernel packages

rpm -i kernel-3.15.6-200.fc20.x86_64.rpm
rpm -i kernel-devel-3.15.6-200.fc20.x86_64.rpm
rpm -i kernel-modules-extra-3.15.6-200.fc20.x86_64.rpm

3 Reboot the system to allow booting into the 3.15.6 kernel
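After the reboot, it is worth confirming that the node actually came up on the new kernel; the expected version string below is derived from the kernel packages installed above:

uname -r
3.15.6-200.fc20.x86_64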

Note ONPS depends on libraries provided by your Linux distribution As such it is recommended that you regularly update your Linux distribution with the latest bug fixes and security patches to reduce the risk of security vulnerabilities in your systems

After installing the required packages the operating system should be updated with the following command

yum update -y

This command upgrades to the latest kernel that Fedora supports. In order to keep the kernel at version 3.15.6, modify the yum configuration file with the following command before running yum update:

echo "exclude=kernel" >> /etc/yum.conf

After the update completes the system needs to be rebooted

5124 Disable and Enable Services

For OpenStack the following services were disabled selinux firewall and NetworkManager Run the following commands

sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
systemctl disable firewalld.service
systemctl disable NetworkManager.service

The following services should be enabled ntp sshd and network Run the following commands

systemctl enable ntpd.service
systemctl enable ntpdate.service
systemctl enable sshd.service
chkconfig network on

It is important to keep the timing synchronized between all nodes, and it is also necessary to use a known NTP server for all nodes. Users can edit /etc/ntp.conf to add a new server and remove the default servers. The following example replaces a default NTP server with a local NTP server 10.0.0.12 and comments out the other default servers:

sed -i 's/server 0.fedora.pool.ntp.org iburst/server 10.0.0.12/g' /etc/ntp.conf
sed -i 's/server 1.fedora.pool.ntp.org iburst/#server 1.fedora.pool.ntp.org iburst/g' /etc/ntp.conf
sed -i 's/server 2.fedora.pool.ntp.org iburst/#server 2.fedora.pool.ntp.org iburst/g' /etc/ntp.conf
sed -i 's/server 3.fedora.pool.ntp.org iburst/#server 3.fedora.pool.ntp.org iburst/g' /etc/ntp.conf
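After editing /etc/ntp.conf, a quick sanity check (not part of the original procedure) is to restart the service and confirm that the node is synchronizing against the intended server:

systemctl restart ntpd.service
ntpq -p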


52 Controller Node Setup

This section describes the controller node setup. It is assumed that the user has successfully followed the operating system installation and configuration sections.

Note Make sure to download and use the onps_server_1_2.tar.gz tarball. Start with the README file: it explains how to use Intel's scripts to automate most of the installation steps described in this section, which saves you time.

521 OpenStack (Juno)

This section documents the features and limitations that are supported with the Intel® DPDK Accelerated vSwitch and OpenStack Juno.

5211 Network Requirements

General

At least two networks are required to build OpenStack infrastructure in a lab environment One network is used to connect all nodes for OpenStack management (management network) and the other one is a private network exclusively for an OpenStack internal connection (tenant network) between instances (or virtual machines)

One additional network is required for Internet connectivity as installing OpenStack requires pulling packages from various sourcesrepositories on the Internet

Some users might want to have Internet andor external connectivity for OpenStack instances (virtual machines) In this case an optional network can be used

The assumption is that the targeting OpenStack infrastructure contains multiple nodes one is controller node and one or more are compute node(s)

Network Configuration Example

The following is an example of how to configure networks for OpenStack infrastructure The example uses four network interfaces as follows

bull ens2f1 For Internet network - Used to pull all necessary packagespatches from repositories on the Internet configured to obtain a DHCP address

bull ens2f0 For Management network - Used to connect all nodes for OpenStack management configured to use network 10110016

bull p1p1 For Tenant network - Used for OpenStack internal connections for virtual machines configured with no IP address

bull p1p2 For Optional External network - Used for virtual machine Internetexternal connectivity configured with no IP address This interface is only in the Controller node if external network is configured For Compute node this interface is not needed

Note that among these interfaces interface for virtual network (in this example p1p1) must be an 82599 port because it is used for DPDK and Intelreg DPDK Accelerated vSwitch Also note that a static IP address should be used for interface of management network

In Fedora 20 the network configuration files are located at

/etc/sysconfig/network-scripts


To configure a network on the host system edit the following network configuration files

ifcfg-ens2f1:
DEVICE=ens2f1
TYPE=Ethernet
ONBOOT=yes
BOOTPROTO=dhcp

ifcfg-ens2f0:
DEVICE=ens2f0
TYPE=Ethernet
ONBOOT=yes
BOOTPROTO=static
IPADDR=10.11.12.11
NETMASK=255.255.0.0

ifcfg-p1p1:
DEVICE=p1p1
TYPE=Ethernet
ONBOOT=yes
BOOTPROTO=none

ifcfg-p1p2:
DEVICE=p1p2
TYPE=Ethernet
ONBOOT=yes
BOOTPROTO=none

Note Do not configure an IP address for p1p1 (the 10 Gb/s interface), otherwise DPDK does not work when binding the driver during the OpenStack Neutron installation.

Note 10.11.12.11 and 255.255.0.0 are the static IP address and netmask for the management network. It is necessary to have a static IP address on this subnet; the IP address 10.11.12.11 is just an example.

5212 Storage Requirements

By default, DevStack uses block storage (Cinder) with a volume group named stack-volumes. If not specified, stack-volumes is created with 10 GB of space from a local file system. Note that stack-volumes is the name of the volume group, not of a single volume.

The following example shows how to use spare local disks devsdb and devsdc to form stack-volumes on a controller node by running the following commands

pvcreate /dev/sdb
pvcreate /dev/sdc
vgcreate stack-volumes /dev/sdb /dev/sdc
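The resulting volume group can be verified before running DevStack, for example:

vgdisplay stack-volumes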

5213 OpenStack Installation Procedures

General

DevStack is used to deploy OpenStack in this example The following procedure uses an actual example of an installation performed in an Intel test lab consisting of one controller node (controller) and one compute node (compute)

Controller Node Installation Procedures

The following example uses a host for controller node installation with the following

bull Hostname sdnlab-k01

bull Internet network IP address Obtained from DHCP server


bull OpenStack Management IP address 1011121

bull Userpassword stackstack

Root User Actions

Login as su or root user and perform the following

1 Add stack user to sudoer list

echo 'stack ALL=(ALL) NOPASSWD: ALL' >> /etc/sudoers

2 Edit etclibvirtqemuconf add or modify with the following lines

cgroup_controllers = [ "cpu", "devices", "memory", "blkio", "cpuset", "cpuacct" ]

cgroup_device_acl = [
    "/dev/null", "/dev/full", "/dev/zero",
    "/dev/random", "/dev/urandom",
    "/dev/ptmx", "/dev/kvm", "/dev/kqemu",
    "/dev/rtc", "/dev/hpet", "/dev/net/tun",
    "/mnt/huge", "/dev/vhost-net"
]

hugetlbfs_mount = "/mnt/huge"

3 Restart the libvirt service and make sure libvirtd is active

systemctl restart libvirtd.service
systemctl status libvirtd.service

Stack User Actions

1 Login as a stack user

2 Configure the appropriate proxies (yum, http, https, and git) for package installation and make sure these proxies are functional. Note that on the controller node, localhost and its IP address should be included in the no_proxy setup (for example, export no_proxy=localhost,10.11.12.1)

3 Intelreg DPDK Accelerated vSwitch patches for OpenStack

The file openstack_ovdkl02-907.zip contains the necessary patches for OpenStack; they are currently not native to OpenStack. The file can be downloaded from

https://01.org/sites/default/files/page/openstack_ovdkl02-907.zip

Place the file in the /home/stack directory and unzip it. Three patch files, devstack.patch, nova.patch, and neutron.patch, will be present after unzipping.

cd /home/stack
wget https://01.org/sites/default/files/page/openstack_ovdkl02-907.zip
unzip openstack_ovdkl02-907.zip

4 Download DevStack source

git clone https://github.com/openstack-dev/devstack.git

5 Check out DevStack with Intelreg DPDK Accelerated vSwitch and patch

cd /home/stack/devstack
git checkout d6f700db33aeab68916156a98971aef8cfa53a2e
patch -p1 < /home/stack/devstack.patch


6 Download and patch Nova and Neutron

sudo mkdir /opt/stack
sudo chown stack:stack /opt/stack
cd /opt/stack
git clone https://github.com/openstack/nova.git
git clone https://github.com/openstack/neutron.git
cd /opt/stack/nova
git checkout b7738bfb6c2f271d047e8f20c0b74ef647367111
patch -p1 < /home/stack/nova.patch

7 Create localconf file in homestackdevstack

8 Pay attention to the following in the localconf file

a Use Rabbit for messaging services (Rabbit is on by default) In the past Fedora only supported QPID for OpenStack Now it only supports Rabbit

b Explicitly disable Nova compute service on the controller This is because by default Nova compute service is enabled

disable_service n-cpu

c To use Open vSwitch specify in configuration for ML2 plug-in

Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch

d Explicitly disable tenant tunneling and enable tenant VLAN This is because by default tunneling is used

ENABLE_TENANT_TUNNELS=False
ENABLE_TENANT_VLANS=True

e A sample localconf file for the controller node is as follows

Controller node:
[[local|localrc]]

FORCE=yes
ADMIN_PASSWORD=password
MYSQL_PASSWORD=password
DATABASE_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=no-token-password
HORIZON_PASSWORD=password
RABBIT_PASSWORD=password

disable_service n-net
disable_service n-cpu
enable_service q-svc
enable_service q-agt
enable_service q-dhcp
enable_service q-l3
enable_service q-meta
enable_service neutron
enable_service horizon

DEST=/opt/stack
LOGFILE=$DEST/stack.sh.log
SCREEN_LOGDIR=$DEST/screen
SYSLOG=True
LOGDAYS=1

HOST_IP_IFACE=ens2f0
PUBLIC_INTERFACE=p1p2
VLAN_INTERFACE=p1p1
FLAT_INTERFACE=p1p1

Q_AGENT=openvswitch
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch
Q_ML2_PLUGIN_TYPE_DRIVERS=vlan,flat,local

ENABLE_TENANT_TUNNELS=False
ENABLE_TENANT_VLANS=True
PHYSICAL_NETWORK=physnet1
ML2_VLAN_RANGES=physnet1:1000:1010
OVS_PHYSICAL_BRIDGE=br-p1p1
MULTI_HOST=True

[[post-config|$NOVA_CONF]]
[DEFAULT]
firewall_driver=nova.virt.firewall.NoopFirewallDriver
novncproxy_host=0.0.0.0
novncproxy_port=6080

9 Install DevStack

cd /home/stack/devstack
./stack.sh

10 For a successful installation the following shows at the end of screen output

stacksh completed in XXX seconds

where XXX is the number of seconds

11 For controller node only - Add physical port(s) to the bridge(s) created by the DevStack installation. The following example can be used to configure the two bridges br-p1p1 (for the virtual network) and br-ex (for the external network); a quick verification example is shown after this procedure.

sudo ovs-vsctl add-port br-p1p1 p1p1
sudo ovs-vsctl add-port br-ex p1p2

12 Make sure proper VLANs are created in the switch connecting physical port p1p1 For example the previous localconf specifies VLAN range of 1000-1010 therefore matching VLANs 1000 to 1010 should be configured in the switch
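To confirm that the physical ports from step 11 were attached correctly, the bridge layout can be inspected (a quick check, not part of the original procedure); br-p1p1 should list port p1p1 and br-ex should list p1p2:

sudo ovs-vsctl show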


53 Compute Node Setup

This section describes how to complete the setup of the compute nodes. It is assumed that the user has successfully completed the BIOS settings and operating system installation and configuration sections.

Note Please make sure to download and use the onps_server_1_2.tar.gz tarball. Start with the README file: it explains how to use Intel's scripts to automate most of the installation steps described in this section, which saves you time.

531 Host Configuration

5311 Using DevStack to Deploy vSwitch and OpenStack Components

General

Deploying OpenStack and Intelreg DPDK Accelerated vSwitch using DevStack on a compute node follows the same procedures as on the controller node Differences include

bull Required services are nova compute neutron agent and Rabbit

bull Intelreg DPDK Accelerated vSwitch is used in place of Open vSwitch for neutron agent

Compute Node Installation Example

The following example uses a host for compute node installation with the following

bull Hostname sdnlab-k02

bull Lab network IP address Obtained from DHCP server

bull OpenStack Management IP address 1011122

bull Userpassword stackstack

Note the following

bull No_proxy setup Localhost and its IP address should be included in the no_proxy setup In addition hostname and IP address of the controller node should also be included For example

export no_proxy=localhost,10.11.12.2,sdnlab-k01,10.11.12.1

bull Differences in the localconf file

mdash The service host is the controller as well as other OpenStack servers such as MySQL Rabbit Keystone and Image Therefore they should be spelled out Using the controller node example in the previous section the service host and its IP address should be

SERVICE_HOST_NAME=sdnlab-k01
SERVICE_HOST=10.11.12.1

mdash The only OpenStack services required in compute nodes are messaging nova compute and neutron agent so the localconf might look like

disable_all_services
enable_service rabbit
enable_service n-cpu
enable_service q-agt


mdash The user has option to use ovdk or openvswitch for neutron agent

Q_AGENT=ovdk

or

Q_AGENT=openvswitch

Note For openvswitch, the user can specify either regular or accelerated Open vSwitch (accelerated OVS). If accelerated OVS is used, the following setting should be added:

OVS_DATAPATH_TYPE=netdev

Note If both are specified in the same localconf file the later one overwrites the previous one

mdash For the OVDK and accelerated OVS huge pages setting specify number of huge pages to be allocated and mounting point (default is mnthuge)

OVDK_NUM_HUGEPAGES=8192

or

OVS_NUM_HUGEPAGES=8192

mdash For this version Intel uses specific versions for OVDK or Accelerated OVS from their respective repositories Specify the following in the localconf file if OVDK or accelerated OVS is used

OVDK_GIT_TAG=6210bb0a6139b20283de115f87aa7a381b04670fOVS_GIT_TAG=b35839f3855e3b812709c6ad1c9278f498aa9935

mdash Binding the physical port to the bridge is through the following line in localconf For example to bind port p1p1 to bridge br-p1p1 use

OVS_PHYSICAL_BRIDGE=br-p1p1

- A sample localconf file for a compute node with the OVDK agent follows

Compute node:
[[local|localrc]]

FORCE=yes
MULTI_HOST=True

HOST_NAME=$(hostname)
HOST_IP=10.11.12.2
HOST_IP_IFACE=ens2f0
SERVICE_HOST_NAME=10.11.12.1
SERVICE_HOST=10.11.12.1

MYSQL_HOST=$SERVICE_HOST
RABBIT_HOST=$SERVICE_HOST

GLANCE_HOST=$SERVICE_HOST
GLANCE_HOSTPORT=$SERVICE_HOST:9292
KEYSTONE_AUTH_HOST=$SERVICE_HOST
KEYSTONE_SERVICE_HOST=$SERVICE_HOST

ADMIN_PASSWORD=password
MYSQL_PASSWORD=password
DATABASE_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=no-token-password
HORIZON_PASSWORD=password
RABBIT_PASSWORD=password

disable_all_services
enable_service rabbit
enable_service n-cpu
enable_service q-agt

DEST=/opt/stack
LOGFILE=$DEST/stack.sh.log
SCREEN_LOGDIR=$DEST/screen
SYSLOG=True
LOGDAYS=1

Q_AGENT=ovdk
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch
Q_ML2_PLUGIN_TYPE_DRIVERS=vlan
OVDK_NUM_HUGEPAGES=8192
OVDK_GIT_TAG=6210bb0a6139b20283de115f87aa7a381b04670f

ENABLE_TENANT_TUNNELS=False
ENABLE_TENANT_VLANS=True
ML2_VLAN_RANGES=physnet1:1000:1010
PHYSICAL_NETWORK=physnet1
OVS_PHYSICAL_BRIDGE=br-p1p1

[[post-config|$NOVA_CONF]]
[DEFAULT]
firewall_driver=nova.virt.firewall.NoopFirewallDriver
vnc_enabled=True
vncserver_listen=0.0.0.0
vncserver_proxyclient_address=$HOST_IP

- A sample localconf file for a compute node with the accelerated OVS agent follows

Compute node:
[[local|localrc]]

FORCE=yes
MULTI_HOST=True

HOST_NAME=$(hostname)
HOST_IP=10.11.12.2
HOST_IP_IFACE=ens2f0

SERVICE_HOST_NAME=sdnlab-k01
SERVICE_HOST=10.11.12.1

MYSQL_HOST=$SERVICE_HOST
RABBIT_HOST=$SERVICE_HOST

GLANCE_HOST=$SERVICE_HOST
GLANCE_HOSTPORT=$SERVICE_HOST:9292
KEYSTONE_AUTH_HOST=$SERVICE_HOST
KEYSTONE_SERVICE_HOST=$SERVICE_HOST

ADMIN_PASSWORD=password
MYSQL_PASSWORD=password
DATABASE_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=no-token-password
HORIZON_PASSWORD=password
RABBIT_PASSWORD=password

disable_all_services
enable_service rabbit
enable_service n-cpu
enable_service q-agt

DEST=/opt/stack
LOGFILE=$DEST/stack.sh.log
SCREEN_LOGDIR=$DEST/screen
SYSLOG=True
LOGDAYS=1

Q_AGENT=openvswitch
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch
Q_ML2_PLUGIN_TYPE_DRIVERS=vlan
OVS_NUM_HUGEPAGES=8192
OVS_DATAPATH_TYPE=netdev
OVS_GIT_TAG=b35839f3855e3b812709c6ad1c9278f498aa9935

ENABLE_TENANT_TUNNELS=False
ENABLE_TENANT_VLANS=True
ML2_VLAN_RANGES=physnet1:1000:1010
PHYSICAL_NETWORK=physnet1
OVS_PHYSICAL_BRIDGE=br-p1p1

[[post-config|$NOVA_CONF]]
[DEFAULT]
firewall_driver=nova.virt.firewall.NoopFirewallDriver
vnc_enabled=True
vncserver_listen=0.0.0.0
vncserver_proxyclient_address=$HOST_IP

54 vIPS

The vIPS used is Suricata, which should be installed as an rpm package in a VM as previously described. In order to configure it to run in inline mode (IPS), use the following:

1 Turn on IP forwarding

sysctl -w net.ipv4.ip_forward=1

2 Mangle all traffic from one vPort to the other using a netfilter queue

iptables -I FORWARD -i eth1 -o eth2 -j NFQUEUE
iptables -I FORWARD -i eth2 -o eth1 -j NFQUEUE

3 Have Suricata run in inline mode using the netfilter queue

suricata -c /etc/suricata/suricata.yaml -q 0

4 Enable ARP proxying

echo 1 > /proc/sys/net/ipv4/conf/eth1/proxy_arp
echo 1 > /proc/sys/net/ipv4/conf/eth2/proxy_arp
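To confirm that traffic is actually being diverted through Suricata, its stats log can be watched while traffic is flowing. The log path below is the Fedora package default and is an assumption, not taken from this guide:

tail -f /var/log/suricata/stats.log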

541 Network Configuration for non-vIPS Guests

1 Turn on IP forwarding

sysctl -w net.ipv4.ip_forward=1

2 In the source add the route to the sink

route add -net 192.168.200.0/24 eth1

3 At the sink add the route to the source

route add -net 192.168.100.0/24 eth1
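With the routes in place, a simple end-to-end check is to ping the sink from the source VM. The sink address below (192.168.200.10) is illustrative only; substitute the address actually assigned in your setup:

ping -c 4 192.168.200.10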


60 Testing the Setup

This section describes how to bring up the VMs in a compute node, connect them to the virtual network(s), and verify the functionality.

Note Currently it is not possible to have more than one virtual network in a multi-compute node setup, although it is possible to have more than one virtual network in a single compute node setup.

61 Preparation with OpenStack

611 Deploying Virtual Machines

6111 Default Settings

OpenStack comes with the following default settings

bull Tenant (Project) admin demo

bull Network

- Private network (virtual network) 10.0.0.0/24

- Public network (external network) 172.24.4.0/24

bull Image cirros-031-x86_64

bull Flavor nano micro tiny small medium large xlarge

To deploy new instances (VMs) with different setups (such as a different VM image flavor or network) users must create their own See below for details of how to create them

To access the OpenStack dashboard use a web browser (Firefox Internet Explorer or others) and the controllers IP address (management network) For example

http://10.11.12.1

Login information is defined in the localconf file In the examples that follow password is the password for both admin and demo users


6112 Customer Settings

The following examples describe how to create a custom VM image flavor and aggregateavailability zone using OpenStack commands The examples assume the IP address of the controller is 1011121

1 Create a credential file admin-cred for admin user The file contains the following lines

export OS_USERNAME=admin
export OS_TENANT_NAME=admin
export OS_PASSWORD=password
export OS_AUTH_URL=http://10.11.12.1:35357/v2.0

2 Source admin-cred to the shell environment for actions of creating glance image aggregateavailability zone and flavor

source admin-cred

3 Create an OpenStack glance image A VM image file should be ready in a location accessible by OpenStack

glance image-create --name <image-name-to-create> --is-public=true --container-format=bare --disk-format=<format> --file=<image-file-path-name>

The following example shows the image file fedora20-x86_64-basic.qcow2 located on an NFS share and mounted at /mnt/nfs/openstack/images on the controller host. The following command creates a glance image named fedora-basic in qcow2 format for public use (such that any tenant can use this glance image):

glance image-create --name fedora-basic --is-public=true --container-format=bare --disk-format=qcow2 --file=/mnt/nfs/openstack/images/fedora20-x86_64-basic.qcow2

4 Create host aggregate and availability zone

First find out the available hypervisors and then use the information for creating aggregateavailability zone

nova hypervisor-list
nova aggregate-create <aggregate-name> <zone-name>
nova aggregate-add-host <aggregate-name> <hypervisor-name>

The following example creates an aggregate named aggr-g06 with one availability zone named zone-g06 and the aggregate contains one hypervisor named sdnlab-g06

nova aggregate-create aggr-g06 zone-g06
nova aggregate-add-host aggr-g06 sdnlab-g06

5 Create flavor Flavor is a virtual hardware configuration for the VMs it defines the number of virtual CPUs size of virtual memory and disk space among others

The following command creates a flavor named onps-flavor with an ID of 1001 1024 Mb virtual memory 4 Gb virtual disk space and 1 virtual CPU

nova flavor-create onps-flavor 1001 1024 4 1


6113 Example mdash VM Deployment

The following example describes how to use a customer VM image flavor and aggregate to launch a VM for a demo Tenant using OpenStack commands Again the example assumes the IP address of the controller is 1011121

1 Create a credential file demo-cred for a demo user The file contains the following lines

export OS_USERNAME=demo
export OS_TENANT_NAME=demo
export OS_PASSWORD=password
export OS_AUTH_URL=http://10.11.12.1:35357/v2.0

2 Source demo-cred to the shell environment for actions of creating tenant network and instance (VM)

source demo-cred

3 Create network for tenant demo Take the following steps

a Get tenant demo

keystone tenant-list | grep -Fw demo

The following creates a network with a name of net-demo for tenant with ID 10618268adb64f17b266fd8fb83c960d

neutron net-create --tenant-id 10618268adb64f17b266fd8fb83c960d net-demo

b Create subnet

neutron subnet-create --tenant-id <demo-tenant-id> --name <subnet_name> <network-name> <net-ip-range>

The following creates a subnet with a name of sub-demo and CIDR address 192.168.2.0/24 for network net-demo:

neutron subnet-create --tenant-id 10618268adb64f17b266fd8fb83c960d --name sub-demo net-demo 192.168.2.0/24

4 Create instance (VM) for tenant demo Take the following steps

a Get the name andor ID of the image flavor and availability zone to be used for creating instance

glance image-list
nova flavor-list
nova aggregate-list
neutron net-list

b Launch an instance (VM) using information obtained from previous step

nova boot --image <image-id> --flavor <flavor-id> --availability-zone <zone-name> --nic net-id=<network-id> <instance-name>

c The new VM should be up and running in a few minutes

5 Log into the OpenStack dashboard using the demo user credential click Instances under Project in the left pane the new VM should show in the right pane Click instance name to open Instance Details view then click Console in the top menu to access the VM as follows
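Alternatively, the instance status can be checked from the CLI with the demo credentials still sourced; the instance should show a status of ACTIVE once it has finished booting:

nova list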


6114 Local vIPS

Configuration

1 OpenStack brings up the VMs and connects them to the vSwitch

2 IP addresses of the VMs get configured using the DHCP server VM1 belongs to one subnet and VM3 to a different one VM2 has ports on both subnets

3 Flows get programmed to the vSwitch by the OpenDaylight controller (Section 62)

Data Path (Numbers Matching Red Circles)

1 VM1 sends a flow to VM3 through the vSwitch

2 The vSwitch forwards the flow to the first vPort of VM2 (active IPS)

3 The IPS receives the flow inspects it and (if not malicious) sends it out through its second vPort

4 The vSwitch forwards it to VM3

Figure 6-1 Local vIPS


6115 Remote vIPS

Configuration

1 OpenStack brings up the VMs and connects them to the vSwitch

2 The IP addresses of the VMs get configured using the DHCP server

Data Path (Numbers Matching Red Circles)

1 VM1 sends a flow to VM3 through the vSwitch inside compute node 1

2 The vSwitch forwards the flow out of the first 82599 port to the first 82599 port of compute node 2

3 The vSwitch of compute node 2 forwards the flow to the first port of the vHost where the traffic gets consumed by VM1

4 The IPS receives the flow inspects it and (provided it is not malicious) sends it out through its second port of the vHost into the vSwitch of compute node 2

5 The vSwitch forwards the flow out of the second 82599 port of compute node 2 into the second port of the 82599 in compute node 1

6 The vSwitch of compute node 1 forwards the flow into the port of the vHost of VM3 where the flow gets terminated

Figure 6-2 Remote vIPS


612 Non-uniform Memory Access (Numa) Placement and SR-IOV Pass-through for OpenStack

NUMA was implemented as a new feature in the OpenStack Juno release. NUMA placement enables an OpenStack administrator to pin guest systems to particular NUMA nodes for optimization. With an SR-IOV enabled network interface card, each SR-IOV port is associated with a Virtual Function (VF). OpenStack SR-IOV pass-through enables a guest to access a VF directly.

6121 Prepare Compute Node for SR-IOV Pass-through

To enable the previous features follow these steps to configure compute node

1 The server hardware must support IOMMU or Intel VT-d. To check whether IOMMU is supported, run the following command; the output should show IOMMU entries

dmesg | grep -e IOMMU

Note IOMMU can be enabled/disabled through a BIOS setting under Advanced and then Processor

2 Enable kernel IOMMU in grub For Fedora 20 run commands

sed -i 's/rhgb quiet/rhgb quiet intel_iommu=on/g' /etc/default/grub
grub2-mkconfig -o /boot/grub2/grub.cfg

3 Install necessary packages

yum install -y yajl-devel device-mapper-devel libpciaccess-devel libnl-devel dbus-devel numactl-devel python-devel

4 Install libvirt v1.2.8 or newer. The following example uses v1.2.9

systemctl stop libvirtd

yum remove libvirt
yum remove libvirtd

wget http://libvirt.org/sources/libvirt-1.2.9.tar.gz
tar zxvf libvirt-1.2.9.tar.gz

cd libvirt-1.2.9
./autogen.sh --system --with-dbus
make
make install

systemctl start libvirtd

Make sure libvirtd is running v129

libvirtd --version

5 Install libvirt-python. The example below uses v1.2.9 to match the libvirt version

yum remove libvirt-python

wget https://pypi.python.org/packages/source/l/libvirt-python/libvirt-python-1.2.9.tar.gz
tar zxvf libvirt-python-1.2.9.tar.gz

cd libvirt-python-1.2.9
python setup.py install


6 Modify /etc/libvirt/qemu.conf: add /dev/vfio/vfio to the cgroup_device_acl list. An example follows

cgroup_device_acl = [
    "/dev/null", "/dev/full", "/dev/zero",
    "/dev/random", "/dev/urandom",
    "/dev/ptmx", "/dev/kvm", "/dev/kqemu",
    "/dev/rtc", "/dev/hpet", "/dev/net/tun",
    "/dev/vfio/vfio"
]

7 Enable the SR-IOV virtual function for an 82599 interface The following example enables 2 VFs for interface p1p1

echo 2 > /sys/class/net/p1p1/device/sriov_numvfs

To check that virtual functions are enabled

lspci -nn | grep 82599

The screen output should display the physical function and two virtual functions
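For illustration, the output looks similar to the following. The bus/device numbers depend on the platform; this sketch assumes the 82599 is at PCI bus 08, matching the whitelist addresses used later in this section:

lspci -nn | grep 82599
08:00.0 Ethernet controller [0200]: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection [8086:10fb] (rev 01)
08:10.0 Ethernet controller [0200]: Intel Corporation 82599 Ethernet Controller Virtual Function [8086:10ed] (rev 01)
08:10.2 Ethernet controller [0200]: Intel Corporation 82599 Ethernet Controller Virtual Function [8086:10ed] (rev 01)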

6122 Devstack Configurations

In the following text the example uses a controller with IP address 1011121 and compute 1011124 PCI device vendor ID (8086) and product ID of the 82599 can be obtained from output (10fb for physical function and 10ed for VF)

lspci -nn | grep 82599

On Controller node

1 Edit Controller localconf Note that the same localconf file of Section 5213 is used here but adding the following

Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch,sriovnicswitch

[[post-config|$NOVA_CONF]]
[DEFAULT]
scheduler_default_filters=RamFilter,ComputeFilter,AvailabilityZoneFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,PciPassthroughFilter,NUMATopologyFilter
pci_alias={"name":"niantic","product_id":"10ed","vendor_id":"8086"}

[[post-config|$Q_PLUGIN_CONF_FILE]]
[ml2_sriov]
supported_pci_vendor_devs = 8086:10fb 8086:10ed

2 Run stacksh

On Compute node

1 Edit /opt/stack/nova/requirements.txt and add "libvirt-python>=1.2.8"

echo "libvirt-python>=1.2.8" >> /opt/stack/nova/requirements.txt

2 Edit Compute localconf for accelerated OVS Note that the same localconf file of Section 5311 is used here


3 Adding the following

[[post-config|$NOVA_CONF]]
[DEFAULT]
pci_passthrough_whitelist={"address":"0000:08:00.0","vendor_id":"8086","physical_network":"physnet1"}

pci_passthrough_whitelist={"address":"0000:08:10.0","vendor_id":"8086","physical_network":"physnet1"}

pci_passthrough_whitelist={"address":"0000:08:10.2","vendor_id":"8086","physical_network":"physnet1"}

4 Remove (or comment out) the following. Note that currently SR-IOV pass-through is only supported with a standard OVS

OVS_NUM_HUGEPAGES=8192
OVS_DATAPATH_TYPE=netdev
OVS_GIT_TAG=b35839f3855e3b812709c6ad1c9278f498aa9935

Run stacksh for both controller and compute nodes to complete the Devstack installation

6123 Create VM with Numa Placement and SR-IOV

1 After stacking is successful on both the controller and compute nodes, verify that the PCI pass-through device(s) are in the OpenStack database

mysql -uroot -ppassword -h 10.11.12.1 nova -e 'select * from pci_devices'

2 The output should show entry(ies) of PCI device(s) similar to the following

| 2014-11-18 194114 | NULL | NULL | 0 | 1 | 3 | 000008100 | 10ed | 8086 | type-VF | pci_0000_08_10_0 | label_8086_10ed | available | phys_function 000008000 | NULL | NULL | 0 |

3 Next to create a flavor for example

nova flavor-create numa-flavor 1001 1024 4 1

where

flavor name = numa-flavor
id = 1001
virtual memory = 1024 MB
virtual disk size = 4 GB
number of virtual CPUs = 1

4 Modify flavor for numa placement with PCI pass-through

nova flavor-key 1001 set "pci_passthrough:alias"="niantic:1" hw:numa_nodes=1 hw:numa_cpus.0=0 hw:numa_mem.0=1024

5 To show detailed information of the flavor

nova flavor-show 1001

6 Create a VM numa-vm1 with the flavor numa-flavor under the default project demo. Note that the following example assumes an image fedora-basic and an availability zone zone-04 are already in place (see Section 6112) and that private is the default network for the demo project

nova boot --image <image-id> --flavor <flavor-id> --availability-zone <zone-name> --nic <network-id> numa-vm1

where numa-vm1 is the name of instance of the VM to be booted


Access the VM from the OpenStack Horizon the new VM shows two virtual network interfaces The interface with a SR-IOV VF should show a name of ensX where X is a numerical number For example ens5 If a DHCP server is available for the physical interface (p1p1 in this example) the VF gets an IP address automatically otherwise users can assign an IP address to the interface the same way as a standard network interface

To verify network connectivity through a VF users can set up two compute hosts and create a VM on each node After obtaining IP addresses the VMs should communicate with each other as with a normal network

62 Using OpenDaylight

This section describes how to download, install, and set up an OpenDaylight controller.

621 Preparing the OpenDaylight Controller

1 Download the pre-built OpenDaylight Helium-SR1 distribution

wget http://nexus.opendaylight.org/content/repositories/opendaylight.release/org/opendaylight/integration/distribution-karaf/0.2.1-Helium-SR1/distribution-karaf-0.2.1-Helium-SR1.tar.gz

2 Extract the archive and cd into it

tar xf distribution-karaf-0.2.1-Helium-SR1.tar.gz
cd distribution-karaf-0.2.1-Helium-SR1

3 Use the ./bin/karaf executable to start the Karaf shell


4 Install the required features

Karaf might take a long time to start, or feature installation might fail, if the host does not have network access. You'll need to set up the appropriate proxy settings.
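The exact feature set depends on the use case. As an example only (these feature names are an assumption based on the Helium OVSDB/OpenStack integration, not a list taken from this guide), the relevant features can be installed from the Karaf shell like this:

opendaylight-user@root> feature:install odl-base-all odl-aaa-authn odl-restconf odl-nsf-all odl-adsal-northbound odl-mdsal-apidocs odl-ovsdb-openstack odl-ovsdb-northbound odl-dlux-core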

63 Border Network Gateway

This section describes how to install and run a Border Network Gateway on a compute node that is prepared as described in Section 51 and Section 53. The example interface names from these sections have been kept in this section too. Also, for simplicity, the BNG uses the handle_none configuration mode, which makes it work as an L2 forwarding engine. The BNG is more complex than this, and users who are interested in exploring more of its capabilities should read https://01.org/intel-data-plane-performance-demonstrators/quick-overview

The setup to test the functionality of the vBNG follows


631 Installation and Configuration Inside the VM

1 Execute the following command

yum -y update

2 Disable SELinux

setenforce 0
vi /etc/selinux/config

and change it so that SELINUX=disabled

3 Disable the firewall

systemctl disable firewalld.service
reboot

4 Edit grub default configuration

vi /etc/default/grub

Add hugepages to it

... noirqbalance intel_idle.max_cstate=0 processor.max_cstate=0 ipv6.disable=1 default_hugepagesz=1G hugepagesz=1G hugepages=2 isolcpus=1,2,3,4

5 Rebuild grub config and reboot the system

grub2-mkconfig -o /boot/grub2/grub.cfg
reboot

6 Verify that hugepages are available in the VM

cat /proc/meminfo
HugePages_Total:       2
HugePages_Free:        2
Hugepagesize:    1048576 kB

7 Add the following to the end of ~bashrc file

---------------------------------------------
export RTE_SDK=/root/dpdk
export RTE_TARGET=x86_64-native-linuxapp-gcc
export OVS_DIR=/root/ovs

export RTE_UNBIND=$RTE_SDK/tools/dpdk_nic_bind.py
export DPDK_DIR=$RTE_SDK
export DPDK_BUILD=$DPDK_DIR/$RTE_TARGET
---------------------------------------------

8 Re-login or source that file

source ~/.bashrc

9 Install DPDK

git clone http://dpdk.org/git/dpdk
cd dpdk
git checkout v1.7.1
make install T=$RTE_TARGET
modprobe uio
insmod $RTE_SDK/$RTE_TARGET/kmod/igb_uio.ko


10. Check the PCI addresses of the 82599 cards:

lspci | grep Network
00:04.0 Ethernet controller: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection (rev 01)
00:05.0 Ethernet controller: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection (rev 01)
00:06.0 Ethernet controller: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection (rev 01)
00:07.0 Ethernet controller: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection (rev 01)

11. Make sure that the correct PCI addresses are listed in the bind_to_igb_uio.sh script.
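For reference, a minimal bind_to_igb_uio.sh could look like the following, assuming the four PCI addresses reported in step 10; the script shipped with the vbng-scripts tarball may differ:

#!/bin/bash
# Bind the four 82599 ports seen inside the VM to igb_uio (PCI addresses are assumptions)
$RTE_SDK/tools/dpdk_nic_bind.py --bind=igb_uio 00:04.0 00:05.0 00:06.0 00:07.0
$RTE_SDK/tools/dpdk_nic_bind.py --status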

12. Download the BNG package:

wget https://01.org/sites/default/files/downloads/intel-data-plane-performance-demonstrators/dppd-bng-v013.zip

13. Extract the DPPD BNG sources:

unzip dppd-bng-v013.zip

14. Build the BNG DPPD application:

yum -y install ncurses-devel
cd dppd-BNG-v013
make

15. Refer to Section 6.3.3, "Extra Preparations on the Compute Node," before running the BNG application in the VM on the compute node.

16. Make sure that the application starts:

build/dppd -f config/handle_none.cfg

The handle_none configuration passes all traffic straight through between ports, which is essentially similar to the L2 forwarding test. The config directory contains additional, more complex BNG configurations and Pktgen scripts. Additional BNG-specific workloads can be found in the dppd-BNG-v013/pktgen-scripts directory.

The following is a sample graphic of the BNG running in a VM with 2 ports.


Exit the application by pressing ESC or CTRL-C

Refer to Section 6.3.2 for installing and running the software traffic generator.

For a sanity check, users can use the Pktgen wrapper script onps_pktgen-64bytes-UDP-2ports.sh for running Pktgen (on its dedicated server) in order to test the handle_none throughput for two physical and two virtual ports. You'll need to update PKTGEN_DIR at the top of the file to point to the right location which, referring to Section 6.3.2, is the following:

PKTGEN_DIR=/home/stack/git/Pktgen-DPDK/pktgen-64bytes.sh

6.3.2 Installation and Configuration of the Back-to-Back Host (Packet Generator)

The back-to-back host can be any Intel® Xeon® processor-based system, or it can be any compute node that has been prepared using the instructions in Section 5.1 and Section 5.3. For simplicity, Intel assumes the latter is the case. Also assume that the git directory for the stack user is /home/stack/git.

1. In the git directory, get the source from GitHub:

git clone https://github.com/Pktgen/Pktgen-DPDK.git
cd Pktgen-DPDK

2. An extra package must be installed for Pktgen to compile correctly:

yum -y install libpcap-devel

Pktgen comes with its own distribution of the DPDK sources. This bundled version of DPDK must be used; it contains some Wind River-specific helper libraries that Pktgen depends on and that are not in the default DPDK distribution.

3. The $RTE_TARGET variable must be set to a specific value, otherwise these libraries will not build:

cd
vi .bashrc

Add the following three lines to the end:

export RTE_SDK=$HOME/Pktgen-DPDK/dpdk
export RTE_TARGET=x86_64-pktgen-linuxapp-gcc
export PKTGEN_DIR=$HOME/Pktgen-DPDK

4. Re-login or execute the following command:

. ~/.bashrc

5. Build the basic DPDK libraries and extra helpers:

cd $RTE_SDK
make install T=$RTE_TARGET

6. Build Pktgen:

cd examples/pktgen
make

7. Adapt the dpdk_nic_bind.py invocation to the actual NICs in use so that both interfaces are bound to igb_uio and DPDK can use them. Check the current binding status with the command that follows:

./tools/dpdk_nic_bind.py --status
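For example, assuming --status reports the two back-to-back ports at PCI addresses 08:00.0 and 08:00.1 (placeholder values), they can be bound with:

./tools/dpdk_nic_bind.py --bind=igb_uio 08:00.0 08:00.1
./tools/dpdk_nic_bind.py --status    # both ports should now be listed under the DPDK-compatible driver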

8. Use onps_pktgen-64-bytes-UDP-2ports.sh from onps_server_1_2.tar.gz.


9. Now run the script as root, after the compute node has been set up as in Section 6.3.3, the BNG VM has been prepared as in Section 6.3.1, and the BNG has been started inside the VM.

6.3.3 Extra Preparations on the Compute Node

1. Do the following as the stack user:

cd /home/stack/devstack
vi local.conf

2. Comment out the following:

PHYSICAL_NETWORK=physnet1
OVS_PHYSICAL_BRIDGE=br-p1p1

and, at the same time, add the following line right below the commented-out ones:

OVS_BRIDGE_MAPPINGS=default:br-p1p1,physnet1:br-p1p2

3. Run again as the stack user:

./unstack.sh

./stack.sh

This causes both physical interfaces to come up and get bound to DPDK. A bridge is also created on top of each of these interfaces:

ovs-vsctl show
b52bd3ed-0f6c-45b9-ace1-846d901bed64
    Bridge "br-p1p1"
        Port "br-p1p1"
            Interface "br-p1p1"
                type: internal
        Port "p1p1"
            Interface "p1p1"
                type: dpdkphy
                options: {port="0"}
        Port "phy-br-p1p1"
            Interface "phy-br-p1p1"
                type: patch
                options: {peer="int-br-p1p1"}
    Bridge br-int
        fail_mode: secure
        Port "int-br-p1p2"
            Interface "int-br-p1p2"
                type: patch
                options: {peer="phy-br-p1p2"}
        Port "int-br-p1p1"
            Interface "int-br-p1p1"
                type: patch
                options: {peer="phy-br-p1p1"}
        Port br-int
            Interface br-int
                type: internal
    Bridge "br-p1p2"
        Port "phy-br-p1p2"
            Interface "phy-br-p1p2"
                type: patch
                options: {peer="int-br-p1p2"}
        Port "p1p2"
            Interface "p1p2"
                type: dpdkphy
                options: {port="1"}
        Port "br-p1p2"
            Interface "br-p1p2"
                type: internal

4. Move the p1p2 physical port under the same bridge as p1p1:

ovs-vsctl del-port p1p2
ovs-vsctl add-port br-p1p1 p1p2 -- set Interface p1p2 type=dpdkphy option:port=1

5. Stop the OpenStack agent:

./rejoin-stack.sh
ctrl-a 1
ctrl-c
ctrl-a d

6. Add the dpdkvhost interfaces for the VM:

ovs-vsctl --no-wait add-port br-p1p1 port3 -- set Interface port3 type=dpdkvhost ofport_request=3
ovs-vsctl --no-wait add-port br-p1p1 port4 -- set Interface port4 type=dpdkvhost ofport_request=4

7. Find out the OpenFlow port numbers of the attached interfaces:

ovs-ofctl show br-p1p1

The output should be similar to the following. Note the number to the left of each interface name; it is the OpenFlow port number used when programming the flows below.

OFPT_FEATURES_REPLY (xid=0x2): dpid:0000286031010000
n_tables:254, n_buffers:256
capabilities: FLOW_STATS TABLE_STATS QUEUE_STATS ARP_MATCH_IP
actions: OUTPUT SET_VLAN_VID SET_VLAN_PCP STRIP_VLAN SET_DL_SRC SET_DL_DST SET_NW_SRC SET_NW_DST SET_TP_SRC SET_TP_DST
 1(phy-br-p1p1): addr:9e:ae:92:25:3c:c1
     config:     0
     state:      0
     speed: 0 Mbps now, 0 Mbps max
 2(p1p2): addr:9e:ae:92:25:3c:c1
     config:     0
     state:      0
     speed: 0 Mbps now, 0 Mbps max
 3(port3): addr:9e:ae:92:25:3c:c1
     config:     0
     state:      0
     speed: 0 Mbps now, 0 Mbps max
 4(port4): addr:49:04:ff:7f:00:00
     config:     0
     state:      0
     speed: 0 Mbps now, 0 Mbps max
 16(p1p1): addr:49:04:ff:7f:00:00
     config:     0
     state:      0
     speed: 0 Mbps now, 0 Mbps max
 LOCAL(br-p1p1): addr:9e:ae:92:25:3c:c1
     config:     0
     state:      0
     speed: 0 Mbps now, 0 Mbps max
OFPT_GET_CONFIG_REPLY (xid=0x4): frags=normal miss_send_len=0

8. Clean up the flow table of the bridge:

ovs-ofctl del-flows br-p1p1

9. Program the flows so that each physical interface forwards packets to a dpdkvhost interface and vice versa:

ovs-ofctl add-flow br-p1p1 in_port=16,dl_type=0x0800,idle_timeout=0,action=output:3
ovs-ofctl add-flow br-p1p1 in_port=3,dl_type=0x0800,idle_timeout=0,action=output:16
ovs-ofctl add-flow br-p1p1 in_port=4,dl_type=0x0800,idle_timeout=0,action=output:2
ovs-ofctl add-flow br-p1p1 in_port=2,dl_type=0x0800,idle_timeout=0,action=output:4
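To confirm that the four flows were installed, dump the flow table of the bridge:

ovs-ofctl dump-flows br-p1p1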


10. Users can now spawn their vBNG:

qemu-kvm -cpu host -enable-kvm -m 4096 -smp 4,cores=4,threads=1,sockets=1 -name VM1 \
 -hda <path to the VM image file> -mem-path /dev/hugepages -mem-prealloc -vnc :2 -daemonize \
 -net nic,model=virtio,macaddr=00:1e:77:68:09:fd -net tap,ifname=tap1,script=no,downscript=no \
 -netdev type=tap,id=net1,script=no,downscript=no,ifname=port3,vhost=on \
 -device virtio-net-pci,netdev=net1,mac=00:00:01:00:00:01,csum=off,gso=off,guest_tso4=off,guest_tso6=off,guest_ecn=off \
 -netdev type=tap,id=net2,script=no,downscript=no,ifname=port4,vhost=on \
 -device virtio-net-pci,netdev=net2,mac=00:00:01:00:00:02,csum=off,gso=off,guest_tso4=off,guest_tso6=off,guest_ecn=off


Appendix A Additional OpenDaylight Information

This section describes how OpenDaylight can be used in a multi-node setup. Two hosts are used: one runs OpenDaylight, the OpenStack controller + compute services, and OVS; the second host is a compute node. This section describes how to create a VXLAN tunnel, create VMs, and ping from one VM to another.

Note: Due to a known defect in ODL (https://bugs.opendaylight.org/show_bug.cgi?id=2469), the multi-node setup could not be verified.

The following is a sample local.conf for the OpenDaylight host:

[[local|localrc]]
FORCE=yes

HOST_NAME=<name of this machine>
HOST_IP=<ip of this machine>
HOST_IP_IFACE=<mgmt ip isolated from internet>

PUBLIC_INTERFACE=<isolated IP, could be same as HOST_IP_IFACE>
VLAN_INTERFACE=
FLAT_INTERFACE=

ADMIN_PASSWORD=password
MYSQL_PASSWORD=password
DATABASE_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=no-token-password
HORIZON_PASSWORD=password
RABBIT_PASSWORD=password

disable_service n-net
disable_service n-cpu

enable_service q-svc
enable_service q-agt
enable_service q-dhcp
enable_service q-l3
enable_service q-meta
enable_service neutron
enable_service horizon

LOGFILE=$DEST/stack.sh.log
SCREEN_LOGDIR=$DEST/screen
SYSLOG=True
LOGDAYS=1

# ODL start
Q_HOST=$HOST_IP
enable_service odl-server
enable_service odl-compute
ODL_MGR_IP=10.11.10.7
ENABLED_SERVICES+=,n-api,n-crt,n-obj,n-cpu,n-cond,n-sch,n-novnc,n-cauth,nova
ENABLED_SERVICES+=,cinder,c-api,c-vol,c-sch,c-bak


Q_PLUGIN=ml2

Q_AGENT=openvswitch
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch,opendaylight
Q_ML2_PLUGIN_TYPE_DRIVERS=vlan,flat,local
Q_ML2_TENANT_NETWORK_TYPE=vxlan

ENABLE_TENANT_TUNNELS=True
ENABLE_TENANT_VLANS=True

PHYSICAL_NETWORK=physnet1
ML2_VLAN_RANGES=physnet1:1000:1010
OVS_PHYSICAL_BRIDGE=br-

MULTI_HOST=True

[[post-config|$NOVA_CONF]]

# disable nova security groups
[DEFAULT]
firewall_driver=nova.virt.firewall.NoopFirewallDriver
novncproxy_host=0.0.0.0
novncproxy_port=6080

Here is a sample local.conf for the compute node:

[[local|localrc]]

FORCE=yes
MULTI_HOST=True

HOST_NAME=<name of this machine>
HOST_IP=<ip of this machine>
HOST_IP_IFACE=<isolated interface>
SERVICE_HOST_NAME=<name of the controller machine>
SERVICE_HOST=<ip of controller machine>
Q_HOST=$SERVICE_HOST

MYSQL_HOST=$SERVICE_HOST
RABBIT_HOST=$SERVICE_HOST

GLANCE_HOST=$SERVICE_HOST
GLANCE_HOSTPORT=$SERVICE_HOST:9292
KEYSTONE_AUTH_HOST=$SERVICE_HOST
KEYSTONE_SERVICE_HOST=<ip of controller machine>

ADMIN_PASSWORD=password
MYSQL_PASSWORD=password
DATABASE_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=no-token-password
HORIZON_PASSWORD=password
RABBIT_PASSWORD=password

disable_all_services

enable_service rabbit
enable_service n-cpu
enable_service q-agt
enable_service odl-compute

DEST=/opt/stack
LOGFILE=$DEST/stack.sh.log
SCREEN_LOGDIR=$DEST/screen
SYSLOG=True
LOGDAYS=1


ODL_MGR_IP=<ip of controller machine>

Q_PLUGIN=ml2
Q_AGENT=openvswitch
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch,opendaylight
Q_ML2_PLUGIN_TYPE_DRIVERS=vxlan
OVS_NUM_HUGEPAGES=8192
OVS_DATAPATH_TYPE=netdev

OVDK_OVS_GIT_TAG=

ENABLE_TENANT_TUNNELS=True
ENABLE_TENANT_VLANS=True

Q_ML2_TENANT_NETWORK_TYPE=vxlan
ML2_VLAN_RANGES=physnet1:1000:1010
PHYSICAL_NETWORK=physnet1
OVS_PHYSICAL_BRIDGE=br-p1p1

[[post-config|$NOVA_CONF]]
[DEFAULT]
firewall_driver=nova.virt.firewall.NoopFirewallDriver
vnc_enabled=True
vncserver_listen=0.0.0.0
vncserver_proxyclient_address=$HOST_IP

A.1 Create VMs Using DevStack Horizon GUI

After starting the OpenDaylight controller as described in Section 6.2.1, run stack.sh on the controller and compute nodes.

Log in to http://<control node ip address>:8080 to start the Horizon GUI.

Verify that the node shows up in the following GUI.

Create a new VXLAN network:

1. Click on the Networks tab.

2. Click on the Create Network button.

3. Enter the network name, then click Next.


4. Enter the subnet information, then click Next.


5. Add additional information, then click Next.

6. Click the Create button.

7. Create a VM instance by clicking the Launch Instances button.


8. Click on the Details tab to enter the VM details.


9. Click on the Networking tab, then enter the network information.

VMs will now be created.

Instead of disabling the OVSDB neutron service by removing the OVSDB neutron bundle file, it is possible to disable the bundle from the OSGi console. However, there does not appear to be a way to make this persistent, so it must be done each time the controller restarts.


Once the controller is up and running, connect to the OSGi console. The ss command displays all of the installed bundles and their status; adding a string (or strings) filters the list of bundles. List the OVSDB bundles:

osgi> ss ovs
Framework is launched.

id      State       Bundle
106     ACTIVE      org.opendaylight.ovsdb.northbound_0.5.0
112     ACTIVE      org.opendaylight.ovsdb_0.5.0
262     ACTIVE      org.opendaylight.ovsdb.neutron_0.5.0

Note that there are three OVSDB bundles running (the OVSDB neutron bundle has not been removed in this case)

Disable the OVSDB neutron bundle and then list the OVSDB bundles again

osgi> stop 262
osgi> ss ovs
Framework is launched.

id      State       Bundle
106     ACTIVE      org.opendaylight.ovsdb.northbound_0.5.0
112     ACTIVE      org.opendaylight.ovsdb_0.5.0
262     RESOLVED    org.opendaylight.ovsdb.neutron_0.5.0

Now the OVSDB neutron bundle is in the RESOLVED state which means that it is not active
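To re-activate the bundle later, start it again by its ID from the same OSGi console:

osgi> start 262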


Appendix B BNG as an Appliance

Please download the latest BNG application from https://01.org/intel-data-plane-performance-demonstrators/downloads. More details about how the BNG works can be found at https://01.org/intel-data-plane-performance-demonstrators/quick-overview.


Appendix C Glossary

Acronym Description

ATR Application Targeted Routing

COTS Commercial Off-The-Shelf

DPI Deep Packet Inspection

FCS Frame Check Sequence

GRE Generic Routing Encapsulation

GRO Generic Receive Offload

IOMMU Input/Output Memory Management Unit

Kpps Kilo packets per second

KVM Kernel-based Virtual Machine

LRO Large Receive Offload

MSI Message Signaling Interrupt

MPLS Multi-protocol Label Switching

Mpps Million packets per second

NIC Network Interface Card

pps Packets per second

QAT Quick Assist Technology

QinQ VLAN stacking (802.1ad)

RA Reference Architecture

RSC Receive Side Coalescing

RSS Receive Side Scaling

SP Service Provider

SR-IOV Single Root I/O Virtualization

TCO Total Cost of Ownership

TSO TCP Segmentation Offload


Appendix D References

Document Name Source

Internet Protocol version 4 http://www.ietf.org/rfc/rfc791.txt

Internet Protocol version 6 http://www.faqs.org/rfc/rfc2460.txt

Intel® 82599 10 Gigabit Ethernet Controller Datasheet http://www.intel.com/content/www/us/en/ethernet-controllers/82599-10-gbe-controller-datasheet.html

Intel DDIO https://www-ssl.intel.com/content/www/us/en/io/direct-data-i-o.html

Bandwidth Sharing Fairness http://www.intel.com/content/www/us/en/network-adapters/10-gigabit-network-adapters/10-gbe-ethernet-flexible-port-partitioning-brief.html

Design Considerations for efficient network applications with Intel® multi-core processor-based systems on Linux

http://download.intel.com/design/intarch/papers/324176.pdf

OpenFlow with Intel 82599 http://ftp.sunet.se/pub/Linux/distributions/bifrost/seminars/workshop-2011-03-31/Openflow_1103031.pdf

Wu, W., DeMar, P. and Crawford, M. (2012). A Transport-Friendly NIC for Multicore/Multiprocessor Systems

IEEE Transactions on Parallel and Distributed Systems, vol. 23, no. 4, April 2012. http://lss.fnal.gov/archive/2010/pub/fermilab-pub-10-327-cd.pdf

Why does Flow Director Cause Packet Reordering? http://arxiv.org/ftp/arxiv/papers/1106/1106.0443.pdf

IA packet processing http://www.intel.com/p/en_US/embedded/hwsw/technology/packet-processing

High Performance Packet Processing on Cloud Platforms using Linux with Intel® Architecture

http://networkbuilders.intel.com/docs/network_builders_RA_packet_processing.pdf

Packet Processing Performance of Virtualized Platforms with Linux and Intel® Architecture

http://networkbuilders.intel.com/docs/network_builders_RA_NFV.pdf

DPDK http://www.intel.com/go/dpdk

Intel® DPDK Accelerated vSwitch https://01.org/packet-processing


LEGAL

By using this document, in addition to any agreements you have with Intel, you accept the terms set forth below.

You may not use or facilitate the use of this document in connection with any infringement or other legal analysis concerning Intel products described herein. You agree to grant Intel a non-exclusive, royalty-free license to any patent claim thereafter drafted which includes subject matter disclosed herein.

INFORMATION IN THIS DOCUMENT IS PROVIDED IN CONNECTION WITH INTEL PRODUCTS. NO LICENSE, EXPRESS OR IMPLIED, BY ESTOPPEL OR OTHERWISE, TO ANY INTELLECTUAL PROPERTY RIGHTS IS GRANTED BY THIS DOCUMENT, EXCEPT AS PROVIDED IN INTEL'S TERMS AND CONDITIONS OF SALE FOR SUCH PRODUCTS. INTEL ASSUMES NO LIABILITY WHATSOEVER, AND INTEL DISCLAIMS ANY EXPRESS OR IMPLIED WARRANTY RELATING TO SALE AND/OR USE OF INTEL PRODUCTS, INCLUDING LIABILITY OR WARRANTIES RELATING TO FITNESS FOR A PARTICULAR PURPOSE, MERCHANTABILITY, OR INFRINGEMENT OF ANY PATENT, COPYRIGHT, OR OTHER INTELLECTUAL PROPERTY RIGHT.

Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors.

Performance tests, such as SYSmark and MobileMark, are measured using specific computer systems, components, software, operations, and functions. Any change to any of those factors may cause the results to vary. You should consult other information and performance tests to assist you in fully evaluating your contemplated purchases, including the performance of that product when combined with other products.

The products described in this document may contain design defects or errors known as errata, which may cause the product to deviate from published specifications. Current characterized errata are available on request. Contact your local Intel sales office or your distributor to obtain the latest specifications and before placing your product order.

Intel technologies may require enabled hardware, specific software, or services activation. Check with your system manufacturer or retailer. Tests document performance of components on a particular test, in specific systems. Differences in hardware, software, or configuration will affect actual performance. Consult other sources of information to evaluate performance as you consider your purchase. For more complete information about performance and benchmark results, visit http://www.intel.com/performance.

All products, computer systems, dates, and figures specified are preliminary based on current expectations and are subject to change without notice. Results have been estimated or simulated using internal Intel analysis or architecture simulation or modeling, and are provided to you for informational purposes. Any differences in your system hardware, software, or configuration may affect your actual performance.

No computer system can be absolutely secure. Intel does not assume any liability for lost or stolen data or systems or any damages resulting from such losses.

Intel does not control or audit third-party web sites referenced in this document. You should visit the referenced web site and confirm whether referenced data are accurate.

Intel Corporation may have patents or pending patent applications, trademarks, copyrights, or other intellectual property rights that relate to the presented subject matter. The furnishing of documents and other materials and information does not provide any license, express or implied, by estoppel or otherwise, to any such patents, trademarks, copyrights, or other intellectual property rights.

© 2014 Intel Corporation. All rights reserved. Intel, the Intel logo, Core, Xeon, and others are trademarks of Intel Corporation in the U.S. and/or other countries. Other names and brands may be claimed as the property of others.

Page 6: Intel Open Network Platform Server Reference Architecture ... · Intel® ONP Server Reference Architecture Solutions Guide 1.0 Audience and Purpose The primary audiences for this

Intelreg ONP Server Reference ArchitectureSolutions Guide

6

NOTE This page intentionally left blank

7

Intelreg ONP Server Reference ArchitectureSolutions Guide

20 Summary

The Intel ONP Server uses Open Source software to help accelerate SDN and NFV commercialization with the latest Intel Architecture Communications Platform

This document describes how to setup and configure controller and compute nodes for evaluating and developing NFVSDN solutions using the Intelreg Open Network Platform ingredients

Platform hardware is based on a Intelreg Xeonreg DP Server with the following

bull Intelreg Xeonreg Processor Series E5-2697 V3

bull Intelreg 82599 10 GbE Controller

The host operating system is Fedora 20 with Qemu-kvm virtualization technology Software ingredients include Data Plane Development Kit (DPDK) Open vSwitch Intelreg DPDK Accelerated vSwitch OpenStack and OpenDaylight

Figure 2-1 Intel ONP Server - Hardware and Software Ingredients

Intelreg ONP Server Reference ArchitectureSolutions Guide

8

Figure 2-2 shows a generic SDNNFV setup In this configuration Orchestrator and Controller (management and control plane) and compute node (data plane) run on different server nodes Note that many variations of this setup can be deployed

The test cases described in this document were designed to illustrate certain baseline performance and functionality using the specified ingredients configurations and specific test methodology A simple network topology was used as shown in Figure 2-2

Test cases are designed to

bull Baseline packet processing (such as data plane) performance with host and VM configurations

bull Verify communication between controller and compute nodes

bull Validate basic controller functionality

Figure 2-2 Generic Setup with Controller and Two Compute Nodes

9

Intelreg ONP Server Reference ArchitectureSolutions Guide

21 Network Services ExamplesThe following examples of network services are included as use-cases that have been tested with the Intelreg Open Network Platform Server Reference Architecture

211 Suricata (Next Generation IDSIPS engine)Suricata is a high performance Network IDS IPS and Network Security Monitoring engine developed by the OISF its supporting vendors and the community

httpsuricata-idsorg

212 vBNG (Broadband Network Gateway)Intel Data Plane Performance Demonstrators ndash Border Network Gateway (BNG) using DPDK

https01orgintel-data-plane-performance-demonstratorsdownloadsbng-application-v013

A Broadband (or Border) Network Gateway may also be known as a Broadband Remote Access Server (BRAS) and routes traffic to and from broadband remote access devices such as digital subscriber line access multiplexers (DSLAM) This network function is included as an example of a workload that can be virtualized on the Intel ONP Server

Additional information on the performance characterization of this vBNG implementation can be found at

httpnetworkbuildersintelcomdocsNetwork_Builders_RA_vBRAS_Finalpdf

Refer to Border Network Gateway for information on setting up and testing the vBNG application with Intelreg DPDK Accelerated vSwitch or to Appendix B for more information on running the BNG as an appliance

Intelreg ONP Server Reference ArchitectureSolutions Guide

10

NOTE This page intentionally left blank

11

Intelreg ONP Server Reference ArchitectureSolutions Guide

30 Hardware Components

Table 3-1 Hardware Ingredients (Grizzly Pass)

Item Description Notes

Platform Intelreg Server Board 2U 8x35 SATA 2x750W 2xHS Rails Intel R2308GZ4GC

Grizzly Pass Xeon DP Server (2 CPU sockets) 240GB SSD 25in SATA 6Gbs Intel Wolfsville SSDSC2BB240G401 DC S3500 Series

Processors Intelreg Xeonreg Processor Series E5-2680 v2 LGA2011 28GHz 25MB 115W 10 cores

Ivy Bridge Socket-R (EP) 10 Core 28GHz 115W 25M per core LLC 80 GTs QPI DDR3-1867 HT turboLong product availability

Cores 10 physical coresCPU 20 Hyper-threaded cores per CPU for 40 total cores

Memory 8 GB 1600 Reg ECC 15 V DDR3 Kingston KVR16R11S48I Romley

64 GB RAM (8x 8 GB)

NICs (82599) 2x Intelreg 82599 10 GbE Controller (Niantic) NICs are on socket zero (3 PCIe slots available on socket 0)

BIOS SE5C60086B02010002082220131453Release Date 08222013BIOS Revision 46

Intelreg Virtualization Technology for Directed IO (Intelreg VT-d)Hyper-threading enabled

Table 3-2 Hardware Ingredients (Wildcat Pass)

Item Description Notes

Platform Intelreg Server Board S2600WTT 1100W power supply Wildcat Pass Xeon DP Server (2 CPU sockets) 120 GB SSD 25in SATA 6GBs Intel Wolfsville SSDSC2BB120G4

Processors Intelreg Xeonreg Processor Series E5-2697 v3 26GHz 25MB 145W 14 cores

Haswell 14 Core 26GHz 145W 35M total cache per processor 96 GTs QPI DDR4-160018662133

Cores 14 physical coresCPU 28 Hyper-threaded cores per CPU for 56 total cores

Memory 8 GB DDR4 RDIMM Crucial CT8G4RFS423 64 GB RAM (8x 8 GB)

NICs (82599) 2x Intelreg 82599 10 GbE Controller (Niantic) NICs are on socket zero

BIOS GRNDSDP186B0038R011409040644 Release Date 09042014

Intelreg Virtualization Technology for Directed IO (Intelreg VT-d) enabled only for SR-IOV PCI pass-through testsHyper-threading enabled but disabled for benchmark testing

Intelreg ONP Server Reference ArchitectureSolutions Guide

12

NOTE This page intentionally left blank

13

Intelreg ONP Server Reference ArchitectureSolutions Guide

40 Software Versions

Table 4-1 Software Versions

Software Component Function VersionConfiguration

Fedora 20 x86_64 Host OS 3156-200fc20x86_64

Qemu‐kvm Virtualization technology Modified QEMU 162 (bundled with Intelreg DPDK Accelerated vSwitch)

Data Plane Development Kit (DPDK)

Network Stack bypass and libraries for packet processing Includes user space poll mode drivers

171

Intelreg DPDK Accelerated vSwitch

vSwitch v120commit id 6210bb0a6139b20283de115f87aa7a381b04670f

Open vSwitch vSwitch Open vSwitch V 23Commit id b35839f3855e3b812709c6ad1c9278f498aa9935

OpenStack SDN Orchestrator Juno Release + Intel patches (openstack_ovdkl02-907zip)

DevStack Tool for Open Stack deployment

httpsgithubcomopenstack-devdevstackgit Commit id d6f700db33aeab68916156a98971aef8cfa53a2e

OpenDaylight SDN Controller HeliumSR1

Suricata IPS application Suricata v204 (current Fedora 20 package)

BNG DPPD Broadband Network Gateway DPDK Performance Demonstrator Application

DPPD v013https01orgintel-data-plane-performance-demonstratorsdownloads

PktGen Software Network Package Generator v277

Intelreg ONP Server Reference ArchitectureSolutions Guide

14

41 Obtaining Software IngredientsTable 4-2 Software Ingredients

Software Component

Software Sub-components Patches Location Comments

Fedora 20 httpdownloadfedoraprojectorgpubfedoralinuxreleases20Fedorax86_64isoFedora-20-x86_64-DVDiso

Standard Fedora 20 iso image

Data Plane Development Kit (DPDK)

DPDK poll mode driver sample apps (bundled)

httpdpdkorggitdpdkCommit id 99213f3827bad956d74e2259d06844012ba287a4

All sub-components in one zip file

Intelreg DPDK Accelerated vSwitch (OVDK)

dpdk-ovs qemu ovs-db vswitchd ovs_client (bundled)

httpsgithubcom01orgdpdk-ovsgitCommit id 6210bb0a6139b20283de115f87aa7a381b04670f

v120

Open vSwitch httpsgithubcomopenvswitchovsgitCommit id b35839f3855e3b812709c6ad1c9278f498aa9935

OpenStack Juno release To be deployed using DevStack(see following row)

Three patches downloaded as one tarball Then follow the instructions to deploy the Nodes

DevStack Patches for DevStack and Nova

httpsgithubcomopenstack-devdevstackgitCommit id d6f700db33aeab68916156a98971aef8cfa53a2eThen apply to that commit the patches inhttpsdownload01orgpacket-processingONPS12openstack_ovdkl02-907zip

Two patches downloaded as one tarball Then follow the instructions to deploy

OpenDaylight httpnexusopendaylightorgcontentrepositoriesopendaylightreleaseorgopendaylightintegrationdistribution-karaf021-Helium-SR1distribution-karaf-021-Helium-SR1targz

Intelreg ONP Server Release 12 Script

Helper scripts to setup SRT 12 using DevStack

httpsdownload01orgpacket-processingONPS12onps_server_1_2targz

BNG DPPD Broadband Network Gateway DPDK Performance

https01orgintel-data-plane- performance-demonstratorsdppd-bng- v013zip

PktGen Software Network Package Generator

httpsgithubcomPktgenPktgen-DPDKgitcommit id 5e8633c99e9771467dc26b64a4ff232c7e9fba2a

BNG Helper scripts

Intelreg ONP for Server Configuration Scripts for vBNG

https01orgsitesdefaultfilespagevbng-scriptstgz

Suricata Package from Fedora 20 yum install suricata

15

Intelreg ONP Server Reference ArchitectureSolutions Guide

50 Installation and Configuration Guide

This section describes the installation and configuration instructions to prepare the controller and compute nodes

51 Instructions Common to Compute and Controller Nodes

This section describes how to prepare both the controller and compute nodes with the right BIOS settings and operating system installation The preferred operating system is Fedora 20 although it is considered relatively easy to use this solutions guide for other Linux distributions

511 BIOS SettingsTable 5-1 BIOS Settings

Configuration Setting forController Node

Setting forCompute Node

Enhanced Intel SpeedStep Enabled Disabled

Processor C3 Disabled Disabled

Processor C6 Disabled Disabled

Intelreg Virtualization Technology for Directed IO (Intelreg Vt-d) Disabled Enabled(OpenStack Numa Placement only)

Intel Hyper-Threading Technology (HTT) Enabled Disabled

MLC Streamer Enabled Enabled

MLC Spatial Prefetcher Enabled Enabled

DCU Instruction Prefetcher Enabled Enabled

Direct Cache Access (DCA) Enabled Enabled

CPU Power and Performance Policy Performance Performance

Intel Turbo boost Enabled Off

Memory RAS and Performance Configuration -gt Numa Optimized Enabled Enabled

Intelreg ONP Server Reference ArchitectureSolutions Guide

16

512 Operating System Installation and ConfigurationFollowing are some generic instructions for installing and configuring the operating system Other ways of installing the operating system are not described in this solutions guide such as network installation PXE boot installation USB key installation etc

5121 Getting the Fedora 20 DVD

1 Download the 64-bit Fedora 20 DVD (not Fedora 20 Live Media) from the following site

httpfedoraprojectorgenget-fedoraformats

or from direct URL

httpdownloadfedoraprojectorgpubfedoralinuxreleases20Fedorax86_64isoFedora-20-x86_64-DVDiso

2 Burn the ISO file to DVD and create an installation disk

5122 Fedora 20 Installation

Use the DVD to install Fedora 20 During the installation click Software selection then choose the following

1 C Development Tool and Libraries

2 Development Tools

Also create a user stack and check the box Make this user administrator during the installation The user stack is used in OpenStack installation

Note Please make sure to download and use the onps_server_1_2targz tarball Start with the README file Yoursquoll get instructions on how to use Intelrsquos scripts to automate most of the installation steps described in this section and this saves you time When using Intelrsquos scripts you can jump to Section 54 after installing OpenDaylight based on the instructions described in Section 621

5123 Additional Packages Installation and Upgrade

Some packages are not installed with the standard Fedora 20 installation but are required by Intelreg Open Network Platform Software (ONPS) components These packages should be installed by the user

git ntp patch socat python-passlib libxslt-devel libffi-devel fuse-devel gluster python-cliff

ONPS supports Fedora kernel 3156 which is newer than native Fedora 20 kernel 31110 To upgrade to 3156 follow these steps

1 Download kernel packages

wget httpskojipkgsfedoraprojectorgpackageskernel3156200fc20x86_64 kernel-3156-200fc20x86_64rpm

wget httpskojipkgsfedoraprojectorgpackageskernel3156200fc20x86_64 kernel-devel-3156-200fc20x86_64rpm

wget httpskojipkgsfedoraprojectorgpackageskernel3156200fc20x86_64 kernel-modules-extra-3156-200fc20x86_64rpm

17

Intelreg ONP Server Reference ArchitectureSolutions Guide

2 Install kernel packages

rpm -i kernel-3156-200fc20x86_64rpmrpm -i kernel-devel-3156-200fc20x86_64rpmrpm -i kernel-modules-extra-3156-200fc20x86_64rpm

3 Reboot system to allow booting into 3156 kernel

Note ONPS depends on libraries provided by your Linux distribution As such it is recommended that you regularly update your Linux distribution with the latest bug fixes and security patches to reduce the risk of security vulnerabilities in your systems

After installing the required packages the operating system should be updated with the following command

yum update -y

This command upgrades to the latest kernel that Fedora supports In order to maintain kernel version (3156) the yum configuration file needs modified with this command

echo exclude=kernel gtgt etcyumconf

before running yum update

After the update completes the system needs to be rebooted

5124 Disable and Enable Services

For OpenStack the following services were disabled selinux firewall and NetworkManager Run the following commands

sed -i sSELINUX=enforcingSELINUX=disabledg etcselinuxconfig systemctl disable firewalldservicesystemctl disable NetworkManagerservice

The following services should be enabled ntp sshd and network Run the following commands

systemctl enable ntpdservicesystemctl enable ntpdateservicesystemctl enable sshdservicechkconfig network on

It is important to keep the timing synchronized between all nodes It is also necessary to use a known NTP server for all nodes Users can edit etcntpconf to add a new server and remove default servers The following example replaces a default NTP server with a local NTP server 100012 and comments out other default servers

sed -i sserver 0fedorapoolntporg iburstserver 100012g etcntpconfsed -i sserver 1fedorapoolntporg iburst server 1fedorapoolntporg iburst g etcntpconfsed -i sserver 2fedorapoolntporg iburst server 2fedorapoolntporg iburst g etcntpconfsed -i sserver 3fedorapoolntporg iburst server 3fedorapoolntporg iburst g etcntpconf

Intelreg ONP Server Reference ArchitectureSolutions Guide

18

52 Controller Node SetupThis section describes the controller node setup It is assumed that the user successfully followed the operating system installation and configuration sections

Note Make sure to download and use the onps_server_1_2targz tarball Start with the README file Yoursquoll get instructions on how to use Intelrsquos scripts to automate most of the installation steps described in this section and this saves you time

521 OpenStack (Juno)This section documents features and limitations that are supported with the Intelreg DPDK Accelerated vSwitch and OpenStack Juno

5211 Network Requirements

General

At least two networks are required to build OpenStack infrastructure in a lab environment One network is used to connect all nodes for OpenStack management (management network) and the other one is a private network exclusively for an OpenStack internal connection (tenant network) between instances (or virtual machines)

One additional network is required for Internet connectivity as installing OpenStack requires pulling packages from various sourcesrepositories on the Internet

Some users might want to have Internet andor external connectivity for OpenStack instances (virtual machines) In this case an optional network can be used

The assumption is that the targeting OpenStack infrastructure contains multiple nodes one is controller node and one or more are compute node(s)

Network Configuration Example

The following is an example of how to configure networks for OpenStack infrastructure The example uses four network interfaces as follows

bull ens2f1 For Internet network - Used to pull all necessary packagespatches from repositories on the Internet configured to obtain a DHCP address

bull ens2f0 For Management network - Used to connect all nodes for OpenStack management configured to use network 10110016

bull p1p1 For Tenant network - Used for OpenStack internal connections for virtual machines configured with no IP address

bull p1p2 For Optional External network - Used for virtual machine Internetexternal connectivity configured with no IP address This interface is only in the Controller node if external network is configured For Compute node this interface is not needed

Note that among these interfaces interface for virtual network (in this example p1p1) must be an 82599 port because it is used for DPDK and Intelreg DPDK Accelerated vSwitch Also note that a static IP address should be used for interface of management network

In Fedora 20 the network configuration files are located at

etcsysconfignetwork-scripts

19

Intelreg ONP Server Reference ArchitectureSolutions Guide

To configure a network on the host system edit the following network configuration files

ifcfg-ens2f1 DEVICE=ens2f1TYPE=Ethernet ONBOOT=yes BOOTPROTO=dhcp

ifcfg-ens2f0DEVICE=ens2f0TYPE=EthernetONBOOT=yesBOOTPROTO=staticIPADDR=10111211NETMASK=25525500

ifcfg-p1p1DEVICE=p1p1TYPE=Ethernet ONBOOT=yes BOOTPROTO=none

ifcfg-p1p2DEVICE=p1p2TYPE=Ethernet ONBOOT=yes BOOTPROTO=none

Note Do not configure the IP address for p1p1 (10 Gbs interface) otherwise DPDK does not work when binding the driver during OpenStack Neutron installation

Note 10111211 and 25525500 are static IP address and net mask to the management network It is necessary to have static IP address on this subnet The IP address 10111211 is just an example

5212 Storage Requirements

By default DevStack uses blocked storage (Cinder) with a volume group stack-volumes If not specified stack-volumes is created with 10 Gbs space from a local file system Note that stack-volumes is the name for the volume group not more than 1 volume

The following example shows how to use spare local disks devsdb and devsdc to form stack-volumes on a controller node by running the following commands

pvcreate devsdbpvcreate devsdcvgcreate stack-volumes devsdb devsdc

5213 OpenStack Installation Procedures

General

DevStack is used to deploy OpenStack in this example The following procedure uses an actual example of an installation performed in an Intel test lab consisting of one controller node (controller) and one compute node (compute)

Controller Node Installation Procedures

The following example uses a host for controller node installation with the following

bull Hostname sdnlab-k01

bull Internet network IP address Obtained from DHCP server

Intelreg ONP Server Reference ArchitectureSolutions Guide

20

bull OpenStack Management IP address 1011121

bull Userpassword stackstack

Root User Actions

Login as su or root user and perform the following

1 Add stack user to sudoer list

echo stack ALL=(ALL) NOPASSWD ALL gtgt etcsudoers

2 Edit etclibvirtqemuconf add or modify with the following lines

cgroup_controllers = [ cpu devices memory blkio cpusetcpuacct ]

cgroup_device_acl = [devnull devfull devzero devrandom devurandom devptmx devkvm devkqemu devrtc devhpet devnettun mnthuge devvhost-net]

hugetlbs_mount = mnthuge

3 Restart libvirt service and make sure libvird is active

systemctl restart libvirtdservicesystemctl status libvirtdservice

Stack User Actions

1 Login as a stack user

2 Configure the appropriate proxies (yum http https and git) for package installation and make sure these proxies are functional Note that on controller node localhost and its IP address should be included in no_proxy setup (for example export no_proxy=localhost1011121)

3 Intelreg DPDK Accelerated vSwitch patches for OpenStack

The tar file openstack_ovdkl02-907zip contains necessary patches for OpenStack Currently it is not native to the OpenStack The file can be downloaded from

https01orgsitesdefaultfilespageopenstack_ovdkl02-907zip

Place the file in the homestack directory and unzip Three patch files devstackpatch novapatch and neutronpatch will be present after unzip

cd homestack wget https01orgsitesdefaultfilespageopenstack_ovdkl02-907zip unzip openstack_ovdkl02-907zip

4 Download DevStack source

git_clone httpsgithubcomopenstack-devdevstackgit

5 Check out DevStack with Intelreg DPDK Accelerated vSwitch and patch

cd homestackdevstack git checkout d6f700db33aeab68916156a98971aef8cfa53a2e patch -p1 lt homestackdevstackpatch

21

Intelreg ONP Server Reference ArchitectureSolutions Guide

6 Download and patch Nova and Neutron

sudo_mkdir optstacksudo_chown stackstack optstack cd optstackgit_clone httpsgithubcomopenstacknovagitgit_clone httpsgithubcomopenstackneutrongitcd optstacknovagit checkout b7738bfb6c2f271d047e8f20c0b74ef647367111patch -p1 lt homestacknovapatch

7 Create localconf file in homestackdevstack

8 Pay attention to the following in the localconf file

a Use Rabbit for messaging services (Rabbit is on by default) In the past Fedora only supported QPID for OpenStack Now it only supports Rabbit

b Explicitly disable Nova compute service on the controller This is because by default Nova compute service is enabled

disable_service n-cpu

c To use Open vSwitch specify in configuration for ML2 plug-in

Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch

d Explicitly disable tenant tunneling and enable tenant VLAN This is because by default tunneling is used

ENABLE_TENANT_TUNNELS=FalseENABLE_TENANT_VLANS=True

e A sample localconf files for controller node is as follows

Controller_node[[local|localrc]]

FORCE=yes ADMIN_PASSWORD=password MYSQL_PASSWORD=password DATABASE_PASSWORD=password SERVICE_PASSWORD=password SERVICE_TOKEN=no-token-passwordHORIZON_PASSWORD=passwordRABBIT_PASSWORD=password

disable_service n-net disable_service n-cpu enable_service q-svc enable_service q-agt enable_service q-dhcp enable_service q-l3 enable_service q-meta enable_service neutron enable_service horizon

DEST=optstackLOGFILE=$DESTstackshlogSCREEN_LOGDIR=$DESTscreenSYSLOG=TrueLOGDAYS=1

HOST_IP_IFACE=ens2f0PUBLIC_INTERFACE=p1p2VLAN_INTERFACE=p1p1FLAT_INTERFACE=p1p1

Intelreg ONP Server Reference ArchitectureSolutions Guide

22

Q_AGENT=openvswitch Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch Q_ML2_PLUGIN_TYPE_DRIVERS=vlanflatlocal

ENABLE_TENANT_TUNNELS=FalseENABLE_TENANT_VLANS=TruePHYSICAL_NETWORK=physnet1ML2_VLAN_RANGES=physnet110001010OVS_PHYSICAL_BRIDGE=br-p1p1MULTI_HOST=True

[[post-config|$NOVA_CONF]][DEFAULT]firewall_driver=novavirtfirewallNoopFirewallDrivernovncproxy_host=0000novncproxy_port=6080

9 Install DevStack

cd homestackdevstackstacksh

10 For a successful installation the following shows at the end of screen output

stacksh completed in XXX seconds

where XXX is the number of seconds

11 For controller node only mdash Add physical port(s) to the bridge(s) created by the DevStack installation The following example can be used to configure the two bridges br-p1p1 (for virtual network) and br-ex (for external network)

sudo ovs-vsctl add-port br-p1p1 p1p1sudo ovs-vsctl add-port br-ex p1p2

12 Make sure proper VLANs are created in the switch connecting physical port p1p1 For example the previous localconf specifies VLAN range of 1000-1010 therefore matching VLANs 1000 to 1010 should be configured in the switch

23

Intelreg ONP Server Reference ArchitectureSolutions Guide

53 Compute Node SetupThis section describes how to complete the setup of the compute nodes It is assumed that the user has successfully completed the BIOS settings and operating system installation and configuration sections

Note Please make sure to download and use the onps_server_1_2targz tarball Start with the README file Yoursquoll get instructions on how to use Intelrsquos scripts to automate most of the installation steps described in this section and this saves you time

531 Host Configuration

5311 Using DevStack to Deploy vSwitch and OpenStack Components

General

Deploying OpenStack and Intelreg DPDK Accelerated vSwitch using DevStack on a compute node follows the same procedures as on the controller node Differences include

bull Required services are nova compute neutron agent and Rabbit

bull Intelreg DPDK Accelerated vSwitch is used in place of Open vSwitch for neutron agent

Compute Node Installation Example

The following example uses a host for compute node installation with the following

bull Hostname sdnlab-k02

bull Lab network IP address Obtained from DHCP server

bull OpenStack Management IP address 1011122

bull Userpassword stackstack

Note the following

bull No_proxy setup Localhost and its IP address should be included in the no_proxy setup In addition hostname and IP address of the controller node should also be included For example

export no_proxy=localhost1011122sdnlab-k011011121

bull Differences in the localconf file

mdash The service host is the controller as well as other OpenStack servers such as MySQL Rabbit Keystone and Image Therefore they should be spelled out Using the controller node example in the previous section the service host and its IP address should be

SERVICE_HOST_NAME=sdnlab-k01SERVICE_HOST=1011121

mdash The only OpenStack services required in compute nodes are messaging nova compute and neutron agent so the localconf might look like

disable_all_services enable_service rabbitenable_service n-cpu enable_service q-agt

Intelreg ONP Server Reference ArchitectureSolutions Guide

24

mdash The user has option to use ovdk or openvswitch for neutron agent

Q_AGENT=ovdk

or

Q_AGENT=openvswitch

Note For openvswitch the user can specify regular or accelerated openvswitch (accelerated OVS) If accelerated OVS is use the following setup should be added

OVS_DATAPATH_TYPE=netdev

Note If both are specified in the same localconf file the later one overwrites the previous one

mdash For the OVDK and accelerated OVS huge pages setting specify number of huge pages to be allocated and mounting point (default is mnthuge)

OVDK_NUM_HUGEPAGES=8192

or

OVS_NUM_HUGEPAGES=8192

mdash For this version Intel uses specific versions for OVDK or Accelerated OVS from their respective repositories Specify the following in the localconf file if OVDK or accelerated OVS is used

OVDK_GIT_TAG=6210bb0a6139b20283de115f87aa7a381b04670fOVS_GIT_TAG=b35839f3855e3b812709c6ad1c9278f498aa9935

mdash Binding the physical port to the bridge is through the following line in localconf For example to bind port p1p1 to bridge br-p1p1 use

OVS_PHYSICAL_BRIDGE=br-p1p1

mdash A sample localconf file for compute node with ovdk agent follows

Compute node[[local|localrc]]

FORCE=yesMULTI_HOST=True

HOST_NAME=$(hostname)HOST_IP=1011122HOST_IP_IFACE=ens2f0SERVICE_HOST_NAME=1011121SERVICE_HOST=1011121

MYSQL_HOST=$SERVICE_HOSTRABBIT_HOST=$SERVICE_HOST

GLANCE_HOST=$SERVICE_HOSTGLANCE_HOSTPORT=$SERVICE_HOST9292KEYSTONE_AUTH_HOST=$SERVICE_HOSTKEYSTONE_SERVICE_HOST=$SERVICE_HOST

ADMIN_PASSWORD=passwordMYSQL_PASSWORD=passwordDATABASE_PASSWORD=passwordSERVICE_PASSWORD=passwordSERVICE_TOKEN=no-token-passwordHORIZON_PASSWORD=passwordRABBIT_PASSWORD=password

disable_all_servicesenable_service rabbit

25

Intelreg ONP Server Reference ArchitectureSolutions Guide

enable_service n-cpuenable_service q-agt

DEST=optstack_LOGFILE=$DESTstackshlogSCREEN_LOGDIR=$DESTscreenSYSLOG=TrueLOGDAYS=1

Q_AGENT=ovdkQ_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitchQ_ML2_PLUGIN_TYPE_DRIVERS=vlanOVDK_NUM_HUGEPAGES=8192OVDK_GIT_TAG=6210bb0a6139b20283de115f87aa7a381b04670f

ENABLE_TENANT_TUNNELS=FalseENABLE_TENANT_VLANS=TrueML2_VLAN_RANGES=physnet110001010PHYSICAL_NETWORK=physnet1OVS_PHYSICAL_BRIDGE=br-p1p1

[[post-config|$NOVA_CONF]][DEFAULT]firewall_driver=novavirtfirewallNoopFirewallDrivervnc_enabled=Truevncserver_listen=0000vncserver_proxyclient_address=$HOST_IP

mdash A sample localconf file for compute node with accelerated ovs agent follows

Compute node[[local|localrc]]

FORCE=yesMULTI_HOST=True

HOST_NAME=$(hostname)HOST_IP=1011122HOST_IP_IFACE=ens2f0

SERVICE_HOST_NAME=sdnlab-k01SERVICE_HOST=1011121

MYSQL_HOST=$SERVICE_HOSTRABBIT_HOST=$SERVICE_HOST

GLANCE_HOST=$SERVICE_HOSTGLANCE_HOSTPORT=$SERVICE_HOST9292KEYSTONE_AUTH_HOST=$SERVICE_HOST

KEYSTONE_SERVICE_HOST=$SERVICE_HOST

ADMIN_PASSWORD=passwordMYSQL_PASSWORD=passwordDATABASE_PASSWORD=passwordSERVICE_PASSWORD=passwordSERVICE_TOKEN=no-token-passwordHORIZON_PASSWORD=passwordRABBIT_PASSWORD=password

disable_all_servicesenable_service rabbitenable_service n-cpuenable_service q-agt

DEST=optstack

Intelreg ONP Server Reference ArchitectureSolutions Guide

26

LOGFILE=$DESTstackshlogSCREEN_LOGDIR=$DESTscreenSYSLOG=TrueLOGDAYS=1

Q_AGENT=openvswitchQ_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitchQ_ML2_PLUGIN_TYPE_DRIVERS=vlanOVS_NUM_HUGEPAGES=8192OVS_DATAPATH_TYPE=netdevOVS_GIT_TAG=b35839f3855e3b812709c6ad1c9278f498aa9935

ENABLE_TENANT_TUNNELS=FalseENABLE_TENANT_VLANS=TrueML2_VLAN_RANGES=physnet110001010PHYSICAL_NETWORK=physnet1OVS_PHYSICAL_BRIDGE=br-p1p1

[[post-config|$NOVA_CONF]][DEFAULT]firewall_driver=novavirtfirewallNoopFirewallDrivervnc_enabled=Truevncserver_listen=0000vncserver_proxyclient_address=$HOST_IP

54 vIPSThe vIPS used is Suricata which should be installed as an rpm package as previously described in a VM In order to configure it to run in inline mode (IPS) use the following

1 Turn on IP forwarding

sysctl -w netipv4ip_forward=1

2 Mangle all traffic from one vPort to the other using a netfilter queue

iptables -I FORWARD -i eth1 -o eth2 -j NFQUEUE iptables -I FORWARD -i eth2 -o eth1 -j NFQUEUE

3 Have Suricata run in inline mode using the netfilter queue

suricata -c etcsuricatasuricatayaml -q 0

4 Enable ARP proxying

echo 1 gt procsysnetipv4confeth1proxy_arp echo 1 gt procsysnetipv4confeth2proxy_arp

541 Network Configuration for non-vIPS Guests1 Turn on IP forwarding

sysctl -w netipv4ip_forward=1

2 In the source add the route to the sink

route add -net 192168200024 eth1

3 At the sink add the route to the source

route add -net 192168100024 eth1

27

Intelreg ONP Server Reference ArchitectureSolutions Guide

60 Testing the Setup

This section describes how to bring up the VMs in a compute node connect them to the virtual network(s) verify the functionality

Note Currently it is not possible to have more than one virtual network in a multi-compute node setup Although it is possible to have more than one virtual network in a single compute node setup

61 Preparation with OpenStack

611 Deploying Virtual Machines

6111 Default Settings

OpenStack comes with the following default settings

bull Tenant (Project) admin demo

bull Network

mdash Private network (virtual network) 1000024

mdash Public network (external network) 172244024

bull Image cirros-031-x86_64

bull Flavor nano micro tiny small medium large xlarge

To deploy new instances (VMs) with different setups (such as a different VM image flavor or network) users must create their own See below for details of how to create them

To access the OpenStack dashboard use a web browser (Firefox Internet Explorer or others) and the controllers IP address (management network) For example

http1011121

Login information is defined in the localconf file In the examples that follow password is the password for both admin and demo users

Intelreg ONP Server Reference ArchitectureSolutions Guide

28

6112 Customer Settings

The following examples describe how to create a custom VM image flavor and aggregateavailability zone using OpenStack commands The examples assume the IP address of the controller is 1011121

1 Create a credential file admin-cred for admin user The file contains the following lines

export OS_USERNAME=adminexport OS_TENANT_NAME=adminexport OS_PASSWORD=passwordexport OS_AUTH_URL=http101112135357v20

2 Source admin-cred to the shell environment for actions of creating glance image aggregateavailability zone and flavor

source admin-cred

3 Create an OpenStack glance image A VM image file should be ready in a location accessible by OpenStack

glance image-create --name ltimage-name-to-creategt --is-public=true --container-format=bare --disk-format=ltformatgt --file=ltimage-file-path-namegt

The following example shows the image file fedora20-x86_64-basicqcow2 is located in a NFS share and mounted at mntnfsopenstackimages to the controller host The following command creates a glance image named fedora-basic with qcow2 format for public use (such as any tenant can use this glance image)

glance image-create --name fedora-basic --is-public=true --container-format=bare --disk-format=qcow2 --file=mntnfsopenstackimagesfedora20-x86_64-basicqcow2

4 Create host aggregate and availability zone

First find out the available hypervisors and then use the information for creating aggregateavailability zone

nova hypervisor-list
nova aggregate-create <aggregate-name> <zone-name>
nova aggregate-add-host <aggregate-name> <hypervisor-name>

The following example creates an aggregate named aggr-g06 with one availability zone named zone-g06 and the aggregate contains one hypervisor named sdnlab-g06

nova aggregate-create aggr-g06 zone-g06
nova aggregate-add-host aggr-g06 sdnlab-g06

5 Create a flavor. A flavor is a virtual hardware configuration for the VMs; it defines, among other things, the number of virtual CPUs and the size of virtual memory and disk space.

The following command creates a flavor named onps-flavor with an ID of 1001, 1024 MB of virtual memory, 4 GB of virtual disk space, and 1 virtual CPU:

nova flavor-create onps-flavor 1001 1024 4 1

6113 Example — VM Deployment

The following example describes how to use a custom VM image, flavor, and aggregate to launch a VM for the demo tenant using OpenStack commands. Again, the example assumes the IP address of the controller is 10.11.12.1.

1 Create a credential file demo-cred for a demo user The file contains the following lines

export OS_USERNAME=demo
export OS_TENANT_NAME=demo
export OS_PASSWORD=password
export OS_AUTH_URL=http://10.11.12.1:35357/v2.0

2 Source demo-cred to the shell environment for actions of creating tenant network and instance (VM)

source demo-cred

3 Create network for tenant demo Take the following steps

a Get tenant demo

keystone tenant-list | grep -Fw demo

The following creates a network with a name of net-demo for tenant with ID 10618268adb64f17b266fd8fb83c960d

neutron net-create --tenant-id 10618268adb64f17b266fd8fb83c960d net-demo

b Create subnet

neutron subnet-create --tenant-id <demo-tenant-id> --name <subnet_name> <network-name> <net-ip-range>

The following creates a subnet named sub-demo with CIDR address 192.168.2.0/24 for network net-demo:

neutron subnet-create --tenant-id 10618268adb64f17b266fd8fb83c960d --name sub-demo net-demo 192.168.2.0/24

4 Create instance (VM) for tenant demo Take the following steps

a Get the name andor ID of the image flavor and availability zone to be used for creating instance

glance image-list
nova flavor-list
nova aggregate-list
neutron net-list

b Launch an instance (VM) using information obtained from previous step

nova boot --image <image-id> --flavor <flavor-id> --availability-zone <zone-name> --nic net-id=<network-id> <instance-name>

c The new VM should be up and running in a few minutes

5 Log into the OpenStack dashboard using the demo user credentials and click Instances under Project in the left pane; the new VM should show in the right pane. Click the instance name to open the Instance Details view, then click Console in the top menu to access the VM.

6114 Local vIPS

Configuration

1 OpenStack brings up the VMs and connects them to the vSwitch

2 IP addresses of the VMs get configured using the DHCP server. VM1 belongs to one subnet and VM3 to a different one; VM2 has ports on both subnets.

3 Flows get programmed to the vSwitch by the OpenDaylight controller (Section 62)

Data Path (Numbers Matching Red Circles)

1 VM1 sends a flow to VM3 through the vSwitch

2 The vSwitch forwards the flow to the first vPort of VM2 (active IPS)

3 The IPS receives the flow inspects it and (if not malicious) sends it out through its second vPort

4 The vSwitch forwards it to VM3

Figure 6-1 Local vIPS

6115 Remote vIPS

Configuration

1 OpenStack brings up the VMs and connects them to the vSwitch

2 The IP addresses of the VMs get configured using the DHCP server

Data Path (Numbers Matching Red Circles)

1 VM1 sends a flow to VM3 through the vSwitch inside compute node 1

2 The vSwitch forwards the flow out of the first 82599 port to the first 82599 port of compute node 2

3 The vSwitch of compute node 2 forwards the flow to the first port of the vHost where the traffic gets consumed by VM1

4 The IPS receives the flow inspects it and (provided it is not malicious) sends it out through its second port of the vHost into the vSwitch of compute node 2

5 The vSwitch forwards the flow out of the second 82599 port of compute node 2 into the second port of the 82599 in compute node 1

6 The vSwitch of compute node 1 forwards the flow into the port of the vHost of VM3 where the flow gets terminated

Figure 6-2 Remote vIPS

612 Non-uniform Memory Access (NUMA) Placement and SR-IOV Pass-through for OpenStack

NUMA support was introduced as a new feature in the OpenStack Juno release. NUMA placement enables an OpenStack administrator to pin guest systems to particular NUMA nodes for optimization. With an SR-IOV enabled network interface card, each SR-IOV port is associated with a Virtual Function (VF). OpenStack SR-IOV pass-through enables a guest to access a VF directly.

6121 Prepare Compute Node for SR-IOV Pass-through

To enable the previous features, follow these steps to configure the compute node:

1 The server hardware must support IOMMU (Intel VT-d). To check whether IOMMU is supported, run the following command; the output should show IOMMU entries:

dmesg | grep -e IOMMU

Note: IOMMU can be enabled/disabled through a BIOS setting under Advanced and then Processor.

2 Enable kernel IOMMU in grub. For Fedora 20, run the following commands:

sed -i 's/rhgb quiet/rhgb quiet intel_iommu=on/g' /etc/default/grub
grub2-mkconfig -o /boot/grub2/grub.cfg
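After the next reboot, the updated kernel command line (including intel_iommu=on) can be verified with:

cat /proc/cmdline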

3 Install necessary packages

yum install -y yajl-devel device-mapper-devel libpciaccess-devel libnl-devel dbus-devel numactl-devel python-devel

4 Install libvirt v1.2.8 or newer. The following example uses v1.2.9:

systemctl stop libvirtd

yum remove libvirt
yum remove libvirtd

wget http://libvirt.org/sources/libvirt-1.2.9.tar.gz
tar zxvf libvirt-1.2.9.tar.gz

cd libvirt-1.2.9
./autogen.sh --system --with-dbus
make
make install

systemctl start libvirtd

Make sure libvirtd is running v1.2.9:

libvirtd --version

5 Install libvirt-python. The example below uses v1.2.9 to match the libvirt version:

yum remove libvirt-python

wget https://pypi.python.org/packages/source/l/libvirt-python/libvirt-python-1.2.9.tar.gz
tar zxvf libvirt-python-1.2.9.tar.gz

cd libvirt-python-1.2.9
python setup.py install

6 Modify /etc/libvirt/qemu.conf to add

/dev/vfio/vfio

to the cgroup_device_acl list. An example follows:

cgroup_device_acl = [
   "/dev/null", "/dev/full", "/dev/zero",
   "/dev/random", "/dev/urandom",
   "/dev/ptmx", "/dev/kvm", "/dev/kqemu",
   "/dev/rtc", "/dev/hpet", "/dev/net/tun",
   "/dev/vfio/vfio"
]

7 Enable the SR-IOV virtual functions for an 82599 interface. The following example enables 2 VFs for interface p1p1:

echo 2 > /sys/class/net/p1p1/device/sriov_numvfs

To check that virtual functions are enabled

lspci -nn | grep 82599

The screen output should display the physical function and two virtual functions
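Note that the sriov_numvfs setting does not persist across reboots. One way to reapply it at boot, shown here only as an illustrative sketch (interface name and VF count taken from the example above), is to add the echo command to /etc/rc.d/rc.local and make that file executable:

echo "echo 2 > /sys/class/net/p1p1/device/sriov_numvfs" >> /etc/rc.d/rc.local
chmod +x /etc/rc.d/rc.local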

6122 Devstack Configurations

In the following text, the example uses a controller with IP address 10.11.12.1 and a compute node with 10.11.12.4. The PCI vendor ID (8086) and product IDs of the 82599 (10fb for the physical function and 10ed for the VF) can be obtained from the output of:

lspci -nn | grep 82599

On Controller node

1 Edit the controller local.conf. Note that the same local.conf file of Section 5213 is used here, adding the following:

Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch,sriovnicswitch

[[post-config|$NOVA_CONF]]
[DEFAULT]
scheduler_default_filters=RamFilter,ComputeFilter,AvailabilityZoneFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,PciPassthroughFilter,NUMATopologyFilter
pci_alias={"name":"niantic","product_id":"10ed","vendor_id":"8086"}

[[post-config|$Q_PLUGIN_CONF_FILE]]
[ml2_sriov]
supported_pci_vendor_devs = 8086:10fb 8086:10ed

2 Run stacksh

On Compute node

1 Edit /opt/stack/nova/requirements.txt to add "libvirt-python>=1.2.8":

echo "libvirt-python>=1.2.8" >> /opt/stack/nova/requirements.txt

2 Edit the compute local.conf for accelerated OVS. Note that the same local.conf file of Section 5311 is used here.

3 Add the following:

[[post-config|$NOVA_CONF]]
[DEFAULT]
pci_passthrough_whitelist={"address":"0000:08:00.0","vendor_id":"8086","physical_network":"physnet1"}
pci_passthrough_whitelist={"address":"0000:08:10.0","vendor_id":"8086","physical_network":"physnet1"}
pci_passthrough_whitelist={"address":"0000:08:10.2","vendor_id":"8086","physical_network":"physnet1"}

4 Remove (or comment out) the following. (Note that currently SR-IOV pass-through is only supported with a standard OVS.)

OVS_NUM_HUGEPAGES=8192
OVS_DATAPATH_TYPE=netdev
OVS_GIT_TAG=b35839f3855e3b812709c6ad1c9278f498aa9935

Run stack.sh on both the controller and compute nodes to complete the DevStack installation.

6123 Create VM with Numa Placement and SR-IOV

1 After stacking is successful on both the controller and compute nodes, verify that the PCI pass-through device(s) are in the OpenStack database:

mysql -uroot -ppassword -h 10.11.12.1 nova -e 'select * from pci_devices'

2 The output should show entry(ies) of PCI device(s) similar to the following

| 2014-11-18 194114 | NULL | NULL | 0 | 1 | 3 | 000008100 | 10ed | 8086 | type-VF | pci_0000_08_10_0 | label_8086_10ed | available | phys_function 000008000 | NULL | NULL | 0 |

3 Next, create a flavor, for example:

nova flavor-create numa-flavor 1001 1024 4 1

where:

flavor name = numa-flavor
id = 1001
virtual memory = 1024 MB
virtual disk size = 4 GB
number of virtual CPUs = 1

4 Modify the flavor for NUMA placement with PCI pass-through:

nova flavor-key 1001 set pci_passthrough:alias=niantic:1 hw:numa_nodes=1 hw:numa_cpus.0=0 hw:numa_mem.0=1024

5 To show detailed information of the flavor

nova flavor-show 1001

6 Create a VM numa-vm1 with the flavor numa-flavor under the default project demo. Note that the following example assumes an image fedora-basic and an availability zone zone-04 are already in place (see Section 6112) and that private is the default network for the demo project:

nova boot --image <image-id> --flavor <flavor-id> --availability-zone <zone-name> --nic net-id=<network-id> numa-vm1

where numa-vm1 is the name of instance of the VM to be booted

Access the VM from OpenStack Horizon; the new VM shows two virtual network interfaces. The interface with an SR-IOV VF should show a name of ensX, where X is a number (for example, ens5). If a DHCP server is available for the physical interface (p1p1 in this example), the VF gets an IP address automatically; otherwise, users can assign an IP address to the interface the same way as for a standard network interface.

To verify network connectivity through a VF, users can set up two compute hosts and create a VM on each node. After obtaining IP addresses, the VMs should be able to communicate with each other as over a normal network.
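For example, assuming the VF interface of the second VM was assigned 192.168.1.21 (an address used here only for illustration), connectivity can be checked from the first VM with:

ping -c 4 192.168.1.21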

62 Using OpenDaylight
This section describes how to download, install, and set up an OpenDaylight controller.

621 Preparing the OpenDaylight Controller
1 Download the pre-built OpenDaylight Helium-SR1 distribution:

wget http://nexus.opendaylight.org/content/repositories/opendaylight.release/org/opendaylight/integration/distribution-karaf/0.2.1-Helium-SR1/distribution-karaf-0.2.1-Helium-SR1.tar.gz

2 Extract the archive and cd into it

tar xf distribution-karaf-0.2.1-Helium-SR1.tar.gz
cd distribution-karaf-0.2.1-Helium-SR1

3 Use the binkaraf executable start the Karaf shell

4 Install the required features.

Note: Karaf might take a long time to start, or the feature install might fail, if the host does not have network access. You'll need to set up the appropriate proxy settings.
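The exact set of features depends on the intended use. As an illustrative example only (these feature names are typical for an OVSDB/OpenStack integration on Helium and may need adjusting for other use cases), the Karaf shell can be started and features installed as follows:

./bin/karaf
opendaylight-user@root> feature:install odl-base-all odl-aaa-authn odl-restconf odl-nsf-all odl-adsal-northbound odl-mdsal-apidocs odl-ovsdb-openstack odl-ovsdb-northbound odl-dlux-core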

63 Border Network Gateway
This section describes how to install and run a Border Network Gateway (BNG) on a compute node that has been prepared as described in Section 51 and Section 53. The example interface names from those sections are kept here as well. For simplicity, the BNG uses the handle_none configuration mode, which makes it work as an L2 forwarding engine. The BNG is more complex than this, and users interested in exploring more of its capabilities should read https://01.org/intel-data-plane-performance-demonstrators/quick-overview.

The setup used to test the functionality of the vBNG is described in the following subsections.

631 Installation and Configuration Inside the VM
1 Execute the following command:

yum -y update

2 Disable SELinux

setenforce 0
vi /etc/selinux/config

and change it so that SELINUX=disabled.

3 Disable the firewall

systemctl disable firewalld.service
reboot

4 Edit grub default configuration

vi /etc/default/grub

Add hugepages to it

... noirqbalance intel_idle.max_cstate=0 processor.max_cstate=0 ipv6.disable=1 default_hugepagesz=1G hugepagesz=1G hugepages=2 isolcpus=1,2,3,4

5 Rebuild grub config and reboot the system

grub2-mkconfig -o /boot/grub2/grub.cfg
reboot

6 Verify that hugepages are available in the VM

cat /proc/meminfo
HugePages_Total:    2
HugePages_Free:     2
Hugepagesize:       1048576 kB

7 Add the following to the end of the ~/.bashrc file:

export RTE_SDK=/root/dpdk
export RTE_TARGET=x86_64-native-linuxapp-gcc
export OVS_DIR=/root/ovs
export RTE_UNBIND=$RTE_SDK/tools/dpdk_nic_bind.py
export DPDK_DIR=$RTE_SDK
export DPDK_BUILD=$DPDK_DIR/$RTE_TARGET

8 Re-login or source that file

source ~/.bashrc

9 Install DPDK

git clone http://dpdk.org/git/dpdk
cd dpdk
git checkout v1.7.1
make install T=$RTE_TARGET
modprobe uio
insmod $RTE_SDK/$RTE_TARGET/kmod/igb_uio.ko

10 Check the PCI addresses of the 82599 cards

lspci | grep Network
00:04.0 Ethernet controller: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection (rev 01)
00:05.0 Ethernet controller: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection (rev 01)
00:06.0 Ethernet controller: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection (rev 01)
00:07.0 Ethernet controller: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection (rev 01)

11 Make sure that the correct PCI addresses are listed in the script bind_to_igb_uio.sh.
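As an illustrative sketch only (the PCI addresses below are placeholders taken from the example lspci output in the previous step; adjust them to the ports actually used by the BNG), the script is expected to do something like the following:

#!/bin/bash
# bind_to_igb_uio.sh - bind the 82599 ports to the DPDK igb_uio driver
$RTE_SDK/tools/dpdk_nic_bind.py -b igb_uio 0000:00:06.0 0000:00:07.0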

12 Download BNG packages

wget https://01.org/sites/default/files/downloads/intel-data-plane-performance-demonstrators/dppd-bng-v013.zip

13 Extract DPPD BNG sources

unzip dppd-bng-v013.zip

14 Build BNG DPPD application

yum -y install ncurses-devel
cd dppd-BNG-v013
make

15 Refer to Section 633, "Extra Preparations on the Compute Node," before running the BNG application in the VM inside the compute node.

16 Make sure that the application starts

./build/dppd -f config/handle_none.cfg

The handle_none configuration passes all traffic straight through between ports, which is essentially similar to the L2 forwarding test. The config directory contains additional, more complex BNG configurations and Pktgen scripts. Additional BNG-specific workloads can be found in the dppd-BNG-v013/pktgen-scripts directory.

Following is a sample graphic of the BNG running in a VM with 2 ports

Exit the application by pressing ESC or CTRL-C

Refer to Section 632 regarding installation and running the software traffic generator

For a sanity check, users can use the pktgen wrapper script onps_pktgen-64bytes-UDP-2ports.sh to run Pktgen (on its dedicated server) and test the handle_none throughput for two physical and two virtual ports. You'll need to update the PKTGEN_DIR variable at the top of the file to point to the right directory, which, referring to Section 632, is the following:

PKTGEN_DIR=/home/stack/git/Pktgen-DPDK
./pktgen-64bytes.sh

632 Installation and Configuration of the Back-to-Back Host (Packet Generator)

The back-to-back host can be any Intel® Xeon® processor-based system, or it can be any compute node that has been prepared using the instructions in Section 51 and Section 53. For simplicity, the latter is assumed here. Also assume that the git directory for the stack user is /home/stack/git.

1 In the git directory get the source from Github

git clone https://github.com/Pktgen/Pktgen-DPDK.git
cd Pktgen-DPDK

2 An extra package must be installed for Pktgen to compile correctly

yum -y install libpcap-devel

Pktgen comes with its own distribution of the DPDK sources. This bundled version of DPDK must be used. Note that it contains some Wind River-specific helper libraries that are not in the default DPDK distribution and that Pktgen depends on.

3 The $RTE_TARGET variable must be set to a specific value; otherwise, these libraries will not build.

cd
vi .bashrc

Add the following three lines to the end

export RTE_SDK=$HOME/Pktgen-DPDK/dpdk
export RTE_TARGET=x86_64-pktgen-linuxapp-gcc
export PKTGEN_DIR=$HOME/Pktgen-DPDK

4 Re-login or execute the following command

source ~/.bashrc

5 Build the basic DPDK libraries and extra helpers

cd $RTE_SDK
make install T=$RTE_TARGET

6 Build Pktgen

cd examples/pktgen
make

7 Use the dpdk_nic_bind.py script, adapted to the actual NICs in use, so that both interfaces are bound to igb_uio and DPDK can use them. First check the current binding status:

./tools/dpdk_nic_bind.py --status
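For example, assuming the two ports to be used by Pktgen show up in the status output at 0000:04:00.0 and 0000:04:00.1 (placeholder addresses), they can be bound with:

./tools/dpdk_nic_bind.py -b igb_uio 0000:04:00.0 0000:04:00.1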

8 Use onps_pktgen-64bytes-UDP-2ports.sh from onps_server_1_2.tar.gz.

9 Now run the script as root, after the compute node has been set up as in Section 633, the BNG VM has been prepared as in Section 631, and the BNG has been started inside the VM.

633 Extra Preparations on the Compute Node
1 Do the following as the stack user:

cd /home/stack/devstack
vi local.conf

2 Comment out the following

PHYSICAL_NETWORK=physnet1
OVS_PHYSICAL_BRIDGE=br-p1p1

And at the same time add the following line right below the previous commented ones

OVS_BRIDGE_MAPPINGS=default:br-p1p1,physnet1:br-p1p2

3 Run again as the stack user:

./unstack.sh

./stack.sh

This causes both physical interfaces to come up and get bound to DPDK. Also, a bridge is created on top of each of these interfaces:

ovs-vsctl show
b52bd3ed-0f6c-45b9-ace1-846d901bed64
    Bridge "br-p1p1"
        Port "br-p1p1"
            Interface "br-p1p1"
                type: internal
        Port "p1p1"
            Interface "p1p1"
                type: dpdkphy
                options: {port="0"}
        Port "phy-br-p1p1"
            Interface "phy-br-p1p1"
                type: patch
                options: {peer="int-br-p1p1"}
    Bridge br-int
        fail_mode: secure
        Port "int-br-p1p2"
            Interface "int-br-p1p2"
                type: patch
                options: {peer="phy-br-p1p2"}
        Port "int-br-p1p1"
            Interface "int-br-p1p1"
                type: patch
                options: {peer="phy-br-p1p1"}
        Port br-int
            Interface br-int
                type: internal
    Bridge "br-p1p2"
        Port "phy-br-p1p2"
            Interface "phy-br-p1p2"
                type: patch
                options: {peer="int-br-p1p2"}
        Port "p1p2"
            Interface "p1p2"
                type: dpdkphy
                options: {port="1"}
        Port "br-p1p2"
            Interface "br-p1p2"
                type: internal

4 Move the p1p2 physical port under the same bridge as p1p1

ovs-vsctl del-port p1p2
ovs-vsctl add-port br-p1p1 p1p2 -- set Interface p1p2 type=dpdkphy options:port=1

5 Delete the OpenStack agent:

./rejoin-stack.sh
ctrl-a 1
ctrl-c
ctrl-a d

6 Add the dpdkvhost interfaces for the VM

ovs-vsctl --no-wait add-port br-p1p1 port3 -- set Interface port3 type=dpdkvhost ofport_request=3
ovs-vsctl --no-wait add-port br-p1p1 port4 -- set Interface port4 type=dpdkvhost ofport_request=4

7 Find out the OpenFlow port numbers of the interfaces:

ovs-ofctl show br-p1p1

The output should be similar to the following. Note the number on the left of each interface; it is the OpenFlow port number used in the flow rules below.

OFPT_FEATURES_REPLY (xid=0x2): dpid:0000286031010000
n_tables:254, n_buffers:256
capabilities: FLOW_STATS TABLE_STATS QUEUE_STATS ARP_MATCH_IP
actions: OUTPUT SET_VLAN_VID SET_VLAN_PCP STRIP_VLAN SET_DL_SRC SET_DL_DST SET_NW_SRC SET_NW_DST SET_TP_SRC SET_TP_DST
 1(phy-br-p1p1): addr:9e:ae:92:25:3c:c1 config: 0 state: 0 speed: 0 Mbps now, 0 Mbps max
 2(p1p2): addr:9e:ae:92:25:3c:c1 config: 0 state: 0 speed: 0 Mbps now, 0 Mbps max
 3(port3): addr:9e:ae:92:25:3c:c1 config: 0 state: 0 speed: 0 Mbps now, 0 Mbps max
 4(port4): addr:49:04:ff:7f:00:00 config: 0 state: 0 speed: 0 Mbps now, 0 Mbps max
 16(p1p1): addr:49:04:ff:7f:00:00 config: 0 state: 0 speed: 0 Mbps now, 0 Mbps max
 LOCAL(br-p1p1): addr:9e:ae:92:25:3c:c1 config: 0 state: 0 speed: 0 Mbps now, 0 Mbps max
OFPT_GET_CONFIG_REPLY (xid=0x4): frags=normal miss_send_len=0

8 Clean up the flow table of the bridge

ovs-ofctl del-flows br-p1p1

9 Program the flows so each physical interface forwards the packets to a dpdkvhost interface and the other way round

ovs-ofctl add-flow br-p1p1 in_port=16,dl_type=0x0800,idle_timeout=0,action=output:3
ovs-ofctl add-flow br-p1p1 in_port=3,dl_type=0x0800,idle_timeout=0,action=output:16
ovs-ofctl add-flow br-p1p1 in_port=4,dl_type=0x0800,idle_timeout=0,action=output:2
ovs-ofctl add-flow br-p1p1 in_port=2,dl_type=0x0800,idle_timeout=0,action=output:4
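To verify that the four flows were programmed, the flow table can be dumped with:

ovs-ofctl dump-flows br-p1p1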

10 Users can now spawn their vBNG

qemu-kvm -cpu host -enable-kvm -m 4096 -smp 4,cores=4,threads=1,sockets=1 -name VM1 \
 -hda <path to the VM image file> -mem-path /dev/hugepages -mem-prealloc -vnc :2 -daemonize \
 -net nic,model=virtio,macaddr=00:1e:77:68:09:fd -net tap,ifname=tap1,script=no,downscript=no \
 -netdev type=tap,id=net1,script=no,downscript=no,ifname=port3,vhost=on \
 -device virtio-net-pci,netdev=net1,mac=00:00:01:00:00:01,csum=off,gso=off,guest_tso4=off,guest_tso6=off,guest_ecn=off \
 -netdev type=tap,id=net2,script=no,downscript=no,ifname=port4,vhost=on \
 -device virtio-net-pci,netdev=net2,mac=00:00:01:00:00:02,csum=off,gso=off,guest_tso4=off,guest_tso6=off,guest_ecn=off

Appendix A Additional OpenDaylight Information

This section describes how OpenDaylight can be used in a multi-node setup. Two hosts are used: one running OpenDaylight, the OpenStack controller + compute, and OVS; the second host is a compute node. This section describes how to create a VXLAN tunnel, create VMs, and ping from one VM to another.

Note: Due to a known defect in ODL (https://bugs.opendaylight.org/show_bug.cgi?id=2469), the multi-node setup could not be verified.

Following is a sample local.conf for the OpenDaylight host:

[[local|localrc]]
FORCE=yes

HOST_NAME=<name of this machine>
HOST_IP=<ip of this machine>
HOST_IP_IFACE=<mgmt ip isolated from internet>

PUBLIC_INTERFACE=<isolated IP could be same as HOST_IP_IFACE>
VLAN_INTERFACE=
FLAT_INTERFACE=

ADMIN_PASSWORD=password
MYSQL_PASSWORD=password
DATABASE_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=no-token-password
HORIZON_PASSWORD=password
RABBIT_PASSWORD=password

disable_service n-net
disable_service n-cpu

enable_service q-svc
enable_service q-agt
enable_service q-dhcp
enable_service q-l3
enable_service q-meta
enable_service neutron
enable_service horizon

LOGFILE=$DEST/stack.sh.log
SCREEN_LOGDIR=$DEST/screen
SYSLOG=True
LOGDAYS=1

# ODL start
Q_HOST=$HOST_IP
enable_service odl-server
enable_service odl-compute
ODL_MGR_IP=10.11.10.7
ENABLED_SERVICES+=,n-api,n-crt,n-obj,n-cpu,n-cond,n-sch,n-novnc,n-cauth,n-cauth,nova
ENABLED_SERVICES+=,cinder,c-api,c-vol,c-sch,c-bak

Q_PLUGIN=ml2
Q_AGENT=openvswitch
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch,opendaylight
Q_ML2_PLUGIN_TYPE_DRIVERS=vlan,flat,local
Q_ML2_TENANT_NETWORK_TYPE=vxlan

ENABLE_TENANT_TUNNELS=True
ENABLE_TENANT_VLANS=True

PHYSICAL_NETWORK=physnet1
ML2_VLAN_RANGES=physnet1:1000:1010
OVS_PHYSICAL_BRIDGE=br-

MULTI_HOST=True

[[post-config|$NOVA_CONF]]
# disable nova security groups
[DEFAULT]
firewall_driver=nova.virt.firewall.NoopFirewallDriver
novncproxy_host=0.0.0.0
novncproxy_port=6080

Here is a sample local.conf for the compute node:

[[local|localrc]]

FORCE=yes
MULTI_HOST=True

HOST_NAME=<name of this machine>
HOST_IP=<ip of this machine>
HOST_IP_IFACE=<isolated interface>
SERVICE_HOST_NAME=<name of the controller machine>
SERVICE_HOST=<ip of controller machine>
Q_HOST=$SERVICE_HOST

MYSQL_HOST=$SERVICE_HOST
RABBIT_HOST=$SERVICE_HOST

GLANCE_HOST=$SERVICE_HOST
GLANCE_HOSTPORT=$SERVICE_HOST:9292
KEYSTONE_AUTH_HOST=$SERVICE_HOST
KEYSTONE_SERVICE_HOST=<ip of controller machine>

ADMIN_PASSWORD=password
MYSQL_PASSWORD=password
DATABASE_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=no-token-password
HORIZON_PASSWORD=password
RABBIT_PASSWORD=password

disable_all_services

enable_service rabbit
enable_service n-cpu
enable_service q-agt
enable_service odl-compute

DEST=/opt/stack
LOGFILE=$DEST/stack.sh.log
SCREEN_LOGDIR=$DEST/screen
SYSLOG=True
LOGDAYS=1

ODL_MGR_IP=<ip of controller machine>

Q_PLUGIN=ml2
Q_AGENT=openvswitch
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch,opendaylight
Q_ML2_PLUGIN_TYPE_DRIVERS=vxlan
OVS_NUM_HUGEPAGES=8192
OVS_DATAPATH_TYPE=netdev

OVDK_OVS_GIT_TAG=

ENABLE_TENANT_TUNNELS=True
ENABLE_TENANT_VLANS=True

Q_ML2_TENANT_NETWORK_TYPE=vxlan
ML2_VLAN_RANGES=physnet1:1000:1010
PHYSICAL_NETWORK=physnet1
OVS_PHYSICAL_BRIDGE=br-p1p1

[[post-config|$NOVA_CONF]]
[DEFAULT]
firewall_driver=nova.virt.firewall.NoopFirewallDriver
vnc_enabled=True
vncserver_listen=0.0.0.0
vncserver_proxyclient_address=$HOST_IP

A1 Create VMs using DevStack Horizon GUI

After starting the OpenDaylight controller as described in Section 621, run stack.sh on the controller and compute nodes.

Log in to http://<control node ip address>:8080 to start the Horizon GUI.

Verify that the node shows up in the following GUI

Create a new Vxlan network

1 Click on the Networks tab

2 Click on the Create Network button

3 Enter the Network name then click Next

4 Enter the subnet information then click Next

5 Add additional information then click Next

6 Click the Create button

7 Create a VM instance by clicking the Launch Instances button

8 Click on the Details tab to enter VM details

9 Click on the Networking tab then enter network information

VMs will now be created

Instead of disabling the OVSDB neutron service by removing the OVSDB neutron bundle file, it is possible to disable the bundle from the OSGi console. However, there does not appear to be a way to make this persistent, so it must be done each time the controller restarts.

Once the controller is up and running, connect to the OSGi console. The ss command displays all of the bundles that are installed and their status; adding a string filters the list of bundles. List the OVSDB bundles:

osgi> ss ovs
Framework is launched

id    State       Bundle
106   ACTIVE      org.opendaylight.ovsdb.northbound_0.5.0
112   ACTIVE      org.opendaylight.ovsdb_0.5.0
262   ACTIVE      org.opendaylight.ovsdb.neutron_0.5.0

Note that there are three OVSDB bundles running (the OVSDB neutron bundle has not been removed in this case)

Disable the OVSDB neutron bundle and then list the OVSDB bundles again

osgi> stop 262
osgi> ss ovs
Framework is launched

id    State       Bundle
106   ACTIVE      org.opendaylight.ovsdb.northbound_0.5.0
112   ACTIVE      org.opendaylight.ovsdb_0.5.0
262   RESOLVED    org.opendaylight.ovsdb.neutron_0.5.0

Now the OVSDB neutron bundle is in the RESOLVED state which means that it is not active

Appendix B BNG as an Appliance

Please download the latest BNG application from https://01.org/intel-data-plane-performance-demonstrators/downloads. More details about how the BNG works can be found at https://01.org/intel-data-plane-performance-demonstrators/quick-overview.

Appendix C Glossary

Acronym Description

ATR Application Targeted Routing

COTS Commercial Off‐The-Shelf

DPI Deep Packet Inspection

FCS Frame Check Sequence

GRE Generic Routing Encapsulation

GRO Generic Receive Offload

IOMMU InputOutput Memory Management Unit

Kpps Kilo packets per second

KVM Kernel-based Virtual Machine

LRO Large Receive Offload

MSI Message Signaling Interrupt

MPLS Multi-protocol Label Switching

Mpps Million packets per second

NIC Network Interface Card

pps Packets per second

QAT Quick Assist Technology

QinQ VLAN stacking (802.1ad)

RA Reference Architecture

RSC Receive Side Coalescing

RSS Receive Side Scaling

SP Service Provider

SR-IOV Single root IO Virtualization

TCO Total Cost of Ownership

TSO TCP Segmentation Offload

Appendix D References

Document Name Source

Internet Protocol version 4: http://www.ietf.org/rfc/rfc791.txt

Internet Protocol version 6: http://www.faqs.org/rfc/rfc2460.txt

Intel® 82599 10 Gigabit Ethernet Controller Datasheet: http://www.intel.com/content/www/us/en/ethernet-controllers/82599-10-gbe-controller-datasheet.html

Intel DDIO: https://www-ssl.intel.com/content/www/us/en/io/direct-data-i-o.html

Bandwidth Sharing Fairness: http://www.intel.com/content/www/us/en/network-adapters/10-gigabit-network-adapters/10-gbe-ethernet-flexible-port-partitioning-brief.html

Design Considerations for efficient network applications with Intel® multi-core processor-based systems on Linux: http://download.intel.com/design/intarch/papers/324176.pdf

OpenFlow with Intel 82599: http://ftp.sunet.se/pub/Linux/distributions/bifrost/seminars/workshop-2011-03-31/Openflow_1103031.pdf

Wu, W., DeMar, P. & Crawford, M. (2012). A Transport-Friendly NIC for Multicore/Multiprocessor Systems. IEEE Transactions on Parallel and Distributed Systems, vol. 23, no. 4, April 2012: http://lss.fnal.gov/archive/2010/pub/fermilab-pub-10-327-cd.pdf

Why Does Flow Director Cause Packet Reordering?: http://arxiv.org/ftp/arxiv/papers/1106/1106.0443.pdf

IA packet processing: http://www.intel.com/p/en_US/embedded/hwsw/technology/packet-processing

High Performance Packet Processing on Cloud Platforms using Linux with Intel® Architecture: http://networkbuilders.intel.com/docs/network_builders_RA_packet_processing.pdf

Packet Processing Performance of Virtualized Platforms with Linux and Intel® Architecture: http://networkbuilders.intel.com/docs/network_builders_RA_NFV.pdf

DPDK: http://www.intel.com/go/dpdk

Intel® DPDK Accelerated vSwitch: https://01.org/packet-processing

LEGAL

By using this document in addition to any agreements you have with Intel you accept the terms set forth below

You may not use or facilitate the use of this document in connection with any infringement or other legal analysis concerning Intel products described herein You agree to grant Intel a non-exclusive royalty-free license to any patent claim thereafter drafted which includes subject matter disclosed herein

INFORMATION IN THIS DOCUMENT IS PROVIDED IN CONNECTION WITH INTEL PRODUCTS NO LICENSE EXPRESS OR IMPLIED BY ESTOPPEL OR OTHERWISE TO ANY INTELLECTUAL PROPERTY RIGHTS IS GRANTED BY THIS DOCUMENT EXCEPT AS PROVIDED IN INTELS TERMS AND CONDITIONS OF SALE FOR SUCH PRODUCTS INTEL ASSUMES NO LIABILITY WHATSOEVER AND INTEL DISCLAIMS ANY EXPRESS OR IMPLIED WARRANTY RELATING TO SALE ANDOR USE OF INTEL PRODUCTS INCLUDING LIABILITY OR WARRANTIES RELATING TO FITNESS FOR A PARTICULAR PURPOSE MERCHANTABILITY OR INFRINGEMENT OF ANY PATENT COPYRIGHT OR OTHER INTELLECTUAL PROPERTY RIGHT

Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors

Performance tests such as SYSmark and MobileMark are measured using specific computer systems components software operations and functions Any change to any of those factors may cause the results to vary You should consult other information and performance tests to assist you in fully evaluating your contemplated purchases including the performance of that product when combined with other products

The products described in this document may contain design defects or errors known as errata which may cause the product to deviate from published specifications Current characterized errata are available on request Contact your local Intel sales office or your distributor to obtain the latest specifications and before placing your product order

Intel technologies may require enabled hardware specific software or services activation Check with your system manufacturer or retailer Tests document performance of components on a particular test in specific systems Differences in hardware software or configuration will affect actual performance Consult other sources of information to evaluate performance as you consider your purchase For more complete information about performance and benchmark results visit httpwwwintelcomperformance

All products computer systems dates and figures specified are preliminary based on current expectations and are subject to change without notice Results have been estimated or simulated using internal Intel analysis or architecture simulation or modeling and provided to you for informational purposes Any differences in your system hardware software or configuration may affect your actual performance

No computer system can be absolutely secure Intel does not assume any liability for lost or stolen data or systems or any damages resulting from such losses

Intel does not control or audit third-party web sites referenced in this document You should visit the referenced web site and confirm whether referenced data are accurate

Intel Corporation may have patents or pending patent applications trademarks copyrights or other intellectual property rights that relate to the presented subject matter The furnishing of documents and other materials and information does not provide any license express or implied by estoppel or otherwise to any such patents trademarks copyrights or other intellectual property rights

© 2014 Intel Corporation. All rights reserved. Intel, the Intel logo, Core, Xeon, and others are trademarks of Intel Corporation in the U.S. and/or other countries. Other names and brands may be claimed as the property of others.

20 Summary

The Intel ONP Server uses Open Source software to help accelerate SDN and NFV commercialization with the latest Intel Architecture Communications Platform

This document describes how to setup and configure controller and compute nodes for evaluating and developing NFVSDN solutions using the Intelreg Open Network Platform ingredients

Platform hardware is based on an Intel® Xeon® DP server with the following:

• Intel® Xeon® Processor Series E5-2697 v3

• Intel® 82599 10 GbE Controller

The host operating system is Fedora 20 with Qemu-kvm virtualization technology. Software ingredients include Data Plane Development Kit (DPDK), Open vSwitch, Intel® DPDK Accelerated vSwitch, OpenStack, and OpenDaylight.

Figure 2-1 Intel ONP Server - Hardware and Software Ingredients

Figure 2-2 shows a generic SDN/NFV setup. In this configuration, the orchestrator and controller (management and control plane) and the compute node (data plane) run on different server nodes. Note that many variations of this setup can be deployed.

The test cases described in this document were designed to illustrate certain baseline performance and functionality using the specified ingredients, configurations, and test methodology. A simple network topology was used, as shown in Figure 2-2.

Test cases are designed to:

• Baseline packet processing (such as data plane) performance with host and VM configurations

• Verify communication between controller and compute nodes

• Validate basic controller functionality

Figure 2-2 Generic Setup with Controller and Two Compute Nodes

21 Network Services Examples
The following examples of network services are included as use cases that have been tested with the Intel® Open Network Platform Server Reference Architecture.

211 Suricata (Next Generation IDS/IPS engine)
Suricata is a high-performance Network IDS, IPS, and Network Security Monitoring engine developed by the OISF, its supporting vendors, and the community.

http://suricata-ids.org

212 vBNG (Broadband Network Gateway)
Intel Data Plane Performance Demonstrators – Border Network Gateway (BNG) using DPDK:

https://01.org/intel-data-plane-performance-demonstrators/downloads/bng-application-v013

A Broadband (or Border) Network Gateway may also be known as a Broadband Remote Access Server (BRAS) and routes traffic to and from broadband remote access devices such as digital subscriber line access multiplexers (DSLAM) This network function is included as an example of a workload that can be virtualized on the Intel ONP Server

Additional information on the performance characterization of this vBNG implementation can be found at

http://networkbuilders.intel.com/docs/Network_Builders_RA_vBRAS_Final.pdf

Refer to Border Network Gateway for information on setting up and testing the vBNG application with Intelreg DPDK Accelerated vSwitch or to Appendix B for more information on running the BNG as an appliance

30 Hardware Components

Table 3-1 Hardware Ingredients (Grizzly Pass)

Item Description Notes

Platform Intel® Server Board 2U 8x3.5 SATA 2x750W 2xHS Rails Intel R2308GZ4GC

Grizzly Pass Xeon DP Server (2 CPU sockets), 240 GB SSD 2.5in SATA 6 Gb/s Intel Wolfsville SSDSC2BB240G401 DC S3500 Series

Processors Intel® Xeon® Processor Series E5-2680 v2 LGA2011 2.8 GHz 25 MB 115 W 10 cores

Ivy Bridge Socket-R (EP) 10 Core 2.8 GHz 115 W 2.5 MB per core LLC 8.0 GT/s QPI DDR3-1867 HT turbo. Long product availability.

Cores 10 physical cores/CPU, 20 hyper-threaded cores per CPU for 40 total cores

Memory 8 GB 1600 Reg ECC 1.5 V DDR3 Kingston KVR16R11S48I Romley

64 GB RAM (8x 8 GB)

NICs (82599) 2x Intel® 82599 10 GbE Controller (Niantic) NICs are on socket zero (3 PCIe slots available on socket 0)

BIOS SE5C60086B02010002082220131453, Release Date 08/22/2013, BIOS Revision 4.6

Intel® Virtualization Technology for Directed I/O (Intel® VT-d), Hyper-threading enabled

Table 3-2 Hardware Ingredients (Wildcat Pass)

Item Description Notes

Platform Intel® Server Board S2600WTT 1100 W power supply; Wildcat Pass Xeon DP Server (2 CPU sockets), 120 GB SSD 2.5in SATA 6 Gb/s Intel Wolfsville SSDSC2BB120G4

Processors Intel® Xeon® Processor Series E5-2697 v3 2.6 GHz 25 MB 145 W 14 cores

Haswell 14 Core 2.6 GHz 145 W 35 MB total cache per processor 9.6 GT/s QPI DDR4-1600/1866/2133

Cores 14 physical cores/CPU, 28 hyper-threaded cores per CPU for 56 total cores

Memory 8 GB DDR4 RDIMM Crucial CT8G4RFS423, 64 GB RAM (8x 8 GB)

NICs (82599) 2x Intel® 82599 10 GbE Controller (Niantic) NICs are on socket zero

BIOS GRNDSDP186B0038R011409040644, Release Date 09/04/2014

Intel® Virtualization Technology for Directed I/O (Intel® VT-d) enabled only for SR-IOV PCI pass-through tests; Hyper-threading enabled but disabled for benchmark testing

40 Software Versions

Table 4-1 Software Versions

Software Component Function VersionConfiguration

Fedora 20 x86_64 Host OS 3.15.6-200.fc20.x86_64

Qemu-kvm Virtualization technology Modified QEMU 1.6.2 (bundled with Intel® DPDK Accelerated vSwitch)

Data Plane Development Kit (DPDK)

Network stack bypass and libraries for packet processing; includes user space poll mode drivers

1.7.1

Intel® DPDK Accelerated vSwitch

vSwitch v1.2.0, commit id 6210bb0a6139b20283de115f87aa7a381b04670f

Open vSwitch vSwitch Open vSwitch v2.3, commit id b35839f3855e3b812709c6ad1c9278f498aa9935

OpenStack SDN Orchestrator Juno Release + Intel patches (openstack_ovdkl02-907.zip)

DevStack Tool for OpenStack deployment

https://github.com/openstack-dev/devstack.git, commit id d6f700db33aeab68916156a98971aef8cfa53a2e

OpenDaylight SDN Controller Helium-SR1

Suricata IPS application Suricata v2.0.4 (current Fedora 20 package)

BNG DPPD Broadband Network Gateway DPDK Performance Demonstrator Application

DPPD v0.13, https://01.org/intel-data-plane-performance-demonstrators/downloads

Pktgen Software Network Packet Generator v2.7.7

41 Obtaining Software IngredientsTable 4-2 Software Ingredients

Software Component

Software Sub-components Patches Location Comments

Fedora 20 http://download.fedoraproject.org/pub/fedora/linux/releases/20/Fedora/x86_64/iso/Fedora-20-x86_64-DVD.iso

Standard Fedora 20 iso image

Data Plane Development Kit (DPDK)

DPDK poll mode driver sample apps (bundled)

http://dpdk.org/git/dpdk, commit id 99213f3827bad956d74e2259d06844012ba287a4

All sub-components in one zip file

Intelreg DPDK Accelerated vSwitch (OVDK)

dpdk-ovs qemu ovs-db vswitchd ovs_client (bundled)

https://github.com/01org/dpdk-ovs.git, commit id 6210bb0a6139b20283de115f87aa7a381b04670f

v1.2.0

Open vSwitch https://github.com/openvswitch/ovs.git, commit id b35839f3855e3b812709c6ad1c9278f498aa9935

OpenStack Juno release To be deployed using DevStack(see following row)

Three patches downloaded as one tarball Then follow the instructions to deploy the Nodes

DevStack Patches for DevStack and Nova

https://github.com/openstack-dev/devstack.git, commit id d6f700db33aeab68916156a98971aef8cfa53a2e. Then apply to that commit the patches in https://download.01.org/packet-processing/ONPS1.2/openstack_ovdkl02-907.zip

Two patches downloaded as one tarball Then follow the instructions to deploy

OpenDaylight http://nexus.opendaylight.org/content/repositories/opendaylight.release/org/opendaylight/integration/distribution-karaf/0.2.1-Helium-SR1/distribution-karaf-0.2.1-Helium-SR1.tar.gz

Intelreg ONP Server Release 12 Script

Helper scripts to setup SRT 12 using DevStack

https://download.01.org/packet-processing/ONPS1.2/onps_server_1_2.tar.gz

BNG DPPD Broadband Network Gateway DPDK Performance

https://01.org/intel-data-plane-performance-demonstrators/dppd-bng-v013.zip

PktGen Software Network Package Generator

https://github.com/Pktgen/Pktgen-DPDK.git, commit id 5e8633c99e9771467dc26b64a4ff232c7e9fba2a

BNG Helper scripts

Intelreg ONP for Server Configuration Scripts for vBNG

https://01.org/sites/default/files/page/vbng-scripts.tgz

Suricata Package from Fedora 20 yum install suricata

50 Installation and Configuration Guide

This section describes the installation and configuration instructions to prepare the controller and compute nodes

51 Instructions Common to Compute and Controller Nodes

This section describes how to prepare both the controller and compute nodes with the right BIOS settings and operating system installation The preferred operating system is Fedora 20 although it is considered relatively easy to use this solutions guide for other Linux distributions

511 BIOS Settings
Table 5-1 BIOS Settings

Configuration Setting forController Node

Setting forCompute Node

Enhanced Intel SpeedStep Enabled Disabled

Processor C3 Disabled Disabled

Processor C6 Disabled Disabled

Intel® Virtualization Technology for Directed I/O (Intel® VT-d) Disabled Enabled (OpenStack NUMA placement only)

Intel Hyper-Threading Technology (HTT) Enabled Disabled

MLC Streamer Enabled Enabled

MLC Spatial Prefetcher Enabled Enabled

DCU Instruction Prefetcher Enabled Enabled

Direct Cache Access (DCA) Enabled Enabled

CPU Power and Performance Policy Performance Performance

Intel Turbo boost Enabled Off

Memory RAS and Performance Configuration -> NUMA Optimized Enabled Enabled

512 Operating System Installation and Configuration
Following are some generic instructions for installing and configuring the operating system. Other ways of installing the operating system, such as network installation, PXE boot installation, or USB key installation, are not described in this solutions guide.

5121 Getting the Fedora 20 DVD

1 Download the 64-bit Fedora 20 DVD (not Fedora 20 Live Media) from the following site

http://fedoraproject.org/en/get-fedora#formats

or from direct URL

http://download.fedoraproject.org/pub/fedora/linux/releases/20/Fedora/x86_64/iso/Fedora-20-x86_64-DVD.iso

2 Burn the ISO file to DVD and create an installation disk

5122 Fedora 20 Installation

Use the DVD to install Fedora 20 During the installation click Software selection then choose the following

1 C Development Tool and Libraries

2 Development Tools

Also create a user stack and check the box Make this user administrator during the installation The user stack is used in OpenStack installation

Note: Please make sure to download and use the onps_server_1_2.tar.gz tarball. Start with the README file. You'll get instructions on how to use Intel's scripts to automate most of the installation steps described in this section, which saves time. When using Intel's scripts, you can jump to Section 54 after installing OpenDaylight based on the instructions described in Section 621.

5123 Additional Packages Installation and Upgrade

Some packages are not installed with the standard Fedora 20 installation but are required by Intelreg Open Network Platform Software (ONPS) components These packages should be installed by the user

git ntp patch socat python-passlib libxslt-devel libffi-devel fuse-devel gluster python-cliff

ONPS supports Fedora kernel 3.15.6, which is newer than the native Fedora 20 kernel 3.11.10. To upgrade to 3.15.6, follow these steps:

1 Download kernel packages

wget https://kojipkgs.fedoraproject.org/packages/kernel/3.15.6/200.fc20/x86_64/kernel-3.15.6-200.fc20.x86_64.rpm

wget https://kojipkgs.fedoraproject.org/packages/kernel/3.15.6/200.fc20/x86_64/kernel-devel-3.15.6-200.fc20.x86_64.rpm

wget https://kojipkgs.fedoraproject.org/packages/kernel/3.15.6/200.fc20/x86_64/kernel-modules-extra-3.15.6-200.fc20.x86_64.rpm

2 Install kernel packages

rpm -i kernel-3.15.6-200.fc20.x86_64.rpm
rpm -i kernel-devel-3.15.6-200.fc20.x86_64.rpm
rpm -i kernel-modules-extra-3.15.6-200.fc20.x86_64.rpm

3 Reboot the system to allow booting into the 3.15.6 kernel.

Note ONPS depends on libraries provided by your Linux distribution As such it is recommended that you regularly update your Linux distribution with the latest bug fixes and security patches to reduce the risk of security vulnerabilities in your systems

After installing the required packages the operating system should be updated with the following command

yum update -y

This command upgrades to the latest kernel that Fedora supports In order to maintain kernel version (3156) the yum configuration file needs modified with this command

echo "exclude=kernel*" >> /etc/yum.conf

before running yum update

After the update completes the system needs to be rebooted

5124 Disable and Enable Services

For OpenStack the following services were disabled selinux firewall and NetworkManager Run the following commands

sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
systemctl disable firewalld.service
systemctl disable NetworkManager.service

The following services should be enabled ntp sshd and network Run the following commands

systemctl enable ntpd.service
systemctl enable ntpdate.service
systemctl enable sshd.service
chkconfig network on

It is important to keep the timing synchronized between all nodes, and it is also necessary to use a known NTP server for all nodes. Users can edit /etc/ntp.conf to add a new server and remove default servers. The following example replaces a default NTP server with a local NTP server 10.0.0.12 and comments out other default servers:

sed -i 's/server 0.fedora.pool.ntp.org iburst/server 10.0.0.12/g' /etc/ntp.conf
sed -i 's/server 1.fedora.pool.ntp.org iburst/# server 1.fedora.pool.ntp.org iburst/g' /etc/ntp.conf
sed -i 's/server 2.fedora.pool.ntp.org iburst/# server 2.fedora.pool.ntp.org iburst/g' /etc/ntp.conf
sed -i 's/server 3.fedora.pool.ntp.org iburst/# server 3.fedora.pool.ntp.org iburst/g' /etc/ntp.conf
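After restarting the ntpd service, the NTP peers in use can be verified with, for example:

ntpq -p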

52 Controller Node Setup
This section describes the controller node setup. It is assumed that the user successfully followed the operating system installation and configuration sections.

Note: Make sure to download and use the onps_server_1_2.tar.gz tarball. Start with the README file. You'll get instructions on how to use Intel's scripts to automate most of the installation steps described in this section, which saves time.

521 OpenStack (Juno)
This section documents features and limitations that are supported with the Intel® DPDK Accelerated vSwitch and OpenStack Juno.

5211 Network Requirements

General

At least two networks are required to build OpenStack infrastructure in a lab environment One network is used to connect all nodes for OpenStack management (management network) and the other one is a private network exclusively for an OpenStack internal connection (tenant network) between instances (or virtual machines)

One additional network is required for Internet connectivity as installing OpenStack requires pulling packages from various sourcesrepositories on the Internet

Some users might want to have Internet andor external connectivity for OpenStack instances (virtual machines) In this case an optional network can be used

The assumption is that the targeting OpenStack infrastructure contains multiple nodes one is controller node and one or more are compute node(s)

Network Configuration Example

The following is an example of how to configure networks for OpenStack infrastructure The example uses four network interfaces as follows

• ens2f1: For Internet network — used to pull all necessary packages/patches from repositories on the Internet; configured to obtain a DHCP address.

• ens2f0: For Management network — used to connect all nodes for OpenStack management; configured to use network 10.11.0.0/16.

• p1p1: For Tenant network — used for OpenStack internal connections for virtual machines; configured with no IP address.

• p1p2: For Optional External network — used for virtual machine Internet/external connectivity; configured with no IP address. This interface is only needed on the controller node if an external network is configured; it is not needed on the compute node.

Note that among these interfaces, the interface for the virtual network (in this example p1p1) must be an 82599 port, because it is used for DPDK and the Intel® DPDK Accelerated vSwitch. Also note that a static IP address should be used for the management network interface.

In Fedora 20 the network configuration files are located at

/etc/sysconfig/network-scripts

To configure a network on the host system edit the following network configuration files

ifcfg-ens2f1:
DEVICE=ens2f1
TYPE=Ethernet
ONBOOT=yes
BOOTPROTO=dhcp

ifcfg-ens2f0:
DEVICE=ens2f0
TYPE=Ethernet
ONBOOT=yes
BOOTPROTO=static
IPADDR=10.11.12.11
NETMASK=255.255.0.0

ifcfg-p1p1:
DEVICE=p1p1
TYPE=Ethernet
ONBOOT=yes
BOOTPROTO=none

ifcfg-p1p2:
DEVICE=p1p2
TYPE=Ethernet
ONBOOT=yes
BOOTPROTO=none

Note: Do not configure an IP address for p1p1 (the 10 Gb/s interface); otherwise, DPDK does not work when binding the driver during OpenStack Neutron installation.

Note: 10.11.12.11 and 255.255.0.0 are the static IP address and netmask for the management network. It is necessary to have a static IP address on this subnet. The IP address 10.11.12.11 is just an example.

5212 Storage Requirements

By default, DevStack uses block storage (Cinder) with a volume group stack-volumes. If not specified, stack-volumes is created with 10 GB of space from a local file system. Note that stack-volumes is the name of the volume group, not of a single volume; it can contain more than one volume.

The following example shows how to use spare local disks /dev/sdb and /dev/sdc to form stack-volumes on a controller node by running the following commands:

pvcreate /dev/sdb
pvcreate /dev/sdc
vgcreate stack-volumes /dev/sdb /dev/sdc
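The resulting volume group can be verified with, for example:

vgs stack-volumes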

5213 OpenStack Installation Procedures

General

DevStack is used to deploy OpenStack in this example The following procedure uses an actual example of an installation performed in an Intel test lab consisting of one controller node (controller) and one compute node (compute)

Controller Node Installation Procedures

The following example uses a host for controller node installation with the following

• Hostname: sdnlab-k01

• Internet network IP address: obtained from DHCP server

• OpenStack Management IP address: 10.11.12.1

• User/password: stack/stack

Root User Actions

Login as su or root user and perform the following

1 Add stack user to sudoer list

echo "stack ALL=(ALL) NOPASSWD: ALL" >> /etc/sudoers

2 Edit /etc/libvirt/qemu.conf, adding or modifying the following lines:

cgroup_controllers = [ "cpu", "devices", "memory", "blkio", "cpuset", "cpuacct" ]

cgroup_device_acl = [
   "/dev/null", "/dev/full", "/dev/zero",
   "/dev/random", "/dev/urandom",
   "/dev/ptmx", "/dev/kvm", "/dev/kqemu",
   "/dev/rtc", "/dev/hpet", "/dev/net/tun",
   "/mnt/huge", "/dev/vhost-net"
]

hugetlbfs_mount = "/mnt/huge"

3 Restart the libvirt service and make sure libvirtd is active:

systemctl restart libvirtd.service
systemctl status libvirtd.service

Stack User Actions

1 Log in as the stack user.

2 Configure the appropriate proxies (yum, http, https, and git) for package installation and make sure these proxies are functional. Note that on the controller node, localhost and its IP address should be included in the no_proxy setup (for example, export no_proxy=localhost,10.11.12.1).
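A minimal sketch of such a proxy setup in the stack user's shell profile follows; the proxy host and port (proxy.example.com:8080) are placeholders and not part of the original procedure:

export http_proxy=http://proxy.example.com:8080
export https_proxy=http://proxy.example.com:8080
export no_proxy=localhost,10.11.12.1
git config --global http.proxy http://proxy.example.com:8080

Yum can be pointed at the same proxy by adding a proxy=http://proxy.example.com:8080 line to /etc/yum.conf.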

3 Intel® DPDK Accelerated vSwitch patches for OpenStack

The zip file openstack_ovdkl02-907.zip contains the necessary patches for OpenStack; this support is not currently native to OpenStack. The file can be downloaded from:

https://01.org/sites/default/files/page/openstack_ovdkl02-907.zip

Place the file in the /home/stack directory and unzip it. Three patch files, devstack.patch, nova.patch, and neutron.patch, will be present after the unzip:

cd /home/stack
wget https://01.org/sites/default/files/page/openstack_ovdkl02-907.zip
unzip openstack_ovdkl02-907.zip

4 Download the DevStack source:

git clone https://github.com/openstack-dev/devstack.git

5 Check out DevStack with Intel® DPDK Accelerated vSwitch support and patch it:

cd /home/stack/devstack
git checkout d6f700db33aeab68916156a98971aef8cfa53a2e
patch -p1 < /home/stack/devstack.patch


6 Download and patch Nova and Neutron:

sudo mkdir /opt/stack
sudo chown stack:stack /opt/stack
cd /opt/stack
git clone https://github.com/openstack/nova.git
git clone https://github.com/openstack/neutron.git
cd /opt/stack/nova
git checkout b7738bfb6c2f271d047e8f20c0b74ef647367111
patch -p1 < /home/stack/nova.patch
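The neutron.patch from the same archive is applied in the same manner; the following is a minimal sketch, assuming the patch applies cleanly to the cloned Neutron tree (the original text does not list a specific Neutron commit to check out):

cd /opt/stack/neutron
patch -p1 < /home/stack/neutron.patch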

7 Create a local.conf file in /home/stack/devstack.

8 Pay attention to the following in the local.conf file:

a Use Rabbit for messaging services (Rabbit is on by default). In the past, Fedora only supported QPID for OpenStack; now it only supports Rabbit.

b Explicitly disable the Nova compute service on the controller, because it is enabled by default:

disable_service n-cpu

c To use Open vSwitch, specify it in the ML2 plug-in configuration:

Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch

d Explicitly disable tenant tunneling and enable tenant VLANs, because tunneling is used by default:

ENABLE_TENANT_TUNNELS=False
ENABLE_TENANT_VLANS=True

e A sample local.conf file for the controller node follows:

# Controller node
[[local|localrc]]

FORCE=yes
ADMIN_PASSWORD=password
MYSQL_PASSWORD=password
DATABASE_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=no-token-password
HORIZON_PASSWORD=password
RABBIT_PASSWORD=password

disable_service n-net
disable_service n-cpu
enable_service q-svc
enable_service q-agt
enable_service q-dhcp
enable_service q-l3
enable_service q-meta
enable_service neutron
enable_service horizon

DEST=/opt/stack
LOGFILE=$DEST/stack.sh.log
SCREEN_LOGDIR=$DEST/screen
SYSLOG=True
LOGDAYS=1

HOST_IP_IFACE=ens2f0
PUBLIC_INTERFACE=p1p2
VLAN_INTERFACE=p1p1
FLAT_INTERFACE=p1p1


Q_AGENT=openvswitch
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch
Q_ML2_PLUGIN_TYPE_DRIVERS=vlan,flat,local

ENABLE_TENANT_TUNNELS=False
ENABLE_TENANT_VLANS=True
PHYSICAL_NETWORK=physnet1
ML2_VLAN_RANGES=physnet1:1000:1010
OVS_PHYSICAL_BRIDGE=br-p1p1
MULTI_HOST=True

[[post-config|$NOVA_CONF]]
[DEFAULT]
firewall_driver=nova.virt.firewall.NoopFirewallDriver
novncproxy_host=0.0.0.0
novncproxy_port=6080

9 Install DevStack:

cd /home/stack/devstack
./stack.sh

10 For a successful installation, the following shows at the end of the screen output:

stack.sh completed in XXX seconds

where XXX is the number of seconds.
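As a quick sanity check (not part of the original procedure), the DevStack credentials can be sourced and the registered services listed:

source /home/stack/devstack/openrc admin admin
nova service-list
neutron agent-list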

11 For the controller node only, add the physical port(s) to the bridge(s) created by the DevStack installation. The following example can be used to configure the two bridges br-p1p1 (for the virtual network) and br-ex (for the external network):

sudo ovs-vsctl add-port br-p1p1 p1p1
sudo ovs-vsctl add-port br-ex p1p2
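To confirm that the ports were added, the bridge layout can be inspected (a quick check, not part of the original procedure; output varies with the setup):

sudo ovs-vsctl show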

12 Make sure the proper VLANs are created in the switch connecting physical port p1p1. For example, the previous local.conf specifies a VLAN range of 1000-1010, so matching VLANs 1000 to 1010 should be configured in the switch.


53 Compute Node Setup

This section describes how to complete the setup of the compute nodes. It is assumed that the user has successfully completed the BIOS settings and operating system installation and configuration sections.

Note: Please make sure to download and use the onps_server_1_2.tar.gz tarball. Start with the README file; it gives instructions on how to use Intel's scripts to automate most of the installation steps described in this section, which saves time.

531 Host Configuration

5311 Using DevStack to Deploy vSwitch and OpenStack Components

General

Deploying OpenStack and the Intel® DPDK Accelerated vSwitch using DevStack on a compute node follows the same procedures as on the controller node. Differences include:

• Required services are Nova compute, Neutron agent, and Rabbit.

• The Intel® DPDK Accelerated vSwitch is used in place of Open vSwitch for the Neutron agent.

Compute Node Installation Example

The following example uses a host for compute node installation with the following:

• Hostname: sdnlab-k02

• Lab network IP address: obtained from DHCP server

• OpenStack management IP address: 10.11.12.2

• User/password: stack/stack

Note the following:

• no_proxy setup: localhost and its IP address should be included in the no_proxy setup. In addition, the hostname and IP address of the controller node should also be included. For example:

export no_proxy=localhost,10.11.12.2,sdnlab-k01,10.11.12.1

• Differences in the local.conf file:

- The service host is the controller, as are the other OpenStack servers such as MySQL, Rabbit, Keystone, and Image; therefore they should be spelled out. Using the controller node example from the previous section, the service host and its IP address should be:

SERVICE_HOST_NAME=sdnlab-k01
SERVICE_HOST=10.11.12.1

- The only OpenStack services required on compute nodes are messaging, Nova compute, and Neutron agent, so the local.conf might look like:

disable_all_services
enable_service rabbit
enable_service n-cpu
enable_service q-agt


- The user has the option to use either ovdk or openvswitch for the Neutron agent:

Q_AGENT=ovdk

or

Q_AGENT=openvswitch

Note: For openvswitch, the user can specify regular or accelerated Open vSwitch (accelerated OVS). If accelerated OVS is used, the following setting should be added:

OVS_DATAPATH_TYPE=netdev

Note: If both are specified in the same local.conf file, the latter overwrites the former.

- For the OVDK or accelerated OVS huge pages setting, specify the number of huge pages to be allocated and the mount point (default is /mnt/huge):

OVDK_NUM_HUGEPAGES=8192

or

OVS_NUM_HUGEPAGES=8192

- For this version, Intel uses specific versions of OVDK and accelerated OVS from their respective repositories. Specify the following in the local.conf file if OVDK or accelerated OVS is used:

OVDK_GIT_TAG=6210bb0a6139b20283de115f87aa7a381b04670f
OVS_GIT_TAG=b35839f3855e3b812709c6ad1c9278f498aa9935

- Binding the physical port to the bridge is done through the following line in local.conf. For example, to bind port p1p1 to bridge br-p1p1, use:

OVS_PHYSICAL_BRIDGE=br-p1p1

- A sample local.conf file for a compute node with the ovdk agent follows:

# Compute node
[[local|localrc]]

FORCE=yes
MULTI_HOST=True

HOST_NAME=$(hostname)
HOST_IP=10.11.12.2
HOST_IP_IFACE=ens2f0
SERVICE_HOST_NAME=10.11.12.1
SERVICE_HOST=10.11.12.1

MYSQL_HOST=$SERVICE_HOST
RABBIT_HOST=$SERVICE_HOST

GLANCE_HOST=$SERVICE_HOST
GLANCE_HOSTPORT=$SERVICE_HOST:9292
KEYSTONE_AUTH_HOST=$SERVICE_HOST
KEYSTONE_SERVICE_HOST=$SERVICE_HOST

ADMIN_PASSWORD=password
MYSQL_PASSWORD=password
DATABASE_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=no-token-password
HORIZON_PASSWORD=password
RABBIT_PASSWORD=password

disable_all_services
enable_service rabbit


enable_service n-cpu
enable_service q-agt

DEST=/opt/stack
LOGFILE=$DEST/stack.sh.log
SCREEN_LOGDIR=$DEST/screen
SYSLOG=True
LOGDAYS=1

Q_AGENT=ovdk
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch
Q_ML2_PLUGIN_TYPE_DRIVERS=vlan
OVDK_NUM_HUGEPAGES=8192
OVDK_GIT_TAG=6210bb0a6139b20283de115f87aa7a381b04670f

ENABLE_TENANT_TUNNELS=False
ENABLE_TENANT_VLANS=True
ML2_VLAN_RANGES=physnet1:1000:1010
PHYSICAL_NETWORK=physnet1
OVS_PHYSICAL_BRIDGE=br-p1p1

[[post-config|$NOVA_CONF]]
[DEFAULT]
firewall_driver=nova.virt.firewall.NoopFirewallDriver
vnc_enabled=True
vncserver_listen=0.0.0.0
vncserver_proxyclient_address=$HOST_IP

- A sample local.conf file for a compute node with the accelerated OVS agent follows:

# Compute node
[[local|localrc]]

FORCE=yes
MULTI_HOST=True

HOST_NAME=$(hostname)
HOST_IP=10.11.12.2
HOST_IP_IFACE=ens2f0

SERVICE_HOST_NAME=sdnlab-k01
SERVICE_HOST=10.11.12.1

MYSQL_HOST=$SERVICE_HOST
RABBIT_HOST=$SERVICE_HOST

GLANCE_HOST=$SERVICE_HOST
GLANCE_HOSTPORT=$SERVICE_HOST:9292
KEYSTONE_AUTH_HOST=$SERVICE_HOST
KEYSTONE_SERVICE_HOST=$SERVICE_HOST

ADMIN_PASSWORD=password
MYSQL_PASSWORD=password
DATABASE_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=no-token-password
HORIZON_PASSWORD=password
RABBIT_PASSWORD=password

disable_all_services
enable_service rabbit
enable_service n-cpu
enable_service q-agt

DEST=/opt/stack


LOGFILE=$DEST/stack.sh.log
SCREEN_LOGDIR=$DEST/screen
SYSLOG=True
LOGDAYS=1

Q_AGENT=openvswitch
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch
Q_ML2_PLUGIN_TYPE_DRIVERS=vlan
OVS_NUM_HUGEPAGES=8192
OVS_DATAPATH_TYPE=netdev
OVS_GIT_TAG=b35839f3855e3b812709c6ad1c9278f498aa9935

ENABLE_TENANT_TUNNELS=False
ENABLE_TENANT_VLANS=True
ML2_VLAN_RANGES=physnet1:1000:1010
PHYSICAL_NETWORK=physnet1
OVS_PHYSICAL_BRIDGE=br-p1p1

[[post-config|$NOVA_CONF]]
[DEFAULT]
firewall_driver=nova.virt.firewall.NoopFirewallDriver
vnc_enabled=True
vncserver_listen=0.0.0.0
vncserver_proxyclient_address=$HOST_IP

54 vIPS

The vIPS used is Suricata, which should be installed in a VM as an RPM package as previously described. To configure it to run in inline mode (IPS), use the following:

1 Turn on IP forwarding:

sysctl -w net.ipv4.ip_forward=1

2 Mangle all traffic from one vPort to the other using a netfilter queue:

iptables -I FORWARD -i eth1 -o eth2 -j NFQUEUE
iptables -I FORWARD -i eth2 -o eth1 -j NFQUEUE

3 Have Suricata run in inline mode using the netfilter queue:

suricata -c /etc/suricata/suricata.yaml -q 0

4 Enable ARP proxying:

echo 1 > /proc/sys/net/ipv4/conf/eth1/proxy_arp
echo 1 > /proc/sys/net/ipv4/conf/eth2/proxy_arp
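As a quick check that the inline path is in place (not part of the original procedure), the NFQUEUE rules and the forwarding setting can be inspected:

iptables -vnL FORWARD
cat /proc/sys/net/ipv4/ip_forward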

541 Network Configuration for non-vIPS Guests

1 Turn on IP forwarding:

sysctl -w net.ipv4.ip_forward=1

2 In the source, add the route to the sink:

route add -net 192.168.200.0/24 eth1

3 At the sink, add the route to the source:

route add -net 192.168.100.0/24 eth1


60 Testing the Setup

This section describes how to bring up the VMs in a compute node, connect them to the virtual network(s), and verify functionality.

Note: Currently it is not possible to have more than one virtual network in a multi-compute-node setup, although it is possible to have more than one virtual network in a single-compute-node setup.

61 Preparation with OpenStack

611 Deploying Virtual Machines

6111 Default Settings

OpenStack comes with the following default settings:

• Tenant (Project): admin, demo

• Network:

- Private network (virtual network): 10.0.0.0/24

- Public network (external network): 172.24.4.0/24

• Image: cirros-0.3.1-x86_64

• Flavor: nano, micro, tiny, small, medium, large, xlarge

To deploy new instances (VMs) with different setups (such as a different VM image, flavor, or network), users must create their own. See below for details of how to create them.

To access the OpenStack dashboard, use a web browser (Firefox, Internet Explorer, or others) and the controller's IP address (management network). For example:

http://10.11.12.1

Login information is defined in the local.conf file. In the examples that follow, password is the password for both the admin and demo users.


6112 Customer Settings

The following examples describe how to create a custom VM image, flavor, and aggregate/availability zone using OpenStack commands. The examples assume the IP address of the controller is 10.11.12.1.

1 Create a credential file admin-cred for the admin user. The file contains the following lines:

export OS_USERNAME=admin
export OS_TENANT_NAME=admin
export OS_PASSWORD=password
export OS_AUTH_URL=http://10.11.12.1:35357/v2.0

2 Source admin-cred into the shell environment for the actions of creating the glance image, aggregate/availability zone, and flavor:

source admin-cred

3 Create an OpenStack glance image. A VM image file should be ready in a location accessible by OpenStack:

glance image-create --name <image-name-to-create> --is-public=true --container-format=bare --disk-format=<format> --file=<image-file-path-name>

The following example shows the image file fedora20-x86_64-basic.qcow2 located on an NFS share and mounted at /mnt/nfs/openstack/images on the controller host. The following command creates a glance image named fedora-basic in qcow2 format for public use (that is, any tenant can use this glance image):

glance image-create --name fedora-basic --is-public=true --container-format=bare --disk-format=qcow2 --file=/mnt/nfs/openstack/images/fedora20-x86_64-basic.qcow2

4 Create host aggregate and availability zone

First find out the available hypervisors, and then use that information to create the aggregate/availability zone:

nova hypervisor-list
nova aggregate-create <aggregate-name> <zone-name>
nova aggregate-add-host <aggregate-name> <hypervisor-name>

The following example creates an aggregate named aggr-g06 with one availability zone named zone-g06; the aggregate contains one hypervisor named sdnlab-g06:

nova aggregate-create aggr-g06 zone-g06
nova aggregate-add-host aggr-g06 sdnlab-g06

5 Create a flavor. A flavor is a virtual hardware configuration for the VMs; it defines the number of virtual CPUs, the size of virtual memory, and the disk space, among other things.

The following command creates a flavor named onps-flavor with an ID of 1001, 1024 MB of virtual memory, 4 GB of virtual disk space, and 1 virtual CPU:

nova flavor-create onps-flavor 1001 1024 4 1


6113 Example - VM Deployment

The following example describes how to use a custom VM image, flavor, and aggregate to launch a VM for the demo tenant using OpenStack commands. Again, the example assumes the IP address of the controller is 10.11.12.1.

1 Create a credential file demo-cred for the demo user. The file contains the following lines:

export OS_USERNAME=demo
export OS_TENANT_NAME=demo
export OS_PASSWORD=password
export OS_AUTH_URL=http://10.11.12.1:35357/v2.0

2 Source demo-cred into the shell environment for the actions of creating the tenant network and instance (VM):

source demo-cred

3 Create a network for the tenant demo by taking the following steps:

a Get the tenant demo ID:

keystone tenant-list | grep -Fw demo

The following creates a network with a name of net-demo for the tenant with ID 10618268adb64f17b266fd8fb83c960d:

neutron net-create --tenant-id 10618268adb64f17b266fd8fb83c960d net-demo

b Create a subnet:

neutron subnet-create --tenant-id <demo-tenant-id> --name <subnet_name> <network-name> <net-ip-range>

The following creates a subnet with a name of sub-demo and CIDR address 192.168.2.0/24 for network net-demo:

neutron subnet-create --tenant-id 10618268adb64f17b266fd8fb83c960d --name sub-demo net-demo 192.168.2.0/24

4 Create an instance (VM) for the tenant demo by taking the following steps:

a Get the name and/or ID of the image, flavor, and availability zone to be used for creating the instance:

glance image-list
nova flavor-list
nova aggregate-list
neutron net-list

b Launch an instance (VM) using the information obtained from the previous step:

nova boot --image <image-id> --flavor <flavor-id> --availability-zone <zone-name> --nic net-id=<network-id> <instance-name>

c The new VM should be up and running in a few minutes.

5 Log into the OpenStack dashboard using the demo user credentials; click Instances under Project in the left pane, and the new VM should show in the right pane. Click the instance name to open the Instance Details view, then click Console in the top menu to access the VM as follows.
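As a concrete illustration of step 4, using the image, flavor, and zone created earlier in this guide (the network ID is a placeholder that must be replaced with the UUID returned by neutron net-list, and demo-vm1 is an arbitrary instance name):

nova boot --image fedora-basic --flavor onps-flavor --availability-zone zone-g06 --nic net-id=<net-demo-uuid> demo-vm1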


6114 Local vIPS

Configuration

1 OpenStack brings up the VMs and connects them to the vSwitch

2 The IP addresses of the VMs get configured using the DHCP server. VM1 belongs to one subnet and VM3 to a different one; VM2 has ports on both subnets.

3 Flows get programmed to the vSwitch by the OpenDaylight controller (Section 62)

Data Path (Numbers Matching Red Circles)

1 VM1 sends a flow to VM3 through the vSwitch

2 The vSwitch forwards the flow to the first vPort of VM2 (active IPS)

3 The IPS receives the flow, inspects it, and (if not malicious) sends it out through its second vPort.

4 The vSwitch forwards it to VM3

Figure 6-1 Local vIPS


6115 Remote vIPS

Configuration

1 OpenStack brings up the VMs and connects them to the vSwitch

2 The IP addresses of the VMs get configured using the DHCP server

Data Path (Numbers Matching Red Circles)

1 VM1 sends a flow to VM3 through the vSwitch inside compute node 1

2 The vSwitch forwards the flow out of the first 82599 port to the first 82599 port of compute node 2

3 The vSwitch of compute node 2 forwards the flow to the first port of the vHost where the traffic gets consumed by VM1

4 The IPS receives the flow, inspects it, and (provided it is not malicious) sends it out through its second vHost port into the vSwitch of compute node 2.

5 The vSwitch forwards the flow out of the second 82599 port of compute node 2 into the second port of the 82599 in compute node 1

6 The vSwitch of compute node 1 forwards the flow into the port of the vHost of VM3 where the flow gets terminated

Figure 6-2 Remote vIPS


612 Non-uniform Memory Access (Numa) Placement and SR-IOV Pass-through for OpenStack

NUMA was implemented as a new feature in the OpenStack Juno release. NUMA placement enables an OpenStack administrator to pin guests to particular NUMA nodes for optimization. With an SR-IOV-enabled network interface card, each SR-IOV port is associated with a virtual function (VF). OpenStack SR-IOV pass-through enables a guest to access a VF directly.

6121 Prepare Compute Node for SR-IOV Pass-through

To enable the previous features, follow these steps to configure the compute node:

1 The server hardware must support IOMMU (Intel VT-d). To check whether IOMMU is supported, run the following command; the output should show IOMMU entries:

dmesg | grep -e IOMMU

Note: IOMMU can be enabled/disabled through a BIOS setting under Advanced and then Processor.

2 Enable kernel IOMMU support in grub. For Fedora 20, run the following commands:

sed -i 's/rhgb quiet/rhgb quiet intel_iommu=on/g' /etc/default/grub
grub2-mkconfig -o /boot/grub2/grub.cfg

3 Install necessary packages

yum install -y yajl-devel device-mapper-devel libpciaccess-devel libnl-devel dbus-devel numactl-devel python-devel

4 Install libvirt v1.2.8 or newer. The following example uses v1.2.9:

systemctl stop libvirtd

yum remove libvirt
yum remove libvirtd

wget http://libvirt.org/sources/libvirt-1.2.9.tar.gz
tar zxvf libvirt-1.2.9.tar.gz

cd libvirt-1.2.9
./autogen.sh --system --with-dbus
make
make install

systemctl start libvirtd

Make sure libvirtd is running v1.2.9:

libvirtd --version

5 Install libvirt-python. The example below uses v1.2.9 to match the libvirt version:

yum remove libvirt-python

wget https://pypi.python.org/packages/source/l/libvirt-python/libvirt-python-1.2.9.tar.gz
tar zxvf libvirt-python-1.2.9.tar.gz

cd libvirt-python-1.2.9
python setup.py install


6 Modify /etc/libvirt/qemu.conf and add:

/dev/vfio/vfio

to the

cgroup_device_acl list

An example follows:

cgroup_device_acl = [ "/dev/null", "/dev/full", "/dev/zero", "/dev/random", "/dev/urandom", "/dev/ptmx", "/dev/kvm", "/dev/kqemu", "/dev/rtc", "/dev/hpet", "/dev/net/tun", "/dev/vfio/vfio" ]

7 Enable the SR-IOV virtual functions for an 82599 interface. The following example enables 2 VFs for interface p1p1:

echo 2 > /sys/class/net/p1p1/device/sriov_numvfs

To check that the virtual functions are enabled:

lspci -nn | grep 82599

The screen output should display the physical function and two virtual functions
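The enabled VFs can also be listed per physical interface; each VF appears as a separate vf line in the output (a quick check, not part of the original procedure):

ip link show p1p1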

6122 Devstack Configurations

In the following text, the example uses a controller with IP address 10.11.12.1 and a compute node with IP address 10.11.12.4. The PCI device vendor ID (8086) and the product IDs of the 82599 can be obtained from the output of the following command (10fb for the physical function and 10ed for the VF):

lspci -nn | grep 82599

On Controller node

1 Edit the controller local.conf. Note that the same local.conf file of Section 5213 is used here, adding the following:

Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch,sriovnicswitch

[[post-config|$NOVA_CONF]]
[DEFAULT]
scheduler_default_filters=RamFilter,ComputeFilter,AvailabilityZoneFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,PciPassthroughFilter,NUMATopologyFilter
pci_alias={"name":"niantic","product_id":"10ed","vendor_id":"8086"}

[[post-config|$Q_PLUGIN_CONF_FILE]]
[ml2_sriov]
supported_pci_vendor_devs = 8086:10fb 8086:10ed

2 Run stack.sh.

On Compute node

1 Edit /opt/stack/nova/requirements.txt and add "libvirt-python>=1.2.8":

echo "libvirt-python>=1.2.8" >> /opt/stack/nova/requirements.txt

2 Edit the compute local.conf for accelerated OVS. Note that the same local.conf file of Section 5311 is used here.


3 Add the following:

[[post-config|$NOVA_CONF]]
[DEFAULT]
pci_passthrough_whitelist={"address":"0000:08:00.0","vendor_id":"8086","physical_network":"physnet1"}

pci_passthrough_whitelist={"address":"0000:08:10.0","vendor_id":"8086","physical_network":"physnet1"}

pci_passthrough_whitelist={"address":"0000:08:10.2","vendor_id":"8086","physical_network":"physnet1"}

4 Remove (or comment out) the following. Note that currently SR-IOV pass-through is only supported with standard OVS:

OVS_NUM_HUGEPAGES=8192
OVS_DATAPATH_TYPE=netdev
OVS_GIT_TAG=b35839f3855e3b812709c6ad1c9278f498aa9935

Run stack.sh on both the controller and compute nodes to complete the DevStack installation.

6123 Create VM with Numa Placement and SR-IOV

1 After stacking is successful on both the controller and compute nodes, verify that the PCI pass-through device(s) are in the OpenStack database:

mysql -uroot -ppassword -h 10.11.12.1 nova -e 'select * from pci_devices'

2 The output should show entries for the PCI device(s) similar to the following:

| 2014-11-18 19:41:14 | NULL | NULL | 0 | 1 | 3 | 0000:08:10.0 | 10ed | 8086 | type-VF | pci_0000_08_10_0 | label_8086_10ed | available | phys_function: 0000:08:00.0 | NULL | NULL | 0 |

3 Next, create a flavor, for example:

nova flavor-create numa-flavor 1001 1024 4 1

where:

flavor name = numa-flavor
id = 1001
virtual memory = 1024 MB
virtual disk size = 4 GB
number of virtual CPUs = 1

4 Modify the flavor for NUMA placement with PCI pass-through:

nova flavor-key 1001 set pci_passthrough:alias=niantic:1 hw:numa_nodes=1 hw:numa_cpus.0=0 hw:numa_mem.0=1024

5 To show detailed information about the flavor:

nova flavor-show 1001

6 Create a VM numa-vm1 with the flavor numa-flavor under the default project demo. Note that the following example assumes an image fedora-basic and an availability zone zone-04 are already in place (see Section 6112) and that private is the default network for the demo project:

nova boot --image <image-id> --flavor <flavor-id> --availability-zone <zone-name> --nic net-id=<network-id> numa-vm1

where numa-vm1 is the name of the VM instance to be booted.


Access the VM from OpenStack Horizon; the new VM shows two virtual network interfaces. The interface with an SR-IOV VF should show a name of ensX, where X is a number (for example, ens5). If a DHCP server is available for the physical interface (p1p1 in this example), the VF gets an IP address automatically; otherwise, users can assign an IP address to the interface in the same way as for a standard network interface.

To verify network connectivity through a VF, users can set up two compute hosts and create a VM on each node. After obtaining IP addresses, the VMs should be able to communicate with each other as over a normal network.

62 Using OpenDaylight

This section describes how to download, install, and set up an OpenDaylight controller.

621 Preparing the OpenDaylight Controller

1 Download the pre-built OpenDaylight Helium-SR1 distribution:

wget http://nexus.opendaylight.org/content/repositories/opendaylight.release/org/opendaylight/integration/distribution-karaf/0.2.1-Helium-SR1/distribution-karaf-0.2.1-Helium-SR1.tar.gz

2 Extract the archive and cd into it:

tar xf distribution-karaf-0.2.1-Helium-SR1.tar.gz
cd distribution-karaf-0.2.1-Helium-SR1

3 Use the bin/karaf executable to start the Karaf shell.


4 Install the required features.

Note: Karaf might take a long time to start, or feature installation might fail, if the host does not have network access. You'll need to set up the appropriate proxy settings.
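The exact feature set is deployment-specific and is not listed in the text above; the following is a sketch of a set commonly used for OpenStack/OVSDB integration on Helium (an assumption to be confirmed against your deployment), entered at the Karaf console:

feature:install odl-base-all odl-aaa-authn odl-restconf odl-nsf-all odl-adsal-northbound odl-mdsal-apidocs odl-ovsdb-openstack odl-ovsdb-northbound odl-dlux-core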

63 Border Network Gateway

This section describes how to install and run a Border Network Gateway (BNG) on a compute node that was prepared as described in Section 51 and Section 53. The example interface names from those sections are also used here. For simplicity, the BNG uses the handle_none configuration mode, which makes it work as an L2 forwarding engine. The BNG is more complex than this; users who are interested in exploring more of its capabilities should read https://01.org/intel-data-plane-performance-demonstrators/quick-overview

The setup to test the functionality of the vBNG follows


631 Installation and Configuration Inside the VM

1 Execute the following command:

yum -y update

2 Disable SELinux:

setenforce 0
vi /etc/selinux/config

and change the setting to SELINUX=disabled.

3 Disable the firewall:

systemctl disable firewalld.service
reboot

4 Edit the grub default configuration:

vi /etc/default/grub

Add hugepages settings to the kernel command line:

... noirqbalance intel_idle.max_cstate=0 processor.max_cstate=0 ipv6.disable=1 default_hugepagesz=1G hugepagesz=1G hugepages=2 isolcpus=1,2,3,4

5 Rebuild the grub configuration and reboot the system:

grub2-mkconfig -o /boot/grub2/grub.cfg
reboot

6 Verify that hugepages are available in the VM:

cat /proc/meminfo
HugePages_Total:    2
HugePages_Free:     2
Hugepagesize:       1048576 kB

7 Add the following to the end of the ~/.bashrc file:

export RTE_SDK=/root/dpdk
export RTE_TARGET=x86_64-native-linuxapp-gcc
export OVS_DIR=/root/ovs
export RTE_UNBIND=$RTE_SDK/tools/dpdk_nic_bind.py
export DPDK_DIR=$RTE_SDK
export DPDK_BUILD=$DPDK_DIR/$RTE_TARGET

8 Log in again or source that file:

. ~/.bashrc

9 Install DPDK:

git clone http://dpdk.org/git/dpdk
cd dpdk
git checkout v1.7.1
make install T=$RTE_TARGET
modprobe uio
insmod $RTE_SDK/$RTE_TARGET/kmod/igb_uio.ko


10 Check the PCI addresses of the 82599 cards:

lspci | grep Network
00:04.0 Ethernet controller: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection (rev 01)
00:05.0 Ethernet controller: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection (rev 01)
00:06.0 Ethernet controller: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection (rev 01)
00:07.0 Ethernet controller: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection (rev 01)

11 Make sure that the correct PCI addresses are listed in the script bind_to_igb_uio.sh.
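bind_to_igb_uio.sh is shipped with the vBNG helper scripts and is not reproduced in this guide; conceptually it binds the VM's 82599 ports to igb_uio, roughly as in the following sketch (the PCI addresses are the example addresses from the lspci output above and should be adjusted to the ports actually used):

$RTE_UNBIND --status
$RTE_UNBIND --bind=igb_uio 00:04.0 00:05.0 00:06.0 00:07.0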

12 Download the BNG package:

wget https://01.org/sites/default/files/downloads/intel-data-plane-performance-demonstrators/dppd-bng-v013.zip

13 Extract the DPPD BNG sources:

unzip dppd-bng-v013.zip

14 Build the BNG DPPD application:

yum -y install ncurses-devel
cd dppd-BNG-v013
make

15 Refer to Section 633, "Extra Preparations on the Compute Node", before running the BNG application in the VM inside the compute node.

16 Make sure that the application starts:

./build/dppd -f config/handle_none.cfg

The handle_none configuration passes all traffic straight through between the ports, which is essentially similar to an L2 forwarding test. The config directory contains additional, more complex BNG configurations and Pktgen scripts. Additional BNG-specific workloads can be found in the dppd-BNG-v013/pktgen-scripts directory.

Following is a sample graphic of the BNG running in a VM with 2 ports


Exit the application by pressing ESC or CTRL-C

Refer to Section 632 regarding installation and running the software traffic generator

For a sanity-check test, users can use the Pktgen wrapper script onps_pktgen-64bytes-UDP-2ports.sh for running Pktgen (on its dedicated server) in order to test the handle_none throughput for two physical and two virtual ports. You'll need to update the PKTGEN_DIR variable at the top of the file to point to the right directory, which, referring to Section 632, is the following:

PKTGEN_DIR=/home/stack/git/Pktgen-DPDK
./pktgen-64bytes.sh

632 Installation and Configuration of the Back-to-Back Host (Packet Generator)

The back-to-back host can be any Intel® Xeon® processor-based system, or it can be any compute node that has been prepared using the instructions in Section 51 and Section 53. For simplicity, Intel assumes the latter is the case. Also assume that the git directory for the stack user is /home/stack/git.

1 In the git directory, get the source from GitHub:

git clone https://github.com/Pktgen/Pktgen-DPDK.git
cd Pktgen-DPDK

2 An extra package must be installed for Pktgen to compile correctly

yum -y install libpcap-devel

Pktgen comes with its own distribution of the DPDK sources. This bundled version of DPDK must be used; note that it contains some Wind River-specific helper libraries, which Pktgen depends on, that are not in the default DPDK distribution.

3 The $RTE_TARGET variable must be set to a specific value; otherwise these libraries will not build:

cd
vi ~/.bashrc

Add the following three lines to the end:

export RTE_SDK=$HOME/Pktgen-DPDK/dpdk
export RTE_TARGET=x86_64-pktgen-linuxapp-gcc
export PKTGEN_DIR=$HOME/Pktgen-DPDK

4 Re-login or execute the following command

. ~/.bashrc

5 Build the basic DPDK libraries and extra helpers

cd $RTE_SDK
make install T=$RTE_TARGET

6 Build Pktgen

cd examples/pktgen
make

7 Adapt the dpdk_nic_bind.py script usage to the actual NICs in use so that both interfaces are bound to igb_uio and DPDK can use them. See the details of the command that follows:

./tools/dpdk_nic_bind.py --status
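After identifying the two interfaces to be used for traffic generation from the status output, they can be bound to igb_uio; this is a sketch, and the PCI addresses shown are placeholders for the addresses reported on your system:

modprobe uio
insmod $RTE_SDK/$RTE_TARGET/kmod/igb_uio.ko
./tools/dpdk_nic_bind.py --bind=igb_uio 0000:04:00.0 0000:04:00.1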

8 Use onps_pktgen-64-bytes-UDP-2ports.sh from onps_server_1_2.tar.gz.


9 Now run the script as root, after the compute node has been set up as in Section 633, the BNG VM has been prepared as in Section 631, and the BNG has been started inside the VM.

633 Extra Preparations on the Compute Node

1 Do the following as the stack user:

cd /home/stack/devstack
vi local.conf

2 Comment out the following:

PHYSICAL_NETWORK=physnet1
OVS_PHYSICAL_BRIDGE=br-p1p1

At the same time, add the following line right below the commented-out lines:

OVS_BRIDGE_MAPPINGS=default:br-p1p1,physnet1:br-p1p2

3 Run the following again as the stack user:

./unstack.sh

./stack.sh

This causes both physical interfaces to come up and get bound to DPDK. Also, a bridge is created on top of each of these interfaces:

ovs-vsctl show
b52bd3ed-0f6c-45b9-ace1-846d901bed64
    Bridge br-p1p1
        Port br-p1p1
            Interface br-p1p1
                type: internal
        Port p1p1
            Interface p1p1
                type: dpdkphy
                options: {port=0}
        Port phy-br-p1p1
            Interface phy-br-p1p1
                type: patch
                options: {peer=int-br-p1p1}
    Bridge br-int
        fail_mode: secure
        Port int-br-p1p2
            Interface int-br-p1p2
                type: patch
                options: {peer=phy-br-p1p2}
        Port int-br-p1p1
            Interface int-br-p1p1
                type: patch
                options: {peer=phy-br-p1p1}
        Port br-int
            Interface br-int
                type: internal
    Bridge br-p1p2
        Port phy-br-p1p2
            Interface phy-br-p1p2
                type: patch
                options: {peer=int-br-p1p2}
        Port p1p2
            Interface p1p2
                type: dpdkphy
                options: {port=1}
        Port br-p1p2
            Interface br-p1p2
                type: internal

4 Move the p1p2 physical port under the same bridge as p1p1:

ovs-vsctl del-port p1p2
ovs-vsctl add-port br-p1p1 p1p2 -- set Interface p1p2 type=dpdkphy option:port=1

5 Stop the OpenStack agent:

./rejoin-stack.sh
Ctrl-a 1
Ctrl-c
Ctrl-a d

6 Add the dpdkvhost interfaces for the VM:

ovs-vsctl --no-wait add-port br-p1p1 port3 -- set Interface port3 type=dpdkvhost ofport_request=3
ovs-vsctl --no-wait add-port br-p1p1 port4 -- set Interface port4 type=dpdkvhost ofport_request=4

7 Find out the OpenFlow port numbers of the interfaces:

ovs-ofctl show br-p1p1

The output should be similar to the following. Note the number to the left of each interface name, because it is that interface's OpenFlow port number:

OFPT_FEATURES_REPLY (xid=0x2): dpid:0000286031010000
n_tables:254, n_buffers:256
capabilities: FLOW_STATS TABLE_STATS QUEUE_STATS ARP_MATCH_IP
actions: OUTPUT SET_VLAN_VID SET_VLAN_PCP STRIP_VLAN SET_DL_SRC SET_DL_DST SET_NW_SRC SET_NW_DST SET_TP_SRC SET_TP_DST
 1(phy-br-p1p1): addr:9e:ae:92:25:3c:c1 config: 0 state: 0 speed: 0 Mbps now, 0 Mbps max
 2(p1p2): addr:9e:ae:92:25:3c:c1 config: 0 state: 0 speed: 0 Mbps now, 0 Mbps max
 3(port3): addr:9e:ae:92:25:3c:c1 config: 0 state: 0 speed: 0 Mbps now, 0 Mbps max
 4(port4): addr:49:04:ff:7f:00:00 config: 0 state: 0 speed: 0 Mbps now, 0 Mbps max
 16(p1p1): addr:49:04:ff:7f:00:00 config: 0 state: 0 speed: 0 Mbps now, 0 Mbps max
 LOCAL(br-p1p1): addr:9e:ae:92:25:3c:c1 config: 0 state: 0 speed: 0 Mbps now, 0 Mbps max
OFPT_GET_CONFIG_REPLY (xid=0x4): frags=normal miss_send_len=0

8 Clean up the flow table of the bridge

ovs-ofctl del-flows br-p1p1

9 Program the flows so that each physical interface forwards packets to a dpdkvhost interface and vice versa:

ovs-ofctl add-flow br-p1p1 in_port=16,dl_type=0x0800,idle_timeout=0,action=output:3
ovs-ofctl add-flow br-p1p1 in_port=3,dl_type=0x0800,idle_timeout=0,action=output:16
ovs-ofctl add-flow br-p1p1 in_port=4,dl_type=0x0800,idle_timeout=0,action=output:2
ovs-ofctl add-flow br-p1p1 in_port=2,dl_type=0x0800,idle_timeout=0,action=output:4
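The installed flows can be verified before starting traffic (a quick check, not part of the original procedure):

ovs-ofctl dump-flows br-p1p1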


10 Users can now spawn their vBNG:

qemu-kvm -cpu host -enable-kvm -m 4096 -smp 4,cores=4,threads=1,sockets=1 -name VM1 \
  -hda <path to the VM image file> -mem-path /dev/hugepages -mem-prealloc -vnc :2 -daemonize \
  -net nic,model=virtio,macaddr=00:1e:77:68:09:fd -net tap,ifname=tap1,script=no,downscript=no \
  -netdev type=tap,id=net1,script=no,downscript=no,ifname=port3,vhost=on \
  -device virtio-net-pci,netdev=net1,mac=00:00:01:00:00:01,csum=off,gso=off,guest_tso4=off,guest_tso6=off,guest_ecn=off \
  -netdev type=tap,id=net2,script=no,downscript=no,ifname=port4,vhost=on \
  -device virtio-net-pci,netdev=net2,mac=00:00:01:00:00:02,csum=off,gso=off,guest_tso4=off,guest_tso6=off,guest_ecn=off


Appendix A Additional OpenDaylight Information

This section describes how OpenDaylight can be used in a multi-node setup. Two hosts are used: one running OpenDaylight, the OpenStack controller + compute, and OVS; the second host is a compute node. This section describes how to create a VXLAN tunnel, create VMs, and ping from one VM to another.

Note: Due to a known defect in ODL (https://bugs.opendaylight.org/show_bug.cgi?id=2469), the multi-node setup could not be verified.

Following is a sample local.conf for the OpenDaylight host:

[[local|localrc]]
FORCE=yes

HOST_NAME=<name of this machine>
HOST_IP=<ip of this machine>
HOST_IP_IFACE=<mgmt ip isolated from internet>

PUBLIC_INTERFACE=<isolated IP could be same as HOST_IP_IFACE>
VLAN_INTERFACE=
FLAT_INTERFACE=

ADMIN_PASSWORD=password
MYSQL_PASSWORD=password
DATABASE_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=no-token-password
HORIZON_PASSWORD=password
RABBIT_PASSWORD=password

disable_service n-net
disable_service n-cpu

enable_service q-svc
enable_service q-agt
enable_service q-dhcp
enable_service q-l3
enable_service q-meta
enable_service neutron
enable_service horizon

LOGFILE=$DEST/stack.sh.log
SCREEN_LOGDIR=$DEST/screen
SYSLOG=True
LOGDAYS=1

# ODL start
Q_HOST=$HOST_IP
enable_service odl-server
enable_service odl-compute
ODL_MGR_IP=10.11.10.7
ENABLED_SERVICES+=,n-api,n-crt,n-obj,n-cpu,n-cond,n-sch,n-novnc,n-cauth,n-cauth,nova
ENABLED_SERVICES+=,cinder,c-api,c-vol,c-sch,c-bak


Q_PLUGIN=ml2

Q_AGENT=openvswitch
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch,opendaylight
Q_ML2_PLUGIN_TYPE_DRIVERS=vlan,flat,local
Q_ML2_TENANT_NETWORK_TYPE=vxlan

ENABLE_TENANT_TUNNELS=True
ENABLE_TENANT_VLANS=True

PHYSICAL_NETWORK=physnet1
ML2_VLAN_RANGES=physnet1:1000:1010
OVS_PHYSICAL_BRIDGE=br-

MULTI_HOST=True

[[post-config|$NOVA_CONF]]

# disable nova security groups
[DEFAULT]
firewall_driver=nova.virt.firewall.NoopFirewallDriver
novncproxy_host=0.0.0.0
novncproxy_port=6080

Here is a sample local.conf for the compute node:

[[local|localrc]]

FORCE=yes
MULTI_HOST=True

HOST_NAME=<name of this machine>
HOST_IP=<ip of this machine>
HOST_IP_IFACE=<isolated interface>
SERVICE_HOST_NAME=<name of the controller machine>
SERVICE_HOST=<ip of controller machine>
Q_HOST=$SERVICE_HOST

MYSQL_HOST=$SERVICE_HOST
RABBIT_HOST=$SERVICE_HOST

GLANCE_HOST=$SERVICE_HOST
GLANCE_HOSTPORT=$SERVICE_HOST:9292
KEYSTONE_AUTH_HOST=$SERVICE_HOST
KEYSTONE_SERVICE_HOST=<ip of controller machine>

ADMIN_PASSWORD=password
MYSQL_PASSWORD=password
DATABASE_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=no-token-password
HORIZON_PASSWORD=password
RABBIT_PASSWORD=password

disable_all_services

enable_service rabbit
enable_service n-cpu
enable_service q-agt
enable_service odl-compute

DEST=/opt/stack
LOGFILE=$DEST/stack.sh.log
SCREEN_LOGDIR=$DEST/screen
SYSLOG=True
LOGDAYS=1


ODL_MGR_IP=<ip of controller machine>

Q_PLUGIN=ml2
Q_AGENT=openvswitch
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch,opendaylight
Q_ML2_PLUGIN_TYPE_DRIVERS=vxlan
OVS_NUM_HUGEPAGES=8192
OVS_DATAPATH_TYPE=netdev

OVDK_OVS_GIT_TAG=

ENABLE_TENANT_TUNNELS=True
ENABLE_TENANT_VLANS=True

Q_ML2_TENANT_NETWORK_TYPE=vxlan
ML2_VLAN_RANGES=physnet1:1000:1010
PHYSICAL_NETWORK=physnet1
OVS_PHYSICAL_BRIDGE=br-p1p1

[[post-config|$NOVA_CONF]]
[DEFAULT]
firewall_driver=nova.virt.firewall.NoopFirewallDriver
vnc_enabled=True
vncserver_listen=0.0.0.0
vncserver_proxyclient_address=$HOST_IP

A1 Create VMs using DevStack Horizon GUI

After starting the OpenDaylight controller as described in Section 61, run stack.sh on the controller and compute nodes.

Log in to http://<control node ip address>:8080 to start the Horizon GUI.

Verify that the node shows up in the following GUI.

Create a new VXLAN network:

1 Click on the Networks tab

2 Click on the Create Network button

3 Enter the Network name then click Next


4 Enter the subnet information then click Next


5 Add additional information then click Next

6 Click the Create button

7 Create a VM instance by clicking the Launch Instances button


8 Click on the Details tab to enter VM details


9 Click on the Networking tab then enter network information

VMs will now be created.

Instead of disabling the OVSDB neutron service by removing the OVSDB neutron bundle file, it is possible to disable the bundle from the OSGi console. However, there does not appear to be a way to make this persistent, so it must be done each time the controller restarts.


Once the controller is up and running, connect to the OSGi console. The ss command displays all of the bundles that are installed and their status; adding a string filters the list of bundles. List the OVSDB bundles:

osgi> ss ovs
Framework is launched.

id      State       Bundle
106     ACTIVE      org.opendaylight.ovsdb.northbound_0.5.0
112     ACTIVE      org.opendaylight.ovsdb_0.5.0
262     ACTIVE      org.opendaylight.ovsdb.neutron_0.5.0

Note that there are three OVSDB bundles running (the OVSDB neutron bundle has not been removed in this case).

Disable the OVSDB neutron bundle and then list the OVSDB bundles again:

osgi> stop 262
osgi> ss ovs
Framework is launched.

id      State       Bundle
106     ACTIVE      org.opendaylight.ovsdb.northbound_0.5.0
112     ACTIVE      org.opendaylight.ovsdb_0.5.0
262     RESOLVED    org.opendaylight.ovsdb.neutron_0.5.0

Now the OVSDB neutron bundle is in the RESOLVED state, which means that it is not active.


Appendix B BNG as an Appliance

Please download the latest BNG application from https://01.org/intel-data-plane-performance-demonstrators/downloads. More details about how the BNG works can be found at https://01.org/intel-data-plane-performance-demonstrators/quick-overview


Appendix C Glossary

Acronym Description

ATR Application Targeted Routing

COTS Commercial Off‐The-Shelf

DPI Deep Packet Inspection

FCS Frame Check Sequence

GRE Generic Routing Encapsulation

GRO Generic Receive Offload

IOMMU InputOutput Memory Management Unit

Kpps Kilo packets per second

KVM Kernel-based Virtual Machine

LRO Large Receive Offload

MSI Message Signaling Interrupt

MPLS Multi-protocol Label Switching

Mpps Million packets per second

NIC Network Interface Card

pps Packets per second

QAT Quick Assist Technology

QinQ VLAN stacking (802.1ad)

RA Reference Architecture

RSC Receive Side Coalescing

RSS Receive Side Scaling

SP Service Provider

SR-IOV Single Root I/O Virtualization

TCO Total Cost of Ownership

TSO TCP Segmentation Offload


Appendix D References

Document Name Source

Internet Protocol version 4 http://www.ietf.org/rfc/rfc791.txt

Internet Protocol version 6 http://www.faqs.org/rfc/rfc2460.txt

Intel® 82599 10 Gigabit Ethernet Controller Datasheet http://www.intel.com/content/www/us/en/ethernet-controllers/82599-10-gbe-controller-datasheet.html

Intel DDIO https://www-ssl.intel.com/content/www/us/en/io/direct-data-i-o.html

Bandwidth Sharing Fairness http://www.intel.com/content/www/us/en/network-adapters/10-gigabit-network-adapters/10-gbe-ethernet-flexible-port-partitioning-brief.html

Design Considerations for efficient network applications with Intel® multi-core processor-based systems on Linux

http://download.intel.com/design/intarch/papers/324176.pdf

OpenFlow with Intel 82599 http://ftp.sunet.se/pub/Linux/distributions/bifrost/seminars/workshop-2011-03-31/Openflow_1103031.pdf

Wu, W., DeMar, P., & Crawford, M. (2012). A Transport-Friendly NIC for Multicore Multiprocessor Systems

IEEE Transactions on Parallel and Distributed Systems, vol. 23, no. 4, April 2012. http://lss.fnal.gov/archive/2010/pub/fermilab-pub-10-327-cd.pdf

Why Does Flow Director Cause Packet Reordering? http://arxiv.org/ftp/arxiv/papers/1106/1106.0443.pdf

IA packet processing http://www.intel.com/p/en_US/embedded/hwsw/technology/packet-processing

High Performance Packet Processing on Cloud Platforms using Linux with Intel® Architecture

http://networkbuilders.intel.com/docs/network_builders_RA_packet_processing.pdf

Packet Processing Performance of Virtualized Platforms with Linux and Intel® Architecture

http://networkbuilders.intel.com/docs/network_builders_RA_NFV.pdf

DPDK http://www.intel.com/go/dpdk

Intel® DPDK Accelerated vSwitch https://01.org/packet-processing


LEGAL

By using this document in addition to any agreements you have with Intel you accept the terms set forth below

You may not use or facilitate the use of this document in connection with any infringement or other legal analysis concerning Intel products described herein You agree to grant Intel a non-exclusive royalty-free license to any patent claim thereafter drafted which includes subject matter disclosed herein

INFORMATION IN THIS DOCUMENT IS PROVIDED IN CONNECTION WITH INTEL PRODUCTS NO LICENSE EXPRESS OR IMPLIED BY ESTOPPEL OR OTHERWISE TO ANY INTELLECTUAL PROPERTY RIGHTS IS GRANTED BY THIS DOCUMENT EXCEPT AS PROVIDED IN INTELS TERMS AND CONDITIONS OF SALE FOR SUCH PRODUCTS INTEL ASSUMES NO LIABILITY WHATSOEVER AND INTEL DISCLAIMS ANY EXPRESS OR IMPLIED WARRANTY RELATING TO SALE ANDOR USE OF INTEL PRODUCTS INCLUDING LIABILITY OR WARRANTIES RELATING TO FITNESS FOR A PARTICULAR PURPOSE MERCHANTABILITY OR INFRINGEMENT OF ANY PATENT COPYRIGHT OR OTHER INTELLECTUAL PROPERTY RIGHT

Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors

Performance tests such as SYSmark and MobileMark are measured using specific computer systems components software operations and functions Any change to any of those factors may cause the results to vary You should consult other information and performance tests to assist you in fully evaluating your contemplated purchases including the performance of that product when combined with other products

The products described in this document may contain design defects or errors known as errata which may cause the product to deviate from published specifications Current characterized errata are available on request Contact your local Intel sales office or your distributor to obtain the latest specifications and before placing your product order

Intel technologies may require enabled hardware specific software or services activation Check with your system manufacturer or retailer Tests document performance of components on a particular test in specific systems Differences in hardware software or configuration will affect actual performance Consult other sources of information to evaluate performance as you consider your purchase For more complete information about performance and benchmark results visit httpwwwintelcomperformance

All products computer systems dates and figures specified are preliminary based on current expectations and are subject to change without notice Results have been estimated or simulated using internal Intel analysis or architecture simulation or modeling and provided to you for informational purposes Any differences in your system hardware software or configuration may affect your actual performance

No computer system can be absolutely secure Intel does not assume any liability for lost or stolen data or systems or any damages resulting from such losses

Intel does not control or audit third-party web sites referenced in this document You should visit the referenced web site and confirm whether referenced data are accurate

Intel Corporation may have patents or pending patent applications trademarks copyrights or other intellectual property rights that relate to the presented subject matter The furnishing of documents and other materials and information does not provide any license express or implied by estoppel or otherwise to any such patents trademarks copyrights or other intellectual property rights

© 2014 Intel Corporation. All rights reserved. Intel, the Intel logo, Core, Xeon, and others are trademarks of Intel Corporation in the US and/or other countries. Other names and brands may be claimed as the property of others.

Page 8: Intel Open Network Platform Server Reference Architecture ... · Intel® ONP Server Reference Architecture Solutions Guide 1.0 Audience and Purpose The primary audiences for this

Intelreg ONP Server Reference ArchitectureSolutions Guide

8

Figure 2-2 shows a generic SDNNFV setup In this configuration Orchestrator and Controller (management and control plane) and compute node (data plane) run on different server nodes Note that many variations of this setup can be deployed

The test cases described in this document were designed to illustrate certain baseline performance and functionality using the specified ingredients configurations and specific test methodology A simple network topology was used as shown in Figure 2-2

Test cases are designed to

bull Baseline packet processing (such as data plane) performance with host and VM configurations

bull Verify communication between controller and compute nodes

bull Validate basic controller functionality

Figure 2-2 Generic Setup with Controller and Two Compute Nodes

9

Intelreg ONP Server Reference ArchitectureSolutions Guide

21 Network Services ExamplesThe following examples of network services are included as use-cases that have been tested with the Intelreg Open Network Platform Server Reference Architecture

211 Suricata (Next Generation IDSIPS engine)Suricata is a high performance Network IDS IPS and Network Security Monitoring engine developed by the OISF its supporting vendors and the community

httpsuricata-idsorg

212 vBNG (Broadband Network Gateway)Intel Data Plane Performance Demonstrators ndash Border Network Gateway (BNG) using DPDK

https01orgintel-data-plane-performance-demonstratorsdownloadsbng-application-v013

A Broadband (or Border) Network Gateway may also be known as a Broadband Remote Access Server (BRAS) and routes traffic to and from broadband remote access devices such as digital subscriber line access multiplexers (DSLAM) This network function is included as an example of a workload that can be virtualized on the Intel ONP Server

Additional information on the performance characterization of this vBNG implementation can be found at

httpnetworkbuildersintelcomdocsNetwork_Builders_RA_vBRAS_Finalpdf

Refer to Border Network Gateway for information on setting up and testing the vBNG application with Intelreg DPDK Accelerated vSwitch or to Appendix B for more information on running the BNG as an appliance

Intelreg ONP Server Reference ArchitectureSolutions Guide

10

NOTE This page intentionally left blank

11

Intelreg ONP Server Reference ArchitectureSolutions Guide

30 Hardware Components

Table 3-1 Hardware Ingredients (Grizzly Pass)

Item Description Notes

Platform Intelreg Server Board 2U 8x35 SATA 2x750W 2xHS Rails Intel R2308GZ4GC

Grizzly Pass Xeon DP Server (2 CPU sockets) 240GB SSD 25in SATA 6Gbs Intel Wolfsville SSDSC2BB240G401 DC S3500 Series

Processors Intelreg Xeonreg Processor Series E5-2680 v2 LGA2011 28GHz 25MB 115W 10 cores

Ivy Bridge Socket-R (EP) 10 Core 28GHz 115W 25M per core LLC 80 GTs QPI DDR3-1867 HT turboLong product availability

Cores 10 physical coresCPU 20 Hyper-threaded cores per CPU for 40 total cores

Memory 8 GB 1600 Reg ECC 15 V DDR3 Kingston KVR16R11S48I Romley

64 GB RAM (8x 8 GB)

NICs (82599) 2x Intelreg 82599 10 GbE Controller (Niantic) NICs are on socket zero (3 PCIe slots available on socket 0)

BIOS SE5C60086B02010002082220131453Release Date 08222013BIOS Revision 46

Intelreg Virtualization Technology for Directed IO (Intelreg VT-d)Hyper-threading enabled

Table 3-2 Hardware Ingredients (Wildcat Pass)

Item Description Notes

Platform Intelreg Server Board S2600WTT 1100W power supply Wildcat Pass Xeon DP Server (2 CPU sockets) 120 GB SSD 25in SATA 6GBs Intel Wolfsville SSDSC2BB120G4

Processors Intelreg Xeonreg Processor Series E5-2697 v3 26GHz 25MB 145W 14 cores

Haswell 14 Core 26GHz 145W 35M total cache per processor 96 GTs QPI DDR4-160018662133

Cores 14 physical coresCPU 28 Hyper-threaded cores per CPU for 56 total cores

Memory 8 GB DDR4 RDIMM Crucial CT8G4RFS423 64 GB RAM (8x 8 GB)

NICs (82599) 2x Intelreg 82599 10 GbE Controller (Niantic) NICs are on socket zero

BIOS GRNDSDP186B0038R011409040644 Release Date 09042014

Intelreg Virtualization Technology for Directed IO (Intelreg VT-d) enabled only for SR-IOV PCI pass-through testsHyper-threading enabled but disabled for benchmark testing

Intelreg ONP Server Reference ArchitectureSolutions Guide

12

NOTE This page intentionally left blank

13

Intelreg ONP Server Reference ArchitectureSolutions Guide

40 Software Versions

Table 4-1 Software Versions

Software Component Function VersionConfiguration

Fedora 20 x86_64 Host OS 3156-200fc20x86_64

Qemu‐kvm Virtualization technology Modified QEMU 162 (bundled with Intelreg DPDK Accelerated vSwitch)

Data Plane Development Kit (DPDK)

Network Stack bypass and libraries for packet processing Includes user space poll mode drivers

171

Intelreg DPDK Accelerated vSwitch

vSwitch v120commit id 6210bb0a6139b20283de115f87aa7a381b04670f

Open vSwitch vSwitch Open vSwitch V 23Commit id b35839f3855e3b812709c6ad1c9278f498aa9935

OpenStack SDN Orchestrator Juno Release + Intel patches (openstack_ovdkl02-907zip)

DevStack Tool for Open Stack deployment

httpsgithubcomopenstack-devdevstackgit Commit id d6f700db33aeab68916156a98971aef8cfa53a2e

OpenDaylight SDN Controller HeliumSR1

Suricata IPS application Suricata v204 (current Fedora 20 package)

BNG DPPD Broadband Network Gateway DPDK Performance Demonstrator Application

DPPD v013https01orgintel-data-plane-performance-demonstratorsdownloads

PktGen Software Network Package Generator v277

Intelreg ONP Server Reference ArchitectureSolutions Guide

14

4.1 Obtaining Software Ingredients

Table 4-2 Software Ingredients

Software Component | Sub-components / Patches | Location | Comments
Fedora 20 | | http://download.fedoraproject.org/pub/fedora/linux/releases/20/Fedora/x86_64/iso/Fedora-20-x86_64-DVD.iso | Standard Fedora 20 ISO image
Data Plane Development Kit (DPDK) | DPDK poll mode driver, sample apps (bundled) | http://dpdk.org/git/dpdk, commit id 99213f3827bad956d74e2259d06844012ba287a4 | All sub-components in one zip file
Intel® DPDK Accelerated vSwitch (OVDK) | dpdk-ovs, qemu, ovs-db, vswitchd, ovs_client (bundled) | https://github.com/01org/dpdk-ovs.git, commit id 6210bb0a6139b20283de115f87aa7a381b04670f | v1.2.0
Open vSwitch | | https://github.com/openvswitch/ovs.git, commit id b35839f3855e3b812709c6ad1c9278f498aa9935 |
OpenStack | Juno release | To be deployed using DevStack (see following row) | Three patches downloaded as one tarball; then follow the instructions to deploy the nodes
DevStack | Patches for DevStack and Nova | https://github.com/openstack-dev/devstack.git, commit id d6f700db33aeab68916156a98971aef8cfa53a2e; then apply to that commit the patches in https://download.01.org/packet-processing/ONPS1.2/openstack_ovdkl02-907.zip | Two patches downloaded as one tarball; then follow the instructions to deploy
OpenDaylight | | http://nexus.opendaylight.org/content/repositories/opendaylight.release/org/opendaylight/integration/distribution-karaf/0.2.1-Helium-SR1/distribution-karaf-0.2.1-Helium-SR1.tar.gz |
Intel® ONP Server Release 1.2 Script | Helper scripts to set up SRT 1.2 using DevStack | https://download.01.org/packet-processing/ONPS1.2/onps_server_1_2.tar.gz |
BNG DPPD | Broadband Network Gateway DPDK Performance Demonstrator | https://01.org/intel-data-plane-performance-demonstrators/dppd-bng-v013.zip |
PktGen | Software network packet generator | https://github.com/Pktgen/Pktgen-DPDK.git, commit id 5e8633c99e9771467dc26b64a4ff232c7e9fba2a |
BNG helper scripts | Intel® ONP for Server configuration scripts for vBNG | https://01.org/sites/default/files/page/vbng-scripts.tgz |
Suricata | Package from Fedora 20 | yum install suricata |


5.0 Installation and Configuration Guide

This section describes the installation and configuration instructions to prepare the controller and compute nodes.

5.1 Instructions Common to Compute and Controller Nodes

This section describes how to prepare both the controller and compute nodes with the right BIOS settings and operating system installation. The preferred operating system is Fedora 20, although it should be relatively easy to adapt this solutions guide to other Linux distributions.

5.1.1 BIOS Settings

Table 5-1 BIOS Settings

Configuration | Setting for Controller Node | Setting for Compute Node
Enhanced Intel SpeedStep | Enabled | Disabled
Processor C3 | Disabled | Disabled
Processor C6 | Disabled | Disabled
Intel® Virtualization Technology for Directed I/O (Intel® VT-d) | Disabled | Enabled (OpenStack NUMA placement only)
Intel® Hyper-Threading Technology (HTT) | Enabled | Disabled
MLC Streamer | Enabled | Enabled
MLC Spatial Prefetcher | Enabled | Enabled
DCU Instruction Prefetcher | Enabled | Enabled
Direct Cache Access (DCA) | Enabled | Enabled
CPU Power and Performance Policy | Performance | Performance
Intel® Turbo Boost | Enabled | Off
Memory RAS and Performance Configuration -> NUMA Optimized | Enabled | Enabled


5.1.2 Operating System Installation and Configuration

Following are some generic instructions for installing and configuring the operating system. Other ways of installing the operating system, such as network installation, PXE boot installation, or USB key installation, are not described in this solutions guide.

5.1.2.1 Getting the Fedora 20 DVD

1. Download the 64-bit Fedora 20 DVD (not Fedora 20 Live Media) from the following site:

http://fedoraproject.org/en/get-fedora#formats

or from the direct URL:

http://download.fedoraproject.org/pub/fedora/linux/releases/20/Fedora/x86_64/iso/Fedora-20-x86_64-DVD.iso

2. Burn the ISO file to DVD and create an installation disk.

5.1.2.2 Fedora 20 Installation

Use the DVD to install Fedora 20. During the installation, click Software selection, then choose the following:

1. C Development Tools and Libraries

2. Development Tools

Also create a user named stack and check the box Make this user administrator during the installation. The stack user is used in the OpenStack installation.

Note: Please make sure to download and use the onps_server_1_2.tar.gz tarball. Start with the README file, which gives instructions on how to use Intel's scripts to automate most of the installation steps described in this section, saving you time. When using Intel's scripts, you can jump to Section 5.4 after installing OpenDaylight based on the instructions described in Section 6.2.1.

5.1.2.3 Additional Packages Installation and Upgrade

Some packages are not installed with the standard Fedora 20 installation but are required by Intel® Open Network Platform Software (ONPS) components. These packages should be installed by the user:

git ntp patch socat python-passlib libxslt-devel libffi-devel fuse-devel gluster python-cliff
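For example, they can all be installed in one step with yum (package names exactly as listed above):

yum install -y git ntp patch socat python-passlib libxslt-devel libffi-devel fuse-devel gluster python-cliff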

ONPS supports Fedora kernel 3.15.6, which is newer than the native Fedora 20 kernel 3.11.10. To upgrade to 3.15.6, follow these steps:

1. Download the kernel packages:

wget https://kojipkgs.fedoraproject.org/packages/kernel/3.15.6/200.fc20/x86_64/kernel-3.15.6-200.fc20.x86_64.rpm

wget https://kojipkgs.fedoraproject.org/packages/kernel/3.15.6/200.fc20/x86_64/kernel-devel-3.15.6-200.fc20.x86_64.rpm

wget https://kojipkgs.fedoraproject.org/packages/kernel/3.15.6/200.fc20/x86_64/kernel-modules-extra-3.15.6-200.fc20.x86_64.rpm


2. Install the kernel packages:

rpm -i kernel-3.15.6-200.fc20.x86_64.rpm
rpm -i kernel-devel-3.15.6-200.fc20.x86_64.rpm
rpm -i kernel-modules-extra-3.15.6-200.fc20.x86_64.rpm

3. Reboot the system to allow booting into the 3.15.6 kernel.

Note: ONPS depends on libraries provided by your Linux distribution. As such, it is recommended that you regularly update your Linux distribution with the latest bug fixes and security patches to reduce the risk of security vulnerabilities in your systems.

After installing the required packages, the operating system should be updated with the following command:

yum update -y

This command upgrades to the latest kernel that Fedora supports. In order to keep the kernel at version 3.15.6, the yum configuration file needs to be modified with the following command before running yum update:

echo "exclude=kernel*" >> /etc/yum.conf

After the update completes, the system needs to be rebooted.

5.1.2.4 Disable and Enable Services

For OpenStack, the following services were disabled: selinux, firewall, and NetworkManager. Run the following commands:

sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
systemctl disable firewalld.service
systemctl disable NetworkManager.service

The following services should be enabled: ntp, sshd, and network. Run the following commands:

systemctl enable ntpd.service
systemctl enable ntpdate.service
systemctl enable sshd.service
chkconfig network on

It is important to keep the time synchronized between all nodes, and it is necessary to use a known NTP server for all nodes. Users can edit /etc/ntp.conf to add a new server and remove the default servers. The following example replaces a default NTP server with a local NTP server 10.0.0.12 and comments out the other default servers:

sed -i 's/server 0.fedora.pool.ntp.org iburst/server 10.0.0.12/g' /etc/ntp.conf
sed -i 's/server 1.fedora.pool.ntp.org iburst/#server 1.fedora.pool.ntp.org iburst/g' /etc/ntp.conf
sed -i 's/server 2.fedora.pool.ntp.org iburst/#server 2.fedora.pool.ntp.org iburst/g' /etc/ntp.conf
sed -i 's/server 3.fedora.pool.ntp.org iburst/#server 3.fedora.pool.ntp.org iburst/g' /etc/ntp.conf
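As a suggested check (standard NTP commands, not part of the original procedure), restart the NTP service after editing the file and confirm that the local server is reachable:

systemctl restart ntpd.service
ntpq -p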


5.2 Controller Node Setup

This section describes the controller node setup. It is assumed that the user has successfully followed the operating system installation and configuration sections.

Note: Make sure to download and use the onps_server_1_2.tar.gz tarball. Start with the README file, which gives instructions on how to use Intel's scripts to automate most of the installation steps described in this section, saving you time.

5.2.1 OpenStack (Juno)

This section documents features and limitations that are supported with the Intel® DPDK Accelerated vSwitch and OpenStack Juno.

5.2.1.1 Network Requirements

General

At least two networks are required to build OpenStack infrastructure in a lab environment. One network is used to connect all nodes for OpenStack management (management network), and the other is a private network used exclusively for OpenStack internal connections (tenant network) between instances (or virtual machines).

One additional network is required for Internet connectivity, because installing OpenStack requires pulling packages from various sources/repositories on the Internet.

Some users might want Internet and/or external connectivity for OpenStack instances (virtual machines). In this case, an optional network can be used.

The assumption is that the target OpenStack infrastructure contains multiple nodes: one controller node and one or more compute nodes.

Network Configuration Example

The following is an example of how to configure networks for OpenStack infrastructure. The example uses four network interfaces as follows:

• ens2f1: For the Internet network. Used to pull all necessary packages/patches from repositories on the Internet; configured to obtain a DHCP address.

• ens2f0: For the management network. Used to connect all nodes for OpenStack management; configured to use network 10.11.0.0/16.

• p1p1: For the tenant network. Used for OpenStack internal connections for virtual machines; configured with no IP address.

• p1p2: For the optional external network. Used for virtual machine Internet/external connectivity; configured with no IP address. This interface is only needed on the controller node if an external network is configured. For the compute node, this interface is not needed.

Note that among these interfaces, the interface for the virtual network (in this example p1p1) must be an 82599 port, because it is used by DPDK and the Intel® DPDK Accelerated vSwitch. Also note that a static IP address should be used for the management network interface.

In Fedora 20, the network configuration files are located at:

/etc/sysconfig/network-scripts


To configure a network on the host system, edit the following network configuration files:

ifcfg-ens2f1
DEVICE=ens2f1
TYPE=Ethernet
ONBOOT=yes
BOOTPROTO=dhcp

ifcfg-ens2f0
DEVICE=ens2f0
TYPE=Ethernet
ONBOOT=yes
BOOTPROTO=static
IPADDR=10.11.12.11
NETMASK=255.255.0.0

ifcfg-p1p1
DEVICE=p1p1
TYPE=Ethernet
ONBOOT=yes
BOOTPROTO=none

ifcfg-p1p2
DEVICE=p1p2
TYPE=Ethernet
ONBOOT=yes
BOOTPROTO=none

Note: Do not configure an IP address for p1p1 (the 10 Gb/s interface), otherwise DPDK does not work when binding the driver during the OpenStack Neutron installation.

Note: 10.11.12.11 and 255.255.0.0 are the static IP address and netmask for the management network. It is necessary to have a static IP address on this subnet. The IP address 10.11.12.11 is just an example.

5.2.1.2 Storage Requirements

By default, DevStack uses block storage (Cinder) with a volume group named stack-volumes. If not specified, stack-volumes is created with 10 GB of space from a local file system. Note that stack-volumes is the name of the volume group, not of a single volume.

The following example shows how to use spare local disks /dev/sdb and /dev/sdc to form stack-volumes on a controller node, by running the following commands:

pvcreate /dev/sdb
pvcreate /dev/sdc
vgcreate stack-volumes /dev/sdb /dev/sdc

5.2.1.3 OpenStack Installation Procedures

General

DevStack is used to deploy OpenStack in this example. The following procedure uses an actual example of an installation performed in an Intel test lab, consisting of one controller node (controller) and one compute node (compute).

Controller Node Installation Procedures

The following example uses a host for controller node installation with the following:

• Hostname: sdnlab-k01

• Internet network IP address: obtained from DHCP server

• OpenStack management IP address: 10.11.12.1

• User/password: stack/stack

Root User Actions

Log in as root (or su to root) and perform the following:

1. Add the stack user to the sudoer list:

echo "stack ALL=(ALL) NOPASSWD: ALL" >> /etc/sudoers

2. Edit /etc/libvirt/qemu.conf, adding or modifying the following lines:

cgroup_controllers = [ "cpu", "devices", "memory", "blkio", "cpuset", "cpuacct" ]

cgroup_device_acl = ["/dev/null", "/dev/full", "/dev/zero", "/dev/random", "/dev/urandom", "/dev/ptmx", "/dev/kvm", "/dev/kqemu", "/dev/rtc", "/dev/hpet", "/dev/net/tun", "/mnt/huge", "/dev/vhost-net"]

hugetlbfs_mount = "/mnt/huge"

3. Restart the libvirt service and make sure libvirtd is active:

systemctl restart libvirtd.service
systemctl status libvirtd.service

Stack User Actions

1. Log in as the stack user.

2. Configure the appropriate proxies (yum, http, https, and git) for package installation, and make sure these proxies are functional. Note that on the controller node, localhost and its IP address should be included in the no_proxy setup (for example, export no_proxy=localhost,10.11.12.1).
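A minimal sketch of such a proxy setup, assuming a proxy at http://proxy.example.com:911 (a placeholder; substitute your own proxy host and port):

export http_proxy=http://proxy.example.com:911
export https_proxy=http://proxy.example.com:911
export no_proxy=localhost,10.11.12.1
git config --global http.proxy http://proxy.example.com:911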

3. Intel® DPDK Accelerated vSwitch patches for OpenStack:

The file openstack_ovdkl02-907.zip contains the necessary patches for OpenStack; currently they are not native to OpenStack. The file can be downloaded from:

https://01.org/sites/default/files/page/openstack_ovdkl02-907.zip

Place the file in the /home/stack directory and unzip it. Three patch files, devstack.patch, nova.patch, and neutron.patch, will be present after unzipping:

cd /home/stack
wget https://01.org/sites/default/files/page/openstack_ovdkl02-907.zip
unzip openstack_ovdkl02-907.zip

4. Download the DevStack source:

git clone https://github.com/openstack-dev/devstack.git

5. Check out DevStack and apply the Intel® DPDK Accelerated vSwitch patch:

cd /home/stack/devstack
git checkout d6f700db33aeab68916156a98971aef8cfa53a2e
patch -p1 < /home/stack/devstack.patch


6. Download and patch Nova and Neutron:

sudo mkdir /opt/stack
sudo chown stack:stack /opt/stack
cd /opt/stack
git clone https://github.com/openstack/nova.git
git clone https://github.com/openstack/neutron.git
cd /opt/stack/nova
git checkout b7738bfb6c2f271d047e8f20c0b74ef647367111
patch -p1 < /home/stack/nova.patch

7. Create the local.conf file in /home/stack/devstack.

8. Pay attention to the following in the local.conf file:

a. Use Rabbit for messaging services (Rabbit is on by default). In the past, Fedora only supported QPID for OpenStack; now it only supports Rabbit.

b. Explicitly disable the Nova compute service on the controller, because by default the Nova compute service is enabled:

disable_service n-cpu

c. To use Open vSwitch, specify it in the configuration for the ML2 plug-in:

Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch

d. Explicitly disable tenant tunneling and enable tenant VLANs, because by default tunneling is used:

ENABLE_TENANT_TUNNELS=False
ENABLE_TENANT_VLANS=True

e. A sample local.conf file for the controller node is as follows:

Controller node
[[local|localrc]]

FORCE=yes
ADMIN_PASSWORD=password
MYSQL_PASSWORD=password
DATABASE_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=no-token-password
HORIZON_PASSWORD=password
RABBIT_PASSWORD=password

disable_service n-net
disable_service n-cpu
enable_service q-svc
enable_service q-agt
enable_service q-dhcp
enable_service q-l3
enable_service q-meta
enable_service neutron
enable_service horizon

DEST=/opt/stack
LOGFILE=$DEST/stack.sh.log
SCREEN_LOGDIR=$DEST/screen
SYSLOG=True
LOGDAYS=1

HOST_IP_IFACE=ens2f0
PUBLIC_INTERFACE=p1p2
VLAN_INTERFACE=p1p1
FLAT_INTERFACE=p1p1

Q_AGENT=openvswitch
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch
Q_ML2_PLUGIN_TYPE_DRIVERS=vlan,flat,local

ENABLE_TENANT_TUNNELS=False
ENABLE_TENANT_VLANS=True
PHYSICAL_NETWORK=physnet1
ML2_VLAN_RANGES=physnet1:1000:1010
OVS_PHYSICAL_BRIDGE=br-p1p1
MULTI_HOST=True

[[post-config|$NOVA_CONF]]
[DEFAULT]
firewall_driver=nova.virt.firewall.NoopFirewallDriver
novncproxy_host=0.0.0.0
novncproxy_port=6080

9. Install DevStack:

cd /home/stack/devstack
./stack.sh

10. For a successful installation, the following shows at the end of the screen output:

stack.sh completed in XXX seconds

where XXX is the number of seconds.

11. For the controller node only: add the physical port(s) to the bridge(s) created by the DevStack installation. The following example can be used to configure the two bridges br-p1p1 (for the virtual network) and br-ex (for the external network):

sudo ovs-vsctl add-port br-p1p1 p1p1
sudo ovs-vsctl add-port br-ex p1p2

12. Make sure the proper VLANs are created in the switch connecting physical port p1p1. For example, the previous local.conf specifies a VLAN range of 1000-1010, therefore matching VLANs 1000 to 1010 should be configured in the switch.


5.3 Compute Node Setup

This section describes how to complete the setup of the compute nodes. It is assumed that the user has successfully completed the BIOS settings and operating system installation and configuration sections.

Note: Please make sure to download and use the onps_server_1_2.tar.gz tarball. Start with the README file, which gives instructions on how to use Intel's scripts to automate most of the installation steps described in this section, saving you time.

5.3.1 Host Configuration

5.3.1.1 Using DevStack to Deploy vSwitch and OpenStack Components

General

Deploying OpenStack and the Intel® DPDK Accelerated vSwitch using DevStack on a compute node follows the same procedures as on the controller node. Differences include:

• Required services are nova compute, neutron agent, and Rabbit.

• The Intel® DPDK Accelerated vSwitch is used in place of Open vSwitch for the neutron agent.

Compute Node Installation Example

The following example uses a host for compute node installation with the following:

• Hostname: sdnlab-k02

• Lab network IP address: obtained from DHCP server

• OpenStack management IP address: 10.11.12.2

• User/password: stack/stack

Note the following:

• no_proxy setup: localhost and its IP address should be included in the no_proxy setup. In addition, the hostname and IP address of the controller node should also be included. For example:

export no_proxy=localhost,10.11.12.2,sdnlab-k01,10.11.12.1

• Differences in the local.conf file:

- The service host is the controller, as are other OpenStack servers such as MySQL, Rabbit, Keystone, and Image. Therefore they should be spelled out. Using the controller node example from the previous section, the service host and its IP address should be:

SERVICE_HOST_NAME=sdnlab-k01
SERVICE_HOST=10.11.12.1

- The only OpenStack services required on compute nodes are messaging, nova compute, and neutron agent, so the local.conf might look like:

disable_all_services
enable_service rabbit
enable_service n-cpu
enable_service q-agt


- The user has the option to use ovdk or openvswitch for the neutron agent:

Q_AGENT=ovdk

or

Q_AGENT=openvswitch

Note: For openvswitch, the user can specify regular or accelerated Open vSwitch (accelerated OVS). If accelerated OVS is used, the following setting should be added:

OVS_DATAPATH_TYPE=netdev

Note: If both are specified in the same local.conf file, the later one overwrites the previous one.

- For the OVDK and accelerated OVS huge pages settings, specify the number of huge pages to be allocated and the mounting point (default is /mnt/huge):

OVDK_NUM_HUGEPAGES=8192

or

OVS_NUM_HUGEPAGES=8192

- For this version, Intel uses specific versions of OVDK or accelerated OVS from their respective repositories. Specify the following in the local.conf file if OVDK or accelerated OVS is used:

OVDK_GIT_TAG=6210bb0a6139b20283de115f87aa7a381b04670f
OVS_GIT_TAG=b35839f3855e3b812709c6ad1c9278f498aa9935

- Binding the physical port to the bridge is done through the following line in local.conf. For example, to bind port p1p1 to bridge br-p1p1, use:

OVS_PHYSICAL_BRIDGE=br-p1p1

- A sample local.conf file for a compute node with the ovdk agent follows:

Compute node
[[local|localrc]]

FORCE=yes
MULTI_HOST=True

HOST_NAME=$(hostname)
HOST_IP=10.11.12.2
HOST_IP_IFACE=ens2f0
SERVICE_HOST_NAME=10.11.12.1
SERVICE_HOST=10.11.12.1

MYSQL_HOST=$SERVICE_HOST
RABBIT_HOST=$SERVICE_HOST

GLANCE_HOST=$SERVICE_HOST
GLANCE_HOSTPORT=$SERVICE_HOST:9292
KEYSTONE_AUTH_HOST=$SERVICE_HOST
KEYSTONE_SERVICE_HOST=$SERVICE_HOST

ADMIN_PASSWORD=password
MYSQL_PASSWORD=password
DATABASE_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=no-token-password
HORIZON_PASSWORD=password
RABBIT_PASSWORD=password

disable_all_services
enable_service rabbit


enable_service n-cpu
enable_service q-agt

DEST=/opt/stack
LOGFILE=$DEST/stack.sh.log
SCREEN_LOGDIR=$DEST/screen
SYSLOG=True
LOGDAYS=1

Q_AGENT=ovdk
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch
Q_ML2_PLUGIN_TYPE_DRIVERS=vlan
OVDK_NUM_HUGEPAGES=8192
OVDK_GIT_TAG=6210bb0a6139b20283de115f87aa7a381b04670f

ENABLE_TENANT_TUNNELS=False
ENABLE_TENANT_VLANS=True
ML2_VLAN_RANGES=physnet1:1000:1010
PHYSICAL_NETWORK=physnet1
OVS_PHYSICAL_BRIDGE=br-p1p1

[[post-config|$NOVA_CONF]]
[DEFAULT]
firewall_driver=nova.virt.firewall.NoopFirewallDriver
vnc_enabled=True
vncserver_listen=0.0.0.0
vncserver_proxyclient_address=$HOST_IP

- A sample local.conf file for a compute node with the accelerated OVS agent follows:

Compute node
[[local|localrc]]

FORCE=yes
MULTI_HOST=True

HOST_NAME=$(hostname)
HOST_IP=10.11.12.2
HOST_IP_IFACE=ens2f0

SERVICE_HOST_NAME=sdnlab-k01
SERVICE_HOST=10.11.12.1

MYSQL_HOST=$SERVICE_HOST
RABBIT_HOST=$SERVICE_HOST

GLANCE_HOST=$SERVICE_HOST
GLANCE_HOSTPORT=$SERVICE_HOST:9292
KEYSTONE_AUTH_HOST=$SERVICE_HOST
KEYSTONE_SERVICE_HOST=$SERVICE_HOST

ADMIN_PASSWORD=password
MYSQL_PASSWORD=password
DATABASE_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=no-token-password
HORIZON_PASSWORD=password
RABBIT_PASSWORD=password

disable_all_services
enable_service rabbit
enable_service n-cpu
enable_service q-agt

DEST=/opt/stack


LOGFILE=$DEST/stack.sh.log
SCREEN_LOGDIR=$DEST/screen
SYSLOG=True
LOGDAYS=1

Q_AGENT=openvswitch
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch
Q_ML2_PLUGIN_TYPE_DRIVERS=vlan
OVS_NUM_HUGEPAGES=8192
OVS_DATAPATH_TYPE=netdev
OVS_GIT_TAG=b35839f3855e3b812709c6ad1c9278f498aa9935

ENABLE_TENANT_TUNNELS=False
ENABLE_TENANT_VLANS=True
ML2_VLAN_RANGES=physnet1:1000:1010
PHYSICAL_NETWORK=physnet1
OVS_PHYSICAL_BRIDGE=br-p1p1

[[post-config|$NOVA_CONF]]
[DEFAULT]
firewall_driver=nova.virt.firewall.NoopFirewallDriver
vnc_enabled=True
vncserver_listen=0.0.0.0
vncserver_proxyclient_address=$HOST_IP

5.4 vIPS

The vIPS used is Suricata, which should be installed as an RPM package (as previously described) in a VM. In order to configure it to run in inline mode (IPS), use the following:

1. Turn on IP forwarding:

sysctl -w net.ipv4.ip_forward=1

2. Mangle all traffic from one vPort to the other using a netfilter queue:

iptables -I FORWARD -i eth1 -o eth2 -j NFQUEUE
iptables -I FORWARD -i eth2 -o eth1 -j NFQUEUE

3. Have Suricata run in inline mode using the netfilter queue:

suricata -c /etc/suricata/suricata.yaml -q 0

4. Enable ARP proxying:

echo 1 > /proc/sys/net/ipv4/conf/eth1/proxy_arp
echo 1 > /proc/sys/net/ipv4/conf/eth2/proxy_arp

5.4.1 Network Configuration for non-vIPS Guests

1. Turn on IP forwarding:

sysctl -w net.ipv4.ip_forward=1

2. In the source, add the route to the sink:

route add -net 192.168.200.0/24 eth1

3. At the sink, add the route to the source:

route add -net 192.168.100.0/24 eth1


6.0 Testing the Setup

This section describes how to bring up the VMs in a compute node, connect them to the virtual network(s), and verify their functionality.

Note: Currently it is not possible to have more than one virtual network in a multi-compute node setup, although it is possible in a single compute node setup.

6.1 Preparation with OpenStack

6.1.1 Deploying Virtual Machines

6.1.1.1 Default Settings

OpenStack comes with the following default settings:

• Tenant (Project): admin, demo

• Network:

- Private network (virtual network): 10.0.0.0/24

- Public network (external network): 172.24.4.0/24

• Image: cirros-0.3.1-x86_64

• Flavor: nano, micro, tiny, small, medium, large, xlarge

To deploy new instances (VMs) with different setups (such as a different VM image, flavor, or network), users must create their own. See below for details on how to create them.

To access the OpenStack dashboard, use a web browser (Firefox, Internet Explorer, or others) and the controller's IP address (management network), for example:

http://10.11.12.1

Login information is defined in the local.conf file. In the examples that follow, password is the password for both the admin and demo users.


6.1.1.2 Custom Settings

The following examples describe how to create a custom VM image, flavor, and aggregate/availability zone using OpenStack commands. The examples assume the IP address of the controller is 10.11.12.1.

1. Create a credential file admin-cred for the admin user. The file contains the following lines:

export OS_USERNAME=admin
export OS_TENANT_NAME=admin
export OS_PASSWORD=password
export OS_AUTH_URL=http://10.11.12.1:35357/v2.0

2. Source admin-cred into the shell environment for the actions of creating the glance image, aggregate/availability zone, and flavor:

source admin-cred

3. Create an OpenStack glance image. A VM image file should be ready in a location accessible by OpenStack:

glance image-create --name <image-name-to-create> --is-public=true --container-format=bare --disk-format=<format> --file=<image-file-path-name>

The following example assumes the image file fedora20-x86_64-basic.qcow2 is located in an NFS share mounted at /mnt/nfs/openstack/images on the controller host. The following command creates a glance image named fedora-basic in qcow2 format for public use (such that any tenant can use this glance image):

glance image-create --name fedora-basic --is-public=true --container-format=bare --disk-format=qcow2 --file=/mnt/nfs/openstack/images/fedora20-x86_64-basic.qcow2

4. Create a host aggregate and availability zone.

First find the available hypervisors, and then use that information to create the aggregate/availability zone:

nova hypervisor-list
nova aggregate-create <aggregate-name> <zone-name>
nova aggregate-add-host <aggregate-name> <hypervisor-name>

The following example creates an aggregate named aggr-g06 with one availability zone named zone-g06; the aggregate contains one hypervisor named sdnlab-g06:

nova aggregate-create aggr-g06 zone-g06
nova aggregate-add-host aggr-g06 sdnlab-g06

5. Create a flavor. A flavor is a virtual hardware configuration for the VMs; it defines the number of virtual CPUs and the size of virtual memory and disk space, among others.

The following command creates a flavor named onps-flavor with an ID of 1001, 1024 MB of virtual memory, 4 GB of virtual disk space, and 1 virtual CPU:

nova flavor-create onps-flavor 1001 1024 4 1


6.1.1.3 Example - VM Deployment

The following example describes how to use a custom VM image, flavor, and aggregate to launch a VM for the demo tenant, using OpenStack commands. Again, the example assumes the IP address of the controller is 10.11.12.1.

1. Create a credential file demo-cred for the demo user. The file contains the following lines:

export OS_USERNAME=demo
export OS_TENANT_NAME=demo
export OS_PASSWORD=password
export OS_AUTH_URL=http://10.11.12.1:35357/v2.0

2. Source demo-cred into the shell environment for the actions of creating the tenant network and instance (VM):

source demo-cred

3. Create a network for the tenant demo by taking the following steps:

a. Get the ID of tenant demo:

keystone tenant-list | grep -Fw demo

The following creates a network named net-demo for the tenant with ID 10618268adb64f17b266fd8fb83c960d:

neutron net-create --tenant-id 10618268adb64f17b266fd8fb83c960d net-demo

b. Create a subnet:

neutron subnet-create --tenant-id <demo-tenant-id> --name <subnet_name> <network-name> <net-ip-range>

The following creates a subnet named sub-demo with CIDR address 192.168.2.0/24 for network net-demo:

neutron subnet-create --tenant-id 10618268adb64f17b266fd8fb83c960d --name sub-demo net-demo 192.168.2.0/24

4. Create an instance (VM) for tenant demo by taking the following steps:

a. Get the name and/or ID of the image, flavor, and availability zone to be used for creating the instance:

glance image-list
nova flavor-list
nova aggregate-list
neutron net-list

b. Launch an instance (VM) using the information obtained from the previous step:

nova boot --image <image-id> --flavor <flavor-id> --availability-zone <zone-name> --nic net-id=<network-id> <instance-name>

c. The new VM should be up and running in a few minutes.
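The instance state can also be checked from the command line; for example, the following should eventually report the instance as ACTIVE:

nova list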

5. Log into the OpenStack dashboard using the demo user credentials and click Instances under Project in the left pane; the new VM should show in the right pane. Click the instance name to open the Instance Details view, then click Console in the top menu to access the VM, as follows:


6.1.1.4 Local vIPS

Configuration

1. OpenStack brings up the VMs and connects them to the vSwitch.

2. The IP addresses of the VMs get configured using the DHCP server. VM1 belongs to one subnet and VM3 to a different one; VM2 has ports on both subnets.

3. Flows get programmed to the vSwitch by the OpenDaylight controller (Section 6.2).

Data Path (Numbers Matching Red Circles)

1. VM1 sends a flow to VM3 through the vSwitch.

2. The vSwitch forwards the flow to the first vPort of VM2 (active IPS).

3. The IPS receives the flow, inspects it, and (if not malicious) sends it out through its second vPort.

4. The vSwitch forwards it to VM3.

Figure 6-1 Local vIPS


6.1.1.5 Remote vIPS

Configuration

1. OpenStack brings up the VMs and connects them to the vSwitch.

2. The IP addresses of the VMs get configured using the DHCP server.

Data Path (Numbers Matching Red Circles)

1. VM1 sends a flow toward VM3 through the vSwitch inside compute node 1.

2. The vSwitch forwards the flow out of the first 82599 port to the first 82599 port of compute node 2.

3. The vSwitch of compute node 2 forwards the flow to the first vHost port, where the traffic gets consumed by the IPS VM.

4. The IPS receives the flow, inspects it, and (provided it is not malicious) sends it out through its second vHost port into the vSwitch of compute node 2.

5. The vSwitch forwards the flow out of the second 82599 port of compute node 2 into the second port of the 82599 in compute node 1.

6. The vSwitch of compute node 1 forwards the flow into the vHost port of VM3, where the flow gets terminated.

Figure 6-2 Remote vIPS


6.1.2 Non-Uniform Memory Access (NUMA) Placement and SR-IOV Pass-through for OpenStack

NUMA placement was implemented as a new feature in the OpenStack Juno release. NUMA placement enables an OpenStack administrator to pin guest systems to particular NUMA nodes for optimization. With an SR-IOV enabled network interface card, each SR-IOV port is associated with a Virtual Function (VF). OpenStack SR-IOV pass-through enables a guest to access a VF directly.

6.1.2.1 Prepare Compute Node for SR-IOV Pass-through

To enable the previous features, follow these steps to configure the compute node:

1. The server hardware must support IOMMU (Intel VT-d). To check whether IOMMU is supported, run the following command; the output should show IOMMU entries:

dmesg | grep -e IOMMU

Note: IOMMU can be enabled/disabled through a BIOS setting under Advanced and then Processor.

2. Enable kernel IOMMU in grub. For Fedora 20, run these commands:

sed -i 's/rhgb quiet/rhgb quiet intel_iommu=on/g' /etc/default/grub
grub2-mkconfig -o /boot/grub2/grub.cfg

3. Install the necessary packages:

yum install -y yajl-devel device-mapper-devel libpciaccess-devel libnl-devel dbus-devel numactl-devel python-devel

4. Install libvirt v1.2.8 or newer. The following example uses v1.2.9:

systemctl stop libvirtd

yum remove libvirt
yum remove libvirtd

wget http://libvirt.org/sources/libvirt-1.2.9.tar.gz
tar zxvf libvirt-1.2.9.tar.gz

cd libvirt-1.2.9
./autogen.sh --system --with-dbus
make
make install

systemctl start libvirtd

Make sure libvirtd is running v1.2.9:

libvirtd --version

5. Install libvirt-python. The example below uses v1.2.9 to match the libvirt version:

yum remove libvirt-python

wget https://pypi.python.org/packages/source/l/libvirt-python/libvirt-python-1.2.9.tar.gz
tar zxvf libvirt-python-1.2.9.tar.gz

cd libvirt-python-1.2.9
python setup.py install


6. Modify /etc/libvirt/qemu.conf by adding:

/dev/vfio/vfio

to the cgroup_device_acl list. An example follows:

cgroup_device_acl = ["/dev/null", "/dev/full", "/dev/zero", "/dev/random", "/dev/urandom", "/dev/ptmx", "/dev/kvm", "/dev/kqemu", "/dev/rtc", "/dev/hpet", "/dev/net/tun", "/dev/vfio/vfio"]

7. Enable the SR-IOV virtual functions for an 82599 interface. The following example enables 2 VFs for interface p1p1:

echo 2 > /sys/class/net/p1p1/device/sriov_numvfs

To check that the virtual functions are enabled:

lspci -nn | grep 82599

The screen output should display the physical function and two virtual functions.
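For example, output similar to the following could be expected (the PCI addresses shown are illustrative and depend on the slot used; the device IDs 10fb and 10ed identify the physical and virtual functions, respectively):

08:00.0 Ethernet controller [0200]: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection [8086:10fb] (rev 01)
08:10.0 Ethernet controller [0200]: Intel Corporation 82599 Ethernet Controller Virtual Function [8086:10ed] (rev 01)
08:10.2 Ethernet controller [0200]: Intel Corporation 82599 Ethernet Controller Virtual Function [8086:10ed] (rev 01)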

6.1.2.2 DevStack Configurations

In the following text, the example uses a controller with IP address 10.11.12.1 and a compute node with IP address 10.11.12.4. The PCI device vendor ID (8086) and product IDs of the 82599 can be obtained from the output (10fb for the physical function and 10ed for a VF):

lspci -nn | grep 82599

On the controller node:

1. Edit the controller's local.conf. Note that the same local.conf file of Section 5.2.1.3 is used here, adding the following:

Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch,sriovnicswitch

[[post-config|$NOVA_CONF]]
[DEFAULT]
scheduler_default_filters=RamFilter,ComputeFilter,AvailabilityZoneFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,PciPassthroughFilter,NUMATopologyFilter
pci_alias={"name":"niantic","product_id":"10ed","vendor_id":"8086"}

[[post-config|$Q_PLUGIN_CONF_FILE]]
[ml2_sriov]
supported_pci_vendor_devs = 8086:10fb 8086:10ed

2. Run stack.sh.

On the compute node:

1. Edit /opt/stack/nova/requirements.txt, adding "libvirt-python>=1.2.8":

echo "libvirt-python>=1.2.8" >> /opt/stack/nova/requirements.txt

2. Edit the compute node's local.conf for accelerated OVS. Note that the same local.conf file of Section 5.3.1.1 is used here.


3. Add the following:

[[post-config|$NOVA_CONF]]
[DEFAULT]
pci_passthrough_whitelist={"address":"0000:08:00.0","vendor_id":"8086","physical_network":"physnet1"}

pci_passthrough_whitelist={"address":"0000:08:10.0","vendor_id":"8086","physical_network":"physnet1"}

pci_passthrough_whitelist={"address":"0000:08:10.2","vendor_id":"8086","physical_network":"physnet1"}

4. Remove (or comment out) the following. (Note that currently SR-IOV pass-through is only supported with a standard OVS.)

OVS_NUM_HUGEPAGES=8192
OVS_DATAPATH_TYPE=netdev
OVS_GIT_TAG=b35839f3855e3b812709c6ad1c9278f498aa9935

Run stack.sh on both the controller and compute nodes to complete the DevStack installation.

6.1.2.3 Create VM with NUMA Placement and SR-IOV

1. After stacking is successful on both the controller and compute nodes, verify that the PCI pass-through device(s) are in the OpenStack database:

mysql -uroot -ppassword -h 10.11.12.1 nova -e 'select * from pci_devices;'

2. The output should show entry(ies) for the PCI device(s) similar to the following:

| 2014-11-18 19:41:14 | NULL | NULL | 0 | 1 | 3 | 0000:08:10.0 | 10ed | 8086 | type-VF | pci_0000_08_10_0 | label_8086_10ed | available | {"phys_function": "0000:08:00.0"} | NULL | NULL | 0 |

3. Next, create a flavor, for example:

nova flavor-create numa-flavor 1001 1024 4 1

where:

flavor name = numa-flavor
id = 1001
virtual memory = 1024 MB
virtual disk size = 4 GB
number of virtual CPUs = 1

4. Modify the flavor for NUMA placement with PCI pass-through:

nova flavor-key 1001 set "pci_passthrough:alias"="niantic:1" hw:numa_nodes=1 hw:numa_cpus.0=0 hw:numa_mem.0=1024

5. To show detailed information about the flavor:

nova flavor-show 1001

6. Create a VM numa-vm1 with the flavor numa-flavor under the default project demo. Note that the following example assumes an image fedora-basic and an availability zone zone-04 are already in place (see Section 6.1.1.2) and that private is the default network for the demo project:

nova boot --image <image-id> --flavor <flavor-id> --availability-zone <zone-name> --nic net-id=<network-id> numa-vm1

where numa-vm1 is the name of the VM instance to be booted.


Access the VM from OpenStack Horizon; the new VM shows two virtual network interfaces. The interface with an SR-IOV VF should show a name of ensX, where X is a number (for example, ens5). If a DHCP server is available for the physical interface (p1p1 in this example), the VF gets an IP address automatically; otherwise, users can assign an IP address to the interface the same way as for a standard network interface.

To verify network connectivity through a VF, users can set up two compute hosts and create a VM on each node. After obtaining IP addresses, the VMs should communicate with each other as over a normal network.
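A minimal sketch of such a manual check, assuming the VF interface is ens5 and using 192.168.1.10 and 192.168.1.11 as placeholder addresses for the two VMs:

ip addr add 192.168.1.10/24 dev ens5
ip link set ens5 up
ping -c 4 192.168.1.11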

6.2 Using OpenDaylight

This section describes how to download, install, and set up an OpenDaylight controller.

6.2.1 Preparing the OpenDaylight Controller

1. Download the pre-built OpenDaylight Helium-SR1 distribution:

wget http://nexus.opendaylight.org/content/repositories/opendaylight.release/org/opendaylight/integration/distribution-karaf/0.2.1-Helium-SR1/distribution-karaf-0.2.1-Helium-SR1.tar.gz

2. Extract the archive and cd into it:

tar xf distribution-karaf-0.2.1-Helium-SR1.tar.gz
cd distribution-karaf-0.2.1-Helium-SR1

3. Use the bin/karaf executable to start the Karaf shell.


4. Install the required features.

Karaf might take a long time to start, or feature installation might fail, if the host does not have network access. You'll need to set up the appropriate proxy settings.
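As an assumed example only (the exact feature set depends on the deployment; these feature names are typical for Helium OVSDB-based OpenStack integration and are not taken from this guide), features are installed from the Karaf shell as follows:

feature:install odl-base-all odl-aaa-authn odl-restconf odl-nsf-all odl-adsal-northbound odl-mdsal-apidocs odl-ovsdb-openstack odl-ovsdb-northbound odl-dlux-core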

6.3 Border Network Gateway

This section describes how to install and run a Border Network Gateway (BNG) on a compute node that has been prepared as described in Section 5.1 and Section 5.3. The example interface names from those sections have been maintained in this section as well. For simplicity, the BNG uses the handle_none configuration mode, which makes it work as an L2 forwarding engine. The BNG is more complex than this, and users who are interested in exploring more of its capabilities should read https://01.org/intel-data-plane-performance-demonstrators/quick-overview.

The setup to test the functionality of the vBNG follows.


6.3.1 Installation and Configuration Inside the VM

1. Execute the following command:

yum -y update

2. Disable SELinux:

setenforce 0
vi /etc/selinux/config

and change it so SELINUX=disabled.

3. Disable the firewall:

systemctl disable firewalld.service
reboot

4. Edit the grub default configuration:

vi /etc/default/grub

Add hugepages to it:

... noirqbalance intel_idle.max_cstate=0 processor.max_cstate=0 ipv6.disable=1 default_hugepagesz=1G hugepagesz=1G hugepages=2 isolcpus=1,2,3,4

5. Rebuild the grub config and reboot the system:

grub2-mkconfig -o /boot/grub2/grub.cfg
reboot

6. Verify that hugepages are available in the VM:

cat /proc/meminfo
HugePages_Total:    2
HugePages_Free:     2
Hugepagesize:   1048576 kB

7. Add the following to the end of the ~/.bashrc file:

---------------------------------------------
export RTE_SDK=/root/dpdk
export RTE_TARGET=x86_64-native-linuxapp-gcc
export OVS_DIR=/root/ovs

export RTE_UNBIND=$RTE_SDK/tools/dpdk_nic_bind.py
export DPDK_DIR=$RTE_SDK
export DPDK_BUILD=$DPDK_DIR/$RTE_TARGET
---------------------------------------------

8. Log in again or source that file:

. ~/.bashrc

9. Install DPDK:

git clone http://dpdk.org/git/dpdk
cd dpdk
git checkout v1.7.1
make install T=$RTE_TARGET
modprobe uio
insmod $RTE_SDK/$RTE_TARGET/kmod/igb_uio.ko


10. Check the PCI addresses of the 82599 cards:

lspci | grep Network
00:04.0 Ethernet controller: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection (rev 01)
00:05.0 Ethernet controller: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection (rev 01)
00:06.0 Ethernet controller: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection (rev 01)
00:07.0 Ethernet controller: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection (rev 01)

11. Make sure the correct PCI addresses are listed in the script bind_to_igb_uio.sh.
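As an illustration only (using the four PCI addresses shown above and the variables set earlier in ~/.bashrc), the ports could also be bound manually with the DPDK binding tool and the result verified:

$RTE_SDK/tools/dpdk_nic_bind.py --bind=igb_uio 00:04.0 00:05.0 00:06.0 00:07.0
$RTE_SDK/tools/dpdk_nic_bind.py --status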

12. Download the BNG package:

wget https://01.org/sites/default/files/downloads/intel-data-plane-performance-demonstrators/dppd-bng-v013.zip

13. Extract the DPPD BNG sources:

unzip dppd-bng-v013.zip

14. Build the BNG DPPD application:

yum -y install ncurses-devel
cd dppd-BNG-v013
make

15. Refer to Section 6.3.3, "Extra Preparations on the Compute Node," before running the BNG application in the VM inside the compute node.

16. Make sure that the application starts:

./build/dppd -f config/handle_none.cfg

The handle_none configuration passes all traffic straight through between ports, which is essentially similar to the L2 forwarding test. The config directory contains additional, more complex BNG configurations and Pktgen scripts. Additional BNG-specific workloads can be found in the dppd-BNG-v013/pktgen-scripts directory.

Following is a sample graphic of the BNG running in a VM with 2 ports


Exit the application by pressing ESC or CTRL-C.

Refer to Section 6.3.2 regarding installation and running of the software traffic generator.

For a sanity check, users can use the pktgen wrapper script onps_pktgen-64bytes-UDP-2ports.sh to run PktGen (on its dedicated server) in order to test the handle_none throughput for two physical and two virtual ports. You'll need to update the PKTGEN_DIR variable at the top of the file to point to the right directory, which, referring to Section 6.3.2, is the following:

PKTGEN_DIR=/home/stack/git/Pktgen-DPDK
pktgen-64bytes.sh

6.3.2 Installation and Configuration of the Back-to-Back Host (Packet Generator)

The back-to-back host can be any Intel® Xeon® processor-based system, or it can be any compute node that has been prepared using the instructions in Section 5.1 and Section 5.3. For simplicity, Intel assumes the latter was the case. Also assume that the git directory for the stack user is /home/stack/git.

1. In the git directory, get the source from GitHub:

git clone https://github.com/Pktgen/Pktgen-DPDK.git
cd Pktgen-DPDK

2. An extra package must be installed for Pktgen to compile correctly:

yum -y install libpcap-devel

Pktgen comes with its own distribution of the DPDK sources. This bundled version of DPDK must be used; note that it contains some Wind River specific helper libraries that Pktgen depends on, which are not in the default DPDK distribution.

3. The $RTE_TARGET variable must be set to a specific value, otherwise these libraries will not build:

cd
vi .bashrc

Add the following three lines to the end:

export RTE_SDK=$HOME/Pktgen-DPDK/dpdk
export RTE_TARGET=x86_64-pktgen-linuxapp-gcc
export PKTGEN_DIR=$HOME/Pktgen-DPDK

4. Log in again or execute the following command:

. ~/.bashrc

5. Build the basic DPDK libraries and extra helpers:

cd $RTE_SDK
make install T=$RTE_TARGET

6. Build Pktgen:

cd examples/pktgen
make

7. Use the dpdk_nic_bind.py script to bind both interfaces to igb_uio so DPDK can use them, adapting the PCI addresses to the actual NICs in use. See the details of the command that follows:

./tools/dpdk_nic_bind.py --status
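For example, assuming --status reports the two interfaces at 04:00.0 and 04:00.1 (placeholder addresses; use the ones reported on your system), they can be bound with:

./tools/dpdk_nic_bind.py --bind=igb_uio 04:00.0 04:00.1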

8. Use onps_pktgen-64-bytes-UDP-2ports.sh from onps_server_1_2.tar.gz.


9. Now run the script as root, after the compute node has been set up as in Section 6.3.3, the BNG VM has been prepared as in Section 6.3.1, and the BNG has been started inside the VM.

6.3.3 Extra Preparations on the Compute Node

1. Do the following as the stack user:

cd /home/stack/devstack
vi local.conf

2. Comment out the following:

PHYSICAL_NETWORK=physnet1
OVS_PHYSICAL_BRIDGE=br-p1p1

and add the following line right below the commented-out ones:

OVS_BRIDGE_MAPPINGS=default:br-p1p1,physnet1:br-p1p2

3. Run the following again as the stack user:

./unstack.sh

./stack.sh

This causes both physical interfaces to come up and get bound to DPDK. Also, a bridge is created on top of each of these interfaces:

ovs-vsctl show
b52bd3ed-0f6c-45b9-ace1-846d901bed64
    Bridge br-p1p1
        Port br-p1p1
            Interface br-p1p1
                type: internal
        Port p1p1
            Interface p1p1
                type: dpdkphy
                options: port=0
        Port phy-br-p1p1
            Interface phy-br-p1p1
                type: patch
                options: peer=int-br-p1p1
    Bridge br-int
        fail_mode: secure
        Port int-br-p1p2
            Interface int-br-p1p2
                type: patch
                options: peer=phy-br-p1p2
        Port int-br-p1p1
            Interface int-br-p1p1
                type: patch
                options: peer=phy-br-p1p1
        Port br-int
            Interface br-int
                type: internal
    Bridge br-p1p2
        Port phy-br-p1p2
            Interface phy-br-p1p2
                type: patch
                options: peer=int-br-p1p2
        Port p1p2
            Interface p1p2
                type: dpdkphy
                options: port=1
        Port br-p1p2
            Interface br-p1p2
                type: internal

4. Move the p1p2 physical port under the same bridge as p1p1:

ovs-vsctl del-port p1p2
ovs-vsctl add-port br-p1p1 p1p2 -- set Interface p1p2 type=dpdkphy options:port=1

5. Delete (kill) the OpenStack agent:

./rejoin-stack.sh
ctrl-a 1
ctrl-c
ctrl-a d

6. Add the dpdkvhost interfaces for the VM:

ovs-vsctl --no-wait add-port br-p1p1 port3 -- set Interface port3 type=dpdkvhost ofport_request=3
ovs-vsctl --no-wait add-port br-p1p1 port4 -- set Interface port4 type=dpdkvhost ofport_request=4

7. Find out the OpenFlow port numbers of the attached interfaces:

ovs-ofctl show br-p1p1

The output should be similar to the following. Note the number to the left of each interface name, because it is the OpenFlow port number:

OFPT_FEATURES_REPLY (xid=0x2): dpid:0000286031010000
n_tables:254, n_buffers:256
capabilities: FLOW_STATS TABLE_STATS QUEUE_STATS ARP_MATCH_IP
actions: OUTPUT SET_VLAN_VID SET_VLAN_PCP STRIP_VLAN SET_DL_SRC SET_DL_DST SET_NW_SRC SET_NW_DST SET_TP_SRC SET_TP_DST
 1(phy-br-p1p1): addr:9e:ae:92:25:3c:c1
     config: 0
     state: 0
     speed: 0 Mbps now, 0 Mbps max
 2(p1p2): addr:9e:ae:92:25:3c:c1
     config: 0
     state: 0
     speed: 0 Mbps now, 0 Mbps max
 3(port3): addr:9e:ae:92:25:3c:c1
     config: 0
     state: 0
     speed: 0 Mbps now, 0 Mbps max
 4(port4): addr:49:04:ff:7f:00:00
     config: 0
     state: 0
     speed: 0 Mbps now, 0 Mbps max
 16(p1p1): addr:49:04:ff:7f:00:00
     config: 0
     state: 0
     speed: 0 Mbps now, 0 Mbps max
 LOCAL(br-p1p1): addr:9e:ae:92:25:3c:c1
     config: 0
     state: 0
     speed: 0 Mbps now, 0 Mbps max
OFPT_GET_CONFIG_REPLY (xid=0x4): frags=normal miss_send_len=0

8. Clean up the flow table of the bridge:

ovs-ofctl del-flows br-p1p1

9. Program the flows so that each physical interface forwards packets to a dpdkvhost interface and the other way round:

ovs-ofctl add-flow br-p1p1 in_port=16,dl_type=0x0800,idle_timeout=0,action=output:3
ovs-ofctl add-flow br-p1p1 in_port=3,dl_type=0x0800,idle_timeout=0,action=output:16
ovs-ofctl add-flow br-p1p1 in_port=4,dl_type=0x0800,idle_timeout=0,action=output:2
ovs-ofctl add-flow br-p1p1 in_port=2,dl_type=0x0800,idle_timeout=0,action=output:4
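The installed flows can then be verified with:

ovs-ofctl dump-flows br-p1p1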


10. Users can now spawn their vBNG:

qemu-kvm -cpu host -enable-kvm -m 4096 -smp 4,cores=4,threads=1,sockets=1 -name VM1 -hda <path to the VM image file> -mem-path /dev/hugepages -mem-prealloc -vnc :2 -daemonize -net nic,model=virtio,macaddr=00:1e:77:68:09:fd -net tap,ifname=tap1,script=no,downscript=no -netdev type=tap,id=net1,script=no,downscript=no,ifname=port3,vhost=on -device virtio-net-pci,netdev=net1,mac=00:00:01:00:00:01,csum=off,gso=off,guest_tso4=off,guest_tso6=off,guest_ecn=off -netdev type=tap,id=net2,script=no,downscript=no,ifname=port4,vhost=on -device virtio-net-pci,netdev=net2,mac=00:00:01:00:00:02,csum=off,gso=off,guest_tso4=off,guest_tso6=off,guest_ecn=off


Appendix A Additional OpenDaylight Information

This section describes how OpenDaylight can be used in a multi-node setup. Two hosts are used: one running OpenDaylight, the OpenStack controller + compute, and OVS; the second host is a compute node. This section describes how to create a VXLAN tunnel and VMs, and how to ping from one VM to another.

Note: Due to a known defect in ODL (https://bugs.opendaylight.org/show_bug.cgi?id=2469), the multi-node setup could not be verified.

Following is a sample local.conf for the OpenDaylight host:

[[local|localrc]]
FORCE=yes

HOST_NAME=<name of this machine>
HOST_IP=<ip of this machine>
HOST_IP_IFACE=<mgmt ip isolated from internet>

PUBLIC_INTERFACE=<isolated IP, could be same as HOST_IP_IFACE>
VLAN_INTERFACE=
FLAT_INTERFACE=

ADMIN_PASSWORD=password
MYSQL_PASSWORD=password
DATABASE_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=no-token-password
HORIZON_PASSWORD=password
RABBIT_PASSWORD=password

disable_service n-net
disable_service n-cpu

enable_service q-svc
enable_service q-agt
enable_service q-dhcp
enable_service q-l3
enable_service q-meta
enable_service neutron
enable_service horizon

LOGFILE=$DEST/stack.sh.log
SCREEN_LOGDIR=$DEST/screen
SYSLOG=True
LOGDAYS=1

# ODL start
Q_HOST=$HOST_IP
enable_service odl-server
enable_service odl-compute
ODL_MGR_IP=10.11.10.7
ENABLED_SERVICES+=,n-api,n-crt,n-obj,n-cpu,n-cond,n-sch,n-novnc,n-cauth,n-cauth,nova
ENABLED_SERVICES+=,cinder,c-api,c-vol,c-sch,c-bak


Q_PLUGIN=ml2

Q_AGENT=openvswitch
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch,opendaylight
Q_ML2_PLUGIN_TYPE_DRIVERS=vlan,flat,local
Q_ML2_TENANT_NETWORK_TYPE=vxlan

ENABLE_TENANT_TUNNELS=True
ENABLE_TENANT_VLANS=True

PHYSICAL_NETWORK=physnet1
ML2_VLAN_RANGES=physnet1:1000:1010
OVS_PHYSICAL_BRIDGE=br-

MULTI_HOST=True

[[post-config|$NOVA_CONF]]

# disable nova security groups
[DEFAULT]
firewall_driver=nova.virt.firewall.NoopFirewallDriver
novncproxy_host=0.0.0.0
novncproxy_port=6080

Here is a sample local.conf for the compute node:

[[local|localrc]]

FORCE=yes
MULTI_HOST=True

HOST_NAME=<name of this machine>
HOST_IP=<ip of this machine>
HOST_IP_IFACE=<isolated interface>
SERVICE_HOST_NAME=<name of the controller machine>
SERVICE_HOST=<ip of controller machine>
Q_HOST=$SERVICE_HOST

MYSQL_HOST=$SERVICE_HOST
RABBIT_HOST=$SERVICE_HOST

GLANCE_HOST=$SERVICE_HOST
GLANCE_HOSTPORT=$SERVICE_HOST:9292
KEYSTONE_AUTH_HOST=$SERVICE_HOST
KEYSTONE_SERVICE_HOST=<ip of controller machine>

ADMIN_PASSWORD=password
MYSQL_PASSWORD=password
DATABASE_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=no-token-password
HORIZON_PASSWORD=password
RABBIT_PASSWORD=password

disable_all_services

enable_service rabbit
enable_service n-cpu
enable_service q-agt
enable_service odl-compute

DEST=/opt/stack
LOGFILE=$DEST/stack.sh.log
SCREEN_LOGDIR=$DEST/screen
SYSLOG=True
LOGDAYS=1


ODL_MGR_IP=<ip of controller machine>

Q_PLUGIN=ml2
Q_AGENT=openvswitch
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch,opendaylight
Q_ML2_PLUGIN_TYPE_DRIVERS=vxlan
OVS_NUM_HUGEPAGES=8192
OVS_DATAPATH_TYPE=netdev

OVDK_OVS_GIT_TAG=

ENABLE_TENANT_TUNNELS=True
ENABLE_TENANT_VLANS=True

Q_ML2_TENANT_NETWORK_TYPE=vxlan
ML2_VLAN_RANGES=physnet1:1000:1010
PHYSICAL_NETWORK=physnet1
OVS_PHYSICAL_BRIDGE=br-p1p1

[[post-config|$NOVA_CONF]]
[DEFAULT]
firewall_driver=nova.virt.firewall.NoopFirewallDriver
vnc_enabled=True
vncserver_listen=0.0.0.0
vncserver_proxyclient_address=$HOST_IP

A.1 Create VMs Using the DevStack Horizon GUI

After starting the OpenDaylight controller as described in Section 6.2.1, run stack.sh on the controller and compute nodes.

Log in to http://<control node ip address>:8080 to start the Horizon GUI.

Verify that the node shows up in the following GUI.

Create a new VXLAN network:

1 Click on the Networks tab

2 Click on the Create Network button

3 Enter the Network name then click Next


4 Enter the subnet information then click Next


5 Add additional information then click Next

6 Click the Create button

7 Create a VM instance by clicking the Launch Instances button


8 Click on the Details tab to enter VM details


9 Click on the Networking tab then enter network information

VMs will now be created.

Instead of disabling the OVSDB neutron service by removing the OVSDB neutron bundle file, it is possible to disable the bundle from the OSGi console. However, there does not appear to be a way to make this persistent, so it must be done each time the controller restarts.


Once the controller is up and running, connect to the OSGi console. The ss command displays all of the bundles that are installed and their status; adding a string filters the list of bundles. List the OVSDB bundles:

osgi> ss ovs
Framework is launched

id      State       Bundle
106     ACTIVE      org.opendaylight.ovsdb.northbound_0.5.0
112     ACTIVE      org.opendaylight.ovsdb_0.5.0
262     ACTIVE      org.opendaylight.ovsdb.neutron_0.5.0

Note that there are three OVSDB bundles running (the OVSDB neutron bundle has not been removed in this case).

Disable the OVSDB neutron bundle and then list the OVSDB bundles again:

osgi> stop 262
osgi> ss ovs
Framework is launched

id      State       Bundle
106     ACTIVE      org.opendaylight.ovsdb.northbound_0.5.0
112     ACTIVE      org.opendaylight.ovsdb_0.5.0
262     RESOLVED    org.opendaylight.ovsdb.neutron_0.5.0

Now the OVSDB neutron bundle is in the RESOLVED state, which means that it is not active.


Appendix B BNG as an Appliance

Please download the latest BNG application from https://01.org/intel-data-plane-performance-demonstrators/downloads. More details about how the BNG works can be found at https://01.org/intel-data-plane-performance-demonstrators/quick-overview.


Appendix C Glossary

Acronym Description

ATR Application Targeted Routing

COTS Commercial Off‐The-Shelf

DPI Deep Packet Inspection

FCS Frame Check Sequence

GRE Generic Routing Encapsulation

GRO Generic Receive Offload

IOMMU InputOutput Memory Management Unit

Kpps Kilo packets per second

KVM Kernel-based Virtual Machine

LRO Large Receive Offload

MSI Message Signaling Interrupt

MPLS Multi-protocol Label Switching

Mpps Million packets per second

NIC Network Interface Card

pps Packets per second

QAT Quick Assist Technology

QinQ VLAN stacking (802.1ad)

RA Reference Architecture

RSC Receive Side Coalescing

RSS Receive Side Scaling

SP Service Provider

SR-IOV Single root IO Virtualization

TCO Total Cost of Ownership

TSO TCP Segmentation Offload


Appendix D References

Document Name | Source
Internet Protocol version 4 | http://www.ietf.org/rfc/rfc791.txt
Internet Protocol version 6 | http://www.faqs.org/rfc/rfc2460.txt
Intel® 82599 10 Gigabit Ethernet Controller Datasheet | http://www.intel.com/content/www/us/en/ethernet-controllers/82599-10-gbe-controller-datasheet.html
Intel DDIO | https://www-ssl.intel.com/content/www/us/en/io/direct-data-i-o.html
Bandwidth Sharing Fairness | http://www.intel.com/content/www/us/en/network-adapters/10-gigabit-network-adapters/10-gbe-ethernet-flexible-port-partitioning-brief.html
Design Considerations for Efficient Network Applications with Intel® Multi-core Processor-based Systems on Linux | http://download.intel.com/design/intarch/papers/324176.pdf
OpenFlow with Intel 82599 | http://ftp.sunet.se/pub/Linux/distributions/bifrost/seminars/workshop-2011-03-31/Openflow_1103031.pdf
Wu, W., DeMar, P., & Crawford, M. (2012). A Transport-Friendly NIC for Multicore/Multiprocessor Systems | IEEE Transactions on Parallel and Distributed Systems, vol. 23, no. 4, April 2012. http://lss.fnal.gov/archive/2010/pub/fermilab-pub-10-327-cd.pdf
Why Does Flow Director Cause Packet Reordering? | http://arxiv.org/ftp/arxiv/papers/1106/1106.0443.pdf
IA packet processing | http://www.intel.com/p/en_US/embedded/hwsw/technology/packet-processing
High Performance Packet Processing on Cloud Platforms using Linux with Intel® Architecture | http://networkbuilders.intel.com/docs/network_builders_RA_packet_processing.pdf
Packet Processing Performance of Virtualized Platforms with Linux and Intel® Architecture | http://networkbuilders.intel.com/docs/network_builders_RA_NFV.pdf
DPDK | http://www.intel.com/go/dpdk
Intel® DPDK Accelerated vSwitch | https://01.org/packet-processing


LEGAL

By using this document in addition to any agreements you have with Intel you accept the terms set forth below

You may not use or facilitate the use of this document in connection with any infringement or other legal analysis concerning Intel products described herein You agree to grant Intel a non-exclusive royalty-free license to any patent claim thereafter drafted which includes subject matter disclosed herein

INFORMATION IN THIS DOCUMENT IS PROVIDED IN CONNECTION WITH INTEL PRODUCTS NO LICENSE EXPRESS OR IMPLIED BY ESTOPPEL OR OTHERWISE TO ANY INTELLECTUAL PROPERTY RIGHTS IS GRANTED BY THIS DOCUMENT EXCEPT AS PROVIDED IN INTELS TERMS AND CONDITIONS OF SALE FOR SUCH PRODUCTS INTEL ASSUMES NO LIABILITY WHATSOEVER AND INTEL DISCLAIMS ANY EXPRESS OR IMPLIED WARRANTY RELATING TO SALE ANDOR USE OF INTEL PRODUCTS INCLUDING LIABILITY OR WARRANTIES RELATING TO FITNESS FOR A PARTICULAR PURPOSE MERCHANTABILITY OR INFRINGEMENT OF ANY PATENT COPYRIGHT OR OTHER INTELLECTUAL PROPERTY RIGHT

Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors

Performance tests such as SYSmark and MobileMark are measured using specific computer systems components software operations and functions Any change to any of those factors may cause the results to vary You should consult other information and performance tests to assist you in fully evaluating your contemplated purchases including the performance of that product when combined with other products

The products described in this document may contain design defects or errors known as errata which may cause the product to deviate from published specifications Current characterized errata are available on request Contact your local Intel sales office or your distributor to obtain the latest specifications and before placing your product order

Intel technologies may require enabled hardware specific software or services activation Check with your system manufacturer or retailer Tests document performance of components on a particular test in specific systems Differences in hardware software or configuration will affect actual performance Consult other sources of information to evaluate performance as you consider your purchase For more complete information about performance and benchmark results visit httpwwwintelcomperformance

All products computer systems dates and figures specified are preliminary based on current expectations and are subject to change without notice Results have been estimated or simulated using internal Intel analysis or architecture simulation or modeling and provided to you for informational purposes Any differences in your system hardware software or configuration may affect your actual performance

No computer system can be absolutely secure Intel does not assume any liability for lost or stolen data or systems or any damages resulting from such losses

Intel does not control or audit third-party web sites referenced in this document You should visit the referenced web site and confirm whether referenced data are accurate

Intel Corporation may have patents or pending patent applications trademarks copyrights or other intellectual property rights that relate to the presented subject matter The furnishing of documents and other materials and information does not provide any license express or implied by estoppel or otherwise to any such patents trademarks copyrights or other intellectual property rights

© 2014 Intel Corporation. All rights reserved. Intel, the Intel logo, Core, Xeon and others are trademarks of Intel Corporation in the U.S. and/or other countries. Other names and brands may be claimed as the property of others.


21 Network Services Examples
The following examples of network services are included as use cases that have been tested with the Intel® Open Network Platform Server Reference Architecture.

211 Suricata (Next Generation IDS/IPS engine)
Suricata is a high-performance Network IDS, IPS and Network Security Monitoring engine developed by the OISF, its supporting vendors and the community.

httpsuricata-idsorg

212 vBNG (Broadband Network Gateway)
Intel Data Plane Performance Demonstrators - Border Network Gateway (BNG) using DPDK:

https01orgintel-data-plane-performance-demonstratorsdownloadsbng-application-v013

A Broadband (or Border) Network Gateway may also be known as a Broadband Remote Access Server (BRAS). It routes traffic to and from broadband remote access devices such as digital subscriber line access multiplexers (DSLAMs). This network function is included as an example of a workload that can be virtualized on the Intel ONP Server.

Additional information on the performance characterization of this vBNG implementation can be found at

httpnetworkbuildersintelcomdocsNetwork_Builders_RA_vBRAS_Finalpdf

Refer to Border Network Gateway for information on setting up and testing the vBNG application with Intelreg DPDK Accelerated vSwitch or to Appendix B for more information on running the BNG as an appliance


30 Hardware Components

Table 3-1 Hardware Ingredients (Grizzly Pass)

Item Description Notes

Platform Intelreg Server Board 2U 8x35 SATA 2x750W 2xHS Rails Intel R2308GZ4GC

Grizzly Pass Xeon DP Server (2 CPU sockets) 240GB SSD 25in SATA 6Gbs Intel Wolfsville SSDSC2BB240G401 DC S3500 Series

Processors Intelreg Xeonreg Processor Series E5-2680 v2 LGA2011 28GHz 25MB 115W 10 cores

Ivy Bridge Socket-R (EP) 10 Core 28GHz 115W 25M per core LLC 80 GTs QPI DDR3-1867 HT turboLong product availability

Cores 10 physical coresCPU 20 Hyper-threaded cores per CPU for 40 total cores

Memory 8 GB 1600 Reg ECC 15 V DDR3 Kingston KVR16R11S48I Romley

64 GB RAM (8x 8 GB)

NICs (82599) 2x Intelreg 82599 10 GbE Controller (Niantic) NICs are on socket zero (3 PCIe slots available on socket 0)

BIOS SE5C60086B02010002082220131453Release Date 08222013BIOS Revision 46

Intelreg Virtualization Technology for Directed IO (Intelreg VT-d)Hyper-threading enabled

Table 3-2 Hardware Ingredients (Wildcat Pass)

Item Description Notes

Platform Intelreg Server Board S2600WTT 1100W power supply Wildcat Pass Xeon DP Server (2 CPU sockets) 120 GB SSD 25in SATA 6GBs Intel Wolfsville SSDSC2BB120G4

Processors Intelreg Xeonreg Processor Series E5-2697 v3 26GHz 25MB 145W 14 cores

Haswell 14 Core 26GHz 145W 35M total cache per processor 96 GTs QPI DDR4-160018662133

Cores 14 physical coresCPU 28 Hyper-threaded cores per CPU for 56 total cores

Memory 8 GB DDR4 RDIMM Crucial CT8G4RFS423 64 GB RAM (8x 8 GB)

NICs (82599) 2x Intelreg 82599 10 GbE Controller (Niantic) NICs are on socket zero

BIOS GRNDSDP186B0038R011409040644 Release Date 09042014

Intelreg Virtualization Technology for Directed IO (Intelreg VT-d) enabled only for SR-IOV PCI pass-through testsHyper-threading enabled but disabled for benchmark testing


40 Software Versions

Table 4-1 Software Versions

Software Component Function VersionConfiguration

Fedora 20 x86_64 Host OS 3156-200fc20x86_64

Qemu‐kvm Virtualization technology Modified QEMU 162 (bundled with Intelreg DPDK Accelerated vSwitch)

Data Plane Development Kit (DPDK)

Network Stack bypass and libraries for packet processing Includes user space poll mode drivers

171

Intelreg DPDK Accelerated vSwitch

vSwitch v120commit id 6210bb0a6139b20283de115f87aa7a381b04670f

Open vSwitch vSwitch Open vSwitch V 23Commit id b35839f3855e3b812709c6ad1c9278f498aa9935

OpenStack SDN Orchestrator Juno Release + Intel patches (openstack_ovdkl02-907zip)

DevStack Tool for Open Stack deployment

httpsgithubcomopenstack-devdevstackgit Commit id d6f700db33aeab68916156a98971aef8cfa53a2e

OpenDaylight SDN Controller HeliumSR1

Suricata IPS application Suricata v204 (current Fedora 20 package)

BNG DPPD Broadband Network Gateway DPDK Performance Demonstrator Application

DPPD v013https01orgintel-data-plane-performance-demonstratorsdownloads

PktGen Software Network Package Generator v277


41 Obtaining Software Ingredients
Table 4-2 Software Ingredients

Software Component

Software Sub-components Patches Location Comments

Fedora 20 httpdownloadfedoraprojectorgpubfedoralinuxreleases20Fedorax86_64isoFedora-20-x86_64-DVDiso

Standard Fedora 20 iso image

Data Plane Development Kit (DPDK)

DPDK poll mode driver sample apps (bundled)

httpdpdkorggitdpdkCommit id 99213f3827bad956d74e2259d06844012ba287a4

All sub-components in one zip file

Intelreg DPDK Accelerated vSwitch (OVDK)

dpdk-ovs qemu ovs-db vswitchd ovs_client (bundled)

httpsgithubcom01orgdpdk-ovsgitCommit id 6210bb0a6139b20283de115f87aa7a381b04670f

v120

Open vSwitch httpsgithubcomopenvswitchovsgitCommit id b35839f3855e3b812709c6ad1c9278f498aa9935

OpenStack Juno release To be deployed using DevStack(see following row)

Three patches downloaded as one tarball Then follow the instructions to deploy the Nodes

DevStack Patches for DevStack and Nova

httpsgithubcomopenstack-devdevstackgitCommit id d6f700db33aeab68916156a98971aef8cfa53a2eThen apply to that commit the patches inhttpsdownload01orgpacket-processingONPS12openstack_ovdkl02-907zip

Two patches downloaded as one tarball Then follow the instructions to deploy

OpenDaylight httpnexusopendaylightorgcontentrepositoriesopendaylightreleaseorgopendaylightintegrationdistribution-karaf021-Helium-SR1distribution-karaf-021-Helium-SR1targz

Intelreg ONP Server Release 12 Script

Helper scripts to setup SRT 12 using DevStack

httpsdownload01orgpacket-processingONPS12onps_server_1_2targz

BNG DPPD Broadband Network Gateway DPDK Performance

https01orgintel-data-plane- performance-demonstratorsdppd-bng- v013zip

PktGen Software Network Package Generator

httpsgithubcomPktgenPktgen-DPDKgitcommit id 5e8633c99e9771467dc26b64a4ff232c7e9fba2a

BNG Helper scripts

Intelreg ONP for Server Configuration Scripts for vBNG

https01orgsitesdefaultfilespagevbng-scriptstgz

Suricata Package from Fedora 20 yum install suricata


50 Installation and Configuration Guide

This section describes the installation and configuration instructions to prepare the controller and compute nodes

51 Instructions Common to Compute and Controller Nodes

This section describes how to prepare both the controller and compute nodes with the right BIOS settings and operating system installation. The preferred operating system is Fedora 20, although it should be relatively straightforward to adapt this solutions guide to other Linux distributions.

511 BIOS Settings
Table 5-1 BIOS Settings

Configuration Setting for Controller Node Setting for Compute Node

Enhanced Intel SpeedStep Enabled Disabled

Processor C3 Disabled Disabled

Processor C6 Disabled Disabled

Intelreg Virtualization Technology for Directed IO (Intelreg Vt-d) Disabled Enabled(OpenStack Numa Placement only)

Intel Hyper-Threading Technology (HTT) Enabled Disabled

MLC Streamer Enabled Enabled

MLC Spatial Prefetcher Enabled Enabled

DCU Instruction Prefetcher Enabled Enabled

Direct Cache Access (DCA) Enabled Enabled

CPU Power and Performance Policy Performance Performance

Intel Turbo boost Enabled Off

Memory RAS and Performance Configuration -gt Numa Optimized Enabled Enabled


512 Operating System Installation and Configuration
Following are some generic instructions for installing and configuring the operating system. Other installation methods, such as network installation, PXE boot installation, or USB key installation, are not described in this solutions guide.

5121 Getting the Fedora 20 DVD

1 Download the 64-bit Fedora 20 DVD (not Fedora 20 Live Media) from the following site

httpfedoraprojectorgenget-fedoraformats

or from direct URL

httpdownloadfedoraprojectorgpubfedoralinuxreleases20Fedorax86_64isoFedora-20-x86_64-DVDiso

2 Burn the ISO file to DVD and create an installation disk

5122 Fedora 20 Installation

Use the DVD to install Fedora 20. During the installation, click Software Selection, then choose the following:

1 C Development Tool and Libraries

2 Development Tools

Also create a user named stack and check the box "Make this user administrator" during the installation. The stack user is used in the OpenStack installation.
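If the administrator box was not checked during installation, the stack user can also be created and given administrator rights afterwards; a minimal sketch (assuming a root shell on Fedora) follows:

useradd -m stack
passwd stack
usermod -aG wheel stack    # members of the wheel group can use sudo on Fedora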

Note: Please make sure to download and use the onps_server_1_2.tar.gz tarball. Start with the README file; you'll get instructions on how to use Intel's scripts to automate most of the installation steps described in this section, which saves time. When using Intel's scripts, you can jump to Section 54 after installing OpenDaylight based on the instructions described in Section 621.

5123 Additional Packages Installation and Upgrade

Some packages are not installed with the standard Fedora 20 installation but are required by Intelreg Open Network Platform Software (ONPS) components These packages should be installed by the user

git ntp patch socat python-passlib libxslt-devel libffi-devel fuse-devel gluster python-cliff

ONPS supports Fedora kernel 3.15.6, which is newer than the native Fedora 20 kernel 3.11.10. To upgrade to 3.15.6, follow these steps:

1 Download kernel packages

wget https://kojipkgs.fedoraproject.org/packages/kernel/3.15.6/200.fc20/x86_64/kernel-3.15.6-200.fc20.x86_64.rpm

wget https://kojipkgs.fedoraproject.org/packages/kernel/3.15.6/200.fc20/x86_64/kernel-devel-3.15.6-200.fc20.x86_64.rpm

wget https://kojipkgs.fedoraproject.org/packages/kernel/3.15.6/200.fc20/x86_64/kernel-modules-extra-3.15.6-200.fc20.x86_64.rpm


2 Install kernel packages

rpm -i kernel-3.15.6-200.fc20.x86_64.rpm
rpm -i kernel-devel-3.15.6-200.fc20.x86_64.rpm
rpm -i kernel-modules-extra-3.15.6-200.fc20.x86_64.rpm

3 Reboot the system to allow booting into the 3.15.6 kernel

Note ONPS depends on libraries provided by your Linux distribution As such it is recommended that you regularly update your Linux distribution with the latest bug fixes and security patches to reduce the risk of security vulnerabilities in your systems

After installing the required packages the operating system should be updated with the following command

yum update -y

This command upgrades to the latest kernel that Fedora supports. To keep the 3.15.6 kernel, the yum configuration file needs to be modified with the following command before running yum update:

echo exclude=kernel >> /etc/yum.conf

After the update completes the system needs to be rebooted

5124 Disable and Enable Services

For OpenStack the following services were disabled selinux firewall and NetworkManager Run the following commands

sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
systemctl disable firewalld.service
systemctl disable NetworkManager.service

The following services should be enabled ntp sshd and network Run the following commands

systemctl enable ntpd.service
systemctl enable ntpdate.service
systemctl enable sshd.service
chkconfig network on

It is important to keep the timing synchronized between all nodes It is also necessary to use a known NTP server for all nodes Users can edit etcntpconf to add a new server and remove default servers The following example replaces a default NTP server with a local NTP server 100012 and comments out other default servers

sed -i 's/server 0.fedora.pool.ntp.org iburst/server 100012/g' /etc/ntp.conf
sed -i 's/server 1.fedora.pool.ntp.org iburst/#server 1.fedora.pool.ntp.org iburst/g' /etc/ntp.conf
sed -i 's/server 2.fedora.pool.ntp.org iburst/#server 2.fedora.pool.ntp.org iburst/g' /etc/ntp.conf
sed -i 's/server 3.fedora.pool.ntp.org iburst/#server 3.fedora.pool.ntp.org iburst/g' /etc/ntp.conf
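After editing /etc/ntp.conf, a quick way to confirm that time synchronization is working (a hedged example; ntpq is part of the standard ntp package) is:

systemctl restart ntpd.service
ntpq -p    # the local NTP server added above should appear in the peer list with a non-zero reach value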


52 Controller Node Setup
This section describes the controller node setup. It is assumed that the user successfully followed the operating system installation and configuration sections.

Note: Make sure to download and use the onps_server_1_2.tar.gz tarball. Start with the README file; you'll get instructions on how to use Intel's scripts to automate most of the installation steps described in this section, which saves time.

521 OpenStack (Juno)
This section documents features and limitations that are supported with the Intel® DPDK Accelerated vSwitch and OpenStack Juno.

5211 Network Requirements

General

At least two networks are required to build OpenStack infrastructure in a lab environment One network is used to connect all nodes for OpenStack management (management network) and the other one is a private network exclusively for an OpenStack internal connection (tenant network) between instances (or virtual machines)

One additional network is required for Internet connectivity as installing OpenStack requires pulling packages from various sourcesrepositories on the Internet

Some users might want to have Internet andor external connectivity for OpenStack instances (virtual machines) In this case an optional network can be used

The assumption is that the target OpenStack infrastructure contains multiple nodes: one controller node and one or more compute nodes.

Network Configuration Example

The following is an example of how to configure networks for OpenStack infrastructure The example uses four network interfaces as follows

bull ens2f1 For Internet network - Used to pull all necessary packagespatches from repositories on the Internet configured to obtain a DHCP address

bull ens2f0 For Management network - Used to connect all nodes for OpenStack management configured to use network 10110016

bull p1p1 For Tenant network - Used for OpenStack internal connections for virtual machines configured with no IP address

bull p1p2 For Optional External network - Used for virtual machine Internetexternal connectivity configured with no IP address This interface is only in the Controller node if external network is configured For Compute node this interface is not needed

Note that among these interfaces interface for virtual network (in this example p1p1) must be an 82599 port because it is used for DPDK and Intelreg DPDK Accelerated vSwitch Also note that a static IP address should be used for interface of management network

In Fedora 20 the network configuration files are located at

etcsysconfignetwork-scripts


To configure a network on the host system edit the following network configuration files

ifcfg-ens2f1:
DEVICE=ens2f1
TYPE=Ethernet
ONBOOT=yes
BOOTPROTO=dhcp

ifcfg-ens2f0:
DEVICE=ens2f0
TYPE=Ethernet
ONBOOT=yes
BOOTPROTO=static
IPADDR=10.11.12.11
NETMASK=255.255.0.0

ifcfg-p1p1:
DEVICE=p1p1
TYPE=Ethernet
ONBOOT=yes
BOOTPROTO=none

ifcfg-p1p2:
DEVICE=p1p2
TYPE=Ethernet
ONBOOT=yes
BOOTPROTO=none

Note Do not configure the IP address for p1p1 (10 Gbs interface) otherwise DPDK does not work when binding the driver during OpenStack Neutron installation

Note 10.11.12.11 and 255.255.0.0 are the static IP address and netmask for the management network. It is necessary to have a static IP address on this subnet. The IP address 10.11.12.11 is just an example.

5212 Storage Requirements

By default DevStack uses block storage (Cinder) with a volume group named stack-volumes. If not specified, stack-volumes is created with 10 GB of space from a local file system. Note that stack-volumes is the name of the volume group, not a single volume.

The following example shows how to use spare local disks devsdb and devsdc to form stack-volumes on a controller node by running the following commands

pvcreate /dev/sdb
pvcreate /dev/sdc
vgcreate stack-volumes /dev/sdb /dev/sdc
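To confirm that the volume group was created as expected, a simple verification sketch using the standard LVM tools:

pvs                    # both /dev/sdb and /dev/sdc should be listed as physical volumes
vgs stack-volumes      # the stack-volumes volume group should show the combined size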

5213 OpenStack Installation Procedures

General

DevStack is used to deploy OpenStack in this example The following procedure uses an actual example of an installation performed in an Intel test lab consisting of one controller node (controller) and one compute node (compute)

Controller Node Installation Procedures

The following example uses a host for controller node installation with the following

bull Hostname sdnlab-k01

bull Internet network IP address Obtained from DHCP server


bull OpenStack Management IP address 1011121

bull Userpassword stackstack

Root User Actions

Login as su or root user and perform the following

1 Add stack user to sudoer list

echo "stack ALL=(ALL) NOPASSWD: ALL" >> /etc/sudoers

2 Edit /etc/libvirt/qemu.conf, adding or modifying the following lines:

cgroup_controllers = [ "cpu", "devices", "memory", "blkio", "cpuset", "cpuacct" ]

cgroup_device_acl = ["/dev/null", "/dev/full", "/dev/zero", "/dev/random", "/dev/urandom", "/dev/ptmx", "/dev/kvm", "/dev/kqemu", "/dev/rtc", "/dev/hpet", "/dev/net/tun", "/mnt/huge", "/dev/vhost-net"]

hugetlbfs_mount = "/mnt/huge"

3 Restart the libvirt service and make sure libvirtd is active:

systemctl restart libvirtd.service
systemctl status libvirtd.service

Stack User Actions

1 Login as a stack user

2 Configure the appropriate proxies (yum, http, https, and git) for package installation and make sure these proxies are functional. Note that on the controller node, localhost and its IP address should be included in the no_proxy setup (for example, export no_proxy=localhost,10.11.12.1). A hedged sketch of typical proxy settings follows.
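The exact proxy configuration depends on the lab environment; the following is only a sketch, with proxy.example.com:port and <controller-mgmt-ip> as placeholders:

export http_proxy=http://proxy.example.com:port
export https_proxy=http://proxy.example.com:port
export no_proxy=localhost,<controller-mgmt-ip>
git config --global http.proxy $http_proxy
# yum reads its proxy from /etc/yum.conf (editing it requires root):
# proxy=http://proxy.example.com:port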

3 Intelreg DPDK Accelerated vSwitch patches for OpenStack

The tar file openstack_ovdkl02-907zip contains necessary patches for OpenStack Currently it is not native to the OpenStack The file can be downloaded from

https01orgsitesdefaultfilespageopenstack_ovdkl02-907zip

Place the file in the /home/stack directory and unzip it. Three patch files, devstack.patch, nova.patch and neutron.patch, will be present after unzipping:

cd /home/stack
wget https01orgsitesdefaultfilespageopenstack_ovdkl02-907zip
unzip openstack_ovdkl02-907zip

4 Download DevStack source

git clone https://github.com/openstack-dev/devstack.git

5 Check out DevStack with Intelreg DPDK Accelerated vSwitch and patch

cd /home/stack/devstack
git checkout d6f700db33aeab68916156a98971aef8cfa53a2e
patch -p1 < /home/stack/devstack.patch


6 Download and patch Nova and Neutron

sudo mkdir /opt/stack
sudo chown stack:stack /opt/stack
cd /opt/stack
git clone https://github.com/openstack/nova.git
git clone https://github.com/openstack/neutron.git
cd /opt/stack/nova
git checkout b7738bfb6c2f271d047e8f20c0b74ef647367111
patch -p1 < /home/stack/nova.patch

7 Create localconf file in homestackdevstack

8 Pay attention to the following in the localconf file

a Use Rabbit for messaging services (Rabbit is on by default) In the past Fedora only supported QPID for OpenStack Now it only supports Rabbit

b Explicitly disable Nova compute service on the controller This is because by default Nova compute service is enabled

disable_service n-cpu

c To use Open vSwitch specify in configuration for ML2 plug-in

Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch

d Explicitly disable tenant tunneling and enable tenant VLAN This is because by default tunneling is used

ENABLE_TENANT_TUNNELS=False
ENABLE_TENANT_VLANS=True

e A sample localconf files for controller node is as follows

Controller node:
[[local|localrc]]

FORCE=yes
ADMIN_PASSWORD=password
MYSQL_PASSWORD=password
DATABASE_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=no-token-password
HORIZON_PASSWORD=password
RABBIT_PASSWORD=password

disable_service n-net
disable_service n-cpu
enable_service q-svc
enable_service q-agt
enable_service q-dhcp
enable_service q-l3
enable_service q-meta
enable_service neutron
enable_service horizon

DEST=/opt/stack
LOGFILE=$DEST/stack.sh.log
SCREEN_LOGDIR=$DEST/screen
SYSLOG=True
LOGDAYS=1

HOST_IP_IFACE=ens2f0
PUBLIC_INTERFACE=p1p2
VLAN_INTERFACE=p1p1
FLAT_INTERFACE=p1p1

Q_AGENT=openvswitch
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch
Q_ML2_PLUGIN_TYPE_DRIVERS=vlan,flat,local

ENABLE_TENANT_TUNNELS=False
ENABLE_TENANT_VLANS=True
PHYSICAL_NETWORK=physnet1
ML2_VLAN_RANGES=physnet1:1000:1010
OVS_PHYSICAL_BRIDGE=br-p1p1
MULTI_HOST=True

[[post-config|$NOVA_CONF]]
[DEFAULT]
firewall_driver=nova.virt.firewall.NoopFirewallDriver
novncproxy_host=0.0.0.0
novncproxy_port=6080

9 Install DevStack

cd /home/stack/devstack
./stack.sh

10 For a successful installation the following shows at the end of screen output

stacksh completed in XXX seconds

where XXX is the number of seconds

11 For controller node only mdash Add physical port(s) to the bridge(s) created by the DevStack installation The following example can be used to configure the two bridges br-p1p1 (for virtual network) and br-ex (for external network)

sudo ovs-vsctl add-port br-p1p1 p1p1
sudo ovs-vsctl add-port br-ex p1p2

12 Make sure proper VLANs are created in the switch connecting physical port p1p1 For example the previous localconf specifies VLAN range of 1000-1010 therefore matching VLANs 1000 to 1010 should be configured in the switch


53 Compute Node Setup
This section describes how to complete the setup of the compute nodes. It is assumed that the user has successfully completed the BIOS settings and operating system installation and configuration sections.

Note: Please make sure to download and use the onps_server_1_2.tar.gz tarball. Start with the README file; you'll get instructions on how to use Intel's scripts to automate most of the installation steps described in this section, which saves time.

531 Host Configuration

5311 Using DevStack to Deploy vSwitch and OpenStack Components

General

Deploying OpenStack and Intelreg DPDK Accelerated vSwitch using DevStack on a compute node follows the same procedures as on the controller node Differences include

bull Required services are nova compute neutron agent and Rabbit

bull Intelreg DPDK Accelerated vSwitch is used in place of Open vSwitch for neutron agent

Compute Node Installation Example

The following example uses a host for compute node installation with the following

bull Hostname sdnlab-k02

bull Lab network IP address Obtained from DHCP server

bull OpenStack Management IP address 1011122

bull Userpassword stackstack

Note the following

bull No_proxy setup Localhost and its IP address should be included in the no_proxy setup In addition hostname and IP address of the controller node should also be included For example

export no_proxy=localhost1011122sdnlab-k011011121

bull Differences in the localconf file

mdash The service host is the controller as well as other OpenStack servers such as MySQL Rabbit Keystone and Image Therefore they should be spelled out Using the controller node example in the previous section the service host and its IP address should be

SERVICE_HOST_NAME=sdnlab-k01
SERVICE_HOST=10.11.12.1

mdash The only OpenStack services required in compute nodes are messaging nova compute and neutron agent so the localconf might look like

disable_all_services
enable_service rabbit
enable_service n-cpu
enable_service q-agt


mdash The user has option to use ovdk or openvswitch for neutron agent

Q_AGENT=ovdk

or

Q_AGENT=openvswitch

Note For openvswitch, the user can specify regular or accelerated Open vSwitch (accelerated OVS). If accelerated OVS is used, the following setting should be added:

OVS_DATAPATH_TYPE=netdev

Note If both are specified in the same localconf file the later one overwrites the previous one

mdash For the OVDK and accelerated OVS huge pages setting specify number of huge pages to be allocated and mounting point (default is mnthuge)

OVDK_NUM_HUGEPAGES=8192

or

OVS_NUM_HUGEPAGES=8192

mdash For this version Intel uses specific versions for OVDK or Accelerated OVS from their respective repositories Specify the following in the localconf file if OVDK or accelerated OVS is used

OVDK_GIT_TAG=6210bb0a6139b20283de115f87aa7a381b04670f
OVS_GIT_TAG=b35839f3855e3b812709c6ad1c9278f498aa9935

mdash Binding the physical port to the bridge is through the following line in localconf For example to bind port p1p1 to bridge br-p1p1 use

OVS_PHYSICAL_BRIDGE=br-p1p1

- A sample localconf file for a compute node with the ovdk agent follows:

Compute node:
[[local|localrc]]

FORCE=yes
MULTI_HOST=True

HOST_NAME=$(hostname)
HOST_IP=10.11.12.2
HOST_IP_IFACE=ens2f0
SERVICE_HOST_NAME=10.11.12.1
SERVICE_HOST=10.11.12.1

MYSQL_HOST=$SERVICE_HOST
RABBIT_HOST=$SERVICE_HOST

GLANCE_HOST=$SERVICE_HOST
GLANCE_HOSTPORT=$SERVICE_HOST:9292
KEYSTONE_AUTH_HOST=$SERVICE_HOST
KEYSTONE_SERVICE_HOST=$SERVICE_HOST

ADMIN_PASSWORD=password
MYSQL_PASSWORD=password
DATABASE_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=no-token-password
HORIZON_PASSWORD=password
RABBIT_PASSWORD=password

disable_all_services
enable_service rabbit
enable_service n-cpu
enable_service q-agt

DEST=/opt/stack
LOGFILE=$DEST/stack.sh.log
SCREEN_LOGDIR=$DEST/screen
SYSLOG=True
LOGDAYS=1

Q_AGENT=ovdk
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch
Q_ML2_PLUGIN_TYPE_DRIVERS=vlan
OVDK_NUM_HUGEPAGES=8192
OVDK_GIT_TAG=6210bb0a6139b20283de115f87aa7a381b04670f

ENABLE_TENANT_TUNNELS=False
ENABLE_TENANT_VLANS=True
ML2_VLAN_RANGES=physnet1:1000:1010
PHYSICAL_NETWORK=physnet1
OVS_PHYSICAL_BRIDGE=br-p1p1

[[post-config|$NOVA_CONF]]
[DEFAULT]
firewall_driver=nova.virt.firewall.NoopFirewallDriver
vnc_enabled=True
vncserver_listen=0.0.0.0
vncserver_proxyclient_address=$HOST_IP

- A sample localconf file for a compute node with the accelerated OVS agent follows:

Compute node:
[[local|localrc]]

FORCE=yes
MULTI_HOST=True

HOST_NAME=$(hostname)
HOST_IP=10.11.12.2
HOST_IP_IFACE=ens2f0

SERVICE_HOST_NAME=sdnlab-k01
SERVICE_HOST=10.11.12.1

MYSQL_HOST=$SERVICE_HOST
RABBIT_HOST=$SERVICE_HOST

GLANCE_HOST=$SERVICE_HOST
GLANCE_HOSTPORT=$SERVICE_HOST:9292
KEYSTONE_AUTH_HOST=$SERVICE_HOST
KEYSTONE_SERVICE_HOST=$SERVICE_HOST

ADMIN_PASSWORD=password
MYSQL_PASSWORD=password
DATABASE_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=no-token-password
HORIZON_PASSWORD=password
RABBIT_PASSWORD=password

disable_all_services
enable_service rabbit
enable_service n-cpu
enable_service q-agt

DEST=/opt/stack
LOGFILE=$DEST/stack.sh.log
SCREEN_LOGDIR=$DEST/screen
SYSLOG=True
LOGDAYS=1

Q_AGENT=openvswitch
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch
Q_ML2_PLUGIN_TYPE_DRIVERS=vlan
OVS_NUM_HUGEPAGES=8192
OVS_DATAPATH_TYPE=netdev
OVS_GIT_TAG=b35839f3855e3b812709c6ad1c9278f498aa9935

ENABLE_TENANT_TUNNELS=False
ENABLE_TENANT_VLANS=True
ML2_VLAN_RANGES=physnet1:1000:1010
PHYSICAL_NETWORK=physnet1
OVS_PHYSICAL_BRIDGE=br-p1p1

[[post-config|$NOVA_CONF]]
[DEFAULT]
firewall_driver=nova.virt.firewall.NoopFirewallDriver
vnc_enabled=True
vncserver_listen=0.0.0.0
vncserver_proxyclient_address=$HOST_IP

54 vIPS
The vIPS used is Suricata, which should be installed in a VM as an rpm package as previously described. To configure it to run in inline mode (IPS), use the following:

1 Turn on IP forwarding

sysctl -w netipv4ip_forward=1

2 Mangle all traffic from one vPort to the other using a netfilter queue

iptables -I FORWARD -i eth1 -o eth2 -j NFQUEUE
iptables -I FORWARD -i eth2 -o eth1 -j NFQUEUE

3 Have Suricata run in inline mode using the netfilter queue

suricata -c etcsuricatasuricatayaml -q 0

4 Enable ARP proxying

echo 1 > /proc/sys/net/ipv4/conf/eth1/proxy_arp
echo 1 > /proc/sys/net/ipv4/conf/eth2/proxy_arp

541 Network Configuration for non-vIPS Guests
1 Turn on IP forwarding

sysctl -w netipv4ip_forward=1

2 In the source add the route to the sink

route add -net 192168200024 eth1

3 At the sink add the route to the source

route add -net 192168100024 eth1


60 Testing the Setup

This section describes how to bring up the VMs in a compute node, connect them to the virtual network(s), and verify the functionality.

Note: Currently it is not possible to have more than one virtual network in a multi-compute-node setup, although it is possible in a single-compute-node setup.

61 Preparation with OpenStack

611 Deploying Virtual Machines

6111 Default Settings

OpenStack comes with the following default settings

bull Tenant (Project) admin demo

bull Network

mdash Private network (virtual network) 1000024

mdash Public network (external network) 172244024

bull Image cirros-031-x86_64

bull Flavor nano micro tiny small medium large xlarge

To deploy new instances (VMs) with different setups (such as a different VM image flavor or network) users must create their own See below for details of how to create them

To access the OpenStack dashboard use a web browser (Firefox Internet Explorer or others) and the controllers IP address (management network) For example

http1011121

Login information is defined in the localconf file In the examples that follow password is the password for both admin and demo users


6112 Customer Settings

The following examples describe how to create a custom VM image flavor and aggregateavailability zone using OpenStack commands The examples assume the IP address of the controller is 1011121

1 Create a credential file admin-cred for admin user The file contains the following lines

export OS_USERNAME=admin
export OS_TENANT_NAME=admin
export OS_PASSWORD=password
export OS_AUTH_URL=http://10.11.12.1:35357/v2.0

2 Source admin-cred to the shell environment for actions of creating glance image aggregateavailability zone and flavor

source admin-cred

3 Create an OpenStack glance image A VM image file should be ready in a location accessible by OpenStack

glance image-create --name ltimage-name-to-creategt --is-public=true --container-format=bare --disk-format=ltformatgt --file=ltimage-file-path-namegt

The following example shows the image file fedora20-x86_64-basicqcow2 is located in a NFS share and mounted at mntnfsopenstackimages to the controller host The following command creates a glance image named fedora-basic with qcow2 format for public use (such as any tenant can use this glance image)

glance image-create --name fedora-basic --is-public=true --container-format=bare --disk-format=qcow2 --file=mntnfsopenstackimagesfedora20-x86_64-basicqcow2

4 Create host aggregate and availability zone

First find out the available hypervisors and then use the information for creating aggregateavailability zone

nova hypervisor-list
nova aggregate-create <aggregate-name> <zone-name>
nova aggregate-add-host <aggregate-name> <hypervisor-name>

The following example creates an aggregate named aggr-g06 with one availability zone named zone-g06 and the aggregate contains one hypervisor named sdnlab-g06

nova aggregate-create aggr-g06 zone-g06
nova aggregate-add-host aggr-g06 sdnlab-g06

5 Create flavor Flavor is a virtual hardware configuration for the VMs it defines the number of virtual CPUs size of virtual memory and disk space among others

The following command creates a flavor named onps-flavor with an ID of 1001, 1024 MB virtual memory, 4 GB virtual disk space, and 1 virtual CPU:

nova flavor-create onps-flavor 1001 1024 4 1


6113 Example: VM Deployment

The following example describes how to use a customer VM image flavor and aggregate to launch a VM for a demo Tenant using OpenStack commands Again the example assumes the IP address of the controller is 1011121

1 Create a credential file demo-cred for a demo user The file contains the following lines

export OS_USERNAME=demo
export OS_TENANT_NAME=demo
export OS_PASSWORD=password
export OS_AUTH_URL=http://10.11.12.1:35357/v2.0

2 Source demo-cred to the shell environment for actions of creating tenant network and instance (VM)

source demo-cred

3 Create network for tenant demo Take the following steps

a Get tenant demo

keystone tenant-list | grep -Fw demo

The following creates a network with a name of net-demo for tenant with ID 10618268adb64f17b266fd8fb83c960d

neutron net-create --tenant-id 10618268adb64f17b266fd8fb83c960d net-demo

b Create subnet

neutron subnet-create --tenant-id <demo-tenant-id> --name <subnet_name> <network-name> <net-ip-range>

The following creates a subnet named sub-demo with CIDR 192.168.2.0/24 for network net-demo:

neutron subnet-create --tenant-id 10618268adb64f17b266fd8fb83c960d --name sub-demo net-demo 192.168.2.0/24

4 Create instance (VM) for tenant demo Take the following steps

a Get the name andor ID of the image flavor and availability zone to be used for creating instance

glance image-list
nova flavor-list
nova aggregate-list
neutron net-list

b Launch an instance (VM) using information obtained from previous step

nova boot --image <image-id> --flavor <flavor-id> --availability-zone <zone-name> --nic net-id=<network-id> <instance-name>
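For example, using the fedora-basic image, the onps-flavor flavor, the zone-g06 availability zone, and the net-demo network created in the previous steps (the network ID must be taken from the neutron net-list output; vm-demo1 is just an example instance name):

nova boot --image fedora-basic --flavor onps-flavor --availability-zone zone-g06 --nic net-id=<net-demo-id> vm-demo1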

c The new VM should be up and running in a few minutes

5 Log into the OpenStack dashboard using the demo user credential click Instances under Project in the left pane the new VM should show in the right pane Click instance name to open Instance Details view then click Console in the top menu to access the VM as follows


6114 Local vIPS

Configuration

1 OpenStack brings up the VMs and connects them to the vSwitch

2 IP addresses of the VMs get configured using the DHCP server VM1 belongs to one subnet and VM3 to a different one VM2 has ports on both subnets

3 Flows get programmed to the vSwitch by the OpenDaylight controller (Section 62)

Data Path (Numbers Matching Red Circles)

1 VM1 sends a flow to VM3 through the vSwitch

2 The vSwitch forwards the flow to the first vPort of VM2 (active IPS)

3 The IPS receives the flow inspects it and (if not malicious) sends it out through its second vPort

4 The vSwitch forwards it to VM3

Figure 6-1 Local vIPS


6115 Remote vIPS

Configuration

1 OpenStack brings up the VMs and connects them to the vSwitch

2 The IP addresses of the VMs get configured using the DHCP server

Data Path (Numbers Matching Red Circles)

1 VM1 sends a flow to VM3 through the vSwitch inside compute node 1

2 The vSwitch forwards the flow out of the first 82599 port to the first 82599 port of compute node 2

3 The vSwitch of compute node 2 forwards the flow to the first port of the vHost where the traffic gets consumed by VM1

4 The IPS receives the flow inspects it and (provided it is not malicious) sends it out through its second port of the vHost into the vSwitch of compute node 2

5 The vSwitch forwards the flow out of the second 82599 port of compute node 2 into the second port of the 82599 in compute node 1

6 The vSwitch of compute node 1 forwards the flow into the port of the vHost of VM3 where the flow gets terminated

Figure 6-2 Remote vIPS


612 Non-uniform Memory Access (Numa) Placement and SR-IOV Pass-through for OpenStack

NUMA support was introduced as a new feature in the OpenStack Juno release. NUMA placement enables an OpenStack administrator to pin guest systems to particular NUMA nodes for optimization. With an SR-IOV enabled network interface card, each SR-IOV port is associated with a Virtual Function (VF). OpenStack SR-IOV pass-through enables a guest to access a VF directly.

6121 Prepare Compute Node for SR-IOV Pass-through

To enable the previous features follow these steps to configure compute node

1 The server hardware must support IOMMU (Intel VT-d). To check whether IOMMU is supported, run the following command; the output should show IOMMU entries:

dmesg | grep -e IOMMU

Note IOMMU can be enabled/disabled through a BIOS setting under Advanced and then Processor

2 Enable kernel IOMMU in grub For Fedora 20 run commands

sed -i 's/rhgb quiet/rhgb quiet intel_iommu=on/g' /etc/default/grub
grub2-mkconfig -o /boot/grub2/grub.cfg

3 Install necessary packages

yum install -y yajl-devel device-mapper-devel libpciaccess-devel libnl-devel dbus-devel numactl-devel python-devel

4 Install Libvirt to v128 or newer The following example uses v129

systemctl stop libvirtd

yum remove libvirt
yum remove libvirtd

wget http://libvirt.org/sources/libvirt-1.2.9.tar.gz
tar zxvf libvirt-1.2.9.tar.gz

cd libvirt-1.2.9
./autogen.sh --system --with-dbus
make
make install

systemctl start libvirtd

Make sure libvirtd is running v129

libvirtd --version

5 Install libvirt-python Example below uses v129 to match libvirt version

yum remove libvirt-python

wget https://pypi.python.org/packages/source/l/libvirt-python/libvirt-python-1.2.9.tar.gz
tar zxvf libvirt-python-1.2.9.tar.gz

cd libvirt-python-1.2.9
python setup.py install


6 Modify /etc/libvirt/qemu.conf: add

/dev/vfio/vfio

to the

cgroup_device_acl list

An example follows:

cgroup_device_acl = ["/dev/null", "/dev/full", "/dev/zero", "/dev/random", "/dev/urandom", "/dev/ptmx", "/dev/kvm", "/dev/kqemu", "/dev/rtc", "/dev/hpet", "/dev/net/tun", "/dev/vfio/vfio"]

7 Enable the SR-IOV virtual function for an 82599 interface The following example enables 2 VFs for interface p1p1

echo 2 > /sys/class/net/p1p1/device/sriov_numvfs

To check that virtual functions are enabled

lspci -nn | grep 82599

The screen output should display the physical function and two virtual functions

6122 Devstack Configurations

In the following text the example uses a controller with IP address 1011121 and compute 1011124 PCI device vendor ID (8086) and product ID of the 82599 can be obtained from output (10fb for physical function and 10ed for VF)

lspci -nn | grep 82599

On Controller node

1 Edit Controller localconf Note that the same localconf file of Section 5213 is used here but adding the following

Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch,sriovnicswitch

[[post-config|$NOVA_CONF]]
[DEFAULT]
scheduler_default_filters=RamFilter,ComputeFilter,AvailabilityZoneFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,PciPassthroughFilter,NUMATopologyFilter
pci_alias={"name":"niantic","product_id":"10ed","vendor_id":"8086"}

[[post-config|$Q_PLUGIN_CONF_FILE]]
[ml2_sriov]
supported_pci_vendor_devs = 8086:10fb 8086:10ed

2 Run stacksh

On Compute node

1 Edit /opt/stack/nova/requirements.txt and add "libvirt-python>=1.2.8":

echo "libvirt-python>=1.2.8" >> /opt/stack/nova/requirements.txt

2 Edit Compute localconf for accelerated OVS Note that the same localconf file of Section 5311 is used here


3 Add the following:

[[post-config|$NOVA_CONF]]
[DEFAULT]
pci_passthrough_whitelist={"address":"0000:08:00.0","vendor_id":"8086","physical_network":"physnet1"}
pci_passthrough_whitelist={"address":"0000:08:10.0","vendor_id":"8086","physical_network":"physnet1"}
pci_passthrough_whitelist={"address":"0000:08:10.2","vendor_id":"8086","physical_network":"physnet1"}

4 Remove (or comment out) the following. Note that currently SR-IOV pass-through is only supported with standard OVS:

OVS_NUM_HUGEPAGES=8192
OVS_DATAPATH_TYPE=netdev
OVS_GIT_TAG=b35839f3855e3b812709c6ad1c9278f498aa9935

Run stacksh for both controller and compute nodes to complete the Devstack installation

6123 Create VM with Numa Placement and SR-IOV

1 After stacking is successful on both the controller and compute nodes, verify the PCI pass-through device(s) are in the OpenStack database:

mysql -uroot -ppassword -h 10.11.12.1 nova -e 'select * from pci_devices'

2 The output should show entry(ies) of PCI device(s) similar to the following

| 2014-11-18 194114 | NULL | NULL | 0 | 1 | 3 | 000008100 | 10ed | 8086 | type-VF | pci_0000_08_10_0 | label_8086_10ed | available | phys_function 000008000 | NULL | NULL | 0 |

3 Next to create a flavor for example

nova flavor-create numa-flavor 1001 1024 4 1

where

flavor name = numa-flavor
id = 1001
virtual memory = 1024 MB
virtual disk size = 4 GB
number of virtual CPUs = 1

4 Modify flavor for numa placement with PCI pass-through

nova flavor-key 1001 set pci_passthrough:alias=niantic:1 hw:numa_nodes=1 hw:numa_cpus.0=0 hw:numa_mem.0=1024

5 To show detailed information of the flavor

nova flavor-show 1001

6 Create a VM named numa-vm1 with the flavor numa-flavor under the default project demo. Note that the following example assumes an image fedora-basic and an availability zone zone-04 are already in place (see Section 6112) and that private is the default network for the demo project:

nova boot --image <image-id> --flavor <flavor-id> --availability-zone <zone-name> --nic net-id=<network-id> numa-vm1

where numa-vm1 is the name of instance of the VM to be booted


Access the VM from the OpenStack Horizon dashboard; the new VM shows two virtual network interfaces. The interface with an SR-IOV VF should show a name of ensX, where X is a number (for example, ens5). If a DHCP server is available for the physical interface (p1p1 in this example), the VF gets an IP address automatically; otherwise, users can assign an IP address to the interface the same way as for a standard network interface.

To verify network connectivity through a VF, users can set up two compute hosts and create a VM on each node. After obtaining IP addresses, the VMs should communicate with each other as over a normal network; a minimal sketch follows.
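A hedged example of such a check, assuming the VF interface appears as ens5 inside the VM and the two VMs use addresses on the same subnet (both the interface name and the addresses are examples only):

ip addr add 192.168.1.11/24 dev ens5
ip link set ens5 up
ping -c 4 192.168.1.12    # address assigned to the VF interface of the VM on the other compute host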

62 Using OpenDaylight
This section describes how to download, install, and set up an OpenDaylight controller.

621 Preparing the OpenDaylight Controller
1 Download the pre-built OpenDaylight Helium-SR1 distribution:

wget httpnexusopendaylightorgcontentrepositoriesopendaylightreleaseorgopendaylightintegrationdistribution-karaf021-Helium-SR1distribution-karaf-021-Helium-SR1targz

2 Extract the archive and cd into it

tar xf distribution-karaf-0.2.1-Helium-SR1.tar.gz
cd distribution-karaf-0.2.1-Helium-SR1

3 Use the bin/karaf executable to start the Karaf shell, for example:
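./bin/karaf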


4 Install the required features

Karaf might take a long time to start, or feature installation might fail, if the host does not have network access; you'll need to set up the appropriate proxy settings.
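A typical feature set used for OpenStack/OVSDB integration on Helium is shown below as a sketch; the exact list of features required for a given deployment may differ, so verify it against the release documentation before installing:

feature:install odl-base-all odl-aaa-authn odl-restconf odl-nsf-all odl-adsal-northbound odl-mdsal-apidocs odl-ovsdb-openstack odl-ovsdb-northbound odl-dlux-core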

63 Border Network Gateway
This section describes how to install and run a Border Network Gateway on a compute node that is prepared as described in Section 51 and Section 53. The example interface names from those sections have been maintained here too. For simplicity, the BNG uses the handle_none configuration mode, which makes it work as an L2 forwarding engine. The BNG is more complex than this, and users who are interested in exploring more of its capabilities should read https://01.org/intel-data-plane-performance-demonstrators/quick-overview

The setup to test the functionality of the vBNG follows


631 Installation and Configuration Inside the VM
1 Execute the following command:

yum -y update

2 Disable SELinux

setenforce 0
vi /etc/selinux/config

and change it so that SELINUX=disabled

3 Disable the firewall

systemctl disable firewalld.service
reboot

4 Edit grub default configuration

vi /etc/default/grub

Add hugepages to it

... noirqbalance intel_idle.max_cstate=0 processor.max_cstate=0 ipv6.disable=1 default_hugepagesz=1G hugepagesz=1G hugepages=2 isolcpus=1,2,3,4

5 Rebuild grub config and reboot the system

grub2-mkconfig -o /boot/grub2/grub.cfg
reboot

6 Verify that hugepages are available in the VM

cat /proc/meminfo
HugePages_Total:    2
HugePages_Free:     2
Hugepagesize:    1048576 kB

7 Add the following to the end of the ~/.bashrc file:

---------------------------------------------
export RTE_SDK=/root/dpdk
export RTE_TARGET=x86_64-native-linuxapp-gcc
export OVS_DIR=/root/ovs

export RTE_UNBIND=$RTE_SDK/tools/dpdk_nic_bind.py
export DPDK_DIR=$RTE_SDK
export DPDK_BUILD=$DPDK_DIR/$RTE_TARGET
---------------------------------------------

8 Re-login or source that file

. ~/.bashrc

9 Install DPDK

git clone http://dpdk.org/git/dpdk
cd dpdk
git checkout v1.7.1
make install T=$RTE_TARGET
modprobe uio
insmod $RTE_SDK/$RTE_TARGET/kmod/igb_uio.ko


10 Check the PCI addresses of the 82599 cards

lspci | grep Network
00:04.0 Ethernet controller: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection (rev 01)
00:05.0 Ethernet controller: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection (rev 01)
00:06.0 Ethernet controller: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection (rev 01)
00:07.0 Ethernet controller: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection (rev 01)

11 Make sure that the correct PCI addresses are listed in the script bind_to_igb_uio.sh; a sketch of what the script should do follows.
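A minimal sketch of what bind_to_igb_uio.sh is expected to do, assuming the four PCI addresses reported in the previous step (adjust the addresses to what lspci reports in your VM):

$RTE_SDK/tools/dpdk_nic_bind.py -b igb_uio 0000:00:04.0 0000:00:05.0 0000:00:06.0 0000:00:07.0
$RTE_SDK/tools/dpdk_nic_bind.py --status    # the ports should now be listed under the DPDK-compatible (igb_uio) driver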

12 Download BNG packages

wget https://01.org/sites/default/files/downloads/intel-data-plane-performance-demonstrators/dppd-bng-v013.zip

13 Extract DPPD BNG sources

unzip dppd-bng-v013.zip

14 Build BNG DPPD application

yum -y install ncurses-devel
cd dppd-BNG-v013
make

15 Refer to Section 633, "Extra Preparations on the Compute Node," before running the BNG application in the VM inside the compute node

16 Make sure that the application starts

./build/dppd -f config/handle_none.cfg

The handle_none configuration passes all traffic straight through between ports, which is essentially similar to the L2 forwarding test. The config directory contains additional, more complex BNG configurations and Pktgen scripts. Additional BNG-specific workloads can be found in the dppd-BNG-v013/pktgen-scripts directory.

Following is a sample graphic of the BNG running in a VM with 2 ports


Exit the application by pressing ESC or CTRL-C

Refer to Section 632 regarding installation and running the software traffic generator

For the sanity check test, users can use the pktgen wrapper script onps_pktgen-64bytes-UDP-2ports.sh for running PktGen (on its dedicated server) in order to test the handle-none throughput for two physical and two virtual ports. You'll need to update the PKTGEN_DIR at the top of the file to point to the right directory, which is the following (referring to Section 632):

PKTGEN_DIR=homestackgitPktgen-DPDKpktgen-64bytessh

632 Installation and Configuration of the Back-to-Back Host (Packet Generator)

The back-to-back host can be any Intel® Xeon® processor-based system, or it can be any compute node that has been prepared using the instructions in Section 51 and Section 53. For simplicity, Intel assumes the latter is the case. Also assume that the git directory for the stack user is /home/stack/git.

1 In the git directory get the source from Github

git clone https://github.com/Pktgen/Pktgen-DPDK.git
cd Pktgen-DPDK

2 An extra package must be installed for Pktgen to compile correctly

yum -y install libpcap-devel

Pktgen comes with its own distribution of DPDK sources This bundled version of DPDK must be used Note that it contains some WindRiver specific helper libraries that are not in the default DPDK distribution which Pktgen depends on

3 The $RTE_TARGET variable must be set to a specific value Otherwise these libraries will not build

cd
vi ~/.bashrc

Add the following three lines to the end

export RTE_SDK=$HOME/Pktgen-DPDK/dpdk
export RTE_TARGET=x86_64-pktgen-linuxapp-gcc
export PKTGEN_DIR=$HOME/Pktgen-DPDK

4 Re-login or execute the following command

. ~/.bashrc

5 Build the basic DPDK libraries and extra helpers

cd $RTE_SDK
make install T=$RTE_TARGET

6 Build Pktgen

cd examples/pktgen
make

7 Adapt the dpdk_nic_bind.py invocation to the actual NICs in use so that both interfaces are bound to igb_uio and DPDK can use them. See the command that follows and the binding sketch after it:

./tools/dpdk_nic_bind.py --status
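For example, assuming the two test ports show up at the PCI addresses below (placeholders; use the addresses reported by --status on your system):

./tools/dpdk_nic_bind.py -b igb_uio 0000:06:00.0 0000:06:00.1
./tools/dpdk_nic_bind.py --status    # verify both ports are now bound to igb_uio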

8 Use onps_pktgen-64-bytes-UDP-2ports.sh from onps_server_1_2.tar.gz


9 Now run the script as root, after the compute node has been set up as in Section 633, the VM of the BNG has been prepared as in Section 631, and the BNG has been started inside the VM

633 Extra Preparations on the Compute Node
1 Do the following as the stack user:

cd /home/stack/devstack
vi local.conf

2 Comment out the following

PHYSICAL_NETWORK=physnet1
OVS_PHYSICAL_BRIDGE=br-p1p1

And at the same time add the following line right below the previous commented ones

OVS_BRIDGE_MAPPINGS=default:br-p1p1,physnet1:br-p1p2

3 Run again as stack user

./unstack.sh

./stack.sh

This causes both physical interfaces to come up and get bound to the DPDK Also a bridge is created on top of each of these interfaces

ovs-vsctl show
b52bd3ed-0f6c-45b9-ace1-846d901bed64
    Bridge "br-p1p1"
        Port "br-p1p1"
            Interface "br-p1p1"
                type: internal
        Port "p1p1"
            Interface "p1p1"
                type: dpdkphy
                options: {port="0"}
        Port "phy-br-p1p1"
            Interface "phy-br-p1p1"
                type: patch
                options: {peer="int-br-p1p1"}
    Bridge br-int
        fail_mode: secure
        Port "int-br-p1p2"
            Interface "int-br-p1p2"
                type: patch
                options: {peer="phy-br-p1p2"}
        Port "int-br-p1p1"
            Interface "int-br-p1p1"
                type: patch
                options: {peer="phy-br-p1p1"}
        Port br-int
            Interface br-int
                type: internal
    Bridge "br-p1p2"
        Port "phy-br-p1p2"
            Interface "phy-br-p1p2"
                type: patch
                options: {peer="int-br-p1p2"}
        Port "p1p2"
            Interface "p1p2"
                type: dpdkphy
                options: {port="1"}
        Port "br-p1p2"
            Interface "br-p1p2"
                type: internal

4 Move the p1p2 physical port under the same bridge as p1p1

ovs-vsctl del-port p1p2
ovs-vsctl add-port br-p1p1 p1p2 -- set Interface p1p2 type=dpdkphy option:port=1

5 Delete the agent of OpenStack

./rejoin-stack.sh
ctrl-a 1
ctrl-c
ctrl-a d

6 Add the dpdkvhost interfaces for the VM

ovs-vsctl --no-wait add-port br-p1p1 port3 -- set Interface port3 type=dpdkvhost ofport_request=3
ovs-vsctl --no-wait add-port br-p1p1 port4 -- set Interface port4 type=dpdkvhost ofport_request=4

7 Find out the port number of the obstructed interfaces

ovs-ofctl show br-p1p1

The output should be similar to the following. Note the number to the left of each interface name; that number is the port used in the flow rules below.

OFPT_FEATURES_REPLY (xid=0x2): dpid:0000286031010000
n_tables:254, n_buffers:256
capabilities: FLOW_STATS TABLE_STATS QUEUE_STATS ARP_MATCH_IP
actions: OUTPUT SET_VLAN_VID SET_VLAN_PCP STRIP_VLAN SET_DL_SRC SET_DL_DST SET_NW_SRC SET_NW_DST SET_TP_SRC SET_TP_DST
 1(phy-br-p1p1): addr:9e:ae:92:25:3c:c1, config: 0, state: 0, speed: 0 Mbps now, 0 Mbps max
 2(p1p2): addr:9e:ae:92:25:3c:c1, config: 0, state: 0, speed: 0 Mbps now, 0 Mbps max
 3(port3): addr:9e:ae:92:25:3c:c1, config: 0, state: 0, speed: 0 Mbps now, 0 Mbps max
 4(port4): addr:49:04:ff:7f:00:00, config: 0, state: 0, speed: 0 Mbps now, 0 Mbps max
 16(p1p1): addr:49:04:ff:7f:00:00, config: 0, state: 0, speed: 0 Mbps now, 0 Mbps max
 LOCAL(br-p1p1): addr:9e:ae:92:25:3c:c1, config: 0, state: 0, speed: 0 Mbps now, 0 Mbps max
OFPT_GET_CONFIG_REPLY (xid=0x4): frags=normal miss_send_len=0

8 Clean up the flow table of the bridge

ovs-ofctl del-flows br-p1p1

9 Program the flows so each physical interface forwards the packets to a dpdkvhost interface and the other way round

ovs-ofctl add-flow br-p1p1 in_port=16,dl_type=0x0800,idle_timeout=0,action=output:3
ovs-ofctl add-flow br-p1p1 in_port=3,dl_type=0x0800,idle_timeout=0,action=output:16
ovs-ofctl add-flow br-p1p1 in_port=4,dl_type=0x0800,idle_timeout=0,action=output:2
ovs-ofctl add-flow br-p1p1 in_port=2,dl_type=0x0800,idle_timeout=0,action=output:4
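As a quick sanity check (not part of the original procedure), dump the flow table and confirm that the four flows referencing ports 2, 3, 4, and 16 are installed and accumulating packet counters once traffic runs:

# list installed flows and their packet/byte counters
ovs-ofctl dump-flows br-p1p1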


10 Users can now spawn their vBNG

qemu-kvm -cpu host -enable-kvm -m 4096 -smp 4,cores=4,threads=1,sockets=1 -name VM1 \
 -hda <path to the VM image file> -mem-path /dev/hugepages -mem-prealloc -vnc :2 -daemonize \
 -net nic,model=virtio,macaddr=00:1e:77:68:09:fd -net tap,ifname=tap1,script=no,downscript=no \
 -netdev type=tap,id=net1,script=no,downscript=no,ifname=port3,vhost=on \
 -device virtio-net-pci,netdev=net1,mac=00:00:01:00:00:01,csum=off,gso=off,guest_tso4=off,guest_tso6=off,guest_ecn=off \
 -netdev type=tap,id=net2,script=no,downscript=no,ifname=port4,vhost=on \
 -device virtio-net-pci,netdev=net2,mac=00:00:01:00:00:02,csum=off,gso=off,guest_tso4=off,guest_tso6=off,guest_ecn=off


Appendix A Additional OpenDaylight Information

This section describes how OpenDaylight can be used in a multi-node setup. Two hosts are used: one runs OpenDaylight, the OpenStack controller + compute services, and OVS; the second host is a compute node. This section also describes how to create a VXLAN tunnel, create VMs, and ping from one VM to another.

Note: Due to a known defect in ODL (https://bugs.opendaylight.org/show_bug.cgi?id=2469), the multi-node setup could not be verified.

Following is a sample local.conf for the OpenDaylight host:

[[local|localrc]]
FORCE=yes

HOST_NAME=<name of this machine>
HOST_IP=<ip of this machine>
HOST_IP_IFACE=<mgmt ip isolated from internet>

PUBLIC_INTERFACE=<isolated IP could be same as HOST_IP_IFACE>
VLAN_INTERFACE=
FLAT_INTERFACE=

ADMIN_PASSWORD=password
MYSQL_PASSWORD=password
DATABASE_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=no-token-password
HORIZON_PASSWORD=password
RABBIT_PASSWORD=password

disable_service n-net
disable_service n-cpu

enable_service q-svc
enable_service q-agt
enable_service q-dhcp
enable_service q-l3
enable_service q-meta
enable_service neutron
enable_service horizon

LOGFILE=$DEST/stack.sh.log
SCREEN_LOGDIR=$DEST/screen
SYSLOG=True
LOGDAYS=1

# ODL start
Q_HOST=$HOST_IP
enable_service odl-server
enable_service odl-compute
ODL_MGR_IP=10.11.10.7
ENABLED_SERVICES+=,n-api,n-crt,n-obj,n-cpu,n-cond,n-sch,n-novnc,n-cauth,n-cauth,nova
ENABLED_SERVICES+=,cinder,c-api,c-vol,c-sch,c-bak

Q_PLUGIN=ml2
Q_AGENT=openvswitch
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch,opendaylight
Q_ML2_PLUGIN_TYPE_DRIVERS=vlan,flat,local
Q_ML2_TENANT_NETWORK_TYPE=vxlan

ENABLE_TENANT_TUNNELS=True
ENABLE_TENANT_VLANS=True

PHYSICAL_NETWORK=physnet1
ML2_VLAN_RANGES=physnet1:1000:1010
OVS_PHYSICAL_BRIDGE=br-

MULTI_HOST=True

[[post-config|$NOVA_CONF]]

# disable nova security groups
[DEFAULT]
firewall_driver=nova.virt.firewall.NoopFirewallDriver
novncproxy_host=0.0.0.0
novncproxy_port=6080

Here is a sample local.conf for the compute node:

[[local|localrc]]

FORCE=yes
MULTI_HOST=True

HOST_NAME=<name of this machine>
HOST_IP=<ip of this machine>
HOST_IP_IFACE=<isolated interface>
SERVICE_HOST_NAME=<name of the controller machine>
SERVICE_HOST=<ip of controller machine>
Q_HOST=$SERVICE_HOST

MYSQL_HOST=$SERVICE_HOST
RABBIT_HOST=$SERVICE_HOST

GLANCE_HOST=$SERVICE_HOST
GLANCE_HOSTPORT=$SERVICE_HOST:9292
KEYSTONE_AUTH_HOST=$SERVICE_HOST
KEYSTONE_SERVICE_HOST=<ip of controller machine>

ADMIN_PASSWORD=password
MYSQL_PASSWORD=password
DATABASE_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=no-token-password
HORIZON_PASSWORD=password
RABBIT_PASSWORD=password

disable_all_services

enable_service rabbit
enable_service n-cpu
enable_service q-agt
enable_service odl-compute

DEST=/opt/stack
LOGFILE=$DEST/stack.sh.log
SCREEN_LOGDIR=$DEST/screen
SYSLOG=True
LOGDAYS=1

ODL_MGR_IP=<ip of controller machine>

Q_PLUGIN=ml2
Q_AGENT=openvswitch
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch,opendaylight
Q_ML2_PLUGIN_TYPE_DRIVERS=vxlan
OVS_NUM_HUGEPAGES=8192
OVS_DATAPATH_TYPE=netdev

OVDK_OVS_GIT_TAG=

ENABLE_TENANT_TUNNELS=True
ENABLE_TENANT_VLANS=True

Q_ML2_TENANT_NETWORK_TYPE=vxlan
ML2_VLAN_RANGES=physnet1:1000:1010
PHYSICAL_NETWORK=physnet1
OVS_PHYSICAL_BRIDGE=br-p1p1

[[post-config|$NOVA_CONF]]
[DEFAULT]
firewall_driver=nova.virt.firewall.NoopFirewallDriver
vnc_enabled=True
vncserver_listen=0.0.0.0
vncserver_proxyclient_address=$HOST_IP

A1 Create VMs using DevStack Horizon GUI

After starting the OpenDaylight controller as described in Section 6.1, run stack.sh on the controller and compute nodes.

Log in to http://<control node ip address>:8080 to start the Horizon GUI.

Verify that the node shows up in the following GUI

Create a new Vxlan network

1 Click on the Networks tab

2 Click on the Create Network button

3 Enter the Network name then click Next


4 Enter the subnet information then click Next


5 Add additional information then click Next

6 Click the Create button

7 Create a VM instance by clicking the Launch Instances button


8 Click on the Details tab to enter VM details


9 Click on the Networking tab then enter network information

VMs will now be created.

Instead of disabling the OVSDB neutron service by removing the OVSDB neutron bundle file, it is possible to disable the bundle from the OSGi console. However, there does not appear to be a way to make this persistent, so it must be done each time the controller restarts.


Once the controller is up and running, connect to the OSGi console. The ss command displays all of the bundles that are installed and their status; adding a string filters the list of bundles. List the OVSDB bundles:

osgi> ss ovs
Framework is launched.

id    State      Bundle
106   ACTIVE     org.opendaylight.ovsdb.northbound_0.5.0
112   ACTIVE     org.opendaylight.ovsdb_0.5.0
262   ACTIVE     org.opendaylight.ovsdb.neutron_0.5.0

Note that there are three OVSDB bundles running (the OVSDB neutron bundle has not been removed in this case)

Disable the OVSDB neutron bundle and then list the OVSDB bundles again

osgi> stop 262
osgi> ss ovs
Framework is launched.

id    State      Bundle
106   ACTIVE     org.opendaylight.ovsdb.northbound_0.5.0
112   ACTIVE     org.opendaylight.ovsdb_0.5.0
262   RESOLVED   org.opendaylight.ovsdb.neutron_0.5.0

Now the OVSDB neutron bundle is in the RESOLVED state which means that it is not active


Appendix B BNG as an Appliance

Please download the latest BNG application from https://01.org/intel-data-plane-performance-demonstrators/downloads. More details about how the BNG works can be found at https://01.org/intel-data-plane-performance-demonstrators/quick-overview.


Appendix C Glossary

Acronym Description

ATR Application Targeted Routing

COTS Commercial Off‐The-Shelf

DPI Deep Packet Inspection

FCS Frame Check Sequence

GRE Generic Routing Encapsulation

GRO Generic Receive Offload

IOMMU InputOutput Memory Management Unit

Kpps Kilo packets per second

KVM Kernel-based Virtual Machine

LRO Large Receive Offload

MSI Message Signaling Interrupt

MPLS Multi-protocol Label Switching

Mpps Million packets per second

NIC Network Interface Card

pps Packets per second

QAT Quick Assist Technology

QinQ VLAN stacking (8021ad)

RA Reference Architecture

RSC Receive Side Coalescing

RSS Receive Side Scaling

SP Service Provider

SR-IOV Single root IO Virtualization

TCO Total Cost of Ownership

TSO TCP Segmentation Offload


Appendix D References

Document Name Source

Internet Protocol version 4: http://www.ietf.org/rfc/rfc791.txt

Internet Protocol version 6: http://www.faqs.org/rfc/rfc2460.txt

Intel® 82599 10 Gigabit Ethernet Controller Datasheet: http://www.intel.com/content/www/us/en/ethernet-controllers/82599-10-gbe-controller-datasheet.html

Intel DDIO: https://www-ssl.intel.com/content/www/us/en/io/direct-data-i-o.html

Bandwidth Sharing Fairness: http://www.intel.com/content/www/us/en/network-adapters/10-gigabit-network-adapters/10-gbe-ethernet-flexible-port-partitioning-brief.html

Design Considerations for efficient network applications with Intel® multi-core processor-based systems on Linux: http://download.intel.com/design/intarch/papers/324176.pdf

OpenFlow with Intel 82599: http://ftp.sunet.se/pub/Linux/distributions/bifrost/seminars/workshop-2011-03-31/Openflow_1103031.pdf

Wu, W., DeMar, P. & Crawford, M. (2012). A Transport-Friendly NIC for Multicore/Multiprocessor Systems. IEEE Transactions on Parallel and Distributed Systems, vol. 23, no. 4, April 2012. http://lss.fnal.gov/archive/2010/pub/fermilab-pub-10-327-cd.pdf

Why does Flow Director Cause Packet Reordering: http://arxiv.org/ftp/arxiv/papers/1106/1106.0443.pdf

IA packet processing: http://www.intel.com/p/en_US/embedded/hwsw/technology/packet-processing

High Performance Packet Processing on Cloud Platforms using Linux with Intel® Architecture: http://networkbuilders.intel.com/docs/network_builders_RA_packet_processing.pdf

Packet Processing Performance of Virtualized Platforms with Linux and Intel® Architecture: http://networkbuilders.intel.com/docs/network_builders_RA_NFV.pdf

DPDK: http://www.intel.com/go/dpdk

Intel® DPDK Accelerated vSwitch: https://01.org/packet-processing


LEGAL

By using this document in addition to any agreements you have with Intel you accept the terms set forth below

You may not use or facilitate the use of this document in connection with any infringement or other legal analysis concerning Intel products described herein You agree to grant Intel a non-exclusive royalty-free license to any patent claim thereafter drafted which includes subject matter disclosed herein

INFORMATION IN THIS DOCUMENT IS PROVIDED IN CONNECTION WITH INTEL PRODUCTS NO LICENSE EXPRESS OR IMPLIED BY ESTOPPEL OR OTHERWISE TO ANY INTELLECTUAL PROPERTY RIGHTS IS GRANTED BY THIS DOCUMENT EXCEPT AS PROVIDED IN INTELS TERMS AND CONDITIONS OF SALE FOR SUCH PRODUCTS INTEL ASSUMES NO LIABILITY WHATSOEVER AND INTEL DISCLAIMS ANY EXPRESS OR IMPLIED WARRANTY RELATING TO SALE ANDOR USE OF INTEL PRODUCTS INCLUDING LIABILITY OR WARRANTIES RELATING TO FITNESS FOR A PARTICULAR PURPOSE MERCHANTABILITY OR INFRINGEMENT OF ANY PATENT COPYRIGHT OR OTHER INTELLECTUAL PROPERTY RIGHT

Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors

Performance tests such as SYSmark and MobileMark are measured using specific computer systems components software operations and functions Any change to any of those factors may cause the results to vary You should consult other information and performance tests to assist you in fully evaluating your contemplated purchases including the performance of that product when combined with other products

The products described in this document may contain design defects or errors known as errata which may cause the product to deviate from published specifications Current characterized errata are available on request Contact your local Intel sales office or your distributor to obtain the latest specifications and before placing your product order

Intel technologies may require enabled hardware specific software or services activation Check with your system manufacturer or retailer Tests document performance of components on a particular test in specific systems Differences in hardware software or configuration will affect actual performance Consult other sources of information to evaluate performance as you consider your purchase For more complete information about performance and benchmark results visit httpwwwintelcomperformance

All products computer systems dates and figures specified are preliminary based on current expectations and are subject to change without notice Results have been estimated or simulated using internal Intel analysis or architecture simulation or modeling and provided to you for informational purposes Any differences in your system hardware software or configuration may affect your actual performance

No computer system can be absolutely secure Intel does not assume any liability for lost or stolen data or systems or any damages resulting from such losses

Intel does not control or audit third-party web sites referenced in this document You should visit the referenced web site and confirm whether referenced data are accurate

Intel Corporation may have patents or pending patent applications trademarks copyrights or other intellectual property rights that relate to the presented subject matter The furnishing of documents and other materials and information does not provide any license express or implied by estoppel or otherwise to any such patents trademarks copyrights or other intellectual property rights

copy 2014 Intel Corporation All rights reserved Intel the Intel logo Core Xeon and others are trademarks of Intel Corporation in the US andor other countries Other names and brands may be claimed as the property of others




30 Hardware Components

Table 3-1 Hardware Ingredients (Grizzly Pass)

Item Description Notes

Platform Intel® Server Board 2U 8x3.5 SATA 2x750W 2xHS Rails Intel R2308GZ4GC

Grizzly Pass Xeon DP Server (2 CPU sockets) 240 GB SSD 2.5in SATA 6 Gb/s Intel Wolfsville SSDSC2BB240G401 DC S3500 Series

Processors Intel® Xeon® Processor Series E5-2680 v2 LGA2011 2.8 GHz 25 MB 115 W 10 cores

Ivy Bridge Socket-R (EP) 10 Core 2.8 GHz 115 W 2.5 MB per core LLC 8.0 GT/s QPI DDR3-1867 HT turbo. Long product availability

Cores 10 physical cores/CPU, 20 hyper-threaded cores per CPU for 40 total cores

Memory 8 GB 1600 Reg ECC 1.5 V DDR3 Kingston KVR16R11S48I Romley

64 GB RAM (8x 8 GB)

NICs (82599) 2x Intelreg 82599 10 GbE Controller (Niantic) NICs are on socket zero (3 PCIe slots available on socket 0)

BIOS SE5C60086B02010002082220131453Release Date 08222013BIOS Revision 46

Intelreg Virtualization Technology for Directed IO (Intelreg VT-d)Hyper-threading enabled

Table 3-2 Hardware Ingredients (Wildcat Pass)

Item Description Notes

Platform Intel® Server Board S2600WTT 1100 W power supply Wildcat Pass Xeon DP Server (2 CPU sockets) 120 GB SSD 2.5in SATA 6 Gb/s Intel Wolfsville SSDSC2BB120G4

Processors Intel® Xeon® Processor Series E5-2697 v3 2.6 GHz 25 MB 145 W 14 cores

Haswell 14 Core 2.6 GHz 145 W 35 MB total cache per processor 9.6 GT/s QPI DDR4-1600/1866/2133

Cores 14 physical cores/CPU, 28 hyper-threaded cores per CPU for 56 total cores

Memory 8 GB DDR4 RDIMM Crucial CT8G4RFS423 64 GB RAM (8x 8 GB)

NICs (82599) 2x Intelreg 82599 10 GbE Controller (Niantic) NICs are on socket zero

BIOS GRNDSDP186B0038R011409040644 Release Date 09042014

Intelreg Virtualization Technology for Directed IO (Intelreg VT-d) enabled only for SR-IOV PCI pass-through testsHyper-threading enabled but disabled for benchmark testing


40 Software Versions

Table 4-1 Software Versions

Software Component Function VersionConfiguration

Fedora 20 x86_64 Host OS 3.15.6-200.fc20.x86_64

Qemu‐kvm Virtualization technology Modified QEMU 162 (bundled with Intelreg DPDK Accelerated vSwitch)

Data Plane Development Kit (DPDK)

Network Stack bypass and libraries for packet processing Includes user space poll mode drivers

1.7.1

Intelreg DPDK Accelerated vSwitch

vSwitch v1.2.0, commit id 6210bb0a6139b20283de115f87aa7a381b04670f

Open vSwitch vSwitch Open vSwitch v2.3, commit id b35839f3855e3b812709c6ad1c9278f498aa9935

OpenStack SDN Orchestrator Juno Release + Intel patches (openstack_ovdkl02-907zip)

DevStack Tool for Open Stack deployment

httpsgithubcomopenstack-devdevstackgit Commit id d6f700db33aeab68916156a98971aef8cfa53a2e

OpenDaylight SDN Controller HeliumSR1

Suricata IPS application Suricata v2.0.4 (current Fedora 20 package)

BNG DPPD Broadband Network Gateway DPDK Performance Demonstrator Application

DPPD v0.13, https://01.org/intel-data-plane-performance-demonstrators/downloads

PktGen Software Network Package Generator v2.7.7


41 Obtaining Software IngredientsTable 4-2 Software Ingredients

Software Component

Software Sub-components Patches Location Comments

Fedora 20 http://download.fedoraproject.org/pub/fedora/linux/releases/20/Fedora/x86_64/iso/Fedora-20-x86_64-DVD.iso

Standard Fedora 20 iso image

Data Plane Development Kit (DPDK)

DPDK poll mode driver sample apps (bundled)

http://dpdk.org/git/dpdk, Commit id 99213f3827bad956d74e2259d06844012ba287a4

All sub-components in one zip file

Intelreg DPDK Accelerated vSwitch (OVDK)

dpdk-ovs qemu ovs-db vswitchd ovs_client (bundled)

https://github.com/01org/dpdk-ovs.git, Commit id 6210bb0a6139b20283de115f87aa7a381b04670f

v120

Open vSwitch https://github.com/openvswitch/ovs.git, Commit id b35839f3855e3b812709c6ad1c9278f498aa9935

OpenStack Juno release To be deployed using DevStack(see following row)

Three patches downloaded as one tarball Then follow the instructions to deploy the Nodes

DevStack Patches for DevStack and Nova

httpsgithubcomopenstack-devdevstackgitCommit id d6f700db33aeab68916156a98971aef8cfa53a2eThen apply to that commit the patches inhttpsdownload01orgpacket-processingONPS12openstack_ovdkl02-907zip

Two patches downloaded as one tarball Then follow the instructions to deploy

OpenDaylight http://nexus.opendaylight.org/content/repositories/opendaylight.release/org/opendaylight/integration/distribution-karaf/0.2.1-Helium-SR1/distribution-karaf-0.2.1-Helium-SR1.tar.gz

Intelreg ONP Server Release 12 Script

Helper scripts to setup SRT 12 using DevStack

https://download.01.org/packet-processing/ONPS1.2/onps_server_1_2.tar.gz

BNG DPPD Broadband Network Gateway DPDK Performance

https://01.org/intel-data-plane-performance-demonstrators/dppd-bng-v013.zip

PktGen Software Network Package Generator

https://github.com/Pktgen/Pktgen-DPDK.git, commit id 5e8633c99e9771467dc26b64a4ff232c7e9fba2a

BNG Helper scripts

Intelreg ONP for Server Configuration Scripts for vBNG

https://01.org/sites/default/files/page/vbng-scripts.tgz

Suricata Package from Fedora 20 yum install suricata


50 Installation and Configuration Guide

This section describes the installation and configuration instructions to prepare the controller and compute nodes

51 Instructions Common to Compute and Controller Nodes

This section describes how to prepare both the controller and compute nodes with the right BIOS settings and operating system installation The preferred operating system is Fedora 20 although it is considered relatively easy to use this solutions guide for other Linux distributions

511 BIOS SettingsTable 5-1 BIOS Settings

Configuration Setting forController Node

Setting forCompute Node

Enhanced Intel SpeedStep Enabled Disabled

Processor C3 Disabled Disabled

Processor C6 Disabled Disabled

Intelreg Virtualization Technology for Directed IO (Intelreg Vt-d) Disabled Enabled(OpenStack Numa Placement only)

Intel Hyper-Threading Technology (HTT) Enabled Disabled

MLC Streamer Enabled Enabled

MLC Spatial Prefetcher Enabled Enabled

DCU Instruction Prefetcher Enabled Enabled

Direct Cache Access (DCA) Enabled Enabled

CPU Power and Performance Policy Performance Performance

Intel Turbo boost Enabled Off

Memory RAS and Performance Configuration -gt Numa Optimized Enabled Enabled


5.1.2 Operating System Installation and Configuration

Following are some generic instructions for installing and configuring the operating system. Other ways of installing the operating system, such as network installation, PXE boot installation, or USB key installation, are not described in this solutions guide.

5121 Getting the Fedora 20 DVD

1 Download the 64-bit Fedora 20 DVD (not Fedora 20 Live Media) from the following site

httpfedoraprojectorgenget-fedoraformats

or from direct URL

http://download.fedoraproject.org/pub/fedora/linux/releases/20/Fedora/x86_64/iso/Fedora-20-x86_64-DVD.iso

2 Burn the ISO file to DVD and create an installation disk

5122 Fedora 20 Installation

Use the DVD to install Fedora 20 During the installation click Software selection then choose the following

1 C Development Tool and Libraries

2 Development Tools

Also create a user stack and check the box Make this user administrator during the installation The user stack is used in OpenStack installation

Note: Please make sure to download and use the onps_server_1_2.tar.gz tarball. Start with the README file; it gives instructions on how to use Intel's scripts to automate most of the installation steps described in this section, which saves you time. When using Intel's scripts, you can jump to Section 5.4 after installing OpenDaylight based on the instructions described in Section 6.2.1.

5123 Additional Packages Installation and Upgrade

Some packages are not installed with the standard Fedora 20 installation but are required by Intelreg Open Network Platform Software (ONPS) components These packages should be installed by the user

git ntp patch socat python-passlib libxslt-devel libffi-devel fuse-devel gluster python-cliff

ONPS supports Fedora kernel 3.15.6, which is newer than the native Fedora 20 kernel 3.11.10. To upgrade to 3.15.6, follow these steps:

1 Download kernel packages

wget https://kojipkgs.fedoraproject.org/packages/kernel/3.15.6/200.fc20/x86_64/kernel-3.15.6-200.fc20.x86_64.rpm

wget https://kojipkgs.fedoraproject.org/packages/kernel/3.15.6/200.fc20/x86_64/kernel-devel-3.15.6-200.fc20.x86_64.rpm

wget https://kojipkgs.fedoraproject.org/packages/kernel/3.15.6/200.fc20/x86_64/kernel-modules-extra-3.15.6-200.fc20.x86_64.rpm


2 Install kernel packages

rpm -i kernel-3.15.6-200.fc20.x86_64.rpm
rpm -i kernel-devel-3.15.6-200.fc20.x86_64.rpm
rpm -i kernel-modules-extra-3.15.6-200.fc20.x86_64.rpm

3 Reboot the system to allow booting into the 3.15.6 kernel.

Note ONPS depends on libraries provided by your Linux distribution As such it is recommended that you regularly update your Linux distribution with the latest bug fixes and security patches to reduce the risk of security vulnerabilities in your systems

After installing the required packages the operating system should be updated with the following command

yum update -y

This command upgrades to the latest kernel that Fedora supports. In order to maintain the kernel version (3.15.6), the yum configuration file needs to be modified with this command:

echo exclude=kernel >> /etc/yum.conf

before running yum update

After the update completes the system needs to be rebooted

5124 Disable and Enable Services

For OpenStack the following services were disabled selinux firewall and NetworkManager Run the following commands

sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
systemctl disable firewalld.service
systemctl disable NetworkManager.service

The following services should be enabled ntp sshd and network Run the following commands

systemctl enable ntpd.service
systemctl enable ntpdate.service
systemctl enable sshd.service
chkconfig network on

It is important to keep the timing synchronized between all nodes It is also necessary to use a known NTP server for all nodes Users can edit etcntpconf to add a new server and remove default servers The following example replaces a default NTP server with a local NTP server 100012 and comments out other default servers

sed -i 's/server 0.fedora.pool.ntp.org iburst/server 10.0.0.12/g' /etc/ntp.conf
sed -i 's/server 1.fedora.pool.ntp.org iburst/#server 1.fedora.pool.ntp.org iburst/g' /etc/ntp.conf
sed -i 's/server 2.fedora.pool.ntp.org iburst/#server 2.fedora.pool.ntp.org iburst/g' /etc/ntp.conf
sed -i 's/server 3.fedora.pool.ntp.org iburst/#server 3.fedora.pool.ntp.org iburst/g' /etc/ntp.conf
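As an optional check (not part of the original steps), after restarting ntpd you can confirm that the local NTP server is actually being used:

# list the peers ntpd is synchronizing with; the local server should appear
ntpq -p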


5.2 Controller Node Setup

This section describes the controller node setup. It is assumed that the user successfully followed the operating system installation and configuration sections.

Note: Make sure to download and use the onps_server_1_2.tar.gz tarball. Start with the README file; it gives instructions on how to use Intel's scripts to automate most of the installation steps described in this section, which saves you time.

5.2.1 OpenStack (Juno)

This section documents features and limitations that are supported with the Intel® DPDK Accelerated vSwitch and OpenStack Juno.

5211 Network Requirements

General

At least two networks are required to build OpenStack infrastructure in a lab environment One network is used to connect all nodes for OpenStack management (management network) and the other one is a private network exclusively for an OpenStack internal connection (tenant network) between instances (or virtual machines)

One additional network is required for Internet connectivity as installing OpenStack requires pulling packages from various sourcesrepositories on the Internet

Some users might want to have Internet andor external connectivity for OpenStack instances (virtual machines) In this case an optional network can be used

The assumption is that the targeting OpenStack infrastructure contains multiple nodes one is controller node and one or more are compute node(s)

Network Configuration Example

The following is an example of how to configure networks for OpenStack infrastructure The example uses four network interfaces as follows

bull ens2f1 For Internet network - Used to pull all necessary packagespatches from repositories on the Internet configured to obtain a DHCP address

bull ens2f0 For Management network - Used to connect all nodes for OpenStack management, configured to use network 10.11.0.0/16

bull p1p1 For Tenant network - Used for OpenStack internal connections for virtual machines configured with no IP address

bull p1p2 For Optional External network - Used for virtual machine Internetexternal connectivity configured with no IP address This interface is only in the Controller node if external network is configured For Compute node this interface is not needed

Note that among these interfaces interface for virtual network (in this example p1p1) must be an 82599 port because it is used for DPDK and Intelreg DPDK Accelerated vSwitch Also note that a static IP address should be used for interface of management network

In Fedora 20 the network configuration files are located at

/etc/sysconfig/network-scripts


To configure a network on the host system edit the following network configuration files

ifcfg-ens2f1:
DEVICE=ens2f1
TYPE=Ethernet
ONBOOT=yes
BOOTPROTO=dhcp

ifcfg-ens2f0:
DEVICE=ens2f0
TYPE=Ethernet
ONBOOT=yes
BOOTPROTO=static
IPADDR=10.11.12.11
NETMASK=255.255.0.0

ifcfg-p1p1:
DEVICE=p1p1
TYPE=Ethernet
ONBOOT=yes
BOOTPROTO=none

ifcfg-p1p2:
DEVICE=p1p2
TYPE=Ethernet
ONBOOT=yes
BOOTPROTO=none

Note Do not configure the IP address for p1p1 (10 Gbs interface) otherwise DPDK does not work when binding the driver during OpenStack Neutron installation

Note: 10.11.12.11 and 255.255.0.0 are a static IP address and netmask on the management network. It is necessary to have a static IP address on this subnet; the IP address 10.11.12.11 is just an example.

5212 Storage Requirements

By default DevStack uses block storage (Cinder) with a volume group named stack-volumes. If not specified, stack-volumes is created with 10 GB of space from a local file system. Note that stack-volumes is the name of the volume group, not a volume.

The following example shows how to use spare local disks /dev/sdb and /dev/sdc to form stack-volumes on a controller node by running the following commands:

pvcreate /dev/sdb
pvcreate /dev/sdc
vgcreate stack-volumes /dev/sdb /dev/sdc
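A quick way to confirm that the volume group was assembled from both disks (an optional check, not in the original text):

# show the stack-volumes volume group and its physical volumes
vgdisplay -v stack-volumes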

5213 OpenStack Installation Procedures

General

DevStack is used to deploy OpenStack in this example The following procedure uses an actual example of an installation performed in an Intel test lab consisting of one controller node (controller) and one compute node (compute)

Controller Node Installation Procedures

The following example uses a host for controller node installation with the following

bull Hostname sdnlab-k01

bull Internet network IP address Obtained from DHCP server


bull OpenStack Management IP address 1011121

bull Userpassword stackstack

Root User Actions

Login as su or root user and perform the following

1 Add stack user to sudoer list

echo "stack ALL=(ALL) NOPASSWD: ALL" >> /etc/sudoers

2 Edit /etc/libvirt/qemu.conf, add or modify the following lines:

cgroup_controllers = [ "cpu", "devices", "memory", "blkio", "cpuset", "cpuacct" ]

cgroup_device_acl = [ "/dev/null", "/dev/full", "/dev/zero", "/dev/random", "/dev/urandom", "/dev/ptmx", "/dev/kvm", "/dev/kqemu", "/dev/rtc", "/dev/hpet", "/dev/net/tun", "/mnt/huge", "/dev/vhost-net" ]

hugetlbfs_mount = "/mnt/huge"

3 Restart libvirt service and make sure libvird is active

systemctl restart libvirtd.service
systemctl status libvirtd.service

Stack User Actions

1 Login as a stack user

2 Configure the appropriate proxies (yum, http, https, and git) for package installation and make sure these proxies are functional. Note that on the controller node, localhost and its IP address should be included in the no_proxy setup (for example, export no_proxy=localhost,10.11.12.1).

3 Intelreg DPDK Accelerated vSwitch patches for OpenStack

The tar file openstack_ovdkl02-907zip contains necessary patches for OpenStack Currently it is not native to the OpenStack The file can be downloaded from

https01orgsitesdefaultfilespageopenstack_ovdkl02-907zip

Place the file in the /home/stack directory and unzip it. Three patch files, devstack.patch, nova.patch, and neutron.patch, will be present after the unzip.

cd homestack wget https01orgsitesdefaultfilespageopenstack_ovdkl02-907zip unzip openstack_ovdkl02-907zip

4 Download DevStack source

git clone https://github.com/openstack-dev/devstack.git

5 Check out DevStack with Intelreg DPDK Accelerated vSwitch and patch

cd /home/stack/devstack
git checkout d6f700db33aeab68916156a98971aef8cfa53a2e
patch -p1 < /home/stack/devstack.patch


6 Download and patch Nova and Neutron

sudo mkdir /opt/stack
sudo chown stack:stack /opt/stack
cd /opt/stack
git clone https://github.com/openstack/nova.git
git clone https://github.com/openstack/neutron.git
cd /opt/stack/nova
git checkout b7738bfb6c2f271d047e8f20c0b74ef647367111
patch -p1 < /home/stack/nova.patch

7 Create localconf file in homestackdevstack

8 Pay attention to the following in the localconf file

a Use Rabbit for messaging services (Rabbit is on by default) In the past Fedora only supported QPID for OpenStack Now it only supports Rabbit

b Explicitly disable Nova compute service on the controller This is because by default Nova compute service is enabled

disable_service n-cpu

c To use Open vSwitch specify in configuration for ML2 plug-in

Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch

d Explicitly disable tenant tunneling and enable tenant VLAN This is because by default tunneling is used

ENABLE_TENANT_TUNNELS=False
ENABLE_TENANT_VLANS=True

e A sample local.conf file for the controller node is as follows:

# Controller node
[[local|localrc]]

FORCE=yes
ADMIN_PASSWORD=password
MYSQL_PASSWORD=password
DATABASE_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=no-token-password
HORIZON_PASSWORD=password
RABBIT_PASSWORD=password

disable_service n-net
disable_service n-cpu
enable_service q-svc
enable_service q-agt
enable_service q-dhcp
enable_service q-l3
enable_service q-meta
enable_service neutron
enable_service horizon

DEST=/opt/stack
LOGFILE=$DEST/stack.sh.log
SCREEN_LOGDIR=$DEST/screen
SYSLOG=True
LOGDAYS=1

HOST_IP_IFACE=ens2f0
PUBLIC_INTERFACE=p1p2
VLAN_INTERFACE=p1p1
FLAT_INTERFACE=p1p1

Q_AGENT=openvswitch
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch
Q_ML2_PLUGIN_TYPE_DRIVERS=vlan,flat,local

ENABLE_TENANT_TUNNELS=False
ENABLE_TENANT_VLANS=True
PHYSICAL_NETWORK=physnet1
ML2_VLAN_RANGES=physnet1:1000:1010
OVS_PHYSICAL_BRIDGE=br-p1p1
MULTI_HOST=True

[[post-config|$NOVA_CONF]]
[DEFAULT]
firewall_driver=nova.virt.firewall.NoopFirewallDriver
novncproxy_host=0.0.0.0
novncproxy_port=6080

9 Install DevStack

cd /home/stack/devstack
./stack.sh

10 For a successful installation the following shows at the end of screen output

stack.sh completed in XXX seconds

where XXX is the number of seconds

11 For controller node only mdash Add physical port(s) to the bridge(s) created by the DevStack installation The following example can be used to configure the two bridges br-p1p1 (for virtual network) and br-ex (for external network)

sudo ovs-vsctl add-port br-p1p1 p1p1
sudo ovs-vsctl add-port br-ex p1p2
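Optionally (not part of the original procedure), confirm that the physical ports were added to the expected bridges:

# p1p1 should appear under br-p1p1 and p1p2 under br-ex
sudo ovs-vsctl show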

12 Make sure proper VLANs are created in the switch connecting physical port p1p1 For example the previous localconf specifies VLAN range of 1000-1010 therefore matching VLANs 1000 to 1010 should be configured in the switch


5.3 Compute Node Setup

This section describes how to complete the setup of the compute nodes. It is assumed that the user has successfully completed the BIOS settings and operating system installation and configuration sections.

Note: Please make sure to download and use the onps_server_1_2.tar.gz tarball. Start with the README file; it gives instructions on how to use Intel's scripts to automate most of the installation steps described in this section, which saves you time.

531 Host Configuration

5311 Using DevStack to Deploy vSwitch and OpenStack Components

General

Deploying OpenStack and Intelreg DPDK Accelerated vSwitch using DevStack on a compute node follows the same procedures as on the controller node Differences include

bull Required services are nova compute neutron agent and Rabbit

bull Intelreg DPDK Accelerated vSwitch is used in place of Open vSwitch for neutron agent

Compute Node Installation Example

The following example uses a host for compute node installation with the following

bull Hostname sdnlab-k02

bull Lab network IP address Obtained from DHCP server

bull OpenStack Management IP address 1011122

bull Userpassword stackstack

Note the following

bull No_proxy setup Localhost and its IP address should be included in the no_proxy setup In addition hostname and IP address of the controller node should also be included For example

export no_proxy=localhost,10.11.12.2,sdnlab-k01,10.11.12.1

bull Differences in the localconf file

mdash The service host is the controller as well as other OpenStack servers such as MySQL Rabbit Keystone and Image Therefore they should be spelled out Using the controller node example in the previous section the service host and its IP address should be

SERVICE_HOST_NAME=sdnlab-k01
SERVICE_HOST=10.11.12.1

mdash The only OpenStack services required in compute nodes are messaging nova compute and neutron agent so the localconf might look like

disable_all_services
enable_service rabbit
enable_service n-cpu
enable_service q-agt


mdash The user has option to use ovdk or openvswitch for neutron agent

Q_AGENT=ovdk

or

Q_AGENT=openvswitch

Note: For openvswitch, the user can specify regular or accelerated Open vSwitch (accelerated OVS). If accelerated OVS is used, the following setup should be added:

OVS_DATAPATH_TYPE=netdev

Note If both are specified in the same localconf file the later one overwrites the previous one

mdash For the OVDK and accelerated OVS huge pages setting specify number of huge pages to be allocated and mounting point (default is mnthuge)

OVDK_NUM_HUGEPAGES=8192

or

OVS_NUM_HUGEPAGES=8192

mdash For this version Intel uses specific versions for OVDK or Accelerated OVS from their respective repositories Specify the following in the localconf file if OVDK or accelerated OVS is used

OVDK_GIT_TAG=6210bb0a6139b20283de115f87aa7a381b04670f
OVS_GIT_TAG=b35839f3855e3b812709c6ad1c9278f498aa9935

mdash Binding the physical port to the bridge is through the following line in localconf For example to bind port p1p1 to bridge br-p1p1 use

OVS_PHYSICAL_BRIDGE=br-p1p1

- A sample local.conf file for a compute node with the ovdk agent follows:

# Compute node
[[local|localrc]]

FORCE=yes
MULTI_HOST=True

HOST_NAME=$(hostname)
HOST_IP=10.11.12.2
HOST_IP_IFACE=ens2f0
SERVICE_HOST_NAME=10.11.12.1
SERVICE_HOST=10.11.12.1

MYSQL_HOST=$SERVICE_HOST
RABBIT_HOST=$SERVICE_HOST

GLANCE_HOST=$SERVICE_HOST
GLANCE_HOSTPORT=$SERVICE_HOST:9292
KEYSTONE_AUTH_HOST=$SERVICE_HOST
KEYSTONE_SERVICE_HOST=$SERVICE_HOST

ADMIN_PASSWORD=password
MYSQL_PASSWORD=password
DATABASE_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=no-token-password
HORIZON_PASSWORD=password
RABBIT_PASSWORD=password

disable_all_services
enable_service rabbit
enable_service n-cpu
enable_service q-agt

DEST=/opt/stack
LOGFILE=$DEST/stack.sh.log
SCREEN_LOGDIR=$DEST/screen
SYSLOG=True
LOGDAYS=1

Q_AGENT=ovdk
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch
Q_ML2_PLUGIN_TYPE_DRIVERS=vlan
OVDK_NUM_HUGEPAGES=8192
OVDK_GIT_TAG=6210bb0a6139b20283de115f87aa7a381b04670f

ENABLE_TENANT_TUNNELS=False
ENABLE_TENANT_VLANS=True
ML2_VLAN_RANGES=physnet1:1000:1010
PHYSICAL_NETWORK=physnet1
OVS_PHYSICAL_BRIDGE=br-p1p1

[[post-config|$NOVA_CONF]]
[DEFAULT]
firewall_driver=nova.virt.firewall.NoopFirewallDriver
vnc_enabled=True
vncserver_listen=0.0.0.0
vncserver_proxyclient_address=$HOST_IP

- A sample local.conf file for a compute node with the accelerated OVS agent follows:

# Compute node
[[local|localrc]]

FORCE=yes
MULTI_HOST=True

HOST_NAME=$(hostname)
HOST_IP=10.11.12.2
HOST_IP_IFACE=ens2f0

SERVICE_HOST_NAME=sdnlab-k01
SERVICE_HOST=10.11.12.1

MYSQL_HOST=$SERVICE_HOST
RABBIT_HOST=$SERVICE_HOST

GLANCE_HOST=$SERVICE_HOST
GLANCE_HOSTPORT=$SERVICE_HOST:9292
KEYSTONE_AUTH_HOST=$SERVICE_HOST
KEYSTONE_SERVICE_HOST=$SERVICE_HOST

ADMIN_PASSWORD=password
MYSQL_PASSWORD=password
DATABASE_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=no-token-password
HORIZON_PASSWORD=password
RABBIT_PASSWORD=password

disable_all_services
enable_service rabbit
enable_service n-cpu
enable_service q-agt

DEST=/opt/stack
LOGFILE=$DEST/stack.sh.log
SCREEN_LOGDIR=$DEST/screen
SYSLOG=True
LOGDAYS=1

Q_AGENT=openvswitch
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch
Q_ML2_PLUGIN_TYPE_DRIVERS=vlan
OVS_NUM_HUGEPAGES=8192
OVS_DATAPATH_TYPE=netdev
OVS_GIT_TAG=b35839f3855e3b812709c6ad1c9278f498aa9935

ENABLE_TENANT_TUNNELS=False
ENABLE_TENANT_VLANS=True
ML2_VLAN_RANGES=physnet1:1000:1010
PHYSICAL_NETWORK=physnet1
OVS_PHYSICAL_BRIDGE=br-p1p1

[[post-config|$NOVA_CONF]]
[DEFAULT]
firewall_driver=nova.virt.firewall.NoopFirewallDriver
vnc_enabled=True
vncserver_listen=0.0.0.0
vncserver_proxyclient_address=$HOST_IP

5.4 vIPS

The vIPS used is Suricata, which should be installed in a VM as an rpm package, as previously described. In order to configure it to run in inline mode (IPS), use the following:

1 Turn on IP forwarding

sysctl -w netipv4ip_forward=1

2 Mangle all traffic from one vPort to the other using a netfilter queue

iptables -I FORWARD -i eth1 -o eth2 -j NFQUEUE
iptables -I FORWARD -i eth2 -o eth1 -j NFQUEUE

3 Have Suricata run in inline mode using the netfilter queue

suricata -c etcsuricatasuricatayaml -q 0

4 Enable ARP proxying

echo 1 > /proc/sys/net/ipv4/conf/eth1/proxy_arp
echo 1 > /proc/sys/net/ipv4/conf/eth2/proxy_arp
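As an optional sanity check (not in the original procedure), verify that forwarded traffic is actually hitting the NFQUEUE rules by watching their packet counters while traffic is flowing between the source and sink:

# the packet/byte counters on the two NFQUEUE rules should increase
iptables -L FORWARD -v -n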

5.4.1 Network Configuration for non-vIPS Guests

1 Turn on IP forwarding:

sysctl -w netipv4ip_forward=1

2 In the source add the route to the sink

route add -net 192168200024 eth1

3 At the sink add the route to the source

route add -net 192168100024 eth1


60 Testing the Setup

This section describes how to bring up the VMs in a compute node, connect them to the virtual network(s), and verify the functionality.

Note: Currently it is not possible to have more than one virtual network in a multi-compute node setup, although it is possible in a single compute node setup.

61 Preparation with OpenStack

611 Deploying Virtual Machines

6111 Default Settings

OpenStack comes with the following default settings

bull Tenant (Project) admin demo

bull Network

mdash Private network (virtual network) 1000024

mdash Public network (external network) 172244024

bull Image cirros-031-x86_64

bull Flavor nano micro tiny small medium large xlarge

To deploy new instances (VMs) with different setups (such as a different VM image flavor or network) users must create their own See below for details of how to create them

To access the OpenStack dashboard use a web browser (Firefox Internet Explorer or others) and the controllers IP address (management network) For example

http1011121

Login information is defined in the localconf file In the examples that follow password is the password for both admin and demo users


6112 Customer Settings

The following examples describe how to create a custom VM image flavor and aggregateavailability zone using OpenStack commands The examples assume the IP address of the controller is 1011121

1 Create a credential file admin-cred for admin user The file contains the following lines

export OS_USERNAME=admin
export OS_TENANT_NAME=admin
export OS_PASSWORD=password
export OS_AUTH_URL=http://10.11.12.1:35357/v2.0

2 Source admin-cred to the shell environment for actions of creating glance image aggregateavailability zone and flavor

source admin-cred

3 Create an OpenStack glance image A VM image file should be ready in a location accessible by OpenStack

glance image-create --name <image-name-to-create> --is-public=true --container-format=bare --disk-format=<format> --file=<image-file-path-name>

The following example shows the image file fedora20-x86_64-basic.qcow2 located on an NFS share mounted at /mnt/nfs/openstack/images on the controller host. The following command creates a glance image named fedora-basic in qcow2 format for public use (such that any tenant can use this glance image):

glance image-create --name fedora-basic --is-public=true --container-format=bare --disk-format=qcow2 --file=/mnt/nfs/openstack/images/fedora20-x86_64-basic.qcow2

4 Create host aggregate and availability zone

First find out the available hypervisors and then use the information for creating aggregateavailability zone

nova hypervisor-list
nova aggregate-create <aggregate-name> <zone-name>
nova aggregate-add-host <aggregate-name> <hypervisor-name>

The following example creates an aggregate named aggr-g06 with one availability zone named zone-g06 and the aggregate contains one hypervisor named sdnlab-g06

nova aggregate-create aggr-g06 zone-g06
nova aggregate-add-host aggr-g06 sdnlab-g06

5 Create flavor Flavor is a virtual hardware configuration for the VMs it defines the number of virtual CPUs size of virtual memory and disk space among others

The following command creates a flavor named onps-flavor with an ID of 1001 1024 Mb virtual memory 4 Gb virtual disk space and 1 virtual CPU

nova flavor-create onps-flavor 1001 1024 4 1


6113 Example mdash VM Deployment

The following example describes how to use a customer VM image flavor and aggregate to launch a VM for a demo Tenant using OpenStack commands Again the example assumes the IP address of the controller is 1011121

1 Create a credential file demo-cred for a demo user The file contains the following lines

export OS_USERNAME=demo
export OS_TENANT_NAME=demo
export OS_PASSWORD=password
export OS_AUTH_URL=http://10.11.12.1:35357/v2.0

2 Source demo-cred to the shell environment for actions of creating tenant network and instance (VM)

source demo-cred

3 Create network for tenant demo Take the following steps

a Get tenant demo

keystone tenant-list | grep -Fw demo

The following creates a network with a name of net-demo for tenant with ID 10618268adb64f17b266fd8fb83c960d

neutron net-create --tenant-id 10618268adb64f17b266fd8fb83c960d net-demo

b Create subnet

neutron subnet-create --tenant-id <demo-tenant-id> --name <subnet_name> <network-name> <net-ip-range>

The following creates a subnet with a name of sub-demo and CIDR address 192.168.2.0/24 for network net-demo:

neutron subnet-create --tenant-id 10618268adb64f17b266fd8fb83c960d --name sub-demo net-demo 192.168.2.0/24

4 Create instance (VM) for tenant demo Take the following steps

a Get the name andor ID of the image flavor and availability zone to be used for creating instance

glance image-list
nova flavor-list
nova aggregate-list
neutron net-list

b Launch an instance (VM) using information obtained from previous step

nova boot --image <image-id> --flavor <flavor-id> --availability-zone <zone-name> --nic net-id=<network-id> <instance-name>

c The new VM should be up and running in a few minutes
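For example, a hedged end-to-end launch using the artifacts created earlier in this section (the fedora-basic image, onps-flavor flavor, and zone-g06 availability zone, plus the net-demo network ID returned by neutron net-list); the network ID placeholder and the instance name vm-demo1 are illustrative only:

# boot one instance of fedora-basic on net-demo in zone-g06
nova boot --image fedora-basic --flavor onps-flavor --availability-zone zone-g06 \
  --nic net-id=<net-demo-network-id> vm-demo1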

5 Log into the OpenStack dashboard using the demo user credential and click Instances under Project in the left pane; the new VM should show in the right pane. Click the instance name to open the Instance Details view, then click Console in the top menu to access the VM as follows:


6114 Local vIPS

Configuration

1 OpenStack brings up the VMs and connects them to the vSwitch

2 IP addresses of the VMs get configured using the DHCP server VM1 belongs to one subnet and VM3 to a different one VM2 has ports on both subnets

3 Flows get programmed to the vSwitch by the OpenDaylight controller (Section 62)

Data Path (Numbers Matching Red Circles)

1 VM1 sends a flow to VM3 through the vSwitch

2 The vSwitch forwards the flow to the first vPort of VM2 (active IPS)

3 The IPS receives the flow inspects it and (if not malicious) sends it out through its second vPort

4 The vSwitch forwards it to VM3

Figure 6-1 Local vIPS


6115 Remote vIPS

Configuration

1 OpenStack brings up the VMs and connects them to the vSwitch

2 The IP addresses of the VMs get configured using the DHCP server

Data Path (Numbers Matching Red Circles)

1 VM1 sends a flow to VM3 through the vSwitch inside compute node 1

2 The vSwitch forwards the flow out of the first 82599 port to the first 82599 port of compute node 2

3 The vSwitch of compute node 2 forwards the flow to the first port of the vHost where the traffic gets consumed by VM1

4 The IPS receives the flow inspects it and (provided it is not malicious) sends it out through its second port of the vHost into the vSwitch of compute node 2

5 The vSwitch forwards the flow out of the second 82599 port of compute node 2 into the second port of the 82599 in compute node 1

6 The vSwitch of compute node 1 forwards the flow into the port of the vHost of VM3 where the flow gets terminated

Figure 6-2 Remote vIPS


612 Non-uniform Memory Access (Numa) Placement and SR-IOV Pass-through for OpenStack

NUMA was implemented as a new feature in the OpenStack Juno release. NUMA placement enables an OpenStack administrator to pin guest systems to particular NUMA nodes for optimization. With an SR-IOV enabled network interface card, each SR-IOV port is associated with a Virtual Function (VF). OpenStack SR-IOV pass-through enables a guest to access a VF directly.

6121 Prepare Compute Node for SR-IOV Pass-through

To enable the previous features follow these steps to configure compute node

1 The server hardware must support IOMMU or Intel VT-d. To check whether IOMMU is supported, run the following command; the output should show IOMMU entries:

dmesg | grep -e IOMMU

Note: IOMMU can be enabled/disabled through a BIOS setting under Advanced and then Processor.

2 Enable kernel IOMMU in grub For Fedora 20 run commands

sed -i 's/rhgb quiet/rhgb quiet intel_iommu=on/g' /etc/default/grub
grub2-mkconfig -o /boot/grub2/grub.cfg
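After the next reboot, the change can be confirmed (an extra verification step, not in the original text):

# intel_iommu=on should appear on the running kernel command line
cat /proc/cmdline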

3 Install necessary packages

yum install -y yajl-devel device-mapper-devel libpciaccess-devel libnl-devel dbus-devel numactl-devel python-devel

4 Install libvirt v1.2.8 or newer. The following example uses v1.2.9:

systemctl stop libvirtd

yum remove libvirt
yum remove libvirtd

wget http://libvirt.org/sources/libvirt-1.2.9.tar.gz
tar zxvf libvirt-1.2.9.tar.gz

cd libvirt-1.2.9
./autogen.sh --system --with-dbus
make
make install

systemctl start libvirtd

Make sure libvirtd is running v1.2.9:

libvirtd --version

5 Install libvirt-python. The example below uses v1.2.9 to match the libvirt version:

yum remove libvirt-python

wget https://pypi.python.org/packages/source/l/libvirt-python/libvirt-python-1.2.9.tar.gz
tar zxvf libvirt-python-1.2.9.tar.gz

cd libvirt-python-1.2.9
python setup.py install


6 Modify /etc/libvirt/qemu.conf: add

/dev/vfio/vfio

to the cgroup_device_acl list. An example follows:

cgroup_device_acl = [ "/dev/null", "/dev/full", "/dev/zero", "/dev/random", "/dev/urandom", "/dev/ptmx", "/dev/kvm", "/dev/kqemu", "/dev/rtc", "/dev/hpet", "/dev/net/tun", "/dev/vfio/vfio" ]

7 Enable the SR-IOV virtual function for an 82599 interface The following example enables 2 VFs for interface p1p1

echo 2 > /sys/class/net/p1p1/device/sriov_numvfs

To check that virtual functions are enabled

lspci -nn | grep 82599

The screen output should display the physical function and two virtual functions
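The sriov_numvfs value is not persistent across reboots. One common way to reapply it at boot (an assumption, not something the original guide specifies) is a small rc.local entry:

# re-create the two VFs on p1p1 at every boot
cat >> /etc/rc.d/rc.local << 'EOF'
echo 2 > /sys/class/net/p1p1/device/sriov_numvfs
EOF
chmod +x /etc/rc.d/rc.local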

6122 Devstack Configurations

In the following text, the example uses a controller with IP address 10.11.12.1 and a compute node with 10.11.12.4. The PCI device vendor ID (8086) and the product IDs of the 82599 can be obtained from the output of the following command (10fb for the physical function and 10ed for the VF):

lspci -nn | grep 82599

On Controller node

1 Edit the controller local.conf. Note that the same local.conf file of Section 5.2.1.3 is used here, adding the following:

Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch,sriovnicswitch

[[post-config|$NOVA_CONF]]
[DEFAULT]
scheduler_default_filters=RamFilter,ComputeFilter,AvailabilityZoneFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,PciPassthroughFilter,NUMATopologyFilter
pci_alias={"name":"niantic","product_id":"10ed","vendor_id":"8086"}

[[post-config|$Q_PLUGIN_CONF_FILE]]
[ml2_sriov]
supported_pci_vendor_devs = 8086:10fb 8086:10ed

2 Run stacksh

On Compute node

1 Edit /opt/stack/nova/requirements.txt, add "libvirt-python>=1.2.8":

echo "libvirt-python>=1.2.8" >> /opt/stack/nova/requirements.txt

2 Edit the compute local.conf for accelerated OVS. Note that the same local.conf file of Section 5.3.1.1 is used here.


3 Add the following:

[[post-config|$NOVA_CONF]]
[DEFAULT]
pci_passthrough_whitelist={"address":"0000:08:00.0","vendor_id":"8086","physical_network":"physnet1"}

pci_passthrough_whitelist={"address":"0000:08:10.0","vendor_id":"8086","physical_network":"physnet1"}

pci_passthrough_whitelist={"address":"0000:08:10.2","vendor_id":"8086","physical_network":"physnet1"}

4 Remove (or comment out) the following (note that currently SR-IOV pass-through is only supported with a standard OVS):

OVS_NUM_HUGEPAGES=8192
OVS_DATAPATH_TYPE=netdev
OVS_GIT_TAG=b35839f3855e3b812709c6ad1c9278f498aa9935

Run stacksh for both controller and compute nodes to complete the Devstack installation

6123 Create VM with Numa Placement and SR-IOV

1 After stacking is successful on both the controller and compute nodes, verify that the PCI pass-through device(s) are in the OpenStack database:

mysql -uroot -ppassword -h 10.11.12.1 nova -e 'select * from pci_devices'

2 The output should show entry(ies) of PCI device(s) similar to the following

| 2014-11-18 194114 | NULL | NULL | 0 | 1 | 3 | 000008100 | 10ed | 8086 | type-VF | pci_0000_08_10_0 | label_8086_10ed | available | phys_function 000008000 | NULL | NULL | 0 |

3 Next to create a flavor for example

nova flavor-create numa-flavor 1001 1024 4 1

where

flavor name = numa-flavor
id = 1001
virtual memory = 1024 Mb
virtual disk size = 4 Gb
number of virtual CPU = 1

4 Modify flavor for numa placement with PCI pass-through

nova flavor-key 1001 set pci_passthrough:alias=niantic:1 hw:numa_nodes=1 hw:numa_cpus.0=0 hw:numa_mem.0=1024

5 To show detailed information of the flavor

nova flavor-show 1001

6 Create a VM numa-vm1 with the flavor numa-flavor under the default project demo. Note that the following example assumes an image fedora-basic and an availability zone zone-04 are already in place (see Section 6.1.1.2) and that private is the default network for the demo project.

nova boot --image <image-id> --flavor <flavor-id> --availability-zone <zone-name> --nic net-id=<network-id> numa-vm1

where numa-vm1 is the name of instance of the VM to be booted
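
As an illustration, with the names used in this guide (the fedora-basic image, the numa-flavor flavor, and the zone-04 availability zone), the boot command might look like the following sketch; the network ID is obtained from neutron net-list:

nova boot --image fedora-basic --flavor numa-flavor --availability-zone zone-04 --nic net-id=<private network id> numa-vm1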


Access the VM from the OpenStack Horizon dashboard; the new VM shows two virtual network interfaces. The interface with an SR-IOV VF should show a name of ensX, where X is a number (for example, ens5). If a DHCP server is available for the physical interface (p1p1 in this example), the VF gets an IP address automatically; otherwise, users can assign an IP address to the interface the same way as for a standard network interface.

To verify network connectivity through a VF, users can set up two compute hosts and create a VM on each node. After obtaining IP addresses, the VMs should be able to communicate with each other as over a normal network.
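
A minimal connectivity check from inside one of the VMs might look like the following sketch (the interface name and the peer address depend on the deployment):

ip addr show ens5
ping -c 4 <IP address of the VM on the other compute node>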

6.2 Using OpenDaylight

This section describes how to download, install, and set up an OpenDaylight controller.

6.2.1 Preparing the OpenDaylight Controller

1. Download the pre-built OpenDaylight Helium-SR1 distribution:

wget http://nexus.opendaylight.org/content/repositories/opendaylight.release/org/opendaylight/integration/distribution-karaf/0.2.1-Helium-SR1/distribution-karaf-0.2.1-Helium-SR1.tar.gz

2. Extract the archive and cd into it:

tar xf distribution-karaf-0.2.1-Helium-SR1.tar.gz
cd distribution-karaf-0.2.1-Helium-SR1

3. Use the ./bin/karaf executable to start the Karaf shell.


4. Install the required features (an illustrative sketch follows the note below).

Karaf might take a long time to start, and feature installation might fail if the host does not have network access; you'll need to set up the appropriate proxy settings.
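
The following sketch shows what steps 3 and 4 might look like from the Karaf shell. The feature names are an assumption for illustration (the OVSDB/OpenStack integration and the DLUX GUI); the exact set of required features depends on the deployment:

./bin/karaf
opendaylight-user@root> feature:install odl-ovsdb-openstack odl-dlux-core

If the host is behind a proxy, Karaf's Maven settings (for example, etc/org.ops4j.pax.url.mvn.cfg inside the distribution) may also need to be adjusted before feature installation succeeds; treat this as an environment-specific assumption.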

6.3 Border Network Gateway

This section describes how to install and run a Border Network Gateway (BNG) on a compute node that is prepared as described in Section 5.1 and Section 5.3. The example interface names from those sections have been kept in this section too. Also, for simplicity, the BNG uses the handle_none configuration mode, which makes it work as an L2 forwarding engine. The BNG is more complex than this, and users who are interested in exploring more of its capabilities should read https://01.org/intel-data-plane-performance-demonstrators/quick-overview.

The setup to test the functionality of the vBNG follows


6.3.1 Installation and Configuration Inside the VM

1. Execute the following command:

yum -y update

2 Disable SELinux

setenforce 0
vi /etc/selinux/config

and change it so that SELINUX=disabled.

3 Disable the firewall

systemctl disable firewalld.service
reboot

4 Edit grub default configuration

vi /etc/default/grub

Add hugepages to it

... noirqbalance intel_idle.max_cstate=0 processor.max_cstate=0 ipv6.disable=1 default_hugepagesz=1G hugepagesz=1G hugepages=2 isolcpus=1,2,3,4

5 Rebuild grub config and reboot the system

grub2-mkconfig -o /boot/grub2/grub.cfg
reboot

6 Verify that hugepages are available in the VM

cat /proc/meminfo
HugePages_Total:       2
HugePages_Free:        2
Hugepagesize:    1048576 kB

7. Add the following to the end of the ~/.bashrc file:

---------------------------------------------
export RTE_SDK=/root/dpdk
export RTE_TARGET=x86_64-native-linuxapp-gcc
export OVS_DIR=/root/ovs

export RTE_UNBIND=$RTE_SDK/tools/dpdk_nic_bind.py
export DPDK_DIR=$RTE_SDK
export DPDK_BUILD=$DPDK_DIR/$RTE_TARGET
---------------------------------------------

8 Re-login or source that file

source ~/.bashrc

9 Install DPDK

git clone http://dpdk.org/git/dpdk
cd dpdk
git checkout v1.7.1
make install T=$RTE_TARGET
modprobe uio
insmod $RTE_SDK/$RTE_TARGET/kmod/igb_uio.ko


10 Check the PCI addresses of the 82599 cards

lspci | grep Network
00:04.0 Ethernet controller: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection (rev 01)
00:05.0 Ethernet controller: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection (rev 01)
00:06.0 Ethernet controller: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection (rev 01)
00:07.0 Ethernet controller: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection (rev 01)

11. Make sure that the correct PCI addresses are listed in the script bind_to_igb_uio.sh (a sketch of an equivalent binding sequence follows this procedure).

12 Download BNG packages

wget https://01.org/sites/default/files/downloads/intel-data-plane-performance-demonstrators/dppd-bng-v013.zip

13 Extract DPPD BNG sources

unzip dppd-bng-v013.zip

14 Build BNG DPPD application

yum -y install ncurses-devel
cd dppd-BNG-v013
make

15. Refer to Section 6.3.3, "Extra Preparations on the Compute Node," before running the BNG application in the VM inside the compute node.

16 Make sure that the application starts

./build/dppd -f config/handle_none.cfg

The handle_none configuration should pass all traffic straight through between ports, which is essentially similar to the L2 forwarding test. The config directory contains additional, more complex BNG configurations and Pktgen scripts. Additional BNG-specific workloads can be found in the dppd-BNG-v013/pktgen-scripts directory.
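
The bind_to_igb_uio.sh script referenced in step 11 is expected to bind the VM's 82599 ports to the igb_uio driver. A minimal hand-written equivalent is sketched below; the 00:04.0 to 00:07.0 addresses are the ones reported by lspci in step 10 and must be adjusted to the actual VM:

modprobe uio
insmod $RTE_SDK/$RTE_TARGET/kmod/igb_uio.ko
$RTE_SDK/tools/dpdk_nic_bind.py -b igb_uio 00:04.0 00:05.0 00:06.0 00:07.0
$RTE_SDK/tools/dpdk_nic_bind.py --status   # verify the ports now appear under the DPDK-compatible driver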

Following is a sample graphic of the BNG running in a VM with 2 ports


Exit the application by pressing ESC or CTRL-C

Refer to Section 632 regarding installation and running the software traffic generator

For the sanity-check test, users can use the pktgen wrapper script onps_pktgen-64bytes-UDP-2ports.sh for running Pktgen (on its dedicated server) in order to test the handle-none throughput for two physical and two virtual ports. You'll need to update the PKTGEN_DIR variable at the top of the file to point to the right directory, which is the following (referring to Section 6.3.2):

PKTGEN_DIR=/home/stack/git/Pktgen-DPDK
pktgen-64bytes.sh

6.3.2 Installation and Configuration of the Back-to-Back Host (Packet Generator)

The back-to-back host can be any Intel® Xeon® processor-based system, or it can be any compute node that has been prepared using the instructions in Section 5.1 and Section 5.3. For simplicity, Intel assumes the latter was the case. Also assume that the git directory for the stack user is /home/stack/git.

1. In the git directory, get the source from GitHub:

git clone https://github.com/Pktgen/Pktgen-DPDK.git
cd Pktgen-DPDK

2 An extra package must be installed for Pktgen to compile correctly

yum -y install libpcap-devel

Pktgen comes with its own distribution of DPDK sources. This bundled version of DPDK must be used. Note that it contains some Wind River-specific helper libraries that are not in the default DPDK distribution and that Pktgen depends on.

3. The $RTE_TARGET variable must be set to a specific value; otherwise, these libraries will not build.

cd
vi .bashrc

Add the following three lines to the end

export RTE_SDK=$HOME/Pktgen-DPDK/dpdk
export RTE_TARGET=x86_64-pktgen-linuxapp-gcc
export PKTGEN_DIR=$HOME/Pktgen-DPDK

4 Re-login or execute the following command

source ~/.bashrc

5 Build the basic DPDK libraries and extra helpers

cd $RTE_SDK
make install T=$RTE_TARGET

6 Build Pktgen

cd examples/pktgen
make

7. Adapt the dpdk_nic_bind.py script according to the actual NICs in use so that both interfaces are bound to igb_uio and DPDK can use them. See the details with the following command:

./tools/dpdk_nic_bind.py --status

8. Use onps_pktgen-64bytes-UDP-2ports.sh from onps_server_1_2.tar.gz.


9. Now run the script as root, after the compute node has been set up as in Section 6.3.3, the BNG VM has been prepared as in Section 6.3.1, and the BNG has been started inside the VM.

6.3.3 Extra Preparations on the Compute Node

1. Do the following as the stack user:

cd /home/stack/devstack
vi local.conf

2 Comment out the following

PHYSICAL_NETWORK=physnet1
OVS_PHYSICAL_BRIDGE=br-p1p1

And at the same time add the following line right below the previous commented ones

OVS_BRIDGE_MAPPINGS=default:br-p1p1,physnet1:br-p1p2

3. Run the following again as the stack user:

./unstack.sh

./stack.sh

This causes both physical interfaces to come up and get bound to DPDK. Also, a bridge is created on top of each of these interfaces:

ovs-vsctl show
b52bd3ed-0f6c-45b9-ace1-846d901bed64
    Bridge br-p1p1
        Port br-p1p1
            Interface br-p1p1
                type: internal
        Port p1p1
            Interface p1p1
                type: dpdkphy
                options: {port=0}
        Port phy-br-p1p1
            Interface phy-br-p1p1
                type: patch
                options: {peer=int-br-p1p1}
    Bridge br-int
        fail_mode: secure
        Port int-br-p1p2
            Interface int-br-p1p2
                type: patch
                options: {peer=phy-br-p1p2}
        Port int-br-p1p1
            Interface int-br-p1p1
                type: patch
                options: {peer=phy-br-p1p1}
        Port br-int
            Interface br-int
                type: internal
    Bridge br-p1p2
        Port phy-br-p1p2
            Interface phy-br-p1p2
                type: patch
                options: {peer=int-br-p1p2}
        Port p1p2
            Interface p1p2
                type: dpdkphy
                options: {port=1}
        Port br-p1p2
            Interface br-p1p2
                type: internal

4 Move the p1p2 physical port under the same bridge as p1p1

ovs-vsctl del-port p1p2
ovs-vsctl add-port br-p1p1 p1p2 -- set Interface p1p2 type=dpdkphy option:port=1

5. Stop the OpenStack agent:

./rejoin-stack.sh
ctrl-a 1
ctrl-c
ctrl-a d

6 Add the dpdkvhost interfaces for the VM

ovs-vsctl --no-wait add-port br-p1p1 port3 -- set Interface port3 type=dpdkvhost ofport_request=3
ovs-vsctl --no-wait add-port br-p1p1 port4 -- set Interface port4 type=dpdkvhost ofport_request=4

7. Find out the OpenFlow port numbers of the attached interfaces:

ovs-ofctl show br-p1p1

The output should be similar to the following. Note the number to the left of each interface name; it is the OpenFlow port number used when programming flows below.

OFPT_FEATURES_REPLY (xid=0x2): dpid:0000286031010000
n_tables:254, n_buffers:256
capabilities: FLOW_STATS TABLE_STATS QUEUE_STATS ARP_MATCH_IP
actions: OUTPUT SET_VLAN_VID SET_VLAN_PCP STRIP_VLAN SET_DL_SRC SET_DL_DST SET_NW_SRC SET_NW_DST SET_TP_SRC SET_TP_DST
 1(phy-br-p1p1): addr:9e:ae:92:25:3c:c1  config: 0  state: 0  speed: 0 Mbps now, 0 Mbps max
 2(p1p2): addr:9e:ae:92:25:3c:c1  config: 0  state: 0  speed: 0 Mbps now, 0 Mbps max
 3(port3): addr:9e:ae:92:25:3c:c1  config: 0  state: 0  speed: 0 Mbps now, 0 Mbps max
 4(port4): addr:49:04:ff:7f:00:00  config: 0  state: 0  speed: 0 Mbps now, 0 Mbps max
 16(p1p1): addr:49:04:ff:7f:00:00  config: 0  state: 0  speed: 0 Mbps now, 0 Mbps max
 LOCAL(br-p1p1): addr:9e:ae:92:25:3c:c1  config: 0  state: 0  speed: 0 Mbps now, 0 Mbps max
OFPT_GET_CONFIG_REPLY (xid=0x4): frags=normal miss_send_len=0

8 Clean up the flow table of the bridge

ovs-ofctl del-flows br-p1p1

9. Program the flows so that each physical interface forwards packets to a dpdkvhost interface and the other way round (a verification sketch follows step 10):

ovs-ofctl add-flow br-p1p1 in_port=16,dl_type=0x0800,idle_timeout=0,action=output:3
ovs-ofctl add-flow br-p1p1 in_port=3,dl_type=0x0800,idle_timeout=0,action=output:16
ovs-ofctl add-flow br-p1p1 in_port=4,dl_type=0x0800,idle_timeout=0,action=output:2
ovs-ofctl add-flow br-p1p1 in_port=2,dl_type=0x0800,idle_timeout=0,action=output:4


10 Users can now spawn their vBNG

qemu-kvm -cpu host -enable-kvm -m 4096 -smp 4,cores=4,threads=1,sockets=1 -name VM1 \
  -hda <path to the VM image file> -mem-path /dev/hugepages -mem-prealloc -vnc :2 -daemonize \
  -net nic,model=virtio,macaddr=00:1e:77:68:09:fd -net tap,ifname=tap1,script=no,downscript=no \
  -netdev type=tap,id=net1,script=no,downscript=no,ifname=port3,vhost=on \
  -device virtio-net-pci,netdev=net1,mac=00:00:01:00:00:01,csum=off,gso=off,guest_tso4=off,guest_tso6=off,guest_ecn=off \
  -netdev type=tap,id=net2,script=no,downscript=no,ifname=port4,vhost=on \
  -device virtio-net-pci,netdev=net2,mac=00:00:01:00:00:02,csum=off,gso=off,guest_tso4=off,guest_tso6=off,guest_ecn=off
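
Two optional checks are useful at this point; both are sketches rather than required steps. The first confirms the four flows programmed in step 9; the second connects to the VM console (the -vnc :2 option above corresponds to display 2, TCP port 5902):

ovs-ofctl dump-flows br-p1p1
vncviewer <compute node IP>:2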


Appendix A Additional OpenDaylight Information

This section describes how OpenDaylight can be used in a multi-node setup. Two hosts are used: one running OpenDaylight, the OpenStack controller + compute services, and OVS; the second host is a compute node. This section describes how to create a VXLAN tunnel, create VMs, and ping from one VM to another.

Note: Due to a known defect in ODL (https://bugs.opendaylight.org/show_bug.cgi?id=2469), the multi-node setup could not be verified.

Following is a sample local.conf for the OpenDaylight host:

[[local|localrc]]
FORCE=yes

HOST_NAME=<name of this machine>
HOST_IP=<ip of this machine>
HOST_IP_IFACE=<mgmt ip isolated from internet>

PUBLIC_INTERFACE=<isolated IP could be same as HOST_IP_IFACE>
VLAN_INTERFACE=
FLAT_INTERFACE=

ADMIN_PASSWORD=password
MYSQL_PASSWORD=password
DATABASE_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=no-token-password
HORIZON_PASSWORD=password
RABBIT_PASSWORD=password

disable_service n-net
disable_service n-cpu

enable_service q-svc
enable_service q-agt
enable_service q-dhcp
enable_service q-l3
enable_service q-meta
enable_service neutron
enable_service horizon

LOGFILE=$DEST/stack.sh.log
SCREEN_LOGDIR=$DEST/screen
SYSLOG=True
LOGDAYS=1

# ODL start
Q_HOST=$HOST_IP
enable_service odl-server
enable_service odl-compute
ODL_MGR_IP=10.11.10.7
ENABLED_SERVICES+=,n-api,n-crt,n-obj,n-cpu,n-cond,n-sch,n-novnc,n-cauth,n-cauth,nova
ENABLED_SERVICES+=,cinder,c-api,c-vol,c-sch,c-bak

Q_PLUGIN=ml2
Q_AGENT=openvswitch
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch,opendaylight
Q_ML2_PLUGIN_TYPE_DRIVERS=vlan,flat,local
Q_ML2_TENANT_NETWORK_TYPE=vxlan

ENABLE_TENANT_TUNNELS=True
ENABLE_TENANT_VLANS=True

PHYSICAL_NETWORK=physnet1
ML2_VLAN_RANGES=physnet1:1000:1010
OVS_PHYSICAL_BRIDGE=br-

MULTI_HOST=True

[[post-config|$NOVA_CONF]]

# disable nova security groups
[DEFAULT]
firewall_driver=nova.virt.firewall.NoopFirewallDriver
novncproxy_host=0.0.0.0
novncproxy_port=6080

Here is a sample local.conf for the compute node:

[[local|localrc]]

FORCE=yes
MULTI_HOST=True

HOST_NAME=<name of this machine>
HOST_IP=<ip of this machine>
HOST_IP_IFACE=<isolated interface>
SERVICE_HOST_NAME=<name of the controller machine>
SERVICE_HOST=<ip of controller machine>
Q_HOST=$SERVICE_HOST

MYSQL_HOST=$SERVICE_HOST
RABBIT_HOST=$SERVICE_HOST

GLANCE_HOST=$SERVICE_HOST
GLANCE_HOSTPORT=$SERVICE_HOST:9292
KEYSTONE_AUTH_HOST=$SERVICE_HOST
KEYSTONE_SERVICE_HOST=<ip of controller machine>

ADMIN_PASSWORD=password
MYSQL_PASSWORD=password
DATABASE_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=no-token-password
HORIZON_PASSWORD=password
RABBIT_PASSWORD=password

disable_all_services

enable_service rabbit
enable_service n-cpu
enable_service q-agt
enable_service odl-compute

DEST=/opt/stack
LOGFILE=$DEST/stack.sh.log
SCREEN_LOGDIR=$DEST/screen
SYSLOG=True
LOGDAYS=1

ODL_MGR_IP=<ip of controller machine>

Q_PLUGIN=ml2
Q_AGENT=openvswitch
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch,opendaylight
Q_ML2_PLUGIN_TYPE_DRIVERS=vxlan
OVS_NUM_HUGEPAGES=8192
OVS_DATAPATH_TYPE=netdev

OVDK_OVS_GIT_TAG=

ENABLE_TENANT_TUNNELS=True
ENABLE_TENANT_VLANS=True

Q_ML2_TENANT_NETWORK_TYPE=vxlan
ML2_VLAN_RANGES=physnet1:1000:1010
PHYSICAL_NETWORK=physnet1
OVS_PHYSICAL_BRIDGE=br-p1p1

[[post-config|$NOVA_CONF]]
[DEFAULT]
firewall_driver=nova.virt.firewall.NoopFirewallDriver
vnc_enabled=True
vncserver_listen=0.0.0.0
vncserver_proxyclient_address=$HOST_IP
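
After stack.sh completes on a node where the odl-compute service is enabled, the local Open vSwitch instance should be pointed at the OpenDaylight controller as its manager. A quick check is sketched below (6640 is the OVSDB management port that OpenDaylight listens on by default; treat the exact output as environment-dependent):

ovs-vsctl show
# the output should include a line of the form Manager "tcp:<ip of controller machine>:6640"
# with is_connected: true once the connection is established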

A.1 Create VMs using DevStack Horizon GUI

After starting the OpenDaylight controller as described in Section 6.2.1, run stack.sh on the controller and compute nodes.

Log in to http://<control node ip address>:8080 to start the Horizon GUI.

Verify that the node shows up in the following GUI

Create a new VXLAN network:

1. Click on the Networks tab.

2. Click on the Create Network button.

3. Enter the Network name, then click Next.


4. Enter the subnet information, then click Next.


5. Add additional information, then click Next.

6. Click the Create button.

7. Create a VM instance by clicking the Launch Instances button.


8 Click on the Details tab to enter VM details


9. Click on the Networking tab, then enter the network information.

VMs will now be created.
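
The same network and VM can also be created from the command line instead of the Horizon GUI. The following is a sketch only; the network name, subnet range, and flavor are illustrative values, not settings defined elsewhere in this guide:

neutron net-create vxlan-demo
neutron subnet-create --name vxlan-demo-subnet vxlan-demo 10.200.1.0/24
nova boot --image fedora-basic --flavor <flavor-name> --nic net-id=<vxlan-demo network id> vm-vxlan-1

Because Q_ML2_TENANT_NETWORK_TYPE=vxlan is set in the local.conf above, the new tenant network should be created as a VXLAN network by default.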

Instead of disabling the OVSDB neutron service by removing the OVSDB neutron bundle file, it is possible to disable the bundle from the OSGi console. However, there does not appear to be a way to make this persistent, so it must be done each time the controller restarts.


Once the controller is up and running, connect to the OSGi console. The ss command displays all of the bundles that are installed and their status; adding a string filters the list of bundles. List the OVSDB bundles:

osgi> ss ovs
Framework is launched.

id      State       Bundle
106     ACTIVE      org.opendaylight.ovsdb.northbound_0.5.0
112     ACTIVE      org.opendaylight.ovsdb_0.5.0
262     ACTIVE      org.opendaylight.ovsdb.neutron_0.5.0

Note that there are three OVSDB bundles running (the OVSDB neutron bundle has not been removed in this case)

Disable the OVSDB neutron bundle and then list the OVSDB bundles again

osgi> stop 262
osgi> ss ovs
Framework is launched.

id      State       Bundle
106     ACTIVE      org.opendaylight.ovsdb.northbound_0.5.0
112     ACTIVE      org.opendaylight.ovsdb_0.5.0
262     RESOLVED    org.opendaylight.ovsdb.neutron_0.5.0

Now the OVSDB neutron bundle is in the RESOLVED state, which means that it is not active.
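
If the bundle is needed again later, it can be re-enabled from the same console using its bundle ID (a sketch based on the listing above):

osgi> start 262
osgi> ss ovs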


Appendix B BNG as an Appliance

Please download the latest BNG application from https://01.org/intel-data-plane-performance-demonstrators/downloads. More details about how the BNG works can be found at https://01.org/intel-data-plane-performance-demonstrators/quick-overview.


Appendix C Glossary

Acronym Description

ATR Application Targeted Routing

COTS Commercial Off‐The-Shelf

DPI Deep Packet Inspection

FCS Frame Check Sequence

GRE Generic Routing Encapsulation

GRO Generic Receive Offload

IOMMU InputOutput Memory Management Unit

Kpps Kilo packets per second

KVM Kernel-based Virtual Machine

LRO Large Receive Offload

MSI Message Signaling Interrupt

MPLS Multi-protocol Label Switching

Mpps Million packets per second

NIC Network Interface Card

pps Packets per second

QAT Quick Assist Technology

QinQ VLAN stacking (802.1ad)

RA Reference Architecture

RSC Receive Side Coalescing

RSS Receive Side Scaling

SP Service Provider

SR-IOV Single Root I/O Virtualization

TCO Total Cost of Ownership

TSO TCP Segmentation Offload


Appendix D References

Document Name / Source

Internet Protocol version 4: http://www.ietf.org/rfc/rfc791.txt

Internet Protocol version 6: http://www.faqs.org/rfc/rfc2460.txt

Intel® 82599 10 Gigabit Ethernet Controller Datasheet: http://www.intel.com/content/www/us/en/ethernet-controllers/82599-10-gbe-controller-datasheet.html

Intel DDIO: https://www-ssl.intel.com/content/www/us/en/io/direct-data-i-o.html

Bandwidth Sharing Fairness: http://www.intel.com/content/www/us/en/network-adapters/10-gigabit-network-adapters/10-gbe-ethernet-flexible-port-partitioning-brief.html

Design Considerations for efficient network applications with Intel® multi-core processor-based systems on Linux: http://download.intel.com/design/intarch/papers/324176.pdf

OpenFlow with Intel 82599: http://ftp.sunet.se/pub/Linux/distributions/bifrost/seminars/workshop-2011-03-31/Openflow_1103031.pdf

Wu, W., DeMar, P., & Crawford, M. (2012). A Transport-Friendly NIC for Multicore/Multiprocessor Systems. IEEE Transactions on Parallel and Distributed Systems, vol. 23, no. 4, April 2012: http://lss.fnal.gov/archive/2010/pub/fermilab-pub-10-327-cd.pdf

Why does Flow Director Cause Packet Reordering?: http://arxiv.org/ftp/arxiv/papers/1106/1106.0443.pdf

IA packet processing: http://www.intel.com/p/en_US/embedded/hwsw/technology/packet-processing

High Performance Packet Processing on Cloud Platforms using Linux with Intel® Architecture: http://networkbuilders.intel.com/docs/network_builders_RA_packet_processing.pdf

Packet Processing Performance of Virtualized Platforms with Linux and Intel® Architecture: http://networkbuilders.intel.com/docs/network_builders_RA_NFV.pdf

DPDK: http://www.intel.com/go/dpdk

Intel® DPDK Accelerated vSwitch: https://01.org/packet-processing


LEGAL

By using this document in addition to any agreements you have with Intel you accept the terms set forth below

You may not use or facilitate the use of this document in connection with any infringement or other legal analysis concerning Intel products described herein. You agree to grant Intel a non-exclusive, royalty-free license to any patent claim thereafter drafted which includes subject matter disclosed herein.

INFORMATION IN THIS DOCUMENT IS PROVIDED IN CONNECTION WITH INTEL PRODUCTS. NO LICENSE, EXPRESS OR IMPLIED, BY ESTOPPEL OR OTHERWISE, TO ANY INTELLECTUAL PROPERTY RIGHTS IS GRANTED BY THIS DOCUMENT. EXCEPT AS PROVIDED IN INTEL'S TERMS AND CONDITIONS OF SALE FOR SUCH PRODUCTS, INTEL ASSUMES NO LIABILITY WHATSOEVER, AND INTEL DISCLAIMS ANY EXPRESS OR IMPLIED WARRANTY RELATING TO SALE AND/OR USE OF INTEL PRODUCTS, INCLUDING LIABILITY OR WARRANTIES RELATING TO FITNESS FOR A PARTICULAR PURPOSE, MERCHANTABILITY, OR INFRINGEMENT OF ANY PATENT, COPYRIGHT, OR OTHER INTELLECTUAL PROPERTY RIGHT.

Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors

Performance tests such as SYSmark and MobileMark are measured using specific computer systems components software operations and functions Any change to any of those factors may cause the results to vary You should consult other information and performance tests to assist you in fully evaluating your contemplated purchases including the performance of that product when combined with other products

The products described in this document may contain design defects or errors known as errata which may cause the product to deviate from published specifications Current characterized errata are available on request Contact your local Intel sales office or your distributor to obtain the latest specifications and before placing your product order

Intel technologies may require enabled hardware, specific software, or services activation. Check with your system manufacturer or retailer. Tests document performance of components on a particular test, in specific systems. Differences in hardware, software, or configuration will affect actual performance. Consult other sources of information to evaluate performance as you consider your purchase. For more complete information about performance and benchmark results, visit http://www.intel.com/performance.

All products computer systems dates and figures specified are preliminary based on current expectations and are subject to change without notice Results have been estimated or simulated using internal Intel analysis or architecture simulation or modeling and provided to you for informational purposes Any differences in your system hardware software or configuration may affect your actual performance

No computer system can be absolutely secure Intel does not assume any liability for lost or stolen data or systems or any damages resulting from such losses

Intel does not control or audit third-party web sites referenced in this document You should visit the referenced web site and confirm whether referenced data are accurate

Intel Corporation may have patents or pending patent applications trademarks copyrights or other intellectual property rights that relate to the presented subject matter The furnishing of documents and other materials and information does not provide any license express or implied by estoppel or otherwise to any such patents trademarks copyrights or other intellectual property rights

© 2014 Intel Corporation. All rights reserved. Intel, the Intel logo, Core, Xeon, and others are trademarks of Intel Corporation in the U.S. and/or other countries. Other names and brands may be claimed as the property of others.

11

Intelreg ONP Server Reference ArchitectureSolutions Guide

30 Hardware Components

Table 3-1 Hardware Ingredients (Grizzly Pass)

Item Description Notes

Platform Intelreg Server Board 2U 8x35 SATA 2x750W 2xHS Rails Intel R2308GZ4GC

Grizzly Pass Xeon DP Server (2 CPU sockets) 240GB SSD 25in SATA 6Gbs Intel Wolfsville SSDSC2BB240G401 DC S3500 Series

Processors Intelreg Xeonreg Processor Series E5-2680 v2 LGA2011 28GHz 25MB 115W 10 cores

Ivy Bridge Socket-R (EP) 10 Core 28GHz 115W 25M per core LLC 80 GTs QPI DDR3-1867 HT turboLong product availability

Cores 10 physical coresCPU 20 Hyper-threaded cores per CPU for 40 total cores

Memory 8 GB 1600 Reg ECC 15 V DDR3 Kingston KVR16R11S48I Romley

64 GB RAM (8x 8 GB)

NICs (82599) 2x Intelreg 82599 10 GbE Controller (Niantic) NICs are on socket zero (3 PCIe slots available on socket 0)

BIOS SE5C60086B02010002082220131453Release Date 08222013BIOS Revision 46

Intelreg Virtualization Technology for Directed IO (Intelreg VT-d)Hyper-threading enabled

Table 3-2 Hardware Ingredients (Wildcat Pass)

Item Description Notes

Platform Intelreg Server Board S2600WTT 1100W power supply Wildcat Pass Xeon DP Server (2 CPU sockets) 120 GB SSD 25in SATA 6GBs Intel Wolfsville SSDSC2BB120G4

Processors Intelreg Xeonreg Processor Series E5-2697 v3 26GHz 25MB 145W 14 cores

Haswell 14 Core 26GHz 145W 35M total cache per processor 96 GTs QPI DDR4-160018662133

Cores 14 physical coresCPU 28 Hyper-threaded cores per CPU for 56 total cores

Memory 8 GB DDR4 RDIMM Crucial CT8G4RFS423 64 GB RAM (8x 8 GB)

NICs (82599) 2x Intelreg 82599 10 GbE Controller (Niantic) NICs are on socket zero

BIOS GRNDSDP186B0038R011409040644 Release Date 09042014

Intelreg Virtualization Technology for Directed IO (Intelreg VT-d) enabled only for SR-IOV PCI pass-through testsHyper-threading enabled but disabled for benchmark testing

Intelreg ONP Server Reference ArchitectureSolutions Guide

12

NOTE This page intentionally left blank

13

Intelreg ONP Server Reference ArchitectureSolutions Guide

40 Software Versions

Table 4-1 Software Versions

Software Component Function VersionConfiguration

Fedora 20 x86_64 Host OS 3156-200fc20x86_64

Qemu‐kvm Virtualization technology Modified QEMU 162 (bundled with Intelreg DPDK Accelerated vSwitch)

Data Plane Development Kit (DPDK)

Network Stack bypass and libraries for packet processing Includes user space poll mode drivers

171

Intelreg DPDK Accelerated vSwitch

vSwitch v120commit id 6210bb0a6139b20283de115f87aa7a381b04670f

Open vSwitch vSwitch Open vSwitch V 23Commit id b35839f3855e3b812709c6ad1c9278f498aa9935

OpenStack SDN Orchestrator Juno Release + Intel patches (openstack_ovdkl02-907zip)

DevStack Tool for Open Stack deployment

httpsgithubcomopenstack-devdevstackgit Commit id d6f700db33aeab68916156a98971aef8cfa53a2e

OpenDaylight SDN Controller HeliumSR1

Suricata IPS application Suricata v204 (current Fedora 20 package)

BNG DPPD Broadband Network Gateway DPDK Performance Demonstrator Application

DPPD v013https01orgintel-data-plane-performance-demonstratorsdownloads

PktGen Software Network Package Generator v277

Intelreg ONP Server Reference ArchitectureSolutions Guide

14

41 Obtaining Software IngredientsTable 4-2 Software Ingredients

Software Component

Software Sub-components Patches Location Comments

Fedora 20 httpdownloadfedoraprojectorgpubfedoralinuxreleases20Fedorax86_64isoFedora-20-x86_64-DVDiso

Standard Fedora 20 iso image

Data Plane Development Kit (DPDK)

DPDK poll mode driver sample apps (bundled)

httpdpdkorggitdpdkCommit id 99213f3827bad956d74e2259d06844012ba287a4

All sub-components in one zip file

Intelreg DPDK Accelerated vSwitch (OVDK)

dpdk-ovs qemu ovs-db vswitchd ovs_client (bundled)

httpsgithubcom01orgdpdk-ovsgitCommit id 6210bb0a6139b20283de115f87aa7a381b04670f

v120

Open vSwitch httpsgithubcomopenvswitchovsgitCommit id b35839f3855e3b812709c6ad1c9278f498aa9935

OpenStack Juno release To be deployed using DevStack(see following row)

Three patches downloaded as one tarball Then follow the instructions to deploy the Nodes

DevStack Patches for DevStack and Nova

httpsgithubcomopenstack-devdevstackgitCommit id d6f700db33aeab68916156a98971aef8cfa53a2eThen apply to that commit the patches inhttpsdownload01orgpacket-processingONPS12openstack_ovdkl02-907zip

Two patches downloaded as one tarball Then follow the instructions to deploy

OpenDaylight httpnexusopendaylightorgcontentrepositoriesopendaylightreleaseorgopendaylightintegrationdistribution-karaf021-Helium-SR1distribution-karaf-021-Helium-SR1targz

Intelreg ONP Server Release 12 Script

Helper scripts to setup SRT 12 using DevStack

httpsdownload01orgpacket-processingONPS12onps_server_1_2targz

BNG DPPD Broadband Network Gateway DPDK Performance

https01orgintel-data-plane- performance-demonstratorsdppd-bng- v013zip

PktGen Software Network Package Generator

httpsgithubcomPktgenPktgen-DPDKgitcommit id 5e8633c99e9771467dc26b64a4ff232c7e9fba2a

BNG Helper scripts

Intelreg ONP for Server Configuration Scripts for vBNG

https01orgsitesdefaultfilespagevbng-scriptstgz

Suricata Package from Fedora 20 yum install suricata

15

Intelreg ONP Server Reference ArchitectureSolutions Guide

50 Installation and Configuration Guide

This section describes the installation and configuration instructions to prepare the controller and compute nodes

51 Instructions Common to Compute and Controller Nodes

This section describes how to prepare both the controller and compute nodes with the right BIOS settings and operating system installation The preferred operating system is Fedora 20 although it is considered relatively easy to use this solutions guide for other Linux distributions

511 BIOS SettingsTable 5-1 BIOS Settings

Configuration Setting forController Node

Setting forCompute Node

Enhanced Intel SpeedStep Enabled Disabled

Processor C3 Disabled Disabled

Processor C6 Disabled Disabled

Intelreg Virtualization Technology for Directed IO (Intelreg Vt-d) Disabled Enabled(OpenStack Numa Placement only)

Intel Hyper-Threading Technology (HTT) Enabled Disabled

MLC Streamer Enabled Enabled

MLC Spatial Prefetcher Enabled Enabled

DCU Instruction Prefetcher Enabled Enabled

Direct Cache Access (DCA) Enabled Enabled

CPU Power and Performance Policy Performance Performance

Intel Turbo boost Enabled Off

Memory RAS and Performance Configuration -gt Numa Optimized Enabled Enabled

Intelreg ONP Server Reference ArchitectureSolutions Guide

16

512 Operating System Installation and ConfigurationFollowing are some generic instructions for installing and configuring the operating system Other ways of installing the operating system are not described in this solutions guide such as network installation PXE boot installation USB key installation etc

5121 Getting the Fedora 20 DVD

1 Download the 64-bit Fedora 20 DVD (not Fedora 20 Live Media) from the following site

httpfedoraprojectorgenget-fedoraformats

or from direct URL

httpdownloadfedoraprojectorgpubfedoralinuxreleases20Fedorax86_64isoFedora-20-x86_64-DVDiso

2 Burn the ISO file to DVD and create an installation disk

5122 Fedora 20 Installation

Use the DVD to install Fedora 20 During the installation click Software selection then choose the following

1 C Development Tool and Libraries

2 Development Tools

Also create a user stack and check the box Make this user administrator during the installation The user stack is used in OpenStack installation

Note Please make sure to download and use the onps_server_1_2targz tarball Start with the README file Yoursquoll get instructions on how to use Intelrsquos scripts to automate most of the installation steps described in this section and this saves you time When using Intelrsquos scripts you can jump to Section 54 after installing OpenDaylight based on the instructions described in Section 621

5123 Additional Packages Installation and Upgrade

Some packages are not installed with the standard Fedora 20 installation but are required by Intelreg Open Network Platform Software (ONPS) components These packages should be installed by the user

git ntp patch socat python-passlib libxslt-devel libffi-devel fuse-devel gluster python-cliff

ONPS supports Fedora kernel 3156 which is newer than native Fedora 20 kernel 31110 To upgrade to 3156 follow these steps

1 Download kernel packages

wget httpskojipkgsfedoraprojectorgpackageskernel3156200fc20x86_64 kernel-3156-200fc20x86_64rpm

wget httpskojipkgsfedoraprojectorgpackageskernel3156200fc20x86_64 kernel-devel-3156-200fc20x86_64rpm

wget httpskojipkgsfedoraprojectorgpackageskernel3156200fc20x86_64 kernel-modules-extra-3156-200fc20x86_64rpm

17

Intelreg ONP Server Reference ArchitectureSolutions Guide

2 Install kernel packages

rpm -i kernel-3156-200fc20x86_64rpmrpm -i kernel-devel-3156-200fc20x86_64rpmrpm -i kernel-modules-extra-3156-200fc20x86_64rpm

3 Reboot system to allow booting into 3156 kernel

Note ONPS depends on libraries provided by your Linux distribution As such it is recommended that you regularly update your Linux distribution with the latest bug fixes and security patches to reduce the risk of security vulnerabilities in your systems

After installing the required packages the operating system should be updated with the following command

yum update -y

This command upgrades to the latest kernel that Fedora supports In order to maintain kernel version (3156) the yum configuration file needs modified with this command

echo exclude=kernel gtgt etcyumconf

before running yum update

After the update completes the system needs to be rebooted

5124 Disable and Enable Services

For OpenStack the following services were disabled selinux firewall and NetworkManager Run the following commands

sed -i sSELINUX=enforcingSELINUX=disabledg etcselinuxconfig systemctl disable firewalldservicesystemctl disable NetworkManagerservice

The following services should be enabled ntp sshd and network Run the following commands

systemctl enable ntpdservicesystemctl enable ntpdateservicesystemctl enable sshdservicechkconfig network on

It is important to keep the timing synchronized between all nodes It is also necessary to use a known NTP server for all nodes Users can edit etcntpconf to add a new server and remove default servers The following example replaces a default NTP server with a local NTP server 100012 and comments out other default servers

sed -i sserver 0fedorapoolntporg iburstserver 100012g etcntpconfsed -i sserver 1fedorapoolntporg iburst server 1fedorapoolntporg iburst g etcntpconfsed -i sserver 2fedorapoolntporg iburst server 2fedorapoolntporg iburst g etcntpconfsed -i sserver 3fedorapoolntporg iburst server 3fedorapoolntporg iburst g etcntpconf

Intelreg ONP Server Reference ArchitectureSolutions Guide

18

52 Controller Node SetupThis section describes the controller node setup It is assumed that the user successfully followed the operating system installation and configuration sections

Note Make sure to download and use the onps_server_1_2targz tarball Start with the README file Yoursquoll get instructions on how to use Intelrsquos scripts to automate most of the installation steps described in this section and this saves you time

521 OpenStack (Juno)This section documents features and limitations that are supported with the Intelreg DPDK Accelerated vSwitch and OpenStack Juno

5211 Network Requirements

General

At least two networks are required to build OpenStack infrastructure in a lab environment One network is used to connect all nodes for OpenStack management (management network) and the other one is a private network exclusively for an OpenStack internal connection (tenant network) between instances (or virtual machines)

One additional network is required for Internet connectivity as installing OpenStack requires pulling packages from various sourcesrepositories on the Internet

Some users might want to have Internet andor external connectivity for OpenStack instances (virtual machines) In this case an optional network can be used

The assumption is that the targeting OpenStack infrastructure contains multiple nodes one is controller node and one or more are compute node(s)

Network Configuration Example

The following is an example of how to configure networks for OpenStack infrastructure The example uses four network interfaces as follows

bull ens2f1 For Internet network - Used to pull all necessary packagespatches from repositories on the Internet configured to obtain a DHCP address

bull ens2f0 For Management network - Used to connect all nodes for OpenStack management configured to use network 10110016

bull p1p1 For Tenant network - Used for OpenStack internal connections for virtual machines configured with no IP address

bull p1p2 For Optional External network - Used for virtual machine Internetexternal connectivity configured with no IP address This interface is only in the Controller node if external network is configured For Compute node this interface is not needed

Note that among these interfaces interface for virtual network (in this example p1p1) must be an 82599 port because it is used for DPDK and Intelreg DPDK Accelerated vSwitch Also note that a static IP address should be used for interface of management network

In Fedora 20 the network configuration files are located at

etcsysconfignetwork-scripts

19

Intelreg ONP Server Reference ArchitectureSolutions Guide

To configure a network on the host system edit the following network configuration files

ifcfg-ens2f1 DEVICE=ens2f1TYPE=Ethernet ONBOOT=yes BOOTPROTO=dhcp

ifcfg-ens2f0DEVICE=ens2f0TYPE=EthernetONBOOT=yesBOOTPROTO=staticIPADDR=10111211NETMASK=25525500

ifcfg-p1p1DEVICE=p1p1TYPE=Ethernet ONBOOT=yes BOOTPROTO=none

ifcfg-p1p2DEVICE=p1p2TYPE=Ethernet ONBOOT=yes BOOTPROTO=none

Note Do not configure the IP address for p1p1 (10 Gbs interface) otherwise DPDK does not work when binding the driver during OpenStack Neutron installation

Note 10111211 and 25525500 are static IP address and net mask to the management network It is necessary to have static IP address on this subnet The IP address 10111211 is just an example

5212 Storage Requirements

By default DevStack uses blocked storage (Cinder) with a volume group stack-volumes If not specified stack-volumes is created with 10 Gbs space from a local file system Note that stack-volumes is the name for the volume group not more than 1 volume

The following example shows how to use spare local disks devsdb and devsdc to form stack-volumes on a controller node by running the following commands

pvcreate devsdbpvcreate devsdcvgcreate stack-volumes devsdb devsdc

5213 OpenStack Installation Procedures

General

DevStack is used to deploy OpenStack in this example The following procedure uses an actual example of an installation performed in an Intel test lab consisting of one controller node (controller) and one compute node (compute)

Controller Node Installation Procedures

The following example uses a host for controller node installation with the following

bull Hostname sdnlab-k01

bull Internet network IP address Obtained from DHCP server

Intelreg ONP Server Reference ArchitectureSolutions Guide

20

bull OpenStack Management IP address 1011121

bull Userpassword stackstack

Root User Actions

Login as su or root user and perform the following

1 Add stack user to sudoer list

echo stack ALL=(ALL) NOPASSWD ALL gtgt etcsudoers

2 Edit etclibvirtqemuconf add or modify with the following lines

cgroup_controllers = [ cpu devices memory blkio cpusetcpuacct ]

cgroup_device_acl = [devnull devfull devzero devrandom devurandom devptmx devkvm devkqemu devrtc devhpet devnettun mnthuge devvhost-net]

hugetlbs_mount = mnthuge

3 Restart libvirt service and make sure libvird is active

systemctl restart libvirtdservicesystemctl status libvirtdservice

Stack User Actions

1 Login as a stack user

2 Configure the appropriate proxies (yum http https and git) for package installation and make sure these proxies are functional Note that on controller node localhost and its IP address should be included in no_proxy setup (for example export no_proxy=localhost1011121)

3 Intelreg DPDK Accelerated vSwitch patches for OpenStack

The tar file openstack_ovdkl02-907zip contains necessary patches for OpenStack Currently it is not native to the OpenStack The file can be downloaded from

https01orgsitesdefaultfilespageopenstack_ovdkl02-907zip

Place the file in the homestack directory and unzip Three patch files devstackpatch novapatch and neutronpatch will be present after unzip

cd homestack wget https01orgsitesdefaultfilespageopenstack_ovdkl02-907zip unzip openstack_ovdkl02-907zip

4 Download DevStack source

git_clone httpsgithubcomopenstack-devdevstackgit

5 Check out DevStack with Intelreg DPDK Accelerated vSwitch and patch

cd homestackdevstack git checkout d6f700db33aeab68916156a98971aef8cfa53a2e patch -p1 lt homestackdevstackpatch

21

Intelreg ONP Server Reference ArchitectureSolutions Guide

6 Download and patch Nova and Neutron

sudo_mkdir optstacksudo_chown stackstack optstack cd optstackgit_clone httpsgithubcomopenstacknovagitgit_clone httpsgithubcomopenstackneutrongitcd optstacknovagit checkout b7738bfb6c2f271d047e8f20c0b74ef647367111patch -p1 lt homestacknovapatch

7 Create localconf file in homestackdevstack

8 Pay attention to the following in the localconf file

a Use Rabbit for messaging services (Rabbit is on by default) In the past Fedora only supported QPID for OpenStack Now it only supports Rabbit

b Explicitly disable Nova compute service on the controller This is because by default Nova compute service is enabled

disable_service n-cpu

c To use Open vSwitch specify in configuration for ML2 plug-in

Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch

d Explicitly disable tenant tunneling and enable tenant VLAN This is because by default tunneling is used

ENABLE_TENANT_TUNNELS=FalseENABLE_TENANT_VLANS=True

e A sample localconf files for controller node is as follows

Controller_node[[local|localrc]]

FORCE=yes ADMIN_PASSWORD=password MYSQL_PASSWORD=password DATABASE_PASSWORD=password SERVICE_PASSWORD=password SERVICE_TOKEN=no-token-passwordHORIZON_PASSWORD=passwordRABBIT_PASSWORD=password

disable_service n-net disable_service n-cpu enable_service q-svc enable_service q-agt enable_service q-dhcp enable_service q-l3 enable_service q-meta enable_service neutron enable_service horizon

DEST=optstackLOGFILE=$DESTstackshlogSCREEN_LOGDIR=$DESTscreenSYSLOG=TrueLOGDAYS=1

HOST_IP_IFACE=ens2f0PUBLIC_INTERFACE=p1p2VLAN_INTERFACE=p1p1FLAT_INTERFACE=p1p1

Intelreg ONP Server Reference ArchitectureSolutions Guide

22

Q_AGENT=openvswitch Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch Q_ML2_PLUGIN_TYPE_DRIVERS=vlanflatlocal

ENABLE_TENANT_TUNNELS=FalseENABLE_TENANT_VLANS=TruePHYSICAL_NETWORK=physnet1ML2_VLAN_RANGES=physnet110001010OVS_PHYSICAL_BRIDGE=br-p1p1MULTI_HOST=True

[[post-config|$NOVA_CONF]][DEFAULT]firewall_driver=novavirtfirewallNoopFirewallDrivernovncproxy_host=0000novncproxy_port=6080

9 Install DevStack

cd homestackdevstackstacksh

10 For a successful installation the following shows at the end of screen output

stacksh completed in XXX seconds

where XXX is the number of seconds

11 For controller node only mdash Add physical port(s) to the bridge(s) created by the DevStack installation The following example can be used to configure the two bridges br-p1p1 (for virtual network) and br-ex (for external network)

sudo ovs-vsctl add-port br-p1p1 p1p1sudo ovs-vsctl add-port br-ex p1p2

12 Make sure proper VLANs are created in the switch connecting physical port p1p1 For example the previous localconf specifies VLAN range of 1000-1010 therefore matching VLANs 1000 to 1010 should be configured in the switch

23

Intelreg ONP Server Reference ArchitectureSolutions Guide

53 Compute Node SetupThis section describes how to complete the setup of the compute nodes It is assumed that the user has successfully completed the BIOS settings and operating system installation and configuration sections

Note Please make sure to download and use the onps_server_1_2targz tarball Start with the README file Yoursquoll get instructions on how to use Intelrsquos scripts to automate most of the installation steps described in this section and this saves you time

531 Host Configuration

5311 Using DevStack to Deploy vSwitch and OpenStack Components

General

Deploying OpenStack and Intelreg DPDK Accelerated vSwitch using DevStack on a compute node follows the same procedures as on the controller node Differences include

bull Required services are nova compute neutron agent and Rabbit

bull Intelreg DPDK Accelerated vSwitch is used in place of Open vSwitch for neutron agent

Compute Node Installation Example

The following example uses a host for compute node installation with the following

bull Hostname sdnlab-k02

bull Lab network IP address Obtained from DHCP server

bull OpenStack Management IP address 1011122

bull Userpassword stackstack

Note the following

bull No_proxy setup Localhost and its IP address should be included in the no_proxy setup In addition hostname and IP address of the controller node should also be included For example

export no_proxy=localhost1011122sdnlab-k011011121

bull Differences in the localconf file

mdash The service host is the controller as well as other OpenStack servers such as MySQL Rabbit Keystone and Image Therefore they should be spelled out Using the controller node example in the previous section the service host and its IP address should be

SERVICE_HOST_NAME=sdnlab-k01SERVICE_HOST=1011121

mdash The only OpenStack services required in compute nodes are messaging nova compute and neutron agent so the localconf might look like

disable_all_services enable_service rabbitenable_service n-cpu enable_service q-agt

Intelreg ONP Server Reference ArchitectureSolutions Guide

24

mdash The user has option to use ovdk or openvswitch for neutron agent

Q_AGENT=ovdk

or

Q_AGENT=openvswitch

Note For openvswitch the user can specify regular or accelerated openvswitch (accelerated OVS) If accelerated OVS is use the following setup should be added

OVS_DATAPATH_TYPE=netdev

Note If both are specified in the same localconf file the later one overwrites the previous one

mdash For the OVDK and accelerated OVS huge pages setting specify number of huge pages to be allocated and mounting point (default is mnthuge)

OVDK_NUM_HUGEPAGES=8192

or

OVS_NUM_HUGEPAGES=8192

mdash For this version Intel uses specific versions for OVDK or Accelerated OVS from their respective repositories Specify the following in the localconf file if OVDK or accelerated OVS is used

OVDK_GIT_TAG=6210bb0a6139b20283de115f87aa7a381b04670fOVS_GIT_TAG=b35839f3855e3b812709c6ad1c9278f498aa9935

mdash Binding the physical port to the bridge is through the following line in localconf For example to bind port p1p1 to bridge br-p1p1 use

OVS_PHYSICAL_BRIDGE=br-p1p1

mdash A sample localconf file for compute node with ovdk agent follows

Compute node[[local|localrc]]

FORCE=yesMULTI_HOST=True

HOST_NAME=$(hostname)HOST_IP=1011122HOST_IP_IFACE=ens2f0SERVICE_HOST_NAME=1011121SERVICE_HOST=1011121

MYSQL_HOST=$SERVICE_HOSTRABBIT_HOST=$SERVICE_HOST

GLANCE_HOST=$SERVICE_HOSTGLANCE_HOSTPORT=$SERVICE_HOST9292KEYSTONE_AUTH_HOST=$SERVICE_HOSTKEYSTONE_SERVICE_HOST=$SERVICE_HOST

ADMIN_PASSWORD=passwordMYSQL_PASSWORD=passwordDATABASE_PASSWORD=passwordSERVICE_PASSWORD=passwordSERVICE_TOKEN=no-token-passwordHORIZON_PASSWORD=passwordRABBIT_PASSWORD=password

disable_all_servicesenable_service rabbit

25

Intelreg ONP Server Reference ArchitectureSolutions Guide

enable_service n-cpuenable_service q-agt

DEST=optstack_LOGFILE=$DESTstackshlogSCREEN_LOGDIR=$DESTscreenSYSLOG=TrueLOGDAYS=1

Q_AGENT=ovdkQ_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitchQ_ML2_PLUGIN_TYPE_DRIVERS=vlanOVDK_NUM_HUGEPAGES=8192OVDK_GIT_TAG=6210bb0a6139b20283de115f87aa7a381b04670f

ENABLE_TENANT_TUNNELS=FalseENABLE_TENANT_VLANS=TrueML2_VLAN_RANGES=physnet110001010PHYSICAL_NETWORK=physnet1OVS_PHYSICAL_BRIDGE=br-p1p1

[[post-config|$NOVA_CONF]][DEFAULT]firewall_driver=novavirtfirewallNoopFirewallDrivervnc_enabled=Truevncserver_listen=0000vncserver_proxyclient_address=$HOST_IP

mdash A sample localconf file for compute node with accelerated ovs agent follows

Compute node[[local|localrc]]

FORCE=yesMULTI_HOST=True

HOST_NAME=$(hostname)HOST_IP=1011122HOST_IP_IFACE=ens2f0

SERVICE_HOST_NAME=sdnlab-k01SERVICE_HOST=1011121

MYSQL_HOST=$SERVICE_HOSTRABBIT_HOST=$SERVICE_HOST

GLANCE_HOST=$SERVICE_HOSTGLANCE_HOSTPORT=$SERVICE_HOST9292KEYSTONE_AUTH_HOST=$SERVICE_HOST

KEYSTONE_SERVICE_HOST=$SERVICE_HOST

ADMIN_PASSWORD=passwordMYSQL_PASSWORD=passwordDATABASE_PASSWORD=passwordSERVICE_PASSWORD=passwordSERVICE_TOKEN=no-token-passwordHORIZON_PASSWORD=passwordRABBIT_PASSWORD=password

disable_all_servicesenable_service rabbitenable_service n-cpuenable_service q-agt

DEST=optstack

Intelreg ONP Server Reference ArchitectureSolutions Guide

26

LOGFILE=$DESTstackshlogSCREEN_LOGDIR=$DESTscreenSYSLOG=TrueLOGDAYS=1

Q_AGENT=openvswitchQ_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitchQ_ML2_PLUGIN_TYPE_DRIVERS=vlanOVS_NUM_HUGEPAGES=8192OVS_DATAPATH_TYPE=netdevOVS_GIT_TAG=b35839f3855e3b812709c6ad1c9278f498aa9935

ENABLE_TENANT_TUNNELS=FalseENABLE_TENANT_VLANS=TrueML2_VLAN_RANGES=physnet110001010PHYSICAL_NETWORK=physnet1OVS_PHYSICAL_BRIDGE=br-p1p1

[[post-config|$NOVA_CONF]][DEFAULT]firewall_driver=novavirtfirewallNoopFirewallDrivervnc_enabled=Truevncserver_listen=0000vncserver_proxyclient_address=$HOST_IP

54 vIPSThe vIPS used is Suricata which should be installed as an rpm package as previously described in a VM In order to configure it to run in inline mode (IPS) use the following

1 Turn on IP forwarding

sysctl -w netipv4ip_forward=1

2 Mangle all traffic from one vPort to the other using a netfilter queue

iptables -I FORWARD -i eth1 -o eth2 -j NFQUEUE iptables -I FORWARD -i eth2 -o eth1 -j NFQUEUE

3 Have Suricata run in inline mode using the netfilter queue

suricata -c etcsuricatasuricatayaml -q 0

4 Enable ARP proxying

echo 1 gt procsysnetipv4confeth1proxy_arp echo 1 gt procsysnetipv4confeth2proxy_arp

541 Network Configuration for non-vIPS Guests1 Turn on IP forwarding

sysctl -w netipv4ip_forward=1

2 In the source add the route to the sink

route add -net 192168200024 eth1

3 At the sink add the route to the source

route add -net 192168100024 eth1

27

Intelreg ONP Server Reference ArchitectureSolutions Guide

60 Testing the Setup

This section describes how to bring up the VMs in a compute node connect them to the virtual network(s) verify the functionality

Note Currently it is not possible to have more than one virtual network in a multi-compute node setup Although it is possible to have more than one virtual network in a single compute node setup

61 Preparation with OpenStack

611 Deploying Virtual Machines

6111 Default Settings

OpenStack comes with the following default settings

bull Tenant (Project) admin demo

bull Network

mdash Private network (virtual network) 1000024

mdash Public network (external network) 172244024

bull Image cirros-031-x86_64

bull Flavor nano micro tiny small medium large xlarge

To deploy new instances (VMs) with different setups (such as a different VM image flavor or network) users must create their own See below for details of how to create them

To access the OpenStack dashboard use a web browser (Firefox Internet Explorer or others) and the controllers IP address (management network) For example

http1011121

Login information is defined in the localconf file In the examples that follow password is the password for both admin and demo users

Intelreg ONP Server Reference ArchitectureSolutions Guide

28

6112 Customer Settings

The following examples describe how to create a custom VM image flavor and aggregateavailability zone using OpenStack commands The examples assume the IP address of the controller is 1011121

1 Create a credential file admin-cred for admin user The file contains the following lines

export OS_USERNAME=adminexport OS_TENANT_NAME=adminexport OS_PASSWORD=passwordexport OS_AUTH_URL=http101112135357v20

2 Source admin-cred to the shell environment for actions of creating glance image aggregateavailability zone and flavor

source admin-cred

3 Create an OpenStack glance image A VM image file should be ready in a location accessible by OpenStack

glance image-create --name ltimage-name-to-creategt --is-public=true --container-format=bare --disk-format=ltformatgt --file=ltimage-file-path-namegt

The following example shows the image file fedora20-x86_64-basicqcow2 is located in a NFS share and mounted at mntnfsopenstackimages to the controller host The following command creates a glance image named fedora-basic with qcow2 format for public use (such as any tenant can use this glance image)

glance image-create --name fedora-basic --is-public=true --container-format=bare --disk-format=qcow2 --file=mntnfsopenstackimagesfedora20-x86_64-basicqcow2

4 Create host aggregate and availability zone

First find the available hypervisors, then use that information to create the aggregate/availability zone:

nova hypervisor-list
nova aggregate-create <aggregate-name> <zone-name>
nova aggregate-add-host <aggregate-name> <hypervisor-name>

The following example creates an aggregate named aggr-g06 with one availability zone named zone-g06; the aggregate contains one hypervisor named sdnlab-g06:

nova aggregate-create aggr-g06 zone-g06
nova aggregate-add-host aggr-g06 sdnlab-g06

5 Create a flavor. A flavor is a virtual hardware configuration for the VMs; it defines, among other things, the number of virtual CPUs, the size of the virtual memory, and the disk space.

The following command creates a flavor named onps-flavor with an ID of 1001, 1024 MB of virtual memory, 4 GB of virtual disk space, and 1 virtual CPU:

nova flavor-create onps-flavor 1001 1024 4 1


6.1.1.3 Example - VM Deployment

The following example describes how to use a custom VM image, flavor, and aggregate to launch a VM for the demo tenant using OpenStack commands. Again, the example assumes the IP address of the controller is 10.11.12.1.

1 Create a credential file demo-cred for the demo user. The file contains the following lines:

export OS_USERNAME=demo
export OS_TENANT_NAME=demo
export OS_PASSWORD=password
export OS_AUTH_URL=http://10.11.12.1:35357/v2.0

2 Source demo-cred into the shell environment before creating the tenant network and instance (VM):

source demo-cred

3 Create the network for tenant demo. Take the following steps:

a Get the tenant ID for demo:

keystone tenant-list | grep -Fw demo

The following creates a network named net-demo for the tenant with ID 10618268adb64f17b266fd8fb83c960d:

neutron net-create --tenant-id 10618268adb64f17b266fd8fb83c960d net-demo

b Create subnet

neutron subnet-create --tenant-id <demo-tenant-id> --name <subnet_name> <network-name> <net-ip-range>

The following creates a subnet named sub-demo with CIDR address 192.168.2.0/24 for network net-demo:

neutron subnet-create --tenant-id 10618268adb64f17b266fd8fb83c960d --name sub-demo net-demo 192.168.2.0/24

4 Create an instance (VM) for tenant demo. Take the following steps:

a Get the name and/or ID of the image, flavor, and availability zone to be used for creating the instance:

glance image-list
nova flavor-list
nova aggregate-list
neutron net-list

b Launch an instance (VM) using the information obtained from the previous step. An illustrative example with concrete names follows sub-step c.

nova boot --image <image-id> --flavor <flavor-id> --availability-zone <zone-name> --nic net-id=<network-id> <instance-name>

c The new VM should be up and running in a few minutes
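
As an illustration only (names and IDs differ per deployment), a boot command that combines the fedora-basic image, onps-flavor, and zone-g06 created earlier with the net-demo network might look like the following; vm-demo01 is a hypothetical instance name:

nova boot --image fedora-basic --flavor onps-flavor --availability-zone zone-g06 \
  --nic net-id=$(neutron net-list | awk '/net-demo/ {print $2}') vm-demo01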

5 Log into the OpenStack dashboard using the demo user credentials and click Instances under Project in the left pane; the new VM should show in the right pane. Click the instance name to open the Instance Details view, then click Console in the top menu to access the VM.


6.1.1.4 Local vIPS

Configuration

1 OpenStack brings up the VMs and connects them to the vSwitch

2 IP addresses of the VMs get configured using the DHCP server. VM1 belongs to one subnet and VM3 to a different one; VM2 has ports on both subnets.

3 Flows get programmed to the vSwitch by the OpenDaylight controller (Section 6.2).

Data Path (Numbers Matching Red Circles)

1 VM1 sends a flow to VM3 through the vSwitch

2 The vSwitch forwards the flow to the first vPort of VM2 (active IPS)

3 The IPS receives the flow, inspects it, and (if not malicious) sends it out through its second vPort.

4 The vSwitch forwards it to VM3

Figure 6-1 Local vIPS


6.1.1.5 Remote vIPS

Configuration

1 OpenStack brings up the VMs and connects them to the vSwitch

2 The IP addresses of the VMs get configured using the DHCP server

Data Path (Numbers Matching Red Circles)

1 VM1 sends a flow to VM3 through the vSwitch inside compute node 1

2 The vSwitch forwards the flow out of the first 82599 port to the first 82599 port of compute node 2

3 The vSwitch of compute node 2 forwards the flow to the first port of the vHost, where the traffic is consumed by the vIPS VM.

4 The IPS receives the flow, inspects it, and (provided it is not malicious) sends it out through its second vHost port into the vSwitch of compute node 2.

5 The vSwitch forwards the flow out of the second 82599 port of compute node 2 into the second port of the 82599 in compute node 1

6 The vSwitch of compute node 1 forwards the flow into the port of the vHost of VM3 where the flow gets terminated

Figure 6-2 Remote vIPS


6.1.2 Non-uniform Memory Access (NUMA) Placement and SR-IOV Pass-through for OpenStack

NUMA placement was introduced as a new feature in the OpenStack Juno release. It enables an OpenStack administrator to pin guests to particular NUMA nodes for optimization. With an SR-IOV enabled network interface card, each SR-IOV port is associated with a Virtual Function (VF); OpenStack SR-IOV pass-through gives a guest direct access to a VF.
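
Before pinning guests, it can be useful to inspect the host NUMA topology; for example (output varies by platform):

numactl --hardware
lscpu | grep -i numa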

6.1.2.1 Prepare Compute Node for SR-IOV Pass-through

To enable the previous features, follow these steps to configure the compute node:

1 The server hardware must support IOMMU (Intel VT-d). To check whether IOMMU is supported, run the following command; the output should show IOMMU entries:

dmesg | grep -e IOMMU

Note: IOMMU can be enabled/disabled through a BIOS setting under Advanced and then Processor.

2 Enable kernel IOMMU in grub. For Fedora 20, run the following commands (a quick verification after reboot is shown below them):

sed -i 's/rhgb quiet/rhgb quiet intel_iommu=on/g' /etc/default/grub
grub2-mkconfig -o /boot/grub2/grub.cfg
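
After the next reboot, the kernel command line and boot messages can be checked to confirm the IOMMU is actually on (a sanity check, not a required step):

cat /proc/cmdline
dmesg | grep -e DMAR -e IOMMU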

3 Install necessary packages

yum install -y yajl-devel device-mapper-devel libpciaccess-devel libnl-devel dbus-devel numactl-devel python-devel

4 Install libvirt v1.2.8 or newer. The following example uses v1.2.9:

systemctl stop libvirtd

yum remove libvirt
yum remove libvirtd

wget http://libvirt.org/sources/libvirt-1.2.9.tar.gz
tar zxvf libvirt-1.2.9.tar.gz

cd libvirt-1.2.9
./autogen.sh --system --with-dbus
make
make install

systemctl start libvirtd

Make sure libvirtd is running v1.2.9:

libvirtd --version

5 Install libvirt-python. The example below uses v1.2.9 to match the libvirt version:

yum remove libvirt-python

wget https://pypi.python.org/packages/source/l/libvirt-python/libvirt-python-1.2.9.tar.gz
tar zxvf libvirt-python-1.2.9.tar.gz

cd libvirt-python-1.2.9
python setup.py install


6 Modify /etc/libvirt/qemu.conf to add

/dev/vfio/vfio

to

cgroup_device_acl list

An example follows

cgroup_device_acl = ["/dev/null", "/dev/full", "/dev/zero",
                     "/dev/random", "/dev/urandom",
                     "/dev/ptmx", "/dev/kvm", "/dev/kqemu",
                     "/dev/rtc", "/dev/hpet", "/dev/net/tun",
                     "/dev/vfio/vfio"]

7 Enable SR-IOV virtual functions for an 82599 interface. The following example enables 2 VFs for interface p1p1:

echo 2 > /sys/class/net/p1p1/device/sriov_numvfs

To check that virtual functions are enabled

lspci -nn | grep 82599

The screen output should display the physical function and two virtual functions
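
The VFs can also be confirmed from the physical function itself; with the ixgbe driver, ip link lists one "vf" entry per enabled virtual function (shown here for the example interface p1p1):

ip link show p1p1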

6.1.2.2 DevStack Configurations

In the following text, the example uses a controller with IP address 10.11.12.1 and a compute node with IP address 10.11.12.4. The PCI device vendor ID (8086) and the product IDs of the 82599 (10fb for the physical function and 10ed for the VF) can be obtained from the output of:

lspci -nn | grep 82599

On the Controller node:

1 Edit the controller local.conf. Note that the same local.conf file of Section 5.2.1.3 is used here, adding the following:

Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch,sriovnicswitch

[[post-config|$NOVA_CONF]]
[DEFAULT]
scheduler_default_filters=RamFilter,ComputeFilter,AvailabilityZoneFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,PciPassthroughFilter,NUMATopologyFilter
pci_alias={"name":"niantic","product_id":"10ed","vendor_id":"8086"}

[[post-config|$Q_PLUGIN_CONF_FILE]]
[ml2_sriov]
supported_pci_vendor_devs = 8086:10fb, 8086:10ed

2 Run stack.sh.

On Compute node

1 Edit /opt/stack/nova/requirements.txt to add "libvirt-python>=1.2.8":

echo "libvirt-python>=1.2.8" >> /opt/stack/nova/requirements.txt

2 Edit the compute local.conf for accelerated OVS. Note that the same local.conf file of Section 5.3.1.1 is used here.


3 Add the following:

[[post-config|$NOVA_CONF]]
[DEFAULT]
pci_passthrough_whitelist={"address":"0000:08:00.0","vendor_id":"8086","physical_network":"physnet1"}
pci_passthrough_whitelist={"address":"0000:08:10.0","vendor_id":"8086","physical_network":"physnet1"}
pci_passthrough_whitelist={"address":"0000:08:10.2","vendor_id":"8086","physical_network":"physnet1"}

4 Remove (or comment out) the following. Note that currently SR-IOV pass-through is only supported with standard OVS:

OVS_NUM_HUGEPAGES=8192
OVS_DATAPATH_TYPE=netdev
OVS_GIT_TAG=b35839f3855e3b812709c6ad1c9278f498aa9935

Run stack.sh on both the controller and compute nodes to complete the DevStack installation.

6.1.2.3 Create VM with NUMA Placement and SR-IOV

1 After stacking is successful on both the controller and compute nodes, verify that the PCI pass-through device(s) are in the OpenStack database:

mysql -uroot -ppassword -h 10.11.12.1 nova -e 'select * from pci_devices;'

2 The output should show entry(ies) of PCI device(s) similar to the following

| 2014-11-18 19:41:14 | NULL | NULL | 0 | 1 | 3 | 0000:08:10.0 | 10ed | 8086 | type-VF | pci_0000_08_10_0 | label_8086_10ed | available | {"phys_function": "0000:08:00.0"} | NULL | NULL | 0 |

3 Next, create a flavor, for example:

nova flavor-create numa-flavor 1001 1024 4 1

where

flavor name = numa-flavor
id = 1001
virtual memory = 1024 MB
virtual disk size = 4 GB
number of virtual CPUs = 1

4 Modify the flavor for NUMA placement with PCI pass-through:

nova flavor-key 1001 set "pci_passthrough:alias"="niantic:1" hw:numa_nodes=1 hw:numa_cpus.0=0 hw:numa_mem.0=1024

5 To show detailed information about the flavor:

nova flavor-show 1001

6 Create a VM numa-vm1 with the flavor numa-flavor under the default project demo. Note that the following example assumes the image fedora-basic and an availability zone zone-04 are already in place (see Section 6.1.1.2) and that private is the default network for the demo project.

nova boot --image <image-id> --flavor <flavor-id> --availability-zone <zone-name> --nic net-id=<network-id> numa-vm1

where numa-vm1 is the name of the VM instance to be booted.
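
Filling in the names assumed above (fedora-basic, numa-flavor, zone-04, and the demo project's private network), an illustrative command could be:

nova boot --image fedora-basic --flavor numa-flavor --availability-zone zone-04 \
  --nic net-id=$(neutron net-list | awk '/ private / {print $2}') numa-vm1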


Access the VM from the OpenStack Horizon dashboard; the new VM shows two virtual network interfaces. The interface with an SR-IOV VF should show a name of ensX, where X is a number (for example, ens5). If a DHCP server is available for the physical interface (p1p1 in this example), the VF gets an IP address automatically; otherwise, users can assign an IP address to the interface the same way as a standard network interface.

To verify network connectivity through a VF, users can set up two compute hosts and create a VM on each node. After obtaining IP addresses, the VMs should communicate with each other as on a normal network.

6.2 Using OpenDaylight

This section describes how to download, install, and set up an OpenDaylight controller.

6.2.1 Preparing the OpenDaylight Controller

1 Download the pre-built OpenDaylight Helium-SR1 distribution:

wget http://nexus.opendaylight.org/content/repositories/opendaylight.release/org/opendaylight/integration/distribution-karaf/0.2.1-Helium-SR1/distribution-karaf-0.2.1-Helium-SR1.tar.gz

2 Extract the archive and cd into it

tar xf distribution-karaf-0.2.1-Helium-SR1.tar.gz
cd distribution-karaf-0.2.1-Helium-SR1

3 Use the bin/karaf executable to start the Karaf shell:


4 Install the required features (an example feature set is shown below).

Karaf might take a long time to start, or the feature install might fail if the host does not have network access. You'll need to set up the appropriate proxy settings.
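
The exact feature set depends on the use case. For OpenStack Neutron integration with Helium, a commonly used set is the following; treat this list as an example rather than an exhaustive requirement:

opendaylight-user@root> feature:install odl-base-all odl-aaa-authn odl-restconf odl-nsf-all odl-adsal-northbound odl-mdsal-apidocs odl-ovsdb-openstack odl-ovsdb-northbound odl-dlux-core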

6.3 Border Network Gateway

This section describes how to install and run a Border Network Gateway on a compute node that is prepared as described in Section 5.1 and Section 5.3. The example interface names from these sections have been maintained here too. For simplicity, the BNG uses the handle_none configuration mode, which makes it work as an L2 forwarding engine. The BNG is more complex than this; users interested in exploring more of its capabilities should read https://01.org/intel-data-plane-performance-demonstrators/quick-overview.

The setup to test the functionality of the vBNG follows


6.3.1 Installation and Configuration Inside the VM

1 Execute the following command:

yum -y update

2 Disable SELinux

setenforce 0
vi /etc/selinux/config

and change the setting to SELINUX=disabled.

3 Disable the firewall

systemctl disable firewalld.service
reboot

4 Edit grub default configuration

vi /etc/default/grub

Add hugepages to it

... noirqbalance intel_idle.max_cstate=0 processor.max_cstate=0 ipv6.disable=1 default_hugepagesz=1G hugepagesz=1G hugepages=2 isolcpus=1,2,3,4

5 Rebuild grub config and reboot the system

grub2-mkconfig -o /boot/grub2/grub.cfg
reboot

6 Verify that hugepages are available in the VM

cat /proc/meminfo
HugePages_Total:       2
HugePages_Free:        2
Hugepagesize:    1048576 kB

7 Add the following to the end of the ~/.bashrc file:

---------------------------------------------
export RTE_SDK=/root/dpdk
export RTE_TARGET=x86_64-native-linuxapp-gcc
export OVS_DIR=/root/ovs

export RTE_UNBIND=$RTE_SDK/tools/dpdk_nic_bind.py
export DPDK_DIR=$RTE_SDK
export DPDK_BUILD=$DPDK_DIR/$RTE_TARGET
---------------------------------------------

8 Re-login or source that file

source ~/.bashrc

9 Install DPDK

git clone http://dpdk.org/git/dpdk
cd dpdk
git checkout v1.7.1
make install T=$RTE_TARGET
modprobe uio
insmod $RTE_SDK/$RTE_TARGET/kmod/igb_uio.ko


10 Check the PCI addresses of the 82599 cards

lspci | grep Network
00:04.0 Ethernet controller: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection (rev 01)
00:05.0 Ethernet controller: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection (rev 01)
00:06.0 Ethernet controller: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection (rev 01)
00:07.0 Ethernet controller: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection (rev 01)

11 Make sure that the correct PCI addresses are listed in the script bind_to_igb_uio.sh (an example of the underlying bind command follows).
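
If the script needs adjusting, the operation it performs boils down to one DPDK bind call per port; for example, with the first two addresses from the lspci output above (adjust to your system):

$RTE_UNBIND --bind=igb_uio 00:04.0 00:05.0
$RTE_UNBIND --status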

12 Download BNG packages

wget https://01.org/sites/default/files/downloads/intel-data-plane-performance-demonstrators/dppd-bng-v013.zip

13 Extract DPPD BNG sources

unzip dppd-bng-v013.zip

14 Build BNG DPPD application

yum -y install ncurses-devel
cd dppd-BNG-v013
make

15 Refer to Section 6.3.3, "Extra Preparations on the Compute Node," before running the BNG application in the VM inside the compute node.

16 Make sure that the application starts

build/dppd -f config/handle_none.cfg

The handle_none configuration passes all traffic straight through between ports, which is essentially similar to the L2 forwarding test. The config directory contains additional, more complex BNG configurations and Pktgen scripts. Additional BNG-specific workloads can be found in the dppd-BNG-v013/pktgen-scripts directory.

Following is a sample graphic of the BNG running in a VM with 2 ports


Exit the application by pressing ESC or CTRL-C

Refer to Section 6.3.2 regarding installing and running the software traffic generator.

For the sanity check test, users can use the pktgen wrapper script onps_pktgen-64bytes-UDP-2ports.sh to run Pktgen (on its dedicated server) in order to test the handle_none throughput for two physical and two virtual ports. You'll need to update the PKTGEN_DIR variable at the top of the file to point to the right directory, which is the following (referring to Section 6.3.2):

PKTGEN_DIR=/home/stack/git/Pktgen-DPDK

6.3.2 Installation and Configuration of the Back-to-Back Host (Packet Generator)

The back-to-back host can be any Intel® Xeon® processor-based system, or it can be any compute node that has been prepared using the instructions in Section 5.1 and Section 5.3. For simplicity, Intel assumes the latter was the case. Also assume that the git directory for the stack user is /home/stack/git.

1 In the git directory, get the source from GitHub:

git clone https://github.com/Pktgen/Pktgen-DPDK.git
cd Pktgen-DPDK

2 An extra package must be installed for Pktgen to compile correctly

yum -y install libpcap-devel

Pktgen comes with its own distribution of the DPDK sources. This bundled version of DPDK must be used; note that it contains some Wind River-specific helper libraries that Pktgen depends on and that are not in the default DPDK distribution.

3 The $RTE_TARGET variable must be set to a specific value, otherwise these libraries will not build.

cd
vi .bashrc

Add the following three lines to the end

export RTE_SDK=$HOME/Pktgen-DPDK/dpdk
export RTE_TARGET=x86_64-pktgen-linuxapp-gcc
export PKTGEN_DIR=$HOME/Pktgen-DPDK

4 Re-login or execute the following command

source ~/.bashrc

5 Build the basic DPDK libraries and extra helpers

cd $RTE_SDK
make install T=$RTE_TARGET

6 Build Pktgen

cd examples/pktgen
make

7 Adapt the dpdk_nic_bind.py invocation to the actual NICs in use so that both interfaces are bound to igb_uio and DPDK can use them. Check the current binding status with the following command; an example bind invocation is shown after it:

tools/dpdk_nic_bind.py --status
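
For example, once the two 82599 ports to be used by Pktgen have been identified from the status output, they can be bound with (substitute the actual PCI addresses):

modprobe uio
insmod $RTE_SDK/$RTE_TARGET/kmod/igb_uio.ko
tools/dpdk_nic_bind.py --bind=igb_uio <pci-address-port-0> <pci-address-port-1>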

8 Use onps_pktgen-64-bytes-UDP-2ports.sh from onps_server_1_2.tar.gz.


9 Now run the script as root, after the compute node has been set up as in Section 6.3.3, the BNG VM has been prepared as in Section 6.3.1, and the BNG has been started inside the VM.

6.3.3 Extra Preparations on the Compute Node

1 Do the following as the stack user:

cd /home/stack/devstack
vi local.conf

2 Comment out the following

PHYSICAL_NETWORK=physnet1
OVS_PHYSICAL_BRIDGE=br-p1p1

At the same time, add the following line right below the previously commented ones:

OVS_BRIDGE_MAPPINGS=default:br-p1p1,physnet1:br-p1p2

3 Run the following again as the stack user:

./unstack.sh

./stack.sh

This causes both physical interfaces to come up and get bound to DPDK. Also, a bridge is created on top of each of these interfaces.

ovs-vsctl show
b52bd3ed-0f6c-45b9-ace1-846d901bed64
    Bridge br-p1p1
        Port br-p1p1
            Interface br-p1p1
                type: internal
        Port p1p1
            Interface p1p1
                type: dpdkphy
                options: {port=0}
        Port phy-br-p1p1
            Interface phy-br-p1p1
                type: patch
                options: {peer=int-br-p1p1}
    Bridge br-int
        fail_mode: secure
        Port int-br-p1p2
            Interface int-br-p1p2
                type: patch
                options: {peer=phy-br-p1p2}
        Port int-br-p1p1
            Interface int-br-p1p1
                type: patch
                options: {peer=phy-br-p1p1}
        Port br-int
            Interface br-int
                type: internal
    Bridge br-p1p2
        Port phy-br-p1p2
            Interface phy-br-p1p2
                type: patch
                options: {peer=int-br-p1p2}
        Port p1p2
            Interface p1p2
                type: dpdkphy
                options: {port=1}
        Port br-p1p2
            Interface br-p1p2
                type: internal

4 Move the p1p2 physical port under the same bridge as p1p1

ovs-vsctl del-port p1p2
ovs-vsctl add-port br-p1p1 p1p2 -- set Interface p1p2 type=dpdkphy options:port=1

5 Stop the OpenStack agent:

./rejoin-stack.sh
ctrl-a 1
ctrl-c
ctrl-a d

6 Add the dpdkvhost interfaces for the VM

ovs-vsctl --no-wait add-port br-p1p1 port3 -- set Interface port3 type=dpdkvhost ofport_request=3
ovs-vsctl --no-wait add-port br-p1p1 port4 -- set Interface port4 type=dpdkvhost ofport_request=4

7 Find the OpenFlow port numbers of the interfaces:

ovs-ofctl show br-p1p1

The output should be similar to the following. Note the number to the left of each interface name; it is the OpenFlow port number used in the flow rules below.

OFPT_FEATURES_REPLY (xid=0x2): dpid:0000286031010000
n_tables:254, n_buffers:256
capabilities: FLOW_STATS TABLE_STATS QUEUE_STATS ARP_MATCH_IP
actions: OUTPUT SET_VLAN_VID SET_VLAN_PCP STRIP_VLAN SET_DL_SRC SET_DL_DST SET_NW_SRC SET_NW_DST SET_TP_SRC SET_TP_DST
 1(phy-br-p1p1): addr:9e:ae:92:25:3c:c1
     config:     0
     state:      0
     speed: 0 Mbps now, 0 Mbps max
 2(p1p2): addr:9e:ae:92:25:3c:c1
     config:     0
     state:      0
     speed: 0 Mbps now, 0 Mbps max
 3(port3): addr:9e:ae:92:25:3c:c1
     config:     0
     state:      0
     speed: 0 Mbps now, 0 Mbps max
 4(port4): addr:49:04:ff:7f:00:00
     config:     0
     state:      0
     speed: 0 Mbps now, 0 Mbps max
 16(p1p1): addr:49:04:ff:7f:00:00
     config:     0
     state:      0
     speed: 0 Mbps now, 0 Mbps max
 LOCAL(br-p1p1): addr:9e:ae:92:25:3c:c1
     config:     0
     state:      0
     speed: 0 Mbps now, 0 Mbps max
OFPT_GET_CONFIG_REPLY (xid=0x4): frags=normal miss_send_len=0

8 Clean up the flow table of the bridge

ovs-ofctl del-flows br-p1p1

9 Program the flows so that each physical interface forwards packets to a dpdkvhost interface and vice versa (a way to verify the flows follows):

ovs-ofctl add-flow br-p1p1 in_port=16,dl_type=0x0800,idle_timeout=0,action=output:3
ovs-ofctl add-flow br-p1p1 in_port=3,dl_type=0x0800,idle_timeout=0,action=output:16
ovs-ofctl add-flow br-p1p1 in_port=4,dl_type=0x0800,idle_timeout=0,action=output:2
ovs-ofctl add-flow br-p1p1 in_port=2,dl_type=0x0800,idle_timeout=0,action=output:4
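
Once programmed, the flows and their packet counters can be verified with the following; the counters should increase while the BNG and the packet generator are running:

ovs-ofctl dump-flows br-p1p1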


10 Users can now spawn their vBNG:

qemu-kvm -cpu host -enable-kvm -m 4096 -smp 4,cores=4,threads=1,sockets=1 -name VM1 \
  -hda <path to the VM image file> -mem-path /dev/hugepages -mem-prealloc -vnc :2 -daemonize \
  -net nic,model=virtio,macaddr=00:1e:77:68:09:fd -net tap,ifname=tap1,script=no,downscript=no \
  -netdev type=tap,id=net1,script=no,downscript=no,ifname=port3,vhost=on \
  -device virtio-net-pci,netdev=net1,mac=00:00:01:00:00:01,csum=off,gso=off,guest_tso4=off,guest_tso6=off,guest_ecn=off \
  -netdev type=tap,id=net2,script=no,downscript=no,ifname=port4,vhost=on \
  -device virtio-net-pci,netdev=net2,mac=00:00:01:00:00:02,csum=off,gso=off,guest_tso4=off,guest_tso6=off,guest_ecn=off


Appendix A Additional OpenDaylight Information

This section describes how OpenDaylight can be used in a multi-node setup. Two hosts are used: one running OpenDaylight, the OpenStack controller + compute, and OVS; the second host is a compute node. This section describes how to create a VXLAN tunnel and VMs, and how to ping from one VM to another.

Note: Due to a known defect in ODL (https://bugs.opendaylight.org/show_bug.cgi?id=2469), the multi-node setup could not be verified.

Following is a sample local.conf for the OpenDaylight host:

[[local|localrc]]
FORCE=yes

HOST_NAME=<name of this machine>
HOST_IP=<ip of this machine>
HOST_IP_IFACE=<mgmt ip isolated from internet>

PUBLIC_INTERFACE=<isolated IP could be same as HOST_IP_IFACE>
VLAN_INTERFACE=
FLAT_INTERFACE=

ADMIN_PASSWORD=password
MYSQL_PASSWORD=password
DATABASE_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=no-token-password
HORIZON_PASSWORD=password
RABBIT_PASSWORD=password

disable_service n-net
disable_service n-cpu

enable_service q-svc
enable_service q-agt
enable_service q-dhcp
enable_service q-l3
enable_service q-meta
enable_service neutron
enable_service horizon

LOGFILE=$DEST/stack.sh.log
SCREEN_LOGDIR=$DEST/screen
SYSLOG=True
LOGDAYS=1

# ODL start
Q_HOST=$HOST_IP
enable_service odl-server
enable_service odl-compute
ODL_MGR_IP=10.11.10.7
ENABLED_SERVICES+=,n-api,n-crt,n-obj,n-cpu,n-cond,n-sch,n-novnc,n-cauth,n-cauth,nova
ENABLED_SERVICES+=,cinder,c-api,c-vol,c-sch,c-bak


Q_PLUGIN=ml2

Q_AGENT=openvswitch
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch,opendaylight
Q_ML2_PLUGIN_TYPE_DRIVERS=vlan,flat,local
Q_ML2_TENANT_NETWORK_TYPE=vxlan

ENABLE_TENANT_TUNNELS=True
ENABLE_TENANT_VLANS=True

PHYSICAL_NETWORK=physnet1
ML2_VLAN_RANGES=physnet1:1000:1010
OVS_PHYSICAL_BRIDGE=br-

MULTI_HOST=True

[[post-config|$NOVA_CONF]]

# disable nova security groups
[DEFAULT]
firewall_driver=nova.virt.firewall.NoopFirewallDriver
novncproxy_host=0.0.0.0
novncproxy_port=6080

Here is a sample local.conf for the compute node:

[[local|localrc]]

FORCE=yes
MULTI_HOST=True

HOST_NAME=<name of this machine>
HOST_IP=<ip of this machine>
HOST_IP_IFACE=<isolated interface>
SERVICE_HOST_NAME=<name of the controller machine>
SERVICE_HOST=<ip of controller machine>
Q_HOST=$SERVICE_HOST

MYSQL_HOST=$SERVICE_HOST
RABBIT_HOST=$SERVICE_HOST

GLANCE_HOST=$SERVICE_HOST
GLANCE_HOSTPORT=$SERVICE_HOST:9292
KEYSTONE_AUTH_HOST=$SERVICE_HOST
KEYSTONE_SERVICE_HOST=<ip of controller machine>

ADMIN_PASSWORD=password
MYSQL_PASSWORD=password
DATABASE_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=no-token-password
HORIZON_PASSWORD=password
RABBIT_PASSWORD=password

disable_all_services

enable_service rabbit
enable_service n-cpu
enable_service q-agt
enable_service odl-compute

DEST=/opt/stack
LOGFILE=$DEST/stack.sh.log
SCREEN_LOGDIR=$DEST/screen
SYSLOG=True
LOGDAYS=1


ODL_MGR_IP=<ip of controller machine>

Q_PLUGIN=ml2
Q_AGENT=openvswitch
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch,opendaylight
Q_ML2_PLUGIN_TYPE_DRIVERS=vxlan
OVS_NUM_HUGEPAGES=8192
OVS_DATAPATH_TYPE=netdev

OVDK_OVS_GIT_TAG=

ENABLE_TENANT_TUNNELS=True
ENABLE_TENANT_VLANS=True

Q_ML2_TENANT_NETWORK_TYPE=vxlan
ML2_VLAN_RANGES=physnet1:1000:1010
PHYSICAL_NETWORK=physnet1
OVS_PHYSICAL_BRIDGE=br-p1p1

[[post-config|$NOVA_CONF]]
[DEFAULT]
firewall_driver=nova.virt.firewall.NoopFirewallDriver
vnc_enabled=True
vncserver_listen=0.0.0.0
vncserver_proxyclient_address=$HOST_IP

A.1 Create VMs using DevStack Horizon GUI

After starting the OpenDaylight controller as described in Section 6.2.1, run stack.sh on the controller and compute nodes.

Log in to http://<control node ip address>:8080 to start the Horizon GUI.

Verify that the node shows up in the following GUI

Create a new VXLAN network:

1 Click on the Networks tab

2 Click on the Create Network button

3 Enter the Network name then click Next

Intelreg ONP Server Reference ArchitectureSolutions Guide

46

4 Enter the subnet information then click Next

47

Intelreg ONP Server Reference ArchitectureSolutions Guide

5 Add additional information then click Next

6 Click the Create button

7 Create a VM instance by clicking the Launch Instances button

Intelreg ONP Server Reference ArchitectureSolutions Guide

48

8 Click on the Details tab to enter VM details

49

Intelreg ONP Server Reference ArchitectureSolutions Guide

9 Click on the Networking tab then enter network information

VMs will now be created.

Instead of disabling the OVSDB neutron service by removing the OVSDB neutron bundle file, it is possible to disable the bundle from the OSGi console. However, there does not appear to be a way to make this persistent, so it must be done each time the controller restarts.


Once the controller is up and running, connect to the OSGi console. The ss command displays all of the installed bundles and their status; adding a string filters the list of bundles. List the OVSDB bundles:

osgi> ss ovs
Framework is launched

id      State       Bundle
106     ACTIVE      org.opendaylight.ovsdb.northbound_0.5.0
112     ACTIVE      org.opendaylight.ovsdb_0.5.0
262     ACTIVE      org.opendaylight.ovsdb.neutron_0.5.0

Note that there are three OVSDB bundles running (the OVSDB neutron bundle has not been removed in this case)

Disable the OVSDB neutron bundle and then list the OVSDB bundles again

osgi> stop 262
osgi> ss ovs
Framework is launched

id      State       Bundle
106     ACTIVE      org.opendaylight.ovsdb.northbound_0.5.0
112     ACTIVE      org.opendaylight.ovsdb_0.5.0
262     RESOLVED    org.opendaylight.ovsdb.neutron_0.5.0

Now the OVSDB neutron bundle is in the RESOLVED state, which means that it is not active.


Appendix B BNG as an Appliance

Please download the latest BNG application from https://01.org/intel-data-plane-performance-demonstrators/downloads. More details about how the BNG works can be found at https://01.org/intel-data-plane-performance-demonstrators/quick-overview.


Appendix C Glossary

Acronym Description

ATR Application Targeted Routing

COTS Commercial Off‐The-Shelf

DPI Deep Packet Inspection

FCS Frame Check Sequence

GRE Generic Routing Encapsulation

GRO Generic Receive Offload

IOMMU InputOutput Memory Management Unit

Kpps Kilo packets per second

KVM Kernel-based Virtual Machine

LRO Large Receive Offload

MSI Message Signaled Interrupts

MPLS Multi-protocol Label Switching

Mpps Million packets per second

NIC Network Interface Card

pps Packets per second

QAT Quick Assist Technology

QinQ VLAN stacking (802.1ad)

RA Reference Architecture

RSC Receive Side Coalescing

RSS Receive Side Scaling

SP Service Provider

SR-IOV Single root IO Virtualization

TCO Total Cost of Ownership

TSO TCP Segmentation Offload


Appendix D References

Document Name Source

Internet Protocol version 4: http://www.ietf.org/rfc/rfc791.txt

Internet Protocol version 6: http://www.faqs.org/rfc/rfc2460.txt

Intel® 82599 10 Gigabit Ethernet Controller Datasheet: http://www.intel.com/content/www/us/en/ethernet-controllers/82599-10-gbe-controller-datasheet.html

Intel DDIO: https://www-ssl.intel.com/content/www/us/en/io/direct-data-i-o.html

Bandwidth Sharing Fairness: http://www.intel.com/content/www/us/en/network-adapters/10-gigabit-network-adapters/10-gbe-ethernet-flexible-port-partitioning-brief.html

Design Considerations for efficient network applications with Intel® multi-core processor-based systems on Linux:

http://download.intel.com/design/intarch/papers/324176.pdf

OpenFlow with Intel 82599: http://ftp.sunet.se/pub/Linux/distributions/bifrost/seminars/workshop-2011-03-31/Openflow_1103031.pdf

Wu, W., DeMar, P. & Crawford, M. (2012). A Transport-Friendly NIC for Multicore/Multiprocessor Systems.

IEEE Transactions on Parallel and Distributed Systems, vol. 23, no. 4, April 2012. http://lss.fnal.gov/archive/2010/pub/fermilab-pub-10-327-cd.pdf

Why Does Flow Director Cause Packet Reordering?: http://arxiv.org/ftp/arxiv/papers/1106/1106.0443.pdf

IA packet processing: http://www.intel.com/p/en_US/embedded/hwsw/technology/packet-processing

High Performance Packet Processing on Cloud Platforms using Linux with Intel® Architecture:

http://networkbuilders.intel.com/docs/network_builders_RA_packet_processing.pdf

Packet Processing Performance of Virtualized Platforms with Linux and Intel® Architecture:

http://networkbuilders.intel.com/docs/network_builders_RA_NFV.pdf

DPDK: http://www.intel.com/go/dpdk

Intel® DPDK Accelerated vSwitch: https://01.org/packet-processing


LEGAL

By using this document in addition to any agreements you have with Intel you accept the terms set forth below

You may not use or facilitate the use of this document in connection with any infringement or other legal analysis concerning Intel products described herein You agree to grant Intel a non-exclusive royalty-free license to any patent claim thereafter drafted which includes subject matter disclosed herein

INFORMATION IN THIS DOCUMENT IS PROVIDED IN CONNECTION WITH INTEL PRODUCTS. NO LICENSE, EXPRESS OR IMPLIED, BY ESTOPPEL OR OTHERWISE, TO ANY INTELLECTUAL PROPERTY RIGHTS IS GRANTED BY THIS DOCUMENT, EXCEPT AS PROVIDED IN INTEL'S TERMS AND CONDITIONS OF SALE FOR SUCH PRODUCTS. INTEL ASSUMES NO LIABILITY WHATSOEVER, AND INTEL DISCLAIMS ANY EXPRESS OR IMPLIED WARRANTY RELATING TO SALE AND/OR USE OF INTEL PRODUCTS, INCLUDING LIABILITY OR WARRANTIES RELATING TO FITNESS FOR A PARTICULAR PURPOSE, MERCHANTABILITY, OR INFRINGEMENT OF ANY PATENT, COPYRIGHT, OR OTHER INTELLECTUAL PROPERTY RIGHT.

Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors

Performance tests such as SYSmark and MobileMark are measured using specific computer systems components software operations and functions Any change to any of those factors may cause the results to vary You should consult other information and performance tests to assist you in fully evaluating your contemplated purchases including the performance of that product when combined with other products

The products described in this document may contain design defects or errors known as errata which may cause the product to deviate from published specifications Current characterized errata are available on request Contact your local Intel sales office or your distributor to obtain the latest specifications and before placing your product order

Intel technologies may require enabled hardware, specific software, or services activation. Check with your system manufacturer or retailer. Tests document performance of components on a particular test, in specific systems. Differences in hardware, software, or configuration will affect actual performance. Consult other sources of information to evaluate performance as you consider your purchase. For more complete information about performance and benchmark results, visit http://www.intel.com/performance.

All products computer systems dates and figures specified are preliminary based on current expectations and are subject to change without notice Results have been estimated or simulated using internal Intel analysis or architecture simulation or modeling and provided to you for informational purposes Any differences in your system hardware software or configuration may affect your actual performance

No computer system can be absolutely secure Intel does not assume any liability for lost or stolen data or systems or any damages resulting from such losses

Intel does not control or audit third-party web sites referenced in this document You should visit the referenced web site and confirm whether referenced data are accurate

Intel Corporation may have patents or pending patent applications trademarks copyrights or other intellectual property rights that relate to the presented subject matter The furnishing of documents and other materials and information does not provide any license express or implied by estoppel or otherwise to any such patents trademarks copyrights or other intellectual property rights

© 2014 Intel Corporation. All rights reserved. Intel, the Intel logo, Core, Xeon, and others are trademarks of Intel Corporation in the U.S. and/or other countries. Other names and brands may be claimed as the property of others.

Page 12: Intel Open Network Platform Server Reference Architecture ... · Intel® ONP Server Reference Architecture Solutions Guide 1.0 Audience and Purpose The primary audiences for this

Intelreg ONP Server Reference ArchitectureSolutions Guide

12

NOTE This page intentionally left blank

13

Intelreg ONP Server Reference ArchitectureSolutions Guide

40 Software Versions

Table 4-1 Software Versions

Software Component Function VersionConfiguration

Fedora 20 x86_64 Host OS 3156-200fc20x86_64

Qemu‐kvm Virtualization technology Modified QEMU 162 (bundled with Intelreg DPDK Accelerated vSwitch)

Data Plane Development Kit (DPDK)

Network Stack bypass and libraries for packet processing Includes user space poll mode drivers

171

Intelreg DPDK Accelerated vSwitch

vSwitch v120commit id 6210bb0a6139b20283de115f87aa7a381b04670f

Open vSwitch vSwitch Open vSwitch V 23Commit id b35839f3855e3b812709c6ad1c9278f498aa9935

OpenStack SDN Orchestrator Juno Release + Intel patches (openstack_ovdkl02-907zip)

DevStack Tool for Open Stack deployment

httpsgithubcomopenstack-devdevstackgit Commit id d6f700db33aeab68916156a98971aef8cfa53a2e

OpenDaylight SDN Controller HeliumSR1

Suricata IPS application Suricata v204 (current Fedora 20 package)

BNG DPPD Broadband Network Gateway DPDK Performance Demonstrator Application

DPPD v013https01orgintel-data-plane-performance-demonstratorsdownloads

PktGen Software Network Package Generator v277

Intelreg ONP Server Reference ArchitectureSolutions Guide

14

41 Obtaining Software IngredientsTable 4-2 Software Ingredients

Software Component

Software Sub-components Patches Location Comments

Fedora 20 httpdownloadfedoraprojectorgpubfedoralinuxreleases20Fedorax86_64isoFedora-20-x86_64-DVDiso

Standard Fedora 20 iso image

Data Plane Development Kit (DPDK)

DPDK poll mode driver sample apps (bundled)

httpdpdkorggitdpdkCommit id 99213f3827bad956d74e2259d06844012ba287a4

All sub-components in one zip file

Intelreg DPDK Accelerated vSwitch (OVDK)

dpdk-ovs qemu ovs-db vswitchd ovs_client (bundled)

httpsgithubcom01orgdpdk-ovsgitCommit id 6210bb0a6139b20283de115f87aa7a381b04670f

v120

Open vSwitch httpsgithubcomopenvswitchovsgitCommit id b35839f3855e3b812709c6ad1c9278f498aa9935

OpenStack Juno release To be deployed using DevStack(see following row)

Three patches downloaded as one tarball Then follow the instructions to deploy the Nodes

DevStack Patches for DevStack and Nova

httpsgithubcomopenstack-devdevstackgitCommit id d6f700db33aeab68916156a98971aef8cfa53a2eThen apply to that commit the patches inhttpsdownload01orgpacket-processingONPS12openstack_ovdkl02-907zip

Two patches downloaded as one tarball Then follow the instructions to deploy

OpenDaylight httpnexusopendaylightorgcontentrepositoriesopendaylightreleaseorgopendaylightintegrationdistribution-karaf021-Helium-SR1distribution-karaf-021-Helium-SR1targz

Intelreg ONP Server Release 12 Script

Helper scripts to setup SRT 12 using DevStack

httpsdownload01orgpacket-processingONPS12onps_server_1_2targz

BNG DPPD Broadband Network Gateway DPDK Performance

https01orgintel-data-plane- performance-demonstratorsdppd-bng- v013zip

PktGen Software Network Package Generator

httpsgithubcomPktgenPktgen-DPDKgitcommit id 5e8633c99e9771467dc26b64a4ff232c7e9fba2a

BNG Helper scripts

Intelreg ONP for Server Configuration Scripts for vBNG

https01orgsitesdefaultfilespagevbng-scriptstgz

Suricata Package from Fedora 20 yum install suricata

15

Intelreg ONP Server Reference ArchitectureSolutions Guide

50 Installation and Configuration Guide

This section describes the installation and configuration instructions to prepare the controller and compute nodes

51 Instructions Common to Compute and Controller Nodes

This section describes how to prepare both the controller and compute nodes with the right BIOS settings and operating system installation The preferred operating system is Fedora 20 although it is considered relatively easy to use this solutions guide for other Linux distributions

511 BIOS SettingsTable 5-1 BIOS Settings

Configuration Setting forController Node

Setting forCompute Node

Enhanced Intel SpeedStep Enabled Disabled

Processor C3 Disabled Disabled

Processor C6 Disabled Disabled

Intelreg Virtualization Technology for Directed IO (Intelreg Vt-d) Disabled Enabled(OpenStack Numa Placement only)

Intel Hyper-Threading Technology (HTT) Enabled Disabled

MLC Streamer Enabled Enabled

MLC Spatial Prefetcher Enabled Enabled

DCU Instruction Prefetcher Enabled Enabled

Direct Cache Access (DCA) Enabled Enabled

CPU Power and Performance Policy Performance Performance

Intel Turbo boost Enabled Off

Memory RAS and Performance Configuration -gt Numa Optimized Enabled Enabled

Intelreg ONP Server Reference ArchitectureSolutions Guide

16

512 Operating System Installation and ConfigurationFollowing are some generic instructions for installing and configuring the operating system Other ways of installing the operating system are not described in this solutions guide such as network installation PXE boot installation USB key installation etc

5121 Getting the Fedora 20 DVD

1 Download the 64-bit Fedora 20 DVD (not Fedora 20 Live Media) from the following site

httpfedoraprojectorgenget-fedoraformats

or from direct URL

httpdownloadfedoraprojectorgpubfedoralinuxreleases20Fedorax86_64isoFedora-20-x86_64-DVDiso

2 Burn the ISO file to DVD and create an installation disk

5122 Fedora 20 Installation

Use the DVD to install Fedora 20 During the installation click Software selection then choose the following

1 C Development Tool and Libraries

2 Development Tools

Also create a user stack and check the box Make this user administrator during the installation The user stack is used in OpenStack installation

Note Please make sure to download and use the onps_server_1_2targz tarball Start with the README file Yoursquoll get instructions on how to use Intelrsquos scripts to automate most of the installation steps described in this section and this saves you time When using Intelrsquos scripts you can jump to Section 54 after installing OpenDaylight based on the instructions described in Section 621

5123 Additional Packages Installation and Upgrade

Some packages are not installed with the standard Fedora 20 installation but are required by Intelreg Open Network Platform Software (ONPS) components These packages should be installed by the user

git ntp patch socat python-passlib libxslt-devel libffi-devel fuse-devel gluster python-cliff

ONPS supports Fedora kernel 3156 which is newer than native Fedora 20 kernel 31110 To upgrade to 3156 follow these steps

1 Download kernel packages

wget httpskojipkgsfedoraprojectorgpackageskernel3156200fc20x86_64 kernel-3156-200fc20x86_64rpm

wget httpskojipkgsfedoraprojectorgpackageskernel3156200fc20x86_64 kernel-devel-3156-200fc20x86_64rpm

wget httpskojipkgsfedoraprojectorgpackageskernel3156200fc20x86_64 kernel-modules-extra-3156-200fc20x86_64rpm

17

Intelreg ONP Server Reference ArchitectureSolutions Guide

2 Install kernel packages

rpm -i kernel-3156-200fc20x86_64rpmrpm -i kernel-devel-3156-200fc20x86_64rpmrpm -i kernel-modules-extra-3156-200fc20x86_64rpm

3 Reboot system to allow booting into 3156 kernel

Note ONPS depends on libraries provided by your Linux distribution As such it is recommended that you regularly update your Linux distribution with the latest bug fixes and security patches to reduce the risk of security vulnerabilities in your systems

After installing the required packages the operating system should be updated with the following command

yum update -y

This command upgrades to the latest kernel that Fedora supports In order to maintain kernel version (3156) the yum configuration file needs modified with this command

echo exclude=kernel gtgt etcyumconf

before running yum update

After the update completes the system needs to be rebooted

5124 Disable and Enable Services

For OpenStack the following services were disabled selinux firewall and NetworkManager Run the following commands

sed -i sSELINUX=enforcingSELINUX=disabledg etcselinuxconfig systemctl disable firewalldservicesystemctl disable NetworkManagerservice

The following services should be enabled ntp sshd and network Run the following commands

systemctl enable ntpdservicesystemctl enable ntpdateservicesystemctl enable sshdservicechkconfig network on

It is important to keep the timing synchronized between all nodes It is also necessary to use a known NTP server for all nodes Users can edit etcntpconf to add a new server and remove default servers The following example replaces a default NTP server with a local NTP server 100012 and comments out other default servers

sed -i sserver 0fedorapoolntporg iburstserver 100012g etcntpconfsed -i sserver 1fedorapoolntporg iburst server 1fedorapoolntporg iburst g etcntpconfsed -i sserver 2fedorapoolntporg iburst server 2fedorapoolntporg iburst g etcntpconfsed -i sserver 3fedorapoolntporg iburst server 3fedorapoolntporg iburst g etcntpconf

Intelreg ONP Server Reference ArchitectureSolutions Guide

18

52 Controller Node SetupThis section describes the controller node setup It is assumed that the user successfully followed the operating system installation and configuration sections

Note Make sure to download and use the onps_server_1_2targz tarball Start with the README file Yoursquoll get instructions on how to use Intelrsquos scripts to automate most of the installation steps described in this section and this saves you time

521 OpenStack (Juno)This section documents features and limitations that are supported with the Intelreg DPDK Accelerated vSwitch and OpenStack Juno

5211 Network Requirements

General

At least two networks are required to build OpenStack infrastructure in a lab environment One network is used to connect all nodes for OpenStack management (management network) and the other one is a private network exclusively for an OpenStack internal connection (tenant network) between instances (or virtual machines)

One additional network is required for Internet connectivity as installing OpenStack requires pulling packages from various sourcesrepositories on the Internet

Some users might want to have Internet andor external connectivity for OpenStack instances (virtual machines) In this case an optional network can be used

The assumption is that the targeting OpenStack infrastructure contains multiple nodes one is controller node and one or more are compute node(s)

Network Configuration Example

The following is an example of how to configure networks for OpenStack infrastructure The example uses four network interfaces as follows

bull ens2f1 For Internet network - Used to pull all necessary packagespatches from repositories on the Internet configured to obtain a DHCP address

bull ens2f0 For Management network - Used to connect all nodes for OpenStack management configured to use network 10110016

bull p1p1 For Tenant network - Used for OpenStack internal connections for virtual machines configured with no IP address

bull p1p2 For Optional External network - Used for virtual machine Internetexternal connectivity configured with no IP address This interface is only in the Controller node if external network is configured For Compute node this interface is not needed

Note that among these interfaces interface for virtual network (in this example p1p1) must be an 82599 port because it is used for DPDK and Intelreg DPDK Accelerated vSwitch Also note that a static IP address should be used for interface of management network

In Fedora 20 the network configuration files are located at

etcsysconfignetwork-scripts

19

Intelreg ONP Server Reference ArchitectureSolutions Guide

To configure a network on the host system edit the following network configuration files

ifcfg-ens2f1 DEVICE=ens2f1TYPE=Ethernet ONBOOT=yes BOOTPROTO=dhcp

ifcfg-ens2f0DEVICE=ens2f0TYPE=EthernetONBOOT=yesBOOTPROTO=staticIPADDR=10111211NETMASK=25525500

ifcfg-p1p1DEVICE=p1p1TYPE=Ethernet ONBOOT=yes BOOTPROTO=none

ifcfg-p1p2DEVICE=p1p2TYPE=Ethernet ONBOOT=yes BOOTPROTO=none

Note Do not configure the IP address for p1p1 (10 Gbs interface) otherwise DPDK does not work when binding the driver during OpenStack Neutron installation

Note 10111211 and 25525500 are static IP address and net mask to the management network It is necessary to have static IP address on this subnet The IP address 10111211 is just an example

5212 Storage Requirements

By default DevStack uses blocked storage (Cinder) with a volume group stack-volumes If not specified stack-volumes is created with 10 Gbs space from a local file system Note that stack-volumes is the name for the volume group not more than 1 volume

The following example shows how to use spare local disks devsdb and devsdc to form stack-volumes on a controller node by running the following commands

pvcreate devsdbpvcreate devsdcvgcreate stack-volumes devsdb devsdc

5213 OpenStack Installation Procedures

General

DevStack is used to deploy OpenStack in this example The following procedure uses an actual example of an installation performed in an Intel test lab consisting of one controller node (controller) and one compute node (compute)

Controller Node Installation Procedures

The following example uses a host for controller node installation with the following

bull Hostname sdnlab-k01

bull Internet network IP address Obtained from DHCP server

Intelreg ONP Server Reference ArchitectureSolutions Guide

20

bull OpenStack Management IP address 1011121

bull Userpassword stackstack

Root User Actions

Login as su or root user and perform the following

1 Add stack user to sudoer list

echo stack ALL=(ALL) NOPASSWD ALL gtgt etcsudoers

2 Edit etclibvirtqemuconf add or modify with the following lines

cgroup_controllers = [ cpu devices memory blkio cpusetcpuacct ]

cgroup_device_acl = [devnull devfull devzero devrandom devurandom devptmx devkvm devkqemu devrtc devhpet devnettun mnthuge devvhost-net]

hugetlbs_mount = mnthuge

3 Restart libvirt service and make sure libvird is active

systemctl restart libvirtdservicesystemctl status libvirtdservice

Stack User Actions

1 Login as a stack user

2 Configure the appropriate proxies (yum http https and git) for package installation and make sure these proxies are functional Note that on controller node localhost and its IP address should be included in no_proxy setup (for example export no_proxy=localhost1011121)

3 Intelreg DPDK Accelerated vSwitch patches for OpenStack

The tar file openstack_ovdkl02-907zip contains necessary patches for OpenStack Currently it is not native to the OpenStack The file can be downloaded from

https01orgsitesdefaultfilespageopenstack_ovdkl02-907zip

Place the file in the homestack directory and unzip Three patch files devstackpatch novapatch and neutronpatch will be present after unzip

cd homestack wget https01orgsitesdefaultfilespageopenstack_ovdkl02-907zip unzip openstack_ovdkl02-907zip

4 Download DevStack source

git_clone httpsgithubcomopenstack-devdevstackgit

5 Check out DevStack with Intelreg DPDK Accelerated vSwitch and patch

cd homestackdevstack git checkout d6f700db33aeab68916156a98971aef8cfa53a2e patch -p1 lt homestackdevstackpatch

21

Intelreg ONP Server Reference ArchitectureSolutions Guide

6 Download and patch Nova and Neutron

sudo_mkdir optstacksudo_chown stackstack optstack cd optstackgit_clone httpsgithubcomopenstacknovagitgit_clone httpsgithubcomopenstackneutrongitcd optstacknovagit checkout b7738bfb6c2f271d047e8f20c0b74ef647367111patch -p1 lt homestacknovapatch

7 Create localconf file in homestackdevstack

8 Pay attention to the following in the localconf file

a Use Rabbit for messaging services (Rabbit is on by default) In the past Fedora only supported QPID for OpenStack Now it only supports Rabbit

b Explicitly disable Nova compute service on the controller This is because by default Nova compute service is enabled

disable_service n-cpu

c To use Open vSwitch specify in configuration for ML2 plug-in

Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch

d Explicitly disable tenant tunneling and enable tenant VLAN This is because by default tunneling is used

ENABLE_TENANT_TUNNELS=FalseENABLE_TENANT_VLANS=True

e A sample localconf files for controller node is as follows

Controller_node[[local|localrc]]

FORCE=yes ADMIN_PASSWORD=password MYSQL_PASSWORD=password DATABASE_PASSWORD=password SERVICE_PASSWORD=password SERVICE_TOKEN=no-token-passwordHORIZON_PASSWORD=passwordRABBIT_PASSWORD=password

disable_service n-net disable_service n-cpu enable_service q-svc enable_service q-agt enable_service q-dhcp enable_service q-l3 enable_service q-meta enable_service neutron enable_service horizon

DEST=optstackLOGFILE=$DESTstackshlogSCREEN_LOGDIR=$DESTscreenSYSLOG=TrueLOGDAYS=1

HOST_IP_IFACE=ens2f0PUBLIC_INTERFACE=p1p2VLAN_INTERFACE=p1p1FLAT_INTERFACE=p1p1

Intelreg ONP Server Reference ArchitectureSolutions Guide

22

Q_AGENT=openvswitch Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch Q_ML2_PLUGIN_TYPE_DRIVERS=vlanflatlocal

ENABLE_TENANT_TUNNELS=FalseENABLE_TENANT_VLANS=TruePHYSICAL_NETWORK=physnet1ML2_VLAN_RANGES=physnet110001010OVS_PHYSICAL_BRIDGE=br-p1p1MULTI_HOST=True

[[post-config|$NOVA_CONF]][DEFAULT]firewall_driver=novavirtfirewallNoopFirewallDrivernovncproxy_host=0000novncproxy_port=6080

9 Install DevStack

cd homestackdevstackstacksh

10 For a successful installation the following shows at the end of screen output

stacksh completed in XXX seconds

where XXX is the number of seconds

11 For controller node only mdash Add physical port(s) to the bridge(s) created by the DevStack installation The following example can be used to configure the two bridges br-p1p1 (for virtual network) and br-ex (for external network)

sudo ovs-vsctl add-port br-p1p1 p1p1sudo ovs-vsctl add-port br-ex p1p2

12 Make sure proper VLANs are created in the switch connecting physical port p1p1 For example the previous localconf specifies VLAN range of 1000-1010 therefore matching VLANs 1000 to 1010 should be configured in the switch

23

Intelreg ONP Server Reference ArchitectureSolutions Guide

53 Compute Node SetupThis section describes how to complete the setup of the compute nodes It is assumed that the user has successfully completed the BIOS settings and operating system installation and configuration sections

Note Please make sure to download and use the onps_server_1_2targz tarball Start with the README file Yoursquoll get instructions on how to use Intelrsquos scripts to automate most of the installation steps described in this section and this saves you time

531 Host Configuration

5311 Using DevStack to Deploy vSwitch and OpenStack Components

General

Deploying OpenStack and Intelreg DPDK Accelerated vSwitch using DevStack on a compute node follows the same procedures as on the controller node Differences include

bull Required services are nova compute neutron agent and Rabbit

bull Intelreg DPDK Accelerated vSwitch is used in place of Open vSwitch for neutron agent

Compute Node Installation Example

The following example uses a host for compute node installation with the following:

• Hostname: sdnlab-k02

• Lab network IP address: obtained from DHCP server

• OpenStack management IP address: 10.1.1.122

• User/password: stack/stack

Note the following

• no_proxy setup: localhost and its IP address should be included in the no_proxy setting. In addition, the hostname and IP address of the controller node should also be included. For example:

export no_proxy=localhost,10.1.1.122,sdnlab-k01,10.1.1.121

• Differences in the local.conf file:

- The service host is the controller, and so are the other OpenStack servers such as MySQL, RabbitMQ, Keystone, and Image. Therefore, they should be spelled out. Using the controller node example in the previous section, the service host and its IP address should be:

SERVICE_HOST_NAME=sdnlab-k01
SERVICE_HOST=10.1.1.121

- The only OpenStack services required on compute nodes are messaging, Nova compute, and Neutron agent, so the local.conf might look like:

disable_all_services
enable_service rabbit
enable_service n-cpu
enable_service q-agt


- The user has the option to use ovdk or openvswitch for the Neutron agent:

Q_AGENT=ovdk

or

Q_AGENT=openvswitch

Note: For openvswitch, the user can specify regular or accelerated Open vSwitch (accelerated OVS). If accelerated OVS is used, the following setting should be added:

OVS_DATAPATH_TYPE=netdev

Note: If both are specified in the same local.conf file, the later one overwrites the previous one.

- For the OVDK and accelerated OVS huge pages setting, specify the number of huge pages to be allocated and the mount point (default is /mnt/huge):

OVDK_NUM_HUGEPAGES=8192

or

OVS_NUM_HUGEPAGES=8192

- For this release, Intel uses specific versions of OVDK and accelerated OVS from their respective repositories. Specify the following in the local.conf file if OVDK or accelerated OVS is used:

OVDK_GIT_TAG=6210bb0a6139b20283de115f87aa7a381b04670f
OVS_GIT_TAG=b35839f3855e3b812709c6ad1c9278f498aa9935

- Binding the physical port to the bridge is done through the following line in local.conf. For example, to bind port p1p1 to bridge br-p1p1, use:

OVS_PHYSICAL_BRIDGE=br-p1p1

- A sample local.conf file for a compute node with the ovdk agent follows:

# Compute node
[[local|localrc]]

FORCE=yes
MULTI_HOST=True

HOST_NAME=$(hostname)
HOST_IP=10.1.1.122
HOST_IP_IFACE=ens2f0
SERVICE_HOST_NAME=10.1.1.121
SERVICE_HOST=10.1.1.121

MYSQL_HOST=$SERVICE_HOST
RABBIT_HOST=$SERVICE_HOST

GLANCE_HOST=$SERVICE_HOST
GLANCE_HOSTPORT=$SERVICE_HOST:9292
KEYSTONE_AUTH_HOST=$SERVICE_HOST
KEYSTONE_SERVICE_HOST=$SERVICE_HOST

ADMIN_PASSWORD=password
MYSQL_PASSWORD=password
DATABASE_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=no-token-password
HORIZON_PASSWORD=password
RABBIT_PASSWORD=password

disable_all_services
enable_service rabbit
enable_service n-cpu
enable_service q-agt

DEST=/opt/stack
LOGFILE=$DEST/stack.sh.log
SCREEN_LOGDIR=$DEST/screen
SYSLOG=True
LOGDAYS=1

Q_AGENT=ovdk
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch
Q_ML2_PLUGIN_TYPE_DRIVERS=vlan
OVDK_NUM_HUGEPAGES=8192
OVDK_GIT_TAG=6210bb0a6139b20283de115f87aa7a381b04670f

ENABLE_TENANT_TUNNELS=False
ENABLE_TENANT_VLANS=True
ML2_VLAN_RANGES=physnet1:1000:1010
PHYSICAL_NETWORK=physnet1
OVS_PHYSICAL_BRIDGE=br-p1p1

[[post-config|$NOVA_CONF]]
[DEFAULT]
firewall_driver=nova.virt.firewall.NoopFirewallDriver
vnc_enabled=True
vncserver_listen=0.0.0.0
vncserver_proxyclient_address=$HOST_IP

- A sample local.conf file for a compute node with the accelerated OVS agent follows:

# Compute node
[[local|localrc]]

FORCE=yes
MULTI_HOST=True

HOST_NAME=$(hostname)
HOST_IP=10.1.1.122
HOST_IP_IFACE=ens2f0

SERVICE_HOST_NAME=sdnlab-k01
SERVICE_HOST=10.1.1.121

MYSQL_HOST=$SERVICE_HOST
RABBIT_HOST=$SERVICE_HOST

GLANCE_HOST=$SERVICE_HOST
GLANCE_HOSTPORT=$SERVICE_HOST:9292
KEYSTONE_AUTH_HOST=$SERVICE_HOST
KEYSTONE_SERVICE_HOST=$SERVICE_HOST

ADMIN_PASSWORD=password
MYSQL_PASSWORD=password
DATABASE_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=no-token-password
HORIZON_PASSWORD=password
RABBIT_PASSWORD=password

disable_all_services
enable_service rabbit
enable_service n-cpu
enable_service q-agt

DEST=/opt/stack
LOGFILE=$DEST/stack.sh.log
SCREEN_LOGDIR=$DEST/screen
SYSLOG=True
LOGDAYS=1

Q_AGENT=openvswitch
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch
Q_ML2_PLUGIN_TYPE_DRIVERS=vlan
OVS_NUM_HUGEPAGES=8192
OVS_DATAPATH_TYPE=netdev
OVS_GIT_TAG=b35839f3855e3b812709c6ad1c9278f498aa9935

ENABLE_TENANT_TUNNELS=False
ENABLE_TENANT_VLANS=True
ML2_VLAN_RANGES=physnet1:1000:1010
PHYSICAL_NETWORK=physnet1
OVS_PHYSICAL_BRIDGE=br-p1p1

[[post-config|$NOVA_CONF]]
[DEFAULT]
firewall_driver=nova.virt.firewall.NoopFirewallDriver
vnc_enabled=True
vncserver_listen=0.0.0.0
vncserver_proxyclient_address=$HOST_IP
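After stack.sh completes on the compute node, a quick sanity check (suggested here, assuming the defaults used above) is to confirm the huge pages and the vSwitch bridges:

grep Huge /proc/meminfo        # allocated vs. free huge pages
mount | grep huge              # huge page mount point (default is /mnt/huge)
sudo ovs-vsctl show            # bridges created by DevStack, e.g., br-p1p1 and br-int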

5.4 vIPS

The vIPS used is Suricata, which should be installed in a VM as an RPM package as previously described. To configure it to run in inline mode (IPS), use the following:

1 Turn on IP forwarding

sysctl -w net.ipv4.ip_forward=1

2 Mangle all traffic from one vPort to the other using a netfilter queue

iptables -I FORWARD -i eth1 -o eth2 -j NFQUEUE
iptables -I FORWARD -i eth2 -o eth1 -j NFQUEUE

3 Have Suricata run in inline mode using the netfilter queue

suricata -c /etc/suricata/suricata.yaml -q 0

4 Enable ARP proxying

echo 1 > /proc/sys/net/ipv4/conf/eth1/proxy_arp
echo 1 > /proc/sys/net/ipv4/conf/eth2/proxy_arp
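To confirm that traffic is actually being diverted through Suricata, the NFQUEUE counters and the Suricata statistics can be checked. These commands are a suggested verification and assume the default Fedora packaging paths:

iptables -L FORWARD -v -n                  # packet counters on the NFQUEUE rules should increase
cat /proc/net/netfilter/nfnetlink_queue    # queue 0 should show packets being queued and processed
tail /var/log/suricata/stats.log           # Suricata statistics (log location may vary)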

5.4.1 Network Configuration for non-vIPS Guests

1 Turn on IP forwarding:

sysctl -w net.ipv4.ip_forward=1

2 In the source, add the route to the sink:

route add -net 192.168.200.0/24 eth1

3 At the sink, add the route to the source:

route add -net 192.168.100.0/24 eth1
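To verify the end-to-end path through the vIPS, a ping from the source guest to an address in the sink subnet can be used; the target address below is a placeholder matching the example subnets:

ping -c 4 192.168.200.10    # run from the source guest (192.168.100.0/24 side)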


6.0 Testing the Setup

This section describes how to bring up the VMs in a compute node, connect them to the virtual network(s), and verify the functionality.

Note: Currently it is not possible to have more than one virtual network in a multi-compute-node setup, although it is possible to have more than one virtual network in a single-compute-node setup.

6.1 Preparation with OpenStack

6.1.1 Deploying Virtual Machines

6.1.1.1 Default Settings

OpenStack comes with the following default settings:

• Tenant (Project): admin, demo

• Network:

- Private network (virtual network): 10.0.0.0/24

- Public network (external network): 172.24.4.0/24

• Image: cirros-0.3.1-x86_64

• Flavor: nano, micro, tiny, small, medium, large, xlarge

To deploy new instances (VMs) with a different setup (such as a different VM image, flavor, or network), users must create their own. See below for details of how to create them.

To access the OpenStack dashboard, use a web browser (Firefox, Internet Explorer, or others) and the controller's IP address (management network). For example:

http://10.1.1.121

Login information is defined in the local.conf file. In the examples that follow, password is the password for both the admin and demo users.


6.1.1.2 Customer Settings

The following examples describe how to create a custom VM image, flavor, and aggregate/availability zone using OpenStack commands. The examples assume the IP address of the controller is 10.1.1.121.

1 Create a credential file admin-cred for the admin user. The file contains the following lines:

export OS_USERNAME=admin
export OS_TENANT_NAME=admin
export OS_PASSWORD=password
export OS_AUTH_URL=http://10.1.1.121:35357/v2.0

2 Source admin-cred into the shell environment before creating the Glance image, aggregate/availability zone, and flavor:

source admin-cred

3 Create an OpenStack Glance image. A VM image file should be ready in a location accessible by OpenStack:

glance image-create --name <image-name-to-create> --is-public=true --container-format=bare --disk-format=<format> --file=<image-file-path-name>

In the following example, the image file fedora20-x86_64-basic.qcow2 is located in an NFS share mounted at /mnt/nfs/openstack/images on the controller host. The command creates a Glance image named fedora-basic in qcow2 format for public use (so any tenant can use this Glance image):

glance image-create --name fedora-basic --is-public=true --container-format=bare --disk-format=qcow2 --file=/mnt/nfs/openstack/images/fedora20-x86_64-basic.qcow2

4 Create a host aggregate and availability zone.

First find the available hypervisors and then use that information to create the aggregate/availability zone:

nova hypervisor-list
nova aggregate-create <aggregate-name> <zone-name>
nova aggregate-add-host <aggregate-name> <hypervisor-name>

The following example creates an aggregate named aggr-g06 with one availability zone named zone-g06; the aggregate contains one hypervisor named sdnlab-g06:

nova aggregate-create aggr-g06 zone-g06
nova aggregate-add-host aggr-g06 sdnlab-g06

5 Create a flavor. A flavor is a virtual hardware configuration for the VMs; it defines, among other things, the number of virtual CPUs and the amount of virtual memory and disk space.

The following command creates a flavor named onps-flavor with an ID of 1001, 1024 MB of virtual memory, 4 GB of virtual disk space, and 1 virtual CPU:

nova flavor-create onps-flavor 1001 1024 4 1
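To confirm that the image, aggregate/availability zone, and flavor were created, the corresponding list commands can be used (a suggested check using the names from the examples above):

glance image-list | grep fedora-basic
nova aggregate-list
nova flavor-list | grep onps-flavor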


6.1.1.3 Example - VM Deployment

The following example describes how to use a custom VM image, flavor, and aggregate to launch a VM for the demo tenant using OpenStack commands. Again, the example assumes the IP address of the controller is 10.1.1.121.

1 Create a credential file demo-cred for the demo user. The file contains the following lines:

export OS_USERNAME=demo
export OS_TENANT_NAME=demo
export OS_PASSWORD=password
export OS_AUTH_URL=http://10.1.1.121:35357/v2.0

2 Source demo-cred into the shell environment before creating the tenant network and instance (VM):

source demo-cred

3 Create a network for tenant demo. Take the following steps:

a Get the ID of tenant demo:

keystone tenant-list | grep -Fw demo

The following creates a network named net-demo for the tenant with ID 10618268adb64f17b266fd8fb83c960d:

neutron net-create --tenant-id 10618268adb64f17b266fd8fb83c960d net-demo

b Create a subnet:

neutron subnet-create --tenant-id <demo-tenant-id> --name <subnet_name> <network-name> <net-ip-range>

The following creates a subnet named sub-demo with CIDR address 192.168.2.0/24 for network net-demo:

neutron subnet-create --tenant-id 10618268adb64f17b266fd8fb83c960d --name sub-demo net-demo 192.168.2.0/24

4 Create an instance (VM) for tenant demo. Take the following steps:

a Get the name and/or ID of the image, flavor, and availability zone to be used for creating the instance:

glance image-list
nova flavor-list
nova aggregate-list
neutron net-list

b Launch an instance (VM) using the information obtained in the previous step:

nova boot --image <image-id> --flavor <flavor-id> --availability-zone <zone-name> --nic net-id=<network-id> <instance-name>

c The new VM should be up and running in a few minutes
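As a concrete illustration using the names created in the earlier examples (fedora-basic, onps-flavor, zone-g06, and net-demo), the launch might look like the following; the instance name vm-demo1 is a placeholder and the network ID is looked up from Neutron:

NET_ID=$(neutron net-list | awk '/net-demo/ {print $2}')
nova boot --image fedora-basic --flavor onps-flavor --availability-zone zone-g06 --nic net-id=$NET_ID vm-demo1
nova list    # the status should move from BUILD to ACTIVE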

5 Log into the OpenStack dashboard using the demo user credentials and click Instances under Project in the left pane; the new VM should show in the right pane. Click the instance name to open the Instance Details view, then click Console in the top menu to access the VM.


6.1.1.4 Local vIPS

Configuration

1 OpenStack brings up the VMs and connects them to the vSwitch

2 IP addresses of the VMs get configured using the DHCP server. VM1 belongs to one subnet and VM3 to a different one; VM2 has ports on both subnets.

3 Flows get programmed to the vSwitch by the OpenDaylight controller (Section 6.2).

Data Path (Numbers Matching Red Circles)

1 VM1 sends a flow to VM3 through the vSwitch

2 The vSwitch forwards the flow to the first vPort of VM2 (active IPS)

3 The IPS receives the flow inspects it and (if not malicious) sends it out through its second vPort

4 The vSwitch forwards it to VM3

Figure 6-1 Local vIPS


6.1.1.5 Remote vIPS

Configuration

1 OpenStack brings up the VMs and connects them to the vSwitch

2 The IP addresses of the VMs get configured using the DHCP server

Data Path (Numbers Matching Red Circles)

1 VM1 sends a flow to VM3 through the vSwitch inside compute node 1

2 The vSwitch forwards the flow out of the first 82599 port to the first 82599 port of compute node 2

3 The vSwitch of compute node 2 forwards the flow to the first port of the vHost where the traffic gets consumed by VM1

4 The IPS receives the flow inspects it and (provided it is not malicious) sends it out through its second port of the vHost into the vSwitch of compute node 2

5 The vSwitch forwards the flow out of the second 82599 port of compute node 2 into the second port of the 82599 in compute node 1

6 The vSwitch of compute node 1 forwards the flow into the port of the vHost of VM3 where the flow gets terminated

Figure 6-2 Remote vIPS


6.1.2 Non-uniform Memory Access (NUMA) Placement and SR-IOV Pass-through for OpenStack

NUMA support was implemented as a new feature in the OpenStack Juno release. NUMA placement enables an OpenStack administrator to pin guests to particular NUMA nodes for optimization. With an SR-IOV enabled network interface card, each SR-IOV port is associated with a Virtual Function (VF). OpenStack SR-IOV pass-through gives a guest direct access to a VF.

6.1.2.1 Prepare Compute Node for SR-IOV Pass-through

To enable the previous features, follow these steps to configure the compute node:

1 The server hardware must support IOMMU (Intel VT-d). To check whether IOMMU is supported, run the following command; the output should show IOMMU entries:

dmesg | grep -e IOMMU

Note: IOMMU can be enabled/disabled through a BIOS setting under Advanced and then Processor.

2 Enable kernel IOMMU in GRUB. For Fedora 20, run these commands:

sed -i 's/rhgb quiet/rhgb quiet intel_iommu=on/g' /etc/default/grub
grub2-mkconfig -o /boot/grub2/grub.cfg
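After the next reboot, a quick check (a suggested verification, not part of the original steps) that the kernel picked up the option:

cat /proc/cmdline                     # should now include intel_iommu=on
dmesg | grep -i -e DMAR -e IOMMU      # should show IOMMU/DMAR initialization messages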

3 Install necessary packages

yum install -y yajl-devel device-mapper-devel libpciaccess-devel libnl-devel dbus-devel numactl-devel python-devel

4 Install libvirt v1.2.8 or newer. The following example uses v1.2.9:

systemctl stop libvirtd

yum remove libvirt
yum remove libvirtd

wget http://libvirt.org/sources/libvirt-1.2.9.tar.gz
tar zxvf libvirt-1.2.9.tar.gz

cd libvirt-1.2.9
./autogen.sh --system --with-dbus
make
make install

systemctl start libvirtd

Make sure libvirtd is running v1.2.9:

libvirtd --version

5 Install libvirt-python. The example below uses v1.2.9 to match the libvirt version:

yum remove libvirt-python

wget https://pypi.python.org/packages/source/l/libvirt-python/libvirt-python-1.2.9.tar.gz
tar zxvf libvirt-python-1.2.9.tar.gz

cd libvirt-python-1.2.9
python setup.py install


6 Modify /etc/libvirt/qemu.conf to add

/dev/vfio/vfio

to the cgroup_device_acl list. An example follows:

cgroup_device_acl = [
    "/dev/null", "/dev/full", "/dev/zero",
    "/dev/random", "/dev/urandom",
    "/dev/ptmx", "/dev/kvm", "/dev/kqemu",
    "/dev/rtc", "/dev/hpet", "/dev/net/tun",
    "/dev/vfio/vfio"
]

7 Enable SR-IOV virtual functions for an 82599 interface. The following example enables 2 VFs for interface p1p1:

echo 2 > /sys/class/net/p1p1/device/sriov_numvfs

To check that virtual functions are enabled

lspci -nn | grep 82599

The screen output should display the physical function and two virtual functions
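As an illustration, the output would look roughly like the following; the bus and device numbers are examples (they match the whitelist entries used later in Section 6.1.2.2), with 10fb identifying the physical function and 10ed the virtual functions:

08:00.0 Ethernet controller [0200]: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection [8086:10fb] (rev 01)
08:10.0 Ethernet controller [0200]: Intel Corporation 82599 Ethernet Controller Virtual Function [8086:10ed] (rev 01)
08:10.2 Ethernet controller [0200]: Intel Corporation 82599 Ethernet Controller Virtual Function [8086:10ed] (rev 01)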

6.1.2.2 DevStack Configurations

In the following text, the example uses a controller with IP address 10.1.1.121 and a compute node with 10.1.1.124. The PCI vendor ID (8086) and the product IDs of the 82599 (10fb for the physical function and 10ed for the VF) can be obtained from the output of:

lspci -nn | grep 82599

On the controller node:

1 Edit the controller local.conf. Note that the same local.conf file of Section 5.2.1.3 is used here, adding the following:

Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch,sriovnicswitch

[[post-config|$NOVA_CONF]]
[DEFAULT]
scheduler_default_filters=RamFilter,ComputeFilter,AvailabilityZoneFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,PciPassthroughFilter,NUMATopologyFilter
pci_alias={"name":"niantic","product_id":"10ed","vendor_id":"8086"}

[[post-config|$Q_PLUGIN_CONF_FILE]]
[ml2_sriov]
supported_pci_vendor_devs = 8086:10fb 8086:10ed

2 Run ./stack.sh

On the compute node:

1 Edit /opt/stack/nova/requirements.txt to add "libvirt-python>=1.2.8":

echo "libvirt-python>=1.2.8" >> /opt/stack/nova/requirements.txt

2 Edit the compute local.conf for accelerated OVS. Note that the same local.conf file of Section 5.3.1.1 is used here.


3 Add the following:

[[post-config|$NOVA_CONF]]
[DEFAULT]
pci_passthrough_whitelist={"address":"0000:08:00.0","vendor_id":"8086","physical_network":"physnet1"}
pci_passthrough_whitelist={"address":"0000:08:10.0","vendor_id":"8086","physical_network":"physnet1"}
pci_passthrough_whitelist={"address":"0000:08:10.2","vendor_id":"8086","physical_network":"physnet1"}

4 Remove (or comment out) the following. Note that currently SR-IOV pass-through is only supported with standard OVS:

OVS_NUM_HUGEPAGES=8192
OVS_DATAPATH_TYPE=netdev
OVS_GIT_TAG=b35839f3855e3b812709c6ad1c9278f498aa9935

Run ./stack.sh on both the controller and compute nodes to complete the DevStack installation.

6.1.2.3 Create VM with NUMA Placement and SR-IOV

1 After stacking is successful on both the controller and compute nodes, verify that the PCI pass-through device(s) are in the OpenStack database:

mysql -uroot -ppassword -h 10.1.1.121 nova -e 'select * from pci_devices'

2 The output should show entry(ies) of PCI device(s) similar to the following

| 2014-11-18 194114 | NULL | NULL | 0 | 1 | 3 | 000008100 | 10ed | 8086 | type-VF | pci_0000_08_10_0 | label_8086_10ed | available | phys_function 000008000 | NULL | NULL | 0 |

3 Next, create a flavor. For example:

nova flavor-create numa-flavor 1001 1024 4 1

where

flavor name = numa-flavor
id = 1001
virtual memory = 1024 MB
virtual disk size = 4 GB
number of virtual CPUs = 1

4 Modify the flavor for NUMA placement with PCI pass-through:

nova flavor-key 1001 set pci_passthrough:alias=niantic:1 hw:numa_nodes=1 hw:numa_cpus.0=0 hw:numa_mem.0=1024

5 To show detailed information of the flavor

nova flavor-show 1001

6 Create a VM numa-vm1 with the flavor numa-flavor under the default project demo. Note that the following example assumes an image fedora-basic and an availability zone zone-04 are already in place (see Section 6.1.1.2) and that private is the default network for the demo project:

nova boot --image <image-id> --flavor <flavor-id> --availability-zone <zone-name> --nic net-id=<network-id> numa-vm1

where numa-vm1 is the name of the VM instance to be booted.


Access the VM from OpenStack Horizon; the new VM shows two virtual network interfaces. The interface with an SR-IOV VF should show a name of ensX, where X is a number (for example, ens5). If a DHCP server is available for the physical interface (p1p1 in this example), the VF gets an IP address automatically; otherwise, users can assign an IP address to the interface the same way as for a standard network interface.

To verify network connectivity through a VF, users can set up two compute hosts and create a VM on each node. After obtaining IP addresses, the VMs should communicate with each other as over a normal network.

6.2 Using OpenDaylight

This section describes how to download, install, and set up an OpenDaylight controller.

6.2.1 Preparing the OpenDaylight Controller

1 Download the pre-built OpenDaylight Helium-SR1 distribution:

wget http://nexus.opendaylight.org/content/repositories/opendaylight.release/org/opendaylight/integration/distribution-karaf/0.2.1-Helium-SR1/distribution-karaf-0.2.1-Helium-SR1.tar.gz

2 Extract the archive and cd into it

tar xf distribution-karaf-0.2.1-Helium-SR1.tar.gz
cd distribution-karaf-0.2.1-Helium-SR1

3 Use the bin/karaf executable to start the Karaf shell.
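For example, assuming the current directory is the extracted distribution (this starts an interactive Karaf console):

./bin/karaf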


4 Install the required features

Note that Karaf might take a long time to start, or feature installation might fail, if the host does not have network access; you'll need to set up the appropriate proxy settings.
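The exact feature set is not reproduced here; as an illustration, OVSDB/OpenStack integration on Helium is typically enabled from the Karaf console with something like the following (the feature names are an assumption and depend on the ODL release):

feature:install odl-base-all odl-aaa-authn odl-restconf odl-nsf-all odl-adsal-northbound odl-mdsal-apidocs odl-ovsdb-openstack odl-ovsdb-northbound odl-dlux-core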

6.3 Border Network Gateway

This section describes how to install and run a Border Network Gateway (BNG) on a compute node that is prepared as described in Section 5.1 and Section 5.3. The example interface names from those sections are also used here. For simplicity, the BNG uses the handle_none configuration mode, which makes it work as an L2 forwarding engine. The BNG is more complex than this; users interested in exploring more of its capabilities should read https://01.org/intel-data-plane-performance-demonstrators/quick-overview.

The setup to test the functionality of the vBNG follows


6.3.1 Installation and Configuration Inside the VM

1 Execute the following command:

yum -y update

2 Disable SELinux

setenforce 0
vi /etc/selinux/config

and change it so that SELINUX=disabled.

3 Disable the firewall

systemctl disable firewalld.service
reboot

4 Edit grub default configuration

vi /etc/default/grub

Add hugepages to it

... noirqbalance intel_idle.max_cstate=0 processor.max_cstate=0 ipv6.disable=1 default_hugepagesz=1G hugepagesz=1G hugepages=2 isolcpus=1,2,3,4

5 Rebuild grub config and reboot the system

grub2-mkconfig -o /boot/grub2/grub.cfg
reboot

6 Verify that hugepages are available in the VM

cat /proc/meminfo
HugePages_Total:       2
HugePages_Free:        2
Hugepagesize:    1048576 kB

7 Add the following to the end of the ~/.bashrc file:

export RTE_SDK=/root/dpdk
export RTE_TARGET=x86_64-native-linuxapp-gcc
export OVS_DIR=/root/ovs

export RTE_UNBIND=$RTE_SDK/tools/dpdk_nic_bind.py
export DPDK_DIR=$RTE_SDK
export DPDK_BUILD=$DPDK_DIR/$RTE_TARGET

8 Re-login or source that file

source ~/.bashrc

9 Install DPDK

git clone http://dpdk.org/git/dpdk
cd dpdk
git checkout v1.7.1
make install T=$RTE_TARGET
modprobe uio
insmod $RTE_SDK/$RTE_TARGET/kmod/igb_uio.ko


10 Check the PCI addresses of the 82599 cards

lspci | grep Network
00:04.0 Ethernet controller: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection (rev 01)
00:05.0 Ethernet controller: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection (rev 01)
00:06.0 Ethernet controller: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection (rev 01)
00:07.0 Ethernet controller: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection (rev 01)

11 Make sure that the correct PCI addresses are listed in the script bind_to_igb_uio.sh.
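The script is shipped with the vBNG helper scripts; conceptually it binds the data-plane ports to the igb_uio driver. A minimal sketch, assuming two of the PCI addresses shown in the previous step, might look like the following (the actual script may differ):

#!/bin/bash
# Bind the data-plane ports to igb_uio so the DPPD/BNG application can use them
$RTE_SDK/tools/dpdk_nic_bind.py -b igb_uio 00:06.0 00:07.0
$RTE_SDK/tools/dpdk_nic_bind.py --status    # confirm the ports now show under the DPDK-compatible driver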

12 Download BNG packages

wget https://01.org/sites/default/files/downloads/intel-data-plane-performance-demonstrators/dppd-bng-v013.zip

13 Extract DPPD BNG sources

unzip dppd-bng-v013.zip

14 Build BNG DPPD application

yum -y install ncurses-devel
cd dppd-BNG-v013
make

15 Refer to Section 6.3.3, "Extra Preparations on the Compute Node," before running the BNG application in the VM inside the compute node.

16 Make sure that the application starts

./build/dppd -f config/handle_none.cfg

The handle_none configuration passes all traffic straight through between ports, which is essentially similar to an L2 forwarding test. The config directory contains additional, more complex BNG configurations and Pktgen scripts. Additional BNG-specific workloads can be found in the dppd-BNG-v013/pktgen-scripts directory.

Following is a sample graphic of the BNG running in a VM with 2 ports


Exit the application by pressing ESC or CTRL-C

Refer to Section 6.3.2 regarding installation and running of the software traffic generator.

For a sanity check, users can use the pktgen wrapper script onps_pktgen-64bytes-UDP-2ports.sh to run Pktgen (on its dedicated server) and test the handle_none throughput for two physical and two virtual ports. You'll need to update the PKTGEN_DIR variable at the top of the file to point to the right directory, which, following Section 6.3.2, is:

PKTGEN_DIR=/home/stack/git/Pktgen-DPDK

6.3.2 Installation and Configuration of the Back-to-Back Host (Packet Generator)

The back-to-back host can be any Intel® Xeon® processor-based system, or it can be any compute node that has been prepared using the instructions in Section 5.1 and Section 5.3. For simplicity, Intel assumes the latter is the case. Also assume that the git directory for the stack user is /home/stack/git.

1 In the git directory, get the source from GitHub:

git clone https://github.com/Pktgen/Pktgen-DPDK.git
cd Pktgen-DPDK

2 An extra package must be installed for Pktgen to compile correctly

yum -y install libpcap-devel

Pktgen comes with its own distribution of the DPDK sources, and this bundled version of DPDK must be used. It contains some Wind River-specific helper libraries that Pktgen depends on and that are not in the default DPDK distribution.

3 The $RTE_TARGET variable must be set to a specific value; otherwise, these libraries will not build.

cd
vi .bashrc

Add the following three lines to the end

export RTE_SDK=$HOME/Pktgen-DPDK/dpdk
export RTE_TARGET=x86_64-pktgen-linuxapp-gcc
export PKTGEN_DIR=$HOME/Pktgen-DPDK

4 Re-login or execute the following command

source ~/.bashrc

5 Build the basic DPDK libraries and extra helpers

cd $RTE_SDK
make install T=$RTE_TARGET

6 Build Pktgen

cd examples/pktgen
make

7 Adapt the dpdk_nic_bind.py invocation to the actual NICs in use so that both interfaces are bound to igb_uio and DPDK can use them. See the details with the command that follows:

./tools/dpdk_nic_bind.py --status
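For example, binding both interfaces might look like the following; the PCI addresses are placeholders to be taken from the --status output:

modprobe uio
insmod $RTE_SDK/$RTE_TARGET/kmod/igb_uio.ko
./tools/dpdk_nic_bind.py -b igb_uio 0000:01:00.0 0000:01:00.1
./tools/dpdk_nic_bind.py --status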

8 Use onps_pktgen-64bytes-UDP-2ports.sh from onps_server_1_2.tar.gz.


9 Now run the script as root, after the compute node has been set up as in Section 6.3.3, the BNG VM has been prepared as in Section 6.3.1, and the BNG has been started inside the VM.

6.3.3 Extra Preparations on the Compute Node

1 Do the following as the stack user:

cd /home/stack/devstack
vi local.conf

2 Comment out the following

PHYSICAL_NETWORK=physnet1
OVS_PHYSICAL_BRIDGE=br-p1p1

And at the same time add the following line right below the previous commented ones

OVS_BRIDGE_MAPPINGS=default:br-p1p1,physnet1:br-p1p2

3 Run again as stack user

./unstack.sh

./stack.sh

This causes both physical interfaces to come up and get bound to DPDK. Also, a bridge is created on top of each of these interfaces:

ovs-vsctl show
b52bd3ed-0f6c-45b9-ace1-846d901bed64
    Bridge br-p1p1
        Port br-p1p1
            Interface br-p1p1
                type: internal
        Port p1p1
            Interface p1p1
                type: dpdkphy
                options: {port=0}
        Port phy-br-p1p1
            Interface phy-br-p1p1
                type: patch
                options: {peer=int-br-p1p1}
    Bridge br-int
        fail_mode: secure
        Port int-br-p1p2
            Interface int-br-p1p2
                type: patch
                options: {peer=phy-br-p1p2}
        Port int-br-p1p1
            Interface int-br-p1p1
                type: patch
                options: {peer=phy-br-p1p1}
        Port br-int
            Interface br-int
                type: internal
    Bridge br-p1p2
        Port phy-br-p1p2
            Interface phy-br-p1p2
                type: patch
                options: {peer=int-br-p1p2}
        Port p1p2
            Interface p1p2
                type: dpdkphy
                options: {port=1}
        Port br-p1p2
            Interface br-p1p2
                type: internal

4 Move the p1p2 physical port under the same bridge as p1p1

ovs-vsctl del-port p1p2
ovs-vsctl add-port br-p1p1 p1p2 -- set Interface p1p2 type=dpdkphy options:port=1

5 Stop the OpenStack (Neutron) agent:

./rejoin-stack.sh
Ctrl-a 1
Ctrl-c
Ctrl-a d

6 Add the dpdkvhost interfaces for the VM

ovs-vsctl --no-wait add-port br-p1p1 port3 -- set Interface port3 type=dpdkvhost ofport_request=3
ovs-vsctl --no-wait add-port br-p1p1 port4 -- set Interface port4 type=dpdkvhost ofport_request=4

7 Find out the OpenFlow port number of each interface:

ovs-ofctl show br-p1p1

The output should be similar to the following. Note the number to the left of each interface, because it is the OpenFlow port number used in the flow rules below:

OFPT_FEATURES_REPLY (xid=0x2): dpid:0000286031010000
n_tables:254, n_buffers:256
capabilities: FLOW_STATS TABLE_STATS QUEUE_STATS ARP_MATCH_IP
actions: OUTPUT SET_VLAN_VID SET_VLAN_PCP STRIP_VLAN SET_DL_SRC SET_DL_DST SET_NW_SRC SET_NW_DST SET_TP_SRC SET_TP_DST
 1(phy-br-p1p1): addr:9e:ae:92:25:3c:c1
     config:     0
     state:      0
     speed: 0 Mbps now, 0 Mbps max
 2(p1p2): addr:9e:ae:92:25:3c:c1
     config:     0
     state:      0
     speed: 0 Mbps now, 0 Mbps max
 3(port3): addr:9e:ae:92:25:3c:c1
     config:     0
     state:      0
     speed: 0 Mbps now, 0 Mbps max
 4(port4): addr:49:04:ff:7f:00:00
     config:     0
     state:      0
     speed: 0 Mbps now, 0 Mbps max
 16(p1p1): addr:49:04:ff:7f:00:00
     config:     0
     state:      0
     speed: 0 Mbps now, 0 Mbps max
 LOCAL(br-p1p1): addr:9e:ae:92:25:3c:c1
     config:     0
     state:      0
     speed: 0 Mbps now, 0 Mbps max
OFPT_GET_CONFIG_REPLY (xid=0x4): frags=normal miss_send_len=0

8 Clean up the flow table of the bridge

ovs-ofctl del-flows br-p1p1

9 Program the flows so each physical interface forwards the packets to a dpdkvhost interface and the other way round

ovs-ofctl add-flow br-p1p1 in_port=16,dl_type=0x0800,idle_timeout=0,action=output:3
ovs-ofctl add-flow br-p1p1 in_port=3,dl_type=0x0800,idle_timeout=0,action=output:16
ovs-ofctl add-flow br-p1p1 in_port=4,dl_type=0x0800,idle_timeout=0,action=output:2
ovs-ofctl add-flow br-p1p1 in_port=2,dl_type=0x0800,idle_timeout=0,action=output:4
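To confirm that the four rules are installed and matching traffic, the flow table can be dumped (a suggested check; the packet counters should increase once the packet generator is running):

ovs-ofctl dump-flows br-p1p1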


10 Users can now spawn their vBNG

qemu-kvm -cpu host -enable-kvm -m 4096 -smp 4,cores=4,threads=1,sockets=1 -name VM1 \
    -hda <path to the VM image file> -mem-path /dev/hugepages -mem-prealloc -vnc :2 -daemonize \
    -net nic,model=virtio,macaddr=00:1e:77:68:09:fd \
    -net tap,ifname=tap1,script=no,downscript=no \
    -netdev type=tap,id=net1,script=no,downscript=no,ifname=port3,vhost=on \
    -device virtio-net-pci,netdev=net1,mac=00:00:01:00:00:01,csum=off,gso=off,guest_tso4=off,guest_tso6=off,guest_ecn=off \
    -netdev type=tap,id=net2,script=no,downscript=no,ifname=port4,vhost=on \
    -device virtio-net-pci,netdev=net2,mac=00:00:01:00:00:02,csum=off,gso=off,guest_tso4=off,guest_tso6=off,guest_ecn=off


Appendix A Additional OpenDaylight Information

This section describes how OpenDaylight can be used in a multi-node setup. Two hosts are used: one running OpenDaylight, the OpenStack controller + compute, and OVS; the second host is a compute node. This section describes how to create a VXLAN tunnel and VMs, and how to ping from one VM to another.

Note: Due to a known defect in ODL (https://bugs.opendaylight.org/show_bug.cgi?id=2469), the multi-node setup could not be verified.

Following is a sample local.conf for the OpenDaylight host:

[[local|localrc]]
FORCE=yes

HOST_NAME=<name of this machine>
HOST_IP=<ip of this machine>
HOST_IP_IFACE=<mgmt ip isolated from internet>

PUBLIC_INTERFACE=<isolated IP could be same as HOST_IP_IFACE>
VLAN_INTERFACE=
FLAT_INTERFACE=

ADMIN_PASSWORD=password
MYSQL_PASSWORD=password
DATABASE_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=no-token-password
HORIZON_PASSWORD=password
RABBIT_PASSWORD=password

disable_service n-net
disable_service n-cpu

enable_service q-svc
enable_service q-agt
enable_service q-dhcp
enable_service q-l3
enable_service q-meta
enable_service neutron
enable_service horizon

LOGFILE=$DEST/stack.sh.log
SCREEN_LOGDIR=$DEST/screen
SYSLOG=True
LOGDAYS=1

# ODL start
Q_HOST=$HOST_IP
enable_service odl-server
enable_service odl-compute
ODL_MGR_IP=10.1.1.107
ENABLED_SERVICES+=,n-api,n-crt,n-obj,n-cpu,n-cond,n-sch,n-novnc,n-cauth,n-cauth,nova
ENABLED_SERVICES+=,cinder,c-api,c-vol,c-sch,c-bak

Q_PLUGIN=ml2

Q_AGENT=openvswitch
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch,opendaylight
Q_ML2_PLUGIN_TYPE_DRIVERS=vlan,flat,local
Q_ML2_TENANT_NETWORK_TYPE=vxlan

ENABLE_TENANT_TUNNELS=True
ENABLE_TENANT_VLANS=True

PHYSICAL_NETWORK=physnet1
ML2_VLAN_RANGES=physnet1:1000:1010
OVS_PHYSICAL_BRIDGE=br-

MULTI_HOST=True

[[post-config|$NOVA_CONF]]

# disable nova security groups
[DEFAULT]
firewall_driver=nova.virt.firewall.NoopFirewallDriver
novncproxy_host=0.0.0.0
novncproxy_port=6080

Here is a sample local.conf for the compute node:

[[local|localrc]]

FORCE=yes
MULTI_HOST=True

HOST_NAME=<name of this machine>
HOST_IP=<ip of this machine>
HOST_IP_IFACE=<isolated interface>
SERVICE_HOST_NAME=<name of the controller machine>
SERVICE_HOST=<ip of controller machine>
Q_HOST=$SERVICE_HOST

MYSQL_HOST=$SERVICE_HOST
RABBIT_HOST=$SERVICE_HOST

GLANCE_HOST=$SERVICE_HOST
GLANCE_HOSTPORT=$SERVICE_HOST:9292
KEYSTONE_AUTH_HOST=$SERVICE_HOST
KEYSTONE_SERVICE_HOST=<ip of controller machine>

ADMIN_PASSWORD=password
MYSQL_PASSWORD=password
DATABASE_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=no-token-password
HORIZON_PASSWORD=password
RABBIT_PASSWORD=password

disable_all_services

enable_service rabbit
enable_service n-cpu
enable_service q-agt
enable_service odl-compute

DEST=/opt/stack
LOGFILE=$DEST/stack.sh.log
SCREEN_LOGDIR=$DEST/screen
SYSLOG=True
LOGDAYS=1

ODL_MGR_IP=<ip of controller machine>

Q_PLUGIN=ml2
Q_AGENT=openvswitch
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch,opendaylight
Q_ML2_PLUGIN_TYPE_DRIVERS=vxlan
OVS_NUM_HUGEPAGES=8192
OVS_DATAPATH_TYPE=netdev

OVDK_OVS_GIT_TAG=

ENABLE_TENANT_TUNNELS=True
ENABLE_TENANT_VLANS=True

Q_ML2_TENANT_NETWORK_TYPE=vxlan
ML2_VLAN_RANGES=physnet1:1000:1010
PHYSICAL_NETWORK=physnet1
OVS_PHYSICAL_BRIDGE=br-p1p1

[[post-config|$NOVA_CONF]]
[DEFAULT]
firewall_driver=nova.virt.firewall.NoopFirewallDriver
vnc_enabled=True
vncserver_listen=0.0.0.0
vncserver_proxyclient_address=$HOST_IP

A.1 Create VMs using the DevStack Horizon GUI

After starting the OpenDaylight controller as described in Section 6.2.1, run stack.sh on the controller and compute nodes.

Log in to http://<control node ip address>:8080 to start the Horizon GUI.

Verify that the node shows up in the following GUI

Create a new VXLAN network:

1 Click on the Networks tab

2 Click on the Create Network button

3 Enter the Network name then click Next


4 Enter the subnet information then click Next


5 Add additional information then click Next

6 Click the Create button

7 Create a VM instance by clicking the Launch Instances button


8 Click on the Details tab to enter VM details


9 Click on the Networking tab then enter network information

VMs will now be created.

Instead of disabling the OVSDB Neutron service by removing the OVSDB Neutron bundle file, it is possible to disable the bundle from the OSGi console. However, there does not appear to be a way to make this persistent, so it must be done each time the controller restarts.


Once the controller is up and running, connect to the OSGi console. The ss command displays all of the installed bundles and their status; adding a string filters the list of bundles. List the OVSDB bundles:

osgi> ss ovs
Framework is launched.

id    State    Bundle
106   ACTIVE   org.opendaylight.ovsdb.northbound_0.5.0
112   ACTIVE   org.opendaylight.ovsdb_0.5.0
262   ACTIVE   org.opendaylight.ovsdb.neutron_0.5.0

Note that there are three OVSDB bundles running (the OVSDB neutron bundle has not been removed in this case)

Disable the OVSDB neutron bundle and then list the OVSDB bundles again

osgi> stop 262
osgi> ss ovs
Framework is launched.

id    State      Bundle
106   ACTIVE     org.opendaylight.ovsdb.northbound_0.5.0
112   ACTIVE     org.opendaylight.ovsdb_0.5.0
262   RESOLVED   org.opendaylight.ovsdb.neutron_0.5.0

Now the OVSDB neutron bundle is in the RESOLVED state which means that it is not active


Appendix B BNG as an Appliance

Please download the latest BNG application from https://01.org/intel-data-plane-performance-demonstrators/downloads. More details about how the BNG works can be found at https://01.org/intel-data-plane-performance-demonstrators/quick-overview.


Appendix C Glossary

Acronym Description

ATR Application Targeted Routing

COTS Commercial Off‐The-Shelf

DPI Deep Packet Inspection

FCS Frame Check Sequence

GRE Generic Routing Encapsulation

GRO Generic Receive Offload

IOMMU InputOutput Memory Management Unit

Kpps Kilo packets per second

KVM Kernel-based Virtual Machine

LRO Large Receive Offload

MSI Message Signaling Interrupt

MPLS Multi-protocol Label Switching

Mpps Million packets per second

NIC Network Interface Card

pps Packets per second

QAT Quick Assist Technology

QinQ VLAN stacking (802.1ad)

RA Reference Architecture

RSC Receive Side Coalescing

RSS Receive Side Scaling

SP Service Provider

SR-IOV Single root IO Virtualization

TCO Total Cost of Ownership

TSO TCP Segmentation Offload


Appendix D References

Document Name Source

Internet Protocol version 4 http://www.ietf.org/rfc/rfc791.txt

Internet Protocol version 6 http://www.faqs.org/rfc/rfc2460.txt

Intel® 82599 10 Gigabit Ethernet Controller Datasheet http://www.intel.com/content/www/us/en/ethernet-controllers/82599-10-gbe-controller-datasheet.html

Intel DDIO https://www-ssl.intel.com/content/www/us/en/io/direct-data-i-o.html

Bandwidth Sharing Fairness http://www.intel.com/content/www/us/en/network-adapters/10-gigabit-network-adapters/10-gbe-ethernet-flexible-port-partitioning-brief.html

Design Considerations for efficient network applications with Intel® multi-core processor-based systems on Linux

http://download.intel.com/design/intarch/papers/324176.pdf

OpenFlow with Intel 82599 http://ftp.sunet.se/pub/Linux/distributions/bifrost/seminars/workshop-2011-03-31/Openflow_1103031.pdf

Wu, W., DeMar, P. & Crawford, M. (2012). A Transport-Friendly NIC for Multicore/Multiprocessor Systems.

IEEE Transactions on Parallel and Distributed Systems, vol. 23, no. 4, April 2012. http://lss.fnal.gov/archive/2010/pub/fermilab-pub-10-327-cd.pdf

Why Does Flow Director Cause Packet Reordering? http://arxiv.org/ftp/arxiv/papers/1106/1106.0443.pdf

IA packet processing http://www.intel.com/p/en_US/embedded/hwsw/technology/packet-processing

High Performance Packet Processing on Cloud Platforms using Linux with Intel® Architecture

http://networkbuilders.intel.com/docs/network_builders_RA_packet_processing.pdf

Packet Processing Performance of Virtualized Platforms with Linux and Intel® Architecture

http://networkbuilders.intel.com/docs/network_builders_RA_NFV.pdf

DPDK http://www.intel.com/go/dpdk

Intel® DPDK Accelerated vSwitch https://01.org/packet-processing


LEGAL

By using this document in addition to any agreements you have with Intel you accept the terms set forth below

You may not use or facilitate the use of this document in connection with any infringement or other legal analysis concerning Intel products described herein You agree to grant Intel a non-exclusive royalty-free license to any patent claim thereafter drafted which includes subject matter disclosed herein

INFORMATION IN THIS DOCUMENT IS PROVIDED IN CONNECTION WITH INTEL PRODUCTS NO LICENSE EXPRESS OR IMPLIED BY ESTOPPEL OR OTHERWISE TO ANY INTELLECTUAL PROPERTY RIGHTS IS GRANTED BY THIS DOCUMENT EXCEPT AS PROVIDED IN INTELS TERMS AND CONDITIONS OF SALE FOR SUCH PRODUCTS INTEL ASSUMES NO LIABILITY WHATSOEVER AND INTEL DISCLAIMS ANY EXPRESS OR IMPLIED WARRANTY RELATING TO SALE ANDOR USE OF INTEL PRODUCTS INCLUDING LIABILITY OR WARRANTIES RELATING TO FITNESS FOR A PARTICULAR PURPOSE MERCHANTABILITY OR INFRINGEMENT OF ANY PATENT COPYRIGHT OR OTHER INTELLECTUAL PROPERTY RIGHT

Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors

Performance tests such as SYSmark and MobileMark are measured using specific computer systems components software operations and functions Any change to any of those factors may cause the results to vary You should consult other information and performance tests to assist you in fully evaluating your contemplated purchases including the performance of that product when combined with other products

The products described in this document may contain design defects or errors known as errata which may cause the product to deviate from published specifications Current characterized errata are available on request Contact your local Intel sales office or your distributor to obtain the latest specifications and before placing your product order

Intel technologies may require enabled hardware specific software or services activation Check with your system manufacturer or retailer Tests document performance of components on a particular test in specific systems Differences in hardware software or configuration will affect actual performance Consult other sources of information to evaluate performance as you consider your purchase For more complete information about performance and benchmark results visit httpwwwintelcomperformance

All products computer systems dates and figures specified are preliminary based on current expectations and are subject to change without notice Results have been estimated or simulated using internal Intel analysis or architecture simulation or modeling and provided to you for informational purposes Any differences in your system hardware software or configuration may affect your actual performance

No computer system can be absolutely secure Intel does not assume any liability for lost or stolen data or systems or any damages resulting from such losses

Intel does not control or audit third-party web sites referenced in this document You should visit the referenced web site and confirm whether referenced data are accurate

Intel Corporation may have patents or pending patent applications trademarks copyrights or other intellectual property rights that relate to the presented subject matter The furnishing of documents and other materials and information does not provide any license express or implied by estoppel or otherwise to any such patents trademarks copyrights or other intellectual property rights

copy 2014 Intel Corporation All rights reserved Intel the Intel logo Core Xeon and others are trademarks of Intel Corporation in the US andor other countries Other names and brands may be claimed as the property of others

Page 13: Intel Open Network Platform Server Reference Architecture ... · Intel® ONP Server Reference Architecture Solutions Guide 1.0 Audience and Purpose The primary audiences for this

13

Intelreg ONP Server Reference ArchitectureSolutions Guide

40 Software Versions

Table 4-1 Software Versions

Software Component Function VersionConfiguration

Fedora 20 x86_64 Host OS 3156-200fc20x86_64

Qemu‐kvm Virtualization technology Modified QEMU 162 (bundled with Intelreg DPDK Accelerated vSwitch)

Data Plane Development Kit (DPDK)

Network Stack bypass and libraries for packet processing Includes user space poll mode drivers

171

Intelreg DPDK Accelerated vSwitch

vSwitch v120commit id 6210bb0a6139b20283de115f87aa7a381b04670f

Open vSwitch vSwitch Open vSwitch V 23Commit id b35839f3855e3b812709c6ad1c9278f498aa9935

OpenStack SDN Orchestrator Juno Release + Intel patches (openstack_ovdkl02-907zip)

DevStack Tool for Open Stack deployment

httpsgithubcomopenstack-devdevstackgit Commit id d6f700db33aeab68916156a98971aef8cfa53a2e

OpenDaylight SDN Controller HeliumSR1

Suricata IPS application Suricata v204 (current Fedora 20 package)

BNG DPPD Broadband Network Gateway DPDK Performance Demonstrator Application

DPPD v013https01orgintel-data-plane-performance-demonstratorsdownloads

PktGen Software Network Package Generator v277

Intelreg ONP Server Reference ArchitectureSolutions Guide

14

41 Obtaining Software IngredientsTable 4-2 Software Ingredients

Software Component

Software Sub-components Patches Location Comments

Fedora 20 httpdownloadfedoraprojectorgpubfedoralinuxreleases20Fedorax86_64isoFedora-20-x86_64-DVDiso

Standard Fedora 20 iso image

Data Plane Development Kit (DPDK)

DPDK poll mode driver sample apps (bundled)

httpdpdkorggitdpdkCommit id 99213f3827bad956d74e2259d06844012ba287a4

All sub-components in one zip file

Intelreg DPDK Accelerated vSwitch (OVDK)

dpdk-ovs qemu ovs-db vswitchd ovs_client (bundled)

httpsgithubcom01orgdpdk-ovsgitCommit id 6210bb0a6139b20283de115f87aa7a381b04670f

v120

Open vSwitch httpsgithubcomopenvswitchovsgitCommit id b35839f3855e3b812709c6ad1c9278f498aa9935

OpenStack Juno release To be deployed using DevStack(see following row)

Three patches downloaded as one tarball Then follow the instructions to deploy the Nodes

DevStack Patches for DevStack and Nova

httpsgithubcomopenstack-devdevstackgitCommit id d6f700db33aeab68916156a98971aef8cfa53a2eThen apply to that commit the patches inhttpsdownload01orgpacket-processingONPS12openstack_ovdkl02-907zip

Two patches downloaded as one tarball Then follow the instructions to deploy

OpenDaylight httpnexusopendaylightorgcontentrepositoriesopendaylightreleaseorgopendaylightintegrationdistribution-karaf021-Helium-SR1distribution-karaf-021-Helium-SR1targz

Intelreg ONP Server Release 12 Script

Helper scripts to setup SRT 12 using DevStack

httpsdownload01orgpacket-processingONPS12onps_server_1_2targz

BNG DPPD Broadband Network Gateway DPDK Performance

https01orgintel-data-plane- performance-demonstratorsdppd-bng- v013zip

PktGen Software Network Package Generator

httpsgithubcomPktgenPktgen-DPDKgitcommit id 5e8633c99e9771467dc26b64a4ff232c7e9fba2a

BNG Helper scripts

Intelreg ONP for Server Configuration Scripts for vBNG

https01orgsitesdefaultfilespagevbng-scriptstgz

Suricata Package from Fedora 20 yum install suricata

15

Intelreg ONP Server Reference ArchitectureSolutions Guide

50 Installation and Configuration Guide

This section describes the installation and configuration instructions to prepare the controller and compute nodes

51 Instructions Common to Compute and Controller Nodes

This section describes how to prepare both the controller and compute nodes with the right BIOS settings and operating system installation The preferred operating system is Fedora 20 although it is considered relatively easy to use this solutions guide for other Linux distributions

511 BIOS SettingsTable 5-1 BIOS Settings

Configuration Setting forController Node

Setting forCompute Node

Enhanced Intel SpeedStep Enabled Disabled

Processor C3 Disabled Disabled

Processor C6 Disabled Disabled

Intelreg Virtualization Technology for Directed IO (Intelreg Vt-d) Disabled Enabled(OpenStack Numa Placement only)

Intel Hyper-Threading Technology (HTT) Enabled Disabled

MLC Streamer Enabled Enabled

MLC Spatial Prefetcher Enabled Enabled

DCU Instruction Prefetcher Enabled Enabled

Direct Cache Access (DCA) Enabled Enabled

CPU Power and Performance Policy Performance Performance

Intel Turbo boost Enabled Off

Memory RAS and Performance Configuration -gt Numa Optimized Enabled Enabled

Intelreg ONP Server Reference ArchitectureSolutions Guide

16

512 Operating System Installation and ConfigurationFollowing are some generic instructions for installing and configuring the operating system Other ways of installing the operating system are not described in this solutions guide such as network installation PXE boot installation USB key installation etc

5121 Getting the Fedora 20 DVD

1 Download the 64-bit Fedora 20 DVD (not Fedora 20 Live Media) from the following site

httpfedoraprojectorgenget-fedoraformats

or from direct URL

httpdownloadfedoraprojectorgpubfedoralinuxreleases20Fedorax86_64isoFedora-20-x86_64-DVDiso

2 Burn the ISO file to DVD and create an installation disk

5122 Fedora 20 Installation

Use the DVD to install Fedora 20 During the installation click Software selection then choose the following

1 C Development Tool and Libraries

2 Development Tools

Also create a user stack and check the box Make this user administrator during the installation The user stack is used in OpenStack installation

Note Please make sure to download and use the onps_server_1_2targz tarball Start with the README file Yoursquoll get instructions on how to use Intelrsquos scripts to automate most of the installation steps described in this section and this saves you time When using Intelrsquos scripts you can jump to Section 54 after installing OpenDaylight based on the instructions described in Section 621

5123 Additional Packages Installation and Upgrade

Some packages are not installed with the standard Fedora 20 installation but are required by Intelreg Open Network Platform Software (ONPS) components These packages should be installed by the user

git ntp patch socat python-passlib libxslt-devel libffi-devel fuse-devel gluster python-cliff

ONPS supports Fedora kernel 3156 which is newer than native Fedora 20 kernel 31110 To upgrade to 3156 follow these steps

1 Download kernel packages

wget httpskojipkgsfedoraprojectorgpackageskernel3156200fc20x86_64 kernel-3156-200fc20x86_64rpm

wget httpskojipkgsfedoraprojectorgpackageskernel3156200fc20x86_64 kernel-devel-3156-200fc20x86_64rpm

wget httpskojipkgsfedoraprojectorgpackageskernel3156200fc20x86_64 kernel-modules-extra-3156-200fc20x86_64rpm

17

Intelreg ONP Server Reference ArchitectureSolutions Guide

2 Install kernel packages

rpm -i kernel-3156-200fc20x86_64rpmrpm -i kernel-devel-3156-200fc20x86_64rpmrpm -i kernel-modules-extra-3156-200fc20x86_64rpm

3 Reboot system to allow booting into 3156 kernel

Note ONPS depends on libraries provided by your Linux distribution As such it is recommended that you regularly update your Linux distribution with the latest bug fixes and security patches to reduce the risk of security vulnerabilities in your systems

After installing the required packages the operating system should be updated with the following command

yum update -y

This command upgrades to the latest kernel that Fedora supports In order to maintain kernel version (3156) the yum configuration file needs modified with this command

echo exclude=kernel gtgt etcyumconf

before running yum update

After the update completes the system needs to be rebooted

5124 Disable and Enable Services

For OpenStack the following services were disabled selinux firewall and NetworkManager Run the following commands

sed -i sSELINUX=enforcingSELINUX=disabledg etcselinuxconfig systemctl disable firewalldservicesystemctl disable NetworkManagerservice

The following services should be enabled ntp sshd and network Run the following commands

systemctl enable ntpdservicesystemctl enable ntpdateservicesystemctl enable sshdservicechkconfig network on

It is important to keep the timing synchronized between all nodes It is also necessary to use a known NTP server for all nodes Users can edit etcntpconf to add a new server and remove default servers The following example replaces a default NTP server with a local NTP server 100012 and comments out other default servers

sed -i sserver 0fedorapoolntporg iburstserver 100012g etcntpconfsed -i sserver 1fedorapoolntporg iburst server 1fedorapoolntporg iburst g etcntpconfsed -i sserver 2fedorapoolntporg iburst server 2fedorapoolntporg iburst g etcntpconfsed -i sserver 3fedorapoolntporg iburst server 3fedorapoolntporg iburst g etcntpconf

Intelreg ONP Server Reference ArchitectureSolutions Guide

18

52 Controller Node SetupThis section describes the controller node setup It is assumed that the user successfully followed the operating system installation and configuration sections

Note Make sure to download and use the onps_server_1_2targz tarball Start with the README file Yoursquoll get instructions on how to use Intelrsquos scripts to automate most of the installation steps described in this section and this saves you time

521 OpenStack (Juno)This section documents features and limitations that are supported with the Intelreg DPDK Accelerated vSwitch and OpenStack Juno

5211 Network Requirements

General

At least two networks are required to build OpenStack infrastructure in a lab environment One network is used to connect all nodes for OpenStack management (management network) and the other one is a private network exclusively for an OpenStack internal connection (tenant network) between instances (or virtual machines)

One additional network is required for Internet connectivity as installing OpenStack requires pulling packages from various sourcesrepositories on the Internet

Some users might want to have Internet andor external connectivity for OpenStack instances (virtual machines) In this case an optional network can be used

The assumption is that the targeting OpenStack infrastructure contains multiple nodes one is controller node and one or more are compute node(s)

Network Configuration Example

The following is an example of how to configure networks for OpenStack infrastructure The example uses four network interfaces as follows

• ens2f1: For the Internet network - used to pull all necessary packages/patches from repositories on the Internet; configured to obtain a DHCP address.

• ens2f0: For the Management network - used to connect all nodes for OpenStack management; configured to use network 10.11.0.0/16.

• p1p1: For the Tenant network - used for OpenStack internal connections between virtual machines; configured with no IP address.

• p1p2: For the optional External network - used for virtual machine Internet/external connectivity; configured with no IP address. This interface is only needed on the controller node if an external network is configured; it is not needed on compute nodes.

Note that among these interfaces, the interface for the virtual network (in this example, p1p1) must be an 82599 port because it is used by DPDK and the Intel® DPDK Accelerated vSwitch. Also note that a static IP address should be used for the management network interface.

In Fedora 20 the network configuration files are located at

/etc/sysconfig/network-scripts


To configure a network on the host system edit the following network configuration files

ifcfg-ens2f1:
DEVICE=ens2f1
TYPE=Ethernet
ONBOOT=yes
BOOTPROTO=dhcp

ifcfg-ens2f0:
DEVICE=ens2f0
TYPE=Ethernet
ONBOOT=yes
BOOTPROTO=static
IPADDR=10.11.12.11
NETMASK=255.255.0.0

ifcfg-p1p1:
DEVICE=p1p1
TYPE=Ethernet
ONBOOT=yes
BOOTPROTO=none

ifcfg-p1p2:
DEVICE=p1p2
TYPE=Ethernet
ONBOOT=yes
BOOTPROTO=none

Note: Do not configure an IP address on p1p1 (the 10 Gb/s interface); otherwise, DPDK does not work when binding the driver during the OpenStack Neutron installation.

Note: 10.11.12.11 and 255.255.0.0 are the static IP address and netmask for the management network. It is necessary to have a static IP address on this subnet. The IP address 10.11.12.11 is just an example.

5212 Storage Requirements

By default, DevStack uses block storage (Cinder) with a volume group named stack-volumes. If not specified, stack-volumes is created with 10 GB of space from a local file system. Note that stack-volumes is the name of the volume group; it can contain more than one volume.

The following example shows how to use spare local disks /dev/sdb and /dev/sdc to form stack-volumes on a controller node by running the following commands:

pvcreate /dev/sdb
pvcreate /dev/sdc
vgcreate stack-volumes /dev/sdb /dev/sdc
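To confirm that the volume group was assembled correctly, the standard LVM reporting tools can be used (a quick optional check, not required by DevStack):

pvs
vgs stack-volumes
# vgs should report the combined capacity of /dev/sdb and /dev/sdc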

5213 OpenStack Installation Procedures

General

DevStack is used to deploy OpenStack in this example The following procedure uses an actual example of an installation performed in an Intel test lab consisting of one controller node (controller) and one compute node (compute)

Controller Node Installation Procedures

The following example uses a host for controller node installation with the following

• Hostname: sdnlab-k01

• Internet network IP address: obtained from DHCP server

• OpenStack Management IP address: 10.11.12.1

• User/password: stack/stack

Root User Actions

Login as su or root user and perform the following

1 Add stack user to sudoer list

echo "stack ALL=(ALL) NOPASSWD: ALL" >> /etc/sudoers

2 Edit /etc/libvirt/qemu.conf and add or modify the following lines:

cgroup_controllers = [ "cpu", "devices", "memory", "blkio", "cpuset", "cpuacct" ]

cgroup_device_acl = [
    "/dev/null", "/dev/full", "/dev/zero",
    "/dev/random", "/dev/urandom",
    "/dev/ptmx", "/dev/kvm", "/dev/kqemu",
    "/dev/rtc", "/dev/hpet", "/dev/net/tun",
    "/mnt/huge", "/dev/vhost-net"
]

hugetlbfs_mount = "/mnt/huge"

3 Restart the libvirt service and make sure libvirtd is active:

systemctl restart libvirtd.service
systemctl status libvirtd.service

Stack User Actions

1 Login as a stack user

2 Configure the appropriate proxies (yum, http, https, and git) for package installation and make sure these proxies are functional. Note that on the controller node, localhost and its IP address should be included in the no_proxy setup (for example, export no_proxy=localhost,10.11.12.1). A sketch of such a setup follows.
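The following is a minimal sketch of such a proxy setup; the proxy URL is a placeholder and must be replaced with the actual proxy of your environment:

export http_proxy=http://proxy.example.com:911
export https_proxy=http://proxy.example.com:911
export no_proxy=localhost,10.11.12.1
git config --global http.proxy http://proxy.example.com:911
# For yum, a proxy line can be added to /etc/yum.conf (requires root):
# proxy=http://proxy.example.com:911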

3 Intelreg DPDK Accelerated vSwitch patches for OpenStack

The file openstack_ovdkl02-907.zip contains the necessary patches for OpenStack; these patches are currently not part of upstream OpenStack. The file can be downloaded from:

https://01.org/sites/default/files/page/openstack_ovdkl02-907.zip

Place the file in the /home/stack directory and unzip it. Three patch files, devstack.patch, nova.patch, and neutron.patch, will be present after unzipping:

cd /home/stack
wget https://01.org/sites/default/files/page/openstack_ovdkl02-907.zip
unzip openstack_ovdkl02-907.zip

4 Download DevStack source

git clone https://github.com/openstack-dev/devstack.git

5 Check out DevStack with Intelreg DPDK Accelerated vSwitch and patch

cd /home/stack/devstack
git checkout d6f700db33aeab68916156a98971aef8cfa53a2e
patch -p1 < /home/stack/devstack.patch


6 Download and patch Nova and Neutron

sudo mkdir /opt/stack
sudo chown stack:stack /opt/stack
cd /opt/stack
git clone https://github.com/openstack/nova.git
git clone https://github.com/openstack/neutron.git
cd /opt/stack/nova
git checkout b7738bfb6c2f271d047e8f20c0b74ef647367111
patch -p1 < /home/stack/nova.patch

7 Create localconf file in homestackdevstack

8 Pay attention to the following in the localconf file

a Use Rabbit for messaging services (Rabbit is on by default) In the past Fedora only supported QPID for OpenStack Now it only supports Rabbit

b Explicitly disable Nova compute service on the controller This is because by default Nova compute service is enabled

disable_service n-cpu

c To use Open vSwitch specify in configuration for ML2 plug-in

Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch

d Explicitly disable tenant tunneling and enable tenant VLAN This is because by default tunneling is used

ENABLE_TENANT_TUNNELS=False
ENABLE_TENANT_VLANS=True

e A sample localconf file for the controller node is as follows:

Controller node:

[[local|localrc]]

FORCE=yes
ADMIN_PASSWORD=password
MYSQL_PASSWORD=password
DATABASE_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=no-token-password
HORIZON_PASSWORD=password
RABBIT_PASSWORD=password

disable_service n-net
disable_service n-cpu
enable_service q-svc
enable_service q-agt
enable_service q-dhcp
enable_service q-l3
enable_service q-meta
enable_service neutron
enable_service horizon

DEST=/opt/stack
LOGFILE=$DEST/stack.sh.log
SCREEN_LOGDIR=$DEST/screen
SYSLOG=True
LOGDAYS=1

HOST_IP_IFACE=ens2f0
PUBLIC_INTERFACE=p1p2
VLAN_INTERFACE=p1p1
FLAT_INTERFACE=p1p1

Q_AGENT=openvswitch
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch
Q_ML2_PLUGIN_TYPE_DRIVERS=vlan,flat,local

ENABLE_TENANT_TUNNELS=False
ENABLE_TENANT_VLANS=True
PHYSICAL_NETWORK=physnet1
ML2_VLAN_RANGES=physnet1:1000:1010
OVS_PHYSICAL_BRIDGE=br-p1p1
MULTI_HOST=True

[[post-config|$NOVA_CONF]]
[DEFAULT]
firewall_driver=nova.virt.firewall.NoopFirewallDriver
novncproxy_host=0.0.0.0
novncproxy_port=6080

9 Install DevStack

cd /home/stack/devstack
./stack.sh

10 For a successful installation the following shows at the end of screen output

stacksh completed in XXX seconds

where XXX is the number of seconds

11 For the controller node only: add physical port(s) to the bridge(s) created by the DevStack installation. The following example configures the two bridges br-p1p1 (for the virtual network) and br-ex (for the external network):

sudo ovs-vsctl add-port br-p1p1 p1p1
sudo ovs-vsctl add-port br-ex p1p2

12 Make sure the proper VLANs are created in the switch connected to physical port p1p1. For example, the previous localconf specifies a VLAN range of 1000-1010, so matching VLANs 1000 to 1010 should be configured in the switch.
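To confirm that the ports were attached correctly, the bridges can be inspected; the bridge and port names match the example above:

sudo ovs-vsctl show
# br-p1p1 should now contain port p1p1, and br-ex should contain port p1p2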


53 Compute Node Setup

This section describes how to complete the setup of the compute nodes. It is assumed that the user has successfully completed the BIOS settings and operating system installation and configuration sections.

Note: Please make sure to download and use the onps_server_1_2.tar.gz tarball. Start with the README file, which gives instructions on how to use Intel's scripts to automate most of the installation steps described in this section, saving you time.

531 Host Configuration

5311 Using DevStack to Deploy vSwitch and OpenStack Components

General

Deploying OpenStack and Intelreg DPDK Accelerated vSwitch using DevStack on a compute node follows the same procedures as on the controller node Differences include

bull Required services are nova compute neutron agent and Rabbit

bull Intelreg DPDK Accelerated vSwitch is used in place of Open vSwitch for neutron agent

Compute Node Installation Example

The following example uses a host for compute node installation with the following

• Hostname: sdnlab-k02

• Lab network IP address: obtained from DHCP server

• OpenStack Management IP address: 10.11.12.2

• User/password: stack/stack

Note the following

• no_proxy setup: localhost and its IP address should be included in the no_proxy setting. In addition, the hostname and IP address of the controller node should also be included. For example:

export no_proxy=localhost,10.11.12.2,sdnlab-k01,10.11.12.1

bull Differences in the localconf file

- The controller is the service host and also hosts the other OpenStack services, such as MySQL, Rabbit, Keystone, and Image; therefore, these should be spelled out. Using the controller node example in the previous section, the service host and its IP address should be:

SERVICE_HOST_NAME=sdnlab-k01
SERVICE_HOST=10.11.12.1

- The only OpenStack services required on compute nodes are messaging, nova compute, and the neutron agent, so the localconf might look like:

disable_all_services
enable_service rabbit
enable_service n-cpu
enable_service q-agt


mdash The user has option to use ovdk or openvswitch for neutron agent

Q_AGENT=ovdk

or

Q_AGENT=openvswitch

Note: For openvswitch, the user can specify regular or accelerated Open vSwitch (accelerated OVS). If accelerated OVS is used, the following setting should be added:

OVS_DATAPATH_TYPE=netdev

Note If both are specified in the same localconf file the later one overwrites the previous one

mdash For the OVDK and accelerated OVS huge pages setting specify number of huge pages to be allocated and mounting point (default is mnthuge)

OVDK_NUM_HUGEPAGES=8192

or

OVS_NUM_HUGEPAGES=8192

mdash For this version Intel uses specific versions for OVDK or Accelerated OVS from their respective repositories Specify the following in the localconf file if OVDK or accelerated OVS is used

OVDK_GIT_TAG=6210bb0a6139b20283de115f87aa7a381b04670f
OVS_GIT_TAG=b35839f3855e3b812709c6ad1c9278f498aa9935

mdash Binding the physical port to the bridge is through the following line in localconf For example to bind port p1p1 to bridge br-p1p1 use

OVS_PHYSICAL_BRIDGE=br-p1p1

- A sample localconf file for a compute node with the ovdk agent follows:

Compute node:

[[local|localrc]]

FORCE=yes
MULTI_HOST=True

HOST_NAME=$(hostname)
HOST_IP=10.11.12.2
HOST_IP_IFACE=ens2f0
SERVICE_HOST_NAME=10.11.12.1
SERVICE_HOST=10.11.12.1

MYSQL_HOST=$SERVICE_HOST
RABBIT_HOST=$SERVICE_HOST

GLANCE_HOST=$SERVICE_HOST
GLANCE_HOSTPORT=$SERVICE_HOST:9292
KEYSTONE_AUTH_HOST=$SERVICE_HOST
KEYSTONE_SERVICE_HOST=$SERVICE_HOST

ADMIN_PASSWORD=password
MYSQL_PASSWORD=password
DATABASE_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=no-token-password
HORIZON_PASSWORD=password
RABBIT_PASSWORD=password

disable_all_services
enable_service rabbit
enable_service n-cpu
enable_service q-agt

DEST=/opt/stack
LOGFILE=$DEST/stack.sh.log
SCREEN_LOGDIR=$DEST/screen
SYSLOG=True
LOGDAYS=1

Q_AGENT=ovdk
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch
Q_ML2_PLUGIN_TYPE_DRIVERS=vlan
OVDK_NUM_HUGEPAGES=8192
OVDK_GIT_TAG=6210bb0a6139b20283de115f87aa7a381b04670f

ENABLE_TENANT_TUNNELS=False
ENABLE_TENANT_VLANS=True
ML2_VLAN_RANGES=physnet1:1000:1010
PHYSICAL_NETWORK=physnet1
OVS_PHYSICAL_BRIDGE=br-p1p1

[[post-config|$NOVA_CONF]]
[DEFAULT]
firewall_driver=nova.virt.firewall.NoopFirewallDriver
vnc_enabled=True
vncserver_listen=0.0.0.0
vncserver_proxyclient_address=$HOST_IP

- A sample localconf file for a compute node with the accelerated OVS agent follows:

Compute node:

[[local|localrc]]

FORCE=yes
MULTI_HOST=True

HOST_NAME=$(hostname)
HOST_IP=10.11.12.2
HOST_IP_IFACE=ens2f0

SERVICE_HOST_NAME=sdnlab-k01
SERVICE_HOST=10.11.12.1

MYSQL_HOST=$SERVICE_HOST
RABBIT_HOST=$SERVICE_HOST

GLANCE_HOST=$SERVICE_HOST
GLANCE_HOSTPORT=$SERVICE_HOST:9292
KEYSTONE_AUTH_HOST=$SERVICE_HOST
KEYSTONE_SERVICE_HOST=$SERVICE_HOST

ADMIN_PASSWORD=password
MYSQL_PASSWORD=password
DATABASE_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=no-token-password
HORIZON_PASSWORD=password
RABBIT_PASSWORD=password

disable_all_services
enable_service rabbit
enable_service n-cpu
enable_service q-agt

DEST=/opt/stack
LOGFILE=$DEST/stack.sh.log
SCREEN_LOGDIR=$DEST/screen
SYSLOG=True
LOGDAYS=1

Q_AGENT=openvswitch
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch
Q_ML2_PLUGIN_TYPE_DRIVERS=vlan
OVS_NUM_HUGEPAGES=8192
OVS_DATAPATH_TYPE=netdev
OVS_GIT_TAG=b35839f3855e3b812709c6ad1c9278f498aa9935

ENABLE_TENANT_TUNNELS=False
ENABLE_TENANT_VLANS=True
ML2_VLAN_RANGES=physnet1:1000:1010
PHYSICAL_NETWORK=physnet1
OVS_PHYSICAL_BRIDGE=br-p1p1

[[post-config|$NOVA_CONF]]
[DEFAULT]
firewall_driver=nova.virt.firewall.NoopFirewallDriver
vnc_enabled=True
vncserver_listen=0.0.0.0
vncserver_proxyclient_address=$HOST_IP

54 vIPS

The vIPS used is Suricata, which should be installed as an RPM package in a VM as previously described. To configure it to run in inline mode (IPS), use the following steps:

1 Turn on IP forwarding

sysctl -w net.ipv4.ip_forward=1

2 Mangle all traffic from one vPort to the other using a netfilter queue

iptables -I FORWARD -i eth1 -o eth2 -j NFQUEUE
iptables -I FORWARD -i eth2 -o eth1 -j NFQUEUE

3 Have Suricata run in inline mode using the netfilter queue

suricata -c /etc/suricata/suricata.yaml -q 0

4 Enable ARP proxying

echo 1 > /proc/sys/net/ipv4/conf/eth1/proxy_arp
echo 1 > /proc/sys/net/ipv4/conf/eth2/proxy_arp

541 Network Configuration for non-vIPS Guests

1 Turn on IP forwarding:

sysctl -w net.ipv4.ip_forward=1

2 In the source add the route to the sink

route add -net 192.168.200.0/24 eth1

3 At the sink add the route to the source

route add -net 192.168.100.0/24 eth1
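With the routes in place, a quick end-to-end check can be done. The sink address below is illustrative, and Suricata's default log location is assumed:

# On the source VM (192.168.100.0/24 subnet), send traffic toward the sink:
ping -c 3 192.168.200.10
# On the vIPS VM, watch Suricata events while the traffic flows:
tail -f /var/log/suricata/fast.log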


60 Testing the Setup

This section describes how to bring up the VMs in a compute node connect them to the virtual network(s) verify the functionality

Note Currently it is not possible to have more than one virtual network in a multi-compute node setup Although it is possible to have more than one virtual network in a single compute node setup

61 Preparation with OpenStack

611 Deploying Virtual Machines

6111 Default Settings

OpenStack comes with the following default settings:

• Tenant (Project): admin, demo

• Network:

  - Private network (virtual network): 10.0.0.0/24

  - Public network (external network): 172.24.4.0/24

• Image: cirros-0.3.1-x86_64

• Flavor: nano, micro, tiny, small, medium, large, xlarge

To deploy new instances (VMs) with different setups (such as a different VM image flavor or network) users must create their own See below for details of how to create them

To access the OpenStack dashboard, use a web browser (Firefox, Internet Explorer, or others) and the controller's IP address on the management network. For example:

http://10.11.12.1

Login information is defined in the localconf file. In the examples that follow, password is the password for both the admin and demo users.


6112 Customer Settings

The following examples describe how to create a custom VM image, flavor, and aggregate/availability zone using OpenStack commands. The examples assume the IP address of the controller is 10.11.12.1.

1 Create a credential file admin-cred for admin user The file contains the following lines

export OS_USERNAME=admin
export OS_TENANT_NAME=admin
export OS_PASSWORD=password
export OS_AUTH_URL=http://10.11.12.1:35357/v2.0

2 Source admin-cred to the shell environment for actions of creating glance image aggregateavailability zone and flavor

source admin-cred

3 Create an OpenStack glance image A VM image file should be ready in a location accessible by OpenStack

glance image-create --name ltimage-name-to-creategt --is-public=true --container-format=bare --disk-format=ltformatgt --file=ltimage-file-path-namegt

The following example shows that the image file fedora20-x86_64-basic.qcow2 is located on an NFS share mounted at /mnt/nfs/openstack/images on the controller host. The following command creates a glance image named fedora-basic in qcow2 format for public use (such that any tenant can use this glance image):

glance image-create --name fedora-basic --is-public=true --container-format=bare --disk-format=qcow2 --file=/mnt/nfs/openstack/images/fedora20-x86_64-basic.qcow2

4 Create host aggregate and availability zone

First find out the available hypervisors, and then use that information to create the aggregate/availability zone:

nova hypervisor-list
nova aggregate-create <aggregate-name> <zone-name>
nova aggregate-add-host <aggregate-name> <hypervisor-name>

The following example creates an aggregate named aggr-g06 with one availability zone named zone-g06 and the aggregate contains one hypervisor named sdnlab-g06

nova aggregate-create aggr-g06 zone-g06
nova aggregate-add-host aggr-g06 sdnlab-g06

5 Create flavor Flavor is a virtual hardware configuration for the VMs it defines the number of virtual CPUs size of virtual memory and disk space among others

The following command creates a flavor named onps-flavor with an ID of 1001 1024 Mb virtual memory 4 Gb virtual disk space and 1 virtual CPU

nova flavor-create onps-flavor 1001 1024 4 1


6113 Example mdash VM Deployment

The following example describes how to use a custom VM image, flavor, and aggregate to launch a VM for the demo tenant using OpenStack commands. Again, the example assumes the IP address of the controller is 10.11.12.1.

1 Create a credential file demo-cred for a demo user The file contains the following lines

export OS_USERNAME=demo
export OS_TENANT_NAME=demo
export OS_PASSWORD=password
export OS_AUTH_URL=http://10.11.12.1:35357/v2.0

2 Source demo-cred to the shell environment for actions of creating tenant network and instance (VM)

source demo-cred

3 Create network for tenant demo Take the following steps

a Get tenant demo

keystone tenant-list | grep -Fw demo

The following creates a network with a name of net-demo for tenant with ID 10618268adb64f17b266fd8fb83c960d

neutron net-create --tenant-id 10618268adb64f17b266fd8fb83c960d net-demo

b Create subnet

neutron subnet-create --tenant-id <demo-tenant-id> --name <subnet-name> <network-name> <net-ip-range>

The following creates a subnet with the name sub-demo and CIDR address 192.168.2.0/24 for network net-demo:

neutron subnet-create --tenant-id 10618268adb64f17b266fd8fb83c960d --name sub-demo net-demo 192.168.2.0/24

4 Create instance (VM) for tenant demo Take the following steps

a Get the name andor ID of the image flavor and availability zone to be used for creating instance

glance image-list
nova flavor-list
nova aggregate-list
neutron net-list

b Launch an instance (VM) using information obtained from previous step

nova boot --image <image-id> --flavor <flavor-id> --availability-zone <zone-name> --nic net-id=<network-id> <instance-name>

c The new VM should be up and running in a few minutes

5 Log into the OpenStack dashboard using the demo user credential click Instances under Project in the left pane the new VM should show in the right pane Click instance name to open Instance Details view then click Console in the top menu to access the VM as follows


6114 Local vIPS

Configuration

1 OpenStack brings up the VMs and connects them to the vSwitch

2 IP addresses of the VMs get configured using the DHCP server VM1 belongs to one subnet and VM3 to a different one VM2 has ports on both subnets

3 Flows get programmed to the vSwitch by the OpenDaylight controller (Section 62)

Data Path (Numbers Matching Red Circles)

1 VM1 sends a flow to VM3 through the vSwitch

2 The vSwitch forwards the flow to the first vPort of VM2 (active IPS)

3 The IPS receives the flow inspects it and (if not malicious) sends it out through its second vPort

4 The vSwitch forwards it to VM3

Figure 6-1 Local vIPS


6115 Remote vIPS

Configuration

1 OpenStack brings up the VMs and connects them to the vSwitch

2 The IP addresses of the VMs get configured using the DHCP server

Data Path (Numbers Matching Red Circles)

1 VM1 sends a flow to VM3 through the vSwitch inside compute node 1

2 The vSwitch forwards the flow out of the first 82599 port to the first 82599 port of compute node 2

3 The vSwitch of compute node 2 forwards the flow to the first port of the vHost where the traffic gets consumed by VM1

4 The IPS receives the flow inspects it and (provided it is not malicious) sends it out through its second port of the vHost into the vSwitch of compute node 2

5 The vSwitch forwards the flow out of the second 82599 port of compute node 2 into the second port of the 82599 in compute node 1

6 The vSwitch of compute node 1 forwards the flow into the port of the vHost of VM3 where the flow gets terminated

Figure 6-2 Remote vIPS


612 Non-uniform Memory Access (Numa) Placement and SR-IOV Pass-through for OpenStack

NUMA was implemented as a new feature in the OpenStack Juno release. NUMA placement enables an OpenStack administrator to pin guest systems to particular NUMA nodes for optimization. With an SR-IOV-enabled network interface card, each SR-IOV port is associated with a Virtual Function (VF); OpenStack SR-IOV pass-through enables a guest to access a VF directly.

6121 Prepare Compute Node for SR-IOV Pass-through

To enable the previous features, follow these steps to configure the compute node:

1 The server hardware must support IOMMU (Intel VT-d). To check whether IOMMU is supported, run the following command; the output should show IOMMU entries:

dmesg | grep -e IOMMU

Note: IOMMU can be enabled/disabled through a BIOS setting under Advanced and then Processor.

2 Enable kernel IOMMU in grub. For Fedora 20, run the commands:

sed -i 's/rhgb quiet/rhgb quiet intel_iommu=on/g' /etc/default/grub
grub2-mkconfig -o /boot/grub2/grub.cfg

3 Install necessary packages

yum install -y yajl-devel device-mapper-devel libpciaccess-devel libnl-devel dbus-devel numactl-devel python-devel

4 Install libvirt v1.2.8 or newer. The following example uses v1.2.9:

systemctl stop libvirtd

yum remove libvirt
yum remove libvirtd

wget http://libvirt.org/sources/libvirt-1.2.9.tar.gz
tar zxvf libvirt-1.2.9.tar.gz

cd libvirt-1.2.9
./autogen.sh --system --with-dbus
make
make install

systemctl start libvirtd

Make sure libvirtd is running v1.2.9:

libvirtd --version

5 Install libvirt-python. The example below uses v1.2.9 to match the libvirt version:

yum remove libvirt-python

wget https://pypi.python.org/packages/source/l/libvirt-python/libvirt-python-1.2.9.tar.gz
tar zxvf libvirt-python-1.2.9.tar.gz

cd libvirt-python-1.2.9
python setup.py install


6 Modify /etc/libvirt/qemu.conf and add

/dev/vfio/vfio

to the cgroup_device_acl list. An example follows:

cgroup_device_acl = [
    "/dev/null", "/dev/full", "/dev/zero",
    "/dev/random", "/dev/urandom",
    "/dev/ptmx", "/dev/kvm", "/dev/kqemu",
    "/dev/rtc", "/dev/hpet", "/dev/net/tun",
    "/dev/vfio/vfio"
]

7 Enable the SR-IOV virtual function for an 82599 interface The following example enables 2 VFs for interface p1p1

echo 2 > /sys/class/net/p1p1/device/sriov_numvfs

To check that virtual functions are enabled

lspci -nn | grep 82599

The screen output should display the physical function and two virtual functions
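The output should look similar to the following sketch. The bus addresses and description strings are illustrative; the device IDs 10fb and 10ed identify the 82599 physical function and virtual functions, respectively:

08:00.0 Ethernet controller [0200]: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection [8086:10fb] (rev 01)
08:10.0 Ethernet controller [0200]: Intel Corporation 82599 Ethernet Controller Virtual Function [8086:10ed] (rev 01)
08:10.2 Ethernet controller [0200]: Intel Corporation 82599 Ethernet Controller Virtual Function [8086:10ed] (rev 01)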

6122 Devstack Configurations

In the following text, the example uses a controller with IP address 10.11.12.1 and a compute node with 10.11.12.4. The PCI device vendor ID (8086) and the product IDs of the 82599 can be obtained from the output of the following command (10fb for the physical function and 10ed for the VF):

lspci -nn | grep 82599

On Controller node

1 Edit the controller localconf. Note that the same localconf file of Section 5213 is used here, with the following additions:

Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch,sriovnicswitch

[[post-config|$NOVA_CONF]]
[DEFAULT]
scheduler_default_filters=RamFilter,ComputeFilter,AvailabilityZoneFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,PciPassthroughFilter,NUMATopologyFilter
pci_alias={"name":"niantic","product_id":"10ed","vendor_id":"8086"}

[[post-config|$Q_PLUGIN_CONF_FILE]]
[ml2_sriov]
supported_pci_vendor_devs = 8086:10fb 8086:10ed

2 Run stacksh

On Compute node

1 Edit /opt/stack/nova/requirements.txt and add "libvirt-python>=1.2.8":

echo "libvirt-python>=1.2.8" >> /opt/stack/nova/requirements.txt

2 Edit Compute localconf for accelerated OVS Note that the same localconf file of Section 5311 is used here


3 Add the following:

[[post-config|$NOVA_CONF]]
[DEFAULT]
pci_passthrough_whitelist={"address":"0000:08:00.0","vendor_id":"8086","physical_network":"physnet1"}
pci_passthrough_whitelist={"address":"0000:08:10.0","vendor_id":"8086","physical_network":"physnet1"}
pci_passthrough_whitelist={"address":"0000:08:10.2","vendor_id":"8086","physical_network":"physnet1"}

4 Remove (or comment out) the following. Note that currently SR-IOV pass-through is only supported with standard OVS:

OVS_NUM_HUGEPAGES=8192
OVS_DATAPATH_TYPE=netdev
OVS_GIT_TAG=b35839f3855e3b812709c6ad1c9278f498aa9935

Run stacksh for both controller and compute nodes to complete the Devstack installation

6123 Create VM with Numa Placement and SR-IOV

1 After stacking is successful on both the controller and compute nodes, verify that the PCI pass-through device(s) are in the OpenStack database:

mysql -uroot -ppassword -h 10.11.12.1 nova -e 'select * from pci_devices;'

2 The output should show entry(ies) of PCI device(s) similar to the following

| 2014-11-18 194114 | NULL | NULL | 0 | 1 | 3 | 000008100 | 10ed | 8086 | type-VF | pci_0000_08_10_0 | label_8086_10ed | available | phys_function 000008000 | NULL | NULL | 0 |

3 Next to create a flavor for example

nova flavor-create numa-flavor 1001 1024 4 1

where

flavor name = numa-flavor
id = 1001
virtual memory = 1024 MB
virtual disk size = 4 GB
number of virtual CPUs = 1

4 Modify flavor for numa placement with PCI pass-through

nova flavor-key 1001 set pci_passthrough:alias=niantic:1 hw:numa_nodes=1 hw:numa_cpus.0=0 hw:numa_mem.0=1024

5 To show detailed information of the flavor

nova flavor-show 1001

6 Create a VM numa-vm1 with the flavor numa-flavor under the default project demo. Note that the following example assumes an image fedora-basic and an availability zone zone-04 are already in place (see Section 6112) and that private is the default network for the demo project:

nova boot --image <image-id> --flavor <flavor-id> --availability-zone <zone-name> --nic net-id=<network-id> numa-vm1

where numa-vm1 is the name of instance of the VM to be booted


Access the VM from the OpenStack Horizon dashboard; the new VM shows two virtual network interfaces. The interface with an SR-IOV VF should show a name of ensX, where X is a number (for example, ens5). If a DHCP server is available for the physical interface (p1p1 in this example), the VF gets an IP address automatically; otherwise, users can assign an IP address to the interface the same way as for a standard network interface.

To verify network connectivity through a VF users can set up two compute hosts and create a VM on each node After obtaining IP addresses the VMs should communicate with each other as with a normal network
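A simple way to verify this from inside one of the VMs follows; the interface name ens5 and the peer address are examples:

ip addr show ens5
ping -c 3 <IP address of the VM on the other compute node>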

62 Using OpenDaylight

This section describes how to download, install, and set up an OpenDaylight controller.

621 Preparing the OpenDaylight Controller

1 Download the pre-built OpenDaylight Helium-SR1 distribution:

wget http://nexus.opendaylight.org/content/repositories/opendaylight.release/org/opendaylight/integration/distribution-karaf/0.2.1-Helium-SR1/distribution-karaf-0.2.1-Helium-SR1.tar.gz

2 Extract the archive and cd into it:

tar xf distribution-karaf-0.2.1-Helium-SR1.tar.gz
cd distribution-karaf-0.2.1-Helium-SR1

3 Use the bin/karaf executable to start the Karaf shell:
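For example, from the extracted distribution directory:

./bin/karaf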


4 Install the required features.

Note: Karaf might take a long time to start, or feature installation might fail, if the host does not have network access. You'll need to set up the appropriate proxy settings.
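The exact feature list depends on the use case. The following is an illustrative example of the OVSDB/OpenStack integration features commonly installed on Helium; verify the list against your deployment requirements before using it:

feature:install odl-base-all odl-aaa-authn odl-restconf odl-nsf-all odl-adsal-northbound odl-mdsal-apidocs odl-ovsdb-openstack odl-ovsdb-northbound odl-dlux-core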

63 Border Network Gateway

This section describes how to install and run a Border Network Gateway (BNG) on a compute node that is prepared as described in Section 51 and Section 53. The example interface names from those sections have been maintained in this section, too. Also, for simplicity, the BNG uses the handle_none configuration mode, which makes it work as an L2 forwarding engine. The BNG is more complex than this, and users who want to explore more of its capabilities should read https://01.org/intel-data-plane-performance-demonstrators/quick-overview.

The setup to test the functionality of the vBNG follows


631 Installation and Configuration Inside the VM

1 Execute the following command:

yum -y update

2 Disable SELinux

setenforce 0
vi /etc/selinux/config

and change the setting to SELINUX=disabled.

3 Disable the firewall

systemctl disable firewalld.service
reboot

4 Edit grub default configuration

vi /etc/default/grub

Add the hugepage settings to it:

... noirqbalance intel_idle.max_cstate=0 processor.max_cstate=0 ipv6.disable=1 default_hugepagesz=1G hugepagesz=1G hugepages=2 isolcpus=1,2,3,4

5 Rebuild grub config and reboot the system

grub2-mkconfig -o /boot/grub2/grub.cfg
reboot

6 Verify that hugepages are available in the VM

cat /proc/meminfo
HugePages_Total:       2
HugePages_Free:        2
Hugepagesize:    1048576 kB

7 Add the following to the end of the ~/.bashrc file:

---------------------------------------------
export RTE_SDK=/root/dpdk
export RTE_TARGET=x86_64-native-linuxapp-gcc
export OVS_DIR=/root/ovs

export RTE_UNBIND=$RTE_SDK/tools/dpdk_nic_bind.py
export DPDK_DIR=$RTE_SDK
export DPDK_BUILD=$DPDK_DIR/$RTE_TARGET
---------------------------------------------

8 Re-login or source that file:

source ~/.bashrc

9 Install DPDK

git clone http://dpdk.org/git/dpdk
cd dpdk
git checkout v1.7.1
make install T=$RTE_TARGET
modprobe uio
insmod $RTE_SDK/$RTE_TARGET/kmod/igb_uio.ko


10 Check the PCI addresses of the 82599 cards:

lspci | grep Network
00:04.0 Ethernet controller: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection (rev 01)
00:05.0 Ethernet controller: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection (rev 01)
00:06.0 Ethernet controller: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection (rev 01)
00:07.0 Ethernet controller: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection (rev 01)

11 Make sure that the correct PCI addresses are listed in the script bind_to_igb_uio.sh.
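As an illustration, the relevant part of such a script simply binds the four ports to igb_uio. The PCI addresses below are the ones shown by the lspci output above and must be adjusted to the actual system:

$RTE_SDK/tools/dpdk_nic_bind.py --bind=igb_uio 00:04.0 00:05.0 00:06.0 00:07.0
$RTE_SDK/tools/dpdk_nic_bind.py --status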

12 Download BNG packages

wget https://01.org/sites/default/files/downloads/intel-data-plane-performance-demonstrators/dppd-bng-v013.zip

13 Extract DPPD BNG sources

unzip dppd-bng-v013.zip

14 Build BNG DPPD application

yum -y install ncurses-devel
cd dppd-BNG-v013
make

15 Refer to Section 633 ldquoExtra Preparations on the Compute Noderdquo before running the BNG application in the VM inside the compute node

16 Make sure that the application starts

./build/dppd -f config/handle_none.cfg

The handle_none configuration passes all traffic straight through between ports, which is essentially similar to an L2 forwarding test. The config directory contains additional, more complex BNG configurations and Pktgen scripts. Additional BNG-specific workloads can be found in the dppd-BNG-v013/pktgen-scripts directory.

Following is a sample graphic of the BNG running in a VM with 2 ports


Exit the application by pressing ESC or CTRL-C

Refer to Section 632 regarding installation and running the software traffic generator

For a sanity check, users can use the pktgen wrapper script onps_pktgen-64bytes-UDP-2ports.sh to run Pktgen (on its dedicated server) in order to test the handle_none throughput for two physical and two virtual ports. You'll need to update the PKTGEN_DIR variable at the top of the file to point to the right directory, which is the following (referring to Section 632):

PKTGEN_DIR=/home/stack/git/Pktgen-DPDK
./pktgen-64bytes.sh

632 Installation and Configuration of the Back-to-Back Host (Packet Generator)

The back-to-back host can be any Intel® Xeon® processor-based system, or it can be any compute node that has been prepared using the instructions in Section 51 and Section 53. For simplicity, Intel assumes the latter was the case. Also assume that the git directory for the stack user is /home/stack/git.

1 In the git directory get the source from Github

git clone https://github.com/Pktgen/Pktgen-DPDK.git
cd Pktgen-DPDK

2 An extra package must be installed for Pktgen to compile correctly

yum -y install libpcap-devel

Pktgen comes with its own distribution of DPDK sources This bundled version of DPDK must be used Note that it contains some WindRiver specific helper libraries that are not in the default DPDK distribution which Pktgen depends on

3 The $RTE_TARGET variable must be set to a specific value Otherwise these libraries will not build

cd
vi .bashrc

Add the following three lines to the end

export RTE_SDK=$HOME/Pktgen-DPDK/dpdk
export RTE_TARGET=x86_64-pktgen-linuxapp-gcc
export PKTGEN_DIR=$HOME/Pktgen-DPDK

4 Re-login or execute the following command:

source ~/.bashrc

5 Build the basic DPDK libraries and extra helpers

cd $RTE_SDK
make install T=$RTE_TARGET

6 Build Pktgen

cd examples/pktgen
make

7 Adapt the dpdk_nic_bind.py invocation to the actual NICs in use so that both interfaces are bound to igb_uio and DPDK can use them. See the details of the command that follows:

tools/dpdk_nic_bind.py --status
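After identifying the two interfaces in the --status output, they can be bound to igb_uio. This is a minimal sketch; the PCI addresses are placeholders and must be replaced with the actual ones:

modprobe uio
insmod $RTE_SDK/$RTE_TARGET/kmod/igb_uio.ko
tools/dpdk_nic_bind.py --bind=igb_uio 0000:04:00.0 0000:04:00.1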

8 Use onps_pktgen-64-bytes-UDP-2ports.sh from onps_server_1_2.tar.gz.


9 Now run the script as root, after the compute node has been set up as described in Section 633, the BNG VM has been prepared as described in Section 631, and the BNG has been started inside the VM.

633 Extra Preparations on the Compute Node

1 Do the following as the stack user:

cd /home/stack/devstack
vi local.conf

2 Comment out the following:

PHYSICAL_NETWORK=physnet1
OVS_PHYSICAL_BRIDGE=br-p1p1

At the same time, add the following line right below the commented-out ones:

OVS_BRIDGE_MAPPINGS=default:br-p1p1,physnet1:br-p1p2

3 Run the following again as the stack user:

./unstack.sh

./stack.sh

This causes both physical interfaces to come up and get bound to the DPDK Also a bridge is created on top of each of these interfaces

ovs-vsctl show
b52bd3ed-0f6c-45b9-ace1-846d901bed64
    Bridge br-p1p1
        Port br-p1p1
            Interface br-p1p1
                type: internal
        Port p1p1
            Interface p1p1
                type: dpdkphy
                options: {port=0}
        Port phy-br-p1p1
            Interface phy-br-p1p1
                type: patch
                options: {peer=int-br-p1p1}
    Bridge br-int
        fail_mode: secure
        Port int-br-p1p2
            Interface int-br-p1p2
                type: patch
                options: {peer=phy-br-p1p2}
        Port int-br-p1p1
            Interface int-br-p1p1
                type: patch
                options: {peer=phy-br-p1p1}
        Port br-int
            Interface br-int
                type: internal
    Bridge br-p1p2
        Port phy-br-p1p2
            Interface phy-br-p1p2
                type: patch
                options: {peer=int-br-p1p2}
        Port p1p2
            Interface p1p2
                type: dpdkphy
                options: {port=1}
        Port br-p1p2
            Interface br-p1p2
                type: internal

4 Move the p1p2 physical port under the same bridge as p1p1

ovs-vsctl del-port p1p2
ovs-vsctl add-port br-p1p1 p1p2 -- set Interface p1p2 type=dpdkphy options:port=1

5 Stop the OpenStack agent:

./rejoin-stack.sh
ctrl-a 1
ctrl-c
ctrl-a d

6 Add the dpdkvhost interfaces for the VM

ovs-vsctl --no-wait add-port br-p1p1 port3 -- set Interface port3 type=dpdkvhost ofport_request=3
ovs-vsctl --no-wait add-port br-p1p1 port4 -- set Interface port4 type=dpdkvhost ofport_request=4

7 Find out the OpenFlow port numbers of the interfaces:

ovs-ofctl show br-p1p1

The output should be similar to the following. Note the number to the left of each interface name, because it is the OpenFlow port number:

OFPT_FEATURES_REPLY (xid=0x2): dpid:0000286031010000
n_tables:254, n_buffers:256
capabilities: FLOW_STATS TABLE_STATS QUEUE_STATS ARP_MATCH_IP
actions: OUTPUT SET_VLAN_VID SET_VLAN_PCP STRIP_VLAN SET_DL_SRC SET_DL_DST SET_NW_SRC SET_NW_DST SET_TP_SRC SET_TP_DST
 1(phy-br-p1p1): addr:9e:ae:92:25:3c:c1
     config:     0
     state:      0
     speed: 0 Mbps now, 0 Mbps max
 2(p1p2): addr:9e:ae:92:25:3c:c1
     config:     0
     state:      0
     speed: 0 Mbps now, 0 Mbps max
 3(port3): addr:9e:ae:92:25:3c:c1
     config:     0
     state:      0
     speed: 0 Mbps now, 0 Mbps max
 4(port4): addr:49:04:ff:7f:00:00
     config:     0
     state:      0
     speed: 0 Mbps now, 0 Mbps max
 16(p1p1): addr:49:04:ff:7f:00:00
     config:     0
     state:      0
     speed: 0 Mbps now, 0 Mbps max
 LOCAL(br-p1p1): addr:9e:ae:92:25:3c:c1
     config:     0
     state:      0
     speed: 0 Mbps now, 0 Mbps max
OFPT_GET_CONFIG_REPLY (xid=0x4): frags=normal miss_send_len=0

8 Clean up the flow table of the bridge

ovs-ofctl del-flows br-p1p1

9 Program the flows so each physical interface forwards the packets to a dpdkvhost interface and the other way round

ovs-ofctl add-flow br-p1p1 in_port=16,dl_type=0x0800,idle_timeout=0,action=output:3
ovs-ofctl add-flow br-p1p1 in_port=3,dl_type=0x0800,idle_timeout=0,action=output:16
ovs-ofctl add-flow br-p1p1 in_port=4,dl_type=0x0800,idle_timeout=0,action=output:2
ovs-ofctl add-flow br-p1p1 in_port=2,dl_type=0x0800,idle_timeout=0,action=output:4
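The programmed flows can be verified, and their packet counters watched while traffic is running:

ovs-ofctl dump-flows br-p1p1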


10 Users can now spawn their vBNG

qemu-kvm -cpu host -enable-kvm -m 4096 -smp 4,cores=4,threads=1,sockets=1 -name VM1 \
  -hda <path to the VM image file> -mem-path /dev/hugepages -mem-prealloc -vnc :2 -daemonize \
  -net nic,model=virtio,macaddr=00:1e:77:68:09:fd -net tap,ifname=tap1,script=no,downscript=no \
  -netdev type=tap,id=net1,script=no,downscript=no,ifname=port3,vhost=on \
  -device virtio-net-pci,netdev=net1,mac=00:00:01:00:00:01,csum=off,gso=off,guest_tso4=off,guest_tso6=off,guest_ecn=off \
  -netdev type=tap,id=net2,script=no,downscript=no,ifname=port4,vhost=on \
  -device virtio-net-pci,netdev=net2,mac=00:00:01:00:00:02,csum=off,gso=off,guest_tso4=off,guest_tso6=off,guest_ecn=off


Appendix A Additional OpenDaylight Information

This section describes how OpenDaylight can be used in a multi-node setup. Two hosts are used: one running OpenDaylight, the OpenStack controller + compute, and OVS; the second host is the compute node. This section describes how to create a VXLAN tunnel, create VMs, and ping from one VM to another.

Note: Due to a known defect in ODL (https://bugs.opendaylight.org/show_bug.cgi?id=2469), the multi-node setup could not be verified.

Following is a sample localconf for OpenDaylight host

[[local|localrc]]
FORCE=yes

HOST_NAME=<name of this machine>
HOST_IP=<ip of this machine>
HOST_IP_IFACE=<mgmt ip isolated from internet>

PUBLIC_INTERFACE=<isolated IP could be same as HOST_IP_IFACE>
VLAN_INTERFACE=
FLAT_INTERFACE=

ADMIN_PASSWORD=password
MYSQL_PASSWORD=password
DATABASE_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=no-token-password
HORIZON_PASSWORD=password
RABBIT_PASSWORD=password

disable_service n-net
disable_service n-cpu

enable_service q-svc
enable_service q-agt
enable_service q-dhcp
enable_service q-l3
enable_service q-meta
enable_service neutron
enable_service horizon

LOGFILE=$DEST/stack.sh.log
SCREEN_LOGDIR=$DEST/screen
SYSLOG=True
LOGDAYS=1

# ODL start
Q_HOST=$HOST_IP
enable_service odl-server
enable_service odl-compute
ODL_MGR_IP=10.11.10.7
ENABLED_SERVICES+=n-api,n-crt,n-obj,n-cpu,n-cond,n-sch,n-novnc,n-cauth,n-cauth,nova
ENABLED_SERVICES+=cinder,c-api,c-vol,c-sch,c-bak

Q_PLUGIN=ml2

Q_AGENT=openvswitch
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch,opendaylight
Q_ML2_PLUGIN_TYPE_DRIVERS=vlan,flat,local
Q_ML2_TENANT_NETWORK_TYPE=vxlan

ENABLE_TENANT_TUNNELS=True
ENABLE_TENANT_VLANS=True

PHYSICAL_NETWORK=physnet1
ML2_VLAN_RANGES=physnet1:1000:1010
OVS_PHYSICAL_BRIDGE=br-

MULTI_HOST=True

[[post-config|$NOVA_CONF]]

# disable nova security groups
[DEFAULT]
firewall_driver=nova.virt.firewall.NoopFirewallDriver
novncproxy_host=0.0.0.0
novncproxy_port=6080

Here is a sample localconf for Compute Node

[[local|localrc]]

FORCE=yes
MULTI_HOST=True

HOST_NAME=<name of this machine>
HOST_IP=<ip of this machine>
HOST_IP_IFACE=<isolated interface>
SERVICE_HOST_NAME=<name of the controller machine>
SERVICE_HOST=<ip of controller machine>
Q_HOST=$SERVICE_HOST

MYSQL_HOST=$SERVICE_HOST
RABBIT_HOST=$SERVICE_HOST

GLANCE_HOST=$SERVICE_HOST
GLANCE_HOSTPORT=$SERVICE_HOST:9292
KEYSTONE_AUTH_HOST=$SERVICE_HOST
KEYSTONE_SERVICE_HOST=<ip of controller machine>

ADMIN_PASSWORD=password
MYSQL_PASSWORD=password
DATABASE_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=no-token-password
HORIZON_PASSWORD=password
RABBIT_PASSWORD=password

disable_all_services

enable_service rabbit
enable_service n-cpu
enable_service q-agt
enable_service odl-compute

DEST=/opt/stack
LOGFILE=$DEST/stack.sh.log
SCREEN_LOGDIR=$DEST/screen
SYSLOG=True
LOGDAYS=1

ODL_MGR_IP=<ip of controller machine>

Q_PLUGIN=ml2
Q_AGENT=openvswitch
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch,opendaylight
Q_ML2_PLUGIN_TYPE_DRIVERS=vxlan
OVS_NUM_HUGEPAGES=8192
OVS_DATAPATH_TYPE=netdev

OVDK_OVS_GIT_TAG=

ENABLE_TENANT_TUNNELS=True
ENABLE_TENANT_VLANS=True

Q_ML2_TENANT_NETWORK_TYPE=vxlan
ML2_VLAN_RANGES=physnet1:1000:1010
PHYSICAL_NETWORK=physnet1
OVS_PHYSICAL_BRIDGE=br-p1p1

[[post-config|$NOVA_CONF]]
[DEFAULT]
firewall_driver=nova.virt.firewall.NoopFirewallDriver
vnc_enabled=True
vncserver_listen=0.0.0.0
vncserver_proxyclient_address=$HOST_IP

A1 Create VMs using DevStack Horizon GUI

After starting the OpenDaylight controller as described in Section 61, run stack.sh on the controller and compute nodes.

Log in to http://<control node IP address>:8080 to start the Horizon GUI.

Verify that the node shows up in the following GUI

Create a new Vxlan network

1 Click on the Networks tab

2 Click on the Create Network button

3 Enter the Network name then click Next


4 Enter the subnet information then click Next


5 Add additional information then click Next

6 Click the Create button

7 Create a VM instance by clicking the Launch Instances button


8 Click on the Details tab to enter VM details


9 Click on the Networking tab then enter network information

VMS will now be created

Instead of disabling the OVSDB neutron service by removing the OVSDB neutron bundle file it is possible to disable the bundle from the OSGi console However there does not appear to be a way to make this persistent so it must be done each time the controller restarts


Once the controller is up and running connect to the OSGi console The ss command displays all of the bundles that are installed and their status Adding a string(s) filters the list of bundles List the OVSDB bundles

osgi> ss ovs
Framework is launched

id      State       Bundle
106     ACTIVE      org.opendaylight.ovsdb.northbound_0.5.0
112     ACTIVE      org.opendaylight.ovsdb_0.5.0
262     ACTIVE      org.opendaylight.ovsdb.neutron_0.5.0

Note that there are three OVSDB bundles running (the OVSDB neutron bundle has not been removed in this case)

Disable the OVSDB neutron bundle and then list the OVSDB bundles again

osgi> stop 262
osgi> ss ovs
Framework is launched

id      State       Bundle
106     ACTIVE      org.opendaylight.ovsdb.northbound_0.5.0
112     ACTIVE      org.opendaylight.ovsdb_0.5.0
262     RESOLVED    org.opendaylight.ovsdb.neutron_0.5.0

Now the OVSDB neutron bundle is in the RESOLVED state which means that it is not active


Appendix B BNG as an Appliance

Please download the latest BNG application from https://01.org/intel-data-plane-performance-demonstrators/downloads. More details about how the BNG works can be found at https://01.org/intel-data-plane-performance-demonstrators/quick-overview.



Appendix C Glossary

Acronym Description

ATR Application Targeted Routing

COTS Commercial Off‐The-Shelf

DPI Deep Packet Inspection

FCS Frame Check Sequence

GRE Generic Routing Encapsulation

GRO Generic Receive Offload

IOMMU InputOutput Memory Management Unit

Kpps Kilo packets per second

KVM Kernel-based Virtual Machine

LRO Large Receive Offload

MSI Message Signaling Interrupt

MPLS Multi-protocol Label Switching

Mpps Million packets per second

NIC Network Interface Card

pps Packets per second

QAT Quick Assist Technology

QinQ VLAN stacking (8021ad)

RA Reference Architecture

RSC Receive Side Coalescing

RSS Receive Side Scaling

SP Service Provider

SR-IOV Single root IO Virtualization

TCO Total Cost of Ownership

TSO TCP Segmentation Offload



Appendix D References

Document Name Source

Internet Protocol version 4 http://www.ietf.org/rfc/rfc791.txt

Internet Protocol version 6 http://www.faqs.org/rfc/rfc2460.txt

Intel® 82599 10 Gigabit Ethernet Controller Datasheet http://www.intel.com/content/www/us/en/ethernet-controllers/82599-10-gbe-controller-datasheet.html

Intel DDIO https://www-ssl.intel.com/content/www/us/en/io/direct-data-i-o.html

Bandwidth Sharing Fairness http://www.intel.com/content/www/us/en/network-adapters/10-gigabit-network-adapters/10-gbe-ethernet-flexible-port-partitioning-brief.html

Design Considerations for efficient network applications with Intel® multi-core processor-based systems on Linux

http://download.intel.com/design/intarch/papers/324176.pdf

OpenFlow with Intel 82599 http://ftp.sunet.se/pub/Linux/distributions/bifrost/seminars/workshop-2011-03-31/Openflow_1103031.pdf

Wu, W., DeMar, P. & Crawford, M. (2012). A Transport-Friendly NIC for Multicore/Multiprocessor Systems

IEEE Transactions on Parallel and Distributed Systems, vol. 23, no. 4, April 2012. http://lss.fnal.gov/archive/2010/pub/fermilab-pub-10-327-cd.pdf

Why does Flow Director Cause Packet Reordering? http://arxiv.org/ftp/arxiv/papers/1106/1106.0443.pdf

IA packet processing http://www.intel.com/p/en_US/embedded/hwsw/technology/packet-processing

High Performance Packet Processing on Cloud Platforms using Linux with Intel® Architecture

http://networkbuilders.intel.com/docs/network_builders_RA_packet_processing.pdf

Packet Processing Performance of Virtualized Platforms with Linux and Intel® Architecture

http://networkbuilders.intel.com/docs/network_builders_RA_NFV.pdf

DPDK http://www.intel.com/go/dpdk

Intel® DPDK Accelerated vSwitch https://01.org/packet-processing


LEGAL

By using this document in addition to any agreements you have with Intel you accept the terms set forth below

You may not use or facilitate the use of this document in connection with any infringement or other legal analysis concerning Intel products described herein You agree to grant Intel a non-exclusive royalty-free license to any patent claim thereafter drafted which includes subject matter disclosed herein

INFORMATION IN THIS DOCUMENT IS PROVIDED IN CONNECTION WITH INTEL PRODUCTS NO LICENSE EXPRESS OR IMPLIED BY ESTOPPEL OR OTHERWISE TO ANY INTELLECTUAL PROPERTY RIGHTS IS GRANTED BY THIS DOCUMENT EXCEPT AS PROVIDED IN INTELS TERMS AND CONDITIONS OF SALE FOR SUCH PRODUCTS INTEL ASSUMES NO LIABILITY WHATSOEVER AND INTEL DISCLAIMS ANY EXPRESS OR IMPLIED WARRANTY RELATING TO SALE ANDOR USE OF INTEL PRODUCTS INCLUDING LIABILITY OR WARRANTIES RELATING TO FITNESS FOR A PARTICULAR PURPOSE MERCHANTABILITY OR INFRINGEMENT OF ANY PATENT COPYRIGHT OR OTHER INTELLECTUAL PROPERTY RIGHT

Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors

Performance tests such as SYSmark and MobileMark are measured using specific computer systems components software operations and functions Any change to any of those factors may cause the results to vary You should consult other information and performance tests to assist you in fully evaluating your contemplated purchases including the performance of that product when combined with other products

The products described in this document may contain design defects or errors known as errata which may cause the product to deviate from published specifications Current characterized errata are available on request Contact your local Intel sales office or your distributor to obtain the latest specifications and before placing your product order

Intel technologies may require enabled hardware specific software or services activation Check with your system manufacturer or retailer Tests document performance of components on a particular test in specific systems Differences in hardware software or configuration will affect actual performance Consult other sources of information to evaluate performance as you consider your purchase For more complete information about performance and benchmark results visit httpwwwintelcomperformance

All products computer systems dates and figures specified are preliminary based on current expectations and are subject to change without notice Results have been estimated or simulated using internal Intel analysis or architecture simulation or modeling and provided to you for informational purposes Any differences in your system hardware software or configuration may affect your actual performance

No computer system can be absolutely secure Intel does not assume any liability for lost or stolen data or systems or any damages resulting from such losses

Intel does not control or audit third-party web sites referenced in this document You should visit the referenced web site and confirm whether referenced data are accurate

Intel Corporation may have patents or pending patent applications trademarks copyrights or other intellectual property rights that relate to the presented subject matter The furnishing of documents and other materials and information does not provide any license express or implied by estoppel or otherwise to any such patents trademarks copyrights or other intellectual property rights

copy 2014 Intel Corporation All rights reserved Intel the Intel logo Core Xeon and others are trademarks of Intel Corporation in the US andor other countries Other names and brands may be claimed as the property of others

  • Intelreg Open Network Platform Server Reference Architecture (Release 12)
    • Revision History
    • Contents
    • 10 Audience and Purpose
    • 20 Summary
      • 21 Network Services Examples
        • 211 Suricata (Next Generation IDSIPS engine)
        • 212 vBNG (Broadband Network Gateway)
            • 30 Hardware Components
            • 40 Software Versions
              • 41 Obtaining Software Ingredients
                • 50 Installation and Configuration Guide
                  • 51 Instructions Common to Compute and Controller Nodes
                    • 511 BIOS Settings
                    • 512 Operating System Installation and Configuration
                      • 5121 Getting the Fedora 20 DVD
                      • 5122 Fedora 20 Installation
                      • 5123 Additional Packages Installation and Upgrade
                      • 5124 Disable and Enable Services
                          • 52 Controller Node Setup
                            • 521 OpenStack (Juno)
                              • 5211 Network Requirements
                              • 5212 Storage Requirements
                              • 5213 OpenStack Installation Procedures
                                  • 53 Compute Node Setup
                                    • 531 Host Configuration
                                      • 5311 Using DevStack to Deploy vSwitch and OpenStack Components
                                          • 54 vIPS
                                            • 541 Network Configuration for non-vIPS Guests
                                                • 60 Testing the Setup
                                                  • 61 Preparation with OpenStack
                                                    • 611 Deploying Virtual Machines
                                                      • 6111 Default Settings
                                                      • 6112 Customer Settings
                                                      • 6113 Example mdash VM Deployment
                                                      • 6114 Local vIPS
                                                      • 6115 Remote vIPS
                                                        • 612 Non-uniform Memory Access (Numa) Placement and SR-IOV Pass-through for OpenStack
                                                          • 6121 Prepare Compute Node for SR-IOV Pass-through
                                                          • 6122 Devstack Configurations
                                                          • 6123 Create VM with Numa Placement and SR-IOV
                                                              • 62 Using OpenDaylight
                                                                • 621 Preparing the OpenDaylightController
                                                                  • 63 Border Network Gateway
                                                                    • 631 Installation and Configuration Inside the VM
                                                                    • 632 Installation and Configuration of the Back-to-Back Host (Packet Generator)
                                                                    • 633 Extra Preparations on the Compute Node
                                                                        • Appendix A Additional OpenDaylight Information
                                                                          • A1 Create VMs using DevStack Horizon GUI
                                                                            • Appendix B BNG as an Appliance
                                                                            • Appendix C Glossary
                                                                            • Appendix D References
                                                                            • LEGAL
Page 14: Intel Open Network Platform Server Reference Architecture ... · Intel® ONP Server Reference Architecture Solutions Guide 1.0 Audience and Purpose The primary audiences for this

Intelreg ONP Server Reference ArchitectureSolutions Guide

14

41 Obtaining Software IngredientsTable 4-2 Software Ingredients

Software Component

Software Sub-components Patches Location Comments

Fedora 20 httpdownloadfedoraprojectorgpubfedoralinuxreleases20Fedorax86_64isoFedora-20-x86_64-DVDiso

Standard Fedora 20 iso image

Data Plane Development Kit (DPDK)

DPDK poll mode driver sample apps (bundled)

httpdpdkorggitdpdkCommit id 99213f3827bad956d74e2259d06844012ba287a4

All sub-components in one zip file

Intelreg DPDK Accelerated vSwitch (OVDK)

dpdk-ovs qemu ovs-db vswitchd ovs_client (bundled)

httpsgithubcom01orgdpdk-ovsgitCommit id 6210bb0a6139b20283de115f87aa7a381b04670f

v120

Open vSwitch httpsgithubcomopenvswitchovsgitCommit id b35839f3855e3b812709c6ad1c9278f498aa9935

OpenStack Juno release To be deployed using DevStack(see following row)

Three patches downloaded as one tarball Then follow the instructions to deploy the Nodes

DevStack Patches for DevStack and Nova

httpsgithubcomopenstack-devdevstackgitCommit id d6f700db33aeab68916156a98971aef8cfa53a2eThen apply to that commit the patches inhttpsdownload01orgpacket-processingONPS12openstack_ovdkl02-907zip

Two patches downloaded as one tarball Then follow the instructions to deploy

OpenDaylight httpnexusopendaylightorgcontentrepositoriesopendaylightreleaseorgopendaylightintegrationdistribution-karaf021-Helium-SR1distribution-karaf-021-Helium-SR1targz

Intelreg ONP Server Release 12 Script

Helper scripts to setup SRT 12 using DevStack

httpsdownload01orgpacket-processingONPS12onps_server_1_2targz

BNG DPPD Broadband Network Gateway DPDK Performance

https01orgintel-data-plane- performance-demonstratorsdppd-bng- v013zip

PktGen Software Network Package Generator

httpsgithubcomPktgenPktgen-DPDKgitcommit id 5e8633c99e9771467dc26b64a4ff232c7e9fba2a

BNG Helper scripts

Intelreg ONP for Server Configuration Scripts for vBNG

https01orgsitesdefaultfilespagevbng-scriptstgz

Suricata Package from Fedora 20 yum install suricata

15

Intelreg ONP Server Reference ArchitectureSolutions Guide

50 Installation and Configuration Guide

This section describes the installation and configuration instructions to prepare the controller and compute nodes

51 Instructions Common to Compute and Controller Nodes

This section describes how to prepare both the controller and compute nodes with the right BIOS settings and operating system installation The preferred operating system is Fedora 20 although it is considered relatively easy to use this solutions guide for other Linux distributions

511 BIOS SettingsTable 5-1 BIOS Settings

Configuration                                                     Controller Node   Compute Node

Enhanced Intel SpeedStep                                          Enabled           Disabled
Processor C3                                                      Disabled          Disabled
Processor C6                                                      Disabled          Disabled
Intel® Virtualization Technology for Directed I/O (Intel® VT-d)   Disabled          Enabled (OpenStack NUMA placement only)
Intel Hyper-Threading Technology (HTT)                            Enabled           Disabled
MLC Streamer                                                      Enabled           Enabled
MLC Spatial Prefetcher                                            Enabled           Enabled
DCU Instruction Prefetcher                                        Enabled           Enabled
Direct Cache Access (DCA)                                         Enabled           Enabled
CPU Power and Performance Policy                                  Performance       Performance
Intel Turbo Boost                                                 Enabled           Off
Memory RAS and Performance Configuration -> NUMA Optimized        Enabled           Enabled


5.1.2 Operating System Installation and Configuration

Following are some generic instructions for installing and configuring the operating system. Other ways of installing the operating system, such as network installation, PXE boot installation, or USB key installation, are not described in this solutions guide.

5.1.2.1 Getting the Fedora 20 DVD

1. Download the 64-bit Fedora 20 DVD (not Fedora 20 Live Media) from the following site:

http://fedoraproject.org/en/get-fedora#formats

or from the direct URL:

http://download.fedoraproject.org/pub/fedora/linux/releases/20/Fedora/x86_64/iso/Fedora-20-x86_64-DVD.iso

2 Burn the ISO file to DVD and create an installation disk

5.1.2.2 Fedora 20 Installation

Use the DVD to install Fedora 20. During the installation, click Software selection, then choose the following:

1. C Development Tool and Libraries

2. Development Tools

Also create a user named stack and check the box Make this user administrator during the installation. The stack user is used for the OpenStack installation.

Note: Make sure to download and use the onps_server_1_2.tar.gz tarball. Start with the README file, which gives instructions on how to use Intel's scripts to automate most of the installation steps described in this section and save you time. When using Intel's scripts, you can jump to Section 5.4 after installing OpenDaylight based on the instructions described in Section 6.2.1.

5.1.2.3 Additional Package Installation and Upgrade

Some packages are not installed with the standard Fedora 20 installation but are required by Intel® Open Network Platform Software (ONPS) components. These packages should be installed by the user:

git ntp patch socat python-passlib libxslt-devel libffi-devel fuse-devel gluster python-cliff

ONPS supports Fedora kernel 3.15.6, which is newer than the native Fedora 20 kernel 3.11.10. To upgrade to 3.15.6, follow these steps:

1 Download kernel packages

wget https://kojipkgs.fedoraproject.org/packages/kernel/3.15.6/200.fc20/x86_64/kernel-3.15.6-200.fc20.x86_64.rpm

wget https://kojipkgs.fedoraproject.org/packages/kernel/3.15.6/200.fc20/x86_64/kernel-devel-3.15.6-200.fc20.x86_64.rpm

wget https://kojipkgs.fedoraproject.org/packages/kernel/3.15.6/200.fc20/x86_64/kernel-modules-extra-3.15.6-200.fc20.x86_64.rpm


2. Install the kernel packages:

rpm -i kernel-3.15.6-200.fc20.x86_64.rpm
rpm -i kernel-devel-3.15.6-200.fc20.x86_64.rpm
rpm -i kernel-modules-extra-3.15.6-200.fc20.x86_64.rpm

3. Reboot the system to allow booting into the 3.15.6 kernel.

Note: ONPS depends on libraries provided by your Linux distribution. As such, it is recommended that you regularly update your Linux distribution with the latest bug fixes and security patches to reduce the risk of security vulnerabilities in your systems.

After installing the required packages, the operating system should be updated with the following command:

yum update -y

This command upgrades to the latest kernel that Fedora supports. In order to keep kernel version 3.15.6, the yum configuration file needs to be modified with this command before running yum update:

echo "exclude=kernel" >> /etc/yum.conf

After the update completes, the system needs to be rebooted.

5.1.2.4 Disable and Enable Services

For OpenStack, the following services must be disabled: SELinux, firewall, and NetworkManager. Run the following commands:

sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
systemctl disable firewalld.service
systemctl disable NetworkManager.service

The following services should be enabled: ntp, sshd, and network. Run the following commands:

systemctl enable ntpd.service
systemctl enable ntpdate.service
systemctl enable sshd.service
chkconfig network on

It is important to keep the timing synchronized between all nodes, and it is necessary to use a known NTP server for all nodes. Users can edit /etc/ntp.conf to add a new server and remove the default servers. The following example replaces a default NTP server with a local NTP server 10.0.0.12 and comments out the other default servers:

sed -i 's/server 0.fedora.pool.ntp.org iburst/server 10.0.0.12/g' /etc/ntp.conf
sed -i 's/server 1.fedora.pool.ntp.org iburst/#server 1.fedora.pool.ntp.org iburst/g' /etc/ntp.conf
sed -i 's/server 2.fedora.pool.ntp.org iburst/#server 2.fedora.pool.ntp.org iburst/g' /etc/ntp.conf
sed -i 's/server 3.fedora.pool.ntp.org iburst/#server 3.fedora.pool.ntp.org iburst/g' /etc/ntp.conf
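To confirm that time synchronization is working after the change, restart the NTP service and query the configured peers (a minimal check; 10.0.0.12 is the example local NTP server used above):

systemctl restart ntpd.service
ntpq -p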


5.2 Controller Node Setup

This section describes the controller node setup. It is assumed that the user successfully followed the operating system installation and configuration sections.

Note: Make sure to download and use the onps_server_1_2.tar.gz tarball. Start with the README file, which gives instructions on how to use Intel's scripts to automate most of the installation steps described in this section and save you time.

5.2.1 OpenStack (Juno)

This section documents features and limitations that are supported with the Intel® DPDK Accelerated vSwitch and OpenStack Juno.

5.2.1.1 Network Requirements

General

At least two networks are required to build the OpenStack infrastructure in a lab environment: one network is used to connect all nodes for OpenStack management (management network), and the other is a private network exclusively for OpenStack internal connections (tenant network) between instances (or virtual machines).

One additional network is required for Internet connectivity, because installing OpenStack requires pulling packages from various sources/repositories on the Internet.

Some users might want to have Internet and/or external connectivity for OpenStack instances (virtual machines). In this case, an optional network can be used.

The assumption is that the target OpenStack infrastructure contains multiple nodes: one controller node and one or more compute nodes.

Network Configuration Example

The following is an example of how to configure networks for the OpenStack infrastructure. The example uses four network interfaces, as follows:

• ens2f1, Internet network: used to pull all necessary packages/patches from repositories on the Internet; configured to obtain a DHCP address.

• ens2f0, Management network: used to connect all nodes for OpenStack management; configured to use network 10.11.0.0/16.

• p1p1, Tenant network: used for OpenStack internal connections for virtual machines; configured with no IP address.

• p1p2, Optional external network: used for virtual machine Internet/external connectivity; configured with no IP address. This interface is only needed on the controller node if an external network is configured; it is not needed on compute nodes.

Note that among these interfaces, the interface for the virtual network (in this example p1p1) must be an 82599 port, because it is used by DPDK and the Intel® DPDK Accelerated vSwitch. Also note that a static IP address should be used for the management network interface.

In Fedora 20 the network configuration files are located at

/etc/sysconfig/network-scripts

19

Intelreg ONP Server Reference ArchitectureSolutions Guide

To configure a network on the host system edit the following network configuration files

ifcfg-ens2f1:

DEVICE=ens2f1
TYPE=Ethernet
ONBOOT=yes
BOOTPROTO=dhcp

ifcfg-ens2f0:

DEVICE=ens2f0
TYPE=Ethernet
ONBOOT=yes
BOOTPROTO=static
IPADDR=10.11.12.11
NETMASK=255.255.0.0

ifcfg-p1p1:

DEVICE=p1p1
TYPE=Ethernet
ONBOOT=yes
BOOTPROTO=none

ifcfg-p1p2:

DEVICE=p1p2
TYPE=Ethernet
ONBOOT=yes
BOOTPROTO=none

Note: Do not configure an IP address for p1p1 (the 10 Gb/s interface); otherwise DPDK does not work when binding the driver during the OpenStack Neutron installation.

Note: 10.11.12.11 and 255.255.0.0 are the static IP address and netmask for the management network. It is necessary to have a static IP address on this subnet. The IP address 10.11.12.11 is just an example.

5.2.1.2 Storage Requirements

By default, DevStack uses block storage (Cinder) with a volume group named stack-volumes. If not specified, stack-volumes is created with 10 GB of space from a local file system. Note that stack-volumes is the name of the volume group, not of a single volume.

The following example shows how to use spare local disks /dev/sdb and /dev/sdc to form stack-volumes on a controller node by running the following commands:

pvcreate /dev/sdb
pvcreate /dev/sdc
vgcreate stack-volumes /dev/sdb /dev/sdc
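To confirm that the volume group was created with the expected capacity, the standard LVM query commands can be used (a minimal verification sketch):

pvs
vgs stack-volumes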

5.2.1.3 OpenStack Installation Procedures

General

DevStack is used to deploy OpenStack in this example. The following procedure uses an actual example of an installation performed in an Intel test lab, consisting of one controller node (controller) and one compute node (compute).

Controller Node Installation Procedures

The following example uses a host for controller node installation with the following

• Hostname: sdnlab-k01

• Internet network IP address: obtained from DHCP server

• OpenStack management IP address: 10.11.12.1

• User/password: stack/stack

Root User Actions

Log in as root (or su to root) and perform the following:

1. Add the stack user to the sudoers list:

echo "stack ALL=(ALL) NOPASSWD: ALL" >> /etc/sudoers

2. Edit /etc/libvirt/qemu.conf, adding or modifying the following lines:

cgroup_controllers = [ "cpu", "devices", "memory", "blkio", "cpuset", "cpuacct" ]

cgroup_device_acl = ["/dev/null", "/dev/full", "/dev/zero", "/dev/random", "/dev/urandom", "/dev/ptmx", "/dev/kvm", "/dev/kqemu", "/dev/rtc", "/dev/hpet", "/dev/net/tun", "/mnt/huge", "/dev/vhost-net"]

hugetlbfs_mount = "/mnt/huge"

3. Restart the libvirt service and make sure libvirtd is active:

systemctl restart libvirtd.service
systemctl status libvirtd.service

Stack User Actions

1. Log in as the stack user.

2. Configure the appropriate proxies (yum, http, https, and git) for package installation and make sure these proxies are functional. Note that on the controller node, localhost and its IP address should be included in the no_proxy setup (for example, export no_proxy=localhost,10.11.12.1). An example proxy setup follows.
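As an illustration only, a minimal proxy setup in the stack user's ~/.bashrc might look like the following; the proxy URL http://proxy.example.com:911 is a placeholder for your site's proxy:

export http_proxy=http://proxy.example.com:911
export https_proxy=http://proxy.example.com:911
export no_proxy=localhost,10.11.12.1
git config --global http.proxy http://proxy.example.com:911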

3. Intel® DPDK Accelerated vSwitch patches for OpenStack.

The file openstack_ovdk.l.0.2-907.zip contains the necessary patches for OpenStack; currently they are not native to OpenStack. The file can be downloaded from:

https://01.org/sites/default/files/page/openstack_ovdk.l.0.2-907.zip

Place the file in the /home/stack directory and unzip it. Three patch files, devstack.patch, nova.patch, and neutron.patch, will be present after unzipping:

cd /home/stack
wget https://01.org/sites/default/files/page/openstack_ovdk.l.0.2-907.zip
unzip openstack_ovdk.l.0.2-907.zip

4. Download the DevStack source:

git clone https://github.com/openstack-dev/devstack.git

5. Check out the DevStack commit used with the Intel® DPDK Accelerated vSwitch and patch it:

cd /home/stack/devstack
git checkout d6f700db33aeab68916156a98971aef8cfa53a2e
patch -p1 < /home/stack/devstack.patch


6 Download and patch Nova and Neutron

sudo mkdir /opt/stack
sudo chown stack:stack /opt/stack
cd /opt/stack
git clone https://github.com/openstack/nova.git
git clone https://github.com/openstack/neutron.git
cd /opt/stack/nova
git checkout b7738bfb6c2f271d047e8f20c0b74ef647367111
patch -p1 < /home/stack/nova.patch

7. Create the local.conf file in /home/stack/devstack.

8. Pay attention to the following in the local.conf file:

a. Use Rabbit for messaging services (Rabbit is on by default). In the past, Fedora only supported QPID for OpenStack; now it only supports Rabbit.

b. Explicitly disable the Nova compute service on the controller, because by default the Nova compute service is enabled:

disable_service n-cpu

c. To use Open vSwitch, specify it in the configuration for the ML2 plug-in:

Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch

d. Explicitly disable tenant tunneling and enable tenant VLANs, because by default tunneling is used:

ENABLE_TENANT_TUNNELS=False
ENABLE_TENANT_VLANS=True

e. A sample local.conf file for the controller node follows:

Controller node:

[[local|localrc]]

FORCE=yes
ADMIN_PASSWORD=password
MYSQL_PASSWORD=password
DATABASE_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=no-token-password
HORIZON_PASSWORD=password
RABBIT_PASSWORD=password

disable_service n-net
disable_service n-cpu
enable_service q-svc
enable_service q-agt
enable_service q-dhcp
enable_service q-l3
enable_service q-meta
enable_service neutron
enable_service horizon

DEST=/opt/stack
LOGFILE=$DEST/stack.sh.log
SCREEN_LOGDIR=$DEST/screen
SYSLOG=True
LOGDAYS=1

HOST_IP_IFACE=ens2f0
PUBLIC_INTERFACE=p1p2
VLAN_INTERFACE=p1p1
FLAT_INTERFACE=p1p1

Q_AGENT=openvswitch
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch
Q_ML2_PLUGIN_TYPE_DRIVERS=vlan,flat,local

ENABLE_TENANT_TUNNELS=False
ENABLE_TENANT_VLANS=True
PHYSICAL_NETWORK=physnet1
ML2_VLAN_RANGES=physnet1:1000:1010
OVS_PHYSICAL_BRIDGE=br-p1p1
MULTI_HOST=True

[[post-config|$NOVA_CONF]]
[DEFAULT]
firewall_driver=nova.virt.firewall.NoopFirewallDriver
novncproxy_host=0.0.0.0
novncproxy_port=6080

9. Install DevStack:

cd /home/stack/devstack
./stack.sh

10. For a successful installation, the following shows at the end of the screen output:

stack.sh completed in XXX seconds

where XXX is the number of seconds

11. For the controller node only: add the physical port(s) to the bridge(s) created by the DevStack installation. The following example configures the two bridges br-p1p1 (for the virtual network) and br-ex (for the external network):

sudo ovs-vsctl add-port br-p1p1 p1p1
sudo ovs-vsctl add-port br-ex p1p2

12. Make sure the proper VLANs are created in the switch connected to physical port p1p1. For example, the previous local.conf specifies a VLAN range of 1000-1010, so matching VLANs 1000 to 1010 should be configured in the switch.


5.3 Compute Node Setup

This section describes how to complete the setup of the compute nodes. It is assumed that the user has successfully completed the BIOS settings and operating system installation and configuration sections.

Note: Make sure to download and use the onps_server_1_2.tar.gz tarball. Start with the README file, which gives instructions on how to use Intel's scripts to automate most of the installation steps described in this section and save you time.

5.3.1 Host Configuration

5.3.1.1 Using DevStack to Deploy vSwitch and OpenStack Components

General

Deploying OpenStack and the Intel® DPDK Accelerated vSwitch using DevStack on a compute node follows the same procedures as on the controller node. Differences include:

• Required services are nova compute, neutron agent, and Rabbit.

• The Intel® DPDK Accelerated vSwitch is used in place of Open vSwitch for the neutron agent.

Compute Node Installation Example

The following example uses a host for compute node installation with the following:

• Hostname: sdnlab-k02

• Lab network IP address: obtained from DHCP server

• OpenStack management IP address: 10.11.12.2

• User/password: stack/stack

Note the following

• no_proxy setup: localhost and its IP address should be included in the no_proxy setup. In addition, the hostname and IP address of the controller node should also be included. For example:

export no_proxy=localhost,10.11.12.2,sdnlab-k01,10.11.12.1

• Differences in the local.conf file:

- The service host is the controller, as are other OpenStack servers such as MySQL, Rabbit, Keystone, and Image; therefore, they should be spelled out. Using the controller node example in the previous section, the service host and its IP address should be:

SERVICE_HOST_NAME=sdnlab-k01
SERVICE_HOST=10.11.12.1

- The only OpenStack services required on compute nodes are messaging, nova compute, and neutron agent, so the local.conf might look like:

disable_all_services
enable_service rabbit
enable_service n-cpu
enable_service q-agt


- The user has the option to use ovdk or openvswitch for the neutron agent:

Q_AGENT=ovdk

or

Q_AGENT=openvswitch

Note: For openvswitch, the user can specify regular or accelerated Open vSwitch (accelerated OVS). If accelerated OVS is used, the following setting should be added:

OVS_DATAPATH_TYPE=netdev

Note: If both are specified in the same local.conf file, the later one overwrites the previous one.

- For the OVDK and accelerated OVS huge pages setting, specify the number of huge pages to be allocated and the mount point (default is /mnt/huge):

OVDK_NUM_HUGEPAGES=8192

or

OVS_NUM_HUGEPAGES=8192

- For this version, Intel uses specific versions of OVDK or accelerated OVS from their respective repositories. Specify the following in the local.conf file if OVDK or accelerated OVS is used:

OVDK_GIT_TAG=6210bb0a6139b20283de115f87aa7a381b04670f
OVS_GIT_TAG=b35839f3855e3b812709c6ad1c9278f498aa9935

- Binding the physical port to the bridge is done through the following line in local.conf. For example, to bind port p1p1 to bridge br-p1p1, use:

OVS_PHYSICAL_BRIDGE=br-p1p1

- A sample local.conf file for a compute node with the ovdk agent follows:

Compute node:

[[local|localrc]]

FORCE=yes
MULTI_HOST=True

HOST_NAME=$(hostname)
HOST_IP=10.11.12.2
HOST_IP_IFACE=ens2f0
SERVICE_HOST_NAME=10.11.12.1
SERVICE_HOST=10.11.12.1

MYSQL_HOST=$SERVICE_HOST
RABBIT_HOST=$SERVICE_HOST

GLANCE_HOST=$SERVICE_HOST
GLANCE_HOSTPORT=$SERVICE_HOST:9292
KEYSTONE_AUTH_HOST=$SERVICE_HOST
KEYSTONE_SERVICE_HOST=$SERVICE_HOST

ADMIN_PASSWORD=password
MYSQL_PASSWORD=password
DATABASE_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=no-token-password
HORIZON_PASSWORD=password
RABBIT_PASSWORD=password

disable_all_services
enable_service rabbit
enable_service n-cpu
enable_service q-agt

DEST=/opt/stack
LOGFILE=$DEST/stack.sh.log
SCREEN_LOGDIR=$DEST/screen
SYSLOG=True
LOGDAYS=1

Q_AGENT=ovdk
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch
Q_ML2_PLUGIN_TYPE_DRIVERS=vlan
OVDK_NUM_HUGEPAGES=8192
OVDK_GIT_TAG=6210bb0a6139b20283de115f87aa7a381b04670f

ENABLE_TENANT_TUNNELS=False
ENABLE_TENANT_VLANS=True
ML2_VLAN_RANGES=physnet1:1000:1010
PHYSICAL_NETWORK=physnet1
OVS_PHYSICAL_BRIDGE=br-p1p1

[[post-config|$NOVA_CONF]]
[DEFAULT]
firewall_driver=nova.virt.firewall.NoopFirewallDriver
vnc_enabled=True
vncserver_listen=0.0.0.0
vncserver_proxyclient_address=$HOST_IP

- A sample local.conf file for a compute node with the accelerated OVS agent follows:

Compute node:

[[local|localrc]]

FORCE=yes
MULTI_HOST=True

HOST_NAME=$(hostname)
HOST_IP=10.11.12.2
HOST_IP_IFACE=ens2f0

SERVICE_HOST_NAME=sdnlab-k01
SERVICE_HOST=10.11.12.1

MYSQL_HOST=$SERVICE_HOST
RABBIT_HOST=$SERVICE_HOST

GLANCE_HOST=$SERVICE_HOST
GLANCE_HOSTPORT=$SERVICE_HOST:9292
KEYSTONE_AUTH_HOST=$SERVICE_HOST
KEYSTONE_SERVICE_HOST=$SERVICE_HOST

ADMIN_PASSWORD=password
MYSQL_PASSWORD=password
DATABASE_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=no-token-password
HORIZON_PASSWORD=password
RABBIT_PASSWORD=password

disable_all_services
enable_service rabbit
enable_service n-cpu
enable_service q-agt

DEST=/opt/stack
LOGFILE=$DEST/stack.sh.log
SCREEN_LOGDIR=$DEST/screen
SYSLOG=True
LOGDAYS=1

Q_AGENT=openvswitch
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch
Q_ML2_PLUGIN_TYPE_DRIVERS=vlan
OVS_NUM_HUGEPAGES=8192
OVS_DATAPATH_TYPE=netdev
OVS_GIT_TAG=b35839f3855e3b812709c6ad1c9278f498aa9935

ENABLE_TENANT_TUNNELS=False
ENABLE_TENANT_VLANS=True
ML2_VLAN_RANGES=physnet1:1000:1010
PHYSICAL_NETWORK=physnet1
OVS_PHYSICAL_BRIDGE=br-p1p1

[[post-config|$NOVA_CONF]]
[DEFAULT]
firewall_driver=nova.virt.firewall.NoopFirewallDriver
vnc_enabled=True
vncserver_listen=0.0.0.0
vncserver_proxyclient_address=$HOST_IP

5.4 vIPS

The vIPS used is Suricata, which should be installed as an rpm package in a VM, as previously described. To configure it to run in inline mode (IPS), use the following:

1 Turn on IP forwarding

sysctl -w net.ipv4.ip_forward=1

2 Mangle all traffic from one vPort to the other using a netfilter queue

iptables -I FORWARD -i eth1 -o eth2 -j NFQUEUE
iptables -I FORWARD -i eth2 -o eth1 -j NFQUEUE

3 Have Suricata run in inline mode using the netfilter queue

suricata -c /etc/suricata/suricata.yaml -q 0

4 Enable ARP proxying

echo 1 > /proc/sys/net/ipv4/conf/eth1/proxy_arp
echo 1 > /proc/sys/net/ipv4/conf/eth2/proxy_arp

5.4.1 Network Configuration for non-vIPS Guests

1. Turn on IP forwarding:

sysctl -w net.ipv4.ip_forward=1

2. On the source, add the route to the sink:

route add -net 192.168.200.0/24 eth1

3. On the sink, add the route to the source:

route add -net 192.168.100.0/24 eth1


6.0 Testing the Setup

This section describes how to bring up the VMs in a compute node, connect them to the virtual network(s), and verify their functionality.

Note: Currently it is not possible to have more than one virtual network in a multi-compute-node setup, although it is possible to have more than one virtual network in a single-compute-node setup.

6.1 Preparation with OpenStack

6.1.1 Deploying Virtual Machines

6.1.1.1 Default Settings

OpenStack comes with the following default settings:

• Tenant (project): admin, demo

• Network:

- Private network (virtual network): 10.0.0.0/24

- Public network (external network): 172.24.4.0/24

• Image: cirros-0.3.1-x86_64

• Flavor: nano, micro, tiny, small, medium, large, xlarge

To deploy new instances (VMs) with different setups (such as a different VM image, flavor, or network), users must create their own. See below for details of how to create them.

To access the OpenStack dashboard, use a web browser (Firefox, Internet Explorer, or others) and the controller's IP address (management network), for example:

http://10.11.12.1

Login information is defined in the local.conf file. In the examples that follow, password is the password for both the admin and demo users.


6.1.1.2 Customer Settings

The following examples describe how to create a custom VM image, flavor, and aggregate/availability zone using OpenStack commands. The examples assume the IP address of the controller is 10.11.12.1.

1. Create a credential file admin-cred for the admin user. The file contains the following lines:

export OS_USERNAME=admin
export OS_TENANT_NAME=admin
export OS_PASSWORD=password
export OS_AUTH_URL=http://10.11.12.1:35357/v2.0

2. Source admin-cred into the shell environment for the actions of creating the glance image, aggregate/availability zone, and flavor:

source admin-cred

3. Create an OpenStack glance image. A VM image file should be ready in a location accessible by OpenStack:

glance image-create --name <image-name-to-create> --is-public=true --container-format=bare --disk-format=<format> --file=<image-file-path-name>

The following example assumes the image file fedora20-x86_64-basic.qcow2 is located in an NFS share mounted at /mnt/nfs/openstack/images on the controller host. The following command creates a glance image named fedora-basic in qcow2 format for public use (such that any tenant can use this glance image):

glance image-create --name fedora-basic --is-public=true --container-format=bare --disk-format=qcow2 --file=/mnt/nfs/openstack/images/fedora20-x86_64-basic.qcow2

4. Create a host aggregate and availability zone.

First find out the available hypervisors, and then use that information to create the aggregate/availability zone:

nova hypervisor-list
nova aggregate-create <aggregate-name> <zone-name>
nova aggregate-add-host <aggregate-name> <hypervisor-name>

The following example creates an aggregate named aggr-g06 with one availability zone named zone-g06; the aggregate contains one hypervisor named sdnlab-g06:

nova aggregate-create aggr-g06 zone-g06
nova aggregate-add-host aggr-g06 sdnlab-g06

5. Create a flavor. A flavor is a virtual hardware configuration for the VMs; it defines the number of virtual CPUs and the size of virtual memory and disk space, among others.

The following command creates a flavor named onps-flavor with an ID of 1001, 1024 MB of virtual memory, 4 GB of virtual disk space, and 1 virtual CPU:

nova flavor-create onps-flavor 1001 1024 4 1


6.1.1.3 Example: VM Deployment

The following example describes how to use a custom VM image, flavor, and aggregate to launch a VM for the demo tenant using OpenStack commands. Again, the example assumes the IP address of the controller is 10.11.12.1.

1. Create a credential file demo-cred for the demo user. The file contains the following lines:

export OS_USERNAME=demo
export OS_TENANT_NAME=demo
export OS_PASSWORD=password
export OS_AUTH_URL=http://10.11.12.1:35357/v2.0

2 Source demo-cred to the shell environment for actions of creating tenant network and instance (VM)

source demo-cred

3 Create network for tenant demo Take the following steps

a Get tenant demo

keystone tenant-list | grep -Fw demo

The following creates a network with a name of net-demo for tenant with ID 10618268adb64f17b266fd8fb83c960d

neutron net-create --tenant-id 10618268adb64f17b266fd8fb83c960d net-demo

b. Create the subnet:

neutron subnet-create --tenant-id <demo-tenant-id> --name <subnet_name> <network-name> <net-ip-range>

The following creates a subnet named sub-demo with CIDR 192.168.2.0/24 for network net-demo:

neutron subnet-create --tenant-id 10618268adb64f17b266fd8fb83c960d --name sub-demo net-demo 192.168.2.0/24

4 Create instance (VM) for tenant demo Take the following steps

a. Get the name and/or ID of the image, flavor, and availability zone to be used for creating the instance:

glance image-list
nova flavor-list
nova aggregate-list
neutron net-list

b. Launch an instance (VM) using the information obtained from the previous step (a hedged example follows this list):

nova boot --image <image-id> --flavor <flavor-id> --availability-zone <zone-name> --nic net-id=<network-id> <instance-name>

c. The new VM should be up and running in a few minutes.
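For example, using the fedora-basic image, onps-flavor flavor, zone-g06 availability zone, and net-demo network created earlier (the instance name vm-demo-1 and the network ID placeholder are illustrative; substitute the ID returned by neutron net-list):

nova boot --image fedora-basic --flavor onps-flavor --availability-zone zone-g06 --nic net-id=<net-demo-id> vm-demo-1
nova list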

5. Log into the OpenStack dashboard using the demo user credentials and click Instances under Project in the left pane; the new VM should show in the right pane. Click the instance name to open the Instance Details view, then click Console in the top menu to access the VM.


6.1.1.4 Local vIPS

Configuration

1 OpenStack brings up the VMs and connects them to the vSwitch

2. The IP addresses of the VMs get configured using the DHCP server. VM1 belongs to one subnet and VM3 to a different one; VM2 has ports on both subnets.

3. Flows get programmed into the vSwitch by the OpenDaylight controller (Section 6.2).

Data Path (Numbers Matching Red Circles)

1 VM1 sends a flow to VM3 through the vSwitch

2 The vSwitch forwards the flow to the first vPort of VM2 (active IPS)

3 The IPS receives the flow inspects it and (if not malicious) sends it out through its second vPort

4 The vSwitch forwards it to VM3

Figure 6-1. Local vIPS


6.1.1.5 Remote vIPS

Configuration

1 OpenStack brings up the VMs and connects them to the vSwitch

2 The IP addresses of the VMs get configured using the DHCP server

Data Path (Numbers Matching Red Circles)

1 VM1 sends a flow to VM3 through the vSwitch inside compute node 1

2 The vSwitch forwards the flow out of the first 82599 port to the first 82599 port of compute node 2

3. The vSwitch of compute node 2 forwards the flow to the first port of the vHost, where the traffic gets consumed by the vIPS VM.

4 The IPS receives the flow inspects it and (provided it is not malicious) sends it out through its second port of the vHost into the vSwitch of compute node 2

5 The vSwitch forwards the flow out of the second 82599 port of compute node 2 into the second port of the 82599 in compute node 1

6 The vSwitch of compute node 1 forwards the flow into the port of the vHost of VM3 where the flow gets terminated

Figure 6-2. Remote vIPS


6.1.2 Non-Uniform Memory Access (NUMA) Placement and SR-IOV Pass-through for OpenStack

NUMA placement was implemented as a new feature in the OpenStack Juno release. NUMA placement enables an OpenStack administrator to pin guests to particular NUMA nodes for optimization. With an SR-IOV-enabled network interface card, each SR-IOV port is associated with a virtual function (VF); OpenStack SR-IOV pass-through gives a guest direct access to a VF.

6.1.2.1 Prepare Compute Node for SR-IOV Pass-through

To enable the previous features, follow these steps to configure the compute node:

1. The server hardware must support IOMMU (Intel VT-d). To check whether IOMMU is supported, run the following command; the output should show IOMMU entries:

dmesg | grep -e IOMMU

Note: IOMMU can be enabled/disabled through a BIOS setting under Advanced and then Processor.

2. Enable kernel IOMMU in grub. For Fedora 20, run these commands:

sed -i 's/rhgb quiet/rhgb quiet intel_iommu=on/g' /etc/default/grub
grub2-mkconfig -o /boot/grub2/grub.cfg

3 Install necessary packages

yum install -y yajl-devel device-mapper-devel libpciaccess-devel libnl-devel dbus-devel numactl-devel python-devel

4. Install libvirt v1.2.8 or newer. The following example uses v1.2.9:

systemctl stop libvirtd

yum remove libvirt
yum remove libvirtd

wget http://libvirt.org/sources/libvirt-1.2.9.tar.gz
tar zxvf libvirt-1.2.9.tar.gz

cd libvirt-1.2.9
./autogen.sh --system --with-dbus
make
make install

systemctl start libvirtd

Make sure libvirtd is running v1.2.9:

libvirtd --version

5. Install libvirt-python. The example below uses v1.2.9 to match the libvirt version:

yum remove libvirt-python

wget https://pypi.python.org/packages/source/l/libvirt-python/libvirt-python-1.2.9.tar.gz
tar zxvf libvirt-python-1.2.9.tar.gz

cd libvirt-python-1.2.9
python setup.py install


6. Modify /etc/libvirt/qemu.conf by adding:

/dev/vfio/vfio

to the

cgroup_device_acl list

An example follows:

cgroup_device_acl = ["/dev/null", "/dev/full", "/dev/zero", "/dev/random", "/dev/urandom", "/dev/ptmx", "/dev/kvm", "/dev/kqemu", "/dev/rtc", "/dev/hpet", "/dev/net/tun", "/dev/vfio/vfio"]

7. Enable the SR-IOV virtual functions for an 82599 interface. The following example enables 2 VFs for interface p1p1:

echo 2 > /sys/class/net/p1p1/device/sriov_numvfs

To check that virtual functions are enabled

lspci -nn | grep 82599

The screen output should display the physical function and two virtual functions
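For reference only (this sample is illustrative and not captured from the test setup; PCI addresses will differ on your system), the output typically looks similar to:

08:00.0 Ethernet controller: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection (rev 01)
08:10.0 Ethernet controller: Intel Corporation 82599 Ethernet Controller Virtual Function (rev 01)
08:10.2 Ethernet controller: Intel Corporation 82599 Ethernet Controller Virtual Function (rev 01)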

6.1.2.2 DevStack Configurations

In the following text, the example uses a controller with IP address 10.11.12.1 and a compute node with IP address 10.11.12.4. The PCI device vendor ID (8086) and product IDs of the 82599 can be obtained from the output of the following command (10fb for the physical function and 10ed for the VF):

lspci -nn | grep 82599

On the controller node:

1. Edit the controller local.conf. Note that the same local.conf file of Section 5.2.1.3 is used here, adding the following:

Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch,sriovnicswitch

[[post-config|$NOVA_CONF]]
[DEFAULT]
scheduler_default_filters=RamFilter,ComputeFilter,AvailabilityZoneFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,PciPassthroughFilter,NUMATopologyFilter
pci_alias={"name":"niantic","product_id":"10ed","vendor_id":"8086"}

[[post-config|$Q_PLUGIN_CONF_FILE]]
[ml2_sriov]
supported_pci_vendor_devs = 8086:10fb 8086:10ed

2. Run stack.sh.

On the compute node:

1. Edit /opt/stack/nova/requirements.txt and add "libvirt-python>=1.2.8":

echo "libvirt-python>=1.2.8" >> /opt/stack/nova/requirements.txt

2. Edit the compute local.conf for accelerated OVS. Note that the same local.conf file of Section 5.3.1.1 is used here.


3. Add the following:

[[post-config|$NOVA_CONF]]
[DEFAULT]
pci_passthrough_whitelist={"address":"0000:08:00.0","vendor_id":"8086","physical_network":"physnet1"}

pci_passthrough_whitelist={"address":"0000:08:10.0","vendor_id":"8086","physical_network":"physnet1"}

pci_passthrough_whitelist={"address":"0000:08:10.2","vendor_id":"8086","physical_network":"physnet1"}

4. Remove (or comment out) the following. Note that currently SR-IOV pass-through is only supported with standard OVS:

OVS_NUM_HUGEPAGES=8192
OVS_DATAPATH_TYPE=netdev
OVS_GIT_TAG=b35839f3855e3b812709c6ad1c9278f498aa9935

Run stack.sh on both the controller and compute nodes to complete the DevStack installation.

6.1.2.3 Create VM with NUMA Placement and SR-IOV

1. After stacking is successful on both the controller and compute nodes, verify the PCI pass-through device(s) are in the OpenStack database:

mysql -uroot -ppassword -h 10.11.12.1 nova -e 'select * from pci_devices'

2. The output should show entries for the PCI device(s) similar to the following:

| 2014-11-18 19:41:14 | NULL | NULL | 0 | 1 | 3 | 0000:08:10.0 | 10ed | 8086 | type-VF | pci_0000_08_10_0 | label_8086_10ed | available | {"phys_function": "0000:08:00.0"} | NULL | NULL | 0 |

3. Next, create a flavor, for example:

nova flavor-create numa-flavor 1001 1024 4 1

where:

flavor name = numa-flavor
id = 1001
virtual memory = 1024 MB
virtual disk size = 4 GB
number of virtual CPUs = 1

4. Modify the flavor for NUMA placement with PCI pass-through:

nova flavor-key 1001 set "pci_passthrough:alias"="niantic:1" hw:numa_nodes=1 hw:numa_cpus.0=0 hw:numa_mem.0=1024

5 To show detailed information of the flavor

nova flavor-show 1001

6. Create a VM named numa-vm1 with the flavor numa-flavor under the default project demo. Note that the following example assumes an image fedora-basic and an availability zone zone-04 are already in place (see Section 6.1.1.2) and that private is the default network for the demo project:

nova boot --image <image-id> --flavor <flavor-id> --availability-zone <zone-name> --nic net-id=<network-id> numa-vm1

where numa-vm1 is the name of the VM instance to be booted.


Access the VM from OpenStack Horizon; the new VM shows two virtual network interfaces. The interface backed by an SR-IOV VF should show a name of ensX, where X is a number (for example, ens5). If a DHCP server is available for the physical interface (p1p1 in this example), the VF gets an IP address automatically; otherwise, users can assign an IP address to the interface the same way as for a standard network interface.

To verify network connectivity through a VF, users can set up two compute hosts and create a VM on each node. After obtaining IP addresses, the VMs should communicate with each other as over a normal network; a minimal check is sketched below.
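For example, assuming the two VMs received the addresses 10.0.0.11 and 10.0.0.12 on their VF interfaces (example addresses only), a quick check from the first VM would be:

ip addr show ens5
ping -c 4 10.0.0.12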

6.2 Using OpenDaylight

This section describes how to download, install, and set up an OpenDaylight controller.

6.2.1 Preparing the OpenDaylight Controller

1. Download the pre-built OpenDaylight Helium-SR1 distribution:

wget http://nexus.opendaylight.org/content/repositories/opendaylight.release/org/opendaylight/integration/distribution-karaf/0.2.1-Helium-SR1/distribution-karaf-0.2.1-Helium-SR1.tar.gz

2. Extract the archive and cd into it:

tar xf distribution-karaf-0.2.1-Helium-SR1.tar.gz
cd distribution-karaf-0.2.1-Helium-SR1

3. Use the bin/karaf executable to start the Karaf shell.


4. Install the required features.

Karaf might take a long time to start, or feature installation might fail, if the host does not have network access. You'll need to set up the appropriate proxy settings.
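As an illustration only (the exact feature set is not listed in this guide and depends on the deployment), an OVSDB/OpenStack-oriented setup on Helium is typically enabled from the Karaf shell with the feature:install command; the feature names below are an assumption, not taken from this guide:

feature:install odl-base-all odl-ovsdb-openstack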

6.3 Border Network Gateway

This section describes how to install and run a Border Network Gateway (BNG) on a compute node that is prepared as described in Section 5.1 and Section 5.3. The example interface names from these sections are also used in this section. For simplicity, the BNG uses the handle_none configuration mode, which makes it work as an L2 forwarding engine. The BNG is more complex than this; users who are interested in exploring more of its capabilities should read https://01.org/intel-data-plane-performance-demonstrators/quick-overview.

The setup to test the functionality of the vBNG follows


6.3.1 Installation and Configuration Inside the VM

1. Execute the following command:

yum -y update

2. Disable SELinux:

setenforce 0
vi /etc/selinux/config

and change it so that SELINUX=disabled.

3. Disable the firewall:

systemctl disable firewalld.service
reboot

4. Edit the grub default configuration:

vi /etc/default/grub

Add hugepages to it:

... noirqbalance intel_idle.max_cstate=0 processor.max_cstate=0 ipv6.disable=1 default_hugepagesz=1G hugepagesz=1G hugepages=2 isolcpus=1,2,3,4

5. Rebuild the grub config and reboot the system:

grub2-mkconfig -o /boot/grub2/grub.cfg
reboot

6. Verify that hugepages are available in the VM:

cat /proc/meminfo
HugePages_Total:    2
HugePages_Free:     2
Hugepagesize:       1048576 kB

7. Add the following to the end of the ~/.bashrc file:

export RTE_SDK=/root/dpdk
export RTE_TARGET=x86_64-native-linuxapp-gcc
export OVS_DIR=/root/ovs
export RTE_UNBIND=$RTE_SDK/tools/dpdk_nic_bind.py
export DPDK_DIR=$RTE_SDK
export DPDK_BUILD=$DPDK_DIR/$RTE_TARGET

8. Log in again or source that file:

source ~/.bashrc

9. Install DPDK:

git clone http://dpdk.org/git/dpdk
cd dpdk
git checkout v1.7.1
make install T=$RTE_TARGET
modprobe uio
insmod $RTE_SDK/$RTE_TARGET/kmod/igb_uio.ko


10 Check the PCI addresses of the 82599 cards

lspci | grep Network
00:04.0 Ethernet controller: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection (rev 01)
00:05.0 Ethernet controller: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection (rev 01)
00:06.0 Ethernet controller: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection (rev 01)
00:07.0 Ethernet controller: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection (rev 01)

11. Make sure that the correct PCI addresses are listed in the script bind_to_igb_uio.sh.

12. Download the BNG package:

wget https://01.org/sites/default/files/downloads/intel-data-plane-performance-demonstrators/dppd-bng-v013.zip

13. Extract the DPPD BNG sources:

unzip dppd-bng-v013.zip

14. Build the BNG DPPD application:

yum -y install ncurses-devel
cd dppd-BNG-v013
make

15. Refer to Section 6.3.3, "Extra Preparations on the Compute Node," before running the BNG application in the VM inside the compute node.

16. Make sure that the application starts:

./build/dppd -f config/handle_none.cfg

The handle_none configuration should pass all traffic straight through between ports, which is essentially similar to the L2 forwarding test. The config directory contains additional, more complex BNG configurations and Pktgen scripts. Additional BNG-specific workloads can be found in the dppd-BNG-v013/pktgen-scripts directory.

Following is a sample graphic of the BNG running in a VM with 2 ports


Exit the application by pressing ESC or CTRL-C

Refer to Section 6.3.2 regarding installation and running of the software traffic generator.

For a sanity check, users can use the pktgen wrapper script onps_pktgen-64bytes-UDP-2ports.sh to run PktGen (on its dedicated server) in order to test the handle_none throughput for two physical and two virtual ports. You'll need to update the PKTGEN_DIR variable at the top of the file to point to the right directory, which, referring to Section 6.3.2, is:

PKTGEN_DIR=/home/stack/git/Pktgen-DPDK

6.3.2 Installation and Configuration of the Back-to-Back Host (Packet Generator)

The back-to-back host can be any Intel® Xeon® processor-based system, or it can be any compute node that has been prepared using the instructions in Section 5.1 and Section 5.3. For simplicity, Intel assumes the latter is the case. Also assume that the git directory for the stack user is /home/stack/git.

1. In the git directory, get the source from GitHub:

git clone https://github.com/Pktgen/Pktgen-DPDK.git
cd Pktgen-DPDK

2 An extra package must be installed for Pktgen to compile correctly

yum -y install libpcap-devel

Pktgen comes with its own distribution of the DPDK sources; this bundled version of DPDK must be used. Note that it contains some Wind River-specific helper libraries that are not in the default DPDK distribution and that Pktgen depends on.

3. The $RTE_TARGET variable must be set to a specific value; otherwise these libraries will not build:

cd
vi ~/.bashrc

Add the following three lines to the end:

export RTE_SDK=$HOME/Pktgen-DPDK/dpdk
export RTE_TARGET=x86_64-pktgen-linuxapp-gcc
export PKTGEN_DIR=$HOME/Pktgen-DPDK

4 Re-login or execute the following command

source ~/.bashrc

5. Build the basic DPDK libraries and extra helpers:

cd $RTE_SDK
make install T=$RTE_TARGET

6. Build Pktgen:

cd examples/pktgen
make

7. Adapt the dpdk_nic_bind.py invocation to the actual NICs in use so that both interfaces are bound to igb_uio and DPDK can use them. Check the current binding status with the following command (a hedged binding example follows it):

./tools/dpdk_nic_bind.py --status
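As an illustration only, binding two 82599 ports might look like the following; the PCI addresses 0000:04:00.0 and 0000:05:00.0 are placeholders, so substitute the addresses reported by --status on your system:

./tools/dpdk_nic_bind.py -b igb_uio 0000:04:00.0 0000:05:00.0
./tools/dpdk_nic_bind.py --status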

8. Use onps_pktgen-64bytes-UDP-2ports.sh from onps_server_1_2.tar.gz.


9. Now run the script as root, after the compute node has been set up as in Section 6.3.3, the BNG VM has been prepared as in Section 6.3.1, and the BNG has been started inside the VM.

6.3.3 Extra Preparations on the Compute Node

1. Do the following as the stack user:

cd /home/stack/devstack
vi local.conf

2. Comment out the following:

PHYSICAL_NETWORK=physnet1
OVS_PHYSICAL_BRIDGE=br-p1p1

and at the same time add the following line right below the commented ones:

OVS_BRIDGE_MAPPINGS=default:br-p1p1,physnet1:br-p1p2

3. Run again, as the stack user:

./unstack.sh

./stack.sh

This causes both physical interfaces to come up and get bound to DPDK. Also, a bridge is created on top of each of these interfaces:

ovs-vsctl show
b52bd3ed-0f6c-45b9-ace1-846d901bed64
    Bridge br-p1p1
        Port br-p1p1
            Interface br-p1p1
                type: internal
        Port p1p1
            Interface p1p1
                type: dpdkphy
                options: {port=0}
        Port phy-br-p1p1
            Interface phy-br-p1p1
                type: patch
                options: {peer=int-br-p1p1}
    Bridge br-int
        fail_mode: secure
        Port int-br-p1p2
            Interface int-br-p1p2
                type: patch
                options: {peer=phy-br-p1p2}
        Port int-br-p1p1
            Interface int-br-p1p1
                type: patch
                options: {peer=phy-br-p1p1}
        Port br-int
            Interface br-int
                type: internal
    Bridge br-p1p2
        Port phy-br-p1p2
            Interface phy-br-p1p2
                type: patch
                options: {peer=int-br-p1p2}
        Port p1p2
            Interface p1p2
                type: dpdkphy
                options: {port=1}
        Port br-p1p2
            Interface br-p1p2
                type: internal

4 Move the p1p2 physical port under the same bridge as p1p1

ovs-vsctl del-port p1p2
ovs-vsctl add-port br-p1p1 p1p2 -- set Interface p1p2 type=dpdkphy option:port=1

5. Delete the OpenStack agent:

./rejoin-stack.sh
ctrl-a 1
ctrl-c
ctrl-a d

6 Add the dpdkvhost interfaces for the VM

ovs-vsctl --no-wait add-port br-p1p1 port3 -- set Interface port3 type=dpdkvhost ofport_request=3
ovs-vsctl --no-wait add-port br-p1p1 port4 -- set Interface port4 type=dpdkvhost ofport_request=4

7. Find out the OpenFlow port numbers of the interfaces:

ovs-ofctl show br-p1p1

The output should be similar to the following. Note the number to the left of each interface name, because it is the OpenFlow port number used in the flow rules below:

OFPT_FEATURES_REPLY (xid=0x2): dpid:0000286031010000
n_tables:254, n_buffers:256
capabilities: FLOW_STATS TABLE_STATS QUEUE_STATS ARP_MATCH_IP
actions: OUTPUT SET_VLAN_VID SET_VLAN_PCP STRIP_VLAN SET_DL_SRC SET_DL_DST SET_NW_SRC SET_NW_DST SET_TP_SRC SET_TP_DST
 1(phy-br-p1p1): addr:9e:ae:92:25:3c:c1
     config: 0, state: 0, speed: 0 Mbps now, 0 Mbps max
 2(p1p2): addr:9e:ae:92:25:3c:c1
     config: 0, state: 0, speed: 0 Mbps now, 0 Mbps max
 3(port3): addr:9e:ae:92:25:3c:c1
     config: 0, state: 0, speed: 0 Mbps now, 0 Mbps max
 4(port4): addr:49:04:ff:7f:00:00
     config: 0, state: 0, speed: 0 Mbps now, 0 Mbps max
 16(p1p1): addr:49:04:ff:7f:00:00
     config: 0, state: 0, speed: 0 Mbps now, 0 Mbps max
 LOCAL(br-p1p1): addr:9e:ae:92:25:3c:c1
     config: 0, state: 0, speed: 0 Mbps now, 0 Mbps max
OFPT_GET_CONFIG_REPLY (xid=0x4): frags=normal miss_send_len=0

8 Clean up the flow table of the bridge

ovs-ofctl del-flows br-p1p1

9 Program the flows so each physical interface forwards the packets to a dpdkvhost interface and the other way round

ovs-ofctl add-flow br-p1p1 in_port=16,dl_type=0x0800,idle_timeout=0,action=output:3
ovs-ofctl add-flow br-p1p1 in_port=3,dl_type=0x0800,idle_timeout=0,action=output:16
ovs-ofctl add-flow br-p1p1 in_port=4,dl_type=0x0800,idle_timeout=0,action=output:2
ovs-ofctl add-flow br-p1p1 in_port=2,dl_type=0x0800,idle_timeout=0,action=output:4
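To confirm that the four flows were installed (and, once traffic is running, that their packet counters are increasing), the flow table of the bridge can be dumped:

ovs-ofctl dump-flows br-p1p1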


10. Users can now spawn their vBNG:

qemu-kvm -cpu host -enable-kvm -m 4096 -smp 4,cores=4,threads=1,sockets=1 -name VM1 -hda <path to the VM image file> -mem-path /dev/hugepages -mem-prealloc -vnc :2 -daemonize -net nic,model=virtio,macaddr=00:1e:77:68:09:fd -net tap,ifname=tap1,script=no,downscript=no -netdev type=tap,id=net1,script=no,downscript=no,ifname=port3,vhost=on -device virtio-net-pci,netdev=net1,mac=00:00:01:00:00:01,csum=off,gso=off,guest_tso4=off,guest_tso6=off,guest_ecn=off -netdev type=tap,id=net2,script=no,downscript=no,ifname=port4,vhost=on -device virtio-net-pci,netdev=net2,mac=00:00:01:00:00:02,csum=off,gso=off,guest_tso4=off,guest_tso6=off,guest_ecn=off


Appendix A Additional OpenDaylight Information

This section describes how OpenDaylight can be used in a multi-node setup. Two hosts are used: one running OpenDaylight, the OpenStack controller + compute, and OVS; the second host is a compute node. This section describes how to create a VXLAN tunnel, create VMs, and ping from one VM to another.

Note: Due to a known defect in ODL (https://bugs.opendaylight.org/show_bug.cgi?id=2469), the multi-node setup could not be verified.

Following is a sample local.conf for the OpenDaylight host:

[[local|localrc]]
FORCE=yes

HOST_NAME=<name of this machine>
HOST_IP=<ip of this machine>
HOST_IP_IFACE=<mgmt ip isolated from internet>

PUBLIC_INTERFACE=<isolated IP, could be same as HOST_IP_IFACE>
VLAN_INTERFACE=
FLAT_INTERFACE=

ADMIN_PASSWORD=password
MYSQL_PASSWORD=password
DATABASE_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=no-token-password
HORIZON_PASSWORD=password
RABBIT_PASSWORD=password

disable_service n-net
disable_service n-cpu

enable_service q-svc
enable_service q-agt
enable_service q-dhcp
enable_service q-l3
enable_service q-meta
enable_service neutron
enable_service horizon

LOGFILE=$DEST/stack.sh.log
SCREEN_LOGDIR=$DEST/screen
SYSLOG=True
LOGDAYS=1

# ODL start
Q_HOST=$HOST_IP
enable_service odl-server
enable_service odl-compute
ODL_MGR_IP=10.11.10.7
ENABLED_SERVICES+=,n-api,n-crt,n-obj,n-cpu,n-cond,n-sch,n-novnc,n-cauth,n-cauth,nova
ENABLED_SERVICES+=,cinder,c-api,c-vol,c-sch,c-bak


Q_PLUGIN=ml2

Q_AGENT=openvswitch
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch,opendaylight
Q_ML2_PLUGIN_TYPE_DRIVERS=vlan,flat,local
Q_ML2_TENANT_NETWORK_TYPE=vxlan

ENABLE_TENANT_TUNNELS=True
ENABLE_TENANT_VLANS=True

PHYSICAL_NETWORK=physnet1
ML2_VLAN_RANGES=physnet1:1000:1010
OVS_PHYSICAL_BRIDGE=br-

MULTI_HOST=True

[[post-config|$NOVA_CONF]]

# disable nova security groups
[DEFAULT]
firewall_driver=nova.virt.firewall.NoopFirewallDriver
novncproxy_host=0.0.0.0
novncproxy_port=6080

Here is a sample local.conf for the compute node:

[[local|localrc]]

FORCE=yes
MULTI_HOST=True

HOST_NAME=<name of this machine>
HOST_IP=<ip of this machine>
HOST_IP_IFACE=<isolated interface>
SERVICE_HOST_NAME=<name of the controller machine>
SERVICE_HOST=<ip of controller machine>
Q_HOST=$SERVICE_HOST

MYSQL_HOST=$SERVICE_HOST
RABBIT_HOST=$SERVICE_HOST

GLANCE_HOST=$SERVICE_HOST
GLANCE_HOSTPORT=$SERVICE_HOST:9292
KEYSTONE_AUTH_HOST=$SERVICE_HOST
KEYSTONE_SERVICE_HOST=<ip of controller machine>

ADMIN_PASSWORD=password
MYSQL_PASSWORD=password
DATABASE_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=no-token-password
HORIZON_PASSWORD=password
RABBIT_PASSWORD=password

disable_all_services

enable_service rabbit
enable_service n-cpu
enable_service q-agt
enable_service odl-compute

DEST=/opt/stack
LOGFILE=$DEST/stack.sh.log
SCREEN_LOGDIR=$DEST/screen
SYSLOG=True
LOGDAYS=1


ODL_MGR_IP=<ip of controller machine>

Q_PLUGIN=ml2
Q_AGENT=openvswitch
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch,opendaylight
Q_ML2_PLUGIN_TYPE_DRIVERS=vxlan
OVS_NUM_HUGEPAGES=8192
OVS_DATAPATH_TYPE=netdev

OVDK_OVS_GIT_TAG=

ENABLE_TENANT_TUNNELS=True
ENABLE_TENANT_VLANS=True

Q_ML2_TENANT_NETWORK_TYPE=vxlan
ML2_VLAN_RANGES=physnet1:1000:1010
PHYSICAL_NETWORK=physnet1
OVS_PHYSICAL_BRIDGE=br-p1p1

[[post-config|$NOVA_CONF]]
[DEFAULT]
firewall_driver=nova.virt.firewall.NoopFirewallDriver
vnc_enabled=True
vncserver_listen=0.0.0.0
vncserver_proxyclient_address=$HOST_IP

A.1 Create VMs using DevStack Horizon GUI

After starting the OpenDaylight controller as described in Section 6.1, run stack.sh on the controller and compute nodes.

Log in to http://<control node ip address>:8080 to start the Horizon GUI.

Verify that the node shows up in the GUI.

Create a new VXLAN network:

1 Click on the Networks tab

2 Click on the Create Network button

3 Enter the Network name then click Next


4 Enter the subnet information then click Next


5 Add additional information then click Next

6 Click the Create button

7 Create a VM instance by clicking the Launch Instances button


8 Click on the Details tab to enter VM details


9 Click on the Networking tab then enter network information

VMs will now be created. An equivalent set of CLI commands is sketched below.
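As an illustration only, the same VXLAN network, subnet, and instance can be created from the command line instead of Horizon; the names vxlan-net, vxlan-subnet, vxlan-vm1, the CIDR, and the m1.tiny flavor are placeholders, not values from this guide:

neutron net-create vxlan-net
neutron subnet-create --name vxlan-subnet vxlan-net 10.100.0.0/24
nova boot --image cirros-0.3.1-x86_64 --flavor m1.tiny --nic net-id=<vxlan-net-id> vxlan-vm1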

Instead of disabling the OVSDB neutron service by removing the OVSDB neutron bundle file, it is possible to disable the bundle from the OSGi console. However, there does not appear to be a way to make this persistent, so it must be done each time the controller restarts.


Once the controller is up and running, connect to the OSGi console. The ss command displays all of the bundles that are installed and their status; adding a string filters the list of bundles. List the OVSDB bundles:

osgi> ss ovs
Framework is launched

id      State       Bundle
106     ACTIVE      org.opendaylight.ovsdb.northbound_0.5.0
112     ACTIVE      org.opendaylight.ovsdb_0.5.0
262     ACTIVE      org.opendaylight.ovsdb.neutron_0.5.0

Note that there are three OVSDB bundles running (the OVSDB neutron bundle has not been removed in this case).

Disable the OVSDB neutron bundle, and then list the OVSDB bundles again:

osgi> stop 262
osgi> ss ovs
Framework is launched

id      State       Bundle
106     ACTIVE      org.opendaylight.ovsdb.northbound_0.5.0
112     ACTIVE      org.opendaylight.ovsdb_0.5.0
262     RESOLVED    org.opendaylight.ovsdb.neutron_0.5.0

Now the OVSDB neutron bundle is in the RESOLVED state which means that it is not active


Appendix B BNG as an Appliance

Please download the latest BNG application from https://01.org/intel-data-plane-performance-demonstrators/downloads. More details about how the BNG works can be found at https://01.org/intel-data-plane-performance-demonstrators/quick-overview.



Appendix C Glossary

Acronym Description

ATR Application Targeted Routing

COTS Commercial Off‐The-Shelf

DPI Deep Packet Inspection

FCS Frame Check Sequence

GRE Generic Routing Encapsulation

GRO Generic Receive Offload

IOMMU InputOutput Memory Management Unit

Kpps Kilo packets per second

KVM Kernel-based Virtual Machine

LRO Large Receive Offload

MSI Message Signaling Interrupt

MPLS Multi-protocol Label Switching

Mpps Millions of packets per second

NIC Network Interface Card

pps Packets per second

QAT Quick Assist Technology

QinQ VLAN stacking (802.1ad)

RA Reference Architecture

RSC Receive Side Coalescing

RSS Receive Side Scaling

SP Service Provider

SR-IOV Single root IO Virtualization

TCO Total Cost of Ownership

TSO TCP Segmentation Offload



Appendix D References

Document Name / Source

Internet Protocol version 4: http://www.ietf.org/rfc/rfc791.txt

Internet Protocol version 6: http://www.faqs.org/rfc/rfc2460.txt

Intel® 82599 10 Gigabit Ethernet Controller Datasheet: http://www.intel.com/content/www/us/en/ethernet-controllers/82599-10-gbe-controller-datasheet.html

Intel DDIO: https://www-ssl.intel.com/content/www/us/en/io/direct-data-i-o.html

Bandwidth Sharing Fairness: http://www.intel.com/content/www/us/en/network-adapters/10-gigabit-network-adapters/10-gbe-ethernet-flexible-port-partitioning-brief.html

Design Considerations for efficient network applications with Intel® multi-core processor-based systems on Linux: http://download.intel.com/design/intarch/papers/324176.pdf

OpenFlow with Intel 82599: http://ftp.sunet.se/pub/Linux/distributions/bifrost/seminars/workshop-2011-03-31/Openflow_1103031.pdf

Wu, W., DeMar, P., & Crawford, M. (2012). A Transport-Friendly NIC for Multicore/Multiprocessor Systems. IEEE Transactions on Parallel and Distributed Systems, vol. 23, no. 4, April 2012: http://lss.fnal.gov/archive/2010/pub/fermilab-pub-10-327-cd.pdf

Why Does Flow Director Cause Packet Reordering?: http://arxiv.org/ftp/arxiv/papers/1106/1106.0443.pdf

IA packet processing: http://www.intel.com/p/en_US/embedded/hwsw/technology/packet-processing

High Performance Packet Processing on Cloud Platforms using Linux with Intel® Architecture: http://networkbuilders.intel.com/docs/network_builders_RA_packet_processing.pdf

Packet Processing Performance of Virtualized Platforms with Linux and Intel® Architecture: http://networkbuilders.intel.com/docs/network_builders_RA_NFV.pdf

DPDK: http://www.intel.com/go/dpdk

Intel® DPDK Accelerated vSwitch: https://01.org/packet-processing


LEGAL

By using this document in addition to any agreements you have with Intel you accept the terms set forth below

  • Intel® Open Network Platform Server Reference Architecture (Release 1.2)
    • Revision History
    • Contents
    • 10 Audience and Purpose
    • 20 Summary
      • 21 Network Services Examples
        • 211 Suricata (Next Generation IDS/IPS engine)
        • 212 vBNG (Broadband Network Gateway)
    • 30 Hardware Components
    • 40 Software Versions
      • 41 Obtaining Software Ingredients
    • 50 Installation and Configuration Guide
      • 51 Instructions Common to Compute and Controller Nodes
        • 511 BIOS Settings
        • 512 Operating System Installation and Configuration
          • 5121 Getting the Fedora 20 DVD
          • 5122 Fedora 20 Installation
          • 5123 Additional Packages Installation and Upgrade
          • 5124 Disable and Enable Services
      • 52 Controller Node Setup
        • 521 OpenStack (Juno)
          • 5211 Network Requirements
          • 5212 Storage Requirements
          • 5213 OpenStack Installation Procedures
      • 53 Compute Node Setup
        • 531 Host Configuration
          • 5311 Using DevStack to Deploy vSwitch and OpenStack Components
      • 54 vIPS
        • 541 Network Configuration for non-vIPS Guests
    • 60 Testing the Setup
      • 61 Preparation with OpenStack
        • 611 Deploying Virtual Machines
          • 6111 Default Settings
          • 6112 Customer Settings
          • 6113 Example - VM Deployment
          • 6114 Local vIPS
          • 6115 Remote vIPS
        • 612 Non-uniform Memory Access (Numa) Placement and SR-IOV Pass-through for OpenStack
          • 6121 Prepare Compute Node for SR-IOV Pass-through
          • 6122 Devstack Configurations
          • 6123 Create VM with Numa Placement and SR-IOV
      • 62 Using OpenDaylight
        • 621 Preparing the OpenDaylight Controller
      • 63 Border Network Gateway
        • 631 Installation and Configuration Inside the VM
        • 632 Installation and Configuration of the Back-to-Back Host (Packet Generator)
        • 633 Extra Preparations on the Compute Node
    • Appendix A Additional OpenDaylight Information
      • A1 Create VMs using DevStack Horizon GUI
    • Appendix B BNG as an Appliance
    • Appendix C Glossary
    • Appendix D References
    • LEGAL

50 Installation and Configuration Guide

This section describes the installation and configuration instructions to prepare the controller and compute nodes

51 Instructions Common to Compute and Controller Nodes

This section describes how to prepare both the controller and compute nodes with the correct BIOS settings and operating system installation. The preferred operating system is Fedora 20, although this solutions guide can be adapted to other Linux distributions with relatively little effort.

511 BIOS Settings

Table 5-1. BIOS Settings

Configuration                                                Setting for Controller Node   Setting for Compute Node
Enhanced Intel SpeedStep                                     Enabled                       Disabled
Processor C3                                                 Disabled                      Disabled
Processor C6                                                 Disabled                      Disabled
Intel® Virtualization Technology for Directed I/O (VT-d)     Disabled                      Enabled (OpenStack Numa Placement only)
Intel Hyper-Threading Technology (HTT)                       Enabled                       Disabled
MLC Streamer                                                 Enabled                       Enabled
MLC Spatial Prefetcher                                       Enabled                       Enabled
DCU Instruction Prefetcher                                   Enabled                       Enabled
Direct Cache Access (DCA)                                    Enabled                       Enabled
CPU Power and Performance Policy                             Performance                   Performance
Intel Turbo Boost                                            Enabled                       Off
Memory RAS and Performance Configuration -> Numa Optimized   Enabled                       Enabled


512 Operating System Installation and Configuration

Following are generic instructions for installing and configuring the operating system. Other installation methods, such as network installation, PXE boot installation, or USB key installation, are not described in this solutions guide.

5121 Getting the Fedora 20 DVD

1 Download the 64-bit Fedora 20 DVD (not Fedora 20 Live Media) from the following site

http://fedoraproject.org/en/get-fedora#formats

or from the direct URL:

http://download.fedoraproject.org/pub/fedora/linux/releases/20/Fedora/x86_64/iso/Fedora-20-x86_64-DVD.iso

2 Burn the ISO file to DVD and create an installation disk

5122 Fedora 20 Installation

Use the DVD to install Fedora 20 During the installation click Software selection then choose the following

1 C Development Tool and Libraries

2 Development Tools

Also create a user stack and check the box Make this user administrator during the installation The user stack is used in OpenStack installation

Note: Make sure to download and use the onps_server_1_2.tar.gz tarball. Start with the README file, which provides instructions on how to use Intel's scripts to automate most of the installation steps described in this section, saving you time. When using Intel's scripts, you can jump to Section 54 after installing OpenDaylight based on the instructions described in Section 621.

5123 Additional Packages Installation and Upgrade

Some packages are not installed with the standard Fedora 20 installation but are required by Intelreg Open Network Platform Software (ONPS) components These packages should be installed by the user

git ntp patch socat python-passlib libxslt-devel libffi-devel fuse-devel gluster python-cliff
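
These can be pulled in with a single yum invocation; this is just a convenience sketch using the package names exactly as listed above (on some Fedora 20 repositories the Gluster client package may be named glusterfs rather than gluster, so adjust if yum reports a missing package):

yum install -y git ntp patch socat python-passlib libxslt-devel libffi-devel fuse-devel gluster python-cliff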

ONPS supports Fedora kernel 3.15.6, which is newer than the native Fedora 20 kernel 3.11.10. To upgrade to 3.15.6, follow these steps:

1 Download kernel packages

wget https://kojipkgs.fedoraproject.org/packages/kernel/3.15.6/200.fc20/x86_64/kernel-3.15.6-200.fc20.x86_64.rpm

wget https://kojipkgs.fedoraproject.org/packages/kernel/3.15.6/200.fc20/x86_64/kernel-devel-3.15.6-200.fc20.x86_64.rpm

wget https://kojipkgs.fedoraproject.org/packages/kernel/3.15.6/200.fc20/x86_64/kernel-modules-extra-3.15.6-200.fc20.x86_64.rpm


2 Install kernel packages

rpm -i kernel-3.15.6-200.fc20.x86_64.rpm
rpm -i kernel-devel-3.15.6-200.fc20.x86_64.rpm
rpm -i kernel-modules-extra-3.15.6-200.fc20.x86_64.rpm

3 Reboot the system to boot into the 3.15.6 kernel

Note ONPS depends on libraries provided by your Linux distribution As such it is recommended that you regularly update your Linux distribution with the latest bug fixes and security patches to reduce the risk of security vulnerabilities in your systems

After installing the required packages the operating system should be updated with the following command

yum update -y

This command upgrades to the latest kernel that Fedora supports. To keep kernel version 3.15.6, modify the yum configuration with the following command before running yum update:

echo "exclude=kernel*" >> /etc/yum.conf

After the update completes the system needs to be rebooted

5124 Disable and Enable Services

For OpenStack, the following services must be disabled: SELinux, firewalld, and NetworkManager. Run the following commands:

sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
systemctl disable firewalld.service
systemctl disable NetworkManager.service

The following services should be enabled: ntp, sshd, and network. Run the following commands:

systemctl enable ntpd.service
systemctl enable ntpdate.service
systemctl enable sshd.service
chkconfig network on

It is important to keep the time synchronized between all nodes, and it is necessary to use a known NTP server for all of them. Users can edit /etc/ntp.conf to add a new server and remove the default servers. The following example replaces a default NTP server with a local NTP server 10.0.1.2 and comments out the other default servers:

sed -i 's/server 0.fedora.pool.ntp.org iburst/server 10.0.1.2/g' /etc/ntp.conf
sed -i 's/server 1.fedora.pool.ntp.org iburst/#server 1.fedora.pool.ntp.org iburst/g' /etc/ntp.conf
sed -i 's/server 2.fedora.pool.ntp.org iburst/#server 2.fedora.pool.ntp.org iburst/g' /etc/ntp.conf
sed -i 's/server 3.fedora.pool.ntp.org iburst/#server 3.fedora.pool.ntp.org iburst/g' /etc/ntp.conf


52 Controller Node Setup

This section describes the controller node setup. It is assumed that the user has successfully completed the operating system installation and configuration sections.

Note: Make sure to download and use the onps_server_1_2.tar.gz tarball. Start with the README file, which provides instructions on how to use Intel's scripts to automate most of the installation steps described in this section, saving you time.

521 OpenStack (Juno)

This section documents the features and limitations supported with the Intel® DPDK Accelerated vSwitch and OpenStack Juno.

5211 Network Requirements

General

At least two networks are required to build OpenStack infrastructure in a lab environment One network is used to connect all nodes for OpenStack management (management network) and the other one is a private network exclusively for an OpenStack internal connection (tenant network) between instances (or virtual machines)

One additional network is required for Internet connectivity as installing OpenStack requires pulling packages from various sourcesrepositories on the Internet

Some users might want to have Internet andor external connectivity for OpenStack instances (virtual machines) In this case an optional network can be used

The assumption is that the targeting OpenStack infrastructure contains multiple nodes one is controller node and one or more are compute node(s)

Network Configuration Example

The following is an example of how to configure networks for OpenStack infrastructure The example uses four network interfaces as follows

• ens2f1: For Internet network - Used to pull all necessary packages/patches from repositories on the Internet; configured to obtain a DHCP address.

• ens2f0: For Management network - Used to connect all nodes for OpenStack management; configured to use network 10.11.0.0/16.

• p1p1: For Tenant network - Used for OpenStack internal connections between virtual machines; configured with no IP address.

• p1p2: For Optional External network - Used for virtual machine Internet/external connectivity; configured with no IP address. This interface exists only in the controller node if an external network is configured; it is not needed for a compute node.

Note that among these interfaces, the interface for the tenant network (p1p1 in this example) must be an 82599 port because it is used by DPDK and the Intel® DPDK Accelerated vSwitch. Also note that a static IP address should be used for the management network interface.

In Fedora 20 the network configuration files are located at

/etc/sysconfig/network-scripts


To configure a network on the host system edit the following network configuration files

ifcfg-ens2f1:
DEVICE=ens2f1
TYPE=Ethernet
ONBOOT=yes
BOOTPROTO=dhcp

ifcfg-ens2f0:
DEVICE=ens2f0
TYPE=Ethernet
ONBOOT=yes
BOOTPROTO=static
IPADDR=10.11.12.11
NETMASK=255.255.0.0

ifcfg-p1p1:
DEVICE=p1p1
TYPE=Ethernet
ONBOOT=yes
BOOTPROTO=none

ifcfg-p1p2:
DEVICE=p1p2
TYPE=Ethernet
ONBOOT=yes
BOOTPROTO=none

Note Do not configure the IP address for p1p1 (10 Gbs interface) otherwise DPDK does not work when binding the driver during OpenStack Neutron installation

Note: 10.11.12.11 and 255.255.0.0 are the static IP address and netmask for the management network. It is necessary to use a static IP address on this subnet; the address 10.11.12.11 is just an example.

5212 Storage Requirements

By default, DevStack uses block storage (Cinder) with a volume group named stack-volumes. If not specified otherwise, stack-volumes is created with 10 GB of space from a local file system. Note that stack-volumes is the name of the volume group, which may consist of more than one physical volume.

The following example shows how to use the spare local disks /dev/sdb and /dev/sdc to form stack-volumes on a controller node by running the following commands:

pvcreate /dev/sdb
pvcreate /dev/sdc
vgcreate stack-volumes /dev/sdb /dev/sdc
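
To confirm that the volume group came up with the expected physical volumes and capacity, a quick check with standard LVM tooling (not an ONPS-specific step) is:

vgs stack-volumes
pvs | grep stack-volumes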

5213 OpenStack Installation Procedures

General

DevStack is used to deploy OpenStack in this example The following procedure uses an actual example of an installation performed in an Intel test lab consisting of one controller node (controller) and one compute node (compute)

Controller Node Installation Procedures

The following example uses a host for controller node installation with the following

• Hostname: sdnlab-k01

• Internet network IP address: Obtained from DHCP server

• OpenStack Management IP address: 10.11.12.1

• User/password: stack/stack

Root User Actions

Login as su or root user and perform the following

1 Add stack user to sudoer list

echo "stack ALL=(ALL) NOPASSWD: ALL" >> /etc/sudoers

2 Edit /etc/libvirt/qemu.conf and add or modify the following lines:

cgroup_controllers = [ "cpu", "devices", "memory", "blkio", "cpuset", "cpuacct" ]

cgroup_device_acl = [
    "/dev/null", "/dev/full", "/dev/zero",
    "/dev/random", "/dev/urandom",
    "/dev/ptmx", "/dev/kvm", "/dev/kqemu",
    "/dev/rtc", "/dev/hpet", "/dev/net/tun",
    "/mnt/huge", "/dev/vhost-net"
]

hugetlbfs_mount = "/mnt/huge"

3 Restart the libvirt service and make sure libvirtd is active:

systemctl restart libvirtd.service
systemctl status libvirtd.service

Stack User Actions

1 Login as a stack user

2 Configure the appropriate proxies (yum, http, https, and git) for package installation and make sure these proxies are functional. Note that on the controller node, localhost and its IP address should be included in the no_proxy setup (for example, export no_proxy=localhost,10.11.12.1). An example is sketched below.
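
A minimal sketch of such a proxy environment follows; the proxy host and port are placeholders for your own site proxy, not values taken from this guide:

export http_proxy=http://proxy.example.com:8080
export https_proxy=http://proxy.example.com:8080
export no_proxy=localhost,127.0.0.1,10.11.12.1
# yum can also be pointed at the proxy explicitly by adding
# proxy=http://proxy.example.com:8080 to /etc/yum.conf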

3 Intel® DPDK Accelerated vSwitch patches for OpenStack

The zip file openstack_ovdkl02-907.zip contains the necessary patches for OpenStack; currently they are not native to OpenStack. The file can be downloaded from:

https://01.org/sites/default/files/page/openstack_ovdkl02-907.zip

Place the file in the /home/stack directory and unzip it. Three patch files, devstack.patch, nova.patch, and neutron.patch, will be present after unzipping:

cd /home/stack
wget https://01.org/sites/default/files/page/openstack_ovdkl02-907.zip
unzip openstack_ovdkl02-907.zip

4 Download DevStack source

git clone https://github.com/openstack-dev/devstack.git

5 Check out DevStack with Intelreg DPDK Accelerated vSwitch and patch

cd /home/stack/devstack
git checkout d6f700db33aeab68916156a98971aef8cfa53a2e
patch -p1 < /home/stack/devstack.patch


6 Download and patch Nova and Neutron

sudo mkdir /opt/stack
sudo chown stack:stack /opt/stack
cd /opt/stack
git clone https://github.com/openstack/nova.git
git clone https://github.com/openstack/neutron.git
cd /opt/stack/nova
git checkout b7738bfb6c2f271d047e8f20c0b74ef647367111
patch -p1 < /home/stack/nova.patch

7 Create localconf file in homestackdevstack

8 Pay attention to the following in the localconf file

a Use Rabbit for messaging services (Rabbit is on by default) In the past Fedora only supported QPID for OpenStack Now it only supports Rabbit

b Explicitly disable Nova compute service on the controller This is because by default Nova compute service is enabled

disable_service n-cpu

c To use Open vSwitch specify in configuration for ML2 plug-in

Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch

d Explicitly disable tenant tunneling and enable tenant VLAN This is because by default tunneling is used

ENABLE_TENANT_TUNNELS=False
ENABLE_TENANT_VLANS=True

e A sample localconf file for the controller node is as follows:

Controller node
[[local|localrc]]

FORCE=yes
ADMIN_PASSWORD=password
MYSQL_PASSWORD=password
DATABASE_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=no-token-password
HORIZON_PASSWORD=password
RABBIT_PASSWORD=password

disable_service n-net
disable_service n-cpu
enable_service q-svc
enable_service q-agt
enable_service q-dhcp
enable_service q-l3
enable_service q-meta
enable_service neutron
enable_service horizon

DEST=/opt/stack
LOGFILE=$DEST/stack.sh.log
SCREEN_LOGDIR=$DEST/screen
SYSLOG=True
LOGDAYS=1

HOST_IP_IFACE=ens2f0
PUBLIC_INTERFACE=p1p2
VLAN_INTERFACE=p1p1
FLAT_INTERFACE=p1p1

Q_AGENT=openvswitch
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch
Q_ML2_PLUGIN_TYPE_DRIVERS=vlan,flat,local

ENABLE_TENANT_TUNNELS=False
ENABLE_TENANT_VLANS=True
PHYSICAL_NETWORK=physnet1
ML2_VLAN_RANGES=physnet1:1000:1010
OVS_PHYSICAL_BRIDGE=br-p1p1
MULTI_HOST=True

[[post-config|$NOVA_CONF]]
[DEFAULT]
firewall_driver=nova.virt.firewall.NoopFirewallDriver
novncproxy_host=0.0.0.0
novncproxy_port=6080

9 Install DevStack

cd /home/stack/devstack
./stack.sh

10 For a successful installation the following shows at the end of screen output

stacksh completed in XXX seconds

where XXX is the number of seconds

11 For controller node only mdash Add physical port(s) to the bridge(s) created by the DevStack installation The following example can be used to configure the two bridges br-p1p1 (for virtual network) and br-ex (for external network)

sudo ovs-vsctl add-port br-p1p1 p1p1
sudo ovs-vsctl add-port br-ex p1p2

12 Make sure proper VLANs are created in the switch connecting physical port p1p1 For example the previous localconf specifies VLAN range of 1000-1010 therefore matching VLANs 1000 to 1010 should be configured in the switch


53 Compute Node Setup

This section describes how to complete the setup of the compute nodes. It is assumed that the user has successfully completed the BIOS settings and operating system installation and configuration sections.

Note: Make sure to download and use the onps_server_1_2.tar.gz tarball. Start with the README file, which provides instructions on how to use Intel's scripts to automate most of the installation steps described in this section, saving you time.

531 Host Configuration

5311 Using DevStack to Deploy vSwitch and OpenStack Components

General

Deploying OpenStack and the Intel® DPDK Accelerated vSwitch using DevStack on a compute node follows the same procedures as on the controller node. Differences include:

• Required services are nova compute, neutron agent, and Rabbit.

• The Intel® DPDK Accelerated vSwitch is used in place of Open vSwitch for the neutron agent.

Compute Node Installation Example

The following example uses a host for compute node installation with the following

• Hostname: sdnlab-k02

• Lab network IP address: Obtained from DHCP server

• OpenStack Management IP address: 10.11.12.2

• User/password: stack/stack

Note the following

• No_proxy setup: Localhost and its IP address should be included in the no_proxy setup. In addition, the hostname and IP address of the controller node should also be included. For example:

export no_proxy=localhost,10.11.12.2,sdnlab-k01,10.11.12.1

• Differences in the localconf file:

  - The service host is the controller, as are other OpenStack services such as MySQL, Rabbit, Keystone, and Image; therefore they should be spelled out. Using the controller node example in the previous section, the service host and its IP address should be:

SERVICE_HOST_NAME=sdnlab-k01
SERVICE_HOST=10.11.12.1

  - The only OpenStack services required in compute nodes are messaging, nova compute, and neutron agent, so the localconf might look like:

disable_all_services
enable_service rabbit
enable_service n-cpu
enable_service q-agt


  - The user has the option to use ovdk or openvswitch for the neutron agent:

Q_AGENT=ovdk

or

Q_AGENT=openvswitch

Note For openvswitch, the user can specify regular or accelerated Open vSwitch (accelerated OVS). If accelerated OVS is used, the following setting should be added:

OVS_DATAPATH_TYPE=netdev

Note If both are specified in the same localconf file, the latter one overwrites the previous one.

  - For the OVDK and accelerated OVS huge pages setting, specify the number of huge pages to be allocated and the mounting point (default is /mnt/huge):

OVDK_NUM_HUGEPAGES=8192

or

OVS_NUM_HUGEPAGES=8192

  - For this version, Intel uses specific versions of OVDK or accelerated OVS from their respective repositories. Specify the following in the localconf file if OVDK or accelerated OVS is used:

OVDK_GIT_TAG=6210bb0a6139b20283de115f87aa7a381b04670f
OVS_GIT_TAG=b35839f3855e3b812709c6ad1c9278f498aa9935

  - Binding the physical port to the bridge is done through the following line in localconf. For example, to bind port p1p1 to bridge br-p1p1, use:

OVS_PHYSICAL_BRIDGE=br-p1p1

  - A sample localconf file for a compute node with the ovdk agent follows.

Compute node
[[local|localrc]]

FORCE=yes
MULTI_HOST=True

HOST_NAME=$(hostname)
HOST_IP=10.11.12.2
HOST_IP_IFACE=ens2f0
SERVICE_HOST_NAME=10.11.12.1
SERVICE_HOST=10.11.12.1

MYSQL_HOST=$SERVICE_HOST
RABBIT_HOST=$SERVICE_HOST

GLANCE_HOST=$SERVICE_HOST
GLANCE_HOSTPORT=$SERVICE_HOST:9292
KEYSTONE_AUTH_HOST=$SERVICE_HOST
KEYSTONE_SERVICE_HOST=$SERVICE_HOST

ADMIN_PASSWORD=password
MYSQL_PASSWORD=password
DATABASE_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=no-token-password
HORIZON_PASSWORD=password
RABBIT_PASSWORD=password

disable_all_services
enable_service rabbit
enable_service n-cpu
enable_service q-agt

DEST=/opt/stack
LOGFILE=$DEST/stack.sh.log
SCREEN_LOGDIR=$DEST/screen
SYSLOG=True
LOGDAYS=1

Q_AGENT=ovdk
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch
Q_ML2_PLUGIN_TYPE_DRIVERS=vlan
OVDK_NUM_HUGEPAGES=8192
OVDK_GIT_TAG=6210bb0a6139b20283de115f87aa7a381b04670f

ENABLE_TENANT_TUNNELS=False
ENABLE_TENANT_VLANS=True
ML2_VLAN_RANGES=physnet1:1000:1010
PHYSICAL_NETWORK=physnet1
OVS_PHYSICAL_BRIDGE=br-p1p1

[[post-config|$NOVA_CONF]]
[DEFAULT]
firewall_driver=nova.virt.firewall.NoopFirewallDriver
vnc_enabled=True
vncserver_listen=0.0.0.0
vncserver_proxyclient_address=$HOST_IP

  - A sample localconf file for a compute node with the accelerated OVS agent follows.

Compute node
[[local|localrc]]

FORCE=yes
MULTI_HOST=True

HOST_NAME=$(hostname)
HOST_IP=10.11.12.2
HOST_IP_IFACE=ens2f0

SERVICE_HOST_NAME=sdnlab-k01
SERVICE_HOST=10.11.12.1

MYSQL_HOST=$SERVICE_HOST
RABBIT_HOST=$SERVICE_HOST

GLANCE_HOST=$SERVICE_HOST
GLANCE_HOSTPORT=$SERVICE_HOST:9292
KEYSTONE_AUTH_HOST=$SERVICE_HOST
KEYSTONE_SERVICE_HOST=$SERVICE_HOST

ADMIN_PASSWORD=password
MYSQL_PASSWORD=password
DATABASE_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=no-token-password
HORIZON_PASSWORD=password
RABBIT_PASSWORD=password

disable_all_services
enable_service rabbit
enable_service n-cpu
enable_service q-agt

DEST=/opt/stack
LOGFILE=$DEST/stack.sh.log
SCREEN_LOGDIR=$DEST/screen
SYSLOG=True
LOGDAYS=1

Q_AGENT=openvswitch
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch
Q_ML2_PLUGIN_TYPE_DRIVERS=vlan
OVS_NUM_HUGEPAGES=8192
OVS_DATAPATH_TYPE=netdev
OVS_GIT_TAG=b35839f3855e3b812709c6ad1c9278f498aa9935

ENABLE_TENANT_TUNNELS=False
ENABLE_TENANT_VLANS=True
ML2_VLAN_RANGES=physnet1:1000:1010
PHYSICAL_NETWORK=physnet1
OVS_PHYSICAL_BRIDGE=br-p1p1

[[post-config|$NOVA_CONF]]
[DEFAULT]
firewall_driver=nova.virt.firewall.NoopFirewallDriver
vnc_enabled=True
vncserver_listen=0.0.0.0
vncserver_proxyclient_address=$HOST_IP

54 vIPS

The vIPS used is Suricata, which should be installed in a VM as an RPM package as previously described. To configure it to run in inline mode (IPS), use the following steps:

1 Turn on IP forwarding

sysctl -w net.ipv4.ip_forward=1

2 Mangle all traffic from one vPort to the other using a netfilter queue

iptables -I FORWARD -i eth1 -o eth2 -j NFQUEUE
iptables -I FORWARD -i eth2 -o eth1 -j NFQUEUE

3 Have Suricata run in inline mode using the netfilter queue

suricata -c /etc/suricata/suricata.yaml -q 0

4 Enable ARP proxying

echo 1 > /proc/sys/net/ipv4/conf/eth1/proxy_arp
echo 1 > /proc/sys/net/ipv4/conf/eth2/proxy_arp
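
Once Suricata is running, a quick sanity check (standard iptables usage, not an extra step from the original procedure) is to watch the FORWARD rule counters grow while traffic flows between the two subnets:

iptables -L FORWARD -v -n
# The packet/byte counters on the two NFQUEUE rules should increase as
# traffic passes from one vPort to the other through Suricata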

541 Network Configuration for non-vIPS Guests

1 Turn on IP forwarding

sysctl -w net.ipv4.ip_forward=1

2 In the source add the route to the sink

route add -net 192.168.200.0/24 eth1

3 At the sink add the route to the source

route add -net 192.168.100.0/24 eth1
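
With forwarding, proxy ARP, and both routes in place, traffic between the source and sink should transit the vIPS. A simple way to confirm the path, assuming the sink VM received an address such as 192.168.200.10 from DHCP (an example address, not one defined in this guide):

ping -c 3 192.168.200.10
# While the ping runs, Suricata's statistics in the vIPS should show packets
# being processed, for example with: tail -f /var/log/suricata/stats.log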


60 Testing the Setup

This section describes how to bring up the VMs in a compute node, connect them to the virtual network(s), and verify their functionality.

Note: Currently it is not possible to have more than one virtual network in a multi-compute-node setup, although it is possible in a single-compute-node setup.

61 Preparation with OpenStack

611 Deploying Virtual Machines

6111 Default Settings

OpenStack comes with the following default settings

• Tenant (Project): admin, demo

• Network:

  - Private network (virtual network): 10.0.0.0/24

  - Public network (external network): 172.24.4.0/24

• Image: cirros-0.3.1-x86_64

• Flavor: nano, micro, tiny, small, medium, large, xlarge

To deploy new instances (VMs) with different setups (such as a different VM image flavor or network) users must create their own See below for details of how to create them

To access the OpenStack dashboard, use a web browser (Firefox, Internet Explorer, or others) and the controller's IP address (management network). For example:

http://10.11.12.1

Login information is defined in the localconf file In the examples that follow password is the password for both admin and demo users


6112 Customer Settings

The following examples describe how to create a custom VM image, flavor, and aggregate/availability zone using OpenStack commands. The examples assume the IP address of the controller is 10.11.12.1.

1 Create a credential file admin-cred for admin user The file contains the following lines

export OS_USERNAME=admin
export OS_TENANT_NAME=admin
export OS_PASSWORD=password
export OS_AUTH_URL=http://10.11.12.1:35357/v2.0

2 Source admin-cred to the shell environment for actions of creating glance image aggregateavailability zone and flavor

source admin-cred

3 Create an OpenStack glance image A VM image file should be ready in a location accessible by OpenStack

glance image-create --name <image-name-to-create> --is-public=true --container-format=bare --disk-format=<format> --file=<image-file-path-name>

The following example assumes the image file fedora20-x86_64-basic.qcow2 is located on an NFS share mounted at /mnt/nfs/openstack/images on the controller host. The following command creates a glance image named fedora-basic in qcow2 format for public use (that is, any tenant can use this glance image):

glance image-create --name fedora-basic --is-public=true --container-format=bare --disk-format=qcow2 --file=/mnt/nfs/openstack/images/fedora20-x86_64-basic.qcow2

4 Create host aggregate and availability zone

First find out the available hypervisors and then use the information for creating aggregateavailability zone

nova hypervisor-list
nova aggregate-create <aggregate-name> <zone-name>
nova aggregate-add-host <aggregate-name> <hypervisor-name>

The following example creates an aggregate named aggr-g06 with one availability zone named zone-g06 and the aggregate contains one hypervisor named sdnlab-g06

nova aggregate-create aggr-g06 zone-g06
nova aggregate-add-host aggr-g06 sdnlab-g06

5 Create a flavor. A flavor is a virtual hardware configuration for the VMs; it defines, among other things, the number of virtual CPUs and the size of virtual memory and disk space.

The following command creates a flavor named onps-flavor with an ID of 1001, 1024 MB of virtual memory, 4 GB of virtual disk space, and 1 virtual CPU:

nova flavor-create onps-flavor 1001 1024 4 1


6113 Example mdash VM Deployment

The following example describes how to use a custom VM image, flavor, and aggregate to launch a VM for the demo tenant using OpenStack commands. Again, the example assumes the IP address of the controller is 10.11.12.1.

1 Create a credential file demo-cred for a demo user The file contains the following lines

export OS_USERNAME=demo
export OS_TENANT_NAME=demo
export OS_PASSWORD=password
export OS_AUTH_URL=http://10.11.12.1:35357/v2.0

2 Source demo-cred to the shell environment for actions of creating tenant network and instance (VM)

source demo-cred

3 Create network for tenant demo Take the following steps

a Get tenant demo

keystone tenant-list | grep -Fw demo

The following creates a network with a name of net-demo for tenant with ID 10618268adb64f17b266fd8fb83c960d

neutron net-create --tenant-id 10618268adb64f17b266fd8fb83c960d net-demo

b Create subnet

neutron subnet-create --tenant-id <demo-tenant-id> --name <subnet_name> <network-name> <net-ip-range>

The following creates a subnet named sub-demo with CIDR 192.168.2.0/24 for network net-demo:

neutron subnet-create --tenant-id 10618268adb64f17b266fd8fb83c960d --name sub-demo net-demo 192.168.2.0/24

4 Create instance (VM) for tenant demo Take the following steps

a Get the name andor ID of the image flavor and availability zone to be used for creating instance

glance image-list
nova flavor-list
nova aggregate-list
neutron net-list

b Launch an instance (VM) using information obtained from previous step

nova boot --image <image-id> --flavor <flavor-id> --availability-zone <zone-name> --nic net-id=<network-id> <instance-name>

c The new VM should be up and running in a few minutes
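
Putting the previous steps together, a concrete invocation might look like the following sketch; the image, flavor, and zone names come from the earlier examples, the network UUID is whatever neutron net-list reports for net-demo, and demo-vm1 is an arbitrary instance name:

nova boot --image fedora-basic --flavor onps-flavor --availability-zone zone-g06 --nic net-id=<net-demo-uuid> demo-vm1
nova list
# wait until the instance reaches the ACTIVE state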

5 Log into the OpenStack dashboard using the demo user credential click Instances under Project in the left pane the new VM should show in the right pane Click instance name to open Instance Details view then click Console in the top menu to access the VM as follows


6114 Local vIPS

Configuration

1 OpenStack brings up the VMs and connects them to the vSwitch

2 IP addresses of the VMs get configured using the DHCP server VM1 belongs to one subnet and VM3 to a different one VM2 has ports on both subnets

3 Flows get programmed to the vSwitch by the OpenDaylight controller (Section 62)

Data Path (Numbers Matching Red Circles)

1 VM1 sends a flow to VM3 through the vSwitch

2 The vSwitch forwards the flow to the first vPort of VM2 (active IPS)

3 The IPS receives the flow inspects it and (if not malicious) sends it out through its second vPort

4 The vSwitch forwards it to VM3

Figure 6-1 Local vIPS


6115 Remote vIPS

Configuration

1 OpenStack brings up the VMs and connects them to the vSwitch

2 The IP addresses of the VMs get configured using the DHCP server

Data Path (Numbers Matching Red Circles)

1 VM1 sends a flow to VM3 through the vSwitch inside compute node 1

2 The vSwitch forwards the flow out of the first 82599 port to the first 82599 port of compute node 2

3 The vSwitch of compute node 2 forwards the flow to the first port of the vHost where the traffic gets consumed by VM1

4 The IPS receives the flow inspects it and (provided it is not malicious) sends it out through its second port of the vHost into the vSwitch of compute node 2

5 The vSwitch forwards the flow out of the second 82599 port of compute node 2 into the second port of the 82599 in compute node 1

6 The vSwitch of compute node 1 forwards the flow into the port of the vHost of VM3 where the flow gets terminated

Figure 6-2 Remote vIPS


612 Non-uniform Memory Access (Numa) Placement and SR-IOV Pass-through for OpenStack

NUMA placement was introduced as a new feature in the OpenStack Juno release. It enables an OpenStack administrator to pin guest systems to particular NUMA nodes for optimization. With an SR-IOV enabled network interface card, each SR-IOV port is associated with a Virtual Function (VF); OpenStack SR-IOV pass-through gives a guest direct access to a VF.

6121 Prepare Compute Node for SR-IOV Pass-through

To enable the previous features follow these steps to configure compute node

1 The server hardware must support IOMMU (Intel VT-d). To check whether IOMMU is supported, run the following command; the output should show IOMMU entries:

dmesg | grep -e IOMMU

Note IOMMU can be enabled/disabled through a BIOS setting under Advanced and then Processor.

2 Enable kernel IOMMU in grub For Fedora 20 run commands

sed -i 's/rhgb quiet/rhgb quiet intel_iommu=on/g' /etc/default/grub
grub2-mkconfig -o /boot/grub2/grub.cfg
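
After the reboot, it is worth confirming that the kernel actually picked up the flag; a simple check (not part of the original steps) is:

cat /proc/cmdline | grep intel_iommu
dmesg | grep -i -e DMAR -e IOMMU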

3 Install necessary packages

yum install -y yajl-devel device-mapper-devel libpciaccess-devel libnl-devel dbus-devel numactl-devel python-devel

4 Install libvirt v1.2.8 or newer. The following example uses v1.2.9:

systemctl stop libvirtd

yum remove libvirt
yum remove libvirtd

wget http://libvirt.org/sources/libvirt-1.2.9.tar.gz
tar zxvf libvirt-1.2.9.tar.gz

cd libvirt-1.2.9
./autogen.sh --system --with-dbus
make
make install

systemctl start libvirtd

Make sure libvirtd is running v129

libvirtd --version

5 Install libvirt-python Example below uses v129 to match libvirt version

yum remove libvirt-python

wget https://pypi.python.org/packages/source/l/libvirt-python/libvirt-python-1.2.9.tar.gz
tar zxvf libvirt-python-1.2.9.tar.gz

cd libvirt-python-1.2.9
python setup.py install


6 Modify etclibvirtqemuconf add

/dev/vfio/vfio

to

cgroup_device_acl list

An example follows

cgroup_device_acl = [
    "/dev/null", "/dev/full", "/dev/zero",
    "/dev/random", "/dev/urandom",
    "/dev/ptmx", "/dev/kvm", "/dev/kqemu",
    "/dev/rtc", "/dev/hpet", "/dev/net/tun",
    "/dev/vfio/vfio"
]

7 Enable the SR-IOV virtual function for an 82599 interface The following example enables 2 VFs for interface p1p1

echo 2 > /sys/class/net/p1p1/device/sriov_numvfs

To check that virtual functions are enabled

lspci -nn | grep 82599

The screen output should display the physical function and two virtual functions
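
The sriov_numvfs value does not survive a reboot. One simple way to reapply it at boot on Fedora 20, offered here as an assumption rather than a step from the original procedure, is to use rc.local:

cat >> /etc/rc.d/rc.local << 'EOF'
echo 2 > /sys/class/net/p1p1/device/sriov_numvfs
EOF
chmod +x /etc/rc.d/rc.local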

6122 Devstack Configurations

In the following text, the example uses a controller with IP address 10.11.12.1 and a compute node with 10.11.12.4. The PCI vendor ID (8086) and product IDs of the 82599 can be obtained from the output of the following command (10fb for the physical function and 10ed for the VF):

lspci -nn | grep 82599

On Controller node

1 Edit Controller localconf Note that the same localconf file of Section 5213 is used here but adding the following

Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch,sriovnicswitch

[[post-config|$NOVA_CONF]]
[DEFAULT]
scheduler_default_filters=RamFilter,ComputeFilter,AvailabilityZoneFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,PciPassthroughFilter,NUMATopologyFilter
pci_alias={"name":"niantic","product_id":"10ed","vendor_id":"8086"}

[[post-config|$Q_PLUGIN_CONF_FILE]]
[ml2_sriov]
supported_pci_vendor_devs = 8086:10fb, 8086:10ed

2 Run stacksh

On Compute node

1 Edit /opt/stack/nova/requirements.txt and add "libvirt-python>=1.2.8":

echo "libvirt-python>=1.2.8" >> /opt/stack/nova/requirements.txt

2 Edit Compute localconf for accelerated OVS Note that the same localconf file of Section 5311 is used here


3 Add the following:

[[post-config|$NOVA_CONF]]
[DEFAULT]
pci_passthrough_whitelist={"address":"0000:08:00.0","vendor_id":"8086","physical_network":"physnet1"}
pci_passthrough_whitelist={"address":"0000:08:10.0","vendor_id":"8086","physical_network":"physnet1"}
pci_passthrough_whitelist={"address":"0000:08:10.2","vendor_id":"8086","physical_network":"physnet1"}

4 Remove (or comment out) the following. Note that currently SR-IOV pass-through is only supported with standard OVS:

OVS_NUM_HUGEPAGES=8192
OVS_DATAPATH_TYPE=netdev
OVS_GIT_TAG=b35839f3855e3b812709c6ad1c9278f498aa9935

Run stacksh for both controller and compute nodes to complete the Devstack installation

6123 Create VM with Numa Placement and SR-IOV

1 After stacking is successful on both the controller and compute nodes, verify that the PCI pass-through device(s) are in the OpenStack database:

mysql -uroot -ppassword -h 10.11.12.1 nova -e 'select * from pci_devices;'

2 The output should show entry(ies) of PCI device(s) similar to the following

| 2014-11-18 194114 | NULL | NULL | 0 | 1 | 3 | 000008100 | 10ed | 8086 | type-VF | pci_0000_08_10_0 | label_8086_10ed | available | phys_function 000008000 | NULL | NULL | 0 |

3 Next to create a flavor for example

nova flavor-create numa-flavor 1001 1024 4 1

where

flavor name = numa-flavor
id = 1001
virtual memory = 1024 MB
virtual disk size = 4 GB
number of virtual CPUs = 1

4 Modify flavor for numa placement with PCI pass-through

nova flavor-key 1001 set "pci_passthrough:alias"="niantic:1" hw:numa_nodes=1 hw:numa_cpus.0=0 hw:numa_mem.0=1024

5 To show detailed information of the flavor

nova flavor-show 1001

6 Create a VM numa-vm1 with the flavor numa-flavor under the default project demo Note that the following example assumes a image fedora-basic and an availability zone zone-04 are already in place (see Section 6112) and the private is the default network for demo project

nova boot --image <image-id> --flavor <flavor-id> --availability-zone <zone-name> --nic net-id=<network-id> numa-vm1

where numa-vm1 is the name of instance of the VM to be booted


Access the VM from the OpenStack Horizon the new VM shows two virtual network interfaces The interface with a SR-IOV VF should show a name of ensX where X is a numerical number For example ens5 If a DHCP server is available for the physical interface (p1p1 in this example) the VF gets an IP address automatically otherwise users can assign an IP address to the interface the same way as a standard network interface

To verify network connectivity through a VF users can set up two compute hosts and create a VM on each node After obtaining IP addresses the VMs should communicate with each other as with a normal network

62 Using OpenDaylight

This section describes how to download, install, and set up an OpenDaylight controller.

621 Preparing the OpenDaylight Controller

1 Download the pre-built OpenDaylight Helium-SR1 distribution:

wget http://nexus.opendaylight.org/content/repositories/opendaylight.release/org/opendaylight/integration/distribution-karaf/0.2.1-Helium-SR1/distribution-karaf-0.2.1-Helium-SR1.tar.gz

2 Extract the archive and cd into it

tar xf distribution-karaf-0.2.1-Helium-SR1.tar.gz
cd distribution-karaf-0.2.1-Helium-SR1

3 Use the binkaraf executable start the Karaf shell


4 Install the required features

Karaf might take a long time to start, or feature installation might fail, if the host does not have network access; you'll need to set up the appropriate proxy settings. An example feature set is shown below.
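
The exact feature list depends on the intended integration. A commonly used set for OpenStack/OVSDB integration on Helium, offered here as an assumption rather than a list taken from this guide, is:

# from the Karaf shell started with ./bin/karaf
feature:install odl-base-all odl-aaa-authn odl-restconf odl-nsf-all odl-adsal-northbound odl-mdsal-apidocs odl-ovsdb-openstack odl-ovsdb-northbound odl-dlux-core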

63 Border Network Gateway

This section describes how to install and run a Border Network Gateway (BNG) on a compute node that is prepared as described in Section 51 and Section 53. The example interface names from those sections are kept here as well. For simplicity, the BNG uses the handle_none configuration mode, which makes it work as an L2 forwarding engine. The BNG is more capable than this; users interested in exploring more of its capabilities should read https://01.org/intel-data-plane-performance-demonstrators/quick-overview.

The setup to test the functionality of the vBNG follows


631 Installation and Configuration Inside the VM

1 Execute the following command

yum -y update

2 Disable SELinux

setenforce 0
vi /etc/selinux/config

and change it so that SELINUX=disabled

3 Disable the firewall

systemctl disable firewalld.service
reboot

4 Edit grub default configuration

vi /etc/default/grub

Add hugepages to it

... noirqbalance intel_idle.max_cstate=0 processor.max_cstate=0 ipv6.disable=1 default_hugepagesz=1G hugepagesz=1G hugepages=2 isolcpus=1,2,3,4

5 Rebuild grub config and reboot the system

grub2-mkconfig -o /boot/grub2/grub.cfg
reboot

6 Verify that hugepages are available in the VM

cat /proc/meminfo
HugePages_Total:    2
HugePages_Free:     2
Hugepagesize:       1048576 kB

7 Add the following to the end of ~bashrc file

---------------------------------------------
export RTE_SDK=/root/dpdk
export RTE_TARGET=x86_64-native-linuxapp-gcc
export OVS_DIR=/root/ovs

export RTE_UNBIND=$RTE_SDK/tools/dpdk_nic_bind.py
export DPDK_DIR=$RTE_SDK
export DPDK_BUILD=$DPDK_DIR/$RTE_TARGET
---------------------------------------------

8 Re-login or source that file

source ~/.bashrc

9 Install DPDK

git clone http://dpdk.org/git/dpdk
cd dpdk
git checkout v1.7.1
make install T=$RTE_TARGET
modprobe uio
insmod $RTE_SDK/$RTE_TARGET/kmod/igb_uio.ko


10 Check the PCI addresses of the 82599 cards

lspci | grep Network
00:04.0 Ethernet controller: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection (rev 01)
00:05.0 Ethernet controller: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection (rev 01)
00:06.0 Ethernet controller: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection (rev 01)
00:07.0 Ethernet controller: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection (rev 01)

11 Make sure that the correct PCI addresses are listed in the script bind_to_igb_uio.sh; a manual binding example follows.
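
If that helper script is not available, the same binding can be done directly with the DPDK binding tool; the PCI addresses below are the ones reported by lspci in the previous step and are examples only:

$RTE_SDK/tools/dpdk_nic_bind.py --bind=igb_uio 00:04.0 00:05.0 00:06.0 00:07.0
$RTE_SDK/tools/dpdk_nic_bind.py --status
# confirm the four ports are now listed under the igb_uio driver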

12 Download BNG packages

wget https://01.org/sites/default/files/downloads/intel-data-plane-performance-demonstrators/dppd-bng-v013.zip

13 Extract DPPD BNG sources

unzip dppd-bng-v013.zip

14 Build BNG DPPD application

yum -y install ncurses-devel
cd dppd-BNG-v013
make

15 Refer to Section 633, "Extra Preparations on the Compute Node," before running the BNG application in the VM inside the compute node.

16 Make sure that the application starts

./build/dppd -f config/handle_none.cfg

The handle none configuration should be passing all through traffic between ports which is essentially similar to the L2 forwarding test The config directory contains additional complex BNG configurations and Pktgen scripts Additional BNG specific workloads can be found in the dppd-BNGv013pktgen-scripts directory

Following is a sample graphic of the BNG running in a VM with 2 ports


Exit the application by pressing ESC or CTRL-C

Refer to Section 632 regarding installation and running the software traffic generator

For a sanity check, users can use the pktgen wrapper script onps_pktgen-64bytes-UDP-2ports.sh to run Pktgen (on its dedicated server) and test the handle_none throughput across two physical and two virtual ports. You'll need to update PKTGEN_DIR at the top of the file to point to the right directory (see Section 632):

PKTGEN_DIR=/home/stack/git/Pktgen-DPDK

632 Installation and Configuration of the Back-to-Back Host (Packet Generator)

The back-to-back host can be any Intel® Xeon® processor-based system, or it can be any compute node that has been prepared using the instructions in Section 51 and Section 53. For simplicity, Intel assumes the latter is the case. Also assume that the git directory for the stack user is /home/stack/git.

1 In the git directory get the source from Github

git clone https://github.com/Pktgen/Pktgen-DPDK.git
cd Pktgen-DPDK

2 An extra package must be installed for Pktgen to compile correctly

yum -y install libpcap-devel

Pktgen comes with its own distribution of DPDK sources This bundled version of DPDK must be used Note that it contains some WindRiver specific helper libraries that are not in the default DPDK distribution which Pktgen depends on

3 The $RTE_TARGET variable must be set to a specific value Otherwise these libraries will not build

cd
vi .bashrc

Add the following three lines to the end

export RTE_SDK=$HOME/Pktgen-DPDK/dpdk
export RTE_TARGET=x86_64-pktgen-linuxapp-gcc
export PKTGEN_DIR=$HOME/Pktgen-DPDK

4 Re-login or execute the following command

source ~/.bashrc

5 Build the basic DPDK libraries and extra helpers

cd $RTE_SDK
make install T=$RTE_TARGET

6 Build Pktgen

cd examples/pktgen
make

7 Adapt the dpdk_nic_bind.py invocation to the actual NICs in use so that both interfaces are bound to igb_uio and DPDK can use them. See the details of the command that follows:

./tools/dpdk_nic_bind.py --status

8 Use onps_pktgen-64-bytes-UDP-2ports.sh from onps_server_1_2.tar.gz


9 Now run the script as root, after the compute node has been set up as in Section 633, the BNG VM has been prepared as in Section 631, and the BNG has been started inside the VM.

633 Extra Preparations on the Compute Node

1 Do the following as a stack user

cd /home/stack/devstack
vi local.conf

2 Comment out the following

PHYSICAL_NETWORK=physnet1
OVS_PHYSICAL_BRIDGE=br-p1p1

And at the same time add the following line right below the previous commented ones

OVS_BRIDGE_MAPPINGS=default:br-p1p1,physnet1:br-p1p2

3 Run again as stack user

./unstack.sh

./stack.sh

This causes both physical interfaces to come up and get bound to the DPDK Also a bridge is created on top of each of these interfaces

ovs-vsctl show
b52bd3ed-0f6c-45b9-ace1-846d901bed64
    Bridge br-p1p1
        Port br-p1p1
            Interface br-p1p1
                type: internal
        Port p1p1
            Interface p1p1
                type: dpdkphy
                options: {port=0}
        Port phy-br-p1p1
            Interface phy-br-p1p1
                type: patch
                options: {peer=int-br-p1p1}
    Bridge br-int
        fail_mode: secure
        Port int-br-p1p2
            Interface int-br-p1p2
                type: patch
                options: {peer=phy-br-p1p2}
        Port int-br-p1p1
            Interface int-br-p1p1
                type: patch
                options: {peer=phy-br-p1p1}
        Port br-int
            Interface br-int
                type: internal
    Bridge br-p1p2
        Port phy-br-p1p2
            Interface phy-br-p1p2
                type: patch
                options: {peer=int-br-p1p2}
        Port p1p2
            Interface p1p2
                type: dpdkphy
                options: {port=1}
        Port br-p1p2
            Interface br-p1p2
                type: internal

4 Move the p1p2 physical port under the same bridge as p1p1

ovs-vsctl del-port p1p2
ovs-vsctl add-port br-p1p1 p1p2 -- set Interface p1p2 type=dpdkphy option:port=1

5 Stop the OpenStack agent

./rejoin-stack.sh
ctrl-a 1
ctrl-c
ctrl-a d

6 Add the dpdkvhost interfaces for the VM

ovs-vsctl --no-wait add-port br-p1p1 port3 -- set Interface port3 type=dpdkvhost ofport_request=3
ovs-vsctl --no-wait add-port br-p1p1 port4 -- set Interface port4 type=dpdkvhost ofport_request=4

7 Find out the OpenFlow port numbers of the interfaces

ovs-ofctl show br-p1p1

The output should be similar to the following. Note the number to the left of each interface name; it is the OpenFlow port number used in the flow rules below.

OFPT_FEATURES_REPLY (xid=0x2): dpid:0000286031010000
n_tables:254, n_buffers:256
capabilities: FLOW_STATS TABLE_STATS QUEUE_STATS ARP_MATCH_IP
actions: OUTPUT SET_VLAN_VID SET_VLAN_PCP STRIP_VLAN SET_DL_SRC SET_DL_DST SET_NW_SRC SET_NW_DST SET_TP_SRC SET_TP_DST
 1(phy-br-p1p1): addr:9e:ae:92:25:3c:c1
     config: 0   state: 0   speed: 0 Mbps now, 0 Mbps max
 2(p1p2): addr:9e:ae:92:25:3c:c1
     config: 0   state: 0   speed: 0 Mbps now, 0 Mbps max
 3(port3): addr:9e:ae:92:25:3c:c1
     config: 0   state: 0   speed: 0 Mbps now, 0 Mbps max
 4(port4): addr:49:04:ff:7f:00:00
     config: 0   state: 0   speed: 0 Mbps now, 0 Mbps max
 16(p1p1): addr:49:04:ff:7f:00:00
     config: 0   state: 0   speed: 0 Mbps now, 0 Mbps max
 LOCAL(br-p1p1): addr:9e:ae:92:25:3c:c1
     config: 0   state: 0   speed: 0 Mbps now, 0 Mbps max
OFPT_GET_CONFIG_REPLY (xid=0x4): frags=normal miss_send_len=0

8 Clean up the flow table of the bridge

ovs-ofctl del-flows br-p1p1

9 Program the flows so each physical interface forwards the packets to a dpdkvhost interface and the other way round

ovs-ofctl add-flow br-p1p1 in_port=16,dl_type=0x0800,idle_timeout=0,action=output:3
ovs-ofctl add-flow br-p1p1 in_port=3,dl_type=0x0800,idle_timeout=0,action=output:16
ovs-ofctl add-flow br-p1p1 in_port=4,dl_type=0x0800,idle_timeout=0,action=output:2
ovs-ofctl add-flow br-p1p1 in_port=2,dl_type=0x0800,idle_timeout=0,action=output:4
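
Before starting traffic, it is worth confirming that the four rules were installed as intended (standard ovs-ofctl usage, not an extra step from the original guide):

ovs-ofctl dump-flows br-p1p1
# Four flows should be listed, cross-connecting the physical ports (2, 16)
# with the dpdkvhost ports (3, 4); packet counters stay at 0 until traffic runs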


10 Users can now spawn their vBNG

qemu-kvm -cpu host -enable-kvm -m 4096 -smp 4,cores=4,threads=1,sockets=1 \
  -name VM1 -hda <path to the VM image file> \
  -mem-path /dev/hugepages -mem-prealloc -vnc :2 -daemonize \
  -net nic,model=virtio,macaddr=00:1e:77:68:09:fd \
  -net tap,ifname=tap1,script=no,downscript=no \
  -netdev type=tap,id=net1,script=no,downscript=no,ifname=port3,vhost=on \
  -device virtio-net-pci,netdev=net1,mac=00:00:01:00:00:01,csum=off,gso=off,guest_tso4=off,guest_tso6=off,guest_ecn=off \
  -netdev type=tap,id=net2,script=no,downscript=no,ifname=port4,vhost=on \
  -device virtio-net-pci,netdev=net2,mac=00:00:01:00:00:02,csum=off,gso=off,guest_tso4=off,guest_tso6=off,guest_ecn=off


Appendix A Additional OpenDaylight Information

This section describes how OpenDaylight can be used in a multi-node setup. Two hosts are used: one running OpenDaylight, the OpenStack controller + compute, and OVS; the second host is a compute node. This section describes how to create a VXLAN tunnel, create VMs, and ping from one VM to another.

Note: Due to a known defect in ODL (https://bugs.opendaylight.org/show_bug.cgi?id=2469), the multi-node setup could not be verified.

Following is a sample localconf for OpenDaylight host

[[local|localrc]]
FORCE=yes

HOST_NAME=<name of this machine>
HOST_IP=<ip of this machine>
HOST_IP_IFACE=<mgmt ip isolated from internet>

PUBLIC_INTERFACE=<isolated IP could be same as HOST_IP_IFACE>
VLAN_INTERFACE=
FLAT_INTERFACE=

ADMIN_PASSWORD=password
MYSQL_PASSWORD=password
DATABASE_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=no-token-password
HORIZON_PASSWORD=password
RABBIT_PASSWORD=password

disable_service n-net
disable_service n-cpu

enable_service q-svc
enable_service q-agt
enable_service q-dhcp
enable_service q-l3
enable_service q-meta
enable_service neutron
enable_service horizon

LOGFILE=$DEST/stack.sh.log
SCREEN_LOGDIR=$DEST/screen
SYSLOG=True
LOGDAYS=1

# ODL start
Q_HOST=$HOST_IP
enable_service odl-server
enable_service odl-compute
ODL_MGR_IP=10.11.10.7
ENABLED_SERVICES+=,n-api,n-crt,n-obj,n-cpu,n-cond,n-sch,n-novnc,n-cauth,n-cauth,nova
ENABLED_SERVICES+=,cinder,c-api,c-vol,c-sch,c-bak

Q_PLUGIN=ml2

Q_AGENT=openvswitch
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch,opendaylight
Q_ML2_PLUGIN_TYPE_DRIVERS=vlan,flat,local
Q_ML2_TENANT_NETWORK_TYPE=vxlan

ENABLE_TENANT_TUNNELS=True
ENABLE_TENANT_VLANS=True

PHYSICAL_NETWORK=physnet1
ML2_VLAN_RANGES=physnet1:1000:1010
OVS_PHYSICAL_BRIDGE=br-

MULTI_HOST=True

[[post-config|$NOVA_CONF]]

# disable nova security groups
[DEFAULT]
firewall_driver=nova.virt.firewall.NoopFirewallDriver
novncproxy_host=0.0.0.0
novncproxy_port=6080

Here is a sample localconf for Compute Node

[[local|localrc]]

FORCE=yes
MULTI_HOST=True

HOST_NAME=<name of this machine>
HOST_IP=<ip of this machine>
HOST_IP_IFACE=<isolated interface>
SERVICE_HOST_NAME=<name of the controller machine>
SERVICE_HOST=<ip of controller machine>
Q_HOST=$SERVICE_HOST

MYSQL_HOST=$SERVICE_HOST
RABBIT_HOST=$SERVICE_HOST

GLANCE_HOST=$SERVICE_HOST
GLANCE_HOSTPORT=$SERVICE_HOST:9292
KEYSTONE_AUTH_HOST=$SERVICE_HOST
KEYSTONE_SERVICE_HOST=<ip of controller machine>

ADMIN_PASSWORD=password
MYSQL_PASSWORD=password
DATABASE_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=no-token-password
HORIZON_PASSWORD=password
RABBIT_PASSWORD=password

disable_all_services

enable_service rabbit
enable_service n-cpu
enable_service q-agt
enable_service odl-compute

DEST=/opt/stack
LOGFILE=$DEST/stack.sh.log
SCREEN_LOGDIR=$DEST/screen
SYSLOG=True
LOGDAYS=1

ODL_MGR_IP=<ip of controller machine>

Q_PLUGIN=ml2
Q_AGENT=openvswitch
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch,opendaylight
Q_ML2_PLUGIN_TYPE_DRIVERS=vxlan
OVS_NUM_HUGEPAGES=8192
OVS_DATAPATH_TYPE=netdev

OVDK_OVS_GIT_TAG=

ENABLE_TENANT_TUNNELS=True
ENABLE_TENANT_VLANS=True

Q_ML2_TENANT_NETWORK_TYPE=vxlan
ML2_VLAN_RANGES=physnet1:1000:1010
PHYSICAL_NETWORK=physnet1
OVS_PHYSICAL_BRIDGE=br-p1p1

[[post-config|$NOVA_CONF]]
[DEFAULT]
firewall_driver=nova.virt.firewall.NoopFirewallDriver
vnc_enabled=True
vncserver_listen=0.0.0.0
vncserver_proxyclient_address=$HOST_IP

A1 Create VMs using DevStack Horizon GUI

After starting the OpenDaylight controller as described in Section 61 run a stack on the controller and compute nodes

Login to httpltcontrol node ip addressgt8080 to start horizon gui

Verify that the node shows up in the following GUI

Create a new Vxlan network

1 Click on the Networks tab

2 Click on the Create Network button

3 Enter the Network name then click Next


4 Enter the subnet information then click Next


5 Add additional information then click Next

6 Click the Create button

7 Create a VM instance by clicking the Launch Instances button


8 Click on the Details tab to enter VM details


9 Click on the Networking tab then enter network information

VMS will now be created

Instead of disabling the OVSDB neutron service by removing the OVSDB neutron bundle file it is possible to disable the bundle from the OSGi console However there does not appear to be a way to make this persistent so it must be done each time the controller restarts


Once the controller is up and running connect to the OSGi console The ss command displays all of the bundles that are installed and their status Adding a string(s) filters the list of bundles List the OVSDB bundles

osgi> ss ovs
Framework is launched

id    State      Bundle
106   ACTIVE     org.opendaylight.ovsdb.northbound_0.5.0
112   ACTIVE     org.opendaylight.ovsdb_0.5.0
262   ACTIVE     org.opendaylight.ovsdb.neutron_0.5.0

Note that there are three OVSDB bundles running (the OVSDB neutron bundle has not been removed in this case)

Disable the OVSDB neutron bundle and then list the OVSDB bundles again

osgi> stop 262
osgi> ss ovs
Framework is launched

id    State      Bundle
106   ACTIVE     org.opendaylight.ovsdb.northbound_0.5.0
112   ACTIVE     org.opendaylight.ovsdb_0.5.0
262   RESOLVED   org.opendaylight.ovsdb.neutron_0.5.0

Now the OVSDB neutron bundle is in the RESOLVED state which means that it is not active


Appendix B BNG as an Appliance

Please download the latest BNG application from https://01.org/intel-data-plane-performance-demonstrators/downloads. More details about how the BNG works can be found at https://01.org/intel-data-plane-performance-demonstrators/quick-overview.


Appendix C Glossary

Acronym     Description

ATR         Application Targeted Routing
COTS        Commercial Off-The-Shelf
DPI         Deep Packet Inspection
FCS         Frame Check Sequence
GRE         Generic Routing Encapsulation
GRO         Generic Receive Offload
IOMMU       Input/Output Memory Management Unit
Kpps        Kilo packets per second
KVM         Kernel-based Virtual Machine
LRO         Large Receive Offload
MSI         Message Signaled Interrupt
MPLS        Multi-Protocol Label Switching
Mpps        Million packets per second
NIC         Network Interface Card
pps         Packets per second
QAT         Quick Assist Technology
QinQ        VLAN stacking (802.1ad)
RA          Reference Architecture
RSC         Receive Side Coalescing
RSS         Receive Side Scaling
SP          Service Provider
SR-IOV      Single Root I/O Virtualization
TCO         Total Cost of Ownership
TSO         TCP Segmentation Offload


Appendix D References

Document Name                                      Source

Internet Protocol version 4                        http://www.ietf.org/rfc/rfc791.txt

Internet Protocol version 6                        http://www.faqs.org/rfc/rfc2460.txt

Intel® 82599 10 Gigabit Ethernet Controller        http://www.intel.com/content/www/us/en/ethernet-controllers/82599-10-gbe-controller-datasheet.html
Datasheet

Intel DDIO                                         https://www-ssl.intel.com/content/www/us/en/io/direct-data-i-o.html

Bandwidth Sharing Fairness                         http://www.intel.com/content/www/us/en/network-adapters/10-gigabit-network-adapters/10-gbe-ethernet-flexible-port-partitioning-brief.html

Design Considerations for efficient network        http://download.intel.com/design/intarch/papers/324176.pdf
applications with Intel® multi-core
processor-based systems on Linux

OpenFlow with Intel 82599                          http://ftp.sunet.se/pub/Linux/distributions/bifrost/seminars/workshop-2011-03-31/Openflow_1103031.pdf

Wu, W., DeMar, P. & Crawford, M. (2012),           http://lss.fnal.gov/archive/2010/pub/fermilab-pub-10-327-cd.pdf
A Transport-Friendly NIC for Multicore/
Multiprocessor Systems, IEEE Transactions on
Parallel and Distributed Systems, vol. 23,
no. 4, April 2012

Why does Flow Director Cause Packet Reordering?    http://arxiv.org/ftp/arxiv/papers/1106/1106.0443.pdf

IA packet processing                               http://www.intel.com/p/en_US/embedded/hwsw/technology/packet-processing

High Performance Packet Processing on Cloud        http://networkbuilders.intel.com/docs/network_builders_RA_packet_processing.pdf
Platforms using Linux with Intel® Architecture

Packet Processing Performance of Virtualized       http://networkbuilders.intel.com/docs/network_builders_RA_NFV.pdf
Platforms with Linux and Intel® Architecture

DPDK                                               http://www.intel.com/go/dpdk

Intel® DPDK Accelerated vSwitch                    https://01.org/packet-processing


LEGAL

By using this document in addition to any agreements you have with Intel you accept the terms set forth below

You may not use or facilitate the use of this document in connection with any infringement or other legal analysis concerning Intel products described herein You agree to grant Intel a non-exclusive royalty-free license to any patent claim thereafter drafted which includes subject matter disclosed herein

INFORMATION IN THIS DOCUMENT IS PROVIDED IN CONNECTION WITH INTEL PRODUCTS. NO LICENSE, EXPRESS OR IMPLIED, BY ESTOPPEL OR OTHERWISE, TO ANY INTELLECTUAL PROPERTY RIGHTS IS GRANTED BY THIS DOCUMENT. EXCEPT AS PROVIDED IN INTEL'S TERMS AND CONDITIONS OF SALE FOR SUCH PRODUCTS, INTEL ASSUMES NO LIABILITY WHATSOEVER AND INTEL DISCLAIMS ANY EXPRESS OR IMPLIED WARRANTY RELATING TO SALE AND/OR USE OF INTEL PRODUCTS, INCLUDING LIABILITY OR WARRANTIES RELATING TO FITNESS FOR A PARTICULAR PURPOSE, MERCHANTABILITY, OR INFRINGEMENT OF ANY PATENT, COPYRIGHT, OR OTHER INTELLECTUAL PROPERTY RIGHT.

Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors

Performance tests such as SYSmark and MobileMark are measured using specific computer systems components software operations and functions Any change to any of those factors may cause the results to vary You should consult other information and performance tests to assist you in fully evaluating your contemplated purchases including the performance of that product when combined with other products

The products described in this document may contain design defects or errors known as errata which may cause the product to deviate from published specifications Current characterized errata are available on request Contact your local Intel sales office or your distributor to obtain the latest specifications and before placing your product order

Intel technologies may require enabled hardware specific software or services activation Check with your system manufacturer or retailer Tests document performance of components on a particular test in specific systems Differences in hardware software or configuration will affect actual performance Consult other sources of information to evaluate performance as you consider your purchase For more complete information about performance and benchmark results visit httpwwwintelcomperformance

All products computer systems dates and figures specified are preliminary based on current expectations and are subject to change without notice Results have been estimated or simulated using internal Intel analysis or architecture simulation or modeling and provided to you for informational purposes Any differences in your system hardware software or configuration may affect your actual performance

No computer system can be absolutely secure Intel does not assume any liability for lost or stolen data or systems or any damages resulting from such losses

Intel does not control or audit third-party web sites referenced in this document You should visit the referenced web site and confirm whether referenced data are accurate

Intel Corporation may have patents or pending patent applications trademarks copyrights or other intellectual property rights that relate to the presented subject matter The furnishing of documents and other materials and information does not provide any license express or implied by estoppel or otherwise to any such patents trademarks copyrights or other intellectual property rights

© 2014 Intel Corporation. All rights reserved. Intel, the Intel logo, Core, Xeon, and others are trademarks of Intel Corporation in the U.S. and/or other countries. Other names and brands may be claimed as the property of others.


5.1.2 Operating System Installation and Configuration

Following are some generic instructions for installing and configuring the operating system. Other ways of installing the operating system, such as network installation, PXE boot installation, or USB key installation, are not described in this solutions guide.

5121 Getting the Fedora 20 DVD

1 Download the 64-bit Fedora 20 DVD (not Fedora 20 Live Media) from the following site

http://fedoraproject.org/en/get-fedora#formats

or from direct URL

http://download.fedoraproject.org/pub/fedora/linux/releases/20/Fedora/x86_64/iso/Fedora-20-x86_64-DVD.iso

2 Burn the ISO file to DVD and create an installation disk

5122 Fedora 20 Installation

Use the DVD to install Fedora 20 During the installation click Software selection then choose the following

1 C Development Tool and Libraries

2 Development Tools

Also create a user stack and check the box Make this user administrator during the installation The user stack is used in OpenStack installation

Note: Please make sure to download and use the onps_server_1_2.tar.gz tarball. Start with the README file. You'll get instructions on how to use Intel's scripts to automate most of the installation steps described in this section, which saves you time. When using Intel's scripts, you can jump to Section 5.4 after installing OpenDaylight based on the instructions described in Section 6.2.1.

5123 Additional Packages Installation and Upgrade

Some packages are not installed with the standard Fedora 20 installation but are required by Intelreg Open Network Platform Software (ONPS) components These packages should be installed by the user

git ntp patch socat python-passlib libxslt-devel libffi-devel fuse-devel gluster python-cliff
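As a convenience, the packages listed above can be installed in one pass with yum. This is only a sketch and uses exactly the package names listed above for Fedora 20:

yum install -y git ntp patch socat python-passlib libxslt-devel libffi-devel fuse-devel gluster python-cliff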

ONPS supports Fedora kernel 3.15.6, which is newer than the native Fedora 20 kernel 3.11.10. To upgrade to 3.15.6, follow these steps:

1 Download kernel packages

wget https://kojipkgs.fedoraproject.org/packages/kernel/3.15.6/200.fc20/x86_64/kernel-3.15.6-200.fc20.x86_64.rpm

wget https://kojipkgs.fedoraproject.org/packages/kernel/3.15.6/200.fc20/x86_64/kernel-devel-3.15.6-200.fc20.x86_64.rpm

wget https://kojipkgs.fedoraproject.org/packages/kernel/3.15.6/200.fc20/x86_64/kernel-modules-extra-3.15.6-200.fc20.x86_64.rpm


2 Install kernel packages

rpm -i kernel-3.15.6-200.fc20.x86_64.rpm
rpm -i kernel-devel-3.15.6-200.fc20.x86_64.rpm
rpm -i kernel-modules-extra-3.15.6-200.fc20.x86_64.rpm

3 Reboot system to allow booting into 3156 kernel

Note ONPS depends on libraries provided by your Linux distribution As such it is recommended that you regularly update your Linux distribution with the latest bug fixes and security patches to reduce the risk of security vulnerabilities in your systems

After installing the required packages the operating system should be updated with the following command

yum update -y

This command also upgrades to the latest kernel that Fedora supports. In order to keep the kernel at version 3.15.6, the yum configuration file needs to be modified with the following command before running yum update:

echo "exclude=kernel" >> /etc/yum.conf

After the update completes the system needs to be rebooted

5124 Disable and Enable Services

For OpenStack the following services were disabled selinux firewall and NetworkManager Run the following commands

sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
systemctl disable firewalld.service
systemctl disable NetworkManager.service

The following services should be enabled ntp sshd and network Run the following commands

systemctl enable ntpd.service
systemctl enable ntpdate.service
systemctl enable sshd.service
chkconfig network on

It is important to keep the timing synchronized between all nodes, and it is also necessary to use a known NTP server for all nodes. Users can edit /etc/ntp.conf to add a new server and remove the default servers. The following example replaces a default NTP server with a local NTP server 10.0.0.12 and comments out the other default servers:

sed -i 's/server 0.fedora.pool.ntp.org iburst/server 10.0.0.12/g' /etc/ntp.conf
sed -i 's/server 1.fedora.pool.ntp.org iburst/#server 1.fedora.pool.ntp.org iburst/g' /etc/ntp.conf
sed -i 's/server 2.fedora.pool.ntp.org iburst/#server 2.fedora.pool.ntp.org iburst/g' /etc/ntp.conf
sed -i 's/server 3.fedora.pool.ntp.org iburst/#server 3.fedora.pool.ntp.org iburst/g' /etc/ntp.conf
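After editing /etc/ntp.conf, a quick way to apply and check the configuration is to restart the NTP service and query its peers. This is only a sketch; 10.0.0.12 is the example server used above:

systemctl restart ntpd.service
ntpq -p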


5.2 Controller Node Setup

This section describes the controller node setup. It is assumed that the user successfully followed the operating system installation and configuration sections.

Note: Make sure to download and use the onps_server_1_2.tar.gz tarball. Start with the README file. You'll get instructions on how to use Intel's scripts to automate most of the installation steps described in this section, which saves you time.

5.2.1 OpenStack (Juno)

This section documents features and limitations that are supported with the Intel® DPDK Accelerated vSwitch and OpenStack Juno.

5211 Network Requirements

General

At least two networks are required to build OpenStack infrastructure in a lab environment One network is used to connect all nodes for OpenStack management (management network) and the other one is a private network exclusively for an OpenStack internal connection (tenant network) between instances (or virtual machines)

One additional network is required for Internet connectivity as installing OpenStack requires pulling packages from various sourcesrepositories on the Internet

Some users might want to have Internet andor external connectivity for OpenStack instances (virtual machines) In this case an optional network can be used

The assumption is that the targeting OpenStack infrastructure contains multiple nodes one is controller node and one or more are compute node(s)

Network Configuration Example

The following is an example of how to configure networks for OpenStack infrastructure The example uses four network interfaces as follows

• ens2f1: For Internet network. Used to pull all necessary packages/patches from repositories on the Internet; configured to obtain a DHCP address.

• ens2f0: For Management network. Used to connect all nodes for OpenStack management; configured to use network 10.11.0.0/16.

• p1p1: For Tenant network. Used for OpenStack internal connections for virtual machines; configured with no IP address.

• p1p2: For Optional External network. Used for virtual machine Internet/external connectivity; configured with no IP address. This interface is only in the Controller node if an external network is configured. For a Compute node this interface is not needed.

Note that among these interfaces interface for virtual network (in this example p1p1) must be an 82599 port because it is used for DPDK and Intelreg DPDK Accelerated vSwitch Also note that a static IP address should be used for interface of management network

In Fedora 20 the network configuration files are located at

/etc/sysconfig/network-scripts


To configure a network on the host system edit the following network configuration files

ifcfg-ens2f1:
DEVICE=ens2f1
TYPE=Ethernet
ONBOOT=yes
BOOTPROTO=dhcp

ifcfg-ens2f0:
DEVICE=ens2f0
TYPE=Ethernet
ONBOOT=yes
BOOTPROTO=static
IPADDR=10.11.12.11
NETMASK=255.255.0.0

ifcfg-p1p1:
DEVICE=p1p1
TYPE=Ethernet
ONBOOT=yes
BOOTPROTO=none

ifcfg-p1p2:
DEVICE=p1p2
TYPE=Ethernet
ONBOOT=yes
BOOTPROTO=none

Note Do not configure the IP address for p1p1 (10 Gbs interface) otherwise DPDK does not work when binding the driver during OpenStack Neutron installation

Note: 10.11.12.11 and 255.255.0.0 are the static IP address and netmask for the management network. It is necessary to have a static IP address on this subnet. The IP address 10.11.12.11 is just an example.

5212 Storage Requirements

By default, DevStack uses block storage (Cinder) with a volume group stack-volumes. If not specified, stack-volumes is created with 10 GB of space from a local file system. Note that stack-volumes is the name of the volume group, not a single volume.

The following example shows how to use spare local disks devsdb and devsdc to form stack-volumes on a controller node by running the following commands

pvcreate /dev/sdb
pvcreate /dev/sdc
vgcreate stack-volumes /dev/sdb /dev/sdc

5213 OpenStack Installation Procedures

General

DevStack is used to deploy OpenStack in this example The following procedure uses an actual example of an installation performed in an Intel test lab consisting of one controller node (controller) and one compute node (compute)

Controller Node Installation Procedures

The following example uses a host for controller node installation with the following

• Hostname: sdnlab-k01

• Internet network IP address: Obtained from DHCP server


• OpenStack Management IP address: 10.11.1.21

• User/password: stack/stack

Root User Actions

Login as su or root user and perform the following

1 Add stack user to sudoer list

echo "stack ALL=(ALL) NOPASSWD: ALL" >> /etc/sudoers

2 Edit etclibvirtqemuconf add or modify with the following lines

cgroup_controllers = [ "cpu", "devices", "memory", "blkio", "cpuset", "cpuacct" ]

cgroup_device_acl = [ "/dev/null", "/dev/full", "/dev/zero", "/dev/random", "/dev/urandom", "/dev/ptmx", "/dev/kvm", "/dev/kqemu", "/dev/rtc", "/dev/hpet", "/dev/net/tun", "/mnt/huge", "/dev/vhost-net" ]

hugetlbfs_mount = "/mnt/huge"

3 Restart the libvirt service and make sure libvirtd is active:

systemctl restart libvirtd.service
systemctl status libvirtd.service

Stack User Actions

1 Login as a stack user

2 Configure the appropriate proxies (yum, http, https, and git) for package installation and make sure these proxies are functional. Note that on the controller node, localhost and its IP address should be included in the no_proxy setup (for example, export no_proxy=localhost,10.11.1.21).

3 Intelreg DPDK Accelerated vSwitch patches for OpenStack

The tar file openstack_ovdkl02-907zip contains necessary patches for OpenStack Currently it is not native to the OpenStack The file can be downloaded from

https01orgsitesdefaultfilespageopenstack_ovdkl02-907zip

Place the file in the /home/stack directory and unzip it. Three patch files (devstack.patch, nova.patch, and neutron.patch) will be present after unzipping.

cd homestack wget https01orgsitesdefaultfilespageopenstack_ovdkl02-907zip unzip openstack_ovdkl02-907zip

4 Download DevStack source

git clone https://github.com/openstack-dev/devstack.git

5 Check out DevStack with Intelreg DPDK Accelerated vSwitch and patch

cd /home/stack/devstack
git checkout d6f700db33aeab68916156a98971aef8cfa53a2e
patch -p1 < /home/stack/devstack.patch


6 Download and patch Nova and Neutron

sudo mkdir /opt/stack
sudo chown stack:stack /opt/stack
cd /opt/stack
git clone https://github.com/openstack/nova.git
git clone https://github.com/openstack/neutron.git
cd /opt/stack/nova
git checkout b7738bfb6c2f271d047e8f20c0b74ef647367111
patch -p1 < /home/stack/nova.patch

7 Create localconf file in homestackdevstack

8 Pay attention to the following in the localconf file

a Use Rabbit for messaging services (Rabbit is on by default) In the past Fedora only supported QPID for OpenStack Now it only supports Rabbit

b Explicitly disable Nova compute service on the controller This is because by default Nova compute service is enabled

disable_service n-cpu

c To use Open vSwitch specify in configuration for ML2 plug-in

Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch

d Explicitly disable tenant tunneling and enable tenant VLAN This is because by default tunneling is used

ENABLE_TENANT_TUNNELS=FalseENABLE_TENANT_VLANS=True

e A sample localconf file for the controller node is as follows:

Controller node:
[[local|localrc]]

FORCE=yes
ADMIN_PASSWORD=password
MYSQL_PASSWORD=password
DATABASE_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=no-token-password
HORIZON_PASSWORD=password
RABBIT_PASSWORD=password

disable_service n-net
disable_service n-cpu
enable_service q-svc
enable_service q-agt
enable_service q-dhcp
enable_service q-l3
enable_service q-meta
enable_service neutron
enable_service horizon

DEST=/opt/stack
LOGFILE=$DEST/stack.sh.log
SCREEN_LOGDIR=$DEST/screen
SYSLOG=True
LOGDAYS=1

HOST_IP_IFACE=ens2f0
PUBLIC_INTERFACE=p1p2
VLAN_INTERFACE=p1p1
FLAT_INTERFACE=p1p1


Q_AGENT=openvswitch
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch
Q_ML2_PLUGIN_TYPE_DRIVERS=vlan,flat,local

ENABLE_TENANT_TUNNELS=False
ENABLE_TENANT_VLANS=True
PHYSICAL_NETWORK=physnet1
ML2_VLAN_RANGES=physnet1:1000:1010
OVS_PHYSICAL_BRIDGE=br-p1p1
MULTI_HOST=True

[[post-config|$NOVA_CONF]]
[DEFAULT]
firewall_driver=nova.virt.firewall.NoopFirewallDriver
novncproxy_host=0.0.0.0
novncproxy_port=6080

9 Install DevStack

cd homestackdevstackstacksh

10 For a successful installation the following shows at the end of screen output

stacksh completed in XXX seconds

where XXX is the number of seconds

11 For controller node only: Add physical port(s) to the bridge(s) created by the DevStack installation. The following example can be used to configure the two bridges br-p1p1 (for virtual network) and br-ex (for external network):

sudo ovs-vsctl add-port br-p1p1 p1p1
sudo ovs-vsctl add-port br-ex p1p2

12 Make sure proper VLANs are created in the switch connecting physical port p1p1 For example the previous localconf specifies VLAN range of 1000-1010 therefore matching VLANs 1000 to 1010 should be configured in the switch
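As an optional sanity check after stack.sh completes, the OpenStack services can be queried from the controller. This is a sketch that assumes the openrc file created by DevStack and the admin credentials defined in the localconf above:

cd /home/stack/devstack
source openrc admin admin
nova service-list
neutron agent-list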


5.3 Compute Node Setup

This section describes how to complete the setup of the compute nodes. It is assumed that the user has successfully completed the BIOS settings and operating system installation and configuration sections.

Note: Please make sure to download and use the onps_server_1_2.tar.gz tarball. Start with the README file. You'll get instructions on how to use Intel's scripts to automate most of the installation steps described in this section, which saves you time.

531 Host Configuration

5311 Using DevStack to Deploy vSwitch and OpenStack Components

General

Deploying OpenStack and Intelreg DPDK Accelerated vSwitch using DevStack on a compute node follows the same procedures as on the controller node Differences include

• Required services are nova compute, neutron agent, and Rabbit.

• Intel® DPDK Accelerated vSwitch is used in place of Open vSwitch for the neutron agent.

Compute Node Installation Example

The following example uses a host for compute node installation with the following

• Hostname: sdnlab-k02

• Lab network IP address: Obtained from DHCP server

• OpenStack Management IP address: 10.11.1.22

• User/password: stack/stack

Note the following

• No_proxy setup: Localhost and its IP address should be included in the no_proxy setup. In addition, the hostname and IP address of the controller node should also be included. For example:

export no_proxy=localhost,10.11.1.22,sdnlab-k01,10.11.1.21

• Differences in the localconf file:

- The service host is the controller, as are other OpenStack servers such as MySQL, Rabbit, Keystone, and Image. Therefore they should be spelled out. Using the controller node example in the previous section, the service host and its IP address should be:

SERVICE_HOST_NAME=sdnlab-k01
SERVICE_HOST=10.11.1.21

- The only OpenStack services required in compute nodes are messaging, nova compute, and neutron agent, so the localconf might look like:

disable_all_services
enable_service rabbit
enable_service n-cpu
enable_service q-agt


- The user has the option to use ovdk or openvswitch for the neutron agent:

Q_AGENT=ovdk

or

Q_AGENT=openvswitch

Note: For openvswitch, the user can specify regular or accelerated Open vSwitch (accelerated OVS). If accelerated OVS is used, the following setting should be added:

OVS_DATAPATH_TYPE=netdev

Note: If both are specified in the same localconf file, the latter one overwrites the previous one.

- For the OVDK and accelerated OVS huge pages setting, specify the number of huge pages to be allocated and the mounting point (default is /mnt/huge):

OVDK_NUM_HUGEPAGES=8192

or

OVS_NUM_HUGEPAGES=8192

- For this version, Intel uses specific versions of OVDK or accelerated OVS from their respective repositories. Specify the following in the localconf file if OVDK or accelerated OVS is used:

OVDK_GIT_TAG=6210bb0a6139b20283de115f87aa7a381b04670fOVS_GIT_TAG=b35839f3855e3b812709c6ad1c9278f498aa9935

- Binding the physical port to the bridge is done through the following line in localconf. For example, to bind port p1p1 to bridge br-p1p1, use:

OVS_PHYSICAL_BRIDGE=br-p1p1

- A sample localconf file for a compute node with the ovdk agent follows:

Compute node:
[[local|localrc]]

FORCE=yes
MULTI_HOST=True

HOST_NAME=$(hostname)
HOST_IP=10.11.1.22
HOST_IP_IFACE=ens2f0
SERVICE_HOST_NAME=10.11.1.21
SERVICE_HOST=10.11.1.21

MYSQL_HOST=$SERVICE_HOST
RABBIT_HOST=$SERVICE_HOST

GLANCE_HOST=$SERVICE_HOST
GLANCE_HOSTPORT=$SERVICE_HOST:9292
KEYSTONE_AUTH_HOST=$SERVICE_HOST
KEYSTONE_SERVICE_HOST=$SERVICE_HOST

ADMIN_PASSWORD=password
MYSQL_PASSWORD=password
DATABASE_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=no-token-password
HORIZON_PASSWORD=password
RABBIT_PASSWORD=password

disable_all_services
enable_service rabbit


enable_service n-cpu
enable_service q-agt

DEST=/opt/stack
LOGFILE=$DEST/stack.sh.log
SCREEN_LOGDIR=$DEST/screen
SYSLOG=True
LOGDAYS=1

Q_AGENT=ovdk
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch
Q_ML2_PLUGIN_TYPE_DRIVERS=vlan
OVDK_NUM_HUGEPAGES=8192
OVDK_GIT_TAG=6210bb0a6139b20283de115f87aa7a381b04670f

ENABLE_TENANT_TUNNELS=False
ENABLE_TENANT_VLANS=True
ML2_VLAN_RANGES=physnet1:1000:1010
PHYSICAL_NETWORK=physnet1
OVS_PHYSICAL_BRIDGE=br-p1p1

[[post-config|$NOVA_CONF]]
[DEFAULT]
firewall_driver=nova.virt.firewall.NoopFirewallDriver
vnc_enabled=True
vncserver_listen=0.0.0.0
vncserver_proxyclient_address=$HOST_IP

- A sample localconf file for a compute node with the accelerated OVS agent follows:

Compute node:
[[local|localrc]]

FORCE=yes
MULTI_HOST=True

HOST_NAME=$(hostname)
HOST_IP=10.11.1.22
HOST_IP_IFACE=ens2f0

SERVICE_HOST_NAME=sdnlab-k01
SERVICE_HOST=10.11.1.21

MYSQL_HOST=$SERVICE_HOST
RABBIT_HOST=$SERVICE_HOST

GLANCE_HOST=$SERVICE_HOST
GLANCE_HOSTPORT=$SERVICE_HOST:9292
KEYSTONE_AUTH_HOST=$SERVICE_HOST
KEYSTONE_SERVICE_HOST=$SERVICE_HOST

ADMIN_PASSWORD=password
MYSQL_PASSWORD=password
DATABASE_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=no-token-password
HORIZON_PASSWORD=password
RABBIT_PASSWORD=password

disable_all_services
enable_service rabbit
enable_service n-cpu
enable_service q-agt

DEST=/opt/stack


LOGFILE=$DEST/stack.sh.log
SCREEN_LOGDIR=$DEST/screen
SYSLOG=True
LOGDAYS=1

Q_AGENT=openvswitch
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch
Q_ML2_PLUGIN_TYPE_DRIVERS=vlan
OVS_NUM_HUGEPAGES=8192
OVS_DATAPATH_TYPE=netdev
OVS_GIT_TAG=b35839f3855e3b812709c6ad1c9278f498aa9935

ENABLE_TENANT_TUNNELS=False
ENABLE_TENANT_VLANS=True
ML2_VLAN_RANGES=physnet1:1000:1010
PHYSICAL_NETWORK=physnet1
OVS_PHYSICAL_BRIDGE=br-p1p1

[[post-config|$NOVA_CONF]]
[DEFAULT]
firewall_driver=nova.virt.firewall.NoopFirewallDriver
vnc_enabled=True
vncserver_listen=0.0.0.0
vncserver_proxyclient_address=$HOST_IP

5.4 vIPS

The vIPS used is Suricata, which should be installed in a VM as an RPM package as previously described. In order to configure it to run in inline mode (IPS), use the following:

1 Turn on IP forwarding

sysctl -w netipv4ip_forward=1

2 Mangle all traffic from one vPort to the other using a netfilter queue

iptables -I FORWARD -i eth1 -o eth2 -j NFQUEUE
iptables -I FORWARD -i eth2 -o eth1 -j NFQUEUE

3 Have Suricata run in inline mode using the netfilter queue

suricata -c etcsuricatasuricatayaml -q 0

4 Enable ARP proxying

echo 1 > /proc/sys/net/ipv4/conf/eth1/proxy_arp
echo 1 > /proc/sys/net/ipv4/conf/eth2/proxy_arp
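To confirm that the NFQUEUE rules added in step 2 are actually matching traffic, the FORWARD chain counters can be inspected from inside the vIPS VM. This is a sketch; the packet counts shown depend on the traffic generated:

iptables -L FORWARD -v -n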

5.4.1 Network Configuration for non-vIPS Guests

1 Turn on IP forwarding:

sysctl -w netipv4ip_forward=1

2 In the source add the route to the sink

route add -net 192.168.200.0/24 dev eth1

3 At the sink add the route to the source

route add -net 192.168.100.0/24 dev eth1
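A simple way to exercise the path through the vIPS is to ping from the source VM to the sink VM once the routes above are in place. The address below is illustrative (a host on the 192.168.200.0/24 sink subnet); substitute the address actually assigned by the DHCP server:

# From the source VM (192.168.100.0/24 side):
ping -c 4 192.168.200.10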


60 Testing the Setup

This section describes how to bring up the VMs in a compute node connect them to the virtual network(s) verify the functionality

Note Currently it is not possible to have more than one virtual network in a multi-compute node setup Although it is possible to have more than one virtual network in a single compute node setup

61 Preparation with OpenStack

611 Deploying Virtual Machines

6111 Default Settings

OpenStack comes with the following default settings

• Tenant (Project): admin, demo

• Network:

  - Private network (virtual network): 10.0.0.0/24

  - Public network (external network): 172.24.4.0/24

• Image: cirros-0.3.1-x86_64

• Flavor: nano, micro, tiny, small, medium, large, xlarge

To deploy new instances (VMs) with different setups (such as a different VM image flavor or network) users must create their own See below for details of how to create them

To access the OpenStack dashboard use a web browser (Firefox Internet Explorer or others) and the controllers IP address (management network) For example

http1011121

Login information is defined in the localconf file In the examples that follow password is the password for both admin and demo users


6112 Customer Settings

The following examples describe how to create a custom VM image flavor and aggregateavailability zone using OpenStack commands The examples assume the IP address of the controller is 1011121

1 Create a credential file admin-cred for admin user The file contains the following lines

export OS_USERNAME=admin
export OS_TENANT_NAME=admin
export OS_PASSWORD=password
export OS_AUTH_URL=http://10.11.1.21:35357/v2.0

2 Source admin-cred to the shell environment for actions of creating glance image aggregateavailability zone and flavor

source admin-cred

3 Create an OpenStack glance image A VM image file should be ready in a location accessible by OpenStack

glance image-create --name <image-name-to-create> --is-public=true --container-format=bare --disk-format=<format> --file=<image-file-path-name>

The following example shows the image file fedora20-x86_64-basicqcow2 is located in a NFS share and mounted at mntnfsopenstackimages to the controller host The following command creates a glance image named fedora-basic with qcow2 format for public use (such as any tenant can use this glance image)

glance image-create --name fedora-basic --is-public=true --container-format=bare --disk-format=qcow2 --file=mntnfsopenstackimagesfedora20-x86_64-basicqcow2

4 Create host aggregate and availability zone

First find out the available hypervisors and then use the information for creating aggregateavailability zone

nova hypervisor-listnova aggregate-create ltaggregate-namegt ltzone-namegtnova aggregate-add-host ltaggregate-namegt lthypervisor-namegt

The following example creates an aggregate named aggr-g06 with one availability zone named zone-g06 and the aggregate contains one hypervisor named sdnlab-g06

nova aggregate-create aggr-g06 zone-g06nova aggregate-add-host aggr-g06 sdnlab-g06

5 Create flavor Flavor is a virtual hardware configuration for the VMs it defines the number of virtual CPUs size of virtual memory and disk space among others

The following command creates a flavor named onps-flavor with an ID of 1001 1024 Mb virtual memory 4 Gb virtual disk space and 1 virtual CPU

nova flavor-create onps-flavor 1001 1024 4 1


6.1.1.3 Example: VM Deployment

The following example describes how to use a customer VM image flavor and aggregate to launch a VM for a demo Tenant using OpenStack commands Again the example assumes the IP address of the controller is 1011121

1 Create a credential file demo-cred for a demo user The file contains the following lines

export OS_USERNAME=demo
export OS_TENANT_NAME=demo
export OS_PASSWORD=password
export OS_AUTH_URL=http://10.11.1.21:35357/v2.0

2 Source demo-cred to the shell environment for actions of creating tenant network and instance (VM)

source demo-cred

3 Create network for tenant demo Take the following steps

a Get tenant demo

keystone tenant-list | grep -Fw demo

The following creates a network with a name of net-demo for tenant with ID 10618268adb64f17b266fd8fb83c960d

neutron net-create --tenant-id 10618268adb64f17b266fd8fb83c960d net-demo

b Create subnet

neutron subnet-create --tenant-id ltdemo-tenant-idgt --name ltsubnet_namegt ltnetwork-namegt ltnet-ip-rangegt

The following creates a subnet with a name of sub-demo and CIDR address 1921682024for network net-demo

neutron subnet-create --tenant-id 10618268adb64f17b266fd8fb83c960d --name sub-demo net-demo 1921682024

4 Create instance (VM) for tenant demo Take the following steps

a Get the name andor ID of the image flavor and availability zone to be used for creating instance

glance image-list
nova flavor-list
nova aggregate-list
neutron net-list

b Launch an instance (VM) using information obtained from the previous step (a complete example follows at the end of this section):

nova boot --image <image-id> --flavor <flavor-id> --availability-zone <zone-name> --nic net-id=<network-id> <instance-name>

c The new VM should be up and running in a few minutes

5 Log into the OpenStack dashboard using the demo user credential click Instances under Project in the left pane the new VM should show in the right pane Click instance name to open Instance Details view then click Console in the top menu to access the VM as follows
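Putting steps 3 and 4 together, a complete launch using the image, flavor, zone, and network created earlier might look like the following. This is only a sketch; demo-vm1 is an illustrative instance name and the network ID is looked up from the net-demo network created above:

source demo-cred
NET_ID=$(neutron net-list | awk '/ net-demo / {print $2}')
nova boot --image fedora-basic --flavor onps-flavor --availability-zone zone-g06 \
  --nic net-id=$NET_ID demo-vm1
nova list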


6114 Local vIPS

Configuration

1 OpenStack brings up the VMs and connects them to the vSwitch

2 IP addresses of the VMs get configured using the DHCP server VM1 belongs to one subnet and VM3 to a different one VM2 has ports on both subnets

3 Flows get programmed to the vSwitch by the OpenDaylight controller (Section 62)

Data Path (Numbers Matching Red Circles)

1 VM1 sends a flow to VM3 through the vSwitch

2 The vSwitch forwards the flow to the first vPort of VM2 (active IPS)

3 The IPS receives the flow inspects it and (if not malicious) sends it out through its second vPort

4 The vSwitch forwards it to VM3

Figure 6-1 Local vIPS


6115 Remote vIPS

Configuration

1 OpenStack brings up the VMs and connects them to the vSwitch

2 The IP addresses of the VMs get configured using the DHCP server

Data Path (Numbers Matching Red Circles)

1 VM1 sends a flow to VM3 through the vSwitch inside compute node 1

2 The vSwitch forwards the flow out of the first 82599 port to the first 82599 port of compute node 2

3 The vSwitch of compute node 2 forwards the flow to the first port of the vHost where the traffic gets consumed by VM1

4 The IPS receives the flow inspects it and (provided it is not malicious) sends it out through its second port of the vHost into the vSwitch of compute node 2

5 The vSwitch forwards the flow out of the second 82599 port of compute node 2 into the second port of the 82599 in compute node 1

6 The vSwitch of compute node 1 forwards the flow into the port of the vHost of VM3 where the flow gets terminated

Figure 6-2 Remote vIPS


612 Non-uniform Memory Access (Numa) Placement and SR-IOV Pass-through for OpenStack

NUMA was implemented as a new feature in the OpenStack Juno release. NUMA placement enables an OpenStack administrator to pin guest systems to particular NUMA nodes for optimization. With an SR-IOV enabled network interface card, each SR-IOV port is associated with a Virtual Function (VF). OpenStack SR-IOV pass-through enables a guest to access a VF directly.

6121 Prepare Compute Node for SR-IOV Pass-through

To enable the previous features follow these steps to configure compute node

1 The server hardware support IOMMU or Intel VT-d To check whether IOMMU is supported run the command and the output should show IOMMU entries

dmesg | grep -e IOMMU

Note: IOMMU can be enabled/disabled through a BIOS setting under Advanced and then Processor.

2 Enable kernel IOMMU in grub For Fedora 20 run commands

sed -i 's/rhgb quiet/rhgb quiet intel_iommu=on/g' /etc/default/grub
grub2-mkconfig -o /boot/grub2/grub.cfg

3 Install necessary packages

yum install -y yajl-devel device-mapper-devel libpciaccess-devel libnl-devel dbus-devel numactl-devel python-devel

4 Install Libvirt to v128 or newer The following example uses v129

systemctl stop libvirtd

yum remove libvirt
yum remove libvirtd

wget http://libvirt.org/sources/libvirt-1.2.9.tar.gz
tar zxvf libvirt-1.2.9.tar.gz

cd libvirt-1.2.9
./autogen.sh --system --with-dbus
make
make install

systemctl start libvirtd

Make sure libvirtd is running v129

libvirtd --version

5 Install libvirt-python Example below uses v129 to match libvirt version

yum remove libvirt-python

wget https://pypi.python.org/packages/source/l/libvirt-python/libvirt-python-1.2.9.tar.gz
tar zxvf libvirt-python-1.2.9.tar.gz

cd libvirt-python-1.2.9
python setup.py install


6 Modify etclibvirtqemuconf add

/dev/vfio/vfio

to

cgroup_device_acl list

An example follows

cgroup_device_acl = [ "/dev/null", "/dev/full", "/dev/zero", "/dev/random", "/dev/urandom", "/dev/ptmx", "/dev/kvm", "/dev/kqemu", "/dev/rtc", "/dev/hpet", "/dev/net/tun", "/dev/vfio/vfio" ]

7 Enable the SR-IOV virtual function for an 82599 interface The following example enables 2 VFs for interface p1p1

echo 2 > /sys/class/net/p1p1/device/sriov_numvfs

To check that virtual functions are enabled

lspci -nn | grep 82599

The screen output should display the physical function and two virtual functions
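The following quick checks, run on the compute node, can help confirm that steps 4 through 7 took effect. This is only a sketch and assumes p1p1 is the SR-IOV interface used above:

libvirtd --version
python -c "import libvirt; print(libvirt.getVersion())"
cat /sys/class/net/p1p1/device/sriov_numvfs
lspci -nn | grep 82599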

6122 Devstack Configurations

In the following text, the example uses a controller with IP address 10.11.1.21 and a compute node with IP address 10.11.1.24. The PCI device vendor ID (8086) and product ID of the 82599 (10fb for the physical function and 10ed for a VF) can be obtained from the output of:

lspci -nn | grep 82599

On Controller node

1 Edit Controller localconf Note that the same localconf file of Section 5213 is used here but adding the following

Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitchsriovnicswitch

[[post-config|$NOVA_CONF]]
[DEFAULT]
scheduler_default_filters=RamFilter,ComputeFilter,AvailabilityZoneFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,PciPassthroughFilter,NUMATopologyFilter
pci_alias={"name":"niantic","product_id":"10ed","vendor_id":"8086"}

[[post-config|$Q_PLUGIN_CONF_FILE]]
[ml2_sriov]
supported_pci_vendor_devs = 8086:10fb 8086:10ed

2 Run stacksh

On Compute node

1 Edit /opt/stack/nova/requirements.txt and add "libvirt-python>=1.2.8":

echo "libvirt-python>=1.2.8" >> /opt/stack/nova/requirements.txt

2 Edit Compute localconf for accelerated OVS Note that the same localconf file of Section 5311 is used here


3 Adding the following

[[post-config|$NOVA_CONF]]
[DEFAULT]
pci_passthrough_whitelist={"address":"0000:08:00.0","vendor_id":"8086","physical_network":"physnet1"}

pci_passthrough_whitelist={"address":"0000:08:10.0","vendor_id":"8086","physical_network":"physnet1"}

pci_passthrough_whitelist={"address":"0000:08:10.2","vendor_id":"8086","physical_network":"physnet1"}

4 Remove (or comment out) the following (note that currently SR-IOV pass-through is only supported with standard OVS):

OVS_NUM_HUGEPAGES=8192
OVS_DATAPATH_TYPE=netdev
OVS_GIT_TAG=b35839f3855e3b812709c6ad1c9278f498aa9935

Run stacksh for both controller and compute nodes to complete the Devstack installation

6123 Create VM with Numa Placement and SR-IOV

1 After stacking is successful on both the controller and compute nodes, verify the PCI pass-through device(s) are in the OpenStack database:

mysql -uroot -ppassword -h 10.11.1.21 nova -e 'select * from pci_devices'

2 The output should show entry(ies) of PCI device(s) similar to the following

| 2014-11-18 194114 | NULL | NULL | 0 | 1 | 3 | 000008100 | 10ed | 8086 | type-VF | pci_0000_08_10_0 | label_8086_10ed | available | phys_function 000008000 | NULL | NULL | 0 |

3 Next to create a flavor for example

nova flavor-create numa-flavor 1001 1024 4 1

where

flavor name = numa-flavor
id = 1001
virtual memory = 1024 MB
virtual disk size = 4 GB
number of virtual CPUs = 1

4 Modify flavor for numa placement with PCI pass-through

nova flavor-key 1001 set "pci_passthrough:alias"="niantic:1" hw:numa_nodes=1 hw:numa_cpus.0=0 hw:numa_mem.0=1024

5 To show detailed information of the flavor

nova flavor-show 1001

6 Create a VM numa-vm1 with the flavor numa-flavor under the default project demo Note that the following example assumes a image fedora-basic and an availability zone zone-04 are already in place (see Section 6112) and the private is the default network for demo project

nova boot --image <image-id> --flavor <flavor-id> --availability-zone <zone-name> --nic net-id=<network-id> numa-vm1

where numa-vm1 is the name of instance of the VM to be booted


Access the VM from the OpenStack Horizon the new VM shows two virtual network interfaces The interface with a SR-IOV VF should show a name of ensX where X is a numerical number For example ens5 If a DHCP server is available for the physical interface (p1p1 in this example) the VF gets an IP address automatically otherwise users can assign an IP address to the interface the same way as a standard network interface

To verify network connectivity through a VF users can set up two compute hosts and create a VM on each node After obtaining IP addresses the VMs should communicate with each other as with a normal network
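As a concrete example of the connectivity check described above, the VF interface inside each VM can be configured manually when no DHCP server is present. The interface name and addresses below are illustrative:

# Inside the VM on the first compute node:
ip addr add 192.168.50.11/24 dev ens5
ip link set ens5 up

# Inside the VM on the second compute node:
ip addr add 192.168.50.12/24 dev ens5
ip link set ens5 up
ping -c 4 192.168.50.11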

6.2 Using OpenDaylight

This section describes how to download, install, and set up an OpenDaylight controller.

6.2.1 Preparing the OpenDaylight Controller

1 Download the pre-built OpenDaylight Helium-SR1 distribution:

wget http://nexus.opendaylight.org/content/repositories/opendaylight.release/org/opendaylight/integration/distribution-karaf/0.2.1-Helium-SR1/distribution-karaf-0.2.1-Helium-SR1.tar.gz

2 Extract the archive and cd into it

tar xf distribution-karaf-0.2.1-Helium-SR1.tar.gz
cd distribution-karaf-0.2.1-Helium-SR1

3 Use the bin/karaf executable to start the Karaf shell.


4 Install the required features

Karaf might take a long time to start, or feature installation might fail, if the host does not have network access. You'll need to set up the appropriate proxy settings (an example feature installation follows).
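The exact feature set depends on the deployment. For the OVSDB/OpenStack integration used in this guide, a commonly documented set for Helium looks like the sketch below; run it from the Karaf shell started in step 3 and confirm the feature names against your OpenDaylight release:

feature:install odl-base-all odl-aaa-authn odl-restconf odl-nsf-all odl-adsal-northbound
feature:install odl-mdsal-apidocs odl-ovsdb-openstack odl-ovsdb-northbound odl-dlux-core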

6.3 Border Network Gateway

This section describes how to install and run a Border Network Gateway (BNG) on a compute node that is prepared as described in Section 5.1 and Section 5.3. The example interface names from these sections have been maintained in this section too. Also, for simplicity, the BNG is using the handle_none configuration mode, which makes it work as an L2 forwarding engine. The BNG is more complex than this, and users who are interested in exploring more of its capabilities should read https://01.org/intel-data-plane-performance-demonstrators/quick-overview

The setup to test the functionality of the vBNG follows


6.3.1 Installation and Configuration Inside the VM

1 Execute the following command:

yum -y update

2 Disable SELinux

setenforce 0
vi /etc/selinux/config

And change so SELINUX=disabled

3 Disable the firewall

systemctl disable firewalld.service
reboot

4 Edit grub default configuration

vi /etc/default/grub

Add hugepages to it

... noirqbalance intel_idle.max_cstate=0 processor.max_cstate=0 ipv6.disable=1 default_hugepagesz=1G hugepagesz=1G hugepages=2 isolcpus=1,2,3,4

5 Rebuild grub config and reboot the system

grub2-mkconfig -o /boot/grub2/grub.cfg
reboot

6 Verify that hugepages are available in the VM

cat /proc/meminfo
HugePages_Total:       2
HugePages_Free:        2
Hugepagesize:    1048576 kB

7 Add the following to the end of ~bashrc file

---------------------------------------------
export RTE_SDK=/root/dpdk
export RTE_TARGET=x86_64-native-linuxapp-gcc
export OVS_DIR=/root/ovs

export RTE_UNBIND=$RTE_SDK/tools/dpdk_nic_bind.py
export DPDK_DIR=$RTE_SDK
export DPDK_BUILD=$DPDK_DIR/$RTE_TARGET
---------------------------------------------

8 Re-login or source that file

bashrc

9 Install DPDK

git clone http://dpdk.org/git/dpdk
cd dpdk
git checkout v1.7.1
make install T=$RTE_TARGET
modprobe uio
insmod $RTE_SDK/$RTE_TARGET/kmod/igb_uio.ko


10 Check the PCI addresses of the 82599 cards

lspci | grep Network
00:04.0 Ethernet controller: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection (rev 01)
00:05.0 Ethernet controller: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection (rev 01)
00:06.0 Ethernet controller: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection (rev 01)
00:07.0 Ethernet controller: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection (rev 01)

11 Make sure that the correct PCI addresses are listed in the script bind_to_igb_uio.sh.

12 Download BNG packages

wget https01orgsitesdefaultfilesdownloadsintel-data-plane-performance-demonstratorsdppd-bng-v013zip

13 Extract DPPD BNG sources

unzip dppd-bng-v013zip

14 Build BNG DPPD application

yum -y install ncurses-devel
cd dppd-BNG-v013
make

15 Refer to Section 6.3.3, "Extra Preparations on the Compute Node," before running the BNG application in the VM inside the compute node.

16 Make sure that the application starts

./build/dppd -f config/handle_none.cfg

The handle none configuration should be passing all through traffic between ports which is essentially similar to the L2 forwarding test The config directory contains additional complex BNG configurations and Pktgen scripts Additional BNG specific workloads can be found in the dppd-BNGv013pktgen-scripts directory

Following is a sample graphic of the BNG running in a VM with 2 ports


Exit the application by pressing ESC or CTRL-C

Refer to Section 632 regarding installation and running the software traffic generator

For the sanity check test users can use the pktgen wrapper script onps_pktgen-64bytes-UDP-2portssh for running PktGen (on its dedicated server) in order to test the handle-none throughput for two physical and two virtual ports yoursquoll need to update the PKTGEN_DIR at the top of the file to point to the right directory which is the following referring to Section 632

PKTGEN_DIR=homestackgitPktgen-DPDKpktgen-64bytessh

632 Installation and Configuration of the Back-to-Back Host (Packet Generator)

The back-to-back host can be any Intel® Xeon® processor-based system, or it can be any compute node that has been prepared using the instructions in Section 5.1 and Section 5.3. For simplicity, Intel assumes the latter was the case. Also assume that the git directory for the stack user is /home/stack/git.

1 In the git directory get the source from Github

git clone https://github.com/Pktgen/Pktgen-DPDK.git
cd Pktgen-DPDK

2 An extra package must be installed for Pktgen to compile correctly

yum -y install libpcap-devel

Pktgen comes with its own distribution of DPDK sources This bundled version of DPDK must be used Note that it contains some WindRiver specific helper libraries that are not in the default DPDK distribution which Pktgen depends on

3 The $RTE_TARGET variable must be set to a specific value Otherwise these libraries will not build

cd
vi .bashrc

Add the following three lines to the end

export RTE_SDK=$HOME/Pktgen-DPDK/dpdk
export RTE_TARGET=x86_64-pktgen-linuxapp-gcc
export PKTGEN_DIR=$HOME/Pktgen-DPDK

4 Re-login or execute the following command

bashrc

5 Build the basic DPDK libraries and extra helpers

cd $RTE_SDK
make install T=$RTE_TARGET

6 Build Pktgen

cd examples/pktgen
make

7 Adapt the dpdk_nic_bind.py script according to the actual NICs in use so that both interfaces are bound to igb_uio and DPDK can use them (see the binding example after this list). Check the current binding status with the following command:

./tools/dpdk_nic_bind.py --status

8 Use onps_pktgen-64-bytes-UDP-2ports.sh from onps_server_1_2.tar.gz.


9 Now run the script as root, after the compute node has been set up as in Section 6.3.3, the BNG VM has been prepared as in Section 6.3.1, and the BNG has been started inside the VM.
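As referenced in step 7, both 10 GbE interfaces must be bound to igb_uio before Pktgen can use them. A minimal sketch follows; the PCI addresses are illustrative and should be taken from the --status output:

modprobe uio
insmod $RTE_SDK/$RTE_TARGET/kmod/igb_uio.ko
$RTE_SDK/tools/dpdk_nic_bind.py --bind=igb_uio 0000:04:00.0 0000:04:00.1
$RTE_SDK/tools/dpdk_nic_bind.py --status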

6.3.3 Extra Preparations on the Compute Node

1 Do the following as a stack user:

cd /home/stack/devstack
vi local.conf

2 Comment out the following

PHYSICAL_NETWORK=physnet1
OVS_PHYSICAL_BRIDGE=br-p1p1

And at the same time add the following line right below the previous commented ones

OVS_BRIDGE_MAPPINGS=default:br-p1p1,physnet1:br-p1p2

3 Run again as stack user

./unstack.sh

./stack.sh

This causes both physical interfaces to come up and get bound to the DPDK Also a bridge is created on top of each of these interfaces

ovs-vsctl show
b52bd3ed-0f6c-45b9-ace1-846d901bed64
    Bridge br-p1p1
        Port br-p1p1
            Interface br-p1p1
                type: internal
        Port p1p1
            Interface p1p1
                type: dpdkphy
                options: {port="0"}
        Port phy-br-p1p1
            Interface phy-br-p1p1
                type: patch
                options: {peer=int-br-p1p1}
    Bridge br-int
        fail_mode: secure
        Port int-br-p1p2
            Interface int-br-p1p2
                type: patch
                options: {peer=phy-br-p1p2}
        Port int-br-p1p1
            Interface int-br-p1p1
                type: patch
                options: {peer=phy-br-p1p1}
        Port br-int
            Interface br-int
                type: internal
    Bridge br-p1p2
        Port phy-br-p1p2
            Interface phy-br-p1p2
                type: patch
                options: {peer=int-br-p1p2}
        Port p1p2
            Interface p1p2
                type: dpdkphy
                options: {port="1"}
        Port br-p1p2
            Interface br-p1p2
                type: internal

4 Move the p1p2 physical port under the same bridge as p1p1

ovs-vsctl del-port p1p2
ovs-vsctl add-port br-p1p1 p1p2 -- set Interface p1p2 type=dpdkphy option:port=1

5 Delete the agent of OpenStack

./rejoin-stack.sh
ctrl-a 1
ctrl-c
ctrl-a d

6 Add the dpdkvhost interfaces for the VM

ovs-vsctl --no-wait add-port br-p1p1 port3 -- set Interface port3 type=dpdkvhost ofport_request=3
ovs-vsctl --no-wait add-port br-p1p1 port4 -- set Interface port4 type=dpdkvhost ofport_request=4

7 Find out the port number of the obstructed interfaces

ovs-ofctl show br-p1p1

The output should be similar to the following Note the number on the left of the interface because its the obstructed port number

OFPT_FEATURES_REPLY (xid=0x2) dpid0000286031010000n_tables254 n_buffers256capabilities FLOW_STATS TABLE_STATS QUEUE_STATS ARP_MATCH_IPactions OUTPUT SET_VLAN_VID SET_VLAN_PCP STRIP_VLAN SET_DL_SRC SET_DL_DST SET_NW_SRC SET_NW_DST SET_TP_SRC SET_TP_DST 1(phy-br-p1p1) addr9eae92253cc1 config 0 state 0 speed 0 Mbps now 0 Mbps max 2(p1p2) addr9eae92253cc1 config 0 state 0 speed 0 Mbps now 0 Mbps max 3(port3) addr9eae92253cc1 config 0 state 0 speed 0 Mbps now 0 Mbps max 4(port4) addr4904ff7f0000 config 0 state 0 speed 0 Mbps now 0 Mbps max 16(p1p1) addr4904ff7f0000 config 0 state 0 speed 0 Mbps now 0 Mbps max LOCAL(br-p1p1) addr9eae92253cc1 config 0 state 0 speed 0 Mbps now 0 Mbps maxOFPT_GET_CONFIG_REPLY (xid=0x4) frags=normal miss_send_len=0

8 Clean up the flow table of the bridge

ovs-ofctl del-flows br-p1p1

9 Program the flows so each physical interface forwards the packets to a dpdkvhost interface and the other way round

ovs-ofctl add-flow br-p1p1 in_port=16,dl_type=0x0800,idle_timeout=0,action=output:3
ovs-ofctl add-flow br-p1p1 in_port=3,dl_type=0x0800,idle_timeout=0,action=output:16
ovs-ofctl add-flow br-p1p1 in_port=4,dl_type=0x0800,idle_timeout=0,action=output:2
ovs-ofctl add-flow br-p1p1 in_port=2,dl_type=0x0800,idle_timeout=0,action=output:4


10 Users can now spawn their vBNG

qemu-kvm -cpu host -enable-kvm -m 4096 -smp 4,cores=4,threads=1,sockets=1 -name VM1 \
  -hda <path to the VM image file> -mem-path /dev/hugepages -mem-prealloc -vnc :2 -daemonize \
  -net nic,model=virtio,macaddr=00:1e:77:68:09:fd -net tap,ifname=tap1,script=no,downscript=no \
  -netdev type=tap,id=net1,script=no,downscript=no,ifname=port3,vhost=on \
  -device virtio-net-pci,netdev=net1,mac=00:00:01:00:00:01,csum=off,gso=off,guest_tso4=off,guest_tso6=off,guest_ecn=off \
  -netdev type=tap,id=net2,script=no,downscript=no,ifname=port4,vhost=on \
  -device virtio-net-pci,netdev=net2,mac=00:00:01:00:00:02,csum=off,gso=off,guest_tso4=off,guest_tso6=off,guest_ecn=off
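To verify the flows programmed in step 9 and watch their packet counters while traffic is running, the flow table of the bridge can be dumped at any time. A quick sketch:

ovs-ofctl dump-flows br-p1p1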


Appendix A Additional OpenDaylight Information

This section describes how OpenDaylight can be used in a multi-node setup Two hosts are used one running OpenDaylight OpenStack Controller + Compute and OVS The second host is the compute node This section describes how to create a Vxlan tunnel VMs and ping from one VM to another

Note: Due to a known defect in ODL (https://bugs.opendaylight.org/show_bug.cgi?id=2469), the multi-node setup could not be verified.

Following is a sample localconf for OpenDaylight host

[[local|localrc]]FORCE=yes

HOST_NAME=ltname of this machinegtHOST_IP=ltip of this machinegtHOST_IP_IFACE=ltmgmt ip isolated from internetgt

PUBLIC_INTERFACE=ltisolated IP could be same as HOST_IP_IFACEgtVLAN_INTERFACE=FLAT_INTERFACE=

ADMIN_PASSWORD=passwordMYSQL_PASSWORD=passwordDATABASE_PASSWORD=passwordSERVICE_PASSWORD=passwordSERVICE_TOKEN=no-token-passwordHORIZON_PASSWORD=passwordRABBIT_PASSWORD=password

disable_service n-netdisable_service n-cpu

enable_service q-svcenable_service q-agtenable_service q-dhcpenable_service q-l3enable_service q-metaenable_service neutronenable_service horizon

LOGFILE=$DESTstackshlogSCREEN_LOGDIR=$DESTscreenSYSLOG=TrueLOGDAYS=1

ODL startQ_HOST=$HOST_IPenable_service odl-serverenable_service odl-computeODL_MGR_IP=1011107ENABLED_SERVICES+=n-apin-crtn-objn-cpun-condn-schn-novncn-cauthn-cauthnovaENABLED_SERVICES+=cinderc-apic-volc-schc-bak


Q_PLUGIN=ml2

Q_AGENT=openvswitchQ_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitchopendaylightQ_ML2_PLUGIN_TYPE_DRIVERS=vlanflatlocalQ_ML2_TENANT_NETWORK_TYPE=vxlan

ENABLE_TENANT_TUNNELS=TrueENABLE_TENANT_VLANS=True

PHYSICAL_NETWORK=physnet1ML2_VLAN_RANGES=physnet110001010OVS_PHYSICAL_BRIDGE=br-

MULTI_HOST=True

[[post-config|$NOVA_CONF]]

disable nova security groups[DEFAULT]firewall_driver=novavirtfirewallNoopFirewallDrivernovncproxy_host=0000novncproxy_port=6080

Here is a sample localconf for Compute Node

[[local|localrc]]

FORCE=yesMULTI_HOST=True

HOST_NAME=ltname of this machinegtHOST_IP=ltip of this machinegtHOST_IP_IFACE=ltisolated interfacegtSERVICE_HOST_NAME=ltname of the controller machinegtSERVICE_HOST=ltip of controller machinegtQ_HOST=$SERVICE_HOST

MYSQL_HOST=$SERVICE_HOSTRABBIT_HOST=$SERVICE_HOST

GLANCE_HOST=$SERVICE_HOSTGLANCE_HOSTPORT=$SERVICE_HOST9292KEYSTONE_AUTH_HOST=$SERVICE_HOSTKEYSTONE_SERVICE_HOST=ltip of controller machinegt

ADMIN_PASSWORD=passwordMYSQL_PASSWORD=passwordDATABASE_PASSWORD=passwordSERVICE_PASSWORD=passwordSERVICE_TOKEN=no-token-passwordHORIZON_PASSWORD=passwordRABBIT_PASSWORD=password

disable_all_services

enable_service rabbitenable_service n-cpuenable_service q-agtenable_service odl-compute

DEST=optstackLOGFILE=$DESTstackshlogSCREEN_LOGDIR=$DESTscreenSYSLOG=TrueLOGDAYS=1


ODL_MGR_IP=ltip of controller machinegt

Q_PLUGIN=ml2Q_AGENT=openvswitchQ_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitchopendaylightQ_ML2_PLUGIN_TYPE_DRIVERS=vxlanOVS_NUM_HUGEPAGES=8192OVS_DATAPATH_TYPE=netdev

OVDK_OVS_GIT_TAG=

ENABLE_TENANT_TUNNELS=TrueENABLE_TENANT_VLANS=True

Q_ML2_TENANT_NETWORK_TYPE=vxlanML2_VLAN_RANGES=physnet110001010PHYSICAL_NETWORK=physnet1OVS_PHYSICAL_BRIDGE=br-p1p1

[[post-config|$NOVA_CONF]][DEFAULT]firewall_driver=novavirtfirewallNoopFirewallDrivervnc_enabled=Truevncserver_listen=0000vncserver_proxyclient_address=$HOST_IP

A1 Create VMs using DevStack Horizon GUI

After starting the OpenDaylight controller as described in Section 61 run a stack on the controller and compute nodes

Log in to http://<control node ip address>:8080 to start the Horizon GUI.

Verify that the node shows up in the following GUI

Create a new Vxlan network

1 Click on the Networks tab

2 Click on the Create Network button

3 Enter the Network name then click Next


4 Enter the subnet information then click Next

47

Intelreg ONP Server Reference ArchitectureSolutions Guide

5 Add additional information then click Next

6 Click the Create button

7 Create a VM instance by clicking the Launch Instances button

Intelreg ONP Server Reference ArchitectureSolutions Guide

48

8 Click on the Details tab to enter VM details

49

Intelreg ONP Server Reference ArchitectureSolutions Guide

9 Click on the Networking tab then enter network information

VMS will now be created

Instead of disabling the OVSDB neutron service by removing the OVSDB neutron bundle file it is possible to disable the bundle from the OSGi console However there does not appear to be a way to make this persistent so it must be done each time the controller restarts

Intelreg ONP Server Reference ArchitectureSolutions Guide

50

Once the controller is up and running connect to the OSGi console The ss command displays all of the bundles that are installed and their status Adding a string(s) filters the list of bundles List the OVSDB bundles

osgigt ss ovsFramework is launched

id State Bundle106 ACTIVE orgopendaylightovsdbnorthbound_050112 ACTIVE orgopendaylightovsdb_050262 ACTIVE orgopendaylightovsdbneutron_050

Note that there are three OVSDB bundles running (the OVSDB neutron bundle has not been removed in this case)

Disable the OVSDB neutron bundle and then list the OVSDB bundles again

osgigt stop 262osgigt ss ovsFramework is launched

id State Bundle106 ACTIVE orgopendaylightovsdbnorthbound_050112 ACTIVE orgopendaylightovsdb_050262 RESOLVED orgopendaylightovsdbneutron_050

Now the OVSDB neutron bundle is in the RESOLVED state which means that it is not active

51

Intelreg ONP Server Reference ArchitectureSolutions Guide

Appendix B BNG as an Appliance

Please download the latest BNG application from https01orgintel-data-plane-performance-demonstratorsdownloads More details about how BNG works can be found in https01orgintel-data-plane-performance-demonstratorsquick-overview

Intelreg ONP Server Reference ArchitectureSolutions Guide

52

NOTE This page intentionally left blank

53

Intelreg ONP Server Reference ArchitectureSolutions Guide

Appendix C Glossary

Acronym Description

ATR Application Targeted Routing

COTS Commercial Off‐The-Shelf

DPI Deep Packet Inspection

FCS Frame Check Sequence

GRE Generic Routing Encapsulation

GRO Generic Receive Offload

IOMMU InputOutput Memory Management Unit

Kpps Kilo packets per seconds

KVM Kernel-based Virtual Machine

LRO Large Receive Offload

MSI Message Signaling Interrupt

MPLS Multi-protocol Label Switching

Mpps Millions packets per seconds

NIC Network Interface Card

pps Packets per seconds

QAT Quick Assist Technology

QinQ VLAN stacking (802.1ad)

RA Reference Architecture

RSC Receive Side Coalescing

RSS Receive Side Scaling

SP Service Provider

SR-IOV Single root IO Virtualization

TCO Total Cost of Ownership

TSO TCP Segmentation Offload


Appendix D References

Document Name Source

Internet Protocol version 4: http://www.ietf.org/rfc/rfc791.txt

Internet Protocol version 6: http://www.faqs.org/rfc/rfc2460.txt

Intel® 82599 10 Gigabit Ethernet Controller Datasheet: http://www.intel.com/content/www/us/en/ethernet-controllers/82599-10-gbe-controller-datasheet.html

Intel DDIO: https://www-ssl.intel.com/content/www/us/en/io/direct-data-i-o.html

Bandwidth Sharing Fairness: http://www.intel.com/content/www/us/en/network-adapters/10-gigabit-network-adapters/10-gbe-ethernet-flexible-port-partitioning-brief.html

Design Considerations for efficient network applications with Intel® multi-core processor-based systems on Linux: http://download.intel.com/design/intarch/papers/324176.pdf

OpenFlow with Intel 82599: http://ftp.sunet.se/pub/Linux/distributions/bifrost/seminars/workshop-2011-03-31/Openflow_1103031.pdf

Wu, W., DeMar, P. & Crawford, M. (2012). A Transport-Friendly NIC for Multicore/Multiprocessor Systems. IEEE Transactions on Parallel and Distributed Systems, vol. 23, no. 4, April 2012: http://lss.fnal.gov/archive/2010/pub/fermilab-pub-10-327-cd.pdf

Why does Flow Director Cause Packet Reordering?: http://arxiv.org/ftp/arxiv/papers/1106/1106.0443.pdf

IA packet processing: http://www.intel.com/p/en_US/embedded/hwsw/technology/packet-processing

High Performance Packet Processing on Cloud Platforms using Linux with Intel® Architecture: http://networkbuilders.intel.com/docs/network_builders_RA_packet_processing.pdf

Packet Processing Performance of Virtualized Platforms with Linux and Intel® Architecture: http://networkbuilders.intel.com/docs/network_builders_RA_NFV.pdf

DPDK: http://www.intel.com/go/dpdk

Intel® DPDK Accelerated vSwitch: https://01.org/packet-processing


LEGAL

By using this document in addition to any agreements you have with Intel you accept the terms set forth below

You may not use or facilitate the use of this document in connection with any infringement or other legal analysis concerning Intel products described herein You agree to grant Intel a non-exclusive royalty-free license to any patent claim thereafter drafted which includes subject matter disclosed herein

INFORMATION IN THIS DOCUMENT IS PROVIDED IN CONNECTION WITH INTEL PRODUCTS NO LICENSE EXPRESS OR IMPLIED BY ESTOPPEL OR OTHERWISE TO ANY INTELLECTUAL PROPERTY RIGHTS IS GRANTED BY THIS DOCUMENT EXCEPT AS PROVIDED IN INTELS TERMS AND CONDITIONS OF SALE FOR SUCH PRODUCTS INTEL ASSUMES NO LIABILITY WHATSOEVER AND INTEL DISCLAIMS ANY EXPRESS OR IMPLIED WARRANTY RELATING TO SALE ANDOR USE OF INTEL PRODUCTS INCLUDING LIABILITY OR WARRANTIES RELATING TO FITNESS FOR A PARTICULAR PURPOSE MERCHANTABILITY OR INFRINGEMENT OF ANY PATENT COPYRIGHT OR OTHER INTELLECTUAL PROPERTY RIGHT

Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors

Performance tests such as SYSmark and MobileMark are measured using specific computer systems components software operations and functions Any change to any of those factors may cause the results to vary You should consult other information and performance tests to assist you in fully evaluating your contemplated purchases including the performance of that product when combined with other products

The products described in this document may contain design defects or errors known as errata which may cause the product to deviate from published specifications Current characterized errata are available on request Contact your local Intel sales office or your distributor to obtain the latest specifications and before placing your product order

Intel technologies may require enabled hardware specific software or services activation Check with your system manufacturer or retailer Tests document performance of components on a particular test in specific systems Differences in hardware software or configuration will affect actual performance Consult other sources of information to evaluate performance as you consider your purchase For more complete information about performance and benchmark results visit httpwwwintelcomperformance

All products computer systems dates and figures specified are preliminary based on current expectations and are subject to change without notice Results have been estimated or simulated using internal Intel analysis or architecture simulation or modeling and provided to you for informational purposes Any differences in your system hardware software or configuration may affect your actual performance

No computer system can be absolutely secure Intel does not assume any liability for lost or stolen data or systems or any damages resulting from such losses

Intel does not control or audit third-party web sites referenced in this document You should visit the referenced web site and confirm whether referenced data are accurate

Intel Corporation may have patents or pending patent applications trademarks copyrights or other intellectual property rights that relate to the presented subject matter The furnishing of documents and other materials and information does not provide any license express or implied by estoppel or otherwise to any such patents trademarks copyrights or other intellectual property rights

copy 2014 Intel Corporation All rights reserved Intel the Intel logo Core Xeon and others are trademarks of Intel Corporation in the US andor other countries Other names and brands may be claimed as the property of others


2 Install kernel packages

rpm -i kernel-3.15.6-200.fc20.x86_64.rpm
rpm -i kernel-devel-3.15.6-200.fc20.x86_64.rpm
rpm -i kernel-modules-extra-3.15.6-200.fc20.x86_64.rpm

3 Reboot the system to allow booting into the 3.15.6 kernel

Note ONPS depends on libraries provided by your Linux distribution As such it is recommended that you regularly update your Linux distribution with the latest bug fixes and security patches to reduce the risk of security vulnerabilities in your systems

After installing the required packages the operating system should be updated with the following command

yum update -y

This command upgrades to the latest kernel that Fedora supports. In order to keep the kernel at version 3.15.6, the yum configuration file needs to be modified with this command

echo "exclude=kernel*" >> /etc/yum.conf

before running yum update

After the update completes the system needs to be rebooted

5.1.2.4 Disable and Enable Services

For OpenStack, the following services were disabled: selinux, firewall, and NetworkManager. Run the following commands

sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
systemctl disable firewalld.service
systemctl disable NetworkManager.service

The following services should be enabled: ntp, sshd, and network. Run the following commands

systemctl enable ntpd.service
systemctl enable ntpdate.service
systemctl enable sshd.service
chkconfig network on

It is important to keep the timing synchronized between all nodes, and it is also necessary to use a known NTP server for all nodes. Users can edit /etc/ntp.conf to add a new server and remove default servers. The following example replaces a default NTP server with a local NTP server 100012 and comments out the other default servers.

sed -i 's/server 0.fedora.pool.ntp.org iburst/server 100012/g' /etc/ntp.conf
sed -i 's/server 1.fedora.pool.ntp.org iburst/#server 1.fedora.pool.ntp.org iburst/g' /etc/ntp.conf
sed -i 's/server 2.fedora.pool.ntp.org iburst/#server 2.fedora.pool.ntp.org iburst/g' /etc/ntp.conf
sed -i 's/server 3.fedora.pool.ntp.org iburst/#server 3.fedora.pool.ntp.org iburst/g' /etc/ntp.conf
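
After the ntpd service is restarted, synchronization against the configured server can be checked. This is a minimal verification sketch (not part of the original procedure), using the standard ntpq utility shipped with the ntp package:

systemctl restart ntpd.service
ntpq -p

The output should list the local NTP server; once synchronization is established, its reach column becomes nonzero.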


5.2 Controller Node Setup

This section describes the controller node setup. It is assumed that the user successfully followed the operating system installation and configuration sections.

Note: Make sure to download and use the onps_server_1_2.tar.gz tarball. Start with the README file. You'll get instructions on how to use Intel's scripts to automate most of the installation steps described in this section, and this saves you time.

5.2.1 OpenStack (Juno)

This section documents features and limitations that are supported with the Intel® DPDK Accelerated vSwitch and OpenStack Juno.

5211 Network Requirements

General

At least two networks are required to build OpenStack infrastructure in a lab environment One network is used to connect all nodes for OpenStack management (management network) and the other one is a private network exclusively for an OpenStack internal connection (tenant network) between instances (or virtual machines)

One additional network is required for Internet connectivity as installing OpenStack requires pulling packages from various sourcesrepositories on the Internet

Some users might want to have Internet andor external connectivity for OpenStack instances (virtual machines) In this case an optional network can be used

The assumption is that the targeting OpenStack infrastructure contains multiple nodes one is controller node and one or more are compute node(s)

Network Configuration Example

The following is an example of how to configure networks for OpenStack infrastructure The example uses four network interfaces as follows

bull ens2f1 For Internet network - Used to pull all necessary packagespatches from repositories on the Internet configured to obtain a DHCP address

bull ens2f0 For Management network - Used to connect all nodes for OpenStack management configured to use network 10110016

bull p1p1 For Tenant network - Used for OpenStack internal connections for virtual machines configured with no IP address

bull p1p2 For Optional External network - Used for virtual machine Internetexternal connectivity configured with no IP address This interface is only in the Controller node if external network is configured For Compute node this interface is not needed

Note that among these interfaces interface for virtual network (in this example p1p1) must be an 82599 port because it is used for DPDK and Intelreg DPDK Accelerated vSwitch Also note that a static IP address should be used for interface of management network

In Fedora 20 the network configuration files are located at

/etc/sysconfig/network-scripts


To configure a network on the host system edit the following network configuration files

ifcfg-ens2f1:
DEVICE=ens2f1
TYPE=Ethernet
ONBOOT=yes
BOOTPROTO=dhcp

ifcfg-ens2f0:
DEVICE=ens2f0
TYPE=Ethernet
ONBOOT=yes
BOOTPROTO=static
IPADDR=10111211
NETMASK=255.255.0.0

ifcfg-p1p1:
DEVICE=p1p1
TYPE=Ethernet
ONBOOT=yes
BOOTPROTO=none

ifcfg-p1p2:
DEVICE=p1p2
TYPE=Ethernet
ONBOOT=yes
BOOTPROTO=none

Note Do not configure the IP address for p1p1 (10 Gbs interface) otherwise DPDK does not work when binding the driver during OpenStack Neutron installation

Note 10111211 and 25525500 are static IP address and net mask to the management network It is necessary to have static IP address on this subnet The IP address 10111211 is just an example

5212 Storage Requirements

By default, DevStack uses block storage (Cinder) with a volume group named stack-volumes. If not specified, stack-volumes is created with 10 GB of space from a local file system. Note that stack-volumes is the name of the volume group, not of a single volume.

The following example shows how to use spare local disks devsdb and devsdc to form stack-volumes on a controller node by running the following commands

pvcreate /dev/sdb
pvcreate /dev/sdc
vgcreate stack-volumes /dev/sdb /dev/sdc
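
As an optional sanity check (not part of the original procedure), the volume group can be inspected to confirm that it spans both spare disks:

pvs
vgs stack-volumes

vgs should report stack-volumes backed by two physical volumes with the combined capacity of /dev/sdb and /dev/sdc.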

5213 OpenStack Installation Procedures

General

DevStack is used to deploy OpenStack in this example The following procedure uses an actual example of an installation performed in an Intel test lab consisting of one controller node (controller) and one compute node (compute)

Controller Node Installation Procedures

The following example uses a host for controller node installation with the following

bull Hostname sdnlab-k01

bull Internet network IP address Obtained from DHCP server


bull OpenStack Management IP address 1011121

bull Userpassword stackstack

Root User Actions

Login as su or root user and perform the following

1 Add stack user to sudoer list

echo "stack ALL=(ALL) NOPASSWD: ALL" >> /etc/sudoers

2 Edit /etc/libvirt/qemu.conf: add or modify the following lines

cgroup_controllers = [ "cpu", "devices", "memory", "blkio", "cpuset", "cpuacct" ]

cgroup_device_acl = ["/dev/null", "/dev/full", "/dev/zero",
    "/dev/random", "/dev/urandom",
    "/dev/ptmx", "/dev/kvm", "/dev/kqemu",
    "/dev/rtc", "/dev/hpet", "/dev/net/tun",
    "/mnt/huge", "/dev/vhost-net"]

hugetlbfs_mount = "/mnt/huge"

3 Restart the libvirt service and make sure libvirtd is active

systemctl restart libvirtd.service
systemctl status libvirtd.service

Stack User Actions

1 Login as a stack user

2 Configure the appropriate proxies (yum http https and git) for package installation and make sure these proxies are functional Note that on controller node localhost and its IP address should be included in no_proxy setup (for example export no_proxy=localhost1011121)
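
The exact proxy values depend on the lab environment. The following sketch shows one way to set them for the stack user, assuming a hypothetical proxy at http://proxy.example.com:8080 (replace the proxy URL and controller management IP with the real values):

export http_proxy=http://proxy.example.com:8080
export https_proxy=http://proxy.example.com:8080
export no_proxy=localhost,127.0.0.1,<controller management IP>
git config --global http.proxy http://proxy.example.com:8080
# yum reads its proxy from /etc/yum.conf (edit as root):
# proxy=http://proxy.example.com:8080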

3 Intelreg DPDK Accelerated vSwitch patches for OpenStack

The file openstack_ovdkl02-907.zip contains the necessary patches for OpenStack. Currently they are not native to OpenStack. The file can be downloaded from

https://01.org/sites/default/files/page/openstack_ovdkl02-907.zip

Place the file in the /home/stack directory and unzip it. Three patch files, devstack.patch, nova.patch, and neutron.patch, will be present after unzipping.

cd /home/stack
wget https://01.org/sites/default/files/page/openstack_ovdkl02-907.zip
unzip openstack_ovdkl02-907.zip

4 Download DevStack source

git clone https://github.com/openstack-dev/devstack.git

5 Check out DevStack with the Intel® DPDK Accelerated vSwitch and apply the patch

cd /home/stack/devstack
git checkout d6f700db33aeab68916156a98971aef8cfa53a2e
patch -p1 < /home/stack/devstack.patch


6 Download and patch Nova and Neutron

sudo mkdir /opt/stack
sudo chown stack:stack /opt/stack
cd /opt/stack
git clone https://github.com/openstack/nova.git
git clone https://github.com/openstack/neutron.git
cd /opt/stack/nova
git checkout b7738bfb6c2f271d047e8f20c0b74ef647367111
patch -p1 < /home/stack/nova.patch

7 Create localconf file in homestackdevstack

8 Pay attention to the following in the localconf file

a Use Rabbit for messaging services (Rabbit is on by default) In the past Fedora only supported QPID for OpenStack Now it only supports Rabbit

b Explicitly disable Nova compute service on the controller This is because by default Nova compute service is enabled

disable_service n-cpu

c To use Open vSwitch specify in configuration for ML2 plug-in

Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch

d Explicitly disable tenant tunneling and enable tenant VLAN This is because by default tunneling is used

ENABLE_TENANT_TUNNELS=False
ENABLE_TENANT_VLANS=True

e A sample local.conf file for the controller node follows

# Controller node
[[local|localrc]]

FORCE=yes
ADMIN_PASSWORD=password
MYSQL_PASSWORD=password
DATABASE_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=no-token-password
HORIZON_PASSWORD=password
RABBIT_PASSWORD=password

disable_service n-net
disable_service n-cpu
enable_service q-svc
enable_service q-agt
enable_service q-dhcp
enable_service q-l3
enable_service q-meta
enable_service neutron
enable_service horizon

DEST=/opt/stack
LOGFILE=$DEST/stack.sh.log
SCREEN_LOGDIR=$DEST/screen
SYSLOG=True
LOGDAYS=1

HOST_IP_IFACE=ens2f0
PUBLIC_INTERFACE=p1p2
VLAN_INTERFACE=p1p1
FLAT_INTERFACE=p1p1

Q_AGENT=openvswitch
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch
Q_ML2_PLUGIN_TYPE_DRIVERS=vlan,flat,local

ENABLE_TENANT_TUNNELS=False
ENABLE_TENANT_VLANS=True
PHYSICAL_NETWORK=physnet1
ML2_VLAN_RANGES=physnet1:1000:1010
OVS_PHYSICAL_BRIDGE=br-p1p1
MULTI_HOST=True

[[post-config|$NOVA_CONF]]
[DEFAULT]
firewall_driver=nova.virt.firewall.NoopFirewallDriver
novncproxy_host=0.0.0.0
novncproxy_port=6080

9 Install DevStack

cd /home/stack/devstack
./stack.sh

10 For a successful installation the following shows at the end of screen output

stack.sh completed in XXX seconds

where XXX is the number of seconds

11 For controller node only mdash Add physical port(s) to the bridge(s) created by the DevStack installation The following example can be used to configure the two bridges br-p1p1 (for virtual network) and br-ex (for external network)

sudo ovs-vsctl add-port br-p1p1 p1p1
sudo ovs-vsctl add-port br-ex p1p2

12 Make sure proper VLANs are created in the switch connecting physical port p1p1 For example the previous localconf specifies VLAN range of 1000-1010 therefore matching VLANs 1000 to 1010 should be configured in the switch


5.3 Compute Node Setup

This section describes how to complete the setup of the compute nodes. It is assumed that the user has successfully completed the BIOS settings and operating system installation and configuration sections.

Note: Please make sure to download and use the onps_server_1_2.tar.gz tarball. Start with the README file. You'll get instructions on how to use Intel's scripts to automate most of the installation steps described in this section, and this saves you time.

531 Host Configuration

5311 Using DevStack to Deploy vSwitch and OpenStack Components

General

Deploying OpenStack and Intelreg DPDK Accelerated vSwitch using DevStack on a compute node follows the same procedures as on the controller node Differences include

bull Required services are nova compute neutron agent and Rabbit

bull Intelreg DPDK Accelerated vSwitch is used in place of Open vSwitch for neutron agent

Compute Node Installation Example

The following example uses a host for compute node installation with the following

bull Hostname sdnlab-k02

bull Lab network IP address Obtained from DHCP server

bull OpenStack Management IP address 1011122

bull Userpassword stackstack

Note the following

bull No_proxy setup Localhost and its IP address should be included in the no_proxy setup In addition hostname and IP address of the controller node should also be included For example

export no_proxy=localhost,1011122,sdnlab-k01,1011121

bull Differences in the localconf file

mdash The service host is the controller as well as other OpenStack servers such as MySQL Rabbit Keystone and Image Therefore they should be spelled out Using the controller node example in the previous section the service host and its IP address should be

SERVICE_HOST_NAME=sdnlab-k01
SERVICE_HOST=1011121

mdash The only OpenStack services required in compute nodes are messaging nova compute and neutron agent so the localconf might look like

disable_all_services
enable_service rabbit
enable_service n-cpu
enable_service q-agt


mdash The user has option to use ovdk or openvswitch for neutron agent

Q_AGENT=ovdk

or

Q_AGENT=openvswitch

Note: For openvswitch, the user can specify regular or accelerated Open vSwitch (accelerated OVS). If accelerated OVS is used, the following setting should be added

OVS_DATAPATH_TYPE=netdev

Note If both are specified in the same localconf file the later one overwrites the previous one

mdash For the OVDK and accelerated OVS huge pages setting specify number of huge pages to be allocated and mounting point (default is mnthuge)

OVDK_NUM_HUGEPAGES=8192

or

OVS_NUM_HUGEPAGES=8192

mdash For this version Intel uses specific versions for OVDK or Accelerated OVS from their respective repositories Specify the following in the localconf file if OVDK or accelerated OVS is used

OVDK_GIT_TAG=6210bb0a6139b20283de115f87aa7a381b04670f
OVS_GIT_TAG=b35839f3855e3b812709c6ad1c9278f498aa9935

mdash Binding the physical port to the bridge is through the following line in localconf For example to bind port p1p1 to bridge br-p1p1 use

OVS_PHYSICAL_BRIDGE=br-p1p1

mdash A sample localconf file for compute node with ovdk agent follows

# Compute node
[[local|localrc]]

FORCE=yes
MULTI_HOST=True

HOST_NAME=$(hostname)
HOST_IP=1011122
HOST_IP_IFACE=ens2f0
SERVICE_HOST_NAME=1011121
SERVICE_HOST=1011121

MYSQL_HOST=$SERVICE_HOST
RABBIT_HOST=$SERVICE_HOST

GLANCE_HOST=$SERVICE_HOST
GLANCE_HOSTPORT=$SERVICE_HOST:9292
KEYSTONE_AUTH_HOST=$SERVICE_HOST
KEYSTONE_SERVICE_HOST=$SERVICE_HOST

ADMIN_PASSWORD=password
MYSQL_PASSWORD=password
DATABASE_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=no-token-password
HORIZON_PASSWORD=password
RABBIT_PASSWORD=password

disable_all_services
enable_service rabbit
enable_service n-cpu
enable_service q-agt

DEST=/opt/stack
LOGFILE=$DEST/stack.sh.log
SCREEN_LOGDIR=$DEST/screen
SYSLOG=True
LOGDAYS=1

Q_AGENT=ovdk
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch
Q_ML2_PLUGIN_TYPE_DRIVERS=vlan
OVDK_NUM_HUGEPAGES=8192
OVDK_GIT_TAG=6210bb0a6139b20283de115f87aa7a381b04670f

ENABLE_TENANT_TUNNELS=False
ENABLE_TENANT_VLANS=True
ML2_VLAN_RANGES=physnet1:1000:1010
PHYSICAL_NETWORK=physnet1
OVS_PHYSICAL_BRIDGE=br-p1p1

[[post-config|$NOVA_CONF]]
[DEFAULT]
firewall_driver=nova.virt.firewall.NoopFirewallDriver
vnc_enabled=True
vncserver_listen=0.0.0.0
vncserver_proxyclient_address=$HOST_IP

mdash A sample localconf file for compute node with accelerated ovs agent follows

# Compute node
[[local|localrc]]

FORCE=yes
MULTI_HOST=True

HOST_NAME=$(hostname)
HOST_IP=1011122
HOST_IP_IFACE=ens2f0

SERVICE_HOST_NAME=sdnlab-k01
SERVICE_HOST=1011121

MYSQL_HOST=$SERVICE_HOST
RABBIT_HOST=$SERVICE_HOST

GLANCE_HOST=$SERVICE_HOST
GLANCE_HOSTPORT=$SERVICE_HOST:9292
KEYSTONE_AUTH_HOST=$SERVICE_HOST
KEYSTONE_SERVICE_HOST=$SERVICE_HOST

ADMIN_PASSWORD=password
MYSQL_PASSWORD=password
DATABASE_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=no-token-password
HORIZON_PASSWORD=password
RABBIT_PASSWORD=password

disable_all_services
enable_service rabbit
enable_service n-cpu
enable_service q-agt

DEST=/opt/stack
LOGFILE=$DEST/stack.sh.log
SCREEN_LOGDIR=$DEST/screen
SYSLOG=True
LOGDAYS=1

Q_AGENT=openvswitch
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch
Q_ML2_PLUGIN_TYPE_DRIVERS=vlan
OVS_NUM_HUGEPAGES=8192
OVS_DATAPATH_TYPE=netdev
OVS_GIT_TAG=b35839f3855e3b812709c6ad1c9278f498aa9935

ENABLE_TENANT_TUNNELS=False
ENABLE_TENANT_VLANS=True
ML2_VLAN_RANGES=physnet1:1000:1010
PHYSICAL_NETWORK=physnet1
OVS_PHYSICAL_BRIDGE=br-p1p1

[[post-config|$NOVA_CONF]]
[DEFAULT]
firewall_driver=nova.virt.firewall.NoopFirewallDriver
vnc_enabled=True
vncserver_listen=0.0.0.0
vncserver_proxyclient_address=$HOST_IP

5.4 vIPS

The vIPS used is Suricata, which should be installed in a VM as an RPM package as previously described. To configure it to run in inline mode (IPS), use the following:

1 Turn on IP forwarding

sysctl -w netipv4ip_forward=1

2 Mangle all traffic from one vPort to the other using a netfilter queue

iptables -I FORWARD -i eth1 -o eth2 -j NFQUEUE
iptables -I FORWARD -i eth2 -o eth1 -j NFQUEUE

3 Have Suricata run in inline mode using the netfilter queue

suricata -c /etc/suricata/suricata.yaml -q 0

4 Enable ARP proxying

echo 1 > /proc/sys/net/ipv4/conf/eth1/proxy_arp
echo 1 > /proc/sys/net/ipv4/conf/eth2/proxy_arp
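
To confirm that traffic really passes through the inline engine, a simple local rule can be used. This is only an illustrative sketch, assuming /etc/suricata/rules/local.rules is listed under rule-files in suricata.yaml (not part of the original setup):

# /etc/suricata/rules/local.rules
drop icmp any any -> any any (msg:"ICMP dropped by vIPS"; sid:1000001; rev:1;)

With this rule loaded, pings between the source and sink VMs should stop, confirming the NFQUEUE path; remove the rule to restore normal forwarding.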

5.4.1 Network Configuration for non-vIPS Guests

1 Turn on IP forwarding

sysctl -w netipv4ip_forward=1

2 In the source add the route to the sink

route add -net 192.168.200.0/24 eth1

3 At the sink add the route to the source

route add -net 192.168.100.0/24 eth1


60 Testing the Setup

This section describes how to bring up the VMs in a compute node connect them to the virtual network(s) verify the functionality

Note Currently it is not possible to have more than one virtual network in a multi-compute node setup Although it is possible to have more than one virtual network in a single compute node setup

61 Preparation with OpenStack

611 Deploying Virtual Machines

6111 Default Settings

OpenStack comes with the following default settings

bull Tenant (Project) admin demo

bull Network

— Private network (virtual network): 10.0.0.0/24

— Public network (external network): 172.24.4.0/24

bull Image cirros-031-x86_64

bull Flavor nano micro tiny small medium large xlarge

To deploy new instances (VMs) with different setups (such as a different VM image flavor or network) users must create their own See below for details of how to create them

To access the OpenStack dashboard use a web browser (Firefox Internet Explorer or others) and the controllers IP address (management network) For example

http1011121

Login information is defined in the localconf file In the examples that follow password is the password for both admin and demo users


6112 Customer Settings

The following examples describe how to create a custom VM image flavor and aggregateavailability zone using OpenStack commands The examples assume the IP address of the controller is 1011121

1 Create a credential file admin-cred for admin user The file contains the following lines

export OS_USERNAME=admin
export OS_TENANT_NAME=admin
export OS_PASSWORD=password
export OS_AUTH_URL=http://1011121:35357/v2.0

2 Source admin-cred to the shell environment for actions of creating glance image aggregateavailability zone and flavor

source admin-cred

3 Create an OpenStack glance image A VM image file should be ready in a location accessible by OpenStack

glance image-create --name <image-name-to-create> --is-public=true --container-format=bare --disk-format=<format> --file=<image-file-path-name>

The following example shows that the image file fedora20-x86_64-basic.qcow2 is located on an NFS share mounted at /mnt/nfs/openstack/images on the controller host. The following command creates a glance image named fedora-basic in qcow2 format for public use (such that any tenant can use this glance image).

glance image-create --name fedora-basic --is-public=true --container-format=bare --disk-format=qcow2 --file=/mnt/nfs/openstack/images/fedora20-x86_64-basic.qcow2

4 Create host aggregate and availability zone

First find out the available hypervisors and then use the information for creating aggregateavailability zone

nova hypervisor-list
nova aggregate-create <aggregate-name> <zone-name>
nova aggregate-add-host <aggregate-name> <hypervisor-name>

The following example creates an aggregate named aggr-g06 with one availability zone named zone-g06 and the aggregate contains one hypervisor named sdnlab-g06

nova aggregate-create aggr-g06 zone-g06
nova aggregate-add-host aggr-g06 sdnlab-g06

5 Create flavor Flavor is a virtual hardware configuration for the VMs it defines the number of virtual CPUs size of virtual memory and disk space among others

The following command creates a flavor named onps-flavor with an ID of 1001 1024 Mb virtual memory 4 Gb virtual disk space and 1 virtual CPU

nova flavor-create onps-flavor 1001 1024 4 1
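
The new flavor can be checked before it is used (an optional verification step):

nova flavor-list | grep onps-flavor
nova flavor-show onps-flavor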


6.1.1.3 Example — VM Deployment

The following example describes how to use a custom VM image, flavor, and aggregate to launch a VM for the demo tenant using OpenStack commands. Again, the example assumes the IP address of the controller is 1011121.

1 Create a credential file demo-cred for a demo user The file contains the following lines

export OS_USERNAME=demo
export OS_TENANT_NAME=demo
export OS_PASSWORD=password
export OS_AUTH_URL=http://1011121:35357/v2.0

2 Source demo-cred to the shell environment for actions of creating tenant network and instance (VM)

source demo-cred

3 Create network for tenant demo Take the following steps

a Get tenant demo

keystone tenant-list | grep -Fw demo

The following creates a network with a name of net-demo for tenant with ID 10618268adb64f17b266fd8fb83c960d

neutron net-create --tenant-id 10618268adb64f17b266fd8fb83c960d net-demo

b Create subnet

neutron subnet-create --tenant-id <demo-tenant-id> --name <subnet_name> <network-name> <net-ip-range>

The following creates a subnet with a name of sub-demo and CIDR address 192.168.2.0/24 for network net-demo

neutron subnet-create --tenant-id 10618268adb64f17b266fd8fb83c960d --name sub-demo net-demo 192.168.2.0/24

4 Create instance (VM) for tenant demo Take the following steps

a Get the name andor ID of the image flavor and availability zone to be used for creating instance

glance image-list
nova flavor-list
nova aggregate-list
neutron net-list

b Launch an instance (VM) using information obtained from previous step

nova boot --image <image-id> --flavor <flavor-id> --availability-zone <zone-name> --nic net-id=<network-id> <instance-name>

c The new VM should be up and running in a few minutes
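
Putting the previous steps together, a boot command using the example names created earlier (fedora-basic image, onps-flavor flavor, zone-g06 availability zone, and net-demo network) might look like the following sketch; demo-vm1 is an arbitrary instance name and the network ID is read from neutron net-list:

NET_ID=$(neutron net-list | awk '/net-demo/ {print $2}')
nova boot --image fedora-basic --flavor onps-flavor --availability-zone zone-g06 --nic net-id=$NET_ID demo-vm1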

5 Log into the OpenStack dashboard using the demo user credentials and click Instances under Project in the left pane; the new VM should show in the right pane. Click the instance name to open the Instance Details view, then click Console in the top menu to access the VM, as follows.


6114 Local vIPS

Configuration

1 OpenStack brings up the VMs and connects them to the vSwitch

2 IP addresses of the VMs get configured using the DHCP server VM1 belongs to one subnet and VM3 to a different one VM2 has ports on both subnets

3 Flows get programmed to the vSwitch by the OpenDaylight controller (Section 62)

Data Path (Numbers Matching Red Circles)

1 VM1 sends a flow to VM3 through the vSwitch

2 The vSwitch forwards the flow to the first vPort of VM2 (active IPS)

3 The IPS receives the flow inspects it and (if not malicious) sends it out through its second vPort

4 The vSwitch forwards it to VM3

Figure 6-1 Local vIPS


6115 Remote vIPS

Configuration

1 OpenStack brings up the VMs and connects them to the vSwitch

2 The IP addresses of the VMs get configured using the DHCP server

Data Path (Numbers Matching Red Circles)

1 VM1 sends a flow to VM3 through the vSwitch inside compute node 1

2 The vSwitch forwards the flow out of the first 82599 port to the first 82599 port of compute node 2

3 The vSwitch of compute node 2 forwards the flow to the first port of the vHost where the traffic gets consumed by VM1

4 The IPS receives the flow inspects it and (provided it is not malicious) sends it out through its second port of the vHost into the vSwitch of compute node 2

5 The vSwitch forwards the flow out of the second 82599 port of compute node 2 into the second port of the 82599 in compute node 1

6 The vSwitch of compute node 1 forwards the flow into the port of the vHost of VM3 where the flow gets terminated

Figure 6-2 Remote vIPS


6.1.2 Non-uniform Memory Access (NUMA) Placement and SR-IOV Pass-through for OpenStack

NUMA placement was implemented as a new feature in the OpenStack Juno release. NUMA placement enables an OpenStack administrator to pin guest systems to particular NUMA nodes for optimization. With an SR-IOV enabled network interface card, each SR-IOV port is associated with a Virtual Function (VF). OpenStack SR-IOV pass-through enables a guest to access a VF directly.

6121 Prepare Compute Node for SR-IOV Pass-through

To enable the previous features follow these steps to configure compute node

1 The server hardware must support IOMMU (Intel VT-d). To check whether IOMMU is supported, run the following command; the output should show IOMMU entries

dmesg | grep -e IOMMU

Note: IOMMU can be enabled/disabled through a BIOS setting under Advanced and then Processor.

2 Enable kernel IOMMU in grub. For Fedora 20, run the commands

sed -i 's/rhgb quiet/rhgb quiet intel_iommu=on/g' /etc/default/grub
grub2-mkconfig -o /boot/grub2/grub.cfg

3 Install necessary packages

yum install -y yajl-devel device-mapper-devel libpciaccess-devel libnl-devel dbus-devel numactl-devel python-devel

4 Install libvirt v1.2.8 or newer. The following example uses v1.2.9.

systemctl stop libvirtd

yum remove libvirt
yum remove libvirtd

wget http://libvirt.org/sources/libvirt-1.2.9.tar.gz
tar zxvf libvirt-1.2.9.tar.gz

cd libvirt-1.2.9
./autogen.sh --system --with-dbus
make
make install

systemctl start libvirtd

Make sure libvirtd is running v1.2.9

libvirtd --version

5 Install libvirt-python. The example below uses v1.2.9 to match the libvirt version.

yum remove libvirt-python

wget https://pypi.python.org/packages/source/l/libvirt-python/libvirt-python-1.2.9.tar.gz
tar zxvf libvirt-python-1.2.9.tar.gz

cd libvirt-python-1.2.9
python setup.py install


6 Modify /etc/libvirt/qemu.conf: add

/dev/vfio/vfio

to the

cgroup_device_acl list

An example follows

cgroup_device_acl = ["/dev/null", "/dev/full", "/dev/zero",
    "/dev/random", "/dev/urandom",
    "/dev/ptmx", "/dev/kvm", "/dev/kqemu",
    "/dev/rtc", "/dev/hpet", "/dev/net/tun",
    "/dev/vfio/vfio"]

7 Enable the SR-IOV virtual function for an 82599 interface The following example enables 2 VFs for interface p1p1

echo 2 > /sys/class/net/p1p1/device/sriov_numvfs

To check that virtual functions are enabled

lspci -nn | grep 82599

The screen output should display the physical function and two virtual functions
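
Illustrative output is shown below; the bus addresses and exact description strings vary by platform, and the physical function is assumed here to be at 0000:08:00.0, matching the whitelist example used later:

lspci -nn | grep 82599
08:00.0 Ethernet controller [0200]: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection [8086:10fb] (rev 01)
08:10.0 Ethernet controller [0200]: Intel Corporation 82599 Ethernet Controller Virtual Function [8086:10ed] (rev 01)
08:10.2 Ethernet controller [0200]: Intel Corporation 82599 Ethernet Controller Virtual Function [8086:10ed] (rev 01)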

6122 Devstack Configurations

In the following text the example uses a controller with IP address 1011121 and compute 1011124 PCI device vendor ID (8086) and product ID of the 82599 can be obtained from output (10fb for physical function and 10ed for VF)

lspci -nn | grep 82599

On Controller node

1 Edit Controller localconf Note that the same localconf file of Section 5213 is used here but adding the following

Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch,sriovnicswitch

[[post-config|$NOVA_CONF]]
[DEFAULT]
scheduler_default_filters=RamFilter,ComputeFilter,AvailabilityZoneFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,PciPassthroughFilter,NUMATopologyFilter
pci_alias={"name":"niantic","product_id":"10ed","vendor_id":"8086"}

[[post-config|$Q_PLUGIN_CONF_FILE]]
[ml2_sriov]
supported_pci_vendor_devs = 8086:10fb 8086:10ed

2 Run stack.sh

On Compute node

1 Edit /opt/stack/nova/requirements.txt and add "libvirt-python>=1.2.8"

echo "libvirt-python>=1.2.8" >> /opt/stack/nova/requirements.txt

2 Edit Compute localconf for accelerated OVS Note that the same localconf file of Section 5311 is used here


3 Add the following

[[post-config|$NOVA_CONF]]
[DEFAULT]
pci_passthrough_whitelist={"address":"0000:08:00.0","vendor_id":"8086","physical_network":"physnet1"}
pci_passthrough_whitelist={"address":"0000:08:10.0","vendor_id":"8086","physical_network":"physnet1"}
pci_passthrough_whitelist={"address":"0000:08:10.2","vendor_id":"8086","physical_network":"physnet1"}

4 Remove (or comment out) the following. Note that currently SR-IOV pass-through is only supported with standard OVS.

OVS_NUM_HUGEPAGES=8192
OVS_DATAPATH_TYPE=netdev
OVS_GIT_TAG=b35839f3855e3b812709c6ad1c9278f498aa9935

Run stack.sh on both the controller and compute nodes to complete the DevStack installation.

6123 Create VM with Numa Placement and SR-IOV

1 After stacking is successful on both the controller and compute nodes, verify that the PCI pass-through device(s) are in the OpenStack database

mysql -uroot -ppassword -h 1011121 nova -e 'select * from pci_devices'

2 The output should show entry(ies) of PCI device(s) similar to the following

| 2014-11-18 194114 | NULL | NULL | 0 | 1 | 3 | 000008100 | 10ed | 8086 | type-VF | pci_0000_08_10_0 | label_8086_10ed | available | phys_function 000008000 | NULL | NULL | 0 |

3 Next to create a flavor for example

nova flavor-create numa-flavor 1001 1024 4 1

where

flavor name = numa-flavor
id = 1001
virtual memory = 1024 MB
virtual disk size = 4 GB
number of virtual CPUs = 1

4 Modify flavor for numa placement with PCI pass-through

nova flavor-key 1001 set pci_passthrough:alias=niantic:1 hw:numa_nodes=1 hw:numa_cpus.0=0 hw:numa_mem.0=1024

5 To show detailed information of the flavor

nova flavor-show 1001

6 Create a VM named numa-vm1 with the flavor numa-flavor under the default project demo. Note that the following example assumes an image fedora-basic and an availability zone zone-04 are already in place (see Section 6.1.1.2) and that private is the default network for the demo project.

nova boot --image <image-id> --flavor <flavor-id> --availability-zone <zone-name> --nic <network-id> numa-vm1

where numa-vm1 is the name of instance of the VM to be booted


Access the VM from OpenStack Horizon; the new VM shows two virtual network interfaces. The interface with an SR-IOV VF should show a name of ensX, where X is a number (for example, ens5). If a DHCP server is available for the physical interface (p1p1 in this example), the VF gets an IP address automatically; otherwise, users can assign an IP address to the interface the same way as for a standard network interface.

To verify network connectivity through a VF, users can set up two compute hosts and create a VM on each node. After obtaining IP addresses, the VMs should communicate with each other as over a normal network.
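
For example, with no DHCP server on the physical network, hypothetical addresses can be assigned to the VF interface (ens5 in the example above) inside each VM and connectivity checked with ping:

# VM on compute host 1
ip addr add 192.168.50.11/24 dev ens5
ip link set dev ens5 up
# VM on compute host 2
ip addr add 192.168.50.12/24 dev ens5
ip link set dev ens5 up
ping -c 3 192.168.50.11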

6.2 Using OpenDaylight

This section describes how to download, install, and set up an OpenDaylight controller.

6.2.1 Preparing the OpenDaylight Controller

1 Download the pre-built OpenDaylight Helium-SR1 distribution

wget http://nexus.opendaylight.org/content/repositories/opendaylight.release/org/opendaylight/integration/distribution-karaf/0.2.1-Helium-SR1/distribution-karaf-0.2.1-Helium-SR1.tar.gz

2 Extract the archive and cd into it

tar xf distribution-karaf-0.2.1-Helium-SR1.tar.gz
cd distribution-karaf-0.2.1-Helium-SR1

3 Use the bin/karaf executable to start the Karaf shell


4 Install the required features

Note: Karaf might take a long time to start, or feature installation might fail, if the host does not have network access. You'll need to set up the appropriate proxy settings.
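
The exact feature set depends on the intended use. The following is shown only as an assumption of a set commonly used for OVSDB/OpenStack integration with Helium; check the OpenDaylight Helium release notes for the authoritative list:

feature:install odl-base-all odl-aaa-authn odl-restconf odl-nsf-all odl-adsal-northbound odl-mdsal-apidocs odl-ovsdb-openstack odl-ovsdb-northbound odl-dlux-core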

6.3 Border Network Gateway

This section describes how to install and run a Border Network Gateway (BNG) on a compute node that has been prepared as described in Section 5.1 and Section 5.3. The example interface names from those sections are also used here. For simplicity, the BNG uses the handle_none configuration mode, which makes it work as an L2 forwarding engine. The BNG is more complex than this; users who are interested in exploring more of its capabilities should read https://01.org/intel-data-plane-performance-demonstrators/quick-overview.

The setup to test the functionality of the vBNG follows


6.3.1 Installation and Configuration Inside the VM

1 Execute the following command

yum -y update

2 Disable SELinux

setenforce 0
vi /etc/selinux/config

and change it so that SELINUX=disabled

3 Disable the firewall

systemctl disable firewalld.service
reboot

4 Edit the grub default configuration

vi /etc/default/grub

Add hugepages to it

... noirqbalance intel_idle.max_cstate=0 processor.max_cstate=0 ipv6.disable=1 default_hugepagesz=1G hugepagesz=1G hugepages=2 isolcpus=1,2,3,4

5 Rebuild grub config and reboot the system

grub2-mkconfig -o /boot/grub2/grub.cfg
reboot

6 Verify that hugepages are available in the VM

cat /proc/meminfo
HugePages_Total:    2
HugePages_Free:     2
Hugepagesize:       1048576 kB
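
If the 1 GB pages are not yet mounted inside the VM, a hugetlbfs mount point can be created before building or running DPDK applications. A minimal sketch (the /mnt/huge mount point is an assumption; any hugetlbfs mount works):

mkdir -p /mnt/huge
mount -t hugetlbfs nodev /mnt/huge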

7 Add the following to the end of ~bashrc file

---------------------------------------------
export RTE_SDK=/root/dpdk
export RTE_TARGET=x86_64-native-linuxapp-gcc
export OVS_DIR=/root/ovs

export RTE_UNBIND=$RTE_SDK/tools/dpdk_nic_bind.py
export DPDK_DIR=$RTE_SDK
export DPDK_BUILD=$DPDK_DIR/$RTE_TARGET
---------------------------------------------

8 Re-login or source that file

bashrc

9 Install DPDK

git clone http://dpdk.org/git/dpdk
cd dpdk
git checkout v1.7.1
make install T=$RTE_TARGET
modprobe uio
insmod $RTE_SDK/$RTE_TARGET/kmod/igb_uio.ko


10 Check the PCI addresses of the 82599 cards

lspci | grep Network
00:04.0 Ethernet controller: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection (rev 01)
00:05.0 Ethernet controller: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection (rev 01)
00:06.0 Ethernet controller: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection (rev 01)
00:07.0 Ethernet controller: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection (rev 01)

11 Make sure that the correct PCI addresses are listed in the script bind_to_igb_uio.sh
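
As a cross-check, the equivalent manual binding with the DPDK tool would look like the following, using the first two PCI addresses reported by lspci in the previous step (adjust to the ports actually used by the BNG):

$RTE_SDK/tools/dpdk_nic_bind.py --bind=igb_uio 00:04.0 00:05.0
$RTE_SDK/tools/dpdk_nic_bind.py --status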

12 Download BNG packages

wget https://01.org/sites/default/files/downloads/intel-data-plane-performance-demonstrators/dppd-bng-v013.zip

13 Extract DPPD BNG sources

unzip dppd-bng-v013.zip

14 Build the BNG DPPD application

yum -y install ncurses-devel
cd dppd-BNG-v013
make

15 Refer to Section 6.3.3, "Extra Preparations on the Compute Node," before running the BNG application in the VM inside the compute node

16 Make sure that the application starts

./build/dppd -f config/handle_none.cfg

The handle_none configuration passes all traffic straight through between ports, which is essentially similar to the L2 forwarding test. The config directory contains additional, more complex BNG configurations and Pktgen scripts. Additional BNG-specific workloads can be found in the dppd-BNG-v013/pktgen-scripts directory.

Following is a sample graphic of the BNG running in a VM with 2 ports


Exit the application by pressing ESC or CTRL-C

Refer to Section 632 regarding installation and running the software traffic generator

For the sanity check test, users can use the Pktgen wrapper script onps_pktgen-64bytes-UDP-2ports.sh to run Pktgen (on its dedicated server) and test the handle-none throughput for two physical and two virtual ports. You'll need to update the PKTGEN_DIR variable at the top of the file to point to the right directory, which, referring to Section 6.3.2, is the following

PKTGEN_DIR=/home/stack/git/Pktgen-DPDK
./pktgen-64bytes.sh

6.3.2 Installation and Configuration of the Back-to-Back Host (Packet Generator)

The back-to-back host can be any Intel® Xeon® processor-based system, or it can be any compute node that has been prepared using the instructions in Section 5.1 and Section 5.3. For simplicity, Intel assumes the latter was the case. Also assume that the git directory for the stack user is /home/stack/git.

1 In the git directory get the source from Github

git clone https://github.com/Pktgen/Pktgen-DPDK.git
cd Pktgen-DPDK

2 An extra package must be installed for Pktgen to compile correctly

yum -y install libpcap-devel

Pktgen comes with its own distribution of DPDK sources This bundled version of DPDK must be used Note that it contains some WindRiver specific helper libraries that are not in the default DPDK distribution which Pktgen depends on

3 The $RTE_TARGET variable must be set to a specific value Otherwise these libraries will not build

cd
vi .bashrc

Add the following three lines to the end

export RTE_SDK=$HOME/Pktgen-DPDK/dpdk
export RTE_TARGET=x86_64-pktgen-linuxapp-gcc
export PKTGEN_DIR=$HOME/Pktgen-DPDK

4 Re-login or execute the following command

bashrc

5 Build the basic DPDK libraries and extra helpers

cd $RTE_SDK
make install T=$RTE_TARGET

6 Build Pktgen

cd examples/pktgen
make

7 Adapt the dpdk_nic_bind.py invocation according to the actual NICs in use so that both interfaces are bound to igb_uio and DPDK can use them. Check the current binding with the following command

./tools/dpdk_nic_bind.py --status

8 Use onps_pktgen-64-bytes-UDP-2ports.sh from onps_server_1_2.tar.gz


9 Now run the script as root after the Compute node has been setup as in Section 633 the VM of the BNG has been prepared as in Section 631 inside the VM and the BNG has been run inside the VM

6.3.3 Extra Preparations on the Compute Node

1 Do the following as the stack user

cd /home/stack/devstack
vi local.conf

2 Comment out the following

PHYSICAL_NETWORK=physnet1
OVS_PHYSICAL_BRIDGE=br-p1p1

And at the same time add the following line right below the previous commented ones

OVS_BRIDGE_MAPPINGS=default:br-p1p1,physnet1:br-p1p2

3 Run again as stack user

./unstack.sh

./stack.sh

This causes both physical interfaces to come up and get bound to the DPDK Also a bridge is created on top of each of these interfaces

ovs-vsctl show
b52bd3ed-0f6c-45b9-ace1-846d901bed64
    Bridge "br-p1p1"
        Port "br-p1p1"
            Interface "br-p1p1"
                type: internal
        Port "p1p1"
            Interface "p1p1"
                type: dpdkphy
                options: {port="0"}
        Port "phy-br-p1p1"
            Interface "phy-br-p1p1"
                type: patch
                options: {peer="int-br-p1p1"}
    Bridge br-int
        fail_mode: secure
        Port "int-br-p1p2"
            Interface "int-br-p1p2"
                type: patch
                options: {peer="phy-br-p1p2"}
        Port "int-br-p1p1"
            Interface "int-br-p1p1"
                type: patch
                options: {peer="phy-br-p1p1"}
        Port br-int
            Interface br-int
                type: internal
    Bridge "br-p1p2"
        Port "phy-br-p1p2"
            Interface "phy-br-p1p2"
                type: patch
                options: {peer="int-br-p1p2"}
        Port "p1p2"
            Interface "p1p2"
                type: dpdkphy
                options: {port="1"}
        Port "br-p1p2"
            Interface "br-p1p2"
                type: internal

4 Move the p1p2 physical port under the same bridge as p1p1

ovs-vsctl del-port p1p2
ovs-vsctl add-port br-p1p1 p1p2 -- set Interface p1p2 type=dpdkphy option:port=1

5 Delete the OpenStack agent

./rejoin-stack.sh
ctrl-a 1
ctrl-c
ctrl-a d

6 Add the dpdkvhost interfaces for the VM

ovs-vsctl --no-wait add-port br-p1p1 port3 -- set Interface port3 type=dpdkvhost ofport_request=3
ovs-vsctl --no-wait add-port br-p1p1 port4 -- set Interface port4 type=dpdkvhost ofport_request=4

7 Find out the port number of the obstructed interfaces

ovs-ofctl show br-p1p1

The output should be similar to the following. Note the number on the left of each interface, because it is the OpenFlow port number used in the next step.

OFPT_FEATURES_REPLY (xid=0x2): dpid:0000286031010000
n_tables:254, n_buffers:256
capabilities: FLOW_STATS TABLE_STATS QUEUE_STATS ARP_MATCH_IP
actions: OUTPUT SET_VLAN_VID SET_VLAN_PCP STRIP_VLAN SET_DL_SRC SET_DL_DST SET_NW_SRC SET_NW_DST SET_TP_SRC SET_TP_DST
 1(phy-br-p1p1): addr:9e:ae:92:25:3c:c1, config: 0, state: 0, speed: 0 Mbps now, 0 Mbps max
 2(p1p2): addr:9e:ae:92:25:3c:c1, config: 0, state: 0, speed: 0 Mbps now, 0 Mbps max
 3(port3): addr:9e:ae:92:25:3c:c1, config: 0, state: 0, speed: 0 Mbps now, 0 Mbps max
 4(port4): addr:49:04:ff:7f:00:00, config: 0, state: 0, speed: 0 Mbps now, 0 Mbps max
 16(p1p1): addr:49:04:ff:7f:00:00, config: 0, state: 0, speed: 0 Mbps now, 0 Mbps max
 LOCAL(br-p1p1): addr:9e:ae:92:25:3c:c1, config: 0, state: 0, speed: 0 Mbps now, 0 Mbps max
OFPT_GET_CONFIG_REPLY (xid=0x4): frags=normal miss_send_len=0

8 Clean up the flow table of the bridge

ovs-ofctl del-flows br-p1p1

9 Program the flows so each physical interface forwards the packets to a dpdkvhost interface and the other way round

ovs-ofctl add-flow br-p1p1 in_port=16,dl_type=0x0800,idle_timeout=0,action=output:3
ovs-ofctl add-flow br-p1p1 in_port=3,dl_type=0x0800,idle_timeout=0,action=output:16
ovs-ofctl add-flow br-p1p1 in_port=4,dl_type=0x0800,idle_timeout=0,action=output:2
ovs-ofctl add-flow br-p1p1 in_port=2,dl_type=0x0800,idle_timeout=0,action=output:4
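
The programmed flows can be verified before starting traffic:

ovs-ofctl dump-flows br-p1p1

Each of the four entries should appear with the in_port and output action shown above, and their packet counters should increase once the packet generator is running.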


10 Users can now spawn their vBNG

qemu-kvm -cpu host -enable-kvm -m 4096 -smp 4,cores=4,threads=1,sockets=1 -name VM1 \
    -hda <path to the VM image file> -mem-path /dev/hugepages -mem-prealloc -vnc :2 -daemonize \
    -net nic,model=virtio,macaddr=00:1e:77:68:09:fd \
    -net tap,ifname=tap1,script=no,downscript=no \
    -netdev type=tap,id=net1,script=no,downscript=no,ifname=port3,vhost=on \
    -device virtio-net-pci,netdev=net1,mac=00:00:01:00:00:01,csum=off,gso=off,guest_tso4=off,guest_tso6=off,guest_ecn=off \
    -netdev type=tap,id=net2,script=no,downscript=no,ifname=port4,vhost=on \
    -device virtio-net-pci,netdev=net2,mac=00:00:01:00:00:02,csum=off,gso=off,guest_tso4=off,guest_tso6=off,guest_ecn=off


Appendix A Additional OpenDaylight Information

This section describes how OpenDaylight can be used in a multi-node setup. Two hosts are used: one runs OpenDaylight, the OpenStack controller + compute services, and OVS; the second host is a compute node. This section describes how to create a VXLAN tunnel, create VMs, and ping from one VM to another.

Note: Due to a known defect in ODL (https://bugs.opendaylight.org/show_bug.cgi?id=2469), the multi-node setup could not be verified.

Following is a sample local.conf for the OpenDaylight host:

[[local|localrc]]
FORCE=yes

HOST_NAME=<name of this machine>
HOST_IP=<ip of this machine>
HOST_IP_IFACE=<mgmt ip isolated from internet>

PUBLIC_INTERFACE=<isolated IP could be same as HOST_IP_IFACE>
VLAN_INTERFACE=
FLAT_INTERFACE=

ADMIN_PASSWORD=password
MYSQL_PASSWORD=password
DATABASE_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=no-token-password
HORIZON_PASSWORD=password
RABBIT_PASSWORD=password

disable_service n-net
disable_service n-cpu

enable_service q-svc
enable_service q-agt
enable_service q-dhcp
enable_service q-l3
enable_service q-meta
enable_service neutron
enable_service horizon

LOGFILE=$DEST/stack.sh.log
SCREEN_LOGDIR=$DEST/screen
SYSLOG=True
LOGDAYS=1

# ODL start
Q_HOST=$HOST_IP
enable_service odl-server
enable_service odl-compute
ODL_MGR_IP=1011107
ENABLED_SERVICES+=,n-api,n-crt,n-obj,n-cpu,n-cond,n-sch,n-novnc,n-cauth,n-cauth,nova
ENABLED_SERVICES+=,cinder,c-api,c-vol,c-sch,c-bak

Q_PLUGIN=ml2

Q_AGENT=openvswitch
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch,opendaylight
Q_ML2_PLUGIN_TYPE_DRIVERS=vlan,flat,local
Q_ML2_TENANT_NETWORK_TYPE=vxlan

ENABLE_TENANT_TUNNELS=True
ENABLE_TENANT_VLANS=True

PHYSICAL_NETWORK=physnet1
ML2_VLAN_RANGES=physnet1:1000:1010
OVS_PHYSICAL_BRIDGE=br-

MULTI_HOST=True

[[post-config|$NOVA_CONF]]
# disable nova security groups
[DEFAULT]
firewall_driver=nova.virt.firewall.NoopFirewallDriver
novncproxy_host=0.0.0.0
novncproxy_port=6080

Here is a sample local.conf for the compute node:

[[local|localrc]]

FORCE=yes
MULTI_HOST=True

HOST_NAME=<name of this machine>
HOST_IP=<ip of this machine>
HOST_IP_IFACE=<isolated interface>
SERVICE_HOST_NAME=<name of the controller machine>
SERVICE_HOST=<ip of controller machine>
Q_HOST=$SERVICE_HOST

MYSQL_HOST=$SERVICE_HOST
RABBIT_HOST=$SERVICE_HOST

GLANCE_HOST=$SERVICE_HOST
GLANCE_HOSTPORT=$SERVICE_HOST:9292
KEYSTONE_AUTH_HOST=$SERVICE_HOST
KEYSTONE_SERVICE_HOST=<ip of controller machine>

ADMIN_PASSWORD=password
MYSQL_PASSWORD=password
DATABASE_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=no-token-password
HORIZON_PASSWORD=password
RABBIT_PASSWORD=password

disable_all_services

enable_service rabbit
enable_service n-cpu
enable_service q-agt
enable_service odl-compute

DEST=/opt/stack
LOGFILE=$DEST/stack.sh.log
SCREEN_LOGDIR=$DEST/screen
SYSLOG=True
LOGDAYS=1

ODL_MGR_IP=<ip of controller machine>

Q_PLUGIN=ml2
Q_AGENT=openvswitch
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch,opendaylight
Q_ML2_PLUGIN_TYPE_DRIVERS=vxlan
OVS_NUM_HUGEPAGES=8192
OVS_DATAPATH_TYPE=netdev

OVDK_OVS_GIT_TAG=

ENABLE_TENANT_TUNNELS=True
ENABLE_TENANT_VLANS=True

Q_ML2_TENANT_NETWORK_TYPE=vxlan
ML2_VLAN_RANGES=physnet1:1000:1010
PHYSICAL_NETWORK=physnet1
OVS_PHYSICAL_BRIDGE=br-p1p1

[[post-config|$NOVA_CONF]]
[DEFAULT]
firewall_driver=nova.virt.firewall.NoopFirewallDriver
vnc_enabled=True
vncserver_listen=0.0.0.0
vncserver_proxyclient_address=$HOST_IP

A1 Create VMs using DevStack Horizon GUI

After starting the OpenDaylight controller as described in Section 6.1, run stack.sh on the controller and compute nodes.

Log in to http://<control node ip address>:8080 to start the Horizon GUI.

Verify that the node shows up in the following GUI

Create a new Vxlan network

1 Click on the Networks tab

2 Click on the Create Network button

3 Enter the Network name then click Next


4 Enter the subnet information then click Next


5 Add additional information then click Next

6 Click the Create button

7 Create a VM instance by clicking the Launch Instances button


8 Click on the Details tab to enter VM details


9 Click on the Networking tab then enter network information

VMS will now be created

Instead of disabling the OVSDB neutron service by removing the OVSDB neutron bundle file, it is possible to disable the bundle from the OSGi console. However, there does not appear to be a way to make this persistent, so it must be done each time the controller restarts.


Once the controller is up and running, connect to the OSGi console. The ss command displays all of the bundles that are installed and their status; adding a string (or strings) filters the list of bundles. List the OVSDB bundles:

osgi> ss ovs
Framework is launched

id    State     Bundle
106   ACTIVE    org.opendaylight.ovsdb.northbound_0.5.0
112   ACTIVE    org.opendaylight.ovsdb_0.5.0
262   ACTIVE    org.opendaylight.ovsdb.neutron_0.5.0

Note that there are three OVSDB bundles running (the OVSDB neutron bundle has not been removed in this case)

Disable the OVSDB neutron bundle and then list the OVSDB bundles again

osgi> stop 262
osgi> ss ovs
Framework is launched

id    State      Bundle
106   ACTIVE     org.opendaylight.ovsdb.northbound_0.5.0
112   ACTIVE     org.opendaylight.ovsdb_0.5.0
262   RESOLVED   org.opendaylight.ovsdb.neutron_0.5.0

Now the OVSDB neutron bundle is in the RESOLVED state which means that it is not active


Appendix B BNG as an Appliance

Please download the latest BNG application from https://01.org/intel-data-plane-performance-demonstrators/downloads. More details about how the BNG works can be found at https://01.org/intel-data-plane-performance-demonstrators/quick-overview.


Appendix C Glossary

Acronym Description

ATR Application Targeted Routing

COTS Commercial Off‐The-Shelf

DPI Deep Packet Inspection

FCS Frame Check Sequence

GRE Generic Routing Encapsulation

GRO Generic Receive Offload

IOMMU InputOutput Memory Management Unit

Kpps Kilo packets per second

KVM Kernel-based Virtual Machine

LRO Large Receive Offload

MSI Message Signaling Interrupt

MPLS Multi-protocol Label Switching

Mpps Million packets per second

NIC Network Interface Card

pps Packets per second

QAT Quick Assist Technology

QinQ VLAN stacking (8021ad)

RA Reference Architecture

RSC Receive Side Coalescing

RSS Receive Side Scaling

SP Service Provider

SR-IOV Single root IO Virtualization

TCO Total Cost of Ownership

TSO TCP Segmentation Offload


Appendix D References

Document Name Source

Internet Protocol version 4: http://www.ietf.org/rfc/rfc791.txt

Internet Protocol version 6: http://www.faqs.org/rfc/rfc2460.txt

Intel® 82599 10 Gigabit Ethernet Controller Datasheet: http://www.intel.com/content/www/us/en/ethernet-controllers/82599-10-gbe-controller-datasheet.html

Intel DDIO: https://www-ssl.intel.com/content/www/us/en/io/direct-data-i-o.html

Bandwidth Sharing Fairness: http://www.intel.com/content/www/us/en/network-adapters/10-gigabit-network-adapters/10-gbe-ethernet-flexible-port-partitioning-brief.html

Design Considerations for efficient network applications with Intel® multi-core processor-based systems on Linux: http://download.intel.com/design/intarch/papers/324176.pdf

OpenFlow with Intel 82599: http://ftp.sunet.se/pub/Linux/distributions/bifrost/seminars/workshop-2011-03-31/Openflow_1103031.pdf

Wu, W., DeMar, P., & Crawford, M. (2012). A Transport-Friendly NIC for Multicore/Multiprocessor Systems. IEEE Transactions on Parallel and Distributed Systems, vol. 23, no. 4, April 2012: http://lss.fnal.gov/archive/2010/pub/fermilab-pub-10-327-cd.pdf

Why does Flow Director Cause Packet Reordering?: http://arxiv.org/ftp/arxiv/papers/1106/1106.0443.pdf

IA packet processing: http://www.intel.com/p/en_US/embedded/hwsw/technology/packet-processing

High Performance Packet Processing on Cloud Platforms using Linux with Intel® Architecture: http://networkbuilders.intel.com/docs/network_builders_RA_packet_processing.pdf

Packet Processing Performance of Virtualized Platforms with Linux and Intel® Architecture: http://networkbuilders.intel.com/docs/network_builders_RA_NFV.pdf

DPDK: http://www.intel.com/go/dpdk

Intel® DPDK Accelerated vSwitch: https://01.org/packet-processing


LEGAL

By using this document in addition to any agreements you have with Intel you accept the terms set forth below

You may not use or facilitate the use of this document in connection with any infringement or other legal analysis concerning Intel products described herein You agree to grant Intel a non-exclusive royalty-free license to any patent claim thereafter drafted which includes subject matter disclosed herein

INFORMATION IN THIS DOCUMENT IS PROVIDED IN CONNECTION WITH INTEL PRODUCTS NO LICENSE EXPRESS OR IMPLIED BY ESTOPPEL OR OTHERWISE TO ANY INTELLECTUAL PROPERTY RIGHTS IS GRANTED BY THIS DOCUMENT EXCEPT AS PROVIDED IN INTELS TERMS AND CONDITIONS OF SALE FOR SUCH PRODUCTS INTEL ASSUMES NO LIABILITY WHATSOEVER AND INTEL DISCLAIMS ANY EXPRESS OR IMPLIED WARRANTY RELATING TO SALE ANDOR USE OF INTEL PRODUCTS INCLUDING LIABILITY OR WARRANTIES RELATING TO FITNESS FOR A PARTICULAR PURPOSE MERCHANTABILITY OR INFRINGEMENT OF ANY PATENT COPYRIGHT OR OTHER INTELLECTUAL PROPERTY RIGHT

Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors

Performance tests such as SYSmark and MobileMark are measured using specific computer systems components software operations and functions Any change to any of those factors may cause the results to vary You should consult other information and performance tests to assist you in fully evaluating your contemplated purchases including the performance of that product when combined with other products

The products described in this document may contain design defects or errors known as errata which may cause the product to deviate from published specifications Current characterized errata are available on request Contact your local Intel sales office or your distributor to obtain the latest specifications and before placing your product order

Intel technologies may require enabled hardware specific software or services activation Check with your system manufacturer or retailer Tests document performance of components on a particular test in specific systems Differences in hardware software or configuration will affect actual performance Consult other sources of information to evaluate performance as you consider your purchase For more complete information about performance and benchmark results visit httpwwwintelcomperformance

All products computer systems dates and figures specified are preliminary based on current expectations and are subject to change without notice Results have been estimated or simulated using internal Intel analysis or architecture simulation or modeling and provided to you for informational purposes Any differences in your system hardware software or configuration may affect your actual performance

No computer system can be absolutely secure Intel does not assume any liability for lost or stolen data or systems or any damages resulting from such losses

Intel does not control or audit third-party web sites referenced in this document You should visit the referenced web site and confirm whether referenced data are accurate

Intel Corporation may have patents or pending patent applications trademarks copyrights or other intellectual property rights that relate to the presented subject matter The furnishing of documents and other materials and information does not provide any license express or implied by estoppel or otherwise to any such patents trademarks copyrights or other intellectual property rights

copy 2014 Intel Corporation All rights reserved Intel the Intel logo Core Xeon and others are trademarks of Intel Corporation in the US andor other countries Other names and brands may be claimed as the property of others



5.2 Controller Node Setup

This section describes the controller node setup. It is assumed that the user successfully followed the operating system installation and configuration sections.

Note: Make sure to download and use the onps_server_1_2.tar.gz tarball. Start with the README file. You'll get instructions on how to use Intel's scripts to automate most of the installation steps described in this section, which saves you time.

5.2.1 OpenStack (Juno)

This section documents features and limitations that are supported with the Intel® DPDK Accelerated vSwitch and OpenStack Juno.

5.2.1.1 Network Requirements

General

At least two networks are required to build the OpenStack infrastructure in a lab environment: one network is used to connect all nodes for OpenStack management (the management network), and the other is a private network exclusively for OpenStack internal connections (the tenant network) between instances (or virtual machines).

One additional network is required for Internet connectivity, as installing OpenStack requires pulling packages from various sources/repositories on the Internet.

Some users might want to have Internet and/or external connectivity for OpenStack instances (virtual machines). In this case, an optional network can be used.

The assumption is that the target OpenStack infrastructure contains multiple nodes: one is the controller node and one or more are compute node(s).

Network Configuration Example

The following is an example of how to configure networks for the OpenStack infrastructure. The example uses four network interfaces as follows:

• ens2f1: Internet network. Used to pull all necessary packages/patches from repositories on the Internet; configured to obtain a DHCP address.

• ens2f0: Management network. Used to connect all nodes for OpenStack management; configured to use network 10.11.0.0/16.

• p1p1: Tenant network. Used for OpenStack internal connections for virtual machines; configured with no IP address.

• p1p2: Optional external network. Used for virtual machine Internet/external connectivity; configured with no IP address. This interface is only needed in the controller node if the external network is configured; for a compute node, this interface is not needed.

Note that among these interfaces, the interface for the virtual network (in this example p1p1) must be an 82599 port, because it is used for DPDK and the Intel® DPDK Accelerated vSwitch. Also note that a static IP address should be used for the interface of the management network.

In Fedora 20 the network configuration files are located at

/etc/sysconfig/network-scripts


To configure a network on the host system, edit the following network configuration files:

ifcfg-ens2f1:
DEVICE=ens2f1
TYPE=Ethernet
ONBOOT=yes
BOOTPROTO=dhcp

ifcfg-ens2f0:
DEVICE=ens2f0
TYPE=Ethernet
ONBOOT=yes
BOOTPROTO=static
IPADDR=10.11.12.11
NETMASK=255.255.0.0

ifcfg-p1p1:
DEVICE=p1p1
TYPE=Ethernet
ONBOOT=yes
BOOTPROTO=none

ifcfg-p1p2:
DEVICE=p1p2
TYPE=Ethernet
ONBOOT=yes
BOOTPROTO=none

Note: Do not configure the IP address for p1p1 (the 10 Gb/s interface), otherwise DPDK does not work when binding the driver during the OpenStack Neutron installation.

Note: 10.11.12.11 and 255.255.0.0 are a static IP address and netmask on the management network. It is necessary to have a static IP address on this subnet. The IP address 10.11.12.11 is just an example.

5.2.1.2 Storage Requirements

By default, DevStack uses block storage (Cinder) with a volume group named stack-volumes. If not specified, stack-volumes is created with 10 GB of space from a local file system. Note that stack-volumes is the name of the volume group, not more than one volume.

The following example shows how to use spare local disks /dev/sdb and /dev/sdc to form stack-volumes on a controller node by running the following commands:

pvcreate /dev/sdb
pvcreate /dev/sdc
vgcreate stack-volumes /dev/sdb /dev/sdc
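Before running DevStack, it can be worth confirming that the volume group was created with the expected capacity; the standard LVM query commands are enough (an optional check, not part of the original procedure):

pvs
vgs stack-volumes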

5.2.1.3 OpenStack Installation Procedures

General

DevStack is used to deploy OpenStack in this example. The following procedure uses an actual example of an installation performed in an Intel test lab, consisting of one controller node (controller) and one compute node (compute).

Controller Node Installation Procedures

The following example uses a host for controller node installation with the following:

• Hostname: sdnlab-k01

• Internet network IP address: obtained from DHCP server

• OpenStack management IP address: 10.11.12.1

• User/password: stack/stack

Root User Actions

Login as su or root user and perform the following

1. Add the stack user to the sudoer list:

echo "stack ALL=(ALL) NOPASSWD: ALL" >> /etc/sudoers

2. Edit /etc/libvirt/qemu.conf; add or modify the following lines:

cgroup_controllers = [ "cpu", "devices", "memory", "blkio", "cpuset", "cpuacct" ]

cgroup_device_acl = [
"/dev/null", "/dev/full", "/dev/zero",
"/dev/random", "/dev/urandom",
"/dev/ptmx", "/dev/kvm", "/dev/kqemu",
"/dev/rtc", "/dev/hpet", "/dev/net/tun",
"/mnt/huge", "/dev/vhost-net"]

hugetlbfs_mount = "/mnt/huge"

3. Restart the libvirt service and make sure libvirtd is active:

systemctl restart libvirtd.service
systemctl status libvirtd.service

Stack User Actions

1 Login as a stack user

2. Configure the appropriate proxies (yum, http, https, and git) for package installation, and make sure these proxies are functional. Note that on the controller node, localhost and its IP address should be included in the no_proxy setup (for example, export no_proxy=localhost,10.11.12.1). A sketch follows.
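The exact proxy values depend on the lab environment. The following is only a sketch, assuming a proxy at proxy.example.com:911 (a placeholder host and port, not from this document):

export http_proxy=http://proxy.example.com:911/
export https_proxy=http://proxy.example.com:911/
export no_proxy=localhost,10.11.12.1
git config --global http.proxy http://proxy.example.com:911/
# yum reads its proxy from /etc/yum.conf, for example a line such as: proxy=http://proxy.example.com:911/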

3. Intel® DPDK Accelerated vSwitch patches for OpenStack.

The file openstack_ovdkl02-907.zip contains the necessary patches for OpenStack; currently they are not native to OpenStack. The file can be downloaded from:

https://01.org/sites/default/files/page/openstack_ovdkl02-907.zip

Place the file in the /home/stack directory and unzip it. Three patch files, devstack.patch, nova.patch, and neutron.patch, will be present after unzipping:

cd /home/stack
wget https://01.org/sites/default/files/page/openstack_ovdkl02-907.zip
unzip openstack_ovdkl02-907.zip

4 Download DevStack source

git clone https://github.com/openstack-dev/devstack.git

5. Check out DevStack with the Intel® DPDK Accelerated vSwitch and patch it:

cd /home/stack/devstack
git checkout d6f700db33aeab68916156a98971aef8cfa53a2e
patch -p1 < /home/stack/devstack.patch


6 Download and patch Nova and Neutron

sudo mkdir /opt/stack
sudo chown stack:stack /opt/stack
cd /opt/stack
git clone https://github.com/openstack/nova.git
git clone https://github.com/openstack/neutron.git
cd /opt/stack/nova
git checkout b7738bfb6c2f271d047e8f20c0b74ef647367111
patch -p1 < /home/stack/nova.patch

7. Create the local.conf file in /home/stack/devstack.

8. Pay attention to the following in the local.conf file:

a. Use Rabbit for messaging services (Rabbit is on by default). In the past, Fedora only supported QPID for OpenStack; now it only supports Rabbit.

b. Explicitly disable the Nova compute service on the controller, because by default the Nova compute service is enabled:

disable_service n-cpu

c To use Open vSwitch specify in configuration for ML2 plug-in

Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch

d. Explicitly disable tenant tunneling and enable tenant VLANs, because by default tunneling is used:

ENABLE_TENANT_TUNNELS=False
ENABLE_TENANT_VLANS=True

e. A sample local.conf file for the controller node is as follows:

# Controller node
[[local|localrc]]

FORCE=yes
ADMIN_PASSWORD=password
MYSQL_PASSWORD=password
DATABASE_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=no-token-password
HORIZON_PASSWORD=password
RABBIT_PASSWORD=password

disable_service n-net
disable_service n-cpu
enable_service q-svc
enable_service q-agt
enable_service q-dhcp
enable_service q-l3
enable_service q-meta
enable_service neutron
enable_service horizon

DEST=/opt/stack
LOGFILE=$DEST/stack.sh.log
SCREEN_LOGDIR=$DEST/screen
SYSLOG=True
LOGDAYS=1

HOST_IP_IFACE=ens2f0
PUBLIC_INTERFACE=p1p2
VLAN_INTERFACE=p1p1
FLAT_INTERFACE=p1p1

Q_AGENT=openvswitch
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch
Q_ML2_PLUGIN_TYPE_DRIVERS=vlan,flat,local

ENABLE_TENANT_TUNNELS=False
ENABLE_TENANT_VLANS=True
PHYSICAL_NETWORK=physnet1
ML2_VLAN_RANGES=physnet1:1000:1010
OVS_PHYSICAL_BRIDGE=br-p1p1
MULTI_HOST=True

[[post-config|$NOVA_CONF]]
[DEFAULT]
firewall_driver=nova.virt.firewall.NoopFirewallDriver
novncproxy_host=0.0.0.0
novncproxy_port=6080

9. Install DevStack:

cd /home/stack/devstack
./stack.sh

10 For a successful installation the following shows at the end of screen output

stack.sh completed in XXX seconds

where XXX is the number of seconds

11. For the controller node only, add physical port(s) to the bridge(s) created by the DevStack installation. The following example can be used to configure the two bridges br-p1p1 (for the virtual network) and br-ex (for the external network):

sudo ovs-vsctl add-port br-p1p1 p1p1
sudo ovs-vsctl add-port br-ex p1p2

12. Make sure proper VLANs are created in the switch connecting physical port p1p1. For example, the previous local.conf specifies a VLAN range of 1000-1010, therefore matching VLANs 1000 to 1010 should be configured in the switch.


5.3 Compute Node Setup

This section describes how to complete the setup of the compute nodes. It is assumed that the user has successfully completed the BIOS settings and operating system installation and configuration sections.

Note: Please make sure to download and use the onps_server_1_2.tar.gz tarball. Start with the README file. You'll get instructions on how to use Intel's scripts to automate most of the installation steps described in this section, which saves you time.

5.3.1 Host Configuration

5.3.1.1 Using DevStack to Deploy vSwitch and OpenStack Components

General

Deploying OpenStack and the Intel® DPDK Accelerated vSwitch using DevStack on a compute node follows the same procedures as on the controller node. Differences include:

• Required services are nova compute, neutron agent, and Rabbit.

• Intel® DPDK Accelerated vSwitch is used in place of Open vSwitch for the neutron agent.

Compute Node Installation Example

The following example uses a host for compute node installation with the following:

• Hostname: sdnlab-k02

• Lab network IP address: obtained from DHCP server

• OpenStack management IP address: 10.11.12.2

• User/password: stack/stack

Note the following

• No_proxy setup: localhost and its IP address should be included in the no_proxy setup. In addition, the hostname and IP address of the controller node should also be included. For example:

export no_proxy=localhost,10.11.12.2,sdnlab-k01,10.11.12.1

• Differences in the local.conf file:

- The service host is the controller, as are other OpenStack servers such as MySQL, Rabbit, Keystone, and Image; therefore they should be spelled out. Using the controller node example in the previous section, the service host and its IP address should be:

SERVICE_HOST_NAME=sdnlab-k01
SERVICE_HOST=10.11.12.1

- The only OpenStack services required in compute nodes are messaging, nova compute, and neutron agent, so the local.conf might look like:

disable_all_services
enable_service rabbit
enable_service n-cpu
enable_service q-agt


- The user has the option to use ovdk or openvswitch for the neutron agent:

Q_AGENT=ovdk

or

Q_AGENT=openvswitch

Note: For openvswitch, the user can specify regular or accelerated Open vSwitch (accelerated OVS). If accelerated OVS is used, the following setup should be added:

OVS_DATAPATH_TYPE=netdev

Note: If both are specified in the same local.conf file, the later one overwrites the previous one.

- For the OVDK and accelerated OVS huge pages setting, specify the number of huge pages to be allocated and the mounting point (the default is /mnt/huge):

OVDK_NUM_HUGEPAGES=8192

or

OVS_NUM_HUGEPAGES=8192

- For this version, Intel uses specific versions of OVDK or accelerated OVS from their respective repositories. Specify the following in the local.conf file if OVDK or accelerated OVS is used:

OVDK_GIT_TAG=6210bb0a6139b20283de115f87aa7a381b04670f
OVS_GIT_TAG=b35839f3855e3b812709c6ad1c9278f498aa9935

- Binding the physical port to the bridge is done through the following line in local.conf. For example, to bind port p1p1 to bridge br-p1p1, use:

OVS_PHYSICAL_BRIDGE=br-p1p1

- A sample local.conf file for a compute node with the ovdk agent follows:

# Compute node
[[local|localrc]]

FORCE=yes
MULTI_HOST=True

HOST_NAME=$(hostname)
HOST_IP=10.11.12.2
HOST_IP_IFACE=ens2f0
SERVICE_HOST_NAME=10.11.12.1
SERVICE_HOST=10.11.12.1

MYSQL_HOST=$SERVICE_HOST
RABBIT_HOST=$SERVICE_HOST

GLANCE_HOST=$SERVICE_HOST
GLANCE_HOSTPORT=$SERVICE_HOST:9292
KEYSTONE_AUTH_HOST=$SERVICE_HOST
KEYSTONE_SERVICE_HOST=$SERVICE_HOST

ADMIN_PASSWORD=password
MYSQL_PASSWORD=password
DATABASE_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=no-token-password
HORIZON_PASSWORD=password
RABBIT_PASSWORD=password

disable_all_services
enable_service rabbit
enable_service n-cpu
enable_service q-agt

DEST=/opt/stack
LOGFILE=$DEST/stack.sh.log
SCREEN_LOGDIR=$DEST/screen
SYSLOG=True
LOGDAYS=1

Q_AGENT=ovdk
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch
Q_ML2_PLUGIN_TYPE_DRIVERS=vlan
OVDK_NUM_HUGEPAGES=8192
OVDK_GIT_TAG=6210bb0a6139b20283de115f87aa7a381b04670f

ENABLE_TENANT_TUNNELS=False
ENABLE_TENANT_VLANS=True
ML2_VLAN_RANGES=physnet1:1000:1010
PHYSICAL_NETWORK=physnet1
OVS_PHYSICAL_BRIDGE=br-p1p1

[[post-config|$NOVA_CONF]]
[DEFAULT]
firewall_driver=nova.virt.firewall.NoopFirewallDriver
vnc_enabled=True
vncserver_listen=0.0.0.0
vncserver_proxyclient_address=$HOST_IP

- A sample local.conf file for a compute node with the accelerated OVS agent follows:

# Compute node
[[local|localrc]]

FORCE=yes
MULTI_HOST=True

HOST_NAME=$(hostname)
HOST_IP=10.11.12.2
HOST_IP_IFACE=ens2f0

SERVICE_HOST_NAME=sdnlab-k01
SERVICE_HOST=10.11.12.1

MYSQL_HOST=$SERVICE_HOST
RABBIT_HOST=$SERVICE_HOST

GLANCE_HOST=$SERVICE_HOST
GLANCE_HOSTPORT=$SERVICE_HOST:9292
KEYSTONE_AUTH_HOST=$SERVICE_HOST
KEYSTONE_SERVICE_HOST=$SERVICE_HOST

ADMIN_PASSWORD=password
MYSQL_PASSWORD=password
DATABASE_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=no-token-password
HORIZON_PASSWORD=password
RABBIT_PASSWORD=password

disable_all_services
enable_service rabbit
enable_service n-cpu
enable_service q-agt

DEST=/opt/stack
LOGFILE=$DEST/stack.sh.log
SCREEN_LOGDIR=$DEST/screen
SYSLOG=True
LOGDAYS=1

Q_AGENT=openvswitch
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch
Q_ML2_PLUGIN_TYPE_DRIVERS=vlan
OVS_NUM_HUGEPAGES=8192
OVS_DATAPATH_TYPE=netdev
OVS_GIT_TAG=b35839f3855e3b812709c6ad1c9278f498aa9935

ENABLE_TENANT_TUNNELS=False
ENABLE_TENANT_VLANS=True
ML2_VLAN_RANGES=physnet1:1000:1010
PHYSICAL_NETWORK=physnet1
OVS_PHYSICAL_BRIDGE=br-p1p1

[[post-config|$NOVA_CONF]]
[DEFAULT]
firewall_driver=nova.virt.firewall.NoopFirewallDriver
vnc_enabled=True
vncserver_listen=0.0.0.0
vncserver_proxyclient_address=$HOST_IP

5.4 vIPS

The vIPS used is Suricata, which should be installed as an RPM package in a VM as previously described. To configure it to run in inline mode (IPS), use the following:

1 Turn on IP forwarding

sysctl -w netipv4ip_forward=1

2 Mangle all traffic from one vPort to the other using a netfilter queue

iptables -I FORWARD -i eth1 -o eth2 -j NFQUEUE
iptables -I FORWARD -i eth2 -o eth1 -j NFQUEUE
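To confirm that both rules are in place before starting Suricata, listing the FORWARD chain is a quick check (optional, not part of the original steps):

iptables -nvL FORWARD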

3 Have Suricata run in inline mode using the netfilter queue

suricata -c etcsuricatasuricatayaml -q 0

4 Enable ARP proxying

echo 1 > /proc/sys/net/ipv4/conf/eth1/proxy_arp
echo 1 > /proc/sys/net/ipv4/conf/eth2/proxy_arp

5.4.1 Network Configuration for non-vIPS Guests

1. Turn on IP forwarding:

sysctl -w netipv4ip_forward=1

2 In the source add the route to the sink

route add -net 192.168.200.0/24 eth1

3 At the sink add the route to the source

route add -net 192.168.100.0/24 eth1
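With IP forwarding, the NFQUEUE rules, and these routes in place, a simple end-to-end check is to ping the sink from the source while watching Suricata's console output. The address below is only a placeholder on the example sink subnet, not taken from this document:

ping -c 4 192.168.200.10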


6.0 Testing the Setup

This section describes how to bring up the VMs in a compute node, connect them to the virtual network(s), and verify the functionality.

Note: Currently it is not possible to have more than one virtual network in a multi-compute-node setup, although it is possible to have more than one virtual network in a single-compute-node setup.

6.1 Preparation with OpenStack

6.1.1 Deploying Virtual Machines

6.1.1.1 Default Settings

OpenStack comes with the following default settings:

• Tenant (Project): admin, demo

• Network:

- Private network (virtual network): 10.0.0.0/24

- Public network (external network): 172.24.4.0/24

• Image: cirros-0.3.1-x86_64

• Flavor: nano, micro, tiny, small, medium, large, xlarge

To deploy new instances (VMs) with different setups (such as a different VM image flavor or network) users must create their own See below for details of how to create them

To access the OpenStack dashboard, use a web browser (Firefox, Internet Explorer, or others) and the controller's IP address (management network). For example:

http://10.11.12.1

Login information is defined in the localconf file In the examples that follow password is the password for both admin and demo users


6.1.1.2 Customer Settings

The following examples describe how to create a custom VM image, flavor, and aggregate/availability zone using OpenStack commands. The examples assume the IP address of the controller is 10.11.12.1.

1. Create a credential file admin-cred for the admin user. The file contains the following lines:

export OS_USERNAME=admin
export OS_TENANT_NAME=admin
export OS_PASSWORD=password
export OS_AUTH_URL=http://10.11.12.1:35357/v2.0

2 Source admin-cred to the shell environment for actions of creating glance image aggregateavailability zone and flavor

source admin-cred

3 Create an OpenStack glance image A VM image file should be ready in a location accessible by OpenStack

glance image-create --name <image-name-to-create> --is-public=true --container-format=bare --disk-format=<format> --file=<image-file-path-name>

The following example assumes the image file fedora20-x86_64-basic.qcow2 is located in an NFS share mounted at /mnt/nfs/openstack/images on the controller host. The following command creates a glance image named fedora-basic in qcow2 format for public use (such that any tenant can use this glance image):

glance image-create --name fedora-basic --is-public=true --container-format=bare --disk-format=qcow2 --file=/mnt/nfs/openstack/images/fedora20-x86_64-basic.qcow2

4 Create host aggregate and availability zone

First find out the available hypervisors, and then use that information to create the aggregate/availability zone:

nova hypervisor-list
nova aggregate-create <aggregate-name> <zone-name>
nova aggregate-add-host <aggregate-name> <hypervisor-name>

The following example creates an aggregate named aggr-g06 with one availability zone named zone-g06 and the aggregate contains one hypervisor named sdnlab-g06

nova aggregate-create aggr-g06 zone-g06
nova aggregate-add-host aggr-g06 sdnlab-g06

5. Create a flavor. A flavor is a virtual hardware configuration for the VMs; it defines the number of virtual CPUs, the size of virtual memory, and disk space, among others.

The following command creates a flavor named onps-flavor with an ID of 1001, 1024 MB of virtual memory, 4 GB of virtual disk space, and 1 virtual CPU:

nova flavor-create onps-flavor 1001 1024 4 1


6.1.1.3 Example: VM Deployment

The following example describes how to use a custom VM image, flavor, and aggregate to launch a VM for the demo tenant using OpenStack commands. Again, the example assumes the IP address of the controller is 10.11.12.1.

1 Create a credential file demo-cred for a demo user The file contains the following lines

export OS_USERNAME=demo
export OS_TENANT_NAME=demo
export OS_PASSWORD=password
export OS_AUTH_URL=http://10.11.12.1:35357/v2.0

2 Source demo-cred to the shell environment for actions of creating tenant network and instance (VM)

source demo-cred

3 Create network for tenant demo Take the following steps

a Get tenant demo

keystone tenant-list | grep -Fw demo

The following creates a network with a name of net-demo for tenant with ID 10618268adb64f17b266fd8fb83c960d

neutron net-create --tenant-id 10618268adb64f17b266fd8fb83c960d net-demo

b Create subnet

neutron subnet-create --tenant-id <demo-tenant-id> --name <subnet_name> <network-name> <net-ip-range>

The following creates a subnet with a name of sub-demo and CIDR address 192.168.2.0/24 for network net-demo:

neutron subnet-create --tenant-id 10618268adb64f17b266fd8fb83c960d --name sub-demo net-demo 192.168.2.0/24

4 Create instance (VM) for tenant demo Take the following steps

a. Get the name and/or ID of the image, flavor, and availability zone to be used for creating the instance:

glance image-list
nova flavor-list
nova aggregate-list
neutron net-list

b Launch an instance (VM) using information obtained from previous step

nova boot --image <image-id> --flavor <flavor-id> --availability-zone <zone-name> --nic net-id=<network-id> <instance-name>

c The new VM should be up and running in a few minutes
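As a concrete illustration using the objects created earlier in this section (replace the network ID with the value returned by neutron net-list; the instance name demo-vm1 is only a placeholder, not from the original document):

nova boot --image fedora-basic --flavor onps-flavor --availability-zone zone-g06 --nic net-id=<net-demo-network-id> demo-vm1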

5. Log into the OpenStack dashboard using the demo user credential and click Instances under Project in the left pane; the new VM should show in the right pane. Click the instance name to open the Instance Details view, then click Console in the top menu to access the VM as follows.


6.1.1.4 Local vIPS

Configuration

1. OpenStack brings up the VMs and connects them to the vSwitch.

2. IP addresses of the VMs get configured using the DHCP server. VM1 belongs to one subnet and VM3 to a different one; VM2 has ports on both subnets.

3. Flows get programmed to the vSwitch by the OpenDaylight controller (Section 6.2).

Data Path (Numbers Matching Red Circles)

1 VM1 sends a flow to VM3 through the vSwitch

2 The vSwitch forwards the flow to the first vPort of VM2 (active IPS)

3 The IPS receives the flow inspects it and (if not malicious) sends it out through its second vPort

4 The vSwitch forwards it to VM3

Figure 6-1 Local vIPS


6.1.1.5 Remote vIPS

Configuration

1 OpenStack brings up the VMs and connects them to the vSwitch

2 The IP addresses of the VMs get configured using the DHCP server

Data Path (Numbers Matching Red Circles)

1 VM1 sends a flow to VM3 through the vSwitch inside compute node 1

2 The vSwitch forwards the flow out of the first 82599 port to the first 82599 port of compute node 2

3 The vSwitch of compute node 2 forwards the flow to the first port of the vHost where the traffic gets consumed by VM1

4 The IPS receives the flow inspects it and (provided it is not malicious) sends it out through its second port of the vHost into the vSwitch of compute node 2

5 The vSwitch forwards the flow out of the second 82599 port of compute node 2 into the second port of the 82599 in compute node 1

6 The vSwitch of compute node 1 forwards the flow into the port of the vHost of VM3 where the flow gets terminated

Figure 6-2 Remote vIPS


6.1.2 Non-uniform Memory Access (NUMA) Placement and SR-IOV Pass-through for OpenStack

NUMA placement was implemented as a new feature in the OpenStack Juno release. NUMA placement enables an OpenStack administrator to pin guest systems to particular NUMA nodes for optimization. With an SR-IOV enabled network interface card, each SR-IOV port is associated with a Virtual Function (VF). OpenStack SR-IOV pass-through enables a guest to access a VF directly.

6.1.2.1 Prepare Compute Node for SR-IOV Pass-through

To enable the previous features, follow these steps to configure the compute node:

1. The server hardware must support IOMMU or Intel VT-d. To check whether IOMMU is supported, run the following command; the output should show IOMMU entries:

dmesg | grep -e IOMMU

Note: IOMMU can be enabled/disabled through a BIOS setting under Advanced and then Processor.

2. Enable kernel IOMMU in grub. For Fedora 20, run the commands:

sed -i 's/rhgb quiet/rhgb quiet intel_iommu=on/g' /etc/default/grub
grub2-mkconfig -o /boot/grub2/grub.cfg
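After rebooting, one way to confirm that the kernel picked up the new parameter is to check the boot command line (an optional verification, not part of the original steps):

cat /proc/cmdline | grep intel_iommu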

3 Install necessary packages

yum install -y yajl-devel device-mapper-devel libpciaccess-devel libnl-devel dbus-devel numactl-devel python-devel

4. Install libvirt v1.2.8 or newer. The following example uses v1.2.9:

systemctl stop libvirtd

yum remove libvirt
yum remove libvirtd

wget http://libvirt.org/sources/libvirt-1.2.9.tar.gz
tar zxvf libvirt-1.2.9.tar.gz

cd libvirt-1.2.9
./autogen.sh --system --with-dbus
make
make install

systemctl start libvirtd

Make sure libvirtd is running v129

libvirtd --version

5. Install libvirt-python. The example below uses v1.2.9 to match the libvirt version:

yum remove libvirt-python

wget https://pypi.python.org/packages/source/l/libvirt-python/libvirt-python-1.2.9.tar.gz
tar zxvf libvirt-python-1.2.9.tar.gz

cd libvirt-python-1.2.9
python setup.py install


6. Modify /etc/libvirt/qemu.conf to add /dev/vfio/vfio to the cgroup_device_acl list. An example follows:

cgroup_device_acl = [
"/dev/null", "/dev/full", "/dev/zero",
"/dev/random", "/dev/urandom",
"/dev/ptmx", "/dev/kvm", "/dev/kqemu",
"/dev/rtc", "/dev/hpet", "/dev/net/tun",
"/dev/vfio/vfio"]

7. Enable the SR-IOV virtual functions for an 82599 interface. The following example enables 2 VFs for interface p1p1:

echo 2 > /sys/class/net/p1p1/device/sriov_numvfs

To check that virtual functions are enabled

lspci -nn | grep 82599

The screen output should display the physical function and two virtual functions

6.1.2.2 DevStack Configurations

In the following text, the example uses a controller with IP address 10.11.12.1 and a compute node with 10.11.12.4. The PCI device vendor ID (8086) and product IDs of the 82599 can be obtained from the output of the following command (10fb for the physical function and 10ed for the VF):

lspci -nn | grep 82599

On Controller node

1. Edit the controller local.conf. Note that the same local.conf file of Section 5.2.1.3 is used here, adding the following:

Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch,sriovnicswitch

[[post-config|$NOVA_CONF]]
[DEFAULT]
scheduler_default_filters=RamFilter,ComputeFilter,AvailabilityZoneFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,PciPassthroughFilter,NUMATopologyFilter
pci_alias={"name":"niantic","product_id":"10ed","vendor_id":"8086"}

[[post-config|$Q_PLUGIN_CONF_FILE]]
[ml2_sriov]
supported_pci_vendor_devs = 8086:10fb 8086:10ed

2 Run stacksh

On Compute node

1. Edit /opt/stack/nova/requirements.txt to add "libvirt-python>=1.2.8":

echo "libvirt-python>=1.2.8" >> /opt/stack/nova/requirements.txt

2. Edit the compute node local.conf for accelerated OVS. Note that the same local.conf file of Section 5.3.1.1 is used here.


3. Add the following:

[[post-config|$NOVA_CONF]]
[DEFAULT]
pci_passthrough_whitelist={"address":"0000:08:00.0","vendor_id":"8086","physical_network":"physnet1"}

pci_passthrough_whitelist={"address":"0000:08:10.0","vendor_id":"8086","physical_network":"physnet1"}

pci_passthrough_whitelist={"address":"0000:08:10.2","vendor_id":"8086","physical_network":"physnet1"}

4. Remove (or comment out) the following. Note that currently SR-IOV pass-through is only supported with standard OVS:

OVS_NUM_HUGEPAGES=8192
OVS_DATAPATH_TYPE=netdev
OVS_GIT_TAG=b35839f3855e3b812709c6ad1c9278f498aa9935

Run stack.sh on both the controller and compute nodes to complete the DevStack installation.

6.1.2.3 Create VM with NUMA Placement and SR-IOV

1. After stacking is successful on both the controller and compute nodes, verify the PCI pass-through device(s) are in the OpenStack database:

mysql -uroot -ppassword -h 10.11.12.1 nova -e 'select * from pci_devices;'

2 The output should show entry(ies) of PCI device(s) similar to the following

| 2014-11-18 194114 | NULL | NULL | 0 | 1 | 3 | 000008100 | 10ed | 8086 | type-VF | pci_0000_08_10_0 | label_8086_10ed | available | phys_function 000008000 | NULL | NULL | 0 |

3. Next, create a flavor, for example:

nova flavor-create numa-flavor 1001 1024 4 1

where:

flavor name = numa-flavor
id = 1001
virtual memory = 1024 MB
virtual disk size = 4 GB
number of virtual CPUs = 1

4 Modify flavor for numa placement with PCI pass-through

nova flavor-key 1001 set "pci_passthrough:alias"="niantic:1" "hw:numa_nodes"=1 "hw:numa_cpus.0"=0 "hw:numa_mem.0"=1024

5 To show detailed information of the flavor

nova flavor-show 1001

6. Create a VM numa-vm1 with the flavor numa-flavor under the default project demo. Note that the following example assumes an image fedora-basic and an availability zone zone-04 are already in place (see Section 6.1.1.2) and that private is the default network for the demo project:

nova boot --image <image-id> --flavor <flavor-id> --availability-zone <zone-name> --nic net-id=<network-id> numa-vm1

where numa-vm1 is the name of the VM instance to be booted.


Access the VM from OpenStack Horizon; the new VM shows two virtual network interfaces. The interface with an SR-IOV VF should show a name of ensX, where X is a number, for example ens5. If a DHCP server is available for the physical interface (p1p1 in this example), the VF gets an IP address automatically; otherwise, users can assign an IP address to the interface the same way as for a standard network interface.

To verify network connectivity through a VF, users can set up two compute hosts and create a VM on each node. After obtaining IP addresses, the VMs should communicate with each other as over a normal network, as sketched below.
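A minimal connectivity check, assuming the two VMs received the placeholder addresses 192.168.2.10 and 192.168.2.11 on the tenant subnet (these addresses are not from the original document), is to run the following from the first VM:

ping -c 4 192.168.2.11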

6.2 Using OpenDaylight

This section describes how to download, install, and set up an OpenDaylight controller.

6.2.1 Preparing the OpenDaylight Controller

1. Download the pre-built OpenDaylight Helium-SR1 distribution:

wget http://nexus.opendaylight.org/content/repositories/opendaylight.release/org/opendaylight/integration/distribution-karaf/0.2.1-Helium-SR1/distribution-karaf-0.2.1-Helium-SR1.tar.gz

2. Extract the archive and cd into it:

tar xf distribution-karaf-0.2.1-Helium-SR1.tar.gz
cd distribution-karaf-0.2.1-Helium-SR1

3. Use the bin/karaf executable to start the Karaf shell.


4. Install the required features, as sketched below.

Karaf might take a long time to start, or feature installation might fail, if the host does not have network access. You'll need to set up the appropriate proxy settings.
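The required feature list is not reproduced in this text. For the OVSDB-based OpenStack integration used in this guide, a commonly used Helium feature set looks like the following; treat these feature names as an assumption to adapt to your deployment rather than a verbatim list from this document:

feature:install odl-base-all odl-aaa-authn odl-restconf odl-nsf-all odl-adsal-northbound odl-mdsal-apidocs odl-ovsdb-openstack odl-ovsdb-northbound odl-dlux-core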

6.3 Border Network Gateway

This section describes how to install and run a Border Network Gateway (BNG) on a compute node that is prepared as described in Section 5.1 and Section 5.3. The example interface names from these sections have been maintained in this section too. Also, for simplicity, the BNG uses the handle_none configuration mode, which makes it work as an L2 forwarding engine. The BNG is more complex than this, and users interested in exploring more of its capabilities should read https://01.org/intel-data-plane-performance-demonstrators/quick-overview.

The setup to test the functionality of the vBNG follows


6.3.1 Installation and Configuration Inside the VM

1. Execute the following command:

yum -y update

2 Disable SELinux

setenforce 0
vi /etc/selinux/config

And change so SELINUX=disabled

3 Disable the firewall

systemctl disable firewalld.service
reboot

4. Edit the grub default configuration:

vi /etc/default/grub

Add hugepages to it:

... noirqbalance intel_idle.max_cstate=0 processor.max_cstate=0 ipv6.disable=1 default_hugepagesz=1G hugepagesz=1G hugepages=2 isolcpus=1,2,3,4

5 Rebuild grub config and reboot the system

grub2-mkconfig -o /boot/grub2/grub.cfg
reboot

6 Verify that hugepages are available in the VM

cat /proc/meminfo
HugePages_Total:       2
HugePages_Free:        2
Hugepagesize:    1048576 kB

7. Add the following to the end of the ~/.bashrc file:

export RTE_SDK=/root/dpdk
export RTE_TARGET=x86_64-native-linuxapp-gcc
export OVS_DIR=/root/ovs

export RTE_UNBIND=$RTE_SDK/tools/dpdk_nic_bind.py
export DPDK_DIR=$RTE_SDK
export DPDK_BUILD=$DPDK_DIR/$RTE_TARGET

8 Re-login or source that file

source ~/.bashrc

9 Install DPDK

git clone http://dpdk.org/git/dpdk
cd dpdk
git checkout v1.7.1
make install T=$RTE_TARGET
modprobe uio
insmod $RTE_SDK/$RTE_TARGET/kmod/igb_uio.ko


10 Check the PCI addresses of the 82599 cards

lspci | grep Network
00:04.0 Ethernet controller: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection (rev 01)
00:05.0 Ethernet controller: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection (rev 01)
00:06.0 Ethernet controller: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection (rev 01)
00:07.0 Ethernet controller: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection (rev 01)

11. Make sure that the correct PCI addresses are listed in the script bind_to_igb_uio.sh, sketched below.
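The bind_to_igb_uio.sh script ships with the ONP package and is not reproduced here; functionally it binds the VM's traffic ports to the igb_uio driver using the DPDK binding tool, along the lines of the sketch below. The PCI addresses are examples taken from the lspci output above and must be adjusted:

#!/bin/sh
# Bind the BNG traffic ports to the DPDK igb_uio driver (example addresses).
$RTE_SDK/tools/dpdk_nic_bind.py --bind=igb_uio 00:06.0 00:07.0
# Show the resulting binding status.
$RTE_SDK/tools/dpdk_nic_bind.py --status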

12 Download BNG packages

wget https://01.org/sites/default/files/downloads/intel-data-plane-performance-demonstrators/dppd-bng-v013.zip

13 Extract DPPD BNG sources

unzip dppd-bng-v013.zip

14 Build BNG DPPD application

yum -y install ncurses-devel
cd dppd-BNG-v013
make

15. Refer to Section 6.3.3, "Extra Preparations on the Compute Node," before running the BNG application in the VM inside the compute node.

16 Make sure that the application starts

./build/dppd -f config/handle_none.cfg

The handle_none configuration passes all traffic straight through between ports, which is essentially similar to the L2 forwarding test. The config directory contains additional, more complex BNG configurations and Pktgen scripts. Additional BNG-specific workloads can be found in the dppd-BNG-v013/pktgen-scripts directory.

Following is a sample graphic of the BNG running in a VM with 2 ports


Exit the application by pressing ESC or CTRL-C

Refer to Section 632 regarding installation and running the software traffic generator

For a sanity check, users can use the Pktgen wrapper script onps_pktgen-64bytes-UDP-2ports.sh to run Pktgen (on its dedicated server) and test handle_none throughput for two physical and two virtual ports. You'll need to update PKTGEN_DIR at the top of the file to point to the right directory, which is the following (referring to Section 6.3.2):

PKTGEN_DIR=/home/stack/git/Pktgen-DPDK
./pktgen-64bytes.sh

6.3.2 Installation and Configuration of the Back-to-Back Host (Packet Generator)

The back-to-back host can be any Intel® Xeon® processor-based system, or it can be any compute node that has been prepared using the instructions in Section 5.1 and Section 5.3. For simplicity, Intel assumes the latter was the case. Also assume that the git directory for the stack user is /home/stack/git.

1 In the git directory get the source from Github

git clone https://github.com/Pktgen/Pktgen-DPDK.git
cd Pktgen-DPDK

2 An extra package must be installed for Pktgen to compile correctly

yum -y install libpcap-devel

Pktgen comes with its own distribution of DPDK sources This bundled version of DPDK must be used Note that it contains some WindRiver specific helper libraries that are not in the default DPDK distribution which Pktgen depends on

3. The $RTE_TARGET variable must be set to a specific value; otherwise these libraries will not build:

cd
vi ~/.bashrc

Add the following three lines to the end:

export RTE_SDK=$HOME/Pktgen-DPDK/dpdk
export RTE_TARGET=x86_64-pktgen-linuxapp-gcc
export PKTGEN_DIR=$HOME/Pktgen-DPDK

4 Re-login or execute the following command

source ~/.bashrc

5 Build the basic DPDK libraries and extra helpers

cd $RTE_SDK
make install T=$RTE_TARGET

6 Build Pktgen

cd examples/pktgen
make

7. Adapt the dpdk_nic_bind.py invocation to the actual NICs in use so that both interfaces are bound to igb_uio and DPDK can use them. See the current binding with the command that follows:

./tools/dpdk_nic_bind.py --status

8. Use onps_pktgen-64-bytes-UDP-2ports.sh from onps_server_1_2.tar.gz.


9. Now run the script as root, after the compute node has been set up as in Section 6.3.3, the BNG VM has been prepared as in Section 6.3.1, and the BNG has been started inside the VM.

6.3.3 Extra Preparations on the Compute Node

1. Do the following as the stack user:

cd /home/stack/devstack
vi local.conf

2. Comment out the following:

PHYSICAL_NETWORK=physnet1
OVS_PHYSICAL_BRIDGE=br-p1p1

And at the same time add the following line right below the previously commented ones:

OVS_BRIDGE_MAPPINGS=default:br-p1p1,physnet1:br-p1p2

3. Run again as the stack user:

./unstack.sh

./stack.sh

This causes both physical interfaces to come up and get bound to the DPDK Also a bridge is created on top of each of these interfaces

ovs-vsctl show
b52bd3ed-0f6c-45b9-ace1-846d901bed64
    Bridge br-p1p1
        Port br-p1p1
            Interface br-p1p1
                type: internal
        Port p1p1
            Interface p1p1
                type: dpdkphy
                options: {port="0"}
        Port phy-br-p1p1
            Interface phy-br-p1p1
                type: patch
                options: {peer=int-br-p1p1}
    Bridge br-int
        fail_mode: secure
        Port int-br-p1p2
            Interface int-br-p1p2
                type: patch
                options: {peer=phy-br-p1p2}
        Port int-br-p1p1
            Interface int-br-p1p1
                type: patch
                options: {peer=phy-br-p1p1}
        Port br-int
            Interface br-int
                type: internal
    Bridge br-p1p2
        Port phy-br-p1p2
            Interface phy-br-p1p2
                type: patch
                options: {peer=int-br-p1p2}
        Port p1p2
            Interface p1p2
                type: dpdkphy
                options: {port="1"}
        Port br-p1p2
            Interface br-p1p2
                type: internal

4 Move the p1p2 physical port under the same bridge as p1p1

ovs-vsctl del-port p1p2
ovs-vsctl add-port br-p1p1 p1p2 -- set Interface p1p2 type=dpdkphy option:port=1

5 Delete the agent of OpenStack

./rejoin-stack.sh
ctrl-a 1
ctrl-c
ctrl-a d

6 Add the dpdkvhost interfaces for the VM

ovs-vsctl --no-wait add-port br-p1p1 port3 -- set Interface port3 type=dpdkvhost ofport_request=3
ovs-vsctl --no-wait add-port br-p1p1 port4 -- set Interface port4 type=dpdkvhost ofport_request=4

7 Find out the port number of the obstructed interfaces

ovs-ofctl show br-p1p1

The output should be similar to the following Note the number on the left of the interface because its the obstructed port number

OFPT_FEATURES_REPLY (xid=0x2): dpid:0000286031010000
n_tables:254, n_buffers:256
capabilities: FLOW_STATS TABLE_STATS QUEUE_STATS ARP_MATCH_IP
actions: OUTPUT SET_VLAN_VID SET_VLAN_PCP STRIP_VLAN SET_DL_SRC SET_DL_DST SET_NW_SRC SET_NW_DST SET_TP_SRC SET_TP_DST
 1(phy-br-p1p1): addr:9e:ae:92:25:3c:c1
     config: 0, state: 0, speed: 0 Mbps now, 0 Mbps max
 2(p1p2): addr:9e:ae:92:25:3c:c1
     config: 0, state: 0, speed: 0 Mbps now, 0 Mbps max
 3(port3): addr:9e:ae:92:25:3c:c1
     config: 0, state: 0, speed: 0 Mbps now, 0 Mbps max
 4(port4): addr:49:04:ff:7f:00:00
     config: 0, state: 0, speed: 0 Mbps now, 0 Mbps max
 16(p1p1): addr:49:04:ff:7f:00:00
     config: 0, state: 0, speed: 0 Mbps now, 0 Mbps max
 LOCAL(br-p1p1): addr:9e:ae:92:25:3c:c1
     config: 0, state: 0, speed: 0 Mbps now, 0 Mbps max
OFPT_GET_CONFIG_REPLY (xid=0x4): frags=normal miss_send_len=0

8 Clean up the flow table of the bridge

ovs-ofctl del-flows br-p1p1

9 Program the flows so each physical interface forwards the packets to a dpdkvhost interface and the other way round

ovs-ofctl add-flow br-p1p1 in_port=16,dl_type=0x0800,idle_timeout=0,action=output:3
ovs-ofctl add-flow br-p1p1 in_port=3,dl_type=0x0800,idle_timeout=0,action=output:16
ovs-ofctl add-flow br-p1p1 in_port=4,dl_type=0x0800,idle_timeout=0,action=output:2
ovs-ofctl add-flow br-p1p1 in_port=2,dl_type=0x0800,idle_timeout=0,action=output:4


10 Users can now spawn their vBNG

qemu-kvm -cpu host -enable-kvm -m 4096 -smp 4,cores=4,threads=1,sockets=1 -name VM1 -hda <path to the VM image file> -mem-path /dev/hugepages -mem-prealloc -vnc :2 -daemonize -net nic,model=virtio,macaddr=00:1e:77:68:09:fd -net tap,ifname=tap1,script=no,downscript=no -netdev type=tap,id=net1,script=no,downscript=no,ifname=port3,vhost=on -device virtio-net-pci,netdev=net1,mac=00:00:01:00:00:01,csum=off,gso=off,guest_tso4=off,guest_tso6=off,guest_ecn=off -netdev type=tap,id=net2,script=no,downscript=no,ifname=port4,vhost=on -device virtio-net-pci,netdev=net2,mac=00:00:01:00:00:02,csum=off,gso=off,guest_tso4=off,guest_tso6=off,guest_ecn=off

43

Intelreg ONP Server Reference ArchitectureSolutions Guide

Appendix A Additional OpenDaylight Information

This section describes how OpenDaylight can be used in a multi-node setup Two hosts are used one running OpenDaylight OpenStack Controller + Compute and OVS The second host is the compute node This section describes how to create a Vxlan tunnel VMs and ping from one VM to another

Note Due to a known defect in ODL httpsbugsopendaylightorgshow_bugcgiid=2469 multi-node setup could not be verified

Following is a sample localconf for OpenDaylight host

[[local|localrc]]FORCE=yes

HOST_NAME=ltname of this machinegtHOST_IP=ltip of this machinegtHOST_IP_IFACE=ltmgmt ip isolated from internetgt

PUBLIC_INTERFACE=ltisolated IP could be same as HOST_IP_IFACEgtVLAN_INTERFACE=FLAT_INTERFACE=

ADMIN_PASSWORD=passwordMYSQL_PASSWORD=passwordDATABASE_PASSWORD=passwordSERVICE_PASSWORD=passwordSERVICE_TOKEN=no-token-passwordHORIZON_PASSWORD=passwordRABBIT_PASSWORD=password

disable_service n-netdisable_service n-cpu

enable_service q-svcenable_service q-agtenable_service q-dhcpenable_service q-l3enable_service q-metaenable_service neutronenable_service horizon

LOGFILE=$DESTstackshlogSCREEN_LOGDIR=$DESTscreenSYSLOG=TrueLOGDAYS=1

ODL startQ_HOST=$HOST_IPenable_service odl-serverenable_service odl-computeODL_MGR_IP=1011107ENABLED_SERVICES+=n-apin-crtn-objn-cpun-condn-schn-novncn-cauthn-cauthnovaENABLED_SERVICES+=cinderc-apic-volc-schc-bak

Intelreg ONP Server Reference ArchitectureSolutions Guide

44

Q_PLUGIN=ml2

Q_AGENT=openvswitchQ_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitchopendaylightQ_ML2_PLUGIN_TYPE_DRIVERS=vlanflatlocalQ_ML2_TENANT_NETWORK_TYPE=vxlan

ENABLE_TENANT_TUNNELS=TrueENABLE_TENANT_VLANS=True

PHYSICAL_NETWORK=physnet1ML2_VLAN_RANGES=physnet110001010OVS_PHYSICAL_BRIDGE=br-

MULTI_HOST=True

[[post-config|$NOVA_CONF]]

disable nova security groups[DEFAULT]firewall_driver=novavirtfirewallNoopFirewallDrivernovncproxy_host=0000novncproxy_port=6080

Here is a sample localconf for Compute Node

[[local|localrc]]

FORCE=yesMULTI_HOST=True

HOST_NAME=ltname of this machinegtHOST_IP=ltip of this machinegtHOST_IP_IFACE=ltisolated interfacegtSERVICE_HOST_NAME=ltname of the controller machinegtSERVICE_HOST=ltip of controller machinegtQ_HOST=$SERVICE_HOST

MYSQL_HOST=$SERVICE_HOSTRABBIT_HOST=$SERVICE_HOST

GLANCE_HOST=$SERVICE_HOSTGLANCE_HOSTPORT=$SERVICE_HOST9292KEYSTONE_AUTH_HOST=$SERVICE_HOSTKEYSTONE_SERVICE_HOST=ltip of controller machinegt

ADMIN_PASSWORD=passwordMYSQL_PASSWORD=passwordDATABASE_PASSWORD=passwordSERVICE_PASSWORD=passwordSERVICE_TOKEN=no-token-passwordHORIZON_PASSWORD=passwordRABBIT_PASSWORD=password

disable_all_services

enable_service rabbitenable_service n-cpuenable_service q-agtenable_service odl-compute

DEST=optstackLOGFILE=$DESTstackshlogSCREEN_LOGDIR=$DESTscreenSYSLOG=TrueLOGDAYS=1

45

Intelreg ONP Server Reference ArchitectureSolutions Guide

ODL_MGR_IP=ltip of controller machinegt

Q_PLUGIN=ml2
Q_AGENT=openvswitch
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch,opendaylight
Q_ML2_PLUGIN_TYPE_DRIVERS=vxlan
OVS_NUM_HUGEPAGES=8192
OVS_DATAPATH_TYPE=netdev

OVDK_OVS_GIT_TAG=

ENABLE_TENANT_TUNNELS=True
ENABLE_TENANT_VLANS=True

Q_ML2_TENANT_NETWORK_TYPE=vxlan
ML2_VLAN_RANGES=physnet1:1000:1010
PHYSICAL_NETWORK=physnet1
OVS_PHYSICAL_BRIDGE=br-p1p1

[[post-config|$NOVA_CONF]]
[DEFAULT]
firewall_driver=nova.virt.firewall.NoopFirewallDriver
vnc_enabled=True
vncserver_listen=0.0.0.0
vncserver_proxyclient_address=$HOST_IP

A1 Create VMs using DevStack Horizon GUI

After starting the OpenDaylight controller as described in Section 61, run stack.sh on the controller and compute nodes.

Log in to http://<control node ip address>:8080 to start the Horizon GUI.

Verify that the node shows up in the following GUI

Create a new Vxlan network

1 Click on the Networks tab

2 Click on the Create Network button

3 Enter the Network name then click Next


4 Enter the subnet information then click Next


5 Add additional information then click Next

6 Click the Create button

7 Create a VM instance by clicking the Launch Instances button


8 Click on the Details tab to enter VM details


9 Click on the Networking tab then enter network information

VMs will now be created.

Instead of disabling the OVSDB neutron service by removing the OVSDB neutron bundle file it is possible to disable the bundle from the OSGi console However there does not appear to be a way to make this persistent so it must be done each time the controller restarts


Once the controller is up and running connect to the OSGi console The ss command displays all of the bundles that are installed and their status Adding a string(s) filters the list of bundles List the OVSDB bundles

osgi> ss ovs
Framework is launched

id    State     Bundle
106   ACTIVE    org.opendaylight.ovsdb.northbound_0.5.0
112   ACTIVE    org.opendaylight.ovsdb_0.5.0
262   ACTIVE    org.opendaylight.ovsdb.neutron_0.5.0

Note that there are three OVSDB bundles running (the OVSDB neutron bundle has not been removed in this case)

Disable the OVSDB neutron bundle and then list the OVSDB bundles again

osgi> stop 262
osgi> ss ovs
Framework is launched

id    State      Bundle
106   ACTIVE     org.opendaylight.ovsdb.northbound_0.5.0
112   ACTIVE     org.opendaylight.ovsdb_0.5.0
262   RESOLVED   org.opendaylight.ovsdb.neutron_0.5.0

Now the OVSDB neutron bundle is in the RESOLVED state which means that it is not active


Appendix B BNG as an Appliance

Please download the latest BNG application from https://01.org/intel-data-plane-performance-demonstrators/downloads. More details about how the BNG works can be found at https://01.org/intel-data-plane-performance-demonstrators/quick-overview.
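As a convenience, downloading and building the package can be scripted. The following is a minimal sketch; the file name dppd-bng-v013.zip is taken from Section 631 and should be replaced with the latest version listed on the downloads page if it differs.

# Fetch and unpack the BNG demonstrator package (file name assumed from Section 631)
wget https://01.org/sites/default/files/downloads/intel-data-plane-performance-demonstrators/dppd-bng-v013.zip
unzip dppd-bng-v013.zip
cd dppd-BNG-v013
make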



Appendix C Glossary

Acronym Description

ATR Application Targeted Routing

COTS Commercial Off‐The-Shelf

DPI Deep Packet Inspection

FCS Frame Check Sequence

GRE Generic Routing Encapsulation

GRO Generic Receive Offload

IOMMU InputOutput Memory Management Unit

Kpps Kilo packets per second

KVM Kernel-based Virtual Machine

LRO Large Receive Offload

MSI Message Signaled Interrupt

MPLS Multi-protocol Label Switching

Mpps Million packets per second

NIC Network Interface Card

pps Packets per second

QAT Quick Assist Technology

QinQ VLAN stacking (8021ad)

RA Reference Architecture

RSC Receive Side Coalescing

RSS Receive Side Scaling

SP Service Provider

SR-IOV Single root IO Virtualization

TCO Total Cost of Ownership

TSO TCP Segmentation Offload



Appendix D References

Document Name Source

Internet Protocol version 4 http://www.ietf.org/rfc/rfc791.txt

Internet Protocol version 6 http://www.faqs.org/rfc/rfc2460.txt

Intel® 82599 10 Gigabit Ethernet Controller Datasheet http://www.intel.com/content/www/us/en/ethernet-controllers/82599-10-gbe-controller-datasheet.html

Intel DDIO https://www-ssl.intel.com/content/www/us/en/io/direct-data-i-o.html

Bandwidth Sharing Fairness http://www.intel.com/content/www/us/en/network-adapters/10-gigabit-network-adapters/10-gbe-ethernet-flexible-port-partitioning-brief.html

Design Considerations for efficient network applications with Intel® multi-core processor-based systems on Linux http://download.intel.com/design/intarch/papers/324176.pdf

OpenFlow with Intel 82599 http://ftp.sunet.se/pub/Linux/distributions/bifrost/seminars/workshop-2011-03-31/Openflow_1103031.pdf

Wu, W., DeMar, P. & Crawford, M. (2012). A Transport-Friendly NIC for Multicore/Multiprocessor Systems. IEEE Transactions on Parallel and Distributed Systems, vol. 23, no. 4, April 2012. http://lss.fnal.gov/archive/2010/pub/fermilab-pub-10-327-cd.pdf

Why does Flow Director Cause Packet Reordering? http://arxiv.org/ftp/arxiv/papers/1106/1106.0443.pdf

IA packet processing http://www.intel.com/p/en_US/embedded/hwsw/technology/packet-processing

High Performance Packet Processing on Cloud Platforms using Linux with Intel® Architecture http://networkbuilders.intel.com/docs/network_builders_RA_packet_processing.pdf

Packet Processing Performance of Virtualized Platforms with Linux and Intel® Architecture http://networkbuilders.intel.com/docs/network_builders_RA_NFV.pdf

DPDK http://www.intel.com/go/dpdk

Intel® DPDK Accelerated vSwitch https://01.org/packet-processing


LEGAL

By using this document in addition to any agreements you have with Intel you accept the terms set forth below

You may not use or facilitate the use of this document in connection with any infringement or other legal analysis concerning Intel products described herein You agree to grant Intel a non-exclusive royalty-free license to any patent claim thereafter drafted which includes subject matter disclosed herein

INFORMATION IN THIS DOCUMENT IS PROVIDED IN CONNECTION WITH INTEL PRODUCTS NO LICENSE EXPRESS OR IMPLIED BY ESTOPPEL OR OTHERWISE TO ANY INTELLECTUAL PROPERTY RIGHTS IS GRANTED BY THIS DOCUMENT EXCEPT AS PROVIDED IN INTELS TERMS AND CONDITIONS OF SALE FOR SUCH PRODUCTS INTEL ASSUMES NO LIABILITY WHATSOEVER AND INTEL DISCLAIMS ANY EXPRESS OR IMPLIED WARRANTY RELATING TO SALE ANDOR USE OF INTEL PRODUCTS INCLUDING LIABILITY OR WARRANTIES RELATING TO FITNESS FOR A PARTICULAR PURPOSE MERCHANTABILITY OR INFRINGEMENT OF ANY PATENT COPYRIGHT OR OTHER INTELLECTUAL PROPERTY RIGHT

Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors

Performance tests such as SYSmark and MobileMark are measured using specific computer systems components software operations and functions Any change to any of those factors may cause the results to vary You should consult other information and performance tests to assist you in fully evaluating your contemplated purchases including the performance of that product when combined with other products

The products described in this document may contain design defects or errors known as errata which may cause the product to deviate from published specifications Current characterized errata are available on request Contact your local Intel sales office or your distributor to obtain the latest specifications and before placing your product order

Intel technologies may require enabled hardware specific software or services activation Check with your system manufacturer or retailer Tests document performance of components on a particular test in specific systems Differences in hardware software or configuration will affect actual performance Consult other sources of information to evaluate performance as you consider your purchase For more complete information about performance and benchmark results visit httpwwwintelcomperformance

All products computer systems dates and figures specified are preliminary based on current expectations and are subject to change without notice Results have been estimated or simulated using internal Intel analysis or architecture simulation or modeling and provided to you for informational purposes Any differences in your system hardware software or configuration may affect your actual performance

No computer system can be absolutely secure Intel does not assume any liability for lost or stolen data or systems or any damages resulting from such losses

Intel does not control or audit third-party web sites referenced in this document You should visit the referenced web site and confirm whether referenced data are accurate

Intel Corporation may have patents or pending patent applications trademarks copyrights or other intellectual property rights that relate to the presented subject matter The furnishing of documents and other materials and information does not provide any license express or implied by estoppel or otherwise to any such patents trademarks copyrights or other intellectual property rights

copy 2014 Intel Corporation All rights reserved Intel the Intel logo Core Xeon and others are trademarks of Intel Corporation in the US andor other countries Other names and brands may be claimed as the property of others


To configure a network on the host system edit the following network configuration files

ifcfg-ens2f1:
DEVICE=ens2f1
TYPE=Ethernet
ONBOOT=yes
BOOTPROTO=dhcp

ifcfg-ens2f0:
DEVICE=ens2f0
TYPE=Ethernet
ONBOOT=yes
BOOTPROTO=static
IPADDR=10.11.12.11
NETMASK=255.255.0.0

ifcfg-p1p1:
DEVICE=p1p1
TYPE=Ethernet
ONBOOT=yes
BOOTPROTO=none

ifcfg-p1p2:
DEVICE=p1p2
TYPE=Ethernet
ONBOOT=yes
BOOTPROTO=none

Note: Do not configure an IP address for p1p1 (the 10 Gb/s interface); otherwise, DPDK does not work when binding the driver during OpenStack Neutron installation.

Note: 10.11.12.11 and 255.255.0.0 are the static IP address and netmask for the management network. It is necessary to have a static IP address on this subnet; the address 10.11.12.11 is just an example.

5212 Storage Requirements

By default, DevStack uses block storage (Cinder) with a volume group named stack-volumes. If not specified, stack-volumes is created with 10 GB of space from a local file system. Note that stack-volumes is the name of the volume group, not of an individual volume.

The following example shows how to use spare local disks devsdb and devsdc to form stack-volumes on a controller node by running the following commands

pvcreate /dev/sdb
pvcreate /dev/sdc
vgcreate stack-volumes /dev/sdb /dev/sdc
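To confirm the volume group was assembled as expected before running stack.sh, the standard LVM query commands can be used. This is only a quick sanity check, assuming the two spare disks above:

# Verify the physical volumes and the stack-volumes volume group
pvs
vgdisplay stack-volumes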

5213 OpenStack Installation Procedures

General

DevStack is used to deploy OpenStack in this example The following procedure uses an actual example of an installation performed in an Intel test lab consisting of one controller node (controller) and one compute node (compute)

Controller Node Installation Procedures

The following example uses a host for controller node installation with the following

bull Hostname sdnlab-k01

bull Internet network IP address Obtained from DHCP server


bull OpenStack Management IP address 10.11.12.1

bull User/password: stack/stack

Root User Actions

Login as su or root user and perform the following

1 Add stack user to sudoer list

echo "stack ALL=(ALL) NOPASSWD: ALL" >> /etc/sudoers

2 Edit etclibvirtqemuconf add or modify with the following lines

cgroup_controllers = [ "cpu", "devices", "memory", "blkio", "cpuset", "cpuacct" ]

cgroup_device_acl = [
    "/dev/null", "/dev/full", "/dev/zero",
    "/dev/random", "/dev/urandom",
    "/dev/ptmx", "/dev/kvm", "/dev/kqemu",
    "/dev/rtc", "/dev/hpet", "/dev/net/tun",
    "/mnt/huge", "/dev/vhost-net"
]

hugetlbfs_mount = "/mnt/huge"

3 Restart the libvirtd service and make sure libvirtd is active:

systemctl restart libvirtd.service
systemctl status libvirtd.service

Stack User Actions

1 Login as a stack user

2 Configure the appropriate proxies (yum, http, https, and git) for package installation and make sure these proxies are functional. Note that on the controller node, localhost and its IP address should be included in the no_proxy setup (for example: export no_proxy=localhost,10.11.12.1). A sketch of possible proxy exports follows.
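The exact proxy values depend on the lab environment; the lines below are only an illustration, and the proxy host and port are placeholders rather than values from this guide:

# Shell proxies for the stack user (placeholders; adapt to your environment)
export http_proxy=http://<proxy-host>:<proxy-port>
export https_proxy=http://<proxy-host>:<proxy-port>
export no_proxy=localhost,10.11.12.1
# git and yum can use the same proxy
git config --global http.proxy http://<proxy-host>:<proxy-port>
echo "proxy=http://<proxy-host>:<proxy-port>" >> /etc/yum.conf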

3 Intelreg DPDK Accelerated vSwitch patches for OpenStack

The file openstack_ovdkl02-907.zip contains the necessary patches for OpenStack; currently they are not native to OpenStack. The file can be downloaded from:

https://01.org/sites/default/files/page/openstack_ovdkl02-907.zip

Place the file in the /home/stack directory and unzip it. Three patch files (devstack.patch, nova.patch, and neutron.patch) will be present after unzipping:

cd /home/stack
wget https://01.org/sites/default/files/page/openstack_ovdkl02-907.zip
unzip openstack_ovdkl02-907.zip

4 Download DevStack source

git clone https://github.com/openstack-dev/devstack.git

5 Check out DevStack with Intelreg DPDK Accelerated vSwitch and patch

cd /home/stack/devstack
git checkout d6f700db33aeab68916156a98971aef8cfa53a2e
patch -p1 < /home/stack/devstack.patch


6 Download and patch Nova and Neutron

sudo mkdir /opt/stack
sudo chown stack:stack /opt/stack
cd /opt/stack
git clone https://github.com/openstack/nova.git
git clone https://github.com/openstack/neutron.git
cd /opt/stack/nova
git checkout b7738bfb6c2f271d047e8f20c0b74ef647367111
patch -p1 < /home/stack/nova.patch

7 Create the local.conf file in /home/stack/devstack.

8 Pay attention to the following in the local.conf file:

a Use Rabbit for messaging services (Rabbit is on by default) In the past Fedora only supported QPID for OpenStack Now it only supports Rabbit

b Explicitly disable Nova compute service on the controller This is because by default Nova compute service is enabled

disable_service n-cpu

c To use Open vSwitch specify in configuration for ML2 plug-in

Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch

d Explicitly disable tenant tunneling and enable tenant VLAN This is because by default tunneling is used

ENABLE_TENANT_TUNNELS=False
ENABLE_TENANT_VLANS=True

e A sample localconf files for controller node is as follows

# Controller node
[[local|localrc]]

FORCE=yes
ADMIN_PASSWORD=password
MYSQL_PASSWORD=password
DATABASE_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=no-token-password
HORIZON_PASSWORD=password
RABBIT_PASSWORD=password

disable_service n-net
disable_service n-cpu
enable_service q-svc
enable_service q-agt
enable_service q-dhcp
enable_service q-l3
enable_service q-meta
enable_service neutron
enable_service horizon

DEST=/opt/stack
LOGFILE=$DEST/stack.sh.log
SCREEN_LOGDIR=$DEST/screen
SYSLOG=True
LOGDAYS=1

HOST_IP_IFACE=ens2f0
PUBLIC_INTERFACE=p1p2
VLAN_INTERFACE=p1p1
FLAT_INTERFACE=p1p1


Q_AGENT=openvswitch
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch
Q_ML2_PLUGIN_TYPE_DRIVERS=vlan,flat,local

ENABLE_TENANT_TUNNELS=False
ENABLE_TENANT_VLANS=True
PHYSICAL_NETWORK=physnet1
ML2_VLAN_RANGES=physnet1:1000:1010
OVS_PHYSICAL_BRIDGE=br-p1p1
MULTI_HOST=True

[[post-config|$NOVA_CONF]]
[DEFAULT]
firewall_driver=nova.virt.firewall.NoopFirewallDriver
novncproxy_host=0.0.0.0
novncproxy_port=6080

9 Install DevStack

cd /home/stack/devstack
./stack.sh

10 For a successful installation the following shows at the end of screen output

stack.sh completed in XXX seconds

where XXX is the number of seconds

11 For controller node only mdash Add physical port(s) to the bridge(s) created by the DevStack installation The following example can be used to configure the two bridges br-p1p1 (for virtual network) and br-ex (for external network)

sudo ovs-vsctl add-port br-p1p1 p1p1
sudo ovs-vsctl add-port br-ex p1p2

12 Make sure proper VLANs are created in the switch connecting physical port p1p1 For example the previous localconf specifies VLAN range of 1000-1010 therefore matching VLANs 1000 to 1010 should be configured in the switch


53 Compute Node Setup

This section describes how to complete the setup of the compute nodes. It is assumed that the user has successfully completed the BIOS settings and operating system installation and configuration sections.

Note: Please make sure to download and use the onps_server_1_2.tar.gz tarball. Start with the README file: it gives instructions on how to use Intel's scripts to automate most of the installation steps described in this section, which saves you time.
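A possible sequence for unpacking the tarball and reading the instructions is sketched below; the extracted directory name is an assumption, so use whatever directory the tarball actually creates:

# Unpack the ONP server scripts and read the README first
tar xzf onps_server_1_2.tar.gz
cd onps_server_1_2
cat README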

531 Host Configuration

5311 Using DevStack to Deploy vSwitch and OpenStack Components

General

Deploying OpenStack and Intelreg DPDK Accelerated vSwitch using DevStack on a compute node follows the same procedures as on the controller node Differences include

bull Required services are nova compute neutron agent and Rabbit

bull Intelreg DPDK Accelerated vSwitch is used in place of Open vSwitch for neutron agent

Compute Node Installation Example

The following example uses a host for compute node installation with the following

bull Hostname sdnlab-k02

bull Lab network IP address Obtained from DHCP server

bull OpenStack Management IP address 1011122

bull Userpassword stackstack

Note the following

bull No_proxy setup Localhost and its IP address should be included in the no_proxy setup In addition hostname and IP address of the controller node should also be included For example

export no_proxy=localhost,10.11.12.2,sdnlab-k01,10.11.12.1

bull Differences in the localconf file

mdash The service host is the controller as well as other OpenStack servers such as MySQL Rabbit Keystone and Image Therefore they should be spelled out Using the controller node example in the previous section the service host and its IP address should be

SERVICE_HOST_NAME=sdnlab-k01
SERVICE_HOST=10.11.12.1

mdash The only OpenStack services required in compute nodes are messaging nova compute and neutron agent so the localconf might look like

disable_all_services
enable_service rabbit
enable_service n-cpu
enable_service q-agt


mdash The user has option to use ovdk or openvswitch for neutron agent

Q_AGENT=ovdk

or

Q_AGENT=openvswitch

Note: For openvswitch, the user can specify regular or accelerated Open vSwitch (accelerated OVS). If accelerated OVS is used, the following setup should be added:

OVS_DATAPATH_TYPE=netdev

Note: If both are specified in the same local.conf file, the latter overwrites the former.

mdash For the OVDK and accelerated OVS huge pages setting specify number of huge pages to be allocated and mounting point (default is mnthuge)

OVDK_NUM_HUGEPAGES=8192

or

OVS_NUM_HUGEPAGES=8192

mdash For this version Intel uses specific versions for OVDK or Accelerated OVS from their respective repositories Specify the following in the localconf file if OVDK or accelerated OVS is used

OVDK_GIT_TAG=6210bb0a6139b20283de115f87aa7a381b04670f
OVS_GIT_TAG=b35839f3855e3b812709c6ad1c9278f498aa9935

mdash Binding the physical port to the bridge is through the following line in localconf For example to bind port p1p1 to bridge br-p1p1 use

OVS_PHYSICAL_BRIDGE=br-p1p1

mdash A sample localconf file for compute node with ovdk agent follows

# Compute node
[[local|localrc]]

FORCE=yes
MULTI_HOST=True

HOST_NAME=$(hostname)
HOST_IP=10.11.12.2
HOST_IP_IFACE=ens2f0
SERVICE_HOST_NAME=10.11.12.1
SERVICE_HOST=10.11.12.1

MYSQL_HOST=$SERVICE_HOST
RABBIT_HOST=$SERVICE_HOST

GLANCE_HOST=$SERVICE_HOST
GLANCE_HOSTPORT=$SERVICE_HOST:9292
KEYSTONE_AUTH_HOST=$SERVICE_HOST
KEYSTONE_SERVICE_HOST=$SERVICE_HOST

ADMIN_PASSWORD=password
MYSQL_PASSWORD=password
DATABASE_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=no-token-password
HORIZON_PASSWORD=password
RABBIT_PASSWORD=password

disable_all_services
enable_service rabbit


enable_service n-cpu
enable_service q-agt

DEST=/opt/stack
LOGFILE=$DEST/stack.sh.log
SCREEN_LOGDIR=$DEST/screen
SYSLOG=True
LOGDAYS=1

Q_AGENT=ovdk
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch
Q_ML2_PLUGIN_TYPE_DRIVERS=vlan
OVDK_NUM_HUGEPAGES=8192
OVDK_GIT_TAG=6210bb0a6139b20283de115f87aa7a381b04670f

ENABLE_TENANT_TUNNELS=False
ENABLE_TENANT_VLANS=True
ML2_VLAN_RANGES=physnet1:1000:1010
PHYSICAL_NETWORK=physnet1
OVS_PHYSICAL_BRIDGE=br-p1p1

[[post-config|$NOVA_CONF]]
[DEFAULT]
firewall_driver=nova.virt.firewall.NoopFirewallDriver
vnc_enabled=True
vncserver_listen=0.0.0.0
vncserver_proxyclient_address=$HOST_IP

mdash A sample localconf file for compute node with accelerated ovs agent follows

# Compute node
[[local|localrc]]

FORCE=yes
MULTI_HOST=True

HOST_NAME=$(hostname)
HOST_IP=10.11.12.2
HOST_IP_IFACE=ens2f0

SERVICE_HOST_NAME=sdnlab-k01
SERVICE_HOST=10.11.12.1

MYSQL_HOST=$SERVICE_HOST
RABBIT_HOST=$SERVICE_HOST

GLANCE_HOST=$SERVICE_HOST
GLANCE_HOSTPORT=$SERVICE_HOST:9292
KEYSTONE_AUTH_HOST=$SERVICE_HOST
KEYSTONE_SERVICE_HOST=$SERVICE_HOST

ADMIN_PASSWORD=password
MYSQL_PASSWORD=password
DATABASE_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=no-token-password
HORIZON_PASSWORD=password
RABBIT_PASSWORD=password

disable_all_services
enable_service rabbit
enable_service n-cpu
enable_service q-agt

DEST=/opt/stack


LOGFILE=$DEST/stack.sh.log
SCREEN_LOGDIR=$DEST/screen
SYSLOG=True
LOGDAYS=1

Q_AGENT=openvswitch
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch
Q_ML2_PLUGIN_TYPE_DRIVERS=vlan
OVS_NUM_HUGEPAGES=8192
OVS_DATAPATH_TYPE=netdev
OVS_GIT_TAG=b35839f3855e3b812709c6ad1c9278f498aa9935

ENABLE_TENANT_TUNNELS=False
ENABLE_TENANT_VLANS=True
ML2_VLAN_RANGES=physnet1:1000:1010
PHYSICAL_NETWORK=physnet1
OVS_PHYSICAL_BRIDGE=br-p1p1

[[post-config|$NOVA_CONF]]
[DEFAULT]
firewall_driver=nova.virt.firewall.NoopFirewallDriver
vnc_enabled=True
vncserver_listen=0.0.0.0
vncserver_proxyclient_address=$HOST_IP

54 vIPS

The vIPS used is Suricata, which should be installed in a VM as an RPM package, as previously described. To configure it to run in inline mode (IPS), use the following:

1 Turn on IP forwarding

sysctl -w netipv4ip_forward=1

2 Mangle all traffic from one vPort to the other using a netfilter queue

iptables -I FORWARD -i eth1 -o eth2 -j NFQUEUE
iptables -I FORWARD -i eth2 -o eth1 -j NFQUEUE

3 Have Suricata run in inline mode using the netfilter queue

suricata -c etcsuricatasuricatayaml -q 0

4 Enable ARP proxying

echo 1 > /proc/sys/net/ipv4/conf/eth1/proxy_arp
echo 1 > /proc/sys/net/ipv4/conf/eth2/proxy_arp

541 Network Configuration for non-vIPS Guests

1 Turn on IP forwarding:

sysctl -w netipv4ip_forward=1

2 In the source add the route to the sink

route add -net 192.168.200.0/24 eth1

3 At the sink add the route to the source

route add -net 192.168.100.0/24 eth1
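With the routes in place, a quick end-to-end check is to ping an address on the sink's subnet from the source guest; the traffic should traverse the vIPS. The sink address used below is only an illustrative placeholder:

# From the source guest (assumes the sink owns an address in 192.168.200.0/24)
ping -c 3 192.168.200.10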


60 Testing the Setup

This section describes how to bring up the VMs in a compute node, connect them to the virtual network(s), and verify their functionality.

Note: Currently it is not possible to have more than one virtual network in a multi-compute-node setup, although it is possible in a single-compute-node setup.

61 Preparation with OpenStack

611 Deploying Virtual Machines

6111 Default Settings

OpenStack comes with the following default settings

• Tenant (Project): admin, demo

• Network:

— Private network (virtual network): 10.0.0.0/24

— Public network (external network): 172.24.4.0/24

• Image: cirros-0.3.1-x86_64

• Flavor: nano, micro, tiny, small, medium, large, xlarge

To deploy new instances (VMs) with different setups (such as a different VM image flavor or network) users must create their own See below for details of how to create them

To access the OpenStack dashboard, use a web browser (Firefox, Internet Explorer, or others) and the controller's IP address (management network). For example:

http://10.11.12.1

Login information is defined in the localconf file In the examples that follow password is the password for both admin and demo users


6112 Customer Settings

The following examples describe how to create a custom VM image flavor and aggregateavailability zone using OpenStack commands The examples assume the IP address of the controller is 1011121

1 Create a credential file admin-cred for admin user The file contains the following lines

export OS_USERNAME=admin
export OS_TENANT_NAME=admin
export OS_PASSWORD=password
export OS_AUTH_URL=http://10.11.12.1:35357/v2.0

2 Source admin-cred to the shell environment for actions of creating glance image aggregateavailability zone and flavor

source admin-cred

3 Create an OpenStack glance image A VM image file should be ready in a location accessible by OpenStack

glance image-create --name <image-name-to-create> --is-public=true --container-format=bare --disk-format=<format> --file=<image-file-path-name>

The following example shows the image file fedora20-x86_64-basic.qcow2 located on an NFS share mounted at /mnt/nfs/openstack/images on the controller host. The following command creates a glance image named fedora-basic with qcow2 format for public use (such that any tenant can use this glance image):

glance image-create --name fedora-basic --is-public=true --container-format=bare --disk-format=qcow2 --file=/mnt/nfs/openstack/images/fedora20-x86_64-basic.qcow2

4 Create host aggregate and availability zone

First find out the available hypervisors and then use the information for creating aggregateavailability zone

nova hypervisor-list
nova aggregate-create <aggregate-name> <zone-name>
nova aggregate-add-host <aggregate-name> <hypervisor-name>

The following example creates an aggregate named aggr-g06 with one availability zone named zone-g06 and the aggregate contains one hypervisor named sdnlab-g06

nova aggregate-create aggr-g06 zone-g06
nova aggregate-add-host aggr-g06 sdnlab-g06

5 Create a flavor. A flavor is a virtual hardware configuration for the VMs; it defines the number of virtual CPUs and the size of virtual memory and disk space, among others.

The following command creates a flavor named onps-flavor with an ID of 1001, 1024 MB virtual memory, 4 GB virtual disk space, and 1 virtual CPU.

nova flavor-create onps-flavor 1001 1024 4 1


6113 Example mdash VM Deployment

The following example describes how to use a customer VM image flavor and aggregate to launch a VM for a demo Tenant using OpenStack commands Again the example assumes the IP address of the controller is 1011121

1 Create a credential file demo-cred for a demo user The file contains the following lines

export OS_USERNAME=demo
export OS_TENANT_NAME=demo
export OS_PASSWORD=password
export OS_AUTH_URL=http://10.11.12.1:35357/v2.0

2 Source demo-cred to the shell environment for actions of creating tenant network and instance (VM)

source demo-cred

3 Create network for tenant demo Take the following steps

a Get tenant demo

keystone tenant-list | grep -Fw demo

The following creates a network with a name of net-demo for tenant with ID 10618268adb64f17b266fd8fb83c960d

neutron net-create --tenant-id 10618268adb64f17b266fd8fb83c960d net-demo

b Create subnet

neutron subnet-create --tenant-id <demo-tenant-id> --name <subnet_name> <network-name> <net-ip-range>

The following creates a subnet with a name of sub-demo and CIDR 192.168.2.0/24 for network net-demo:

neutron subnet-create --tenant-id 10618268adb64f17b266fd8fb83c960d --name sub-demo net-demo 192.168.2.0/24

4 Create instance (VM) for tenant demo Take the following steps

a Get the name andor ID of the image flavor and availability zone to be used for creating instance

glance image-list
nova flavor-list
nova aggregate-list
neutron net-list

b Launch an instance (VM) using information obtained from previous step

nova boot --image <image-id> --flavor <flavor-id> --availability-zone <zone-name> --nic net-id=<network-id> <instance-name>
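For example, using the image, flavor, and availability zone created earlier in this section, a boot command might look like the following; the network ID and instance name shown here are hypothetical placeholders taken from the listing commands in step a:

# Example only: substitute the real net-id reported by "neutron net-list"
nova boot --image fedora-basic --flavor onps-flavor --availability-zone zone-g06 --nic net-id=<net-demo-id> vm-demo-1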

c The new VM should be up and running in a few minutes

5 Log in to the OpenStack dashboard using the demo user credentials and click Instances under Project in the left pane; the new VM should show in the right pane. Click the instance name to open the Instance Details view, then click Console in the top menu to access the VM, as follows:


6114 Local vIPS

Configuration

1 OpenStack brings up the VMs and connects them to the vSwitch

2 IP addresses of the VMs get configured using the DHCP server VM1 belongs to one subnet and VM3 to a different one VM2 has ports on both subnets

3 Flows get programmed to the vSwitch by the OpenDaylight controller (Section 62)

Data Path (Numbers Matching Red Circles)

1 VM1 sends a flow to VM3 through the vSwitch

2 The vSwitch forwards the flow to the first vPort of VM2 (active IPS)

3 The IPS receives the flow inspects it and (if not malicious) sends it out through its second vPort

4 The vSwitch forwards it to VM3

Figure 6-1 Local vIPS


6115 Remote vIPS

Configuration

1 OpenStack brings up the VMs and connects them to the vSwitch

2 The IP addresses of the VMs get configured using the DHCP server

Data Path (Numbers Matching Red Circles)

1 VM1 sends a flow to VM3 through the vSwitch inside compute node 1

2 The vSwitch forwards the flow out of the first 82599 port to the first 82599 port of compute node 2

3 The vSwitch of compute node 2 forwards the flow to the first port of the vHost where the traffic gets consumed by VM1

4 The IPS receives the flow inspects it and (provided it is not malicious) sends it out through its second port of the vHost into the vSwitch of compute node 2

5 The vSwitch forwards the flow out of the second 82599 port of compute node 2 into the second port of the 82599 in compute node 1

6 The vSwitch of compute node 1 forwards the flow into the port of the vHost of VM3 where the flow gets terminated

Figure 6-2 Remote vIPS


612 Non-uniform Memory Access (Numa) Placement and SR-IOV Pass-through for OpenStack

NUMA support was implemented as a new feature in the OpenStack Juno release. NUMA placement enables an OpenStack administrator to pin guests to particular NUMA nodes for optimization. With an SR-IOV-enabled network interface card, each SR-IOV port is associated with a Virtual Function (VF); OpenStack SR-IOV pass-through enables a guest to access a VF directly.

6121 Prepare Compute Node for SR-IOV Pass-through

To enable the previous features follow these steps to configure compute node

1 The server hardware must support IOMMU (Intel VT-d). To check whether IOMMU is supported, run the following command; the output should show IOMMU entries:

dmesg | grep -e IOMMU

Note: IOMMU can be enabled/disabled through a BIOS setting under Advanced and then Processor.

2 Enable kernel IOMMU in grub For Fedora 20 run commands

sed -i 's/rhgb quiet/rhgb quiet intel_iommu=on/g' /etc/default/grub
grub2-mkconfig -o /boot/grub2/grub.cfg

3 Install necessary packages

yum install -y yajl-devel device-mapper-devel libpciaccess-devel libnl-devel dbus-devel numactl-devel python-devel

4 Install Libvirt to v128 or newer The following example uses v129

systemctl stop libvirtd

yum remove libvirt
yum remove libvirtd

wget http://libvirt.org/sources/libvirt-1.2.9.tar.gz
tar zxvf libvirt-1.2.9.tar.gz

cd libvirt-1.2.9
./autogen.sh --system --with-dbus
make
make install

systemctl start libvirtd

Make sure libvirtd is running v129

libvirtd --version

5 Install libvirt-python Example below uses v129 to match libvirt version

yum remove libvirt-python

wget https://pypi.python.org/packages/source/l/libvirt-python/libvirt-python-1.2.9.tar.gz
tar zxvf libvirt-python-1.2.9.tar.gz

cd libvirt-python-1.2.9
python setup.py install


6 Modify /etc/libvirt/qemu.conf: add

/dev/vfio/vfio

to

cgroup_device_acl list

An example follows

cgroup_device_acl = [
    "/dev/null", "/dev/full", "/dev/zero",
    "/dev/random", "/dev/urandom",
    "/dev/ptmx", "/dev/kvm", "/dev/kqemu",
    "/dev/rtc", "/dev/hpet", "/dev/net/tun",
    "/dev/vfio/vfio"
]

7 Enable the SR-IOV virtual function for an 82599 interface The following example enables 2 VFs for interface p1p1

echo 2 > /sys/class/net/p1p1/device/sriov_numvfs

To check that virtual functions are enabled

lspci -nn | grep 82599

The screen output should display the physical function and two virtual functions
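For illustration only, the output might resemble the lines below; the bus addresses match the pci_passthrough_whitelist entries used later in this section, but the actual addresses vary by system:

08:00.0 Ethernet controller [0200]: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection [8086:10fb] (rev 01)
08:10.0 Ethernet controller [0200]: Intel Corporation 82599 Ethernet Controller Virtual Function [8086:10ed] (rev 01)
08:10.2 Ethernet controller [0200]: Intel Corporation 82599 Ethernet Controller Virtual Function [8086:10ed] (rev 01)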

6122 Devstack Configurations

In the following text, the example uses a controller with IP address 10.11.12.1 and a compute node with IP address 10.11.12.4. The PCI device vendor ID (8086) and product IDs of the 82599 (10fb for the physical function and 10ed for the VF) can be obtained from the output of:

lspci -nn | grep 82599

On Controller node

1 Edit Controller localconf Note that the same localconf file of Section 5213 is used here but adding the following

Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitchsriovnicswitch

[[post-config|$NOVA_CONF]]
[DEFAULT]
scheduler_default_filters=RamFilter,ComputeFilter,AvailabilityZoneFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,PciPassthroughFilter,NUMATopologyFilter
pci_alias={"name":"niantic","product_id":"10ed","vendor_id":"8086"}

[[post-config|$Q_PLUGIN_CONF_FILE]]
[ml2_sriov]
supported_pci_vendor_devs = 8086:10fb 8086:10ed

2 Run stacksh

On Compute node

1 Edit /opt/stack/nova/requirements.txt and add "libvirt-python>=1.2.8":

echo "libvirt-python>=1.2.8" >> /opt/stack/nova/requirements.txt

2 Edit Compute localconf for accelerated OVS Note that the same localconf file of Section 5311 is used here


3 Adding the following

[[post-config|$NOVA_CONF]]
[DEFAULT]
pci_passthrough_whitelist={"address":"0000:08:00.0","vendor_id":"8086","physical_network":"physnet1"}
pci_passthrough_whitelist={"address":"0000:08:10.0","vendor_id":"8086","physical_network":"physnet1"}
pci_passthrough_whitelist={"address":"0000:08:10.2","vendor_id":"8086","physical_network":"physnet1"}

4 Remove (or comment out) the following. (Note that currently SR-IOV pass-through is only supported with a standard OVS.)

OVS_NUM_HUGEPAGES=8192
OVS_DATAPATH_TYPE=netdev
OVS_GIT_TAG=b35839f3855e3b812709c6ad1c9278f498aa9935

Run stacksh for both controller and compute nodes to complete the Devstack installation

6123 Create VM with Numa Placement and SR-IOV

1 After stacking is successful on both the controller and compute nodes, verify that the PCI pass-through device(s) are in the OpenStack database:

mysql -uroot -ppassword -h 10.11.12.1 nova -e 'select * from pci_devices'

2 The output should show entry(ies) of PCI device(s) similar to the following

| 2014-11-18 194114 | NULL | NULL | 0 | 1 | 3 | 000008100 | 10ed | 8086 | type-VF | pci_0000_08_10_0 | label_8086_10ed | available | phys_function 000008000 | NULL | NULL | 0 |

3 Next to create a flavor for example

nova flavor-create numa-flavor 1001 1024 4 1

where

flavor name = numa-flavor
id = 1001
virtual memory = 1024 MB
virtual disk size = 4 GB
number of virtual CPUs = 1

4 Modify flavor for numa placement with PCI pass-through

nova flavor-key 1001 set "pci_passthrough:alias"="niantic:1" "hw:numa_nodes"=1 "hw:numa_cpus.0"=0 "hw:numa_mem.0"=1024

5 To show detailed information of the flavor

nova flavor-show 1001

6 Create a VM named numa-vm1 with the flavor numa-flavor under the default project demo. Note that the following example assumes an image fedora-basic and an availability zone zone-04 are already in place (see Section 6112) and that private is the default network for the demo project.

nova boot --image <image-id> --flavor <flavor-id> --availability-zone <zone-name> --nic net-id=<network-id> numa-vm1

where numa-vm1 is the name of instance of the VM to be booted


Access the VM from OpenStack Horizon; the new VM shows two virtual network interfaces. The interface backed by an SR-IOV VF should show a name of ensX, where X is a number (for example, ens5). If a DHCP server is available for the physical interface (p1p1 in this example), the VF gets an IP address automatically; otherwise, users can assign an IP address to the interface the same way as for a standard network interface.

To verify network connectivity through a VF users can set up two compute hosts and create a VM on each node After obtaining IP addresses the VMs should communicate with each other as with a normal network
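A minimal check from one of the VMs is sketched below; the interface name ens5 follows the example above, and the peer address is a placeholder for whatever the second VM's VF interface obtained:

# Inside the first VM
ip addr show ens5
ping -c 4 <IP address of the VF interface on the other VM>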

62 Using OpenDaylight

This section describes how to download, install, and set up an OpenDaylight controller.

621 Preparing the OpenDaylight Controller

1 Download the pre-built OpenDaylight Helium-SR1 distribution:

wget http://nexus.opendaylight.org/content/repositories/opendaylight.release/org/opendaylight/integration/distribution-karaf/0.2.1-Helium-SR1/distribution-karaf-0.2.1-Helium-SR1.tar.gz

2 Extract the archive and cd into it

tar xf distribution-karaf-0.2.1-Helium-SR1.tar.gz
cd distribution-karaf-0.2.1-Helium-SR1

3 Use the bin/karaf executable to start the Karaf shell.


4 Install the required features.

Karaf might take a long time to start, or the feature installation might fail, if the host does not have network access; you'll need to set up the appropriate proxy settings.
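The exact feature list depends on the integration being used. One commonly used set for OVSDB-based OpenStack integration on Helium is sketched below; treat it as an assumption and verify it against the OpenDaylight Helium documentation before use:

feature:install odl-base-all odl-aaa-authn odl-restconf odl-nsf-all odl-adsal-northbound odl-mdsal-apidocs odl-ovsdb-openstack odl-ovsdb-northbound odl-dlux-core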

63 Border Network Gateway

This section describes how to install and run a Border Network Gateway (BNG) on a compute node that has been prepared as described in Section 51 and Section 53. The example interface names from these sections have been kept here too. For simplicity, the BNG uses the handle_none configuration mode, which makes it work as an L2 forwarding engine. The BNG is more complex than this; users interested in exploring more of its capabilities should read https://01.org/intel-data-plane-performance-demonstrators/quick-overview.

The setup to test the functionality of the vBNG follows


631 Installation and Configuration Inside the VM

1 Execute the following command:

yum -y update

2 Disable SELinux

setenforce 0
vi /etc/selinux/config

and change it so that SELINUX=disabled.

3 Disable the firewall

systemctl disable firewalld.service
reboot

4 Edit grub default configuration

vi /etc/default/grub

Add hugepages to it:

... noirqbalance intel_idle.max_cstate=0 processor.max_cstate=0 ipv6.disable=1 default_hugepagesz=1G hugepagesz=1G hugepages=2 isolcpus=1,2,3,4

5 Rebuild grub config and reboot the system

grub2-mkconfig -o /boot/grub2/grub.cfg
reboot

6 Verify that hugepages are available in the VM

cat /proc/meminfo
HugePages_Total:    2
HugePages_Free:     2
Hugepagesize:       1048576 kB

7 Add the following to the end of ~bashrc file

# ---------------------------------------------
export RTE_SDK=/root/dpdk
export RTE_TARGET=x86_64-native-linuxapp-gcc
export OVS_DIR=/root/ovs

export RTE_UNBIND=$RTE_SDK/tools/dpdk_nic_bind.py
export DPDK_DIR=$RTE_SDK
export DPDK_BUILD=$DPDK_DIR/$RTE_TARGET
# ---------------------------------------------

8 Re-login or source that file

source ~/.bashrc

9 Install DPDK

git clone http://dpdk.org/git/dpdk
cd dpdk
git checkout v1.7.1
make install T=$RTE_TARGET
modprobe uio
insmod $RTE_SDK/$RTE_TARGET/kmod/igb_uio.ko


10 Check the PCI addresses of the 82599 cards

lspci | grep Network
00:04.0 Ethernet controller: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection (rev 01)
00:05.0 Ethernet controller: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection (rev 01)
00:06.0 Ethernet controller: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection (rev 01)
00:07.0 Ethernet controller: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection (rev 01)

11 Make sure that the correct PCI addresses are listed in the bind_to_igb_uio.sh script.
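The script is expected to bind the four 82599 ports listed above to the igb_uio driver. A minimal equivalent, assuming the PCI addresses shown in step 10 and the DPDK environment variables from the .bashrc edits, would be:

# Sketch of bind_to_igb_uio.sh under the assumptions above
modprobe uio
insmod $RTE_SDK/$RTE_TARGET/kmod/igb_uio.ko
$RTE_SDK/tools/dpdk_nic_bind.py --bind=igb_uio 00:04.0 00:05.0 00:06.0 00:07.0
$RTE_SDK/tools/dpdk_nic_bind.py --status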

12 Download BNG packages

wget https://01.org/sites/default/files/downloads/intel-data-plane-performance-demonstrators/dppd-bng-v013.zip

13 Extract DPPD BNG sources

unzip dppd-bng-v013.zip

14 Build BNG DPPD application

yum -y install ncurses-devel
cd dppd-BNG-v013
make

15 Refer to Section 633, "Extra Preparations on the Compute Node," before running the BNG application in the VM inside the compute node.

16 Make sure that the application starts

./build/dppd -f config/handle_none.cfg

The handle_none configuration passes all traffic straight through between ports, which is essentially similar to the L2 forwarding test. The config directory contains additional, more complex BNG configurations and Pktgen scripts. Additional BNG-specific workloads can be found in the dppd-BNG-v013/pktgen-scripts directory.

Following is a sample graphic of the BNG running in a VM with 2 ports


Exit the application by pressing ESC or CTRL-C

Refer to Section 632 regarding installation and running the software traffic generator

For a sanity check, users can use the pktgen wrapper script onps_pktgen-64bytes-UDP-2ports.sh to run Pktgen (on its dedicated server) in order to test the handle_none throughput for two physical and two virtual ports. You'll need to update the PKTGEN_DIR variable at the top of the file to point to the right directory, which is the following (referring to Section 632):

PKTGEN_DIR=/home/stack/git/Pktgen-DPDK
./pktgen-64bytes.sh

632 Installation and Configuration of the Back-to-Back Host (Packet Generator)

The back-to-back host can be any Intel® Xeon® processor-based system, or it can be any compute node that has been prepared using the instructions in Section 51 and Section 53. For simplicity, Intel assumes the latter was the case. Also assume that the git directory for the stack user is /home/stack/git.

1 In the git directory get the source from Github

git clone https://github.com/Pktgen/Pktgen-DPDK.git
cd Pktgen-DPDK

2 An extra package must be installed for Pktgen to compile correctly

yum -y install libpcap-devel

Pktgen comes with its own distribution of DPDK sources This bundled version of DPDK must be used Note that it contains some WindRiver specific helper libraries that are not in the default DPDK distribution which Pktgen depends on

3 The $RTE_TARGET variable must be set to a specific value Otherwise these libraries will not build

cd
vi .bashrc

Add the following three lines to the end

export RTE_SDK=$HOME/Pktgen-DPDK/dpdk
export RTE_TARGET=x86_64-pktgen-linuxapp-gcc
export PKTGEN_DIR=$HOME/Pktgen-DPDK

4 Re-login or execute the following command

source ~/.bashrc

5 Build the basic DPDK libraries and extra helpers

cd $RTE_SDK
make install T=$RTE_TARGET

6 Build Pktgen

cd examples/pktgen
make

7 Adapt the dpdk_nic_bind.py script according to the actual NICs in use so that both interfaces are bound to igb_uio and DPDK can use them. See the details of the command that follows:

./tools/dpdk_nic_bind.py --status
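Once the two interfaces to be used by Pktgen have been identified, they can be bound to igb_uio in the same way as on the compute node. The PCI addresses below are placeholders to be replaced with the ones reported by --status:

# Bind both Pktgen interfaces to igb_uio (placeholder addresses)
modprobe uio
insmod $RTE_SDK/$RTE_TARGET/kmod/igb_uio.ko
./tools/dpdk_nic_bind.py --bind=igb_uio <pci-address-1> <pci-address-2>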

8 Use onps_pktgen-64-bytes-UDP-2ports.sh from onps_server_1_2.tar.gz.


9 Now run the script as root, after the compute node has been set up as in Section 633, the BNG VM has been prepared as in Section 631, and the BNG has been started inside the VM.

633 Extra Preparations on the Compute Node

1 Do the following as the stack user:

cd /home/stack/devstack
vi local.conf

2 Comment out the following

PHYSICAL_NETWORK=physnet1
OVS_PHYSICAL_BRIDGE=br-p1p1

And at the same time add the following line right below the previous commented ones

OVS_BRIDGE_MAPPINGS=defaultbr-p1p1physnet1br-p1p2

3 Run again as stack user

./unstack.sh

./stack.sh

This causes both physical interfaces to come up and get bound to the DPDK Also a bridge is created on top of each of these interfaces

ovs-vsctl show
b52bd3ed-0f6c-45b9-ace1-846d901bed64
    Bridge br-p1p1
        Port br-p1p1
            Interface br-p1p1
                type: internal
        Port p1p1
            Interface p1p1
                type: dpdkphy
                options: {port=0}
        Port phy-br-p1p1
            Interface phy-br-p1p1
                type: patch
                options: {peer=int-br-p1p1}
    Bridge br-int
        fail_mode: secure
        Port int-br-p1p2
            Interface int-br-p1p2
                type: patch
                options: {peer=phy-br-p1p2}
        Port int-br-p1p1
            Interface int-br-p1p1
                type: patch
                options: {peer=phy-br-p1p1}
        Port br-int
            Interface br-int
                type: internal
    Bridge br-p1p2
        Port phy-br-p1p2
            Interface phy-br-p1p2
                type: patch
                options: {peer=int-br-p1p2}
        Port p1p2
            Interface p1p2
                type: dpdkphy
                options: {port=1}
        Port br-p1p2
            Interface br-p1p2


                type: internal

4 Move the p1p2 physical port under the same bridge as p1p1

ovs-vsctl del-port p1p2
ovs-vsctl add-port br-p1p1 p1p2 -- set Interface p1p2 type=dpdkphy option:port=1

5 Delete the agent of OpenStack

./rejoin-stack.sh
Ctrl-a 1
Ctrl-c
Ctrl-a d

6 Add the dpdkvhost interfaces for the VM

ovs-vsctl --no-wait add-port br-p1p1 port3 -- set Interface port3 type=dpdkvhost ofport_request=3
ovs-vsctl --no-wait add-port br-p1p1 port4 -- set Interface port4 type=dpdkvhost ofport_request=4

7 Find out the OpenFlow port numbers of the interfaces:

ovs-ofctl show br-p1p1

The output should be similar to the following. Note the number to the left of each interface, because it is the OpenFlow port number used in the flow rules below.

OFPT_FEATURES_REPLY (xid=0x2): dpid:0000286031010000
n_tables:254, n_buffers:256
capabilities: FLOW_STATS TABLE_STATS QUEUE_STATS ARP_MATCH_IP
actions: OUTPUT SET_VLAN_VID SET_VLAN_PCP STRIP_VLAN SET_DL_SRC SET_DL_DST SET_NW_SRC SET_NW_DST SET_TP_SRC SET_TP_DST
 1(phy-br-p1p1): addr:9e:ae:92:25:3c:c1, config: 0, state: 0, speed: 0 Mbps now, 0 Mbps max
 2(p1p2): addr:9e:ae:92:25:3c:c1, config: 0, state: 0, speed: 0 Mbps now, 0 Mbps max
 3(port3): addr:9e:ae:92:25:3c:c1, config: 0, state: 0, speed: 0 Mbps now, 0 Mbps max
 4(port4): addr:49:04:ff:7f:00:00, config: 0, state: 0, speed: 0 Mbps now, 0 Mbps max
 16(p1p1): addr:49:04:ff:7f:00:00, config: 0, state: 0, speed: 0 Mbps now, 0 Mbps max
 LOCAL(br-p1p1): addr:9e:ae:92:25:3c:c1, config: 0, state: 0, speed: 0 Mbps now, 0 Mbps max
OFPT_GET_CONFIG_REPLY (xid=0x4): frags=normal miss_send_len=0

8 Clean up the flow table of the bridge

ovs-ofctl del-flows br-p1p1

9 Program the flows so each physical interface forwards the packets to a dpdkvhost interface and the other way round

ovs-ofctl add-flow br-p1p1 in_port=16,dl_type=0x0800,idle_timeout=0,action=output:3
ovs-ofctl add-flow br-p1p1 in_port=3,dl_type=0x0800,idle_timeout=0,action=output:16
ovs-ofctl add-flow br-p1p1 in_port=4,dl_type=0x0800,idle_timeout=0,action=output:2
ovs-ofctl add-flow br-p1p1 in_port=2,dl_type=0x0800,idle_timeout=0,action=output:4


10 Users can now spawn their vBNG

qemu-kvm -cpu host -enable-kvm -m 4096 -smp 4,cores=4,threads=1,sockets=1 -name VM1 \
  -hda <path to the VM image file> -mem-path /dev/hugepages -mem-prealloc -vnc :2 -daemonize \
  -net nic,model=virtio,macaddr=00:1e:77:68:09:fd \
  -net tap,ifname=tap1,script=no,downscript=no \
  -netdev type=tap,id=net1,script=no,downscript=no,ifname=port3,vhost=on \
  -device virtio-net-pci,netdev=net1,mac=00:00:01:00:00:01,csum=off,gso=off,guest_tso4=off,guest_tso6=off,guest_ecn=off \
  -netdev type=tap,id=net2,script=no,downscript=no,ifname=port4,vhost=on \
  -device virtio-net-pci,netdev=net2,mac=00:00:01:00:00:02,csum=off,gso=off,guest_tso4=off,guest_tso6=off,guest_ecn=off

43

Intelreg ONP Server Reference ArchitectureSolutions Guide

Appendix A Additional OpenDaylight Information

This section describes how OpenDaylight can be used in a multi-node setup Two hosts are used one running OpenDaylight OpenStack Controller + Compute and OVS The second host is the compute node This section describes how to create a Vxlan tunnel VMs and ping from one VM to another

Note Due to a known defect in ODL httpsbugsopendaylightorgshow_bugcgiid=2469 multi-node setup could not be verified

Following is a sample localconf for OpenDaylight host

[[local|localrc]]FORCE=yes

HOST_NAME=ltname of this machinegtHOST_IP=ltip of this machinegtHOST_IP_IFACE=ltmgmt ip isolated from internetgt

PUBLIC_INTERFACE=ltisolated IP could be same as HOST_IP_IFACEgtVLAN_INTERFACE=FLAT_INTERFACE=

ADMIN_PASSWORD=passwordMYSQL_PASSWORD=passwordDATABASE_PASSWORD=passwordSERVICE_PASSWORD=passwordSERVICE_TOKEN=no-token-passwordHORIZON_PASSWORD=passwordRABBIT_PASSWORD=password

disable_service n-netdisable_service n-cpu

enable_service q-svcenable_service q-agtenable_service q-dhcpenable_service q-l3enable_service q-metaenable_service neutronenable_service horizon

LOGFILE=$DESTstackshlogSCREEN_LOGDIR=$DESTscreenSYSLOG=TrueLOGDAYS=1

ODL startQ_HOST=$HOST_IPenable_service odl-serverenable_service odl-computeODL_MGR_IP=1011107ENABLED_SERVICES+=n-apin-crtn-objn-cpun-condn-schn-novncn-cauthn-cauthnovaENABLED_SERVICES+=cinderc-apic-volc-schc-bak

Intelreg ONP Server Reference ArchitectureSolutions Guide

44

Q_PLUGIN=ml2

Q_AGENT=openvswitchQ_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitchopendaylightQ_ML2_PLUGIN_TYPE_DRIVERS=vlanflatlocalQ_ML2_TENANT_NETWORK_TYPE=vxlan

ENABLE_TENANT_TUNNELS=TrueENABLE_TENANT_VLANS=True

PHYSICAL_NETWORK=physnet1ML2_VLAN_RANGES=physnet110001010OVS_PHYSICAL_BRIDGE=br-

MULTI_HOST=True

[[post-config|$NOVA_CONF]]

disable nova security groups[DEFAULT]firewall_driver=novavirtfirewallNoopFirewallDrivernovncproxy_host=0000novncproxy_port=6080

Here is a sample localconf for Compute Node

[[local|localrc]]

FORCE=yesMULTI_HOST=True

HOST_NAME=ltname of this machinegtHOST_IP=ltip of this machinegtHOST_IP_IFACE=ltisolated interfacegtSERVICE_HOST_NAME=ltname of the controller machinegtSERVICE_HOST=ltip of controller machinegtQ_HOST=$SERVICE_HOST

MYSQL_HOST=$SERVICE_HOSTRABBIT_HOST=$SERVICE_HOST

GLANCE_HOST=$SERVICE_HOSTGLANCE_HOSTPORT=$SERVICE_HOST9292KEYSTONE_AUTH_HOST=$SERVICE_HOSTKEYSTONE_SERVICE_HOST=ltip of controller machinegt

ADMIN_PASSWORD=passwordMYSQL_PASSWORD=passwordDATABASE_PASSWORD=passwordSERVICE_PASSWORD=passwordSERVICE_TOKEN=no-token-passwordHORIZON_PASSWORD=passwordRABBIT_PASSWORD=password

disable_all_services

enable_service rabbitenable_service n-cpuenable_service q-agtenable_service odl-compute

DEST=optstackLOGFILE=$DESTstackshlogSCREEN_LOGDIR=$DESTscreenSYSLOG=TrueLOGDAYS=1

45

Intelreg ONP Server Reference ArchitectureSolutions Guide

ODL_MGR_IP=ltip of controller machinegt

Q_PLUGIN=ml2Q_AGENT=openvswitchQ_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitchopendaylightQ_ML2_PLUGIN_TYPE_DRIVERS=vxlanOVS_NUM_HUGEPAGES=8192OVS_DATAPATH_TYPE=netdev

OVDK_OVS_GIT_TAG=

ENABLE_TENANT_TUNNELS=TrueENABLE_TENANT_VLANS=True

Q_ML2_TENANT_NETWORK_TYPE=vxlanML2_VLAN_RANGES=physnet110001010PHYSICAL_NETWORK=physnet1OVS_PHYSICAL_BRIDGE=br-p1p1

[[post-config|$NOVA_CONF]][DEFAULT]firewall_driver=novavirtfirewallNoopFirewallDrivervnc_enabled=Truevncserver_listen=0000vncserver_proxyclient_address=$HOST_IP



• OpenStack Management IP address: 10.11.12.1

• User/password: stack/stack

Root User Actions

Log in as su or the root user and perform the following

1 Add the stack user to the sudoers list

echo "stack ALL=(ALL) NOPASSWD: ALL" >> /etc/sudoers

2 Edit /etc/libvirt/qemu.conf, adding or modifying the following lines

cgroup_controllers = [ "cpu", "devices", "memory", "blkio", "cpuset", "cpuacct" ]

cgroup_device_acl = ["/dev/null", "/dev/full", "/dev/zero", "/dev/random", "/dev/urandom", "/dev/ptmx", "/dev/kvm", "/dev/kqemu", "/dev/rtc", "/dev/hpet", "/dev/net/tun", "/mnt/huge", "/dev/vhost-net"]

hugetlbfs_mount = "/mnt/huge"

3 Restart the libvirt service and make sure libvirtd is active

systemctl restart libvirtd.service
systemctl status libvirtd.service

Stack User Actions

1 Login as a stack user

2 Configure the appropriate proxies (yum, http, https, and git) for package installation and make sure these proxies are functional. Note that on the controller node, localhost and its IP address should be included in the no_proxy setup (for example, export no_proxy=localhost,10.11.12.1)

3 Intel® DPDK Accelerated vSwitch patches for OpenStack

The file openstack_ovdkl02-907.zip contains the necessary patches for OpenStack, which are currently not native to OpenStack. The file can be downloaded from

https://01.org/sites/default/files/page/openstack_ovdkl02-907.zip

Place the file in the /home/stack directory and unzip it. Three patch files, devstack.patch, nova.patch, and neutron.patch, will be present after unzipping

cd /home/stack
wget https://01.org/sites/default/files/page/openstack_ovdkl02-907.zip
unzip openstack_ovdkl02-907.zip

4 Download DevStack source

git clone https://github.com/openstack-dev/devstack.git

5 Check out DevStack with Intel® DPDK Accelerated vSwitch and patch

cd /home/stack/devstack
git checkout d6f700db33aeab68916156a98971aef8cfa53a2e
patch -p1 < /home/stack/devstack.patch


6 Download and patch Nova and Neutron

sudo mkdir /opt/stack
sudo chown stack:stack /opt/stack
cd /opt/stack
git clone https://github.com/openstack/nova.git
git clone https://github.com/openstack/neutron.git
cd /opt/stack/nova
git checkout b7738bfb6c2f271d047e8f20c0b74ef647367111
patch -p1 < /home/stack/nova.patch
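The listing above applies only the Nova patch. Presumably the Neutron tree cloned in the same step is patched the same way with the neutron.patch file from the downloaded archive; a minimal sketch under that assumption:

cd /opt/stack/neutron
# assumption: neutron.patch applies at the top of the Neutron tree with -p1, like the Nova patch
patch -p1 < /home/stack/neutron.patch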

7 Create the local.conf file in /home/stack/devstack

8 Pay attention to the following in the local.conf file

a Use Rabbit for messaging services (Rabbit is on by default). In the past, Fedora only supported QPID for OpenStack; now it only supports Rabbit

b Explicitly disable the Nova compute service on the controller, because by default the Nova compute service is enabled

disable_service n-cpu

c To use Open vSwitch, specify it in the configuration for the ML2 plug-in

Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch

d Explicitly disable tenant tunneling and enable tenant VLANs, because by default tunneling is used

ENABLE_TENANT_TUNNELS=False
ENABLE_TENANT_VLANS=True

e A sample local.conf file for the controller node is as follows

# Controller node
[[local|localrc]]

FORCE=yes
ADMIN_PASSWORD=password
MYSQL_PASSWORD=password
DATABASE_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=no-token-password
HORIZON_PASSWORD=password
RABBIT_PASSWORD=password

disable_service n-net
disable_service n-cpu
enable_service q-svc
enable_service q-agt
enable_service q-dhcp
enable_service q-l3
enable_service q-meta
enable_service neutron
enable_service horizon

DEST=/opt/stack
LOGFILE=$DEST/stack.sh.log
SCREEN_LOGDIR=$DEST/screen
SYSLOG=True
LOGDAYS=1

HOST_IP_IFACE=ens2f0
PUBLIC_INTERFACE=p1p2
VLAN_INTERFACE=p1p1
FLAT_INTERFACE=p1p1


Q_AGENT=openvswitch
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch
Q_ML2_PLUGIN_TYPE_DRIVERS=vlan,flat,local

ENABLE_TENANT_TUNNELS=False
ENABLE_TENANT_VLANS=True
PHYSICAL_NETWORK=physnet1
ML2_VLAN_RANGES=physnet1:1000:1010
OVS_PHYSICAL_BRIDGE=br-p1p1
MULTI_HOST=True

[[post-config|$NOVA_CONF]]
[DEFAULT]
firewall_driver=nova.virt.firewall.NoopFirewallDriver
novncproxy_host=0.0.0.0
novncproxy_port=6080

9 Install DevStack

cd /home/stack/devstack
./stack.sh

10 For a successful installation, the following shows at the end of the screen output

stack.sh completed in XXX seconds

where XXX is the number of seconds

11 For the controller node only - Add physical port(s) to the bridge(s) created by the DevStack installation. The following example can be used to configure the two bridges br-p1p1 (for the virtual network) and br-ex (for the external network)

sudo ovs-vsctl add-port br-p1p1 p1p1
sudo ovs-vsctl add-port br-ex p1p2

12 Make sure the proper VLANs are created in the switch connecting physical port p1p1. For example, the previous local.conf specifies a VLAN range of 1000-1010, so matching VLANs 1000 to 1010 should be configured in the switch


5.3 Compute Node Setup

This section describes how to complete the setup of the compute nodes. It is assumed that the user has successfully completed the BIOS settings and operating system installation and configuration sections.

Note Please make sure to download and use the onps_server_1_2.tar.gz tarball. Start with the README file; you'll get instructions on how to use Intel's scripts to automate most of the installation steps described in this section, which saves you time.

5.3.1 Host Configuration

5.3.1.1 Using DevStack to Deploy vSwitch and OpenStack Components

General

Deploying OpenStack and Intel® DPDK Accelerated vSwitch using DevStack on a compute node follows the same procedures as on the controller node. Differences include:

• Required services are nova compute, neutron agent, and Rabbit

• Intel® DPDK Accelerated vSwitch is used in place of Open vSwitch for the neutron agent

Compute Node Installation Example

The following example uses a host for compute node installation with the following

• Hostname: sdnlab-k02

• Lab network IP address: obtained from DHCP server

• OpenStack Management IP address: 10.11.12.2

• User/password: stack/stack

Note the following

• no_proxy setup: localhost and its IP address should be included in the no_proxy setup. In addition, the hostname and IP address of the controller node should also be included. For example:

export no_proxy=localhost,10.11.12.2,sdnlab-k01,10.11.12.1

• Differences in the local.conf file:

- The service host is the controller, as are other OpenStack servers such as MySQL, Rabbit, Keystone, and Image. Therefore they should be spelled out. Using the controller node example from the previous section, the service host and its IP address should be:

SERVICE_HOST_NAME=sdnlab-k01
SERVICE_HOST=10.11.12.1

- The only OpenStack services required on compute nodes are messaging, nova compute, and neutron agent, so the local.conf might look like:

disable_all_services
enable_service rabbit
enable_service n-cpu
enable_service q-agt


- The user has the option to use ovdk or openvswitch for the neutron agent:

Q_AGENT=ovdk

or

Q_AGENT=openvswitch

Note For openvswitch, the user can specify regular or accelerated Open vSwitch (accelerated OVS). If accelerated OVS is used, the following setting should be added:

OVS_DATAPATH_TYPE=netdev

Note If both are specified in the same local.conf file, the latter one overwrites the previous one

- For the OVDK and accelerated OVS huge pages setting, specify the number of huge pages to be allocated and the mounting point (default is /mnt/huge):

OVDK_NUM_HUGEPAGES=8192

or

OVS_NUM_HUGEPAGES=8192

- For this version, Intel uses specific versions of OVDK or accelerated OVS from their respective repositories. Specify the following in the local.conf file if OVDK or accelerated OVS is used:

OVDK_GIT_TAG=6210bb0a6139b20283de115f87aa7a381b04670f
OVS_GIT_TAG=b35839f3855e3b812709c6ad1c9278f498aa9935

- Binding the physical port to the bridge is done through the following line in local.conf. For example, to bind port p1p1 to bridge br-p1p1, use:

OVS_PHYSICAL_BRIDGE=br-p1p1

- A sample local.conf file for a compute node with the ovdk agent follows:

# Compute node
[[local|localrc]]

FORCE=yes
MULTI_HOST=True

HOST_NAME=$(hostname)
HOST_IP=10.11.12.2
HOST_IP_IFACE=ens2f0
SERVICE_HOST_NAME=10.11.12.1
SERVICE_HOST=10.11.12.1

MYSQL_HOST=$SERVICE_HOST
RABBIT_HOST=$SERVICE_HOST

GLANCE_HOST=$SERVICE_HOST
GLANCE_HOSTPORT=$SERVICE_HOST:9292
KEYSTONE_AUTH_HOST=$SERVICE_HOST
KEYSTONE_SERVICE_HOST=$SERVICE_HOST

ADMIN_PASSWORD=password
MYSQL_PASSWORD=password
DATABASE_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=no-token-password
HORIZON_PASSWORD=password
RABBIT_PASSWORD=password

disable_all_services
enable_service rabbit


enable_service n-cpu
enable_service q-agt

DEST=/opt/stack
LOGFILE=$DEST/stack.sh.log
SCREEN_LOGDIR=$DEST/screen
SYSLOG=True
LOGDAYS=1

Q_AGENT=ovdk
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch
Q_ML2_PLUGIN_TYPE_DRIVERS=vlan
OVDK_NUM_HUGEPAGES=8192
OVDK_GIT_TAG=6210bb0a6139b20283de115f87aa7a381b04670f

ENABLE_TENANT_TUNNELS=False
ENABLE_TENANT_VLANS=True
ML2_VLAN_RANGES=physnet1:1000:1010
PHYSICAL_NETWORK=physnet1
OVS_PHYSICAL_BRIDGE=br-p1p1

[[post-config|$NOVA_CONF]]
[DEFAULT]
firewall_driver=nova.virt.firewall.NoopFirewallDriver
vnc_enabled=True
vncserver_listen=0.0.0.0
vncserver_proxyclient_address=$HOST_IP

- A sample local.conf file for a compute node with the accelerated OVS agent follows:

# Compute node
[[local|localrc]]

FORCE=yes
MULTI_HOST=True

HOST_NAME=$(hostname)
HOST_IP=10.11.12.2
HOST_IP_IFACE=ens2f0

SERVICE_HOST_NAME=sdnlab-k01
SERVICE_HOST=10.11.12.1

MYSQL_HOST=$SERVICE_HOST
RABBIT_HOST=$SERVICE_HOST

GLANCE_HOST=$SERVICE_HOST
GLANCE_HOSTPORT=$SERVICE_HOST:9292
KEYSTONE_AUTH_HOST=$SERVICE_HOST

KEYSTONE_SERVICE_HOST=$SERVICE_HOST

ADMIN_PASSWORD=password
MYSQL_PASSWORD=password
DATABASE_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=no-token-password
HORIZON_PASSWORD=password
RABBIT_PASSWORD=password

disable_all_services
enable_service rabbit
enable_service n-cpu
enable_service q-agt

DEST=/opt/stack


LOGFILE=$DEST/stack.sh.log
SCREEN_LOGDIR=$DEST/screen
SYSLOG=True
LOGDAYS=1

Q_AGENT=openvswitch
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch
Q_ML2_PLUGIN_TYPE_DRIVERS=vlan
OVS_NUM_HUGEPAGES=8192
OVS_DATAPATH_TYPE=netdev
OVS_GIT_TAG=b35839f3855e3b812709c6ad1c9278f498aa9935

ENABLE_TENANT_TUNNELS=False
ENABLE_TENANT_VLANS=True
ML2_VLAN_RANGES=physnet1:1000:1010
PHYSICAL_NETWORK=physnet1
OVS_PHYSICAL_BRIDGE=br-p1p1

[[post-config|$NOVA_CONF]]
[DEFAULT]
firewall_driver=nova.virt.firewall.NoopFirewallDriver
vnc_enabled=True
vncserver_listen=0.0.0.0
vncserver_proxyclient_address=$HOST_IP

5.4 vIPS

The vIPS used is Suricata, which should be installed in a VM as an rpm package as previously described. To configure it to run in inline mode (IPS), use the following:

1 Turn on IP forwarding

sysctl -w net.ipv4.ip_forward=1

2 Mangle all traffic from one vPort to the other using a netfilter queue

iptables -I FORWARD -i eth1 -o eth2 -j NFQUEUE
iptables -I FORWARD -i eth2 -o eth1 -j NFQUEUE

3 Have Suricata run in inline mode using the netfilter queue

suricata -c /etc/suricata/suricata.yaml -q 0

4 Enable ARP proxying

echo 1 > /proc/sys/net/ipv4/conf/eth1/proxy_arp
echo 1 > /proc/sys/net/ipv4/conf/eth2/proxy_arp

5.4.1 Network Configuration for non-vIPS Guests

1 Turn on IP forwarding

sysctl -w net.ipv4.ip_forward=1

2 In the source, add the route to the sink

route add -net 192.168.200.0/24 eth1

3 At the sink, add the route to the source

route add -net 192.168.100.0/24 eth1
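To confirm that traffic from the source reaches the sink through the vIPS, one can generate traffic and watch Suricata; a minimal sketch, assuming the source and sink sit in the 192.168.100.0/24 and 192.168.200.0/24 subnets configured above, that the sink answers on an illustrative address 192.168.200.10, and that Suricata writes to its default log location:

# on the source VM: send traffic toward the sink through the vIPS
ping -c 4 192.168.200.10

# on the vIPS VM: confirm packets traverse the NFQUEUE rules and check alerts
iptables -L FORWARD -v -n
tail -f /var/log/suricata/fast.log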


6.0 Testing the Setup

This section describes how to bring up the VMs in a compute node, connect them to the virtual network(s), and verify the functionality.

Note Currently it is not possible to have more than one virtual network in a multi-compute node setup, although it is possible in a single compute node setup.

6.1 Preparation with OpenStack

6.1.1 Deploying Virtual Machines

6.1.1.1 Default Settings

OpenStack comes with the following default settings:

• Tenant (Project): admin, demo

• Network:

- Private network (virtual network): 10.0.0.0/24

- Public network (external network): 172.24.4.0/24

• Image: cirros-0.3.1-x86_64

• Flavor: nano, micro, tiny, small, medium, large, xlarge

To deploy new instances (VMs) with different setups (such as a different VM image, flavor, or network), users must create their own. See below for details of how to create them.

To access the OpenStack dashboard, use a web browser (Firefox, Internet Explorer, or others) and the controller's IP address (management network). For example:

http://10.11.12.1

Login information is defined in the local.conf file. In the examples that follow, password is the password for both admin and demo users.


6.1.1.2 Customer Settings

The following examples describe how to create a custom VM image, flavor, and aggregate/availability zone using OpenStack commands. The examples assume the IP address of the controller is 10.11.12.1.

1 Create a credential file admin-cred for the admin user. The file contains the following lines:

export OS_USERNAME=admin
export OS_TENANT_NAME=admin
export OS_PASSWORD=password
export OS_AUTH_URL=http://10.11.12.1:35357/v2.0

2 Source admin-cred into the shell environment for the actions of creating the glance image, aggregate/availability zone, and flavor

source admin-cred

3 Create an OpenStack glance image. A VM image file should be ready in a location accessible by OpenStack:

glance image-create --name <image-name-to-create> --is-public=true --container-format=bare --disk-format=<format> --file=<image-file-path-name>

The following example shows the image file fedora20-x86_64-basic.qcow2 located in an NFS share mounted at /mnt/nfs/openstack/images on the controller host. The following command creates a glance image named fedora-basic with qcow2 format for public use (such that any tenant can use this glance image):

glance image-create --name fedora-basic --is-public=true --container-format=bare --disk-format=qcow2 --file=/mnt/nfs/openstack/images/fedora20-x86_64-basic.qcow2

4 Create host aggregate and availability zone

First find out the available hypervisors, and then use the information to create an aggregate/availability zone:

nova hypervisor-list
nova aggregate-create <aggregate-name> <zone-name>
nova aggregate-add-host <aggregate-name> <hypervisor-name>

The following example creates an aggregate named aggr-g06 with one availability zone named zone-g06, and the aggregate contains one hypervisor named sdnlab-g06:

nova aggregate-create aggr-g06 zone-g06
nova aggregate-add-host aggr-g06 sdnlab-g06

5 Create a flavor. A flavor is a virtual hardware configuration for the VMs; it defines the number of virtual CPUs, the size of virtual memory, and disk space, among others.

The following command creates a flavor named onps-flavor with an ID of 1001, 1024 MB virtual memory, 4 GB virtual disk space, and 1 virtual CPU:

nova flavor-create onps-flavor 1001 1024 4 1


6.1.1.3 Example - VM Deployment

The following example describes how to use a custom VM image, flavor, and aggregate to launch a VM for the demo tenant using OpenStack commands. Again, the example assumes the IP address of the controller is 10.11.12.1.

1 Create a credential file demo-cred for the demo user. The file contains the following lines:

export OS_USERNAME=demo
export OS_TENANT_NAME=demo
export OS_PASSWORD=password
export OS_AUTH_URL=http://10.11.12.1:35357/v2.0

2 Source demo-cred into the shell environment for the actions of creating the tenant network and instance (VM)

source demo-cred

3 Create a network for tenant demo. Take the following steps:

a Get tenant demo

keystone tenant-list | grep -Fw demo

The following creates a network with a name of net-demo for the tenant with ID 10618268adb64f17b266fd8fb83c960d:

neutron net-create --tenant-id 10618268adb64f17b266fd8fb83c960d net-demo

b Create a subnet

neutron subnet-create --tenant-id <demo-tenant-id> --name <subnet_name> <network-name> <net-ip-range>

The following creates a subnet with a name of sub-demo and CIDR address 192.168.2.0/24 for network net-demo:

neutron subnet-create --tenant-id 10618268adb64f17b266fd8fb83c960d --name sub-demo net-demo 192.168.2.0/24

4 Create an instance (VM) for tenant demo. Take the following steps:

a Get the name and/or ID of the image, flavor, and availability zone to be used for creating the instance

glance image-list
nova flavor-list
nova aggregate-list
neutron net-list

b Launch an instance (VM) using the information obtained from the previous step

nova boot --image <image-id> --flavor <flavor-id> --availability-zone <zone-name> --nic net-id=<network-id> <instance-name>

c The new VM should be up and running in a few minutes

5 Log into the OpenStack dashboard using the demo user credentials, click Instances under Project in the left pane, and the new VM should show in the right pane. Click the instance name to open the Instance Details view, then click Console in the top menu to access the VM as follows:


6.1.1.4 Local vIPS

Configuration

1 OpenStack brings up the VMs and connects them to the vSwitch

2 IP addresses of the VMs get configured using the DHCP server. VM1 belongs to one subnet and VM3 to a different one; VM2 has ports on both subnets

3 Flows get programmed to the vSwitch by the OpenDaylight controller (Section 6.2)

Data Path (Numbers Matching Red Circles)

1 VM1 sends a flow to VM3 through the vSwitch

2 The vSwitch forwards the flow to the first vPort of VM2 (active IPS)

3 The IPS receives the flow inspects it and (if not malicious) sends it out through its second vPort

4 The vSwitch forwards it to VM3

Figure 6-1 Local vIPS


6.1.1.5 Remote vIPS

Configuration

1 OpenStack brings up the VMs and connects them to the vSwitch

2 The IP addresses of the VMs get configured using the DHCP server

Data Path (Numbers Matching Red Circles)

1 VM1 sends a flow to VM3 through the vSwitch inside compute node 1

2 The vSwitch forwards the flow out of the first 82599 port to the first 82599 port of compute node 2

3 The vSwitch of compute node 2 forwards the flow to the first port of the vHost where the traffic gets consumed by VM1

4 The IPS receives the flow inspects it and (provided it is not malicious) sends it out through its second port of the vHost into the vSwitch of compute node 2

5 The vSwitch forwards the flow out of the second 82599 port of compute node 2 into the second port of the 82599 in compute node 1

6 The vSwitch of compute node 1 forwards the flow into the port of the vHost of VM3 where the flow gets terminated

Figure 6-2 Remote vIPS


6.1.2 Non-uniform Memory Access (NUMA) Placement and SR-IOV Pass-through for OpenStack

NUMA placement was implemented as a new feature in the OpenStack Juno release. NUMA placement enables an OpenStack administrator to pin guest systems to particular NUMA nodes for optimization. With an SR-IOV enabled network interface card, each SR-IOV port is associated with a Virtual Function (VF). OpenStack SR-IOV pass-through enables a guest to access a VF directly.

6.1.2.1 Prepare Compute Node for SR-IOV Pass-through

To enable the previous features, follow these steps to configure the compute node:

1 The server hardware must support IOMMU (Intel VT-d). To check whether IOMMU is supported, run the following command; the output should show IOMMU entries:

dmesg | grep -e IOMMU

Note IOMMU can be enabled/disabled through a BIOS setting under Advanced and then Processor

2 Enable kernel IOMMU in grub. For Fedora 20, run the commands:

sed -i 's/rhgb quiet/rhgb quiet intel_iommu=on/g' /etc/default/grub
grub2-mkconfig -o /boot/grub2/grub.cfg

3 Install necessary packages

yum install -y yajl-devel device-mapper-devel libpciaccess-devel libnl-devel dbus-devel numactl-devel python-devel

4 Install libvirt v1.2.8 or newer. The following example uses v1.2.9:

systemctl stop libvirtd

yum remove libvirt
yum remove libvirtd

wget http://libvirt.org/sources/libvirt-1.2.9.tar.gz
tar zxvf libvirt-1.2.9.tar.gz

cd libvirt-1.2.9
./autogen.sh --system --with-dbus
make
make install

systemctl start libvirtd

Make sure libvirtd is running v1.2.9

libvirtd --version

5 Install libvirt-python. The example below uses v1.2.9 to match the libvirt version:

yum remove libvirt-python

wget https://pypi.python.org/packages/source/l/libvirt-python/libvirt-python-1.2.9.tar.gz
tar zxvf libvirt-python-1.2.9.tar.gz

cd libvirt-python-1.2.9
python setup.py install


6 Modify /etc/libvirt/qemu.conf, add

/dev/vfio/vfio

to the

cgroup_device_acl list

An example follows

cgroup_device_acl = ["/dev/null", "/dev/full", "/dev/zero", "/dev/random", "/dev/urandom", "/dev/ptmx", "/dev/kvm", "/dev/kqemu", "/dev/rtc", "/dev/hpet", "/dev/net/tun", "/dev/vfio/vfio"]

7 Enable the SR-IOV virtual functions for an 82599 interface. The following example enables 2 VFs for interface p1p1:

echo 2 > /sys/class/net/p1p1/device/sriov_numvfs

To check that virtual functions are enabled

lspci -nn | grep 82599

The screen output should display the physical function and two virtual functions
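For illustration only (bus addresses and exact device strings will differ per system; the addresses below match the whitelist example used later in this section), the output looks something like:

08:00.0 Ethernet controller: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection [8086:10fb] (rev 01)
08:10.0 Ethernet controller: Intel Corporation 82599 Ethernet Controller Virtual Function [8086:10ed] (rev 01)
08:10.2 Ethernet controller: Intel Corporation 82599 Ethernet Controller Virtual Function [8086:10ed] (rev 01)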

6.1.2.2 Devstack Configurations

In the following text, the example uses a controller with IP address 10.11.12.1 and a compute node with 10.11.12.4. The PCI device vendor ID (8086) and the product IDs of the 82599 can be obtained from the output (10fb for the physical function and 10ed for the VF):

lspci -nn | grep 82599

On Controller node

1 Edit the controller local.conf. Note that the same local.conf file of Section 5.2.1.3 is used here, adding the following:

Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch,sriovnicswitch

[[post-config|$NOVA_CONF]]
[DEFAULT]
scheduler_default_filters=RamFilter,ComputeFilter,AvailabilityZoneFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,PciPassthroughFilter,NUMATopologyFilter
pci_alias={"name":"niantic","product_id":"10ed","vendor_id":"8086"}

[[post-config|$Q_PLUGIN_CONF_FILE]]
[ml2_sriov]
supported_pci_vendor_devs = 8086:10fb 8086:10ed

2 Run stack.sh

On Compute node

1 Edit /opt/stack/nova/requirements.txt, add "libvirt-python>=1.2.8"

echo "libvirt-python>=1.2.8" >> /opt/stack/nova/requirements.txt

2 Edit the compute local.conf for accelerated OVS. Note that the same local.conf file of Section 5.3.1.1 is used here.


3 Add the following:

[[post-config|$NOVA_CONF]]
[DEFAULT]
pci_passthrough_whitelist={"address":"0000:08:00.0","vendor_id":"8086","physical_network":"physnet1"}

pci_passthrough_whitelist={"address":"0000:08:10.0","vendor_id":"8086","physical_network":"physnet1"}

pci_passthrough_whitelist={"address":"0000:08:10.2","vendor_id":"8086","physical_network":"physnet1"}

4 Remove (or comment out) the following. Note that currently SR-IOV pass-through is only supported with standard OVS:

OVS_NUM_HUGEPAGES=8192
OVS_DATAPATH_TYPE=netdev
OVS_GIT_TAG=b35839f3855e3b812709c6ad1c9278f498aa9935

Run stack.sh on both the controller and compute nodes to complete the Devstack installation

6.1.2.3 Create VM with NUMA Placement and SR-IOV

1 After stacking is successful on both the controller and compute nodes, verify the PCI pass-through device(s) are in the OpenStack database:

mysql -uroot -ppassword -h 10.11.12.1 nova -e 'select * from pci_devices'

2 The output should show entry(ies) of PCI device(s) similar to the following

| 2014-11-18 19:41:14 | NULL | NULL | 0 | 1 | 3 | 0000:08:10.0 | 10ed | 8086 | type-VF | pci_0000_08_10_0 | label_8086_10ed | available | phys_function: 0000:08:00.0 | NULL | NULL | 0 |

3 Next, create a flavor, for example:

nova flavor-create numa-flavor 1001 1024 4 1

where

flavor name = numa-flavor
id = 1001
virtual memory = 1024 MB
virtual disk size = 4 GB
number of virtual CPUs = 1

4 Modify the flavor for NUMA placement with PCI pass-through:

nova flavor-key 1001 set pci_passthrough:alias=niantic:1 hw:numa_nodes=1 hw:numa_cpus.0=0 hw:numa_mem.0=1024

5 To show detailed information of the flavor:

nova flavor-show 1001

6 Create a VM numa-vm1 with the flavor numa-flavor under the default project demo. Note that the following example assumes an image fedora-basic and an availability zone zone-04 are already in place (see Section 6.1.1.2) and that the private network is the default network for the demo project:

nova boot --image <image-id> --flavor <flavor-id> --availability-zone <zone-name> --nic net-id=<network-id> numa-vm1

where numa-vm1 is the name of the VM instance to be booted


Access the VM from OpenStack Horizon; the new VM shows two virtual network interfaces. The interface with an SR-IOV VF should show a name of ensX, where X is a number, for example ens5. If a DHCP server is available for the physical interface (p1p1 in this example), the VF gets an IP address automatically; otherwise, users can assign an IP address to the interface the same way as for a standard network interface.

To verify network connectivity through a VF, users can set up two compute hosts and create a VM on each node. After obtaining IP addresses, the VMs should communicate with each other as over a normal network.
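A minimal way to exercise that path from inside the two VMs, assuming the VF shows up as ens5 in each guest and no DHCP server answers on the physical network (all addresses below are illustrative only):

# VM on the first compute host
ip addr add 192.168.50.11/24 dev ens5
ip link set ens5 up

# VM on the second compute host
ip addr add 192.168.50.12/24 dev ens5
ip link set ens5 up
ping -c 4 192.168.50.11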

6.2 Using OpenDaylight

This section describes how to download, install, and set up an OpenDaylight controller.

6.2.1 Preparing the OpenDaylight Controller

1 Download the pre-built OpenDaylight Helium-SR1 distribution:

wget http://nexus.opendaylight.org/content/repositories/opendaylight.release/org/opendaylight/integration/distribution-karaf/0.2.1-Helium-SR1/distribution-karaf-0.2.1-Helium-SR1.tar.gz

2 Extract the archive and cd into it

tar xf distribution-karaf-0.2.1-Helium-SR1.tar.gz
cd distribution-karaf-0.2.1-Helium-SR1

3 Use the bin/karaf executable to start the Karaf shell


4 Install the required features

Karaf might take a long time to start, or the feature install might fail if the host does not have network access; you'll need to set up the appropriate proxy settings. A typical feature set is sketched below.
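The required feature names are not reproduced in this text. As an illustration only (an assumption, not confirmed by this guide), a Helium controller used for OpenStack/OVSDB integration is typically prepared from the Karaf shell with something like:

feature:install odl-base-all odl-aaa-authn odl-restconf odl-nsf-all odl-adsal-northbound odl-mdsal-apidocs odl-ovsdb-openstack odl-ovsdb-northbound odl-dlux-core

Afterwards, feature:list -i can be used to confirm which features were actually installed.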

6.3 Border Network Gateway

This section describes how to install and run a Border Network Gateway on a compute node that is prepared as described in Section 5.1 and Section 5.3. The example interface names from these sections have been maintained in this section too. Also, for simplicity, the BNG is using the handle_none configuration mode, which makes it work as an L2 forwarding engine. The BNG is more complex than this, and users who are interested in exploring more of its capabilities should read https://01.org/intel-data-plane-performance-demonstrators/quick-overview

The setup to test the functionality of the vBNG follows


6.3.1 Installation and Configuration Inside the VM

1 Execute the following command:

yum -y update

2 Disable SELinux

setenforce 0
vi /etc/selinux/config

And change it so SELINUX=disabled

3 Disable the firewall

systemctl disable firewalld.service
reboot

4 Edit the grub default configuration

vi /etc/default/grub

Add hugepages to it

... noirqbalance intel_idle.max_cstate=0 processor.max_cstate=0 ipv6.disable=1 default_hugepagesz=1G hugepagesz=1G hugepages=2 isolcpus=1,2,3,4

5 Rebuild the grub config and reboot the system

grub2-mkconfig -o /boot/grub2/grub.cfg
reboot

6 Verify that hugepages are available in the VM

cat /proc/meminfo
HugePages_Total:    2
HugePages_Free:     2
Hugepagesize:  1048576 kB

7 Add the following to the end of the ~/.bashrc file

---------------------------------------------
export RTE_SDK=/root/dpdk
export RTE_TARGET=x86_64-native-linuxapp-gcc
export OVS_DIR=/root/ovs

export RTE_UNBIND=$RTE_SDK/tools/dpdk_nic_bind.py
export DPDK_DIR=$RTE_SDK
export DPDK_BUILD=$DPDK_DIR/$RTE_TARGET
---------------------------------------------

8 Re-login or source that file

. ~/.bashrc

9 Install DPDK

git clone http://dpdk.org/git/dpdk
cd dpdk
git checkout v1.7.1
make install T=$RTE_TARGET
modprobe uio
insmod $RTE_SDK/$RTE_TARGET/kmod/igb_uio.ko


10 Check the PCI addresses of the 82599 cards

lspci | grep Network
00:04.0 Ethernet controller: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection (rev 01)
00:05.0 Ethernet controller: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection (rev 01)
00:06.0 Ethernet controller: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection (rev 01)
00:07.0 Ethernet controller: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection (rev 01)

11 Make sure that the correct PCI addresses are listed in the script bind_to_igb_uio.sh

12 Download BNG packages

wget https://01.org/sites/default/files/downloads/intel-data-plane-performance-demonstrators/dppd-bng-v013.zip

13 Extract DPPD BNG sources

unzip dppd-bng-v013.zip

14 Build the BNG DPPD application

yum -y install ncurses-devel
cd dppd-BNG-v013
make

15 Refer to Section 6.3.3, "Extra Preparations on the Compute Node," before running the BNG application in the VM inside the compute node

16 Make sure that the application starts

./build/dppd -f config/handle_none.cfg

The handle_none configuration passes all traffic straight through between ports, which is essentially similar to the L2 forwarding test. The config directory contains additional, more complex BNG configurations and Pktgen scripts. Additional BNG-specific workloads can be found in the dppd-BNG-v013/pktgen-scripts directory.

Following is a sample graphic of the BNG running in a VM with 2 ports


Exit the application by pressing ESC or CTRL-C

Refer to Section 6.3.2 regarding installation and running of the software traffic generator.

For the sanity check test, users can use the Pktgen wrapper script onps_pktgen-64bytes-UDP-2ports.sh to run Pktgen (on its dedicated server) in order to test the handle_none throughput for two physical and two virtual ports. You'll need to update the PKTGEN_DIR at the top of the file to point to the right directory, which is the following (referring to Section 6.3.2):

PKTGEN_DIR=/home/stack/git/Pktgen-DPDK
pktgen-64bytes.sh

6.3.2 Installation and Configuration of the Back-to-Back Host (Packet Generator)

The back-to-back host can be any Intel® Xeon® processor-based system, or it can be any compute node that has been prepared using the instructions in Section 5.1 and Section 5.3. For simplicity, Intel assumes the latter was the case. Also assume that the git directory for the stack user is /home/stack/git.

1 In the git directory get the source from Github

git clone https://github.com/Pktgen/Pktgen-DPDK.git
cd Pktgen-DPDK

2 An extra package must be installed for Pktgen to compile correctly

yum -y install libpcap-devel

Pktgen comes with its own distribution of the DPDK sources. This bundled version of DPDK must be used. Note that it contains some WindRiver-specific helper libraries that Pktgen depends on and that are not in the default DPDK distribution.

3 The $RTE_TARGET variable must be set to a specific value; otherwise these libraries will not build

cd
vi .bashrc

Add the following three lines to the end

export RTE_SDK=$HOME/Pktgen-DPDK/dpdk
export RTE_TARGET=x86_64-pktgen-linuxapp-gcc
export PKTGEN_DIR=$HOME/Pktgen-DPDK

4 Re-login or execute the following command

. ~/.bashrc

5 Build the basic DPDK libraries and extra helpers

cd $RTE_SDK
make install T=$RTE_TARGET

6 Build Pktgen

cd examples/pktgen
make

7 Adapt the dpdk_nic_bind.py script according to the actual NICs in use, so that both interfaces are bound to igb_uio and DPDK can use them. See the details of the command that follows:

./tools/dpdk_nic_bind.py --status

8 Use onps_pktgen-64-bytes-UDP-2ports.sh from onps_server_1_2.tar.gz


9 Now run the script as root, after the compute node has been set up as in Section 6.3.3, the BNG VM has been prepared as in Section 6.3.1, and the BNG has been started inside the VM.

6.3.3 Extra Preparations on the Compute Node

1 Do the following as the stack user:

cd /home/stack/devstack
vi local.conf

2 Comment out the following

PHYSICAL_NETWORK=physnet1
OVS_PHYSICAL_BRIDGE=br-p1p1

And at the same time add the following line right below the previous commented ones

OVS_BRIDGE_MAPPINGS=default:br-p1p1,physnet1:br-p1p2

3 Run again as the stack user

./unstack.sh

./stack.sh

This causes both physical interfaces to come up and get bound to DPDK. Also, a bridge is created on top of each of these interfaces:

ovs-vsctl show
b52bd3ed-0f6c-45b9-ace1-846d901bed64
    Bridge br-p1p1
        Port br-p1p1
            Interface br-p1p1
                type: internal
        Port p1p1
            Interface p1p1
                type: dpdkphy
                options: port=0
        Port phy-br-p1p1
            Interface phy-br-p1p1
                type: patch
                options: peer=int-br-p1p1
    Bridge br-int
        fail_mode: secure
        Port int-br-p1p2
            Interface int-br-p1p2
                type: patch
                options: peer=phy-br-p1p2
        Port int-br-p1p1
            Interface int-br-p1p1
                type: patch
                options: peer=phy-br-p1p1
        Port br-int
            Interface br-int
                type: internal
    Bridge br-p1p2
        Port phy-br-p1p2
            Interface phy-br-p1p2
                type: patch
                options: peer=int-br-p1p2
        Port p1p2
            Interface p1p2
                type: dpdkphy
                options: port=1
        Port br-p1p2
            Interface br-p1p2
                type: internal

4 Move the p1p2 physical port under the same bridge as p1p1

ovs-vsctl del-port p1p2
ovs-vsctl add-port br-p1p1 p1p2 -- set Interface p1p2 type=dpdkphy options:port=1

5 Delete the OpenStack agent

./rejoin-stack.sh
ctrl-a 1
ctrl-c
ctrl-a d

6 Add the dpdkvhost interfaces for the VM

ovs-vsctl --no-wait add-port br-p1p1 port3 -- set Interface port3 type=dpdkvhost ofport_request=3
ovs-vsctl --no-wait add-port br-p1p1 port4 -- set Interface port4 type=dpdkvhost ofport_request=4

7 Find out the OpenFlow port numbers of the interfaces

ovs-ofctl show br-p1p1

The output should be similar to the following. Note the number on the left of each interface, because it is the OpenFlow port number used in the flow rules below.

OFPT_FEATURES_REPLY (xid=0x2): dpid:0000286031010000
n_tables:254, n_buffers:256
capabilities: FLOW_STATS TABLE_STATS QUEUE_STATS ARP_MATCH_IP
actions: OUTPUT SET_VLAN_VID SET_VLAN_PCP STRIP_VLAN SET_DL_SRC SET_DL_DST SET_NW_SRC SET_NW_DST SET_TP_SRC SET_TP_DST
 1(phy-br-p1p1): addr:9e:ae:92:25:3c:c1
     config: 0, state: 0, speed: 0 Mbps now, 0 Mbps max
 2(p1p2): addr:9e:ae:92:25:3c:c1
     config: 0, state: 0, speed: 0 Mbps now, 0 Mbps max
 3(port3): addr:9e:ae:92:25:3c:c1
     config: 0, state: 0, speed: 0 Mbps now, 0 Mbps max
 4(port4): addr:49:04:ff:7f:00:00
     config: 0, state: 0, speed: 0 Mbps now, 0 Mbps max
 16(p1p1): addr:49:04:ff:7f:00:00
     config: 0, state: 0, speed: 0 Mbps now, 0 Mbps max
 LOCAL(br-p1p1): addr:9e:ae:92:25:3c:c1
     config: 0, state: 0, speed: 0 Mbps now, 0 Mbps max
OFPT_GET_CONFIG_REPLY (xid=0x4): frags=normal miss_send_len=0

8 Clean up the flow table of the bridge

ovs-ofctl del-flows br-p1p1

9 Program the flows so each physical interface forwards the packets to a dpdkvhost interface and the other way round

ovs-ofctl add-flow br-p1p1 in_port=16,dl_type=0x0800,idle_timeout=0,action=output:3
ovs-ofctl add-flow br-p1p1 in_port=3,dl_type=0x0800,idle_timeout=0,action=output:16
ovs-ofctl add-flow br-p1p1 in_port=4,dl_type=0x0800,idle_timeout=0,action=output:2
ovs-ofctl add-flow br-p1p1 in_port=2,dl_type=0x0800,idle_timeout=0,action=output:4


10 Users can now spawn their vBNG

qemu-kvm -cpu host -enable-kvm -m 4096 -smp 4,cores=4,threads=1,sockets=1 -name VM1 \
  -hda <path to the VM image file> -mem-path /dev/hugepages -mem-prealloc -vnc :2 -daemonize \
  -net nic,model=virtio,macaddr=00:1e:77:68:09:fd \
  -net tap,ifname=tap1,script=no,downscript=no \
  -netdev type=tap,id=net1,script=no,downscript=no,ifname=port3,vhost=on \
  -device virtio-net-pci,netdev=net1,mac=00:00:01:00:00:01,csum=off,gso=off,guest_tso4=off,guest_tso6=off,guest_ecn=off \
  -netdev type=tap,id=net2,script=no,downscript=no,ifname=port4,vhost=on \
  -device virtio-net-pci,netdev=net2,mac=00:00:01:00:00:02,csum=off,gso=off,guest_tso4=off,guest_tso6=off,guest_ecn=off


Appendix A Additional OpenDaylight Information

This section describes how OpenDaylight can be used in a multi-node setup. Two hosts are used: one running OpenDaylight, OpenStack controller + compute, and OVS; the second host is the compute node. This section describes how to create a VXLAN tunnel, create VMs, and ping from one VM to another.

Note Due to a known defect in ODL (https://bugs.opendaylight.org/show_bug.cgi?id=2469), the multi-node setup could not be verified.

Following is a sample local.conf for the OpenDaylight host:

[[local|localrc]]
FORCE=yes

HOST_NAME=<name of this machine>
HOST_IP=<ip of this machine>
HOST_IP_IFACE=<mgmt ip isolated from internet>

PUBLIC_INTERFACE=<isolated IP could be same as HOST_IP_IFACE>
VLAN_INTERFACE=
FLAT_INTERFACE=

ADMIN_PASSWORD=password
MYSQL_PASSWORD=password
DATABASE_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=no-token-password
HORIZON_PASSWORD=password
RABBIT_PASSWORD=password

disable_service n-net
disable_service n-cpu

enable_service q-svc
enable_service q-agt
enable_service q-dhcp
enable_service q-l3
enable_service q-meta
enable_service neutron
enable_service horizon

LOGFILE=$DEST/stack.sh.log
SCREEN_LOGDIR=$DEST/screen
SYSLOG=True
LOGDAYS=1

# ODL start
Q_HOST=$HOST_IP
enable_service odl-server
enable_service odl-compute
ODL_MGR_IP=10.11.10.7
ENABLED_SERVICES+=,n-api,n-crt,n-obj,n-cpu,n-cond,n-sch,n-novnc,n-cauth,nova
ENABLED_SERVICES+=,cinder,c-api,c-vol,c-sch,c-bak


Q_PLUGIN=ml2

Q_AGENT=openvswitch
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch,opendaylight
Q_ML2_PLUGIN_TYPE_DRIVERS=vlan,flat,local
Q_ML2_TENANT_NETWORK_TYPE=vxlan

ENABLE_TENANT_TUNNELS=True
ENABLE_TENANT_VLANS=True

PHYSICAL_NETWORK=physnet1
ML2_VLAN_RANGES=physnet1:1000:1010
OVS_PHYSICAL_BRIDGE=br-

MULTI_HOST=True

[[post-config|$NOVA_CONF]]

# disable nova security groups
[DEFAULT]
firewall_driver=nova.virt.firewall.NoopFirewallDriver
novncproxy_host=0.0.0.0
novncproxy_port=6080

Here is a sample local.conf for the compute node:

[[local|localrc]]

FORCE=yes
MULTI_HOST=True

HOST_NAME=<name of this machine>
HOST_IP=<ip of this machine>
HOST_IP_IFACE=<isolated interface>
SERVICE_HOST_NAME=<name of the controller machine>
SERVICE_HOST=<ip of controller machine>
Q_HOST=$SERVICE_HOST

MYSQL_HOST=$SERVICE_HOST
RABBIT_HOST=$SERVICE_HOST

GLANCE_HOST=$SERVICE_HOST
GLANCE_HOSTPORT=$SERVICE_HOST:9292
KEYSTONE_AUTH_HOST=$SERVICE_HOST
KEYSTONE_SERVICE_HOST=<ip of controller machine>

ADMIN_PASSWORD=password
MYSQL_PASSWORD=password
DATABASE_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=no-token-password
HORIZON_PASSWORD=password
RABBIT_PASSWORD=password

disable_all_services

enable_service rabbit
enable_service n-cpu
enable_service q-agt
enable_service odl-compute

DEST=/opt/stack
LOGFILE=$DEST/stack.sh.log
SCREEN_LOGDIR=$DEST/screen
SYSLOG=True
LOGDAYS=1


ODL_MGR_IP=<ip of controller machine>

Q_PLUGIN=ml2
Q_AGENT=openvswitch
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch,opendaylight
Q_ML2_PLUGIN_TYPE_DRIVERS=vxlan
OVS_NUM_HUGEPAGES=8192
OVS_DATAPATH_TYPE=netdev

OVDK_OVS_GIT_TAG=

ENABLE_TENANT_TUNNELS=True
ENABLE_TENANT_VLANS=True

Q_ML2_TENANT_NETWORK_TYPE=vxlan
ML2_VLAN_RANGES=physnet1:1000:1010
PHYSICAL_NETWORK=physnet1
OVS_PHYSICAL_BRIDGE=br-p1p1

[[post-config|$NOVA_CONF]]
[DEFAULT]
firewall_driver=nova.virt.firewall.NoopFirewallDriver
vnc_enabled=True
vncserver_listen=0.0.0.0
vncserver_proxyclient_address=$HOST_IP

A.1 Create VMs using DevStack Horizon GUI

After starting the OpenDaylight controller as described in Section 6.1, run a stack on the controller and compute nodes.

Log in to http://<control node ip address>:8080 to start the Horizon GUI.

Verify that the node shows up in the following GUI

Create a new VXLAN network:

1 Click on the Networks tab

2 Click on the Create Network button

3 Enter the Network name then click Next


4 Enter the subnet information then click Next


5 Add additional information then click Next

6 Click the Create button

7 Create a VM instance by clicking the Launch Instances button


8 Click on the Details tab to enter VM details


9 Click on the Networking tab then enter network information

VMs will now be created.
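The same network and instance can also be created from the command line instead of the Horizon GUI; a minimal sketch, assuming the demo credentials from Section 6.1.1.3 and illustrative names (net-vxlan, sub-vxlan, vxlan-vm1). Since the tenant network type is vxlan in this configuration, a plain net-create yields a VXLAN network:

source demo-cred
neutron net-create net-vxlan
neutron subnet-create --name sub-vxlan net-vxlan 10.100.0.0/24
nova boot --image <image-id> --flavor <flavor-id> --nic net-id=<net-vxlan-id> vxlan-vm1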

Instead of disabling the OVSDB neutron service by removing the OVSDB neutron bundle file, it is possible to disable the bundle from the OSGi console. However, there does not appear to be a way to make this persistent, so it must be done each time the controller restarts.


Once the controller is up and running, connect to the OSGi console. The ss command displays all of the bundles that are installed and their status; adding a string (or strings) filters the list of bundles. List the OVSDB bundles:

osgi> ss ovs
Framework is launched.

id    State       Bundle
106   ACTIVE      org.opendaylight.ovsdb.northbound_0.5.0
112   ACTIVE      org.opendaylight.ovsdb_0.5.0
262   ACTIVE      org.opendaylight.ovsdb.neutron_0.5.0

Note that there are three OVSDB bundles running (the OVSDB neutron bundle has not been removed in this case)

Disable the OVSDB neutron bundle and then list the OVSDB bundles again

osgi> stop 262
osgi> ss ovs
Framework is launched.

id    State       Bundle
106   ACTIVE      org.opendaylight.ovsdb.northbound_0.5.0
112   ACTIVE      org.opendaylight.ovsdb_0.5.0
262   RESOLVED    org.opendaylight.ovsdb.neutron_0.5.0

Now the OVSDB neutron bundle is in the RESOLVED state which means that it is not active


Appendix B BNG as an Appliance

Please download the latest BNG application from https://01.org/intel-data-plane-performance-demonstrators/downloads. More details about how the BNG works can be found at https://01.org/intel-data-plane-performance-demonstrators/quick-overview


Appendix C Glossary

Acronym Description

ATR Application Targeted Routing

COTS Commercial Off‐The-Shelf

DPI Deep Packet Inspection

FCS Frame Check Sequence

GRE Generic Routing Encapsulation

GRO Generic Receive Offload

IOMMU InputOutput Memory Management Unit

Kpps Kilo packets per second

KVM Kernel-based Virtual Machine

LRO Large Receive Offload

MSI Message Signaled Interrupt

MPLS Multi-protocol Label Switching

Mpps Million packets per second

NIC Network Interface Card

pps Packets per second

QAT Quick Assist Technology

QinQ VLAN stacking (802.1ad)

RA Reference Architecture

RSC Receive Side Coalescing

RSS Receive Side Scaling

SP Service Provider

SR-IOV Single root IO Virtualization

TCO Total Cost of Ownership

TSO TCP Segmentation Offload


Appendix D References

Document Name Source

Internet Protocol version 4: http://www.ietf.org/rfc/rfc791.txt

Internet Protocol version 6: http://www.faqs.org/rfc/rfc2460.txt

Intel® 82599 10 Gigabit Ethernet Controller Datasheet: http://www.intel.com/content/www/us/en/ethernet-controllers/82599-10-gbe-controller-datasheet.html

Intel DDIO: https://www-ssl.intel.com/content/www/us/en/io/direct-data-i-o.html

Bandwidth Sharing Fairness: http://www.intel.com/content/www/us/en/network-adapters/10-gigabit-network-adapters/10-gbe-ethernet-flexible-port-partitioning-brief.html

Design Considerations for efficient network applications with Intel® multi-core processor-based systems on Linux: http://download.intel.com/design/intarch/papers/324176.pdf

OpenFlow with Intel 82599: http://ftp.sunet.se/pub/Linux/distributions/bifrost/seminars/workshop-2011-03-31/Openflow_1103031.pdf

Wu, W., DeMar, P. & Crawford, M. (2012). A Transport-Friendly NIC for Multicore/Multiprocessor Systems. IEEE Transactions on Parallel and Distributed Systems, vol. 23, no. 4, April 2012: http://lss.fnal.gov/archive/2010/pub/fermilab-pub-10-327-cd.pdf

Why does Flow Director Cause Packet Reordering: http://arxiv.org/ftp/arxiv/papers/1106/1106.0443.pdf

IA packet processing: http://www.intel.com/p/en_US/embedded/hwsw/technology/packet-processing

High Performance Packet Processing on Cloud Platforms using Linux with Intel® Architecture: http://networkbuilders.intel.com/docs/network_builders_RA_packet_processing.pdf

Packet Processing Performance of Virtualized Platforms with Linux and Intel® Architecture: http://networkbuilders.intel.com/docs/network_builders_RA_NFV.pdf

DPDK: http://www.intel.com/go/dpdk

Intel® DPDK Accelerated vSwitch: https://01.org/packet-processing


LEGAL

By using this document in addition to any agreements you have with Intel you accept the terms set forth below

You may not use or facilitate the use of this document in connection with any infringement or other legal analysis concerning Intel products described herein You agree to grant Intel a non-exclusive royalty-free license to any patent claim thereafter drafted which includes subject matter disclosed herein

INFORMATION IN THIS DOCUMENT IS PROVIDED IN CONNECTION WITH INTEL PRODUCTS NO LICENSE EXPRESS OR IMPLIED BY ESTOPPEL OR OTHERWISE TO ANY INTELLECTUAL PROPERTY RIGHTS IS GRANTED BY THIS DOCUMENT EXCEPT AS PROVIDED IN INTELS TERMS AND CONDITIONS OF SALE FOR SUCH PRODUCTS INTEL ASSUMES NO LIABILITY WHATSOEVER AND INTEL DISCLAIMS ANY EXPRESS OR IMPLIED WARRANTY RELATING TO SALE ANDOR USE OF INTEL PRODUCTS INCLUDING LIABILITY OR WARRANTIES RELATING TO FITNESS FOR A PARTICULAR PURPOSE MERCHANTABILITY OR INFRINGEMENT OF ANY PATENT COPYRIGHT OR OTHER INTELLECTUAL PROPERTY RIGHT

Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors

Performance tests such as SYSmark and MobileMark are measured using specific computer systems components software operations and functions Any change to any of those factors may cause the results to vary You should consult other information and performance tests to assist you in fully evaluating your contemplated purchases including the performance of that product when combined with other products

The products described in this document may contain design defects or errors known as errata which may cause the product to deviate from published specifications Current characterized errata are available on request Contact your local Intel sales office or your distributor to obtain the latest specifications and before placing your product order

Intel technologies may require enabled hardware specific software or services activation Check with your system manufacturer or retailer Tests document performance of components on a particular test in specific systems Differences in hardware software or configuration will affect actual performance Consult other sources of information to evaluate performance as you consider your purchase For more complete information about performance and benchmark results visit httpwwwintelcomperformance

All products computer systems dates and figures specified are preliminary based on current expectations and are subject to change without notice Results have been estimated or simulated using internal Intel analysis or architecture simulation or modeling and provided to you for informational purposes Any differences in your system hardware software or configuration may affect your actual performance

No computer system can be absolutely secure Intel does not assume any liability for lost or stolen data or systems or any damages resulting from such losses

Intel does not control or audit third-party web sites referenced in this document You should visit the referenced web site and confirm whether referenced data are accurate

Intel Corporation may have patents or pending patent applications trademarks copyrights or other intellectual property rights that relate to the presented subject matter The furnishing of documents and other materials and information does not provide any license express or implied by estoppel or otherwise to any such patents trademarks copyrights or other intellectual property rights

copy 2014 Intel Corporation All rights reserved Intel the Intel logo Core Xeon and others are trademarks of Intel Corporation in the US andor other countries Other names and brands may be claimed as the property of others

  • Intelreg Open Network Platform Server Reference Architecture (Release 12)
    • Revision History
    • Contents
    • 10 Audience and Purpose
    • 20 Summary
      • 21 Network Services Examples
        • 211 Suricata (Next Generation IDSIPS engine)
        • 212 vBNG (Broadband Network Gateway)
            • 30 Hardware Components
            • 40 Software Versions
              • 41 Obtaining Software Ingredients
                • 50 Installation and Configuration Guide
                  • 51 Instructions Common to Compute and Controller Nodes
                    • 511 BIOS Settings
                    • 512 Operating System Installation and Configuration
                      • 5121 Getting the Fedora 20 DVD
                      • 5122 Fedora 20 Installation
                      • 5123 Additional Packages Installation and Upgrade
                      • 5124 Disable and Enable Services
                          • 52 Controller Node Setup
                            • 521 OpenStack (Juno)
                              • 5211 Network Requirements
                              • 5212 Storage Requirements
                              • 5213 OpenStack Installation Procedures
                                  • 53 Compute Node Setup
                                    • 531 Host Configuration
                                      • 5311 Using DevStack to Deploy vSwitch and OpenStack Components
                                          • 54 vIPS
                                            • 541 Network Configuration for non-vIPS Guests
                                                • 60 Testing the Setup
                                                  • 61 Preparation with OpenStack
                                                    • 611 Deploying Virtual Machines
                                                      • 6111 Default Settings
                                                      • 6112 Customer Settings
                                                      • 6113 Example mdash VM Deployment
                                                      • 6114 Local vIPS
                                                      • 6115 Remote vIPS
                                                        • 612 Non-uniform Memory Access (Numa) Placement and SR-IOV Pass-through for OpenStack
                                                          • 6121 Prepare Compute Node for SR-IOV Pass-through
                                                          • 6122 Devstack Configurations
                                                          • 6123 Create VM with Numa Placement and SR-IOV
                                                              • 62 Using OpenDaylight
                                                                • 621 Preparing the OpenDaylightController
                                                                  • 63 Border Network Gateway
                                                                    • 631 Installation and Configuration Inside the VM
                                                                    • 632 Installation and Configuration of the Back-to-Back Host (Packet Generator)
                                                                    • 633 Extra Preparations on the Compute Node
                                                                        • Appendix A Additional OpenDaylight Information
                                                                          • A1 Create VMs using DevStack Horizon GUI
                                                                            • Appendix B BNG as an Appliance
                                                                            • Appendix C Glossary
                                                                            • Appendix D References
                                                                            • LEGAL
Page 21: Intel Open Network Platform Server Reference Architecture ... · Intel® ONP Server Reference Architecture Solutions Guide 1.0 Audience and Purpose The primary audiences for this

21

Intelreg ONP Server Reference ArchitectureSolutions Guide

6 Download and patch Nova and Neutron

sudo mkdir /opt/stack
sudo chown stack:stack /opt/stack
cd /opt/stack
git clone https://github.com/openstack/nova.git
git clone https://github.com/openstack/neutron.git
cd /opt/stack/nova
git checkout b7738bfb6c2f271d047e8f20c0b74ef647367111
patch -p1 < /home/stack/nova.patch

7 Create the localconf file in /home/stack/devstack

8 Pay attention to the following in the localconf file

a Use Rabbit for messaging services (Rabbit is on by default). In the past, Fedora only supported QPID for OpenStack; now it only supports Rabbit.

b Explicitly disable the Nova compute service on the controller, because it is enabled by default:

disable_service n-cpu

c To use Open vSwitch, specify it in the ML2 plug-in configuration:

Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch

d Explicitly disable tenant tunneling and enable tenant VLANs, because tunneling is used by default:

ENABLE_TENANT_TUNNELS=False
ENABLE_TENANT_VLANS=True

e A sample localconf file for the controller node follows:

# Controller node
[[local|localrc]]

FORCE=yes
ADMIN_PASSWORD=password
MYSQL_PASSWORD=password
DATABASE_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=no-token-password
HORIZON_PASSWORD=password
RABBIT_PASSWORD=password

disable_service n-net
disable_service n-cpu
enable_service q-svc
enable_service q-agt
enable_service q-dhcp
enable_service q-l3
enable_service q-meta
enable_service neutron
enable_service horizon

DEST=/opt/stack
LOGFILE=$DEST/stack.sh.log
SCREEN_LOGDIR=$DEST/screen
SYSLOG=True
LOGDAYS=1

HOST_IP_IFACE=ens2f0
PUBLIC_INTERFACE=p1p2
VLAN_INTERFACE=p1p1
FLAT_INTERFACE=p1p1

Q_AGENT=openvswitch
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch
Q_ML2_PLUGIN_TYPE_DRIVERS=vlan,flat,local

ENABLE_TENANT_TUNNELS=False
ENABLE_TENANT_VLANS=True
PHYSICAL_NETWORK=physnet1
ML2_VLAN_RANGES=physnet1:1000:1010
OVS_PHYSICAL_BRIDGE=br-p1p1
MULTI_HOST=True

[[post-config|$NOVA_CONF]]
[DEFAULT]
firewall_driver=nova.virt.firewall.NoopFirewallDriver
novncproxy_host=0.0.0.0
novncproxy_port=6080

9 Install DevStack

cd /home/stack/devstack
./stack.sh

10 For a successful installation, the following shows at the end of the screen output:

stack.sh completed in XXX seconds

where XXX is the number of seconds

11 For the controller node only: add physical port(s) to the bridge(s) created by the DevStack installation. The following example can be used to configure the two bridges br-p1p1 (for the virtual network) and br-ex (for the external network); a quick verification follows step 12.

sudo ovs-vsctl add-port br-p1p1 p1p1
sudo ovs-vsctl add-port br-ex p1p2

12 Make sure the proper VLANs are created in the switch connecting physical port p1p1. For example, the previous localconf specifies a VLAN range of 1000-1010, so matching VLANs 1000 to 1010 should be configured in the switch.
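To confirm the ports added in step 11 are attached to the expected bridges, the port list of each bridge can be checked with the standard Open vSwitch CLI (a quick sanity check using the bridge names from this example):

ovs-vsctl list-ports br-p1p1
ovs-vsctl list-ports br-ex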

53 Compute Node Setup

This section describes how to complete the setup of the compute nodes. It is assumed that the user has successfully completed the BIOS settings and operating system installation and configuration sections.

Note: Make sure to download and use the onps_server_1_2.tar.gz tarball. Start with the README file; it provides instructions on how to use Intel's scripts to automate most of the installation steps described in this section, which saves time.

531 Host Configuration

5311 Using DevStack to Deploy vSwitch and OpenStack Components

General

Deploying OpenStack and Intel® DPDK Accelerated vSwitch using DevStack on a compute node follows the same procedures as on the controller node. Differences include:

• Required services are nova compute, neutron agent, and Rabbit.

• Intel® DPDK Accelerated vSwitch is used in place of Open vSwitch for the neutron agent.

Compute Node Installation Example

The following example uses a host for compute node installation with the following:

• Hostname: sdnlab-k02

• Lab network IP address: obtained from the DHCP server

• OpenStack management IP address: 10.11.12.2

• User/password: stack/stack

Note the following

• No_proxy setup: Localhost and its IP address should be included in the no_proxy setup. In addition, the hostname and IP address of the controller node should also be included. For example:

export no_proxy=localhost,10.11.12.2,sdnlab-k01,10.11.12.1

• Differences in the localconf file:

- The service host is the controller, as are other OpenStack servers such as MySQL, Rabbit, Keystone, and Image; therefore they should be spelled out. Using the controller node example in the previous section, the service host and its IP address should be:

SERVICE_HOST_NAME=sdnlab-k01
SERVICE_HOST=10.11.12.1

- The only OpenStack services required in compute nodes are messaging, nova compute, and neutron agent, so the localconf might look like:

disable_all_services
enable_service rabbit
enable_service n-cpu
enable_service q-agt

- The user has the option to use ovdk or openvswitch for the neutron agent:

Q_AGENT=ovdk

or

Q_AGENT=openvswitch

Note: For openvswitch, the user can specify regular or accelerated Open vSwitch (accelerated OVS). If accelerated OVS is used, the following setting should be added:

OVS_DATAPATH_TYPE=netdev

Note: If both are specified in the same localconf file, the later one overwrites the previous one.

- For the OVDK and accelerated OVS huge pages settings, specify the number of huge pages to be allocated and the mount point (default is /mnt/huge):

OVDK_NUM_HUGEPAGES=8192

or

OVS_NUM_HUGEPAGES=8192

- For this version, Intel uses specific versions of OVDK and accelerated OVS from their respective repositories. Specify the following in the localconf file if OVDK or accelerated OVS is used:

OVDK_GIT_TAG=6210bb0a6139b20283de115f87aa7a381b04670f
OVS_GIT_TAG=b35839f3855e3b812709c6ad1c9278f498aa9935

- Binding the physical port to the bridge is done through the following line in localconf. For example, to bind port p1p1 to bridge br-p1p1, use:

OVS_PHYSICAL_BRIDGE=br-p1p1

- A sample localconf file for a compute node with the ovdk agent follows:

# Compute node
[[local|localrc]]

FORCE=yes
MULTI_HOST=True

HOST_NAME=$(hostname)
HOST_IP=10.11.12.2
HOST_IP_IFACE=ens2f0
SERVICE_HOST_NAME=10.11.12.1
SERVICE_HOST=10.11.12.1

MYSQL_HOST=$SERVICE_HOST
RABBIT_HOST=$SERVICE_HOST

GLANCE_HOST=$SERVICE_HOST
GLANCE_HOSTPORT=$SERVICE_HOST:9292
KEYSTONE_AUTH_HOST=$SERVICE_HOST
KEYSTONE_SERVICE_HOST=$SERVICE_HOST

ADMIN_PASSWORD=password
MYSQL_PASSWORD=password
DATABASE_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=no-token-password
HORIZON_PASSWORD=password
RABBIT_PASSWORD=password

disable_all_services
enable_service rabbit
enable_service n-cpu
enable_service q-agt

DEST=/opt/stack
LOGFILE=$DEST/stack.sh.log
SCREEN_LOGDIR=$DEST/screen
SYSLOG=True
LOGDAYS=1

Q_AGENT=ovdk
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch
Q_ML2_PLUGIN_TYPE_DRIVERS=vlan
OVDK_NUM_HUGEPAGES=8192
OVDK_GIT_TAG=6210bb0a6139b20283de115f87aa7a381b04670f

ENABLE_TENANT_TUNNELS=False
ENABLE_TENANT_VLANS=True
ML2_VLAN_RANGES=physnet1:1000:1010
PHYSICAL_NETWORK=physnet1
OVS_PHYSICAL_BRIDGE=br-p1p1

[[post-config|$NOVA_CONF]]
[DEFAULT]
firewall_driver=nova.virt.firewall.NoopFirewallDriver
vnc_enabled=True
vncserver_listen=0.0.0.0
vncserver_proxyclient_address=$HOST_IP

- A sample localconf file for a compute node with the accelerated OVS agent follows:

# Compute node
[[local|localrc]]

FORCE=yes
MULTI_HOST=True

HOST_NAME=$(hostname)
HOST_IP=10.11.12.2
HOST_IP_IFACE=ens2f0

SERVICE_HOST_NAME=sdnlab-k01
SERVICE_HOST=10.11.12.1

MYSQL_HOST=$SERVICE_HOST
RABBIT_HOST=$SERVICE_HOST

GLANCE_HOST=$SERVICE_HOST
GLANCE_HOSTPORT=$SERVICE_HOST:9292
KEYSTONE_AUTH_HOST=$SERVICE_HOST
KEYSTONE_SERVICE_HOST=$SERVICE_HOST

ADMIN_PASSWORD=password
MYSQL_PASSWORD=password
DATABASE_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=no-token-password
HORIZON_PASSWORD=password
RABBIT_PASSWORD=password

disable_all_services
enable_service rabbit
enable_service n-cpu
enable_service q-agt

DEST=/opt/stack
LOGFILE=$DEST/stack.sh.log
SCREEN_LOGDIR=$DEST/screen
SYSLOG=True
LOGDAYS=1

Q_AGENT=openvswitch
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch
Q_ML2_PLUGIN_TYPE_DRIVERS=vlan
OVS_NUM_HUGEPAGES=8192
OVS_DATAPATH_TYPE=netdev
OVS_GIT_TAG=b35839f3855e3b812709c6ad1c9278f498aa9935

ENABLE_TENANT_TUNNELS=False
ENABLE_TENANT_VLANS=True
ML2_VLAN_RANGES=physnet1:1000:1010
PHYSICAL_NETWORK=physnet1
OVS_PHYSICAL_BRIDGE=br-p1p1

[[post-config|$NOVA_CONF]]
[DEFAULT]
firewall_driver=nova.virt.firewall.NoopFirewallDriver
vnc_enabled=True
vncserver_listen=0.0.0.0
vncserver_proxyclient_address=$HOST_IP

54 vIPS

The vIPS used is Suricata, which should be installed as an RPM package in a VM as previously described. To configure it to run in inline mode (IPS), use the following steps (a consolidated script sketch follows them):

1 Turn on IP forwarding:

sysctl -w net.ipv4.ip_forward=1

2 Mangle all traffic from one vPort to the other using a netfilter queue:

iptables -I FORWARD -i eth1 -o eth2 -j NFQUEUE
iptables -I FORWARD -i eth2 -o eth1 -j NFQUEUE

3 Have Suricata run in inline mode using the netfilter queue:

suricata -c /etc/suricata/suricata.yaml -q 0

4 Enable ARP proxying:

echo 1 > /proc/sys/net/ipv4/conf/eth1/proxy_arp
echo 1 > /proc/sys/net/ipv4/conf/eth2/proxy_arp
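The four steps above can also be applied in one shot with a short script; this is a sketch only, assuming the same interface names (eth1, eth2) and the default Suricata configuration path shown above:

#!/bin/bash
# Enable forwarding between the two vPorts
sysctl -w net.ipv4.ip_forward=1
# Divert traffic between eth1 and eth2 into netfilter queue 0
iptables -I FORWARD -i eth1 -o eth2 -j NFQUEUE
iptables -I FORWARD -i eth2 -o eth1 -j NFQUEUE
# Answer ARP on behalf of the hosts behind the opposite port
echo 1 > /proc/sys/net/ipv4/conf/eth1/proxy_arp
echo 1 > /proc/sys/net/ipv4/conf/eth2/proxy_arp
# Start Suricata in inline (NFQUEUE) mode
suricata -c /etc/suricata/suricata.yaml -q 0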

541 Network Configuration for non-vIPS Guests

1 Turn on IP forwarding:

sysctl -w net.ipv4.ip_forward=1

2 In the source, add the route to the sink:

route add -net 192.168.200.0/24 eth1

3 At the sink, add the route to the source (a quick connectivity check follows these steps):

route add -net 192.168.100.0/24 eth1
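With the routes in place, a ping from the source guest toward the sink's subnet should succeed and traverse the vIPS. The destination address below is an assumed example host on the 192.168.200.0/24 subnet; substitute the sink's actual address:

ping -c 3 192.168.200.10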

60 Testing the Setup

This section describes how to bring up the VMs in a compute node, connect them to the virtual network(s), and verify the functionality.

Note: Currently it is not possible to have more than one virtual network in a multi-compute-node setup, although it is possible in a single-compute-node setup.

61 Preparation with OpenStack

611 Deploying Virtual Machines

6111 Default Settings

OpenStack comes with the following default settings:

• Tenant (Project): admin, demo

• Network:

- Private network (virtual network): 10.0.0.0/24

- Public network (external network): 172.24.4.0/24

• Image: cirros-0.3.1-x86_64

• Flavor: nano, micro, tiny, small, medium, large, xlarge

To deploy new instances (VMs) with different setups (such as a different VM image, flavor, or network), users must create their own. See below for details on how to create them.

To access the OpenStack dashboard, use a web browser (Firefox, Internet Explorer, or others) and the controller's IP address (management network). For example:

http://10.11.12.1

Login information is defined in the localconf file. In the examples that follow, password is the password for both the admin and demo users.

6112 Customer Settings

The following examples describe how to create a custom VM image, flavor, and aggregate/availability zone using OpenStack commands. The examples assume the IP address of the controller is 10.11.12.1.

1 Create a credential file admin-cred for the admin user. The file contains the following lines:

export OS_USERNAME=admin
export OS_TENANT_NAME=admin
export OS_PASSWORD=password
export OS_AUTH_URL=http://10.11.12.1:35357/v2.0

2 Source admin-cred into the shell environment for the actions of creating the glance image, aggregate/availability zone, and flavor:

source admin-cred

3 Create an OpenStack glance image. A VM image file should be ready in a location accessible by OpenStack:

glance image-create --name <image-name-to-create> --is-public=true --container-format=bare --disk-format=<format> --file=<image-file-path-name>

The following example assumes the image file fedora20-x86_64-basic.qcow2 is located in an NFS share mounted at /mnt/nfs/openstack/images on the controller host. The command creates a glance image named fedora-basic in qcow2 format for public use (that is, any tenant can use this glance image):

glance image-create --name fedora-basic --is-public=true --container-format=bare --disk-format=qcow2 --file=/mnt/nfs/openstack/images/fedora20-x86_64-basic.qcow2

4 Create a host aggregate and availability zone.

First find out the available hypervisors, then use that information to create the aggregate/availability zone:

nova hypervisor-list
nova aggregate-create <aggregate-name> <zone-name>
nova aggregate-add-host <aggregate-name> <hypervisor-name>

The following example creates an aggregate named aggr-g06 with one availability zone named zone-g06; the aggregate contains one hypervisor named sdnlab-g06:

nova aggregate-create aggr-g06 zone-g06
nova aggregate-add-host aggr-g06 sdnlab-g06

5 Create a flavor. A flavor is a virtual hardware configuration for the VMs; it defines the number of virtual CPUs, the size of virtual memory, and the disk space, among others.

The following command creates a flavor named onps-flavor with an ID of 1001, 1024 MB virtual memory, 4 GB virtual disk space, and 1 virtual CPU (a verification example follows this step):

nova flavor-create onps-flavor 1001 1024 4 1
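To confirm the image, aggregate, and flavor created above are registered, they can be listed with the corresponding OpenStack clients (a simple verification using the names from the examples above):

glance image-list | grep fedora-basic
nova aggregate-list
nova flavor-list | grep onps-flavor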

6113 Example — VM Deployment

The following example describes how to use a custom VM image, flavor, and aggregate to launch a VM for the demo tenant using OpenStack commands. Again, the example assumes the IP address of the controller is 10.11.12.1.

1 Create a credential file demo-cred for the demo user. The file contains the following lines:

export OS_USERNAME=demo
export OS_TENANT_NAME=demo
export OS_PASSWORD=password
export OS_AUTH_URL=http://10.11.12.1:35357/v2.0

2 Source demo-cred into the shell environment for the actions of creating the tenant network and instance (VM):

source demo-cred

3 Create a network for tenant demo. Take the following steps:

a Get the tenant demo ID:

keystone tenant-list | grep -Fw demo

The following creates a network named net-demo for the tenant with ID 10618268adb64f17b266fd8fb83c960d:

neutron net-create --tenant-id 10618268adb64f17b266fd8fb83c960d net-demo

b Create a subnet:

neutron subnet-create --tenant-id <demo-tenant-id> --name <subnet_name> <network-name> <net-ip-range>

The following creates a subnet named sub-demo with CIDR address 192.168.2.0/24 for network net-demo:

neutron subnet-create --tenant-id 10618268adb64f17b266fd8fb83c960d --name sub-demo net-demo 192.168.2.0/24

4 Create an instance (VM) for tenant demo. Take the following steps:

a Get the name and/or ID of the image, flavor, and availability zone to be used for creating the instance:

glance image-list
nova flavor-list
nova aggregate-list
neutron net-list

b Launch an instance (VM) using the information obtained from the previous step:

nova boot --image <image-id> --flavor <flavor-id> --availability-zone <zone-name> --nic net-id=<network-id> <instance-name>

c The new VM should be up and running in a few minutes (a CLI status check is shown after step 5).

5 Log into the OpenStack dashboard using the demo user credentials and click Instances under Project in the left pane; the new VM should show in the right pane. Click the instance name to open the Instance Details view, then click Console in the top menu to access the VM.
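The instance state can also be checked from the command line; the commands below use the credentials sourced in step 2, and the instance name is whatever was passed to nova boot:

nova list
nova show <instance-name>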


6114 Local vIPS

Configuration

1 OpenStack brings up the VMs and connects them to the vSwitch

2 IP addresses of the VMs get configured using the DHCP server. VM1 belongs to one subnet and VM3 to a different one; VM2 has ports on both subnets.

3 Flows get programmed to the vSwitch by the OpenDaylight controller (Section 62)

Data Path (Numbers Matching Red Circles)

1 VM1 sends a flow to VM3 through the vSwitch

2 The vSwitch forwards the flow to the first vPort of VM2 (active IPS)

3 The IPS receives the flow, inspects it, and (if not malicious) sends it out through its second vPort.

4 The vSwitch forwards it to VM3

Figure 6-1 Local vIPS


6115 Remote vIPS

Configuration

1 OpenStack brings up the VMs and connects them to the vSwitch

2 The IP addresses of the VMs get configured using the DHCP server

Data Path (Numbers Matching Red Circles)

1 VM1 sends a flow to VM3 through the vSwitch inside compute node 1

2 The vSwitch forwards the flow out of the first 82599 port to the first 82599 port of compute node 2

3 The vSwitch of compute node 2 forwards the flow to the first vHost port of the IPS VM, where the traffic gets consumed.

4 The IPS receives the flow, inspects it, and (provided it is not malicious) sends it out through its second vHost port into the vSwitch of compute node 2.

5 The vSwitch forwards the flow out of the second 82599 port of compute node 2 into the second port of the 82599 in compute node 1

6 The vSwitch of compute node 1 forwards the flow into the port of the vHost of VM3 where the flow gets terminated

Figure 6-2 Remote vIPS

612 Non-uniform Memory Access (Numa) Placement and SR-IOV Pass-through for OpenStack

NUMA placement was implemented as a new feature in the OpenStack Juno release. It enables an OpenStack administrator to pin guest systems to particular NUMA nodes for optimization. With an SR-IOV enabled network interface card, each SR-IOV port is associated with a Virtual Function (VF); OpenStack SR-IOV pass-through enables a guest to access a VF directly.

6121 Prepare Compute Node for SR-IOV Pass-through

To enable the previous features, follow these steps to configure the compute node:

1 The server hardware must support IOMMU or Intel VT-d. To check whether IOMMU is supported, run the following command; the output should show IOMMU entries:

dmesg | grep -e IOMMU

Note: IOMMU can be enabled/disabled through a BIOS setting under Advanced and then Processor.

2 Enable kernel IOMMU in grub. For Fedora 20, run the following commands:

sed -i 's/rhgb quiet/rhgb quiet intel_iommu=on/g' /etc/default/grub
grub2-mkconfig -o /boot/grub2/grub.cfg

3 Install necessary packages

yum install -y yajl-devel device-mapper-devel libpciaccess-devel libnl-devel dbus-devel numactl-devel python-devel

4 Install libvirt v1.2.8 or newer. The following example uses v1.2.9:

systemctl stop libvirtd

yum remove libvirt
yum remove libvirtd

wget http://libvirt.org/sources/libvirt-1.2.9.tar.gz
tar zxvf libvirt-1.2.9.tar.gz

cd libvirt-1.2.9
./autogen.sh --system --with-dbus
make
make install

systemctl start libvirtd

Make sure libvirtd is running v1.2.9:

libvirtd --version

5 Install libvirt-python. The example below uses v1.2.9 to match the libvirt version:

yum remove libvirt-python

wget https://pypi.python.org/packages/source/l/libvirt-python/libvirt-python-1.2.9.tar.gz
tar zxvf libvirt-python-1.2.9.tar.gz

cd libvirt-python-1.2.9
python setup.py install

6 Modify /etc/libvirt/qemu.conf to add

/dev/vfio/vfio

to the

cgroup_device_acl list

An example follows:

cgroup_device_acl = [
   "/dev/null", "/dev/full", "/dev/zero",
   "/dev/random", "/dev/urandom",
   "/dev/ptmx", "/dev/kvm", "/dev/kqemu",
   "/dev/rtc", "/dev/hpet", "/dev/net/tun",
   "/dev/vfio/vfio"
]

7 Enable SR-IOV virtual functions for an 82599 interface. The following example enables 2 VFs for interface p1p1 (see the note after this step on persisting the setting):

echo 2 > /sys/class/net/p1p1/device/sriov_numvfs

To check that the virtual functions are enabled:

lspci -nn | grep 82599

The screen output should display the physical function and two virtual functions.
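Two practical notes, offered as a sketch rather than as required steps: after the reboot following step 2, the kernel command line can be checked for the intel_iommu=on option, and the VF count written in step 7 does not survive a reboot, so it can be reapplied from a boot-time script such as /etc/rc.d/rc.local (path assumed for Fedora 20):

cat /proc/cmdline | grep intel_iommu
# Re-create the VFs at boot; assumes the interface name p1p1 used above
echo 'echo 2 > /sys/class/net/p1p1/device/sriov_numvfs' >> /etc/rc.d/rc.local
chmod +x /etc/rc.d/rc.local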

6122 Devstack Configurations

In the following text, the example uses a controller with IP address 10.11.12.1 and a compute node with 10.11.12.4. The PCI device vendor ID (8086) and the product IDs of the 82599 can be obtained from the output of the following command (10fb for the physical function and 10ed for the VF):

lspci -nn | grep 82599

On the Controller node:

1 Edit the controller localconf. Note that the same localconf file of Section 5.2.1.3 is used here, adding the following:

Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch,sriovnicswitch

[[post-config|$NOVA_CONF]]
[DEFAULT]
scheduler_default_filters=RamFilter,ComputeFilter,AvailabilityZoneFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,PciPassthroughFilter,NUMATopologyFilter
pci_alias={"name":"niantic","product_id":"10ed","vendor_id":"8086"}

[[post-config|$Q_PLUGIN_CONF_FILE]]
[ml2_sriov]
supported_pci_vendor_devs = 8086:10fb 8086:10ed

2 Run stack.sh.

On the Compute node:

1 Edit /opt/stack/nova/requirements.txt to add "libvirt-python>=1.2.8":

echo "libvirt-python>=1.2.8" >> /opt/stack/nova/requirements.txt

2 Edit the compute localconf for accelerated OVS. Note that the same localconf file of Section 5.3.1.1 is used here.

3 Add the following:

[[post-config|$NOVA_CONF]]
[DEFAULT]
pci_passthrough_whitelist={"address":"0000:08:00.0","vendor_id":"8086","physical_network":"physnet1"}

pci_passthrough_whitelist={"address":"0000:08:10.0","vendor_id":"8086","physical_network":"physnet1"}

pci_passthrough_whitelist={"address":"0000:08:10.2","vendor_id":"8086","physical_network":"physnet1"}

4 Remove (or comment out) the following. Note that currently SR-IOV pass-through is only supported with standard OVS:

OVS_NUM_HUGEPAGES=8192
OVS_DATAPATH_TYPE=netdev
OVS_GIT_TAG=b35839f3855e3b812709c6ad1c9278f498aa9935

Run stack.sh on both the controller and compute nodes to complete the DevStack installation.

6123 Create VM with Numa Placement and SR-IOV

1 After stacking is successful on both the controller and compute nodes, verify the PCI pass-through device(s) are in the OpenStack database:

mysql -uroot -ppassword -h 10.11.12.1 nova -e 'select * from pci_devices'

2 The output should show entry(ies) of PCI device(s) similar to the following

| 2014-11-18 194114 | NULL | NULL | 0 | 1 | 3 | 000008100 | 10ed | 8086 | type-VF | pci_0000_08_10_0 | label_8086_10ed | available | phys_function 000008000 | NULL | NULL | 0 |

3 Next, create a flavor, for example:

nova flavor-create numa-flavor 1001 1024 4 1

where

flavor name = numa-flavor
id = 1001
virtual memory = 1024 MB
virtual disk size = 4 GB
number of virtual CPUs = 1

4 Modify the flavor for NUMA placement with PCI pass-through:

nova flavor-key 1001 set "pci_passthrough:alias"="niantic:1" hw:numa_nodes=1 hw:numa_cpus.0=0 hw:numa_mem.0=1024

5 To show detailed information about the flavor:

nova flavor-show 1001

6 Create a VM numa-vm1 with the flavor numa-flavor under the default project demo. Note that the following example assumes an image fedora-basic and an availability zone zone-04 are already in place (see Section 6.1.1.2) and that private is the default network for the demo project. A concrete example command follows this step.

nova boot --image <image-id> --flavor <flavor-id> --availability-zone <zone-name> --nic net-id=<network-id> numa-vm1

where numa-vm1 is the name of the VM instance to be booted.
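For illustration, with the fedora-basic image, the numa-flavor created above (ID 1001), and the zone-04 availability zone, the command might look like the following; the network UUID is a placeholder, not a value from this setup:

nova boot --image fedora-basic --flavor 1001 --availability-zone zone-04 --nic net-id=<uuid-of-demo-network> numa-vm1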

Access the VM from OpenStack Horizon; the new VM shows two virtual network interfaces. The interface backed by an SR-IOV VF should show a name of ensX, where X is a number, for example ens5. If a DHCP server is available for the physical interface (p1p1 in this example), the VF gets an IP address automatically; otherwise, users can assign an IP address to the interface in the same way as for a standard network interface.

To verify network connectivity through a VF, users can set up two compute hosts and create a VM on each node. After obtaining IP addresses, the VMs should be able to communicate with each other as over a normal network.

62 Using OpenDaylight

This section describes how to download, install, and set up an OpenDaylight controller.

621 Preparing the OpenDaylight Controller

1 Download the pre-built OpenDaylight Helium-SR1 distribution:

wget http://nexus.opendaylight.org/content/repositories/opendaylight.release/org/opendaylight/integration/distribution-karaf/0.2.1-Helium-SR1/distribution-karaf-0.2.1-Helium-SR1.tar.gz

2 Extract the archive and cd into it:

tar xf distribution-karaf-0.2.1-Helium-SR1.tar.gz
cd distribution-karaf-0.2.1-Helium-SR1

3 Use the bin/karaf executable to start the Karaf shell.

4 Install the required features (an example feature:install invocation follows this step).

Karaf might take a long time to start, or the feature install might fail, if the host does not have network access. You'll need to set up the appropriate proxy settings.
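As a sketch of step 4, the OVSDB/OpenStack integration features commonly used with Helium can be installed from the Karaf shell as follows; the exact feature list depends on the deployment and should be treated as an assumption rather than a mandated set:

feature:install odl-base-all odl-aaa-authn odl-restconf odl-nsf-all odl-adsal-northbound odl-mdsal-apidocs odl-ovsdb-openstack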

63 Border Network Gateway

This section describes how to install and run a Border Network Gateway (BNG) on a compute node that is prepared as described in Section 5.1 and Section 5.3. The example interface names from those sections have been kept in this section as well. Also, for simplicity, the BNG uses the handle_none configuration mode, which makes it work as an L2 forwarding engine. The BNG is more complex than this, and users interested in exploring more of its capabilities should read https://01.org/intel-data-plane-performance-demonstrators/quick-overview.

The setup to test the functionality of the vBNG follows

631 Installation and Configuration Inside the VM

1 Execute the following command:

yum -y update

2 Disable SELinux:

setenforce 0
vi /etc/selinux/config

and change the setting to SELINUX=disabled.

3 Disable the firewall:

systemctl disable firewalld.service
reboot

4 Edit the grub default configuration:

vi /etc/default/grub

Add hugepages to it:

... noirqbalance intel_idle.max_cstate=0 processor.max_cstate=0 ipv6.disable=1 default_hugepagesz=1G hugepagesz=1G hugepages=2 isolcpus=1,2,3,4

5 Rebuild the grub config and reboot the system:

grub2-mkconfig -o /boot/grub2/grub.cfg
reboot

6 Verify that hugepages are available in the VM:

cat /proc/meminfo
HugePages_Total:    2
HugePages_Free:     2
Hugepagesize:       1048576 kB

7 Add the following to the end of the ~/.bashrc file:

---------------------------------------------
export RTE_SDK=/root/dpdk
export RTE_TARGET=x86_64-native-linuxapp-gcc
export OVS_DIR=/root/ovs

export RTE_UNBIND=$RTE_SDK/tools/dpdk_nic_bind.py
export DPDK_DIR=$RTE_SDK
export DPDK_BUILD=$DPDK_DIR/$RTE_TARGET
---------------------------------------------

8 Re-login or source that file:

source ~/.bashrc

9 Install DPDK:

git clone http://dpdk.org/git/dpdk
cd dpdk
git checkout v1.7.1
make install T=$RTE_TARGET
modprobe uio
insmod $RTE_SDK/$RTE_TARGET/kmod/igb_uio.ko


10 Check the PCI addresses of the 82599 cards:

lspci | grep Network
00:04.0 Ethernet controller: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection (rev 01)
00:05.0 Ethernet controller: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection (rev 01)
00:06.0 Ethernet controller: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection (rev 01)
00:07.0 Ethernet controller: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection (rev 01)

11 Make sure that the correct PCI addresses are listed in the script bind_to_igb_uio.sh.

12 Download the BNG package:

wget https://01.org/sites/default/files/downloads/intel-data-plane-performance-demonstrators/dppd-bng-v013.zip

13 Extract the DPPD BNG sources:

unzip dppd-bng-v013.zip

14 Build the BNG DPPD application:

yum -y install ncurses-devel
cd dppd-BNG-v013
make

15 Refer to Section 6.3.3, "Extra Preparations on the Compute Node," before running the BNG application in the VM inside the compute node.

16 Make sure that the application starts:

./build/dppd -f config/handle_none.cfg

The handle_none configuration passes all traffic straight through between ports, which is essentially similar to an L2 forwarding test. The config directory contains additional, more complex BNG configurations and Pktgen scripts. Additional BNG-specific workloads can be found in the dppd-BNG-v013/pktgen-scripts directory.

Following is a sample graphic of the BNG running in a VM with 2 ports

Exit the application by pressing ESC or CTRL-C.

Refer to Section 6.3.2 regarding installing and running the software traffic generator.

For the sanity check test, users can use the pktgen wrapper script onps_pktgen-64bytes-UDP-2ports.sh to run Pktgen (on its dedicated server) in order to test the handle_none throughput for two physical and two virtual ports. You'll need to update the PKTGEN_DIR variable at the top of the file to point to the right directory, which is the following (referring to Section 6.3.2):

PKTGEN_DIR=/home/stack/git/Pktgen-DPDK/pktgen-64bytes.sh

632 Installation and Configuration of the Back-to-Back Host (Packet Generator)

The back-to-back host can be any Intel® Xeon® processor-based system, or it can be any compute node that has been prepared using the instructions in Section 5.1 and Section 5.3. For simplicity, Intel assumes the latter is the case. Also assume that the git directory for the stack user is /home/stack/git.

1 In the git directory, get the source from GitHub:

git clone https://github.com/Pktgen/Pktgen-DPDK.git
cd Pktgen-DPDK

2 An extra package must be installed for Pktgen to compile correctly:

yum -y install libpcap-devel

Pktgen comes with its own distribution of the DPDK sources. This bundled version of DPDK must be used; note that it contains some Wind River-specific helper libraries, which Pktgen depends on, that are not in the default DPDK distribution.

3 The $RTE_TARGET variable must be set to a specific value; otherwise these libraries will not build.

cd
vi .bashrc

Add the following three lines to the end:

export RTE_SDK=$HOME/Pktgen-DPDK/dpdk
export RTE_TARGET=x86_64-pktgen-linuxapp-gcc
export PKTGEN_DIR=$HOME/Pktgen-DPDK

4 Re-login or execute the following command:

source ~/.bashrc

5 Build the basic DPDK libraries and extra helpers:

cd $RTE_SDK
make install T=$RTE_TARGET

6 Build Pktgen:

cd examples/pktgen
make

7 Use the dpdk_nic_bind.py script according to the actual NICs in use, so both interfaces are bound to igb_uio and DPDK can use them. See the details of the command that follows:

./tools/dpdk_nic_bind.py --status

8 Use onps_pktgen-64bytes-UDP-2ports.sh from onps_server_1_2.tar.gz.

9 Now run the script as root, after the compute node has been set up as in Section 6.3.3, the BNG VM has been prepared as in Section 6.3.1, and the BNG has been started inside the VM.

633 Extra Preparations on the Compute Node

1 Do the following as the stack user:

cd /home/stack/devstack
vi local.conf

2 Comment out the following

PHYSICAL_NETWORK=physnet1
OVS_PHYSICAL_BRIDGE=br-p1p1

At the same time, add the following line right below the previously commented ones:

OVS_BRIDGE_MAPPINGS=default:br-p1p1,physnet1:br-p1p2

3 Run the following again as the stack user:

./unstack.sh

./stack.sh

This causes both physical interfaces to come up and get bound to DPDK. Also, a bridge is created on top of each of these interfaces:

ovs-vsctl show
b52bd3ed-0f6c-45b9-ace1-846d901bed64
    Bridge br-p1p1
        Port br-p1p1
            Interface br-p1p1
                type: internal
        Port p1p1
            Interface p1p1
                type: dpdkphy
                options: {port="0"}
        Port phy-br-p1p1
            Interface phy-br-p1p1
                type: patch
                options: {peer=int-br-p1p1}
    Bridge br-int
        fail_mode: secure
        Port int-br-p1p2
            Interface int-br-p1p2
                type: patch
                options: {peer=phy-br-p1p2}
        Port int-br-p1p1
            Interface int-br-p1p1
                type: patch
                options: {peer=phy-br-p1p1}
        Port br-int
            Interface br-int
                type: internal
    Bridge br-p1p2
        Port phy-br-p1p2
            Interface phy-br-p1p2
                type: patch
                options: {peer=int-br-p1p2}
        Port p1p2
            Interface p1p2
                type: dpdkphy
                options: {port="1"}
        Port br-p1p2
            Interface br-p1p2
                type: internal

4 Move the p1p2 physical port under the same bridge as p1p1

ovs-vsctl del-port p1p2
ovs-vsctl add-port br-p1p1 p1p2 -- set Interface p1p2 type=dpdkphy option:port=1

5 Stop the OpenStack agent in the DevStack screen session:

./rejoin-stack.sh
ctrl-a 1
ctrl-c
ctrl-a d

6 Add the dpdkvhost interfaces for the VM:

ovs-vsctl --no-wait add-port br-p1p1 port3 -- set Interface port3 type=dpdkvhost ofport_request=3
ovs-vsctl --no-wait add-port br-p1p1 port4 -- set Interface port4 type=dpdkvhost ofport_request=4

7 Find out the OpenFlow port numbers of the attached interfaces:

ovs-ofctl show br-p1p1

The output should be similar to the following. Note the number on the left of each interface; it is the port number used when programming the flows below.

OFPT_FEATURES_REPLY (xid=0x2): dpid:0000286031010000
n_tables:254, n_buffers:256
capabilities: FLOW_STATS TABLE_STATS QUEUE_STATS ARP_MATCH_IP
actions: OUTPUT SET_VLAN_VID SET_VLAN_PCP STRIP_VLAN SET_DL_SRC SET_DL_DST SET_NW_SRC SET_NW_DST SET_TP_SRC SET_TP_DST
 1(phy-br-p1p1): addr:9e:ae:92:25:3c:c1, config: 0, state: 0, speed: 0 Mbps now, 0 Mbps max
 2(p1p2): addr:9e:ae:92:25:3c:c1, config: 0, state: 0, speed: 0 Mbps now, 0 Mbps max
 3(port3): addr:9e:ae:92:25:3c:c1, config: 0, state: 0, speed: 0 Mbps now, 0 Mbps max
 4(port4): addr:49:04:ff:7f:00:00, config: 0, state: 0, speed: 0 Mbps now, 0 Mbps max
 16(p1p1): addr:49:04:ff:7f:00:00, config: 0, state: 0, speed: 0 Mbps now, 0 Mbps max
 LOCAL(br-p1p1): addr:9e:ae:92:25:3c:c1, config: 0, state: 0, speed: 0 Mbps now, 0 Mbps max
OFPT_GET_CONFIG_REPLY (xid=0x4): frags=normal miss_send_len=0

8 Clean up the flow table of the bridge:

ovs-ofctl del-flows br-p1p1

9 Program the flows so each physical interface forwards packets to a dpdkvhost interface and vice versa (a flow-table check is shown after step 10):

ovs-ofctl add-flow br-p1p1 in_port=16,dl_type=0x0800,idle_timeout=0,action=output:3
ovs-ofctl add-flow br-p1p1 in_port=3,dl_type=0x0800,idle_timeout=0,action=output:16
ovs-ofctl add-flow br-p1p1 in_port=4,dl_type=0x0800,idle_timeout=0,action=output:2
ovs-ofctl add-flow br-p1p1 in_port=2,dl_type=0x0800,idle_timeout=0,action=output:4


10 Users can now spawn their vBNG

qemu-kvm -cpu host -enable-kvm -m 4096 -smp 4,cores=4,threads=1,sockets=1 -name VM1 \
 -hda <path to the VM image file> -mem-path /dev/hugepages -mem-prealloc -vnc :2 -daemonize \
 -net nic,model=virtio,macaddr=00:1e:77:68:09:fd -net tap,ifname=tap1,script=no,downscript=no \
 -netdev type=tap,id=net1,script=no,downscript=no,ifname=port3,vhost=on \
 -device virtio-net-pci,netdev=net1,mac=00:00:01:00:00:01,csum=off,gso=off,guest_tso4=off,guest_tso6=off,guest_ecn=off \
 -netdev type=tap,id=net2,script=no,downscript=no,ifname=port4,vhost=on \
 -device virtio-net-pci,netdev=net2,mac=00:00:01:00:00:02,csum=off,gso=off,guest_tso4=off,guest_tso6=off,guest_ecn=off
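Once the flows from step 9 are in place and traffic is running, the flow table and its packet counters can be inspected to confirm the physical-to-vhost wiring (a quick check using the bridge name from this example):

ovs-ofctl dump-flows br-p1p1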

Appendix A Additional OpenDaylight Information

This section describes how OpenDaylight can be used in a multi-node setup. Two hosts are used: one running OpenDaylight, the OpenStack controller + compute, and OVS; the second host is a compute node. This section describes how to create a VXLAN tunnel, create VMs, and ping from one VM to another.

Note: Due to a known defect in ODL (https://bugs.opendaylight.org/show_bug.cgi?id=2469), the multi-node setup could not be verified.

Following is a sample localconf for the OpenDaylight host:

[[local|localrc]]
FORCE=yes

HOST_NAME=<name of this machine>
HOST_IP=<ip of this machine>
HOST_IP_IFACE=<mgmt ip isolated from internet>

PUBLIC_INTERFACE=<isolated IP could be same as HOST_IP_IFACE>
VLAN_INTERFACE=
FLAT_INTERFACE=

ADMIN_PASSWORD=password
MYSQL_PASSWORD=password
DATABASE_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=no-token-password
HORIZON_PASSWORD=password
RABBIT_PASSWORD=password

disable_service n-net
disable_service n-cpu

enable_service q-svc
enable_service q-agt
enable_service q-dhcp
enable_service q-l3
enable_service q-meta
enable_service neutron
enable_service horizon

LOGFILE=$DEST/stack.sh.log
SCREEN_LOGDIR=$DEST/screen
SYSLOG=True
LOGDAYS=1

# ODL start
Q_HOST=$HOST_IP
enable_service odl-server
enable_service odl-compute
ODL_MGR_IP=10.11.10.7
ENABLED_SERVICES+=,n-api,n-crt,n-obj,n-cpu,n-cond,n-sch,n-novnc,n-cauth,n-cauth,nova
ENABLED_SERVICES+=,cinder,c-api,c-vol,c-sch,c-bak

Q_PLUGIN=ml2

Q_AGENT=openvswitch
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch,opendaylight
Q_ML2_PLUGIN_TYPE_DRIVERS=vlan,flat,local
Q_ML2_TENANT_NETWORK_TYPE=vxlan

ENABLE_TENANT_TUNNELS=True
ENABLE_TENANT_VLANS=True

PHYSICAL_NETWORK=physnet1
ML2_VLAN_RANGES=physnet1:1000:1010
OVS_PHYSICAL_BRIDGE=br-

MULTI_HOST=True

[[post-config|$NOVA_CONF]]

# Disable nova security groups
[DEFAULT]
firewall_driver=nova.virt.firewall.NoopFirewallDriver
novncproxy_host=0.0.0.0
novncproxy_port=6080

Here is a sample localconf for Compute Node

[[local|localrc]]

FORCE=yes
MULTI_HOST=True

HOST_NAME=<name of this machine>
HOST_IP=<ip of this machine>
HOST_IP_IFACE=<isolated interface>
SERVICE_HOST_NAME=<name of the controller machine>
SERVICE_HOST=<ip of controller machine>
Q_HOST=$SERVICE_HOST

MYSQL_HOST=$SERVICE_HOST
RABBIT_HOST=$SERVICE_HOST

GLANCE_HOST=$SERVICE_HOST
GLANCE_HOSTPORT=$SERVICE_HOST:9292
KEYSTONE_AUTH_HOST=$SERVICE_HOST
KEYSTONE_SERVICE_HOST=<ip of controller machine>

ADMIN_PASSWORD=password
MYSQL_PASSWORD=password
DATABASE_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=no-token-password
HORIZON_PASSWORD=password
RABBIT_PASSWORD=password

disable_all_services

enable_service rabbit
enable_service n-cpu
enable_service q-agt
enable_service odl-compute

DEST=/opt/stack
LOGFILE=$DEST/stack.sh.log
SCREEN_LOGDIR=$DEST/screen
SYSLOG=True
LOGDAYS=1

ODL_MGR_IP=<ip of controller machine>

Q_PLUGIN=ml2
Q_AGENT=openvswitch
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch,opendaylight
Q_ML2_PLUGIN_TYPE_DRIVERS=vxlan
OVS_NUM_HUGEPAGES=8192
OVS_DATAPATH_TYPE=netdev

OVDK_OVS_GIT_TAG=

ENABLE_TENANT_TUNNELS=True
ENABLE_TENANT_VLANS=True

Q_ML2_TENANT_NETWORK_TYPE=vxlan
ML2_VLAN_RANGES=physnet1:1000:1010
PHYSICAL_NETWORK=physnet1
OVS_PHYSICAL_BRIDGE=br-p1p1

[[post-config|$NOVA_CONF]]
[DEFAULT]
firewall_driver=nova.virt.firewall.NoopFirewallDriver
vnc_enabled=True
vncserver_listen=0.0.0.0
vncserver_proxyclient_address=$HOST_IP

A1 Create VMs using DevStack Horizon GUI

After starting the OpenDaylight controller as described in Section 6.1, run stack.sh on the controller and compute nodes.

Log in to http://<control node ip address>:8080 to start the Horizon GUI.

Verify that the node shows up in the GUI.

Create a new VXLAN network:

1 Click on the Networks tab

2 Click on the Create Network button

3 Enter the Network name then click Next


4 Enter the subnet information then click Next


5 Add additional information then click Next

6 Click the Create button

7 Create a VM instance by clicking the Launch Instances button


8 Click on the Details tab to enter VM details


9 Click on the Networking tab then enter network information

VMs will now be created.

Instead of disabling the OVSDB neutron service by removing the OVSDB neutron bundle file, it is possible to disable the bundle from the OSGi console. However, there does not appear to be a way to make this persistent, so it must be done each time the controller restarts.


Once the controller is up and running, connect to the OSGi console. The ss command displays all of the bundles that are installed and their status; adding a string filters the list of bundles. List the OVSDB bundles:

osgi> ss ovs
Framework is launched

id      State       Bundle
106     ACTIVE      org.opendaylight.ovsdb.northbound_0.5.0
112     ACTIVE      org.opendaylight.ovsdb_0.5.0
262     ACTIVE      org.opendaylight.ovsdb.neutron_0.5.0

Note that there are three OVSDB bundles running (the OVSDB neutron bundle has not been removed in this case).

Disable the OVSDB neutron bundle and then list the OVSDB bundles again:

osgi> stop 262
osgi> ss ovs
Framework is launched

id      State       Bundle
106     ACTIVE      org.opendaylight.ovsdb.northbound_0.5.0
112     ACTIVE      org.opendaylight.ovsdb_0.5.0
262     RESOLVED    org.opendaylight.ovsdb.neutron_0.5.0

Now the OVSDB neutron bundle is in the RESOLVED state, which means that it is not active.
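If the bundle needs to be re-enabled later in the same session, it can be started again from the OSGi console using the same bundle id (shown as a sketch; the id 262 matches the listing above but may differ on another installation):

osgi> start 262
osgi> ss ovs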


Appendix B BNG as an Appliance

Please download the latest BNG application from https://01.org/intel-data-plane-performance-demonstrators/downloads. More details about how the BNG works can be found at https://01.org/intel-data-plane-performance-demonstrators/quick-overview.


Appendix C Glossary

Acronym Description

ATR Application Targeted Routing

COTS Commercial Off‐The-Shelf

DPI Deep Packet Inspection

FCS Frame Check Sequence

GRE Generic Routing Encapsulation

GRO Generic Receive Offload

IOMMU Input/Output Memory Management Unit

Kpps Kilo packets per second

KVM Kernel-based Virtual Machine

LRO Large Receive Offload

MSI Message Signaling Interrupt

MPLS Multi-protocol Label Switching

Mpps Million packets per second

NIC Network Interface Card

pps Packets per second

QAT Quick Assist Technology

QinQ VLAN stacking (802.1ad)

RA Reference Architecture

RSC Receive Side Coalescing

RSS Receive Side Scaling

SP Service Provider

SR-IOV Single root IO Virtualization

TCO Total Cost of Ownership

TSO TCP Segmentation Offload


Appendix D References

Internet Protocol version 4: http://www.ietf.org/rfc/rfc791.txt

Internet Protocol version 6: http://www.faqs.org/rfc/rfc2460.txt

Intel® 82599 10 Gigabit Ethernet Controller Datasheet: http://www.intel.com/content/www/us/en/ethernet-controllers/82599-10-gbe-controller-datasheet.html

Intel DDIO: https://www-ssl.intel.com/content/www/us/en/io/direct-data-i-o.html

Bandwidth Sharing Fairness: http://www.intel.com/content/www/us/en/network-adapters/10-gigabit-network-adapters/10-gbe-ethernet-flexible-port-partitioning-brief.html

Design Considerations for Efficient Network Applications with Intel® Multi-core Processor-based Systems on Linux: http://download.intel.com/design/intarch/papers/324176.pdf

OpenFlow with Intel 82599: http://ftp.sunet.se/pub/Linux/distributions/bifrost/seminars/workshop-2011-03-31/Openflow_1103031.pdf

Wu, W., DeMar, P., & Crawford, M. (2012). A Transport-Friendly NIC for Multicore/Multiprocessor Systems. IEEE Transactions on Parallel and Distributed Systems, vol. 23, no. 4, April 2012: http://lss.fnal.gov/archive/2010/pub/fermilab-pub-10-327-cd.pdf

Why Does Flow Director Cause Packet Reordering?: http://arxiv.org/ftp/arxiv/papers/1106/1106.0443.pdf

IA packet processing: http://www.intel.com/p/en_US/embedded/hwsw/technology/packet-processing

High Performance Packet Processing on Cloud Platforms using Linux with Intel® Architecture: http://networkbuilders.intel.com/docs/network_builders_RA_packet_processing.pdf

Packet Processing Performance of Virtualized Platforms with Linux and Intel® Architecture: http://networkbuilders.intel.com/docs/network_builders_RA_NFV.pdf

DPDK: http://www.intel.com/go/dpdk

Intel® DPDK Accelerated vSwitch: https://01.org/packet-processing


LEGAL

By using this document in addition to any agreements you have with Intel you accept the terms set forth below

You may not use or facilitate the use of this document in connection with any infringement or other legal analysis concerning Intel products described herein You agree to grant Intel a non-exclusive royalty-free license to any patent claim thereafter drafted which includes subject matter disclosed herein

INFORMATION IN THIS DOCUMENT IS PROVIDED IN CONNECTION WITH INTEL PRODUCTS NO LICENSE EXPRESS OR IMPLIED BY ESTOPPEL OR OTHERWISE TO ANY INTELLECTUAL PROPERTY RIGHTS IS GRANTED BY THIS DOCUMENT EXCEPT AS PROVIDED IN INTELS TERMS AND CONDITIONS OF SALE FOR SUCH PRODUCTS INTEL ASSUMES NO LIABILITY WHATSOEVER AND INTEL DISCLAIMS ANY EXPRESS OR IMPLIED WARRANTY RELATING TO SALE ANDOR USE OF INTEL PRODUCTS INCLUDING LIABILITY OR WARRANTIES RELATING TO FITNESS FOR A PARTICULAR PURPOSE MERCHANTABILITY OR INFRINGEMENT OF ANY PATENT COPYRIGHT OR OTHER INTELLECTUAL PROPERTY RIGHT

Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors

Performance tests such as SYSmark and MobileMark are measured using specific computer systems components software operations and functions Any change to any of those factors may cause the results to vary You should consult other information and performance tests to assist you in fully evaluating your contemplated purchases including the performance of that product when combined with other products

The products described in this document may contain design defects or errors known as errata which may cause the product to deviate from published specifications Current characterized errata are available on request Contact your local Intel sales office or your distributor to obtain the latest specifications and before placing your product order

Intel technologies may require enabled hardware specific software or services activation Check with your system manufacturer or retailer Tests document performance of components on a particular test in specific systems Differences in hardware software or configuration will affect actual performance Consult other sources of information to evaluate performance as you consider your purchase For more complete information about performance and benchmark results visit httpwwwintelcomperformance

All products computer systems dates and figures specified are preliminary based on current expectations and are subject to change without notice Results have been estimated or simulated using internal Intel analysis or architecture simulation or modeling and provided to you for informational purposes Any differences in your system hardware software or configuration may affect your actual performance

No computer system can be absolutely secure Intel does not assume any liability for lost or stolen data or systems or any damages resulting from such losses

Intel does not control or audit third-party web sites referenced in this document You should visit the referenced web site and confirm whether referenced data are accurate

Intel Corporation may have patents or pending patent applications trademarks copyrights or other intellectual property rights that relate to the presented subject matter The furnishing of documents and other materials and information does not provide any license express or implied by estoppel or otherwise to any such patents trademarks copyrights or other intellectual property rights

© 2014 Intel Corporation. All rights reserved. Intel, the Intel logo, Core, Xeon, and others are trademarks of Intel Corporation in the U.S. and/or other countries. Other names and brands may be claimed as the property of others.

  • Intelreg Open Network Platform Server Reference Architecture (Release 12)
    • Revision History
    • Contents
    • 10 Audience and Purpose
    • 20 Summary
      • 21 Network Services Examples
        • 211 Suricata (Next Generation IDSIPS engine)
        • 212 vBNG (Broadband Network Gateway)
            • 30 Hardware Components
            • 40 Software Versions
              • 41 Obtaining Software Ingredients
                • 50 Installation and Configuration Guide
                  • 51 Instructions Common to Compute and Controller Nodes
                    • 511 BIOS Settings
                    • 512 Operating System Installation and Configuration
                      • 5121 Getting the Fedora 20 DVD
                      • 5122 Fedora 20 Installation
                      • 5123 Additional Packages Installation and Upgrade
                      • 5124 Disable and Enable Services
                          • 52 Controller Node Setup
                            • 521 OpenStack (Juno)
                              • 5211 Network Requirements
                              • 5212 Storage Requirements
                              • 5213 OpenStack Installation Procedures
                                  • 53 Compute Node Setup
                                    • 531 Host Configuration
                                      • 5311 Using DevStack to Deploy vSwitch and OpenStack Components
                                          • 54 vIPS
                                            • 541 Network Configuration for non-vIPS Guests
                                                • 60 Testing the Setup
                                                  • 61 Preparation with OpenStack
                                                    • 611 Deploying Virtual Machines
                                                      • 6111 Default Settings
                                                      • 6112 Customer Settings
                                                      • 6113 Example mdash VM Deployment
                                                      • 6114 Local vIPS
                                                      • 6115 Remote vIPS
                                                        • 612 Non-uniform Memory Access (Numa) Placement and SR-IOV Pass-through for OpenStack
                                                          • 6121 Prepare Compute Node for SR-IOV Pass-through
                                                          • 6122 Devstack Configurations
                                                          • 6123 Create VM with Numa Placement and SR-IOV
                                                              • 62 Using OpenDaylight
                                                                • 621 Preparing the OpenDaylightController
                                                                  • 63 Border Network Gateway
                                                                    • 631 Installation and Configuration Inside the VM
                                                                    • 632 Installation and Configuration of the Back-to-Back Host (Packet Generator)
                                                                    • 633 Extra Preparations on the Compute Node
                                                                        • Appendix A Additional OpenDaylight Information
                                                                          • A1 Create VMs using DevStack Horizon GUI
                                                                            • Appendix B BNG as an Appliance
                                                                            • Appendix C Glossary
                                                                            • Appendix D References
                                                                            • LEGAL
Page 22: Intel Open Network Platform Server Reference Architecture ... · Intel® ONP Server Reference Architecture Solutions Guide 1.0 Audience and Purpose The primary audiences for this

Intelreg ONP Server Reference ArchitectureSolutions Guide

22

Q_AGENT=openvswitch Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch Q_ML2_PLUGIN_TYPE_DRIVERS=vlanflatlocal

ENABLE_TENANT_TUNNELS=FalseENABLE_TENANT_VLANS=TruePHYSICAL_NETWORK=physnet1ML2_VLAN_RANGES=physnet110001010OVS_PHYSICAL_BRIDGE=br-p1p1MULTI_HOST=True

[[post-config|$NOVA_CONF]][DEFAULT]firewall_driver=novavirtfirewallNoopFirewallDrivernovncproxy_host=0000novncproxy_port=6080

9 Install DevStack

cd homestackdevstackstacksh

10 For a successful installation the following shows at the end of screen output

stacksh completed in XXX seconds

where XXX is the number of seconds

11 For controller node only mdash Add physical port(s) to the bridge(s) created by the DevStack installation The following example can be used to configure the two bridges br-p1p1 (for virtual network) and br-ex (for external network)

sudo ovs-vsctl add-port br-p1p1 p1p1sudo ovs-vsctl add-port br-ex p1p2

12 Make sure proper VLANs are created in the switch connecting physical port p1p1 For example the previous localconf specifies VLAN range of 1000-1010 therefore matching VLANs 1000 to 1010 should be configured in the switch

23

Intelreg ONP Server Reference ArchitectureSolutions Guide

53 Compute Node SetupThis section describes how to complete the setup of the compute nodes It is assumed that the user has successfully completed the BIOS settings and operating system installation and configuration sections

Note Please make sure to download and use the onps_server_1_2targz tarball Start with the README file Yoursquoll get instructions on how to use Intelrsquos scripts to automate most of the installation steps described in this section and this saves you time

531 Host Configuration

5311 Using DevStack to Deploy vSwitch and OpenStack Components

General

Deploying OpenStack and Intel® DPDK Accelerated vSwitch using DevStack on a compute node follows the same procedures as on the controller node. Differences include:

• Required services are nova compute, neutron agent, and Rabbit.

• Intel® DPDK Accelerated vSwitch is used in place of Open vSwitch for the neutron agent.

Compute Node Installation Example

The following example uses a host for compute node installation with the following:

• Hostname: sdnlab-k02

• Lab network IP address: obtained from DHCP server

• OpenStack Management IP address: 10.11.12.2

• User/password: stack/stack

Note the following:

• No_proxy setup: Localhost and its IP address should be included in the no_proxy setting. In addition, the hostname and IP address of the controller node should also be included. For example:

export no_proxy=localhost,10.11.12.2,sdnlab-k01,10.11.12.1

• Differences in the local.conf file:

- The service host is the controller, as are other OpenStack servers such as MySQL, Rabbit, Keystone, and Image. Therefore they should be spelled out. Using the controller node example in the previous section, the service host and its IP address should be:

SERVICE_HOST_NAME=sdnlab-k01
SERVICE_HOST=10.11.12.1

- The only OpenStack services required in compute nodes are messaging, nova compute, and neutron agent, so the local.conf might look like:

disable_all_services
enable_service rabbit
enable_service n-cpu
enable_service q-agt


- The user has the option to use ovdk or openvswitch for the neutron agent:

Q_AGENT=ovdk

or

Q_AGENT=openvswitch

Note: For openvswitch, the user can specify regular or accelerated Open vSwitch (accelerated OVS). If accelerated OVS is used, the following setting should be added:

OVS_DATAPATH_TYPE=netdev

Note: If both are specified in the same local.conf file, the later one overwrites the previous one.

- For the OVDK and accelerated OVS huge pages setting, specify the number of huge pages to be allocated and the mount point (default is /mnt/huge):

OVDK_NUM_HUGEPAGES=8192

or

OVS_NUM_HUGEPAGES=8192

- For this version, Intel uses specific versions of OVDK or accelerated OVS from their respective repositories. Specify the following in the local.conf file if OVDK or accelerated OVS is used:

OVDK_GIT_TAG=6210bb0a6139b20283de115f87aa7a381b04670f
OVS_GIT_TAG=b35839f3855e3b812709c6ad1c9278f498aa9935

- Binding the physical port to the bridge is done through the following line in local.conf. For example, to bind port p1p1 to bridge br-p1p1, use:

OVS_PHYSICAL_BRIDGE=br-p1p1

- A sample local.conf file for a compute node with the ovdk agent follows:

# Compute node
[[local|localrc]]

FORCE=yes
MULTI_HOST=True

HOST_NAME=$(hostname)
HOST_IP=10.11.12.2
HOST_IP_IFACE=ens2f0
SERVICE_HOST_NAME=10.11.12.1
SERVICE_HOST=10.11.12.1

MYSQL_HOST=$SERVICE_HOST
RABBIT_HOST=$SERVICE_HOST

GLANCE_HOST=$SERVICE_HOST
GLANCE_HOSTPORT=$SERVICE_HOST:9292
KEYSTONE_AUTH_HOST=$SERVICE_HOST
KEYSTONE_SERVICE_HOST=$SERVICE_HOST

ADMIN_PASSWORD=password
MYSQL_PASSWORD=password
DATABASE_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=no-token-password
HORIZON_PASSWORD=password
RABBIT_PASSWORD=password

disable_all_services
enable_service rabbit
enable_service n-cpu
enable_service q-agt

DEST=/opt/stack
LOGFILE=$DEST/stack.sh.log
SCREEN_LOGDIR=$DEST/screen
SYSLOG=True
LOGDAYS=1

Q_AGENT=ovdk
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch
Q_ML2_PLUGIN_TYPE_DRIVERS=vlan
OVDK_NUM_HUGEPAGES=8192
OVDK_GIT_TAG=6210bb0a6139b20283de115f87aa7a381b04670f

ENABLE_TENANT_TUNNELS=False
ENABLE_TENANT_VLANS=True
ML2_VLAN_RANGES=physnet1:1000:1010
PHYSICAL_NETWORK=physnet1
OVS_PHYSICAL_BRIDGE=br-p1p1

[[post-config|$NOVA_CONF]]
[DEFAULT]
firewall_driver=nova.virt.firewall.NoopFirewallDriver
vnc_enabled=True
vncserver_listen=0.0.0.0
vncserver_proxyclient_address=$HOST_IP

- A sample local.conf file for a compute node with the accelerated OVS agent follows:

# Compute node
[[local|localrc]]

FORCE=yes
MULTI_HOST=True

HOST_NAME=$(hostname)
HOST_IP=10.11.12.2
HOST_IP_IFACE=ens2f0

SERVICE_HOST_NAME=sdnlab-k01
SERVICE_HOST=10.11.12.1

MYSQL_HOST=$SERVICE_HOST
RABBIT_HOST=$SERVICE_HOST

GLANCE_HOST=$SERVICE_HOST
GLANCE_HOSTPORT=$SERVICE_HOST:9292
KEYSTONE_AUTH_HOST=$SERVICE_HOST
KEYSTONE_SERVICE_HOST=$SERVICE_HOST

ADMIN_PASSWORD=password
MYSQL_PASSWORD=password
DATABASE_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=no-token-password
HORIZON_PASSWORD=password
RABBIT_PASSWORD=password

disable_all_services
enable_service rabbit
enable_service n-cpu
enable_service q-agt

DEST=/opt/stack
LOGFILE=$DEST/stack.sh.log
SCREEN_LOGDIR=$DEST/screen
SYSLOG=True
LOGDAYS=1

Q_AGENT=openvswitch
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch
Q_ML2_PLUGIN_TYPE_DRIVERS=vlan
OVS_NUM_HUGEPAGES=8192
OVS_DATAPATH_TYPE=netdev
OVS_GIT_TAG=b35839f3855e3b812709c6ad1c9278f498aa9935

ENABLE_TENANT_TUNNELS=False
ENABLE_TENANT_VLANS=True
ML2_VLAN_RANGES=physnet1:1000:1010
PHYSICAL_NETWORK=physnet1
OVS_PHYSICAL_BRIDGE=br-p1p1

[[post-config|$NOVA_CONF]]
[DEFAULT]
firewall_driver=nova.virt.firewall.NoopFirewallDriver
vnc_enabled=True
vncserver_listen=0.0.0.0
vncserver_proxyclient_address=$HOST_IP

54 vIPS

The vIPS used is Suricata, which should be installed in a VM as an rpm package, as previously described. To configure it to run in inline mode (IPS), use the following:

1 Turn on IP forwarding:

sysctl -w net.ipv4.ip_forward=1

2 Mangle all traffic from one vPort to the other using a netfilter queue:

iptables -I FORWARD -i eth1 -o eth2 -j NFQUEUE
iptables -I FORWARD -i eth2 -o eth1 -j NFQUEUE

3 Have Suricata run in inline mode using the netfilter queue:

suricata -c /etc/suricata/suricata.yaml -q 0

4 Enable ARP proxying:

echo 1 > /proc/sys/net/ipv4/conf/eth1/proxy_arp
echo 1 > /proc/sys/net/ipv4/conf/eth2/proxy_arp
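The ip_forward and proxy_arp settings above do not survive a reboot. A minimal way to make them persistent, assuming a standard Fedora 20 guest (using sysctl.conf here is an assumption, not part of the original setup):

# Persist forwarding and ARP proxying across reboots (illustrative)
cat >> /etc/sysctl.conf << 'EOF'
net.ipv4.ip_forward = 1
net.ipv4.conf.eth1.proxy_arp = 1
net.ipv4.conf.eth2.proxy_arp = 1
EOF
sysctl -p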

541 Network Configuration for non-vIPS Guests

1 Turn on IP forwarding:

sysctl -w net.ipv4.ip_forward=1

2 In the source, add the route to the sink:

route add -net 192.168.200.0/24 eth1

3 At the sink, add the route to the source:

route add -net 192.168.100.0/24 eth1


60 Testing the Setup

This section describes how to bring up the VMs in a compute node, connect them to the virtual network(s), and verify their functionality.

Note: Currently it is not possible to have more than one virtual network in a multi-compute-node setup. It is, however, possible to have more than one virtual network in a single-compute-node setup.

61 Preparation with OpenStack

611 Deploying Virtual Machines

6111 Default Settings

OpenStack comes with the following default settings:

• Tenant (Project): admin, demo

• Network:

- Private network (virtual network): 10.0.0.0/24

- Public network (external network): 172.24.4.0/24

• Image: cirros-0.3.1-x86_64

• Flavor: nano, micro, tiny, small, medium, large, xlarge

To deploy new instances (VMs) with different setups (such as a different VM image, flavor, or network), users must create their own. See below for details of how to create them.

To access the OpenStack dashboard, use a web browser (Firefox, Internet Explorer, or others) and the controller's IP address (management network). For example:

http://10.11.12.1

Login information is defined in the local.conf file. In the examples that follow, password is the password for both the admin and demo users.


6112 Customer Settings

The following examples describe how to create a custom VM image, flavor, and aggregate/availability zone using OpenStack commands. The examples assume the IP address of the controller is 10.11.12.1.

1 Create a credential file admin-cred for the admin user. The file contains the following lines:

export OS_USERNAME=admin
export OS_TENANT_NAME=admin
export OS_PASSWORD=password
export OS_AUTH_URL=http://10.11.12.1:35357/v2.0

2 Source admin-cred into the shell environment for the actions of creating the glance image, aggregate/availability zone, and flavor:

source admin-cred

3 Create an OpenStack glance image. A VM image file should be ready in a location accessible by OpenStack:

glance image-create --name <image-name-to-create> --is-public=true --container-format=bare --disk-format=<format> --file=<image-file-path-name>

The following example shows the image file fedora20-x86_64-basic.qcow2 located in an NFS share and mounted at /mnt/nfs/openstack/images on the controller host. The following command creates a glance image named fedora-basic with qcow2 format for public use (such that any tenant can use this glance image):

glance image-create --name fedora-basic --is-public=true --container-format=bare --disk-format=qcow2 --file=/mnt/nfs/openstack/images/fedora20-x86_64-basic.qcow2

4 Create host aggregate and availability zone

First find out the available hypervisors, and then use that information to create the aggregate/availability zone:

nova hypervisor-list
nova aggregate-create <aggregate-name> <zone-name>
nova aggregate-add-host <aggregate-name> <hypervisor-name>

The following example creates an aggregate named aggr-g06 with one availability zone named zone-g06, and the aggregate contains one hypervisor named sdnlab-g06:

nova aggregate-create aggr-g06 zone-g06
nova aggregate-add-host aggr-g06 sdnlab-g06

5 Create a flavor. A flavor is a virtual hardware configuration for the VMs; it defines the number of virtual CPUs and the size of virtual memory and disk space, among others.

The following command creates a flavor named onps-flavor with an ID of 1001, 1024 MB of virtual memory, 4 GB of virtual disk space, and 1 virtual CPU:

nova flavor-create onps-flavor 1001 1024 4 1
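To confirm the flavor was created as expected, it can be listed and inspected (a quick verification, not part of the original procedure):

nova flavor-list | grep onps-flavor
nova flavor-show onps-flavor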


6113 Example mdash VM Deployment

The following example describes how to use a custom VM image, flavor, and aggregate to launch a VM for a demo tenant using OpenStack commands. Again, the example assumes the IP address of the controller is 10.11.12.1.

1 Create a credential file demo-cred for the demo user. The file contains the following lines:

export OS_USERNAME=demo
export OS_TENANT_NAME=demo
export OS_PASSWORD=password
export OS_AUTH_URL=http://10.11.12.1:35357/v2.0

2 Source demo-cred to the shell environment for actions of creating tenant network and instance (VM)

source demo-cred

3 Create network for tenant demo Take the following steps

a Get tenant demo

keystone tenant-list | grep -Fw demo

The following creates a network with a name of net-demo for tenant with ID 10618268adb64f17b266fd8fb83c960d

neutron net-create --tenant-id 10618268adb64f17b266fd8fb83c960d net-demo

b Create subnet

neutron subnet-create --tenant-id ltdemo-tenant-idgt --name ltsubnet_namegt ltnetwork-namegt ltnet-ip-rangegt

The following creates a subnet with a name of sub-demo and CIDR address 192.168.2.0/24 for network net-demo:

neutron subnet-create --tenant-id 10618268adb64f17b266fd8fb83c960d --name sub-demo net-demo 192.168.2.0/24

4 Create instance (VM) for tenant demo Take the following steps

a Get the name and/or ID of the image, flavor, and availability zone to be used for creating the instance:

glance image-list
nova flavor-list
nova aggregate-list
neutron net-list

b Launch an instance (VM) using the information obtained from the previous step (a worked example follows this list):

nova boot --image <image-id> --flavor <flavor-id> --availability-zone <zone-name> --nic net-id=<network-id> <instance-name>

c The new VM should be up and running in a few minutes

5 Log into the OpenStack dashboard using the demo user credentials and click Instances under Project in the left pane; the new VM should show in the right pane. Click the instance name to open the Instance Details view, then click Console in the top menu to access the VM.
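A concrete invocation of step 4b, using the image, flavor, zone, and network names created in the earlier examples, might look like the following; the network ID placeholder and the instance name vm-demo-1 are illustrative only:

nova boot --image fedora-basic --flavor onps-flavor --availability-zone zone-g06 --nic net-id=<net-demo-id> vm-demo-1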


6114 Local vIPS

Configuration

1 OpenStack brings up the VMs and connects them to the vSwitch

2 IP addresses of the VMs get configured using the DHCP server. VM1 belongs to one subnet and VM3 to a different one; VM2 has ports on both subnets.

3 Flows get programmed to the vSwitch by the OpenDaylight controller (Section 62)

Data Path (Numbers Matching Red Circles)

1 VM1 sends a flow to VM3 through the vSwitch

2 The vSwitch forwards the flow to the first vPort of VM2 (active IPS)

3 The IPS receives the flow inspects it and (if not malicious) sends it out through its second vPort

4 The vSwitch forwards it to VM3

Figure 6-1 Local vIPS


6115 Remote vIPS

Configuration

1 OpenStack brings up the VMs and connects them to the vSwitch

2 The IP addresses of the VMs get configured using the DHCP server

Data Path (Numbers Matching Red Circles)

1 VM1 sends a flow to VM3 through the vSwitch inside compute node 1

2 The vSwitch forwards the flow out of the first 82599 port to the first 82599 port of compute node 2

3 The vSwitch of compute node 2 forwards the flow to the first port of the vHost where the traffic gets consumed by VM1

4 The IPS receives the flow inspects it and (provided it is not malicious) sends it out through its second port of the vHost into the vSwitch of compute node 2

5 The vSwitch forwards the flow out of the second 82599 port of compute node 2 into the second port of the 82599 in compute node 1

6 The vSwitch of compute node 1 forwards the flow into the port of the vHost of VM3 where the flow gets terminated

Figure 6-2 Remote vIPS


612 Non-uniform Memory Access (Numa) Placement and SR-IOV Pass-through for OpenStack

NUMA placement was introduced as a new feature in the OpenStack Juno release. It enables an OpenStack administrator to pin guests to particular NUMA nodes for optimization. With an SR-IOV enabled network interface card, each SR-IOV port is associated with a Virtual Function (VF). OpenStack SR-IOV pass-through enables a guest to access a VF directly.

6121 Prepare Compute Node for SR-IOV Pass-through

To enable the previous features, follow these steps to configure the compute node:

1 The server hardware must support IOMMU (Intel VT-d). To check whether IOMMU is supported, run the following command; the output should show IOMMU entries:

dmesg | grep -e IOMMU

Note: IOMMU can be enabled/disabled through a BIOS setting under Advanced and then Processor.

2 Enable the kernel IOMMU in grub. For Fedora 20, run the commands:

sed -i 's/rhgb quiet/rhgb quiet intel_iommu=on/g' /etc/default/grub
grub2-mkconfig -o /boot/grub2/grub.cfg
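After rebooting, one way to confirm that the kernel picked up the new parameter (a verification step, not part of the original text):

cat /proc/cmdline
# the output should now contain intel_iommu=on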

3 Install necessary packages

yum install -y yajl-devel device-mapper-devel libpciaccess-devel libnl-devel dbus-devel numactl-devel python-devel

4 Install libvirt v1.2.8 or newer. The following example uses v1.2.9:

systemctl stop libvirtd

yum remove libvirt
yum remove libvirtd

wget http://libvirt.org/sources/libvirt-1.2.9.tar.gz
tar zxvf libvirt-1.2.9.tar.gz

cd libvirt-1.2.9
./autogen.sh --system --with-dbus
make
make install

systemctl start libvirtd

Make sure libvirtd is running v1.2.9:

libvirtd --version

5 Install libvirt-python. The example below uses v1.2.9 to match the libvirt version:

yum remove libvirt-python

wget https://pypi.python.org/packages/source/l/libvirt-python/libvirt-python-1.2.9.tar.gz
tar zxvf libvirt-python-1.2.9.tar.gz

cd libvirt-python-1.2.9
python setup.py install
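A quick way to confirm that the Python bindings match the installed libvirt version (assumes the system python; the check itself is illustrative and not part of the original procedure):

python -c "import libvirt; print(libvirt.getVersion())"
# 1002009 corresponds to libvirt v1.2.9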


6 Modify /etc/libvirt/qemu.conf and add

/dev/vfio/vfio

to the

cgroup_device_acl list

An example follows:

cgroup_device_acl = [
    "/dev/null", "/dev/full", "/dev/zero",
    "/dev/random", "/dev/urandom",
    "/dev/ptmx", "/dev/kvm", "/dev/kqemu",
    "/dev/rtc", "/dev/hpet", "/dev/net/tun",
    "/dev/vfio/vfio"
]

7 Enable the SR-IOV virtual functions for an 82599 interface. The following example enables 2 VFs for interface p1p1:

echo 2 > /sys/class/net/p1p1/device/sriov_numvfs

To check that virtual functions are enabled

lspci -nn | grep 82599

The screen output should display the physical function and two virtual functions
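The output might look similar to the following; the PCI addresses and revision will vary with the platform, so this listing is illustrative only:

08:00.0 Ethernet controller [0200]: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection [8086:10fb] (rev 01)
08:10.0 Ethernet controller [0200]: Intel Corporation 82599 Ethernet Controller Virtual Function [8086:10ed] (rev 01)
08:10.2 Ethernet controller [0200]: Intel Corporation 82599 Ethernet Controller Virtual Function [8086:10ed] (rev 01)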

6122 Devstack Configurations

In the following text, the example uses a controller with IP address 10.11.12.1 and a compute node with 10.11.12.4. The PCI device vendor ID (8086) and the product IDs of the 82599 can be obtained from the output of the following command (10fb for the physical function and 10ed for the VF):

lspci -nn | grep 82599

On the Controller node:

1 Edit the controller local.conf. Note that the same local.conf file of Section 5213 is used here, adding the following:

Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch,sriovnicswitch

[[post-config|$NOVA_CONF]]
[DEFAULT]
scheduler_default_filters=RamFilter,ComputeFilter,AvailabilityZoneFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,PciPassthroughFilter,NUMATopologyFilter
pci_alias={"name":"niantic","product_id":"10ed","vendor_id":"8086"}

[[post-config|$Q_PLUGIN_CONF_FILE]]
[ml2_sriov]
supported_pci_vendor_devs = 8086:10fb 8086:10ed

2 Run ./stack.sh

On the Compute node:

1 Edit /opt/stack/nova/requirements.txt and add "libvirt-python>=1.2.8":

echo "libvirt-python>=1.2.8" >> /opt/stack/nova/requirements.txt

2 Edit the compute local.conf for accelerated OVS. Note that the same local.conf file of Section 5311 is used here.


3 Add the following:

[[post-config|$NOVA_CONF]]
[DEFAULT]
pci_passthrough_whitelist={"address":"0000:08:00.0","vendor_id":"8086","physical_network":"physnet1"}

pci_passthrough_whitelist={"address":"0000:08:10.0","vendor_id":"8086","physical_network":"physnet1"}

pci_passthrough_whitelist={"address":"0000:08:10.2","vendor_id":"8086","physical_network":"physnet1"}

4 Remove (or comment out) the following. Note that currently SR-IOV pass-through is only supported with standard OVS:

OVS_NUM_HUGEPAGES=8192
OVS_DATAPATH_TYPE=netdev
OVS_GIT_TAG=b35839f3855e3b812709c6ad1c9278f498aa9935

Run ./stack.sh on both the controller and compute nodes to complete the DevStack installation.

6123 Create VM with Numa Placement and SR-IOV

1 After stacking is successful on both the controller and compute nodes, verify that the PCI pass-through device(s) are in the OpenStack database:

mysql -uroot -ppassword -h 10.11.12.1 nova -e 'select * from pci_devices;'

2 The output should show entry(ies) of PCI device(s) similar to the following

| 2014-11-18 194114 | NULL | NULL | 0 | 1 | 3 | 000008100 | 10ed | 8086 | type-VF | pci_0000_08_10_0 | label_8086_10ed | available | phys_function 000008000 | NULL | NULL | 0 |

3 Next, create a flavor, for example:

nova flavor-create numa-flavor 1001 1024 4 1

where

flavor name = numa-flavor
id = 1001
virtual memory = 1024 MB
virtual disk size = 4 GB
number of virtual CPUs = 1

4 Modify flavor for numa placement with PCI pass-through

nova flavor-key 1001 set pci_passthrough:alias=niantic:1 hw:numa_nodes=1 hw:numa_cpus.0=0 hw:numa_mem.0=1024

5 To show detailed information of the flavor

nova flavor-show 1001

6 Create a VM numa-vm1 with the flavor numa-flavor under the default project demo. Note that the following example assumes an image fedora-basic and an availability zone zone-04 are already in place (see Section 6112) and that private is the default network for the demo project:

nova boot --image <image-id> --flavor <flavor-id> --availability-zone <zone-name> --nic net-id=<network-id> numa-vm1

where numa-vm1 is the name of the VM instance to be booted.


Access the VM from the OpenStack Horizon dashboard; the new VM shows two virtual network interfaces. The interface with an SR-IOV VF should show a name of ensX, where X is a number, for example ens5. If a DHCP server is available for the physical interface (p1p1 in this example), the VF gets an IP address automatically; otherwise, users can assign an IP address to the interface the same way as with a standard network interface.

To verify network connectivity through a VF, users can set up two compute hosts and create a VM on each node. After obtaining IP addresses, the VMs should communicate with each other as on a normal network.
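A minimal connectivity check between two such VMs, assuming the VF interface shows up as ens5 on both and no DHCP server is present (the addresses below are illustrative):

# On the first VM
ip addr add 192.168.50.11/24 dev ens5
ip link set ens5 up

# On the second VM
ip addr add 192.168.50.12/24 dev ens5
ip link set ens5 up
ping -c 3 192.168.50.11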

62 Using OpenDaylight

This section describes how to download, install, and set up an OpenDaylight controller.

621 Preparing the OpenDaylight Controller

1 Download the pre-built OpenDaylight Helium-SR1 distribution:

wget http://nexus.opendaylight.org/content/repositories/opendaylight.release/org/opendaylight/integration/distribution-karaf/0.2.1-Helium-SR1/distribution-karaf-0.2.1-Helium-SR1.tar.gz

2 Extract the archive and cd into it:

tar xf distribution-karaf-0.2.1-Helium-SR1.tar.gz
cd distribution-karaf-0.2.1-Helium-SR1

3 Use the bin/karaf executable to start the Karaf shell.


4 Install the required features

Karaf might take a long time to start, or the feature installation might fail, if the host does not have network access. You'll need to set up the appropriate proxy settings.
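The exact feature set depends on the intended use. For the OVSDB/OpenStack integration used in this guide, a typical Helium selection is shown below; the feature names are an assumption based on the standard Helium distribution, not an Intel-validated list:

opendaylight-user@root> feature:install odl-base-all odl-aaa-authn odl-restconf odl-nsf-all odl-adsal-northbound odl-mdsal-apidocs odl-ovsdb-openstack odl-dlux-core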

63 Border Network Gateway

This section describes how to install and run a Border Network Gateway (BNG) on a compute node that has been prepared as described in Section 51 and Section 53. The example interface names from these sections have been maintained in this section too. Also, for simplicity, the BNG uses the handle_none configuration mode, which makes it work as an L2 forwarding engine. The BNG is more complex than this, and users interested in exploring more of its capabilities should read https://01.org/intel-data-plane-performance-demonstrators/quick-overview

The setup to test the functionality of the vBNG follows


631 Installation and Configuration Inside the VM

1 Execute the following command:

yum -y update

2 Disable SELinux:

setenforce 0
vi /etc/selinux/config

and change the setting to SELINUX=disabled

3 Disable the firewall:

systemctl disable firewalld.service
reboot

4 Edit the grub default configuration:

vi /etc/default/grub

Add hugepages to it:

... noirqbalance intel_idle.max_cstate=0 processor.max_cstate=0 ipv6.disable=1 default_hugepagesz=1G hugepagesz=1G hugepages=2 isolcpus=1,2,3,4

5 Rebuild the grub config and reboot the system:

grub2-mkconfig -o /boot/grub2/grub.cfg
reboot

6 Verify that hugepages are available in the VM:

cat /proc/meminfo
HugePages_Total:    2
HugePages_Free:     2
Hugepagesize:       1048576 kB

7 Add the following to the end of the ~/.bashrc file:

# ---------------------------------------------
export RTE_SDK=/root/dpdk
export RTE_TARGET=x86_64-native-linuxapp-gcc
export OVS_DIR=/root/ovs

export RTE_UNBIND=$RTE_SDK/tools/dpdk_nic_bind.py
export DPDK_DIR=$RTE_SDK
export DPDK_BUILD=$DPDK_DIR/$RTE_TARGET
# ---------------------------------------------

8 Re-login or source that file:

. ~/.bashrc

9 Install DPDK:

git clone http://dpdk.org/git/dpdk
cd dpdk
git checkout v1.7.1
make install T=$RTE_TARGET
modprobe uio
insmod $RTE_SDK/$RTE_TARGET/kmod/igb_uio.ko


10 Check the PCI addresses of the 82599 cards

lspci | grep Network
00:04.0 Ethernet controller: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection (rev 01)
00:05.0 Ethernet controller: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection (rev 01)
00:06.0 Ethernet controller: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection (rev 01)
00:07.0 Ethernet controller: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection (rev 01)

11 Make sure that the correct PCI addresses are listed in the script bind_to_igb_uio.sh

12 Download the BNG packages:

wget https://01.org/sites/default/files/downloads/intel-data-plane-performance-demonstrators/dppd-bng-v013.zip

13 Extract the DPPD BNG sources:

unzip dppd-bng-v013.zip

14 Build the BNG DPPD application:

yum -y install ncurses-devel
cd dppd-BNG-v013
make

15 Refer to Section 633 "Extra Preparations on the Compute Node" before running the BNG application in the VM inside the compute node.

16 Make sure that the application starts:

./build/dppd -f config/handle_none.cfg

The handle_none configuration passes all traffic straight through between ports, which is essentially similar to the L2 forwarding test. The config directory contains additional, more complex BNG configurations and Pktgen scripts. Additional BNG-specific workloads can be found in the dppd-BNG-v013/pktgen-scripts directory.

Following is a sample graphic of the BNG running in a VM with 2 ports


Exit the application by pressing ESC or CTRL-C

Refer to Section 632 regarding installation and running the software traffic generator

For the sanity check test, users can use the pktgen wrapper script onps_pktgen-64bytes-UDP-2ports.sh for running Pktgen (on its dedicated server) in order to test the handle-none throughput for two physical and two virtual ports. You'll need to update the PKTGEN_DIR variable at the top of the file to point to the right location, which, referring to Section 632, is the following:

PKTGEN_DIR=/home/stack/git/Pktgen-DPDK/pktgen-64bytes.sh

632 Installation and Configuration of the Back-to-Back Host (Packet Generator)

The back-to-back host can be any Intel® Xeon® processor-based system, or it can be any compute node that has been prepared using the instructions in Section 51 and Section 53. For simplicity, Intel assumes the latter was the case. Also assume that the git directory for the stack user is /home/stack/git.

1 In the git directory, get the source from Github:

git clone https://github.com/Pktgen/Pktgen-DPDK.git
cd Pktgen-DPDK

2 An extra package must be installed for Pktgen to compile correctly

yum -y install libpcap-devel

Pktgen comes with its own distribution of the DPDK sources. This bundled version of DPDK must be used. Note that it contains some WindRiver-specific helper libraries that are not in the default DPDK distribution and which Pktgen depends on.

3 The $RTE_TARGET variable must be set to a specific value; otherwise, these libraries will not build:

cd
vi .bashrc

Add the following three lines to the end:

export RTE_SDK=$HOME/Pktgen-DPDK/dpdk
export RTE_TARGET=x86_64-pktgen-linuxapp-gcc
export PKTGEN_DIR=$HOME/Pktgen-DPDK

4 Re-login or execute the following command:

. ~/.bashrc

5 Build the basic DPDK libraries and extra helpers:

cd $RTE_SDK
make install T=$RTE_TARGET

6 Build Pktgen:

cd examples/pktgen
make

7 Adapt the dpdk_nic_bind.py invocation to the actual NICs in use, so that both interfaces are bound to igb_uio and DPDK can use them. See the details of the command that follows:

./tools/dpdk_nic_bind.py --status
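After checking the status, both interfaces can be bound to igb_uio. The PCI addresses below are placeholders and must be replaced with the ones reported by --status:

modprobe uio
insmod $RTE_SDK/$RTE_TARGET/kmod/igb_uio.ko
./tools/dpdk_nic_bind.py -b igb_uio 0000:04:00.0 0000:05:00.0
./tools/dpdk_nic_bind.py --status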

8 Use onps_pktgen-64-bytes-UDP-2ports.sh from onps_server_1_2.tar.gz


9 Now run the script as root, after the compute node has been set up as in Section 633, the BNG VM has been prepared as in Section 631, and the BNG has been started inside the VM.

633 Extra Preparations on the Compute Node

1 Do the following as the stack user:

cd /home/stack/devstack
vi local.conf

2 Comment out the following:

PHYSICAL_NETWORK=physnet1
OVS_PHYSICAL_BRIDGE=br-p1p1

And at the same time add the following line right below the previous commented ones:

OVS_BRIDGE_MAPPINGS=default:br-p1p1,physnet1:br-p1p2

3 Run again as the stack user:

./unstack.sh

./stack.sh

This causes both physical interfaces to come up and get bound to DPDK. Also, a bridge is created on top of each of these interfaces:

ovs-vsctl show
b52bd3ed-0f6c-45b9-ace1-846d901bed64
    Bridge "br-p1p1"
        Port "br-p1p1"
            Interface "br-p1p1"
                type: internal
        Port "p1p1"
            Interface "p1p1"
                type: dpdkphy
                options: {port="0"}
        Port "phy-br-p1p1"
            Interface "phy-br-p1p1"
                type: patch
                options: {peer="int-br-p1p1"}
    Bridge br-int
        fail_mode: secure
        Port "int-br-p1p2"
            Interface "int-br-p1p2"
                type: patch
                options: {peer="phy-br-p1p2"}
        Port "int-br-p1p1"
            Interface "int-br-p1p1"
                type: patch
                options: {peer="phy-br-p1p1"}
        Port br-int
            Interface br-int
                type: internal
    Bridge "br-p1p2"
        Port "phy-br-p1p2"
            Interface "phy-br-p1p2"
                type: patch
                options: {peer="int-br-p1p2"}
        Port "p1p2"
            Interface "p1p2"
                type: dpdkphy
                options: {port="1"}
        Port "br-p1p2"
            Interface "br-p1p2"
                type: internal

4 Move the p1p2 physical port under the same bridge as p1p1

ovs-vsctl del-port p1p2
ovs-vsctl add-port br-p1p1 p1p2 -- set Interface p1p2 type=dpdkphy option:port=1

5 Delete the OpenStack agent:

./rejoin-stack.sh
ctrl-a 1
ctrl-c
ctrl-a d

6 Add the dpdkvhost interfaces for the VM

ovs-vsctl --no-wait add-port br-p1p1 port3 -- set Interface port3 type=dpdkvhost ofport_request=3
ovs-vsctl --no-wait add-port br-p1p1 port4 -- set Interface port4 type=dpdkvhost ofport_request=4

7 Find out the OpenFlow port numbers of the interfaces:

ovs-ofctl show br-p1p1

The output should be similar to the following. Note the number on the left of each interface, because it is the OpenFlow port number:

OFPT_FEATURES_REPLY (xid=0x2): dpid:0000286031010000
n_tables:254, n_buffers:256
capabilities: FLOW_STATS TABLE_STATS QUEUE_STATS ARP_MATCH_IP
actions: OUTPUT SET_VLAN_VID SET_VLAN_PCP STRIP_VLAN SET_DL_SRC SET_DL_DST SET_NW_SRC SET_NW_DST SET_TP_SRC SET_TP_DST
 1(phy-br-p1p1): addr:9e:ae:92:25:3c:c1
     config: 0, state: 0
     speed: 0 Mbps now, 0 Mbps max
 2(p1p2): addr:9e:ae:92:25:3c:c1
     config: 0, state: 0
     speed: 0 Mbps now, 0 Mbps max
 3(port3): addr:9e:ae:92:25:3c:c1
     config: 0, state: 0
     speed: 0 Mbps now, 0 Mbps max
 4(port4): addr:49:04:ff:7f:00:00
     config: 0, state: 0
     speed: 0 Mbps now, 0 Mbps max
 16(p1p1): addr:49:04:ff:7f:00:00
     config: 0, state: 0
     speed: 0 Mbps now, 0 Mbps max
 LOCAL(br-p1p1): addr:9e:ae:92:25:3c:c1
     config: 0, state: 0
     speed: 0 Mbps now, 0 Mbps max
OFPT_GET_CONFIG_REPLY (xid=0x4): frags=normal miss_send_len=0

8 Clean up the flow table of the bridge

ovs-ofctl del-flows br-p1p1

9 Program the flows so each physical interface forwards the packets to a dpdkvhost interface and the other way round

ovs-ofctl add-flow br-p1p1 in_port=16,dl_type=0x0800,idle_timeout=0,action=output:3
ovs-ofctl add-flow br-p1p1 in_port=3,dl_type=0x0800,idle_timeout=0,action=output:16
ovs-ofctl add-flow br-p1p1 in_port=4,dl_type=0x0800,idle_timeout=0,action=output:2
ovs-ofctl add-flow br-p1p1 in_port=2,dl_type=0x0800,idle_timeout=0,action=output:4


10 Users can now spawn their vBNG

qemu-kvm -cpu host -enable-kvm -m 4096 -smp 4,cores=4,threads=1,sockets=1 -name VM1 \
  -hda <path to the VM image file> -mem-path /dev/hugepages -mem-prealloc -vnc :2 -daemonize \
  -net nic,model=virtio,macaddr=00:1e:77:68:09:fd \
  -net tap,ifname=tap1,script=no,downscript=no \
  -netdev type=tap,id=net1,script=no,downscript=no,ifname=port3,vhost=on \
  -device virtio-net-pci,netdev=net1,mac=00:00:01:00:00:01,csum=off,gso=off,guest_tso4=off,guest_tso6=off,guest_ecn=off \
  -netdev type=tap,id=net2,script=no,downscript=no,ifname=port4,vhost=on \
  -device virtio-net-pci,netdev=net2,mac=00:00:01:00:00:02,csum=off,gso=off,guest_tso4=off,guest_tso6=off,guest_ecn=off


Appendix A Additional OpenDaylight Information

This section describes how OpenDaylight can be used in a multi-node setup. Two hosts are used: one running OpenDaylight, the OpenStack controller + compute, and OVS; the second host is the compute node. This section describes how to create a VXLAN tunnel and VMs, and how to ping from one VM to another.

Note: Due to a known defect in ODL (https://bugs.opendaylight.org/show_bug.cgi?id=2469), the multi-node setup could not be verified.

Following is a sample local.conf for the OpenDaylight host:

[[local|localrc]]
FORCE=yes

HOST_NAME=<name of this machine>
HOST_IP=<ip of this machine>
HOST_IP_IFACE=<mgmt ip isolated from internet>

PUBLIC_INTERFACE=<isolated IP could be same as HOST_IP_IFACE>
VLAN_INTERFACE=
FLAT_INTERFACE=

ADMIN_PASSWORD=password
MYSQL_PASSWORD=password
DATABASE_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=no-token-password
HORIZON_PASSWORD=password
RABBIT_PASSWORD=password

disable_service n-net
disable_service n-cpu

enable_service q-svc
enable_service q-agt
enable_service q-dhcp
enable_service q-l3
enable_service q-meta
enable_service neutron
enable_service horizon

LOGFILE=$DEST/stack.sh.log
SCREEN_LOGDIR=$DEST/screen
SYSLOG=True
LOGDAYS=1

# ODL start
Q_HOST=$HOST_IP
enable_service odl-server
enable_service odl-compute
ODL_MGR_IP=10.11.10.7
ENABLED_SERVICES+=,n-api,n-crt,n-obj,n-cpu,n-cond,n-sch,n-novnc,n-cauth,n-cauth,nova
ENABLED_SERVICES+=,cinder,c-api,c-vol,c-sch,c-bak

Q_PLUGIN=ml2

Q_AGENT=openvswitch
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch,opendaylight
Q_ML2_PLUGIN_TYPE_DRIVERS=vlan,flat,local
Q_ML2_TENANT_NETWORK_TYPE=vxlan

ENABLE_TENANT_TUNNELS=True
ENABLE_TENANT_VLANS=True

PHYSICAL_NETWORK=physnet1
ML2_VLAN_RANGES=physnet1:1000:1010
OVS_PHYSICAL_BRIDGE=br-

MULTI_HOST=True

[[post-config|$NOVA_CONF]]

# disable nova security groups
[DEFAULT]
firewall_driver=nova.virt.firewall.NoopFirewallDriver
novncproxy_host=0.0.0.0
novncproxy_port=6080

Here is a sample local.conf for the compute node:

[[local|localrc]]

FORCE=yes
MULTI_HOST=True

HOST_NAME=<name of this machine>
HOST_IP=<ip of this machine>
HOST_IP_IFACE=<isolated interface>
SERVICE_HOST_NAME=<name of the controller machine>
SERVICE_HOST=<ip of controller machine>
Q_HOST=$SERVICE_HOST

MYSQL_HOST=$SERVICE_HOST
RABBIT_HOST=$SERVICE_HOST

GLANCE_HOST=$SERVICE_HOST
GLANCE_HOSTPORT=$SERVICE_HOST:9292
KEYSTONE_AUTH_HOST=$SERVICE_HOST
KEYSTONE_SERVICE_HOST=<ip of controller machine>

ADMIN_PASSWORD=password
MYSQL_PASSWORD=password
DATABASE_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=no-token-password
HORIZON_PASSWORD=password
RABBIT_PASSWORD=password

disable_all_services

enable_service rabbit
enable_service n-cpu
enable_service q-agt
enable_service odl-compute

DEST=/opt/stack
LOGFILE=$DEST/stack.sh.log
SCREEN_LOGDIR=$DEST/screen
SYSLOG=True
LOGDAYS=1

ODL_MGR_IP=<ip of controller machine>

Q_PLUGIN=ml2
Q_AGENT=openvswitch
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch,opendaylight
Q_ML2_PLUGIN_TYPE_DRIVERS=vxlan
OVS_NUM_HUGEPAGES=8192
OVS_DATAPATH_TYPE=netdev

OVDK_OVS_GIT_TAG=

ENABLE_TENANT_TUNNELS=True
ENABLE_TENANT_VLANS=True

Q_ML2_TENANT_NETWORK_TYPE=vxlan
ML2_VLAN_RANGES=physnet1:1000:1010
PHYSICAL_NETWORK=physnet1
OVS_PHYSICAL_BRIDGE=br-p1p1

[[post-config|$NOVA_CONF]]
[DEFAULT]
firewall_driver=nova.virt.firewall.NoopFirewallDriver
vnc_enabled=True
vncserver_listen=0.0.0.0
vncserver_proxyclient_address=$HOST_IP

A1 Create VMs using DevStack Horizon GUI

After starting the OpenDaylight controller as described in Section 62, run stack.sh on the controller and compute nodes.

Log in to http://<control node ip address>:8080 to start the Horizon GUI.

Verify that the node shows up in the following GUI

Create a new Vxlan network

1 Click on the Networks tab

2 Click on the Create Network button

3 Enter the Network name then click Next


4 Enter the subnet information then click Next


5 Add additional information then click Next

6 Click the Create button

7 Create a VM instance by clicking the Launch Instances button


8 Click on the Details tab to enter VM details


9 Click on the Networking tab then enter network information

VMs will now be created.

Instead of disabling the OVSDB neutron service by removing the OVSDB neutron bundle file, it is possible to disable the bundle from the OSGi console. However, there does not appear to be a way to make this persistent, so it must be done each time the controller restarts.


Once the controller is up and running, connect to the OSGi console. The ss command displays all of the bundles that are installed and their status. Adding a string filters the list of bundles. List the OVSDB bundles:

osgi> ss ovs
Framework is launched.

id      State       Bundle
106     ACTIVE      org.opendaylight.ovsdb.northbound_0.5.0
112     ACTIVE      org.opendaylight.ovsdb_0.5.0
262     ACTIVE      org.opendaylight.ovsdb.neutron_0.5.0

Note that there are three OVSDB bundles running (the OVSDB neutron bundle has not been removed in this case).

Disable the OVSDB neutron bundle and then list the OVSDB bundles again:

osgi> stop 262
osgi> ss ovs
Framework is launched.

id      State       Bundle
106     ACTIVE      org.opendaylight.ovsdb.northbound_0.5.0
112     ACTIVE      org.opendaylight.ovsdb_0.5.0
262     RESOLVED    org.opendaylight.ovsdb.neutron_0.5.0

Now the OVSDB neutron bundle is in the RESOLVED state which means that it is not active


Appendix B BNG as an Appliance

Please download the latest BNG application from https://01.org/intel-data-plane-performance-demonstrators/downloads. More details about how the BNG works can be found at https://01.org/intel-data-plane-performance-demonstrators/quick-overview.


Appendix C Glossary

Acronym Description

ATR Application Targeted Routing

COTS Commercial Off‐The-Shelf

DPI Deep Packet Inspection

FCS Frame Check Sequence

GRE Generic Routing Encapsulation

GRO Generic Receive Offload

IOMMU InputOutput Memory Management Unit

Kpps Kilo packets per second

KVM Kernel-based Virtual Machine

LRO Large Receive Offload

MSI Message Signaling Interrupt

MPLS Multi-protocol Label Switching

Mpps Millions of packets per second

NIC Network Interface Card

pps Packets per second

QAT Quick Assist Technology

QinQ VLAN stacking (802.1ad)

RA Reference Architecture

RSC Receive Side Coalescing

RSS Receive Side Scaling

SP Service Provider

SR-IOV Single root IO Virtualization

TCO Total Cost of Ownership

TSO TCP Segmentation Offload


Appendix D References

Document Name Source

Internet Protocol version 4
http://www.ietf.org/rfc/rfc791.txt

Internet Protocol version 6
http://www.faqs.org/rfc/rfc2460.txt

Intel® 82599 10 Gigabit Ethernet Controller Datasheet
http://www.intel.com/content/www/us/en/ethernet-controllers/82599-10-gbe-controller-datasheet.html

Intel DDIO
https://www-ssl.intel.com/content/www/us/en/io/direct-data-i-o.html

Bandwidth Sharing Fairness
http://www.intel.com/content/www/us/en/network-adapters/10-gigabit-network-adapters/10-gbe-ethernet-flexible-port-partitioning-brief.html

Design Considerations for efficient network applications with Intel® multi-core processor-based systems on Linux
http://download.intel.com/design/intarch/papers/324176.pdf

OpenFlow with Intel 82599
http://ftp.sunet.se/pub/Linux/distributions/bifrost/seminars/workshop-2011-03-31/Openflow_1103031.pdf

Wu, W., DeMar, P. & Crawford, M. (2012). A Transport-Friendly NIC for Multicore/Multiprocessor Systems. IEEE Transactions on Parallel and Distributed Systems, vol. 23, no. 4, April 2012.
http://lss.fnal.gov/archive/2010/pub/fermilab-pub-10-327-cd.pdf

Why does Flow Director Cause Packet Reordering?
http://arxiv.org/ftp/arxiv/papers/1106/1106.0443.pdf

IA packet processing
http://www.intel.com/p/en_US/embedded/hwsw/technology/packet-processing

High Performance Packet Processing on Cloud Platforms using Linux with Intel® Architecture
http://networkbuilders.intel.com/docs/network_builders_RA_packet_processing.pdf

Packet Processing Performance of Virtualized Platforms with Linux and Intel® Architecture
http://networkbuilders.intel.com/docs/network_builders_RA_NFV.pdf

DPDK
http://www.intel.com/go/dpdk

Intel® DPDK Accelerated vSwitch
https://01.org/packet-processing


LEGAL

By using this document in addition to any agreements you have with Intel you accept the terms set forth below

You may not use or facilitate the use of this document in connection with any infringement or other legal analysis concerning Intel products described herein You agree to grant Intel a non-exclusive royalty-free license to any patent claim thereafter drafted which includes subject matter disclosed herein

INFORMATION IN THIS DOCUMENT IS PROVIDED IN CONNECTION WITH INTEL PRODUCTS NO LICENSE EXPRESS OR IMPLIED BY ESTOPPEL OR OTHERWISE TO ANY INTELLECTUAL PROPERTY RIGHTS IS GRANTED BY THIS DOCUMENT EXCEPT AS PROVIDED IN INTELS TERMS AND CONDITIONS OF SALE FOR SUCH PRODUCTS INTEL ASSUMES NO LIABILITY WHATSOEVER AND INTEL DISCLAIMS ANY EXPRESS OR IMPLIED WARRANTY RELATING TO SALE ANDOR USE OF INTEL PRODUCTS INCLUDING LIABILITY OR WARRANTIES RELATING TO FITNESS FOR A PARTICULAR PURPOSE MERCHANTABILITY OR INFRINGEMENT OF ANY PATENT COPYRIGHT OR OTHER INTELLECTUAL PROPERTY RIGHT

Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors

Performance tests such as SYSmark and MobileMark are measured using specific computer systems components software operations and functions Any change to any of those factors may cause the results to vary You should consult other information and performance tests to assist you in fully evaluating your contemplated purchases including the performance of that product when combined with other products

The products described in this document may contain design defects or errors known as errata which may cause the product to deviate from published specifications Current characterized errata are available on request Contact your local Intel sales office or your distributor to obtain the latest specifications and before placing your product order

Intel technologies may require enabled hardware specific software or services activation Check with your system manufacturer or retailer Tests document performance of components on a particular test in specific systems Differences in hardware software or configuration will affect actual performance Consult other sources of information to evaluate performance as you consider your purchase For more complete information about performance and benchmark results visit httpwwwintelcomperformance

All products computer systems dates and figures specified are preliminary based on current expectations and are subject to change without notice Results have been estimated or simulated using internal Intel analysis or architecture simulation or modeling and provided to you for informational purposes Any differences in your system hardware software or configuration may affect your actual performance

No computer system can be absolutely secure Intel does not assume any liability for lost or stolen data or systems or any damages resulting from such losses

Intel does not control or audit third-party web sites referenced in this document You should visit the referenced web site and confirm whether referenced data are accurate

Intel Corporation may have patents or pending patent applications trademarks copyrights or other intellectual property rights that relate to the presented subject matter The furnishing of documents and other materials and information does not provide any license express or implied by estoppel or otherwise to any such patents trademarks copyrights or other intellectual property rights

copy 2014 Intel Corporation All rights reserved Intel the Intel logo Core Xeon and others are trademarks of Intel Corporation in the US andor other countries Other names and brands may be claimed as the property of others

Page 23: Intel Open Network Platform Server Reference Architecture ... · Intel® ONP Server Reference Architecture Solutions Guide 1.0 Audience and Purpose The primary audiences for this

23

Intelreg ONP Server Reference ArchitectureSolutions Guide

53 Compute Node SetupThis section describes how to complete the setup of the compute nodes It is assumed that the user has successfully completed the BIOS settings and operating system installation and configuration sections

Note Please make sure to download and use the onps_server_1_2targz tarball Start with the README file Yoursquoll get instructions on how to use Intelrsquos scripts to automate most of the installation steps described in this section and this saves you time

531 Host Configuration

5311 Using DevStack to Deploy vSwitch and OpenStack Components

General

Deploying OpenStack and Intelreg DPDK Accelerated vSwitch using DevStack on a compute node follows the same procedures as on the controller node Differences include

bull Required services are nova compute neutron agent and Rabbit

bull Intelreg DPDK Accelerated vSwitch is used in place of Open vSwitch for neutron agent

Compute Node Installation Example

The following example uses a host for compute node installation with the following

bull Hostname sdnlab-k02

bull Lab network IP address Obtained from DHCP server

bull OpenStack Management IP address 1011122

bull Userpassword stackstack

Note the following

bull No_proxy setup Localhost and its IP address should be included in the no_proxy setup In addition hostname and IP address of the controller node should also be included For example

export no_proxy=localhost1011122sdnlab-k011011121

bull Differences in the localconf file

mdash The service host is the controller as well as other OpenStack servers such as MySQL Rabbit Keystone and Image Therefore they should be spelled out Using the controller node example in the previous section the service host and its IP address should be

SERVICE_HOST_NAME=sdnlab-k01SERVICE_HOST=1011121

mdash The only OpenStack services required in compute nodes are messaging nova compute and neutron agent so the localconf might look like

disable_all_services enable_service rabbitenable_service n-cpu enable_service q-agt

Intelreg ONP Server Reference ArchitectureSolutions Guide

24

mdash The user has option to use ovdk or openvswitch for neutron agent

Q_AGENT=ovdk

or

Q_AGENT=openvswitch

Note For openvswitch the user can specify regular or accelerated openvswitch (accelerated OVS) If accelerated OVS is use the following setup should be added

OVS_DATAPATH_TYPE=netdev

Note If both are specified in the same localconf file the later one overwrites the previous one

mdash For the OVDK and accelerated OVS huge pages setting specify number of huge pages to be allocated and mounting point (default is mnthuge)

OVDK_NUM_HUGEPAGES=8192

or

OVS_NUM_HUGEPAGES=8192

mdash For this version Intel uses specific versions for OVDK or Accelerated OVS from their respective repositories Specify the following in the localconf file if OVDK or accelerated OVS is used

OVDK_GIT_TAG=6210bb0a6139b20283de115f87aa7a381b04670fOVS_GIT_TAG=b35839f3855e3b812709c6ad1c9278f498aa9935

mdash Binding the physical port to the bridge is through the following line in localconf For example to bind port p1p1 to bridge br-p1p1 use

OVS_PHYSICAL_BRIDGE=br-p1p1

mdash A sample localconf file for compute node with ovdk agent follows

Compute node[[local|localrc]]

FORCE=yesMULTI_HOST=True

HOST_NAME=$(hostname)HOST_IP=1011122HOST_IP_IFACE=ens2f0SERVICE_HOST_NAME=1011121SERVICE_HOST=1011121

MYSQL_HOST=$SERVICE_HOSTRABBIT_HOST=$SERVICE_HOST

GLANCE_HOST=$SERVICE_HOSTGLANCE_HOSTPORT=$SERVICE_HOST9292KEYSTONE_AUTH_HOST=$SERVICE_HOSTKEYSTONE_SERVICE_HOST=$SERVICE_HOST

ADMIN_PASSWORD=passwordMYSQL_PASSWORD=passwordDATABASE_PASSWORD=passwordSERVICE_PASSWORD=passwordSERVICE_TOKEN=no-token-passwordHORIZON_PASSWORD=passwordRABBIT_PASSWORD=password

disable_all_servicesenable_service rabbit

25

Intelreg ONP Server Reference ArchitectureSolutions Guide

enable_service n-cpuenable_service q-agt

DEST=optstack_LOGFILE=$DESTstackshlogSCREEN_LOGDIR=$DESTscreenSYSLOG=TrueLOGDAYS=1

Q_AGENT=ovdkQ_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitchQ_ML2_PLUGIN_TYPE_DRIVERS=vlanOVDK_NUM_HUGEPAGES=8192OVDK_GIT_TAG=6210bb0a6139b20283de115f87aa7a381b04670f

ENABLE_TENANT_TUNNELS=FalseENABLE_TENANT_VLANS=TrueML2_VLAN_RANGES=physnet110001010PHYSICAL_NETWORK=physnet1OVS_PHYSICAL_BRIDGE=br-p1p1

[[post-config|$NOVA_CONF]][DEFAULT]firewall_driver=novavirtfirewallNoopFirewallDrivervnc_enabled=Truevncserver_listen=0000vncserver_proxyclient_address=$HOST_IP

mdash A sample localconf file for compute node with accelerated ovs agent follows

Compute node[[local|localrc]]

FORCE=yesMULTI_HOST=True

HOST_NAME=$(hostname)HOST_IP=1011122HOST_IP_IFACE=ens2f0

SERVICE_HOST_NAME=sdnlab-k01SERVICE_HOST=1011121

MYSQL_HOST=$SERVICE_HOSTRABBIT_HOST=$SERVICE_HOST

GLANCE_HOST=$SERVICE_HOSTGLANCE_HOSTPORT=$SERVICE_HOST9292KEYSTONE_AUTH_HOST=$SERVICE_HOST

KEYSTONE_SERVICE_HOST=$SERVICE_HOST

ADMIN_PASSWORD=passwordMYSQL_PASSWORD=passwordDATABASE_PASSWORD=passwordSERVICE_PASSWORD=passwordSERVICE_TOKEN=no-token-passwordHORIZON_PASSWORD=passwordRABBIT_PASSWORD=password

disable_all_servicesenable_service rabbitenable_service n-cpuenable_service q-agt

DEST=optstack

Intelreg ONP Server Reference ArchitectureSolutions Guide

26

LOGFILE=$DESTstackshlogSCREEN_LOGDIR=$DESTscreenSYSLOG=TrueLOGDAYS=1

Q_AGENT=openvswitchQ_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitchQ_ML2_PLUGIN_TYPE_DRIVERS=vlanOVS_NUM_HUGEPAGES=8192OVS_DATAPATH_TYPE=netdevOVS_GIT_TAG=b35839f3855e3b812709c6ad1c9278f498aa9935

ENABLE_TENANT_TUNNELS=FalseENABLE_TENANT_VLANS=TrueML2_VLAN_RANGES=physnet110001010PHYSICAL_NETWORK=physnet1OVS_PHYSICAL_BRIDGE=br-p1p1

[[post-config|$NOVA_CONF]][DEFAULT]firewall_driver=novavirtfirewallNoopFirewallDrivervnc_enabled=Truevncserver_listen=0000vncserver_proxyclient_address=$HOST_IP

54 vIPSThe vIPS used is Suricata which should be installed as an rpm package as previously described in a VM In order to configure it to run in inline mode (IPS) use the following

1 Turn on IP forwarding

sysctl -w netipv4ip_forward=1

2 Mangle all traffic from one vPort to the other using a netfilter queue

iptables -I FORWARD -i eth1 -o eth2 -j NFQUEUE iptables -I FORWARD -i eth2 -o eth1 -j NFQUEUE

3 Have Suricata run in inline mode using the netfilter queue

suricata -c etcsuricatasuricatayaml -q 0

4 Enable ARP proxying

echo 1 gt procsysnetipv4confeth1proxy_arp echo 1 gt procsysnetipv4confeth2proxy_arp

541 Network Configuration for non-vIPS Guests1 Turn on IP forwarding

sysctl -w netipv4ip_forward=1

2 In the source add the route to the sink

route add -net 192168200024 eth1

3 At the sink add the route to the source

route add -net 192168100024 eth1

27

Intelreg ONP Server Reference ArchitectureSolutions Guide

60 Testing the Setup

This section describes how to bring up the VMs in a compute node connect them to the virtual network(s) verify the functionality

Note Currently it is not possible to have more than one virtual network in a multi-compute node setup Although it is possible to have more than one virtual network in a single compute node setup

61 Preparation with OpenStack

611 Deploying Virtual Machines

6111 Default Settings

OpenStack comes with the following default settings

bull Tenant (Project) admin demo

bull Network

mdash Private network (virtual network) 1000024

mdash Public network (external network) 172244024

bull Image cirros-031-x86_64

bull Flavor nano micro tiny small medium large xlarge

To deploy new instances (VMs) with different setups (such as a different VM image flavor or network) users must create their own See below for details of how to create them

To access the OpenStack dashboard use a web browser (Firefox Internet Explorer or others) and the controllers IP address (management network) For example

http1011121

Login information is defined in the localconf file In the examples that follow password is the password for both admin and demo users

Intelreg ONP Server Reference ArchitectureSolutions Guide

28

6112 Customer Settings

The following examples describe how to create a custom VM image flavor and aggregateavailability zone using OpenStack commands The examples assume the IP address of the controller is 1011121

1 Create a credential file admin-cred for admin user The file contains the following lines

export OS_USERNAME=adminexport OS_TENANT_NAME=adminexport OS_PASSWORD=passwordexport OS_AUTH_URL=http101112135357v20

2 Source admin-cred to the shell environment for actions of creating glance image aggregateavailability zone and flavor

source admin-cred

3 Create an OpenStack glance image A VM image file should be ready in a location accessible by OpenStack

glance image-create --name ltimage-name-to-creategt --is-public=true --container-format=bare --disk-format=ltformatgt --file=ltimage-file-path-namegt

The following example shows the image file fedora20-x86_64-basicqcow2 is located in a NFS share and mounted at mntnfsopenstackimages to the controller host The following command creates a glance image named fedora-basic with qcow2 format for public use (such as any tenant can use this glance image)

glance image-create --name fedora-basic --is-public=true --container-format=bare --disk-format=qcow2 --file=mntnfsopenstackimagesfedora20-x86_64-basicqcow2

4 Create host aggregate and availability zone

First find out the available hypervisors and then use the information for creating aggregateavailability zone

nova hypervisor-listnova aggregate-create ltaggregate-namegt ltzone-namegtnova aggregate-add-host ltaggregate-namegt lthypervisor-namegt

The following example creates an aggregate named aggr-g06 with one availability zone named zone-g06 and the aggregate contains one hypervisor named sdnlab-g06

nova aggregate-create aggr-g06 zone-g06nova aggregate-add-host aggr-g06 sdnlab-g06

5 Create flavor Flavor is a virtual hardware configuration for the VMs it defines the number of virtual CPUs size of virtual memory and disk space among others

The following command creates a flavor named onps-flavor with an ID of 1001 1024 Mb virtual memory 4 Gb virtual disk space and 1 virtual CPU

nova flavor-create onps-flavor 1001 1024 4 1

29

Intelreg ONP Server Reference ArchitectureSolutions Guide

6113 Example mdash VM Deployment

The following example describes how to use a customer VM image flavor and aggregate to launch a VM for a demo Tenant using OpenStack commands Again the example assumes the IP address of the controller is 1011121

1 Create a credential file demo-cred for the demo user. The file contains the following lines:

export OS_USERNAME=demo
export OS_TENANT_NAME=demo
export OS_PASSWORD=password
export OS_AUTH_URL=http://10.1.1.121:35357/v2.0

2 Source demo-cred into the shell environment before creating the tenant network and instance (VM):

source demo-cred

3 Create a network for tenant demo. Take the following steps:

a Get the tenant ID for demo:

keystone tenant-list | grep -Fw demo

The following creates a network named net-demo for the tenant with ID 10618268adb64f17b266fd8fb83c960d:

neutron net-create --tenant-id 10618268adb64f17b266fd8fb83c960d net-demo

b Create a subnet:

neutron subnet-create --tenant-id <demo-tenant-id> --name <subnet_name> <network-name> <net-ip-range>

The following creates a subnet named sub-demo with CIDR address 192.168.2.0/24 for network net-demo:

neutron subnet-create --tenant-id 10618268adb64f17b266fd8fb83c960d --name sub-demo net-demo 192.168.2.0/24

4 Create an instance (VM) for tenant demo. Take the following steps:

a Get the name and/or ID of the image, flavor, and availability zone to be used for creating the instance:

glance image-list
nova flavor-list
nova aggregate-list
neutron net-list

b Launch an instance (VM) using information obtained from previous step

nova boot --image <image-id> --flavor <flavor-id> --availability-zone <zone-name> --nic net-id=<network-id> <instance-name>

c The new VM should be up and running in a few minutes
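
The boot progress can also be followed from the command line (a quick check; the instance name is the one given in the nova boot command):

nova list
nova show <instance-name>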

5 Log into the OpenStack dashboard using the demo user credentials and click Instances under Project in the left pane; the new VM should show in the right pane. Click the instance name to open the Instance Details view, then click Console in the top menu to access the VM as follows.


6.1.1.4 Local vIPS

Configuration

1 OpenStack brings up the VMs and connects them to the vSwitch

2 IP addresses of the VMs get configured using the DHCP server VM1 belongs to one subnet and VM3 to a different one VM2 has ports on both subnets

3 Flows get programmed to the vSwitch by the OpenDaylight controller (Section 6.2)

Data Path (Numbers Matching Red Circles)

1 VM1 sends a flow to VM3 through the vSwitch

2 The vSwitch forwards the flow to the first vPort of VM2 (active IPS)

3 The IPS receives the flow inspects it and (if not malicious) sends it out through its second vPort

4 The vSwitch forwards it to VM3
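
For illustration only, this service chain corresponds to flows of the following form. In this setup the flows are programmed by OpenDaylight (Section 6.2), not by hand, and the OpenFlow port numbers used here (1 for VM1, 2 and 3 for the two IPS vPorts, 4 for VM3) are assumptions that would have to be taken from ovs-ofctl show:

ovs-ofctl add-flow br-int in_port=1,dl_type=0x0800,action=output:2
ovs-ofctl add-flow br-int in_port=3,dl_type=0x0800,action=output:4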

Figure 6-1 Local vIPS


6.1.1.5 Remote vIPS

Configuration

1 OpenStack brings up the VMs and connects them to the vSwitch

2 The IP addresses of the VMs get configured using the DHCP server

Data Path (Numbers Matching Red Circles)

1 VM1 sends a flow to VM3 through the vSwitch inside compute node 1

2 The vSwitch forwards the flow out of the first 82599 port to the first 82599 port of compute node 2

3 The vSwitch of compute node 2 forwards the flow to the first port of the vHost where the traffic gets consumed by VM1

4 The IPS receives the flow inspects it and (provided it is not malicious) sends it out through its second port of the vHost into the vSwitch of compute node 2

5 The vSwitch forwards the flow out of the second 82599 port of compute node 2 into the second port of the 82599 in compute node 1

6 The vSwitch of compute node 1 forwards the flow into the port of the vHost of VM3 where the flow gets terminated

Figure 6-2 Remote vIPS


6.1.2 Non-uniform Memory Access (NUMA) Placement and SR-IOV Pass-through for OpenStack

NUMA support was implemented as a new feature in the OpenStack Juno release. NUMA placement enables an OpenStack administrator to pin guest systems to particular NUMA nodes for optimization. With an SR-IOV-enabled network interface card, each SR-IOV port is associated with a Virtual Function (VF). OpenStack SR-IOV pass-through enables a guest to access a VF directly.

6.1.2.1 Prepare Compute Node for SR-IOV Pass-through

To enable the previous features, follow these steps to configure the compute node:

1 The server hardware must support IOMMU (Intel VT-d). To check whether IOMMU is supported, run the following command; the output should show IOMMU entries:

dmesg | grep -e IOMMU

Note: IOMMU can be enabled/disabled through a BIOS setting under Advanced and then Processor.

2 Enable kernel IOMMU in grub. For Fedora 20, run the commands:

sed -i 's/rhgb quiet/rhgb quiet intel_iommu=on/g' /etc/default/grub
grub2-mkconfig -o /boot/grub2/grub.cfg
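
After the next reboot, a quick sanity check confirms that the option made it onto the kernel command line:

cat /proc/cmdline | grep intel_iommu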

3 Install necessary packages

yum install -y yajl-devel device-mapper-devel libpciaccess-devel libnl-devel dbus-devel numactl-devel python-devel

4 Install libvirt v1.2.8 or newer. The following example uses v1.2.9:

systemctl stop libvirtd

yum remove libvirt
yum remove libvirtd

wget http://libvirt.org/sources/libvirt-1.2.9.tar.gz
tar zxvf libvirt-1.2.9.tar.gz

cd libvirt-1.2.9
./autogen.sh --system --with-dbus
make
make install

systemctl start libvirtd

Make sure libvirtd is running v1.2.9:

libvirtd --version

5 Install libvirt-python. The example below uses v1.2.9 to match the libvirt version:

yum remove libvirt-python

wget https://pypi.python.org/packages/source/l/libvirt-python/libvirt-python-1.2.9.tar.gz
tar zxvf libvirt-python-1.2.9.tar.gz

cd libvirt-python-1.2.9
python setup.py install


6 Modify /etc/libvirt/qemu.conf to add

/dev/vfio/vfio

to the

cgroup_device_acl list

An example follows

cgroup_device_acl = [
    "/dev/null", "/dev/full", "/dev/zero",
    "/dev/random", "/dev/urandom",
    "/dev/ptmx", "/dev/kvm", "/dev/kqemu",
    "/dev/rtc", "/dev/hpet", "/dev/net/tun",
    "/dev/vfio/vfio"
]

7 Enable the SR-IOV virtual functions for an 82599 interface. The following example enables 2 VFs for interface p1p1:

echo 2 > /sys/class/net/p1p1/device/sriov_numvfs

To check that virtual functions are enabled

lspci -nn | grep 82599

The screen output should display the physical function and two virtual functions
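
The output should look similar to the following (example only; the bus addresses will differ on other systems, 10fb is the physical function and 10ed the virtual functions):

08:00.0 Ethernet controller [0200]: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection [8086:10fb] (rev 01)
08:10.0 Ethernet controller [0200]: Intel Corporation 82599 Ethernet Controller Virtual Function [8086:10ed] (rev 01)
08:10.2 Ethernet controller [0200]: Intel Corporation 82599 Ethernet Controller Virtual Function [8086:10ed] (rev 01)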

6.1.2.2 DevStack Configurations

In the following text, the example uses a controller with IP address 10.1.1.121 and a compute node with IP address 10.1.1.124. The PCI vendor ID (8086) and the product IDs of the 82599 (10fb for the physical function and 10ed for the VF) can be obtained from the output of:

lspci -nn | grep 82599

On Controller node

1 Edit the controller local.conf. Note that the same local.conf file of Section 5.2.1.3 is used here, adding the following:

Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch,sriovnicswitch

[[post-config|$NOVA_CONF]]
[DEFAULT]
scheduler_default_filters=RamFilter,ComputeFilter,AvailabilityZoneFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,PciPassthroughFilter,NUMATopologyFilter
pci_alias={"name":"niantic","product_id":"10ed","vendor_id":"8086"}

[[post-config|$Q_PLUGIN_CONF_FILE]]
[ml2_sriov]
supported_pci_vendor_devs = 8086:10fb 8086:10ed

2 Run stack.sh

On Compute node

1 Edit /opt/stack/nova/requirements.txt to add "libvirt-python>=1.2.8":

echo "libvirt-python>=1.2.8" >> /opt/stack/nova/requirements.txt

2 Edit the compute node local.conf for accelerated OVS. Note that the same local.conf file of Section 5.3.1.1 is used here.


3 Add the following:

[[post-config|$NOVA_CONF]]
[DEFAULT]
pci_passthrough_whitelist={"address":"0000:08:00.0","vendor_id":"8086","physical_network":"physnet1"}

pci_passthrough_whitelist={"address":"0000:08:10.0","vendor_id":"8086","physical_network":"physnet1"}

pci_passthrough_whitelist={"address":"0000:08:10.2","vendor_id":"8086","physical_network":"physnet1"}

4 Remove (or comment out) the following. Note that currently SR-IOV pass-through is only supported with standard OVS:

OVS_NUM_HUGEPAGES=8192
OVS_DATAPATH_TYPE=netdev
OVS_GIT_TAG=b35839f3855e3b812709c6ad1c9278f498aa9935

Run stack.sh on both the controller and compute nodes to complete the DevStack installation.

6.1.2.3 Create VM with NUMA Placement and SR-IOV

1 After stacking is successful on both the controller and compute nodes, verify that the PCI pass-through device(s) are in the OpenStack database:

mysql -uroot -ppassword -h 10.1.1.121 nova -e 'select * from pci_devices;'

2 The output should show entry(ies) of PCI device(s) similar to the following

| 2014-11-18 194114 | NULL | NULL | 0 | 1 | 3 | 000008100 | 10ed | 8086 | type-VF | pci_0000_08_10_0 | label_8086_10ed | available | phys_function 000008000 | NULL | NULL | 0 |

3 Next, create a flavor, for example:

nova flavor-create numa-flavor 1001 1024 4 1

where

flavor name = numa-flavor
id = 1001
virtual memory = 1024 MB
virtual disk size = 4 GB
number of virtual CPUs = 1

4 Modify the flavor for NUMA placement with PCI pass-through:

nova flavor-key 1001 set pci_passthrough:alias=niantic:1 hw:numa_nodes=1 hw:numa_cpus.0=0 hw:numa_mem.0=1024

5 To show detailed information of the flavor

nova flavor-show 1001

6 Create a VM numa-vm1 with the flavor numa-flavor under the default project demo. Note that the following example assumes an image fedora-basic and an availability zone zone-04 are already in place (see Section 6.1.1.2) and that private is the default network for the demo project.

nova boot --image <image-id> --flavor <flavor-id> --availability-zone <zone-name> --nic net-id=<network-id> numa-vm1

where numa-vm1 is the name of the VM instance to be booted.
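
Optionally, verify from the controller that the instance is ACTIVE and that a VF has been allocated to it (the column names assume the Juno pci_devices schema):

nova show numa-vm1
mysql -uroot -ppassword -h 10.1.1.121 nova -e 'select address, status, instance_uuid from pci_devices;'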


Access the VM from OpenStack Horizon; the new VM shows two virtual network interfaces. The interface with an SR-IOV VF should show a name of ensX, where X is a number (for example, ens5). If a DHCP server is available for the physical interface (p1p1 in this example), the VF gets an IP address automatically; otherwise, users can assign an IP address to the interface the same way as for a standard network interface.

To verify network connectivity through a VF users can set up two compute hosts and create a VM on each node After obtaining IP addresses the VMs should communicate with each other as with a normal network

6.2 Using OpenDaylight

This section describes how to download, install, and set up an OpenDaylight controller.

6.2.1 Preparing the OpenDaylight Controller

1 Download the pre-built OpenDaylight Helium-SR1 distribution:

wget http://nexus.opendaylight.org/content/repositories/opendaylight.release/org/opendaylight/integration/distribution-karaf/0.2.1-Helium-SR1/distribution-karaf-0.2.1-Helium-SR1.tar.gz

2 Extract the archive and cd into it

tar xf distribution-karaf-0.2.1-Helium-SR1.tar.gz
cd distribution-karaf-0.2.1-Helium-SR1

3 Use the bin/karaf executable to start the Karaf shell.

Intelreg ONP Server Reference ArchitectureSolutions Guide

36

4 Install the required features

Note: Karaf might take a long time to start, or the feature install might fail, if the host does not have network access. You'll need to set up the appropriate proxy settings.
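
A minimal sketch of steps 3 and 4, assuming the OVSDB/OpenStack integration feature set (the exact features to install depend on the use case):

./bin/karaf
opendaylight-user@root> feature:install odl-base-all odl-aaa-authn odl-restconf odl-nsf-all odl-adsal-northbound odl-mdsal-apidocs odl-ovsdb-openstack odl-ovsdb-northbound odl-dlux-core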

6.3 Border Network Gateway

This section describes how to install and run a Border Network Gateway (BNG) on a compute node that is prepared as described in Section 5.1 and Section 5.3. The example interface names from those sections have been maintained in this section too. Also, for simplicity, the BNG is using the handle_none configuration mode, which makes it work as an L2 forwarding engine. The BNG is more complex than this, and users who are interested in exploring more of its capabilities should read https://01.org/intel-data-plane-performance-demonstrators/quick-overview.

The setup to test the functionality of the vBNG follows


6.3.1 Installation and Configuration Inside the VM

1 Execute the following command:

yum -y update

2 Disable SELinux

setenforce 0
vi /etc/selinux/config

and change it so that SELINUX=disabled.

3 Disable the firewall

systemctl disable firewalld.service
reboot

4 Edit grub default configuration

vi /etc/default/grub

Add hugepages to it

... noirqbalance intel_idle.max_cstate=0 processor.max_cstate=0 ipv6.disable=1 default_hugepagesz=1G hugepagesz=1G hugepages=2 isolcpus=1,2,3,4

5 Rebuild grub config and reboot the system

grub2-mkconfig -o /boot/grub2/grub.cfg
reboot

6 Verify that hugepages are available in the VM

cat /proc/meminfo
HugePages_Total:    2
HugePages_Free:     2
Hugepagesize:       1048576 kB

7 Add the following to the end of the ~/.bashrc file:

# ---------------------------------------------
export RTE_SDK=/root/dpdk
export RTE_TARGET=x86_64-native-linuxapp-gcc
export OVS_DIR=/root/ovs

export RTE_UNBIND=$RTE_SDK/tools/dpdk_nic_bind.py
export DPDK_DIR=$RTE_SDK
export DPDK_BUILD=$DPDK_DIR/$RTE_TARGET
# ---------------------------------------------

8 Re-login or source that file

source ~/.bashrc

9 Install DPDK

git clone http://dpdk.org/git/dpdk
cd dpdk
git checkout v1.7.1
make install T=$RTE_TARGET
modprobe uio
insmod $RTE_SDK/$RTE_TARGET/kmod/igb_uio.ko


10 Check the PCI addresses of the 82599 cards

lspci | grep Network
00:04.0 Ethernet controller: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection (rev 01)
00:05.0 Ethernet controller: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection (rev 01)
00:06.0 Ethernet controller: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection (rev 01)
00:07.0 Ethernet controller: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection (rev 01)

11 Make sure that the correct PCI addresses are listed in the script bind_to_igb_uio.sh.
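
The script is only expected to bind the 82599 ports that the BNG will use to igb_uio; a minimal sketch of its contents, assuming two of the addresses reported in step 10, follows:

$RTE_UNBIND --bind=igb_uio 00:06.0 00:07.0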

12 Download BNG packages

wget https://01.org/sites/default/files/downloads/intel-data-plane-performance-demonstrators/dppd-bng-v013.zip

13 Extract DPPD BNG sources

unzip dppd-bng-v013.zip

14 Build BNG DPPD application

yum -y install ncurses-devel
cd dppd-BNG-v013
make

15 Refer to Section 6.3.3, "Extra Preparations on the Compute Node," before running the BNG application in the VM inside the compute node.

16 Make sure that the application starts

./build/dppd -f config/handle_none.cfg

The handle_none configuration should pass all traffic straight through between ports, which is essentially similar to the L2 forwarding test. The config directory contains additional, more complex BNG configurations and Pktgen scripts. Additional BNG-specific workloads can be found in the dppd-BNG-v013/pktgen-scripts directory.

Following is a sample graphic of the BNG running in a VM with 2 ports


Exit the application by pressing ESC or CTRL-C

Refer to Section 6.3.2 regarding installing and running the software traffic generator.

For a sanity check, users can use the Pktgen wrapper script onps_pktgen-64bytes-UDP-2ports.sh to run Pktgen (on its dedicated server) in order to test the handle_none throughput for two physical and two virtual ports. You'll need to update the PKTGEN_DIR variable at the top of the file to point to the right directory, which is the following (referring to Section 6.3.2):

PKTGEN_DIR=/home/stack/git/Pktgen-DPDK
pktgen-64bytes.sh

6.3.2 Installation and Configuration of the Back-to-Back Host (Packet Generator)

The back-to-back host can be any Intel® Xeon® processor-based system, or it can be any compute node that has been prepared using the instructions in Section 5.1 and Section 5.3. For simplicity, Intel assumes the latter is the case. Also assume that the git directory for the stack user is /home/stack/git.

1 In the git directory, get the source from GitHub:

git clone https://github.com/Pktgen/Pktgen-DPDK.git
cd Pktgen-DPDK

2 An extra package must be installed for Pktgen to compile correctly

yum -y install libpcap-devel

Pktgen comes with its own distribution of DPDK sources. This bundled version of DPDK must be used. Note that it contains some Wind River-specific helper libraries, which Pktgen depends on, that are not in the default DPDK distribution.

3 The $RTE_TARGET variable must be set to a specific value; otherwise, these libraries will not build.

cd
vi .bashrc

Add the following three lines to the end

export RTE_SDK=$HOME/Pktgen-DPDK/dpdk
export RTE_TARGET=x86_64-pktgen-linuxapp-gcc
export PKTGEN_DIR=$HOME/Pktgen-DPDK

4 Re-login or execute the following command

source ~/.bashrc

5 Build the basic DPDK libraries and extra helpers

cd $RTE_SDK
make install T=$RTE_TARGET

6 Build Pktgen

cd examples/pktgen
make

7 Adapt the dpdk_nic_bind.py invocation to the actual NICs in use so that both interfaces are bound to igb_uio and DPDK can use them. See the details of the command that follows:

./tools/dpdk_nic_bind.py --status
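
Once the target interfaces are identified, they can be bound to igb_uio (a sketch; the PCI addresses are placeholders to be taken from the --status output):

./tools/dpdk_nic_bind.py --bind=igb_uio <pci-address-1> <pci-address-2>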

8 Use onps_pktgen-64bytes-UDP-2ports.sh from onps_server_1_2.tar.gz.


9 Now run the script as root, after the compute node has been set up as in Section 6.3.3, the BNG VM has been prepared as in Section 6.3.1, and the BNG has been started inside the VM.

6.3.3 Extra Preparations on the Compute Node

1 Do the following as the stack user:

cd /home/stack/devstack
vi local.conf

2 Comment out the following

PHYSICAL_NETWORK=physnet1
OVS_PHYSICAL_BRIDGE=br-p1p1

And at the same time add the following line right below the previous commented ones

OVS_BRIDGE_MAPPINGS=default:br-p1p1,physnet1:br-p1p2

3 Run again as stack user

./unstack.sh

./stack.sh

This causes both physical interfaces to come up and get bound to DPDK. Also, a bridge is created on top of each of these interfaces:

ovs-vsctl show
b52bd3ed-0f6c-45b9-ace1-846d901bed64
    Bridge br-p1p1
        Port br-p1p1
            Interface br-p1p1
                type: internal
        Port p1p1
            Interface p1p1
                type: dpdkphy
                options: {port=0}
        Port phy-br-p1p1
            Interface phy-br-p1p1
                type: patch
                options: {peer=int-br-p1p1}
    Bridge br-int
        fail_mode: secure
        Port int-br-p1p2
            Interface int-br-p1p2
                type: patch
                options: {peer=phy-br-p1p2}
        Port int-br-p1p1
            Interface int-br-p1p1
                type: patch
                options: {peer=phy-br-p1p1}
        Port br-int
            Interface br-int
                type: internal
    Bridge br-p1p2
        Port phy-br-p1p2
            Interface phy-br-p1p2
                type: patch
                options: {peer=int-br-p1p2}
        Port p1p2
            Interface p1p2
                type: dpdkphy
                options: {port=1}
        Port br-p1p2
            Interface br-p1p2
                type: internal

4 Move the p1p2 physical port under the same bridge as p1p1

ovs-vsctl del-port p1p2
ovs-vsctl add-port br-p1p1 p1p2 -- set Interface p1p2 type=dpdkphy option:port=1

5 Stop the OpenStack agent (rejoin the DevStack screen session, switch to the agent window, interrupt it, and detach):

./rejoin-stack.sh
ctrl-a 1
ctrl-c
ctrl-a d

6 Add the dpdkvhost interfaces for the VM

ovs-vsctl --no-wait add-port br-p1p1 port3 -- set Interface port3 type=dpdkvhost ofport_request=3
ovs-vsctl --no-wait add-port br-p1p1 port4 -- set Interface port4 type=dpdkvhost ofport_request=4

7 Find out the OpenFlow port numbers of the interfaces on the bridge:

ovs-ofctl show br-p1p1

The output should be similar to the following. Note the number to the left of each interface name, because it is the port number used when programming the flows.

OFPT_FEATURES_REPLY (xid=0x2): dpid:0000286031010000
n_tables:254, n_buffers:256
capabilities: FLOW_STATS TABLE_STATS QUEUE_STATS ARP_MATCH_IP
actions: OUTPUT SET_VLAN_VID SET_VLAN_PCP STRIP_VLAN SET_DL_SRC SET_DL_DST SET_NW_SRC SET_NW_DST SET_TP_SRC SET_TP_DST
 1(phy-br-p1p1): addr:9e:ae:92:25:3c:c1
     config: 0  state: 0  speed: 0 Mbps now, 0 Mbps max
 2(p1p2): addr:9e:ae:92:25:3c:c1
     config: 0  state: 0  speed: 0 Mbps now, 0 Mbps max
 3(port3): addr:9e:ae:92:25:3c:c1
     config: 0  state: 0  speed: 0 Mbps now, 0 Mbps max
 4(port4): addr:49:04:ff:7f:00:00
     config: 0  state: 0  speed: 0 Mbps now, 0 Mbps max
 16(p1p1): addr:49:04:ff:7f:00:00
     config: 0  state: 0  speed: 0 Mbps now, 0 Mbps max
 LOCAL(br-p1p1): addr:9e:ae:92:25:3c:c1
     config: 0  state: 0  speed: 0 Mbps now, 0 Mbps max
OFPT_GET_CONFIG_REPLY (xid=0x4): frags=normal miss_send_len=0

8 Clean up the flow table of the bridge

ovs-ofctl del-flows br-p1p1

9 Program the flows so each physical interface forwards the packets to a dpdkvhost interface and the other way round

ovs-ofctl add-flow br-p1p1 in_port=16,dl_type=0x0800,idle_timeout=0,action=output:3
ovs-ofctl add-flow br-p1p1 in_port=3,dl_type=0x0800,idle_timeout=0,action=output:16
ovs-ofctl add-flow br-p1p1 in_port=4,dl_type=0x0800,idle_timeout=0,action=output:2
ovs-ofctl add-flow br-p1p1 in_port=2,dl_type=0x0800,idle_timeout=0,action=output:4
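
The programmed flows can be verified before starting traffic:

ovs-ofctl dump-flows br-p1p1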


10 Users can now spawn their vBNG

qemu-kvm -cpu host -enable-kvm -m 4096 -smp 4,cores=4,threads=1,sockets=1 -name VM1 \
  -hda <path to the VM image file> -mem-path /dev/hugepages -mem-prealloc -vnc :2 -daemonize \
  -net nic,model=virtio,macaddr=00:1e:77:68:09:fd \
  -net tap,ifname=tap1,script=no,downscript=no \
  -netdev type=tap,id=net1,script=no,downscript=no,ifname=port3,vhost=on \
  -device virtio-net-pci,netdev=net1,mac=00:00:01:00:00:01,csum=off,gso=off,guest_tso4=off,guest_tso6=off,guest_ecn=off \
  -netdev type=tap,id=net2,script=no,downscript=no,ifname=port4,vhost=on \
  -device virtio-net-pci,netdev=net2,mac=00:00:01:00:00:02,csum=off,gso=off,guest_tso4=off,guest_tso6=off,guest_ecn=off
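
The VM console is then reachable on VNC display :2 of the compute node, for example with a standard VNC client (host name is an assumption):

vncviewer <compute-node-ip>:2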


Appendix A Additional OpenDaylight Information

This section describes how OpenDaylight can be used in a multi-node setup. Two hosts are used: one runs OpenDaylight, the OpenStack controller + compute services, and OVS; the second host is the compute node. This section describes how to create a VXLAN tunnel, create VMs, and ping from one VM to another.

Note: Due to a known defect in ODL (https://bugs.opendaylight.org/show_bug.cgi?id=2469), the multi-node setup could not be verified.

Following is a sample local.conf for the OpenDaylight host:

[[local|localrc]]
FORCE=yes

HOST_NAME=<name of this machine>
HOST_IP=<ip of this machine>
HOST_IP_IFACE=<mgmt ip isolated from internet>

PUBLIC_INTERFACE=<isolated IP could be same as HOST_IP_IFACE>
VLAN_INTERFACE=
FLAT_INTERFACE=

ADMIN_PASSWORD=password
MYSQL_PASSWORD=password
DATABASE_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=no-token-password
HORIZON_PASSWORD=password
RABBIT_PASSWORD=password

disable_service n-net
disable_service n-cpu

enable_service q-svc
enable_service q-agt
enable_service q-dhcp
enable_service q-l3
enable_service q-meta
enable_service neutron
enable_service horizon

LOGFILE=$DEST/stack.sh.log
SCREEN_LOGDIR=$DEST/screen
SYSLOG=True
LOGDAYS=1

# ODL start
Q_HOST=$HOST_IP
enable_service odl-server
enable_service odl-compute
ODL_MGR_IP=10.1.1.107
ENABLED_SERVICES+=,n-api,n-crt,n-obj,n-cpu,n-cond,n-sch,n-novnc,n-cauth,n-cauth,nova
ENABLED_SERVICES+=,cinder,c-api,c-vol,c-sch,c-bak


Q_PLUGIN=ml2

Q_AGENT=openvswitch
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch,opendaylight
Q_ML2_PLUGIN_TYPE_DRIVERS=vlan,flat,local
Q_ML2_TENANT_NETWORK_TYPE=vxlan

ENABLE_TENANT_TUNNELS=True
ENABLE_TENANT_VLANS=True

PHYSICAL_NETWORK=physnet1
ML2_VLAN_RANGES=physnet1:1000:1010
OVS_PHYSICAL_BRIDGE=br-

MULTI_HOST=True

[[post-config|$NOVA_CONF]]

# disable nova security groups
[DEFAULT]
firewall_driver=nova.virt.firewall.NoopFirewallDriver
novncproxy_host=0.0.0.0
novncproxy_port=6080

Here is a sample local.conf for the compute node:

[[local|localrc]]

FORCE=yes
MULTI_HOST=True

HOST_NAME=<name of this machine>
HOST_IP=<ip of this machine>
HOST_IP_IFACE=<isolated interface>
SERVICE_HOST_NAME=<name of the controller machine>
SERVICE_HOST=<ip of controller machine>
Q_HOST=$SERVICE_HOST

MYSQL_HOST=$SERVICE_HOST
RABBIT_HOST=$SERVICE_HOST

GLANCE_HOST=$SERVICE_HOST
GLANCE_HOSTPORT=$SERVICE_HOST:9292
KEYSTONE_AUTH_HOST=$SERVICE_HOST
KEYSTONE_SERVICE_HOST=<ip of controller machine>

ADMIN_PASSWORD=password
MYSQL_PASSWORD=password
DATABASE_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=no-token-password
HORIZON_PASSWORD=password
RABBIT_PASSWORD=password

disable_all_services

enable_service rabbit
enable_service n-cpu
enable_service q-agt
enable_service odl-compute

DEST=/opt/stack
LOGFILE=$DEST/stack.sh.log
SCREEN_LOGDIR=$DEST/screen
SYSLOG=True
LOGDAYS=1


ODL_MGR_IP=<ip of controller machine>

Q_PLUGIN=ml2
Q_AGENT=openvswitch
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch,opendaylight
Q_ML2_PLUGIN_TYPE_DRIVERS=vxlan
OVS_NUM_HUGEPAGES=8192
OVS_DATAPATH_TYPE=netdev

OVDK_OVS_GIT_TAG=

ENABLE_TENANT_TUNNELS=True
ENABLE_TENANT_VLANS=True

Q_ML2_TENANT_NETWORK_TYPE=vxlan
ML2_VLAN_RANGES=physnet1:1000:1010
PHYSICAL_NETWORK=physnet1
OVS_PHYSICAL_BRIDGE=br-p1p1

[[post-config|$NOVA_CONF]]
[DEFAULT]
firewall_driver=nova.virt.firewall.NoopFirewallDriver
vnc_enabled=True
vncserver_listen=0.0.0.0
vncserver_proxyclient_address=$HOST_IP

A.1 Create VMs using DevStack Horizon GUI

After starting the OpenDaylight controller as described in Section 6.1, run stack.sh on the controller and compute nodes.

Log in to http://<control node ip address>:8080 to start the Horizon GUI.

Verify that the node shows up in the following GUI

Create a new VXLAN network:

1 Click on the Networks tab

2 Click on the Create Network button

3 Enter the Network name then click Next

Intelreg ONP Server Reference ArchitectureSolutions Guide

46

4 Enter the subnet information then click Next

47

Intelreg ONP Server Reference ArchitectureSolutions Guide

5 Add additional information then click Next

6 Click the Create button

7 Create a VM instance by clicking the Launch Instances button

Intelreg ONP Server Reference ArchitectureSolutions Guide

48

8 Click on the Details tab to enter VM details

49

Intelreg ONP Server Reference ArchitectureSolutions Guide

9 Click on the Networking tab then enter network information

VMs will now be created.

Instead of disabling the OVSDB neutron service by removing the OVSDB neutron bundle file, it is possible to disable the bundle from the OSGi console. However, there does not appear to be a way to make this persistent, so it must be done each time the controller restarts.


Once the controller is up and running, connect to the OSGi console. The ss command displays all of the bundles that are installed and their status; adding a string (or strings) filters the list of bundles. List the OVSDB bundles:

osgi> ss ovs
Framework is launched.

id    State       Bundle
106   ACTIVE      org.opendaylight.ovsdb.northbound_0.5.0
112   ACTIVE      org.opendaylight.ovsdb_0.5.0
262   ACTIVE      org.opendaylight.ovsdb.neutron_0.5.0

Note that there are three OVSDB bundles running (the OVSDB neutron bundle has not been removed in this case)

Disable the OVSDB neutron bundle and then list the OVSDB bundles again

osgi> stop 262
osgi> ss ovs
Framework is launched.

id    State       Bundle
106   ACTIVE      org.opendaylight.ovsdb.northbound_0.5.0
112   ACTIVE      org.opendaylight.ovsdb_0.5.0
262   RESOLVED    org.opendaylight.ovsdb.neutron_0.5.0

Now the OVSDB neutron bundle is in the RESOLVED state, which means that it is not active.


Appendix B BNG as an Appliance

Please download the latest BNG application from https://01.org/intel-data-plane-performance-demonstrators/downloads. More details about how the BNG works can be found at https://01.org/intel-data-plane-performance-demonstrators/quick-overview.



Appendix C Glossary

Acronym Description

ATR Application Targeted Routing

COTS Commercial Off-The-Shelf

DPI Deep Packet Inspection

FCS Frame Check Sequence

GRE Generic Routing Encapsulation

GRO Generic Receive Offload

IOMMU InputOutput Memory Management Unit

Kpps Kilo packets per second

KVM Kernel-based Virtual Machine

LRO Large Receive Offload

MSI Message Signaling Interrupt

MPLS Multi-protocol Label Switching

Mpps Million packets per second

NIC Network Interface Card

pps Packets per second

QAT Quick Assist Technology

QinQ VLAN stacking (802.1ad)

RA Reference Architecture

RSC Receive Side Coalescing

RSS Receive Side Scaling

SP Service Provider

SR-IOV Single root IO Virtualization

TCO Total Cost of Ownership

TSO TCP Segmentation Offload



Appendix D References

Document Name Source

Internet Protocol version 4: http://www.ietf.org/rfc/rfc791.txt

Internet Protocol version 6: http://www.faqs.org/rfc/rfc2460.txt

Intel® 82599 10 Gigabit Ethernet Controller Datasheet: http://www.intel.com/content/www/us/en/ethernet-controllers/82599-10-gbe-controller-datasheet.html

Intel DDIO: https://www-ssl.intel.com/content/www/us/en/io/direct-data-i-o.html

Bandwidth Sharing Fairness: http://www.intel.com/content/www/us/en/network-adapters/10-gigabit-network-adapters/10-gbe-ethernet-flexible-port-partitioning-brief.html

Design Considerations for efficient network applications with Intel® multi-core processor-based systems on Linux:

http://download.intel.com/design/intarch/papers/324176.pdf

OpenFlow with Intel 82599: http://ftp.sunet.se/pub/Linux/distributions/bifrost/seminars/workshop-2011-03-31/Openflow_1103031.pdf

Wu, W., DeMar, P. & Crawford, M. (2012). A Transport-Friendly NIC for Multicore/Multiprocessor Systems.

IEEE Transactions on Parallel and Distributed Systems, vol. 23, no. 4, April 2012. http://lss.fnal.gov/archive/2010/pub/fermilab-pub-10-327-cd.pdf

Why does Flow Director Cause Packet Reordering?: http://arxiv.org/ftp/arxiv/papers/1106/1106.0443.pdf

IA packet processing: http://www.intel.com/p/en_US/embedded/hwsw/technology/packet-processing

High Performance Packet Processing on Cloud Platforms using Linux with Intel® Architecture:

http://networkbuilders.intel.com/docs/network_builders_RA_packet_processing.pdf

Packet Processing Performance of Virtualized Platforms with Linux and Intel® Architecture:

http://networkbuilders.intel.com/docs/network_builders_RA_NFV.pdf

DPDK: http://www.intel.com/go/dpdk

Intel® DPDK Accelerated vSwitch: https://01.org/packet-processing


LEGAL

By using this document in addition to any agreements you have with Intel you accept the terms set forth below

You may not use or facilitate the use of this document in connection with any infringement or other legal analysis concerning Intel products described herein You agree to grant Intel a non-exclusive royalty-free license to any patent claim thereafter drafted which includes subject matter disclosed herein

INFORMATION IN THIS DOCUMENT IS PROVIDED IN CONNECTION WITH INTEL PRODUCTS. NO LICENSE, EXPRESS OR IMPLIED, BY ESTOPPEL OR OTHERWISE, TO ANY INTELLECTUAL PROPERTY RIGHTS IS GRANTED BY THIS DOCUMENT EXCEPT AS PROVIDED IN INTEL'S TERMS AND CONDITIONS OF SALE FOR SUCH PRODUCTS. INTEL ASSUMES NO LIABILITY WHATSOEVER AND INTEL DISCLAIMS ANY EXPRESS OR IMPLIED WARRANTY RELATING TO SALE AND/OR USE OF INTEL PRODUCTS, INCLUDING LIABILITY OR WARRANTIES RELATING TO FITNESS FOR A PARTICULAR PURPOSE, MERCHANTABILITY, OR INFRINGEMENT OF ANY PATENT, COPYRIGHT, OR OTHER INTELLECTUAL PROPERTY RIGHT.

Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors

Performance tests such as SYSmark and MobileMark are measured using specific computer systems components software operations and functions Any change to any of those factors may cause the results to vary You should consult other information and performance tests to assist you in fully evaluating your contemplated purchases including the performance of that product when combined with other products

The products described in this document may contain design defects or errors known as errata which may cause the product to deviate from published specifications Current characterized errata are available on request Contact your local Intel sales office or your distributor to obtain the latest specifications and before placing your product order

Intel technologies may require enabled hardware, specific software, or services activation. Check with your system manufacturer or retailer. Tests document performance of components on a particular test, in specific systems. Differences in hardware, software, or configuration will affect actual performance. Consult other sources of information to evaluate performance as you consider your purchase. For more complete information about performance and benchmark results, visit http://www.intel.com/performance.

All products computer systems dates and figures specified are preliminary based on current expectations and are subject to change without notice Results have been estimated or simulated using internal Intel analysis or architecture simulation or modeling and provided to you for informational purposes Any differences in your system hardware software or configuration may affect your actual performance

No computer system can be absolutely secure Intel does not assume any liability for lost or stolen data or systems or any damages resulting from such losses

Intel does not control or audit third-party web sites referenced in this document You should visit the referenced web site and confirm whether referenced data are accurate

Intel Corporation may have patents or pending patent applications trademarks copyrights or other intellectual property rights that relate to the presented subject matter The furnishing of documents and other materials and information does not provide any license express or implied by estoppel or otherwise to any such patents trademarks copyrights or other intellectual property rights

© 2014 Intel Corporation. All rights reserved. Intel, the Intel logo, Core, Xeon, and others are trademarks of Intel Corporation in the U.S. and/or other countries. Other names and brands may be claimed as the property of others.

Note Currently it is not possible to have more than one virtual network in a multi-compute node setup Although it is possible to have more than one virtual network in a single compute node setup

61 Preparation with OpenStack

611 Deploying Virtual Machines

6111 Default Settings

OpenStack comes with the following default settings

bull Tenant (Project) admin demo

bull Network

mdash Private network (virtual network) 1000024

mdash Public network (external network) 172244024

bull Image cirros-031-x86_64

bull Flavor nano micro tiny small medium large xlarge

To deploy new instances (VMs) with different setups (such as a different VM image flavor or network) users must create their own See below for details of how to create them

To access the OpenStack dashboard use a web browser (Firefox Internet Explorer or others) and the controllers IP address (management network) For example

http1011121

Login information is defined in the localconf file In the examples that follow password is the password for both admin and demo users

Intelreg ONP Server Reference ArchitectureSolutions Guide

28

6112 Customer Settings

The following examples describe how to create a custom VM image flavor and aggregateavailability zone using OpenStack commands The examples assume the IP address of the controller is 1011121

1 Create a credential file admin-cred for admin user The file contains the following lines

export OS_USERNAME=adminexport OS_TENANT_NAME=adminexport OS_PASSWORD=passwordexport OS_AUTH_URL=http101112135357v20

2 Source admin-cred to the shell environment for actions of creating glance image aggregateavailability zone and flavor

source admin-cred

3 Create an OpenStack glance image A VM image file should be ready in a location accessible by OpenStack

glance image-create --name ltimage-name-to-creategt --is-public=true --container-format=bare --disk-format=ltformatgt --file=ltimage-file-path-namegt

The following example shows the image file fedora20-x86_64-basicqcow2 is located in a NFS share and mounted at mntnfsopenstackimages to the controller host The following command creates a glance image named fedora-basic with qcow2 format for public use (such as any tenant can use this glance image)

glance image-create --name fedora-basic --is-public=true --container-format=bare --disk-format=qcow2 --file=mntnfsopenstackimagesfedora20-x86_64-basicqcow2

4 Create host aggregate and availability zone

First find out the available hypervisors and then use the information for creating aggregateavailability zone

nova hypervisor-listnova aggregate-create ltaggregate-namegt ltzone-namegtnova aggregate-add-host ltaggregate-namegt lthypervisor-namegt

The following example creates an aggregate named aggr-g06 with one availability zone named zone-g06 and the aggregate contains one hypervisor named sdnlab-g06

nova aggregate-create aggr-g06 zone-g06nova aggregate-add-host aggr-g06 sdnlab-g06

5 Create flavor Flavor is a virtual hardware configuration for the VMs it defines the number of virtual CPUs size of virtual memory and disk space among others

The following command creates a flavor named onps-flavor with an ID of 1001 1024 Mb virtual memory 4 Gb virtual disk space and 1 virtual CPU

nova flavor-create onps-flavor 1001 1024 4 1

29

Intelreg ONP Server Reference ArchitectureSolutions Guide

6113 Example mdash VM Deployment

The following example describes how to use a customer VM image flavor and aggregate to launch a VM for a demo Tenant using OpenStack commands Again the example assumes the IP address of the controller is 1011121

1 Create a credential file demo-cred for a demo user The file contains the following lines

export OS_USERNAME=demoexport OS_TENANT_NAME=demoexport OS_PASSWORD=passwordexport OS_AUTH_URL=http101112135357v20

2 Source demo-cred to the shell environment for actions of creating tenant network and instance (VM)

source demo-cred

3 Create network for tenant demo Take the following steps

a Get tenant demo

keystone tenant-list | grep -Fw demo

The following creates a network with a name of net-demo for tenant with ID 10618268adb64f17b266fd8fb83c960d

neutron net-create --tenant-id 10618268adb64f17b266fd8fb83c960d net-demo

b Create subnet

neutron subnet-create --tenant-id ltdemo-tenant-idgt --name ltsubnet_namegt ltnetwork-namegt ltnet-ip-rangegt

The following creates a subnet with a name of sub-demo and CIDR address 1921682024for network net-demo

neutron subnet-create --tenant-id 10618268adb64f17b266fd8fb83c960d --name sub-demo net-demo 1921682024

4 Create instance (VM) for tenant demo Take the following steps

a Get the name andor ID of the image flavor and availability zone to be used for creating instance

glance image-listnova flavor-listnova aggregate-listneutron net-list

b Launch an instance (VM) using information obtained from previous step

nova boot --image ltimage-idgt --flavor ltflavor-idgt --availability-zone ltzone-namegt --nic net-id=ltnetwork-idgt ltinstance-namegt

c The new VM should be up and running in a few minutes

5 Log into the OpenStack dashboard using the demo user credential click Instances under Project in the left pane the new VM should show in the right pane Click instance name to open Instance Details view then click Console in the top menu to access the VM as follows

Intelreg ONP Server Reference ArchitectureSolutions Guide

30

6114 Local vIPS

Configuration

1 OpenStack brings up the VMs and connects them to the vSwitch

2 IP addresses of the VMs get configured using the DHCP server VM1 belongs to one subnet and VM3 to a different one VM2 has ports on both subnets

3 Flows get programmed to the vSwitch by the OpenDaylight controller (Section 62)

Data Path (Numbers Matching Red Circles)

1 VM1 sends a flow to VM3 through the vSwitch

2 The vSwitch forwards the flow to the first vPort of VM2 (active IPS)

3 The IPS receives the flow inspects it and (if not malicious) sends it out through its second vPort

4 The vSwitch forwards it to VM3

Figure 6-1 Local vIPS

31

Intelreg ONP Server Reference ArchitectureSolutions Guide

6115 Remote vIPS

Configuration

1 OpenStack brings up the VMs and connects them to the vSwitch

2 The IP addresses of the VMs get configured using the DHCP server

Data Path (Numbers Matching Red Circles)

1 VM1 sends a flow to VM3 through the vSwitch inside compute node 1

2 The vSwitch forwards the flow out of the first 82599 port to the first 82599 port of compute node 2

3 The vSwitch of compute node 2 forwards the flow to the first port of the vHost where the traffic gets consumed by VM1

4 The IPS receives the flow inspects it and (provided it is not malicious) sends it out through its second port of the vHost into the vSwitch of compute node 2

5 The vSwitch forwards the flow out of the second 82599 port of compute node 2 into the second port of the 82599 in compute node 1

6 The vSwitch of compute node 1 forwards the flow into the port of the vHost of VM3 where the flow gets terminated

Figure 6-2 Remote iVPS

Intelreg ONP Server Reference ArchitectureSolutions Guide

32

612 Non-uniform Memory Access (Numa) Placement and SR-IOV Pass-through for OpenStack

NUMA was implemented as a new feature in the OpenStack Juno release NUMA placement enables an OpenStack administrator to ping particular NUMA nodes for guest systems optimization With a SR-IOV enabled network interface card each SR-IOV port is associated with a Virtual Function (VF) OpenStack SR-IOV pass-through enables a guest access to a VF directly

6121 Prepare Compute Node for SR-IOV Pass-through

To enable the previous features follow these steps to configure compute node

1 The server hardware support IOMMU or Intel VT-d To check whether IOMMU is supported run the command and the output should show IOMMU entries

dmesg | grep -e IOMMU

Note IOMMU cab be enableddisabled through a BIOS setting under Advanced and then Processor

2 Enable kernel IOMMU in grub For Fedora 20 run commands

sed -i srhgb quietrhgb quite intel_iommu=ong etcdefaultgrubgrub2-mkconfig -o bootgrub2grubcfg

3 Install necessary packages

yum install -y yajl-devel device-mapper-devel libpciaccess-devel libnl-devel dbus-devel numactl-devel python-devel

4 Install Libvirt to v128 or newer The following example uses v129

systemctl stop libvirtd

yum remove libvirtyum remove libvirtd

wget httplibvirtorgsourceslibvirt-129targztar zxvf libvirt-129targz

cd libvirt-129autogensh --system --with-dbusmakemake install

systemctl start libvirtd

Make sure libvirtd is running v129

libvirtd --version

5 Install libvirt-python Example below uses v129 to match libvirt version

yum remove libvirt-python

wget httpspypipythonorgpackagessourcellibvirt-pythonlibvirt-python-129targz tar zxvf libvirt-python-129targz

cd libvirt-python-129 python setuppy instal

33

Intelreg ONP Server Reference ArchitectureSolutions Guide

6 Modify etclibvirtqemuconf add

devvfiovfio

to

cgroup_device_acl list

An example follows

cgroup_device_acl = [devnull devfull devzerodevrandom devurandomdevptmx devkvm devkqemudevrtc devhpet devnettundevvfiovfio]

7 Enable the SR-IOV virtual function for an 82599 interface The following example enables 2 VFs for interface p1p1

echo 2 gt sysclassnetp1p1devicesriov_numvfs

To check that virtual functions are enabled

lspci -nn | grep 82599

The screen output should display the physical function and two virtual functions

6122 Devstack Configurations

In the following text the example uses a controller with IP address 1011121 and compute 1011124 PCI device vendor ID (8086) and product ID of the 82599 can be obtained from output (10fb for physical function and 10ed for VF)

lspci -nn | grep 82599

On Controller node

1 Edit Controller localconf Note that the same localconf file of Section 5213 is used here but adding the following

Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitchsriovnicswitch

[[post-config|$NOVA_CONF]][DEFAULT]scheduler_default_filters=RamFilterComputeFilterAvailabilityZoneFilterComputeCapabilitiesFilterImagePropertiesFilterPciPassthroughFilterNUMATopologyFilter pci_alias=namenianticproduct_id10edvendor_id8086

[[post-config|$Q_PLUGIN_CONF_FILE]][ml2_sriov]supported_pci_vendor_devs = 808610fb 808610ed

2 Run stacksh

On Compute node

1 Edit optstacknovarequirementstxt add ldquolibvirt-pythongt=128rdquo

echo libvirt-pythongt=128 gtgt optstacknovarequirementstxt

2 Edit Compute localconf for accelerated OVS Note that the same localconf file of Section 5311 is used here

Intelreg ONP Server Reference ArchitectureSolutions Guide

34

3 Adding the following

[[post-config|$NOVA_CONF]][DEFAULT]pci_passthrough_whitelist=address000008000vendor_id8086physical_networkphysnet1

pci_passthrough_whitelist=address000008100vendor_id8086physical_networkphysnet1

pci_passthrough_whitelist=address000008102vendor_id8086physical_networkphysnet1

4 Removing (or comment out) the following Note that currently SR-IOV pass-through is only supported with a standard OVS)

OVS_NUM_HUGEPAGES=8192OVS_DATAPATH_TYPE=netdev OVS_GIT_TAG=b35839f3855e3b812709c6ad1c9278f498aa9935

Run stacksh for both controller and compute nodes to complete the Devstack installation

6123 Create VM with Numa Placement and SR-IOV

1 After stacking is successful on both the controller and compute nodes verify the PCI pass-through device(s) are in the OpenStack deatbase

mysql -uroot -ppassword -h 1011121 nova -e select from pci_devices

2 The output should show entry(ies) of PCI device(s) similar to the following

| 2014-11-18 194114 | NULL | NULL | 0 | 1 | 3 | 000008100 | 10ed | 8086 | type-VF | pci_0000_08_10_0 | label_8086_10ed | available | phys_function 000008000 | NULL | NULL | 0 |

3 Next to create a flavor for example

nova flavor-create numa-flavor 1001 1024 4 1

where

flavor name = numa-flavorid = 1001virtual memory = 1024 Mbvirtual disk size = 4Gbnumber of virtual CPU = 1

4 Modify flavor for numa placement with PCI pass-through

nova flavor-key 1001 set pci_passthroughalias=niantic1 hwnuma_nodes=1 hwnuma_cpus0=0 hwnuma_mem0=1024

5 To show detailed information of the flavor

nova flavor-show 1001

6 Create a VM numa-vm1 with the flavor numa-flavor under the default project demo Note that the following example assumes a image fedora-basic and an availability zone zone-04 are already in place (see Section 6112) and the private is the default network for demo project

nova boot --image ltimage-idgt --flavor ltflavor-idgt --availability-zone ltzone-namegt--nic ltnetwork-idgt numa-vm1

where numa-vm1 is the name of instance of the VM to be booted

35

Intelreg ONP Server Reference ArchitectureSolutions Guide

Access the VM from the OpenStack Horizon the new VM shows two virtual network interfaces The interface with a SR-IOV VF should show a name of ensX where X is a numerical number For example ens5 If a DHCP server is available for the physical interface (p1p1 in this example) the VF gets an IP address automatically otherwise users can assign an IP address to the interface the same way as a standard network interface

To verify network connectivity through a VF users can set up two compute hosts and create a VM on each node After obtaining IP addresses the VMs should communicate with each other as with a normal network

62 Using OpenDaylightThis section describes how to download install and setup a OpenDaylight Controller

621 Preparing the OpenDaylightController1 Download the pre-built OpenDaylight Helium-SR1 distribution

wget httpnexusopendaylightorgcontentrepositoriesopendaylightreleaseorgopendaylightintegrationdistribution-karaf021-Helium-SR1distribution-karaf-021-Helium-SR1targz

2 Extract the archive and cd into it

tar xf distribution-karaf-021-Helium-SR1targzcd distribution-karaf-021-Helium-SR1

3 Use the binkaraf executable start the Karaf shell

Intelreg ONP Server Reference ArchitectureSolutions Guide

36

4 Install the required features

Karaf might take a long time to start or feature Install might fail if the host does not have network access Yoursquoll need to setup the appropriate proxy settings

63 Border Network GatewayThis section describes how to install and run a Border Network Gateway on a compute node that is prepared as described in Section 51 and Section 53 The example interface names from these sections have been maintained in this section too Also for simplicity the BNG is using the handle_none configuration mode which makes it work as a L2 forwarding engine The BNG is more complex than this and users who are interested to explore more of its capabilities should read https01orgintel-data-plane-performance-demonstratorsquick-overview

The setup to test the functionality of the vBNG follows

37

Intelreg ONP Server Reference ArchitectureSolutions Guide

631 Installation and Configuration Inside the VM1 Execute the following command

yum -y update

2 Disable SELinux

setenforce 0vi etcselinuxconfig

And change so SELINUX=disabled

3 Disable the firewall

systemctl disable firewalldservicereboot

4 Edit grub default configuration

vi etcdefaultgrub

Add hugepages to it

hellip noirqbalance intel_idlemax_cstate=0 processormax_cstate=0 ipv6disable=1 default_hugepagesz=1G hugepagesz=1G hugepages=2 isolcpus=1234

5 Rebuild grub config and reboot the system

grub2-mkconfig -o bootgrub2grubcfgreboot

6 Verify that hugepages are available in the VM

cat procmeminfoHugePages_Total2HugePages_Free2Hugepagesize1048576 kB

7 Add the following to the end of ~bashrc file

# ---------------------------------------------
export RTE_SDK=/root/dpdk
export RTE_TARGET=x86_64-native-linuxapp-gcc
export OVS_DIR=/root/ovs

export RTE_UNBIND=$RTE_SDK/tools/dpdk_nic_bind.py
export DPDK_DIR=$RTE_SDK
export DPDK_BUILD=$DPDK_DIR/$RTE_TARGET
# ---------------------------------------------

8. Log out and back in, or source that file:

source ~/.bashrc

9. Install DPDK (a quick sanity check for this step is sketched after step 16):

git clone http://dpdk.org/git/dpdk
cd dpdk
git checkout v1.7.1
make install T=$RTE_TARGET
modprobe uio
insmod $RTE_SDK/$RTE_TARGET/kmod/igb_uio.ko


10. Check the PCI addresses of the 82599 cards:

lspci | grep Network
00:04.0 Ethernet controller: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection (rev 01)
00:05.0 Ethernet controller: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection (rev 01)
00:06.0 Ethernet controller: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection (rev 01)
00:07.0 Ethernet controller: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection (rev 01)

11. Make sure that the correct PCI addresses are listed in the script bind_to_igb_uio.sh (see the binding sketch after step 16).

12. Download the BNG package:

wget https://01.org/sites/default/files/downloads/intel-data-plane-performance-demonstrators/dppd-bng-v013.zip

13. Extract the DPPD BNG sources:

unzip dppd-bng-v013.zip

14. Build the BNG DPPD application:

yum -y install ncurses-devel
cd dppd-BNG-v013
make

15. Refer to Section 6.3.3, "Extra Preparations on the Compute Node," before running the BNG application in the VM inside the compute node.

16. Make sure that the application starts:

./build/dppd -f config/handle_none.cfg

The handle_none configuration passes all traffic straight through between the ports, which is essentially similar to the L2 forwarding test. The config directory contains additional, more complex BNG configurations and Pktgen scripts. Additional BNG-specific workloads can be found in the dppd-BNG-v013/pktgen-scripts directory.
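Two optional sanity checks for steps 9 and 11 follow. This is a minimal sketch; the commands only read or change driver binding state, and the PCI addresses are the example values from step 10, which will differ on other systems.

# Step 9 check: igb_uio is loaded and the hugepages from step 4 are visible
lsmod | grep igb_uio
grep Huge /proc/meminfo

# Step 11 check: binding the four 82599 ports is equivalent to calling the DPDK bind tool directly
$RTE_UNBIND -b igb_uio 00:04.0 00:05.0 00:06.0 00:07.0
$RTE_UNBIND --status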

Following is a sample graphic of the BNG running in a VM with 2 ports


Exit the application by pressing ESC or CTRL-C

Refer to Section 6.3.2 regarding installation and running of the software traffic generator.

For a sanity check, users can use the Pktgen wrapper script onps_pktgen-64bytes-UDP-2ports.sh for running Pktgen (on its dedicated server) in order to test the handle_none throughput for two physical and two virtual ports. You'll need to update the PKTGEN_DIR variable at the top of the file to point to the right directory, which, referring to Section 6.3.2, is the following:

PKTGEN_DIR=/home/stack/git/Pktgen-DPDK/pktgen-64bytes.sh

6.3.2 Installation and Configuration of the Back-to-Back Host (Packet Generator)

The back-to-back host can be any Intel® Xeon® processor-based system, or it can be any compute node that has been prepared using the instructions in Section 5.1 and Section 5.3. For simplicity, Intel assumes the latter is the case. Also assume that the git directory for the stack user is /home/stack/git.

1. In the git directory, get the source from GitHub:

git clone https://github.com/Pktgen/Pktgen-DPDK.git
cd Pktgen-DPDK

2. An extra package must be installed for Pktgen to compile correctly:

yum -y install libpcap-devel

Pktgen comes with its own distribution of the DPDK sources. This bundled version of DPDK must be used. Note that it contains some Wind River-specific helper libraries that Pktgen depends on and that are not in the default DPDK distribution.

3. The $RTE_TARGET variable must be set to a specific value; otherwise, these libraries will not build.

cd
vi .bashrc

Add the following three lines to the end:

export RTE_SDK=$HOME/Pktgen-DPDK/dpdk
export RTE_TARGET=x86_64-pktgen-linuxapp-gcc
export PKTGEN_DIR=$HOME/Pktgen-DPDK

4. Log out and back in, or execute the following command:

source ~/.bashrc

5. Build the basic DPDK libraries and extra helpers:

cd $RTE_SDK
make install T=$RTE_TARGET

6. Build Pktgen:

cd examples/pktgen
make

7. Adapt the dpdk_nic_bind.py script usage to the actual NICs in use so that both interfaces are bound to igb_uio and DPDK can use them. The following command shows the current binding status (a binding sketch appears after step 9):

./tools/dpdk_nic_bind.py --status

8. Use onps_pktgen-64-bytes-UDP-2ports.sh from onps_server_1_2.tar.gz.


9. Now run the script as root, after the compute node has been set up as in Section 6.3.3, the BNG VM has been prepared as in Section 6.3.1, and the BNG has been started inside the VM.
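As an illustration of the binding referred to in step 7 (a sketch; the PCI addresses reported by --status will differ from system to system):

# Take the two interfaces away from the kernel driver and hand them to igb_uio
./tools/dpdk_nic_bind.py -b igb_uio 0000:06:00.0 0000:06:00.1

# Both should now appear under "Network devices using DPDK-compatible driver"
./tools/dpdk_nic_bind.py --status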

6.3.3 Extra Preparations on the Compute Node

1. Do the following as the stack user:

cd /home/stack/devstack
vi local.conf

2. Comment out the following:

PHYSICAL_NETWORK=physnet1
OVS_PHYSICAL_BRIDGE=br-p1p1

At the same time, add the following line right below the newly commented-out ones:

OVS_BRIDGE_MAPPINGS=default:br-p1p1,physnet1:br-p1p2

3. Run the following again as the stack user:

./unstack.sh
./stack.sh

This causes both physical interfaces to come up and get bound to DPDK. A bridge is also created on top of each of these interfaces:

ovs-vsctl show
b52bd3ed-0f6c-45b9-ace1-846d901bed64
    Bridge br-p1p1
        Port br-p1p1
            Interface br-p1p1
                type: internal
        Port p1p1
            Interface p1p1
                type: dpdkphy
                options: {port=0}
        Port phy-br-p1p1
            Interface phy-br-p1p1
                type: patch
                options: {peer=int-br-p1p1}
    Bridge br-int
        fail_mode: secure
        Port int-br-p1p2
            Interface int-br-p1p2
                type: patch
                options: {peer=phy-br-p1p2}
        Port int-br-p1p1
            Interface int-br-p1p1
                type: patch
                options: {peer=phy-br-p1p1}
        Port br-int
            Interface br-int
                type: internal
    Bridge br-p1p2
        Port phy-br-p1p2
            Interface phy-br-p1p2
                type: patch
                options: {peer=int-br-p1p2}
        Port p1p2
            Interface p1p2
                type: dpdkphy
                options: {port=1}
        Port br-p1p2
            Interface br-p1p2
                type: internal

4. Move the p1p2 physical port under the same bridge as p1p1:

ovs-vsctl del-port p1p2
ovs-vsctl add-port br-p1p1 p1p2 -- set Interface p1p2 type=dpdkphy option:port=1

5. Stop the OpenStack agent in the DevStack screen session:

./rejoin-stack.sh
ctrl-a 1
ctrl-c
ctrl-a d

6. Add the dpdkvhost interfaces for the VM (a verification sketch appears after step 10):

ovs-vsctl --no-wait add-port br-p1p1 port3 -- set Interface port3 type=dpdkvhost ofport_request=3
ovs-vsctl --no-wait add-port br-p1p1 port4 -- set Interface port4 type=dpdkvhost ofport_request=4

7. Find out the OpenFlow port numbers of the interfaces on the bridge:

ovs-ofctl show br-p1p1

The output should be similar to the following. Note the number to the left of each interface name; it is the OpenFlow port number used in the flow rules below.

OFPT_FEATURES_REPLY (xid=0x2): dpid:0000286031010000
n_tables:254, n_buffers:256
capabilities: FLOW_STATS TABLE_STATS QUEUE_STATS ARP_MATCH_IP
actions: OUTPUT SET_VLAN_VID SET_VLAN_PCP STRIP_VLAN SET_DL_SRC SET_DL_DST SET_NW_SRC SET_NW_DST SET_TP_SRC SET_TP_DST
 1(phy-br-p1p1): addr:9e:ae:92:25:3c:c1
     config:     0
     state:      0
     speed: 0 Mbps now, 0 Mbps max
 2(p1p2): addr:9e:ae:92:25:3c:c1
     config:     0
     state:      0
     speed: 0 Mbps now, 0 Mbps max
 3(port3): addr:9e:ae:92:25:3c:c1
     config:     0
     state:      0
     speed: 0 Mbps now, 0 Mbps max
 4(port4): addr:49:04:ff:7f:00:00
     config:     0
     state:      0
     speed: 0 Mbps now, 0 Mbps max
 16(p1p1): addr:49:04:ff:7f:00:00
     config:     0
     state:      0
     speed: 0 Mbps now, 0 Mbps max
 LOCAL(br-p1p1): addr:9e:ae:92:25:3c:c1
     config:     0
     state:      0
     speed: 0 Mbps now, 0 Mbps max
OFPT_GET_CONFIG_REPLY (xid=0x4): frags=normal miss_send_len=0

8. Clean up the flow table of the bridge:

ovs-ofctl del-flows br-p1p1

9. Program the flows so that each physical interface forwards packets to a dpdkvhost interface and vice versa (see the verification sketch after step 10):

ovs-ofctl add-flow br-p1p1 in_port=16,dl_type=0x0800,idle_timeout=0,action=output:3
ovs-ofctl add-flow br-p1p1 in_port=3,dl_type=0x0800,idle_timeout=0,action=output:16
ovs-ofctl add-flow br-p1p1 in_port=4,dl_type=0x0800,idle_timeout=0,action=output:2
ovs-ofctl add-flow br-p1p1 in_port=2,dl_type=0x0800,idle_timeout=0,action=output:4


10. Users can now spawn their vBNG:

qemu-kvm -cpu host -enable-kvm -m 4096 -smp 4,cores=4,threads=1,sockets=1 -name VM1 \
  -hda <path to the VM image file> -mem-path /dev/hugepages -mem-prealloc -vnc :2 -daemonize \
  -net nic,model=virtio,macaddr=00:1e:77:68:09:fd \
  -net tap,ifname=tap1,script=no,downscript=no \
  -netdev type=tap,id=net1,script=no,downscript=no,ifname=port3,vhost=on \
  -device virtio-net-pci,netdev=net1,mac=00:00:01:00:00:01,csum=off,gso=off,guest_tso4=off,guest_tso6=off,guest_ecn=off \
  -netdev type=tap,id=net2,script=no,downscript=no,ifname=port4,vhost=on \
  -device virtio-net-pci,netdev=net2,mac=00:00:01:00:00:02,csum=off,gso=off,guest_tso4=off,guest_tso6=off,guest_ecn=off
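Two optional checks for steps 6 and 9 follow (a sketch; both commands only read the bridge state):

# Step 6 check: the vhost ports were created on the bridge
ovs-vsctl list-ports br-p1p1
# The list should now include p1p1, p1p2, phy-br-p1p1, port3, and port4

# Step 9 check: the four rules were installed (counters stay at zero until traffic flows)
ovs-ofctl dump-flows br-p1p1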


Appendix A Additional OpenDaylight Information

This section describes how OpenDaylight can be used in a multi-node setup. Two hosts are used: one runs OpenDaylight, the OpenStack controller + compute services, and OVS; the second host is the compute node. This section describes how to create a VXLAN tunnel, create VMs, and ping from one VM to another.

Note: Due to a known defect in ODL (https://bugs.opendaylight.org/show_bug.cgi?id=2469), the multi-node setup could not be verified.

Following is a sample local.conf for the OpenDaylight host:

[[local|localrc]]
FORCE=yes

HOST_NAME=<name of this machine>
HOST_IP=<ip of this machine>
HOST_IP_IFACE=<mgmt ip isolated from internet>

PUBLIC_INTERFACE=<isolated IP could be same as HOST_IP_IFACE>
VLAN_INTERFACE=
FLAT_INTERFACE=

ADMIN_PASSWORD=password
MYSQL_PASSWORD=password
DATABASE_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=no-token-password
HORIZON_PASSWORD=password
RABBIT_PASSWORD=password

disable_service n-net
disable_service n-cpu

enable_service q-svc
enable_service q-agt
enable_service q-dhcp
enable_service q-l3
enable_service q-meta
enable_service neutron
enable_service horizon

LOGFILE=$DEST/stack.sh.log
SCREEN_LOGDIR=$DEST/screen
SYSLOG=True
LOGDAYS=1

# ODL start
Q_HOST=$HOST_IP
enable_service odl-server
enable_service odl-compute
ODL_MGR_IP=10.11.10.7
ENABLED_SERVICES+=,n-api,n-crt,n-obj,n-cpu,n-cond,n-sch,n-novnc,n-cauth,n-cauth,nova
ENABLED_SERVICES+=,cinder,c-api,c-vol,c-sch,c-bak


Q_PLUGIN=ml2

Q_AGENT=openvswitch
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch,opendaylight
Q_ML2_PLUGIN_TYPE_DRIVERS=vlan,flat,local
Q_ML2_TENANT_NETWORK_TYPE=vxlan

ENABLE_TENANT_TUNNELS=True
ENABLE_TENANT_VLANS=True

PHYSICAL_NETWORK=physnet1
ML2_VLAN_RANGES=physnet1:1000:1010
OVS_PHYSICAL_BRIDGE=br-

MULTI_HOST=True

[[post-config|$NOVA_CONF]]

# disable nova security groups
[DEFAULT]
firewall_driver=nova.virt.firewall.NoopFirewallDriver
novncproxy_host=0.0.0.0
novncproxy_port=6080

Here is a sample local.conf for the compute node:

[[local|localrc]]

FORCE=yes
MULTI_HOST=True

HOST_NAME=<name of this machine>
HOST_IP=<ip of this machine>
HOST_IP_IFACE=<isolated interface>
SERVICE_HOST_NAME=<name of the controller machine>
SERVICE_HOST=<ip of controller machine>
Q_HOST=$SERVICE_HOST

MYSQL_HOST=$SERVICE_HOST
RABBIT_HOST=$SERVICE_HOST

GLANCE_HOST=$SERVICE_HOST
GLANCE_HOSTPORT=$SERVICE_HOST:9292
KEYSTONE_AUTH_HOST=$SERVICE_HOST
KEYSTONE_SERVICE_HOST=<ip of controller machine>

ADMIN_PASSWORD=password
MYSQL_PASSWORD=password
DATABASE_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=no-token-password
HORIZON_PASSWORD=password
RABBIT_PASSWORD=password

disable_all_services

enable_service rabbit
enable_service n-cpu
enable_service q-agt
enable_service odl-compute

DEST=/opt/stack
LOGFILE=$DEST/stack.sh.log
SCREEN_LOGDIR=$DEST/screen
SYSLOG=True
LOGDAYS=1


ODL_MGR_IP=<ip of controller machine>

Q_PLUGIN=ml2
Q_AGENT=openvswitch
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch,opendaylight
Q_ML2_PLUGIN_TYPE_DRIVERS=vxlan
OVS_NUM_HUGEPAGES=8192
OVS_DATAPATH_TYPE=netdev

OVDK_OVS_GIT_TAG=

ENABLE_TENANT_TUNNELS=True
ENABLE_TENANT_VLANS=True

Q_ML2_TENANT_NETWORK_TYPE=vxlan
ML2_VLAN_RANGES=physnet1:1000:1010
PHYSICAL_NETWORK=physnet1
OVS_PHYSICAL_BRIDGE=br-p1p1

[[post-config|$NOVA_CONF]]
[DEFAULT]
firewall_driver=nova.virt.firewall.NoopFirewallDriver
vnc_enabled=True
vncserver_listen=0.0.0.0
vncserver_proxyclient_address=$HOST_IP
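Once stack.sh completes on both nodes, one quick way to confirm that OVS on the compute node is talking to the OpenDaylight controller is to look for the manager and controller entries (a sketch; 6640 and 6633 are the default OVSDB and OpenFlow ports, and the address corresponds to the ODL_MGR_IP value):

ovs-vsctl show
# Expect output containing lines similar to:
#   Manager "tcp:<ip of controller machine>:6640"
#       is_connected: true
#   Controller "tcp:<ip of controller machine>:6633"
#       is_connected: true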

A.1 Create VMs using DevStack Horizon GUI

After starting the OpenDaylight controller as described in Section 6.1, run stack.sh on the controller and compute nodes.

Log in to http://<control node ip address>:8080 to start the Horizon GUI.

Verify that the node shows up in the following GUI

Create a new VXLAN network:

1. Click on the Networks tab.

2. Click on the Create Network button.

3. Enter the Network name, then click Next.


4. Enter the subnet information, then click Next.


5. Add additional information, then click Next.

6. Click the Create button.

7. Create a VM instance by clicking the Launch Instances button.


8. Click on the Details tab to enter VM details.


9. Click on the Networking tab, then enter network information.

The VMs will now be created.
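The same result can be achieved from the command line (a sketch using the neutron and nova clients; the network name, CIDR, image, and flavor below are illustrative, not values taken from this guide):

# Create the VXLAN tenant network and a subnet
neutron net-create vxlan-demo
neutron subnet-create --name vxlan-demo-sub vxlan-demo 10.100.0.0/24

# Boot an instance attached to it
nova boot --image cirros-0.3.1-x86_64 --flavor m1.tiny --nic net-id=$(neutron net-list | awk '/vxlan-demo/ {print $2}') vm-vxlan-1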

Instead of disabling the OVSDB neutron service by removing the OVSDB neutron bundle file, it is possible to disable the bundle from the OSGi console. However, there does not appear to be a way to make this persistent, so it must be done each time the controller restarts.


Once the controller is up and running, connect to the OSGi console. The ss command displays all of the bundles that are installed and their status; adding a string filters the list of bundles. List the OVSDB bundles:

osgi> ss ovs
Framework is launched.

id      State       Bundle
106     ACTIVE      org.opendaylight.ovsdb.northbound_0.5.0
112     ACTIVE      org.opendaylight.ovsdb_0.5.0
262     ACTIVE      org.opendaylight.ovsdb.neutron_0.5.0

Note that there are three OVSDB bundles running (the OVSDB neutron bundle has not been removed in this case)

Disable the OVSDB neutron bundle, and then list the OVSDB bundles again:

osgi> stop 262
osgi> ss ovs
Framework is launched.

id      State       Bundle
106     ACTIVE      org.opendaylight.ovsdb.northbound_0.5.0
112     ACTIVE      org.opendaylight.ovsdb_0.5.0
262     RESOLVED    org.opendaylight.ovsdb.neutron_0.5.0

Now the OVSDB neutron bundle is in the RESOLVED state, which means that it is not active.
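If the bundle needs to be re-enabled later in the same session, it can be started again from the same console (a sketch; 262 is the bundle ID shown above):

osgi> start 262
osgi> ss ovs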


Appendix B BNG as an Appliance

Please download the latest BNG application from https://01.org/intel-data-plane-performance-demonstrators/downloads. More details about how the BNG works can be found at https://01.org/intel-data-plane-performance-demonstrators/quick-overview.


Appendix C Glossary

Acronym Description

ATR Application Targeted Routing

COTS Commercial Off‐The-Shelf

DPI Deep Packet Inspection

FCS Frame Check Sequence

GRE Generic Routing Encapsulation

GRO Generic Receive Offload

IOMMU InputOutput Memory Management Unit

Kpps Kilo packets per second

KVM Kernel-based Virtual Machine

LRO Large Receive Offload

MSI Message Signaled Interrupt

MPLS Multi-protocol Label Switching

Mpps Million packets per second

NIC Network Interface Card

pps Packets per second

QAT Quick Assist Technology

QinQ VLAN stacking (8021ad)

RA Reference Architecture

RSC Receive Side Coalescing

RSS Receive Side Scaling

SP Service Provider

SR-IOV Single root IO Virtualization

TCO Total Cost of Ownership

TSO TCP Segmentation Offload


Appendix D References

Document Name Source

Internet Protocol version 4: http://www.ietf.org/rfc/rfc791.txt

Internet Protocol version 6: http://www.faqs.org/rfc/rfc2460.txt

Intel® 82599 10 Gigabit Ethernet Controller Datasheet: http://www.intel.com/content/www/us/en/ethernet-controllers/82599-10-gbe-controller-datasheet.html

Intel DDIO: https://www-ssl.intel.com/content/www/us/en/io/direct-data-i-o.html

Bandwidth Sharing Fairness: http://www.intel.com/content/www/us/en/network-adapters/10-gigabit-network-adapters/10-gbe-ethernet-flexible-port-partitioning-brief.html

Design Considerations for efficient network applications with Intel® multi-core processor-based systems on Linux: http://download.intel.com/design/intarch/papers/324176.pdf

OpenFlow with Intel 82599: http://ftp.sunet.se/pub/Linux/distributions/bifrost/seminars/workshop-2011-03-31/Openflow_1103031.pdf

Wu, W., DeMar, P., & Crawford, M. (2012). A Transport-Friendly NIC for Multicore/Multiprocessor Systems. IEEE Transactions on Parallel and Distributed Systems, vol. 23, no. 4, April 2012: http://lss.fnal.gov/archive/2010/pub/fermilab-pub-10-327-cd.pdf

Why Does Flow Director Cause Packet Reordering?: http://arxiv.org/ftp/arxiv/papers/1106/1106.0443.pdf

IA packet processing: http://www.intel.com/p/en_US/embedded/hwsw/technology/packet-processing

High Performance Packet Processing on Cloud Platforms using Linux with Intel® Architecture: http://networkbuilders.intel.com/docs/network_builders_RA_packet_processing.pdf

Packet Processing Performance of Virtualized Platforms with Linux and Intel® Architecture: http://networkbuilders.intel.com/docs/network_builders_RA_NFV.pdf

DPDK: http://www.intel.com/go/dpdk

Intel® DPDK Accelerated vSwitch: https://01.org/packet-processing


LEGAL

By using this document, in addition to any agreements you have with Intel, you accept the terms set forth below.

You may not use or facilitate the use of this document in connection with any infringement or other legal analysis concerning Intel products described herein. You agree to grant Intel a non-exclusive, royalty-free license to any patent claim thereafter drafted which includes subject matter disclosed herein.

INFORMATION IN THIS DOCUMENT IS PROVIDED IN CONNECTION WITH INTEL PRODUCTS. NO LICENSE, EXPRESS OR IMPLIED, BY ESTOPPEL OR OTHERWISE, TO ANY INTELLECTUAL PROPERTY RIGHTS IS GRANTED BY THIS DOCUMENT. EXCEPT AS PROVIDED IN INTEL'S TERMS AND CONDITIONS OF SALE FOR SUCH PRODUCTS, INTEL ASSUMES NO LIABILITY WHATSOEVER, AND INTEL DISCLAIMS ANY EXPRESS OR IMPLIED WARRANTY RELATING TO SALE AND/OR USE OF INTEL PRODUCTS, INCLUDING LIABILITY OR WARRANTIES RELATING TO FITNESS FOR A PARTICULAR PURPOSE, MERCHANTABILITY, OR INFRINGEMENT OF ANY PATENT, COPYRIGHT OR OTHER INTELLECTUAL PROPERTY RIGHT.

Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors.

Performance tests, such as SYSmark and MobileMark, are measured using specific computer systems, components, software, operations and functions. Any change to any of those factors may cause the results to vary. You should consult other information and performance tests to assist you in fully evaluating your contemplated purchases, including the performance of that product when combined with other products.

The products described in this document may contain design defects or errors known as errata, which may cause the product to deviate from published specifications. Current characterized errata are available on request. Contact your local Intel sales office or your distributor to obtain the latest specifications and before placing your product order.

Intel technologies may require enabled hardware, specific software, or services activation. Check with your system manufacturer or retailer. Tests document performance of components on a particular test, in specific systems. Differences in hardware, software, or configuration will affect actual performance. Consult other sources of information to evaluate performance as you consider your purchase. For more complete information about performance and benchmark results, visit http://www.intel.com/performance.

All products, computer systems, dates and figures specified are preliminary based on current expectations, and are subject to change without notice. Results have been estimated or simulated using internal Intel analysis or architecture simulation or modeling, and provided to you for informational purposes. Any differences in your system hardware, software or configuration may affect your actual performance.

No computer system can be absolutely secure. Intel does not assume any liability for lost or stolen data or systems or any damages resulting from such losses.

Intel does not control or audit third-party web sites referenced in this document. You should visit the referenced web site and confirm whether referenced data are accurate.

Intel Corporation may have patents or pending patent applications, trademarks, copyrights, or other intellectual property rights that relate to the presented subject matter. The furnishing of documents and other materials and information does not provide any license, express or implied, by estoppel or otherwise, to any such patents, trademarks, copyrights, or other intellectual property rights.

© 2014 Intel Corporation. All rights reserved. Intel, the Intel logo, Core, Xeon, and others are trademarks of Intel Corporation in the U.S. and/or other countries. Other names and brands may be claimed as the property of others.


You may not use or facilitate the use of this document in connection with any infringement or other legal analysis concerning Intel products described herein You agree to grant Intel a non-exclusive royalty-free license to any patent claim thereafter drafted which includes subject matter disclosed herein

INFORMATION IN THIS DOCUMENT IS PROVIDED IN CONNECTION WITH INTEL PRODUCTS NO LICENSE EXPRESS OR IMPLIED BY ESTOPPEL OR OTHERWISE TO ANY INTELLECTUAL PROPERTY RIGHTS IS GRANTED BY THIS DOCUMENT EXCEPT AS PROVIDED IN INTELS TERMS AND CONDITIONS OF SALE FOR SUCH PRODUCTS INTEL ASSUMES NO LIABILITY WHATSOEVER AND INTEL DISCLAIMS ANY EXPRESS OR IMPLIED WARRANTY RELATING TO SALE ANDOR USE OF INTEL PRODUCTS INCLUDING LIABILITY OR WARRANTIES RELATING TO FITNESS FOR A PARTICULAR PURPOSE MERCHANTABILITY OR INFRINGEMENT OF ANY PATENT COPYRIGHT OR OTHER INTELLECTUAL PROPERTY RIGHT

Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors

Performance tests such as SYSmark and MobileMark are measured using specific computer systems components software operations and functions Any change to any of those factors may cause the results to vary You should consult other information and performance tests to assist you in fully evaluating your contemplated purchases including the performance of that product when combined with other products

The products described in this document may contain design defects or errors known as errata which may cause the product to deviate from published specifications Current characterized errata are available on request Contact your local Intel sales office or your distributor to obtain the latest specifications and before placing your product order

Intel technologies may require enabled hardware specific software or services activation Check with your system manufacturer or retailer Tests document performance of components on a particular test in specific systems Differences in hardware software or configuration will affect actual performance Consult other sources of information to evaluate performance as you consider your purchase For more complete information about performance and benchmark results visit httpwwwintelcomperformance

All products computer systems dates and figures specified are preliminary based on current expectations and are subject to change without notice Results have been estimated or simulated using internal Intel analysis or architecture simulation or modeling and provided to you for informational purposes Any differences in your system hardware software or configuration may affect your actual performance

No computer system can be absolutely secure Intel does not assume any liability for lost or stolen data or systems or any damages resulting from such losses

Intel does not control or audit third-party web sites referenced in this document You should visit the referenced web site and confirm whether referenced data are accurate

Intel Corporation may have patents or pending patent applications trademarks copyrights or other intellectual property rights that relate to the presented subject matter The furnishing of documents and other materials and information does not provide any license express or implied by estoppel or otherwise to any such patents trademarks copyrights or other intellectual property rights

copy 2014 Intel Corporation All rights reserved Intel the Intel logo Core Xeon and others are trademarks of Intel Corporation in the US andor other countries Other names and brands may be claimed as the property of others

  • Intelreg Open Network Platform Server Reference Architecture (Release 12)
    • Revision History
    • Contents
    • 10 Audience and Purpose
    • 20 Summary
      • 21 Network Services Examples
        • 211 Suricata (Next Generation IDSIPS engine)
        • 212 vBNG (Broadband Network Gateway)
            • 30 Hardware Components
            • 40 Software Versions
              • 41 Obtaining Software Ingredients
                • 50 Installation and Configuration Guide
                  • 51 Instructions Common to Compute and Controller Nodes
                    • 511 BIOS Settings
                    • 512 Operating System Installation and Configuration
                      • 5121 Getting the Fedora 20 DVD
                      • 5122 Fedora 20 Installation
                      • 5123 Additional Packages Installation and Upgrade
                      • 5124 Disable and Enable Services
                          • 52 Controller Node Setup
                            • 521 OpenStack (Juno)
                              • 5211 Network Requirements
                              • 5212 Storage Requirements
                              • 5213 OpenStack Installation Procedures
                                  • 53 Compute Node Setup
                                    • 531 Host Configuration
                                      • 5311 Using DevStack to Deploy vSwitch and OpenStack Components
                                          • 54 vIPS
                                            • 541 Network Configuration for non-vIPS Guests
                                                • 60 Testing the Setup
                                                  • 61 Preparation with OpenStack
                                                    • 611 Deploying Virtual Machines
                                                      • 6111 Default Settings
                                                      • 6112 Customer Settings
                                                      • 6113 Example mdash VM Deployment
                                                      • 6114 Local vIPS
                                                      • 6115 Remote vIPS
                                                        • 612 Non-uniform Memory Access (Numa) Placement and SR-IOV Pass-through for OpenStack
                                                          • 6121 Prepare Compute Node for SR-IOV Pass-through
                                                          • 6122 Devstack Configurations
                                                          • 6123 Create VM with Numa Placement and SR-IOV
                                                              • 62 Using OpenDaylight
                                                                • 621 Preparing the OpenDaylightController
                                                                  • 63 Border Network Gateway
                                                                    • 631 Installation and Configuration Inside the VM
                                                                    • 632 Installation and Configuration of the Back-to-Back Host (Packet Generator)
                                                                    • 633 Extra Preparations on the Compute Node
                                                                        • Appendix A Additional OpenDaylight Information
                                                                          • A1 Create VMs using DevStack Horizon GUI
                                                                            • Appendix B BNG as an Appliance
                                                                            • Appendix C Glossary
                                                                            • Appendix D References
                                                                            • LEGAL
Page 26: Intel Open Network Platform Server Reference Architecture ... · Intel® ONP Server Reference Architecture Solutions Guide 1.0 Audience and Purpose The primary audiences for this

Intelreg ONP Server Reference ArchitectureSolutions Guide

26

LOGFILE=$DEST/stack.sh.log
SCREEN_LOGDIR=$DEST/screen
SYSLOG=True
LOGDAYS=1

Q_AGENT=openvswitch
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch
Q_ML2_PLUGIN_TYPE_DRIVERS=vlan
OVS_NUM_HUGEPAGES=8192
OVS_DATAPATH_TYPE=netdev
OVS_GIT_TAG=b35839f3855e3b812709c6ad1c9278f498aa9935

ENABLE_TENANT_TUNNELS=False
ENABLE_TENANT_VLANS=True
ML2_VLAN_RANGES=physnet1:1000:1010
PHYSICAL_NETWORK=physnet1
OVS_PHYSICAL_BRIDGE=br-p1p1

[[post-config|$NOVA_CONF]]
[DEFAULT]
firewall_driver=nova.virt.firewall.NoopFirewallDriver
vnc_enabled=True
vncserver_listen=0.0.0.0
vncserver_proxyclient_address=$HOST_IP

5.4 vIPS

The vIPS used is Suricata, which should be installed in a VM as an RPM package as previously described. To configure it to run in inline mode (IPS), use the following steps (a consolidated sketch of these steps appears after the list):

1. Turn on IP forwarding:

sysctl -w net.ipv4.ip_forward=1

2. Mangle all traffic from one vPort to the other using a netfilter queue:

iptables -I FORWARD -i eth1 -o eth2 -j NFQUEUE
iptables -I FORWARD -i eth2 -o eth1 -j NFQUEUE

3. Have Suricata run in inline mode using the netfilter queue:

suricata -c /etc/suricata/suricata.yaml -q 0

4. Enable ARP proxying:

echo 1 > /proc/sys/net/ipv4/conf/eth1/proxy_arp
echo 1 > /proc/sys/net/ipv4/conf/eth2/proxy_arp
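The following is a minimal sketch that consolidates steps 1 through 4 into a single script, assuming the two vPorts appear inside the VM as eth1 and eth2, as in the commands above:

#!/bin/bash
# Sketch: place Suricata inline (IPS mode) between eth1 and eth2.
set -e
sysctl -w net.ipv4.ip_forward=1
iptables -I FORWARD -i eth1 -o eth2 -j NFQUEUE
iptables -I FORWARD -i eth2 -o eth1 -j NFQUEUE
echo 1 > /proc/sys/net/ipv4/conf/eth1/proxy_arp
echo 1 > /proc/sys/net/ipv4/conf/eth2/proxy_arp
suricata -c /etc/suricata/suricata.yaml -q 0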

5.4.1 Network Configuration for non-vIPS Guests

1. Turn on IP forwarding:

sysctl -w net.ipv4.ip_forward=1

2. In the source, add the route to the sink:

route add -net 192.168.200.0/24 eth1

3. At the sink, add the route to the source (a quick verification sketch follows the list):

route add -net 192.168.100.0/24 eth1
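A quick verification sketch for the routing above, run from the source VM; 192.168.200.10 is an assumed sink address on the 192.168.200.0/24 subnet, and the first reported hop should be the vIPS VM:

ip route show | grep 192.168     # confirm the static routes are present
traceroute -n 192.168.200.10     # traffic to the sink should pass through the vIPS VM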


6.0 Testing the Setup

This section describes how to bring up the VMs in a compute node, connect them to the virtual network(s), and verify the functionality.

Note: Currently it is not possible to have more than one virtual network in a multi-compute-node setup, although it is possible in a single-compute-node setup.

6.1 Preparation with OpenStack

6.1.1 Deploying Virtual Machines

6.1.1.1 Default Settings

OpenStack comes with the following default settings:

• Tenant (Project): admin, demo

• Network:

  - Private network (virtual network): 10.0.0.0/24

  - Public network (external network): 172.24.4.0/24

• Image: cirros-0.3.1-x86_64

• Flavor: nano, micro, tiny, small, medium, large, xlarge

To deploy new instances (VMs) with different setups (such as a different VM image, flavor, or network), users must create their own. See below for details of how to create them.

To access the OpenStack dashboard, use a web browser (Firefox, Internet Explorer, or others) and the controller's IP address (management network). For example:

http://1011121

Login information is defined in the localconf file. In the examples that follow, password is the password for both the admin and demo users.


6.1.1.2 Custom Settings

The following examples describe how to create a custom VM image, flavor, and aggregate/availability zone using OpenStack commands. The examples assume the IP address of the controller is 1011121. A consolidated sketch of these steps appears after the list.

1. Create a credential file admin-cred for the admin user. The file contains the following lines:

export OS_USERNAME=admin
export OS_TENANT_NAME=admin
export OS_PASSWORD=password
export OS_AUTH_URL=http://1011121:35357/v2.0

2. Source admin-cred into the shell environment before creating the glance image, aggregate/availability zone, and flavor:

source admin-cred

3. Create an OpenStack glance image. A VM image file should be ready in a location accessible by OpenStack:

glance image-create --name <image-name-to-create> --is-public=true --container-format=bare --disk-format=<format> --file=<image-file-path-name>

The following example assumes the image file fedora20-x86_64-basic.qcow2 is located in an NFS share mounted at /mnt/nfs/openstack/images on the controller host. The command creates a glance image named fedora-basic in qcow2 format for public use (that is, any tenant can use this glance image):

glance image-create --name fedora-basic --is-public=true --container-format=bare --disk-format=qcow2 --file=/mnt/nfs/openstack/images/fedora20-x86_64-basic.qcow2

4. Create a host aggregate and availability zone.

First find out the available hypervisors, then use that information to create the aggregate/availability zone:

nova hypervisor-list
nova aggregate-create <aggregate-name> <zone-name>
nova aggregate-add-host <aggregate-name> <hypervisor-name>

The following example creates an aggregate named aggr-g06 with one availability zone named zone-g06; the aggregate contains one hypervisor named sdnlab-g06:

nova aggregate-create aggr-g06 zone-g06
nova aggregate-add-host aggr-g06 sdnlab-g06

5. Create a flavor. A flavor is a virtual hardware configuration for the VMs; it defines the number of virtual CPUs, the size of virtual memory, disk space, and so on.

The following command creates a flavor named onps-flavor with an ID of 1001, 1024 MB of virtual memory, 4 GB of virtual disk space, and 1 virtual CPU:

nova flavor-create onps-flavor 1001 1024 4 1
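A consolidated sketch of steps 1 through 5, using the same example names (fedora-basic, aggr-g06, zone-g06, sdnlab-g06, onps-flavor) and assuming the admin-cred file from step 1 is in the current directory:

#!/bin/bash
# Sketch: create the example image, aggregate/availability zone, and flavor as admin.
set -e
source admin-cred
glance image-create --name fedora-basic --is-public=true --container-format=bare \
    --disk-format=qcow2 --file=/mnt/nfs/openstack/images/fedora20-x86_64-basic.qcow2
nova aggregate-create aggr-g06 zone-g06
nova aggregate-add-host aggr-g06 sdnlab-g06
nova flavor-create onps-flavor 1001 1024 4 1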


6.1.1.3 Example — VM Deployment

The following example describes how to use a custom VM image, flavor, and aggregate to launch a VM for the demo tenant using OpenStack commands. Again, the example assumes the IP address of the controller is 1011121. A consolidated sketch of these steps appears after the list.

1. Create a credential file demo-cred for the demo user. The file contains the following lines:

export OS_USERNAME=demo
export OS_TENANT_NAME=demo
export OS_PASSWORD=password
export OS_AUTH_URL=http://1011121:35357/v2.0

2. Source demo-cred into the shell environment before creating the tenant network and instance (VM):

source demo-cred

3. Create a network for tenant demo. Take the following steps:

a. Get the tenant demo ID:

keystone tenant-list | grep -Fw demo

The following creates a network named net-demo for the tenant with ID 10618268adb64f17b266fd8fb83c960d:

neutron net-create --tenant-id 10618268adb64f17b266fd8fb83c960d net-demo

b. Create a subnet:

neutron subnet-create --tenant-id <demo-tenant-id> --name <subnet_name> <network-name> <net-ip-range>

The following creates a subnet named sub-demo with CIDR 192.168.2.0/24 for network net-demo:

neutron subnet-create --tenant-id 10618268adb64f17b266fd8fb83c960d --name sub-demo net-demo 192.168.2.0/24

4. Create an instance (VM) for tenant demo. Take the following steps:

a. Get the name and/or ID of the image, flavor, and availability zone to be used for creating the instance:

glance image-list
nova flavor-list
nova aggregate-list
neutron net-list

b. Launch an instance (VM) using the information obtained from the previous step:

nova boot --image <image-id> --flavor <flavor-id> --availability-zone <zone-name> --nic net-id=<network-id> <instance-name>

c. The new VM should be up and running in a few minutes.

5. Log into the OpenStack dashboard using the demo user credential and click Instances under Project in the left pane; the new VM should show in the right pane. Click the instance name to open the Instance Details view, then click Console in the top menu to access the VM.
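A consolidated sketch of steps 1 through 4 for the demo tenant; the image, flavor, and zone names reuse the earlier examples, the tenant and network IDs are looked up rather than hard-coded, and the demo-cred file from step 1 is assumed to be in the current directory:

#!/bin/bash
# Sketch: create the demo network/subnet and boot one VM on it.
set -e
source demo-cred
TENANT_ID=$(keystone tenant-list | awk '/ demo / {print $2}')
neutron net-create --tenant-id "$TENANT_ID" net-demo
neutron subnet-create --tenant-id "$TENANT_ID" --name sub-demo net-demo 192.168.2.0/24
NET_ID=$(neutron net-list | awk '/ net-demo / {print $2}')
nova boot --image fedora-basic --flavor onps-flavor --availability-zone zone-g06 \
    --nic net-id="$NET_ID" demo-vm1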


6.1.1.4 Local vIPS

Configuration

1. OpenStack brings up the VMs and connects them to the vSwitch.

2. IP addresses of the VMs get configured using the DHCP server. VM1 belongs to one subnet and VM3 to a different one; VM2 has ports on both subnets.

3. Flows get programmed to the vSwitch by the OpenDaylight controller (Section 6.2).

Data Path (Numbers Matching Red Circles)

1. VM1 sends a flow to VM3 through the vSwitch.

2. The vSwitch forwards the flow to the first vPort of VM2 (active IPS).

3. The IPS receives the flow, inspects it, and (if not malicious) sends it out through its second vPort.

4. The vSwitch forwards it to VM3.

Figure 6-1. Local vIPS


6.1.1.5 Remote vIPS

Configuration

1. OpenStack brings up the VMs and connects them to the vSwitch.

2. The IP addresses of the VMs get configured using the DHCP server.

Data Path (Numbers Matching Red Circles)

1. VM1 sends a flow to VM3 through the vSwitch inside compute node 1.

2. The vSwitch forwards the flow out of the first 82599 port to the first 82599 port of compute node 2.

3. The vSwitch of compute node 2 forwards the flow to the first port of the vHost, where the traffic gets consumed by the vIPS VM.

4. The IPS receives the flow, inspects it, and (provided it is not malicious) sends it out through its second vHost port into the vSwitch of compute node 2.

5. The vSwitch forwards the flow out of the second 82599 port of compute node 2 into the second port of the 82599 in compute node 1.

6. The vSwitch of compute node 1 forwards the flow into the vHost port of VM3, where the flow gets terminated.

Figure 6-2. Remote vIPS


6.1.2 Non-uniform Memory Access (NUMA) Placement and SR-IOV Pass-through for OpenStack

NUMA placement was introduced as a new feature in the OpenStack Juno release. It enables an OpenStack administrator to pin guests to particular NUMA nodes for optimization. With an SR-IOV-enabled network interface card, each SR-IOV port is associated with a virtual function (VF). OpenStack SR-IOV pass-through enables a guest to access a VF directly.

6.1.2.1 Prepare Compute Node for SR-IOV Pass-through

To enable the previous features, follow these steps to configure the compute node (a combined sketch of the VF setup appears after the list):

1. The server hardware must support IOMMU (Intel VT-d). To check whether IOMMU is supported, run the following command; the output should show IOMMU entries:

dmesg | grep -e IOMMU

Note: IOMMU can be enabled/disabled through a BIOS setting under Advanced and then Processor.

2. Enable kernel IOMMU in grub. For Fedora 20, run the commands:

sed -i 's/rhgb quiet/rhgb quiet intel_iommu=on/g' /etc/default/grub
grub2-mkconfig -o /boot/grub2/grub.cfg

3. Install the necessary packages:

yum install -y yajl-devel device-mapper-devel libpciaccess-devel libnl-devel dbus-devel numactl-devel python-devel

4. Install libvirt v1.2.8 or newer. The following example uses v1.2.9:

systemctl stop libvirtd

yum remove libvirt
yum remove libvirtd

wget http://libvirt.org/sources/libvirt-1.2.9.tar.gz
tar zxvf libvirt-1.2.9.tar.gz

cd libvirt-1.2.9
./autogen.sh --system --with-dbus
make
make install

systemctl start libvirtd

Make sure libvirtd is running v1.2.9:

libvirtd --version

5. Install libvirt-python. The example below uses v1.2.9 to match the libvirt version:

yum remove libvirt-python

wget https://pypi.python.org/packages/source/l/libvirt-python/libvirt-python-1.2.9.tar.gz
tar zxvf libvirt-python-1.2.9.tar.gz

cd libvirt-python-1.2.9
python setup.py install


6. Modify /etc/libvirt/qemu.conf by adding:

/dev/vfio/vfio

to the cgroup_device_acl list. An example follows:

cgroup_device_acl = [
    "/dev/null", "/dev/full", "/dev/zero",
    "/dev/random", "/dev/urandom",
    "/dev/ptmx", "/dev/kvm", "/dev/kqemu",
    "/dev/rtc", "/dev/hpet", "/dev/net/tun",
    "/dev/vfio/vfio"
]

7. Enable the SR-IOV virtual functions for an 82599 interface. The following example enables 2 VFs for interface p1p1:

echo 2 > /sys/class/net/p1p1/device/sriov_numvfs

To check that the virtual functions are enabled:

lspci -nn | grep 82599

The screen output should display the physical function and two virtual functions.
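The VF count written through sysfs does not persist across reboots. The following sketch reapplies it and lists the resulting 82599 functions; p1p1 and the VF count of 2 are the assumptions used in step 7, and the script could be run from rc.local or a systemd unit as needed:

#!/bin/bash
# Sketch: re-create two VFs on p1p1 and show the 82599 physical and virtual functions.
IFACE=p1p1
NUM_VFS=2
echo 0 > /sys/class/net/$IFACE/device/sriov_numvfs      # clear any existing VFs first
echo $NUM_VFS > /sys/class/net/$IFACE/device/sriov_numvfs
lspci -nn | grep 82599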

6.1.2.2 Devstack Configurations

In the following text, the example uses a controller with IP address 1011121 and a compute node with 1011124. The PCI device vendor ID (8086) and product IDs of the 82599 can be obtained from the output of the following command (10fb for the physical function and 10ed for a VF):

lspci -nn | grep 82599

On the Controller node:

1. Edit the controller localconf. Note that the same localconf file of Section 5.2.1.3 is used here, adding the following:

Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch,sriovnicswitch

[[post-config|$NOVA_CONF]]
[DEFAULT]
scheduler_default_filters=RamFilter,ComputeFilter,AvailabilityZoneFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,PciPassthroughFilter,NUMATopologyFilter
pci_alias={"name":"niantic","product_id":"10ed","vendor_id":"8086"}

[[post-config|$Q_PLUGIN_CONF_FILE]]
[ml2_sriov]
supported_pci_vendor_devs = 8086:10fb 8086:10ed

2. Run ./stack.sh.

On the Compute node:

1. Edit /opt/stack/nova/requirements.txt and add "libvirt-python>=1.2.8":

echo "libvirt-python>=1.2.8" >> /opt/stack/nova/requirements.txt

2. Edit the compute localconf for accelerated OVS. Note that the same localconf file of Section 5.3.1.1 is used here.


3. Add the following:

[[post-config|$NOVA_CONF]]
[DEFAULT]
pci_passthrough_whitelist={"address":"0000:08:00.0","vendor_id":"8086","physical_network":"physnet1"}
pci_passthrough_whitelist={"address":"0000:08:10.0","vendor_id":"8086","physical_network":"physnet1"}
pci_passthrough_whitelist={"address":"0000:08:10.2","vendor_id":"8086","physical_network":"physnet1"}

4. Remove (or comment out) the following. Note that currently SR-IOV pass-through is only supported with standard OVS:

OVS_NUM_HUGEPAGES=8192
OVS_DATAPATH_TYPE=netdev
OVS_GIT_TAG=b35839f3855e3b812709c6ad1c9278f498aa9935

Run ./stack.sh on both the controller and compute nodes to complete the Devstack installation.

6.1.2.3 Create VM with NUMA Placement and SR-IOV

1. After stacking is successful on both the controller and compute nodes, verify that the PCI pass-through device(s) are in the OpenStack database:

mysql -uroot -ppassword -h 1011121 nova -e 'select * from pci_devices'

2. The output should show entries for the PCI device(s), similar to the following:

| 2014-11-18 19:41:14 | NULL | NULL | 0 | 1 | 3 | 0000:08:10.0 | 10ed | 8086 | type-VF | pci_0000_08_10_0 | label_8086_10ed | available | phys_function: 0000:08:00.0 | NULL | NULL | 0 |

3. Next, create a flavor, for example:

nova flavor-create numa-flavor 1001 1024 4 1

where:

flavor name = numa-flavor
id = 1001
virtual memory = 1024 MB
virtual disk size = 4 GB
number of virtual CPUs = 1

4. Modify the flavor for NUMA placement with PCI pass-through:

nova flavor-key 1001 set "pci_passthrough:alias"="niantic:1" hw:numa_nodes=1 hw:numa_cpus.0=0 hw:numa_mem.0=1024

5. To show detailed information about the flavor:

nova flavor-show 1001

6. Create a VM named numa-vm1 with the flavor numa-flavor under the default project demo. Note that the following example assumes an image fedora-basic and an availability zone zone-04 are already in place (see Section 6.1.1.2) and that private is the default network for the demo project:

nova boot --image <image-id> --flavor <flavor-id> --availability-zone <zone-name> --nic net-id=<network-id> numa-vm1

where numa-vm1 is the name of the VM instance to be booted.


Access the VM from OpenStack Horizon; the new VM shows two virtual network interfaces. The interface backed by an SR-IOV VF should show a name of ensX, where X is a number (for example, ens5). If a DHCP server is available for the physical interface (p1p1 in this example), the VF gets an IP address automatically; otherwise, users can assign an IP address to the interface in the same way as for a standard network interface.

To verify network connectivity through a VF, users can set up two compute hosts and create a VM on each node. After obtaining IP addresses, the VMs should communicate with each other as over a normal network. A short verification sketch follows.
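A minimal sketch of such a check, run inside each VM; the interface name ens5 and the peer address 192.168.2.11 are assumed values that depend on the actual deployment:

ip addr show ens5                # confirm the VF-backed interface came up and has an address
ping -c 3 192.168.2.11           # reach the VM on the other compute host through the VFs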

6.2 Using OpenDaylight

This section describes how to download, install, and set up an OpenDaylight controller.

6.2.1 Preparing the OpenDaylight Controller

1. Download the pre-built OpenDaylight Helium-SR1 distribution:

wget http://nexus.opendaylight.org/content/repositories/opendaylight.release/org/opendaylight/integration/distribution-karaf/0.2.1-Helium-SR1/distribution-karaf-0.2.1-Helium-SR1.tar.gz

2. Extract the archive and cd into it:

tar xf distribution-karaf-0.2.1-Helium-SR1.tar.gz
cd distribution-karaf-0.2.1-Helium-SR1

3. Use the bin/karaf executable to start the Karaf shell.

4. Install the required features.

Karaf might take a long time to start, or the feature installation might fail, if the host does not have network access. You'll need to set up the appropriate proxy settings.
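The guide does not list the exact Karaf features to install. As an illustration only, a Helium-era OVSDB/OpenStack integration typically installed a feature set along the following lines from the Karaf shell; feature names differ between OpenDaylight releases, so treat this list as an assumption to verify against the release documentation:

opendaylight-user@root> feature:install odl-base-all odl-aaa-authn odl-restconf odl-nsf-all odl-adsal-northbound odl-mdsal-apidocs odl-ovsdb-openstack odl-dlux-core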

6.3 Border Network Gateway

This section describes how to install and run a Border Network Gateway (BNG) on a compute node that is prepared as described in Section 5.1 and Section 5.3. The example interface names from those sections have been maintained in this section too. Also, for simplicity, the BNG uses the handle_none configuration mode, which makes it work as an L2 forwarding engine. The BNG is more complex than this; users who are interested in exploring more of its capabilities should read https://01.org/intel-data-plane-performance-demonstrators/quick-overview.

The setup to test the functionality of the vBNG follows.


6.3.1 Installation and Configuration Inside the VM

1. Execute the following command:

yum -y update

2. Disable SELinux:

setenforce 0
vi /etc/selinux/config

and change the setting to SELINUX=disabled.

3. Disable the firewall:

systemctl disable firewalld.service
reboot

4. Edit the grub default configuration:

vi /etc/default/grub

Add hugepages to it:

... noirqbalance intel_idle.max_cstate=0 processor.max_cstate=0 ipv6.disable=1 default_hugepagesz=1G hugepagesz=1G hugepages=2 isolcpus=1,2,3,4

5. Rebuild the grub config and reboot the system:

grub2-mkconfig -o /boot/grub2/grub.cfg
reboot

6. Verify that hugepages are available in the VM:

cat /proc/meminfo
HugePages_Total:    2
HugePages_Free:     2
Hugepagesize:       1048576 kB

7. Add the following to the end of the ~/.bashrc file:

export RTE_SDK=/root/dpdk
export RTE_TARGET=x86_64-native-linuxapp-gcc
export OVS_DIR=/root/ovs
export RTE_UNBIND=$RTE_SDK/tools/dpdk_nic_bind.py
export DPDK_DIR=$RTE_SDK
export DPDK_BUILD=$DPDK_DIR/$RTE_TARGET

8. Re-login or source that file:

. ~/.bashrc

9. Install DPDK:

git clone http://dpdk.org/git/dpdk
cd dpdk
git checkout v1.7.1
make install T=$RTE_TARGET
modprobe uio
insmod $RTE_SDK/$RTE_TARGET/kmod/igb_uio.ko


10. Check the PCI addresses of the 82599 cards:

lspci | grep Network
00:04.0 Ethernet controller: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection (rev 01)
00:05.0 Ethernet controller: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection (rev 01)
00:06.0 Ethernet controller: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection (rev 01)
00:07.0 Ethernet controller: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection (rev 01)

11. Make sure that the correct PCI addresses are listed in the script bind_to_igb_uio.sh (a sketch of such a script appears after this list).

12. Download the BNG packages:

wget https://01.org/sites/default/files/downloads/intel-data-plane-performance-demonstrators/dppd-bng-v013.zip

13. Extract the DPPD BNG sources:

unzip dppd-bng-v013.zip

14. Build the BNG DPPD application:

yum -y install ncurses-devel
cd dppd-BNG-v013
make

15. Refer to Section 6.3.3, "Extra Preparations on the Compute Node," before running the BNG application in the VM inside the compute node.

16. Make sure that the application starts:

./build/dppd -f config/handle_none.cfg

The handle_none configuration passes all traffic straight through between ports, which is essentially similar to the L2 forwarding test. The config directory contains additional, more complex BNG configurations and Pktgen scripts. Additional BNG-specific workloads can be found in the dppd-BNG-v013/pktgen-scripts directory.
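Referring to step 11: the bind_to_igb_uio.sh script is not reproduced in this guide, so the following is only a hypothetical sketch of what such a script typically does, assuming the VM's two traffic ports are the ones seen at 00:06.0 and 00:07.0 in step 10:

#!/bin/bash
# Hypothetical sketch: bind the VM's traffic ports to igb_uio so DPDK (and the BNG) can use them.
modprobe uio
insmod $RTE_SDK/$RTE_TARGET/kmod/igb_uio.ko 2>/dev/null || true
$RTE_SDK/tools/dpdk_nic_bind.py --bind=igb_uio 00:06.0 00:07.0
$RTE_SDK/tools/dpdk_nic_bind.py --status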

Following is a sample graphic of the BNG running in a VM with 2 ports


Exit the application by pressing ESC or Ctrl-C.

Refer to Section 6.3.2 regarding installation and running of the software traffic generator.

For a sanity check, users can use the Pktgen wrapper script onps_pktgen-64bytes-UDP-2ports.sh (run on its dedicated server) to test the handle_none throughput across the two physical and two virtual ports. You'll need to update PKTGEN_DIR at the top of the file to point to the right directory, which, referring to Section 6.3.2, is the following:

PKTGEN_DIR=/home/stack/git/Pktgen-DPDK
./pktgen-64bytes.sh

6.3.2 Installation and Configuration of the Back-to-Back Host (Packet Generator)

The back-to-back host can be any Intel® Xeon® processor-based system, or it can be any compute node that has been prepared using the instructions in Section 5.1 and Section 5.3. For simplicity, Intel assumes the latter was the case. Also assume that the git directory for the stack user is /home/stack/git.

1. In the git directory, get the source from GitHub:

git clone https://github.com/Pktgen/Pktgen-DPDK.git
cd Pktgen-DPDK

2. An extra package must be installed for Pktgen to compile correctly:

yum -y install libpcap-devel

Pktgen comes with its own distribution of the DPDK sources. This bundled version of DPDK must be used. Note that it contains some Wind River specific helper libraries that are not in the default DPDK distribution, which Pktgen depends on.

3. The $RTE_TARGET variable must be set to a specific value, otherwise these libraries will not build:

cd
vi .bashrc

Add the following three lines to the end:

export RTE_SDK=$HOME/Pktgen-DPDK/dpdk
export RTE_TARGET=x86_64-pktgen-linuxapp-gcc
export PKTGEN_DIR=$HOME/Pktgen-DPDK

4. Re-login or execute the following command:

. ~/.bashrc

5. Build the basic DPDK libraries and extra helpers:

cd $RTE_SDK
make install T=$RTE_TARGET

6. Build Pktgen:

cd examples/pktgen
make

7. Adapt the dpdk_nic_bind.py invocation to the actual NICs in use, so that both interfaces are bound to igb_uio and DPDK can use them (an example bind command appears after this list). See the details of the command that follows:

./tools/dpdk_nic_bind.py --status

8. Use onps_pktgen-64-bytes-UDP-2ports.sh from onps_server_1_2.tar.gz.


9. Now run the script as root, after the compute node has been set up as in Section 6.3.3, the BNG VM has been prepared as in Section 6.3.1, and the BNG is running inside the VM.
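For step 7, a typical bind invocation looks like the following sketch; the PCI addresses are placeholders and must be replaced with the two 82599 ports reported by the --status output on the actual host:

./tools/dpdk_nic_bind.py --bind=igb_uio 0000:08:00.0 0000:08:00.1
./tools/dpdk_nic_bind.py --status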

6.3.3 Extra Preparations on the Compute Node

1. Do the following as the stack user:

cd /home/stack/devstack
vi localconf

2. Comment out the following:

PHYSICAL_NETWORK=physnet1
OVS_PHYSICAL_BRIDGE=br-p1p1

and at the same time add the following line right below the commented ones:

OVS_BRIDGE_MAPPINGS=default:br-p1p1,physnet1:br-p1p2

3. Run again as the stack user:

./unstack.sh

./stack.sh

This causes both physical interfaces to come up and get bound to the DPDK. Also, a bridge is created on top of each of these interfaces:

ovs-vsctl show
b52bd3ed-0f6c-45b9-ace1-846d901bed64
    Bridge br-p1p1
        Port br-p1p1
            Interface br-p1p1
                type: internal
        Port p1p1
            Interface p1p1
                type: dpdkphy
                options: {port=0}
        Port phy-br-p1p1
            Interface phy-br-p1p1
                type: patch
                options: {peer=int-br-p1p1}
    Bridge br-int
        fail_mode: secure
        Port int-br-p1p2
            Interface int-br-p1p2
                type: patch
                options: {peer=phy-br-p1p2}
        Port int-br-p1p1
            Interface int-br-p1p1
                type: patch
                options: {peer=phy-br-p1p1}
        Port br-int
            Interface br-int
                type: internal
    Bridge br-p1p2
        Port phy-br-p1p2
            Interface phy-br-p1p2
                type: patch
                options: {peer=int-br-p1p2}
        Port p1p2
            Interface p1p2
                type: dpdkphy
                options: {port=1}
        Port br-p1p2
            Interface br-p1p2
                type: internal

4. Move the p1p2 physical port under the same bridge as p1p1:

ovs-vsctl del-port p1p2
ovs-vsctl add-port br-p1p1 p1p2 -- set Interface p1p2 type=dpdkphy options:port=1

5. Delete the agent of OpenStack:

./rejoin-stack.sh
ctrl-a 1
ctrl-c
ctrl-a d

6. Add the dpdkvhost interfaces for the VM:

ovs-vsctl --no-wait add-port br-p1p1 port3 -- set Interface port3 type=dpdkvhost ofport_request=3
ovs-vsctl --no-wait add-port br-p1p1 port4 -- set Interface port4 type=dpdkvhost ofport_request=4

7. Find out the OpenFlow port numbers of the interfaces:

ovs-ofctl show br-p1p1

The output should be similar to the following. Note the number on the left of each interface, because it is the OpenFlow port number:

OFPT_FEATURES_REPLY (xid=0x2): dpid:0000286031010000
n_tables:254, n_buffers:256
capabilities: FLOW_STATS TABLE_STATS QUEUE_STATS ARP_MATCH_IP
actions: OUTPUT SET_VLAN_VID SET_VLAN_PCP STRIP_VLAN SET_DL_SRC SET_DL_DST SET_NW_SRC SET_NW_DST SET_TP_SRC SET_TP_DST
 1(phy-br-p1p1): addr:9e:ae:92:25:3c:c1  config: 0  state: 0  speed: 0 Mbps now, 0 Mbps max
 2(p1p2): addr:9e:ae:92:25:3c:c1  config: 0  state: 0  speed: 0 Mbps now, 0 Mbps max
 3(port3): addr:9e:ae:92:25:3c:c1  config: 0  state: 0  speed: 0 Mbps now, 0 Mbps max
 4(port4): addr:49:04:ff:7f:00:00  config: 0  state: 0  speed: 0 Mbps now, 0 Mbps max
 16(p1p1): addr:49:04:ff:7f:00:00  config: 0  state: 0  speed: 0 Mbps now, 0 Mbps max
 LOCAL(br-p1p1): addr:9e:ae:92:25:3c:c1  config: 0  state: 0  speed: 0 Mbps now, 0 Mbps max
OFPT_GET_CONFIG_REPLY (xid=0x4): frags=normal miss_send_len=0

8. Clean up the flow table of the bridge:

ovs-ofctl del-flows br-p1p1

9. Program the flows so that each physical interface forwards packets to a dpdkvhost interface and the other way round (a quick way to verify the resulting flow table is shown after this list):

ovs-ofctl add-flow br-p1p1 in_port=16,dl_type=0x0800,idle_timeout=0,action=output:3
ovs-ofctl add-flow br-p1p1 in_port=3,dl_type=0x0800,idle_timeout=0,action=output:16
ovs-ofctl add-flow br-p1p1 in_port=4,dl_type=0x0800,idle_timeout=0,action=output:2
ovs-ofctl add-flow br-p1p1 in_port=2,dl_type=0x0800,idle_timeout=0,action=output:4


10. Users can now spawn their vBNG:

qemu-kvm -cpu host -enable-kvm -m 4096 -smp 4,cores=4,threads=1,sockets=1 -name VM1 -hda <path to the VM image file> -mem-path /dev/hugepages -mem-prealloc -vnc :2 -daemonize -net nic,model=virtio,macaddr=00:1e:77:68:09:fd -net tap,ifname=tap1,script=no,downscript=no -netdev type=tap,id=net1,script=no,downscript=no,ifname=port3,vhost=on -device virtio-net-pci,netdev=net1,mac=00:00:01:00:00:01,csum=off,gso=off,guest_tso4=off,guest_tso6=off,guest_ecn=off -netdev type=tap,id=net2,script=no,downscript=no,ifname=port4,vhost=on -device virtio-net-pci,netdev=net2,mac=00:00:01:00:00:02,csum=off,gso=off,guest_tso4=off,guest_tso6=off,guest_ecn=off
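Referring to step 9, the programmed flow table can be checked with standard Open vSwitch tools before traffic is started; the bridge and port numbers are the ones used in the example above:

ovs-ofctl dump-flows br-p1p1     # should list the four IPv4 flows added in step 9
ovs-ofctl dump-ports br-p1p1     # per-port packet counters should increase once the generator runs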


Appendix A Additional OpenDaylight Information

This section describes how OpenDaylight can be used in a multi-node setup. Two hosts are used: one running OpenDaylight, the OpenStack controller + compute, and OVS; the second host is a compute node. This section describes how to create a VXLAN tunnel, create VMs, and ping from one VM to another.

Note: Due to a known defect in ODL (https://bugs.opendaylight.org/show_bug.cgi?id=2469), the multi-node setup could not be verified.

Following is a sample localconf for the OpenDaylight host:

[[local|localrc]]
FORCE=yes

HOST_NAME=<name of this machine>
HOST_IP=<ip of this machine>
HOST_IP_IFACE=<mgmt ip isolated from internet>

PUBLIC_INTERFACE=<isolated IP could be same as HOST_IP_IFACE>
VLAN_INTERFACE=
FLAT_INTERFACE=

ADMIN_PASSWORD=password
MYSQL_PASSWORD=password
DATABASE_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=no-token-password
HORIZON_PASSWORD=password
RABBIT_PASSWORD=password

disable_service n-net
disable_service n-cpu

enable_service q-svc
enable_service q-agt
enable_service q-dhcp
enable_service q-l3
enable_service q-meta
enable_service neutron
enable_service horizon

LOGFILE=$DEST/stack.sh.log
SCREEN_LOGDIR=$DEST/screen
SYSLOG=True
LOGDAYS=1

# ODL start
Q_HOST=$HOST_IP
enable_service odl-server
enable_service odl-compute
ODL_MGR_IP=1011107
ENABLED_SERVICES+=,n-api,n-crt,n-obj,n-cpu,n-cond,n-sch,n-novnc,n-cauth,nova
ENABLED_SERVICES+=,cinder,c-api,c-vol,c-sch,c-bak

Q_PLUGIN=ml2
Q_AGENT=openvswitch
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch,opendaylight
Q_ML2_PLUGIN_TYPE_DRIVERS=vlan,flat,local
Q_ML2_TENANT_NETWORK_TYPE=vxlan

ENABLE_TENANT_TUNNELS=True
ENABLE_TENANT_VLANS=True

PHYSICAL_NETWORK=physnet1
ML2_VLAN_RANGES=physnet1:1000:1010
OVS_PHYSICAL_BRIDGE=br-

MULTI_HOST=True

[[post-config|$NOVA_CONF]]
# disable nova security groups
[DEFAULT]
firewall_driver=nova.virt.firewall.NoopFirewallDriver
novncproxy_host=0.0.0.0
novncproxy_port=6080

Here is a sample localconf for the compute node:

[[local|localrc]]
FORCE=yes
MULTI_HOST=True

HOST_NAME=<name of this machine>
HOST_IP=<ip of this machine>
HOST_IP_IFACE=<isolated interface>
SERVICE_HOST_NAME=<name of the controller machine>
SERVICE_HOST=<ip of controller machine>
Q_HOST=$SERVICE_HOST

MYSQL_HOST=$SERVICE_HOST
RABBIT_HOST=$SERVICE_HOST

GLANCE_HOST=$SERVICE_HOST
GLANCE_HOSTPORT=$SERVICE_HOST:9292
KEYSTONE_AUTH_HOST=$SERVICE_HOST
KEYSTONE_SERVICE_HOST=<ip of controller machine>

ADMIN_PASSWORD=password
MYSQL_PASSWORD=password
DATABASE_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=no-token-password
HORIZON_PASSWORD=password
RABBIT_PASSWORD=password

disable_all_services

enable_service rabbit
enable_service n-cpu
enable_service q-agt
enable_service odl-compute

DEST=/opt/stack
LOGFILE=$DEST/stack.sh.log
SCREEN_LOGDIR=$DEST/screen
SYSLOG=True
LOGDAYS=1

ODL_MGR_IP=<ip of controller machine>

Q_PLUGIN=ml2
Q_AGENT=openvswitch
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch,opendaylight
Q_ML2_PLUGIN_TYPE_DRIVERS=vxlan
OVS_NUM_HUGEPAGES=8192
OVS_DATAPATH_TYPE=netdev

OVDK_OVS_GIT_TAG=

ENABLE_TENANT_TUNNELS=True
ENABLE_TENANT_VLANS=True

Q_ML2_TENANT_NETWORK_TYPE=vxlan
ML2_VLAN_RANGES=physnet1:1000:1010
PHYSICAL_NETWORK=physnet1
OVS_PHYSICAL_BRIDGE=br-p1p1

[[post-config|$NOVA_CONF]]
[DEFAULT]
firewall_driver=nova.virt.firewall.NoopFirewallDriver
vnc_enabled=True
vncserver_listen=0.0.0.0
vncserver_proxyclient_address=$HOST_IP

A.1 Create VMs Using the DevStack Horizon GUI

After starting the OpenDaylight controller as described in Section 6.2, run stack.sh on the controller and compute nodes.

Log in to http://<control node ip address>:8080 to start the Horizon GUI.

Verify that the node shows up in the GUI.

Create a new VXLAN network:

1. Click on the Networks tab.

2. Click on the Create Network button.

3. Enter the Network name, then click Next.


4. Enter the subnet information, then click Next.

5. Add additional information, then click Next.

6. Click the Create button.

7. Create a VM instance by clicking the Launch Instances button.


8. Click on the Details tab to enter the VM details.

9. Click on the Networking tab, then enter the network information.

The VMs will now be created.

Instead of disabling the OVSDB neutron service by removing the OVSDB neutron bundle file, it is possible to disable the bundle from the OSGi console. However, there does not appear to be a way to make this persistent, so it must be done each time the controller restarts.


Once the controller is up and running, connect to the OSGi console. The ss command displays all of the bundles that are installed and their status; adding a string filters the list of bundles. List the OVSDB bundles:

osgi> ss ovs
Framework is launched.

id      State       Bundle
106     ACTIVE      org.opendaylight.ovsdb.northbound_0.5.0
112     ACTIVE      org.opendaylight.ovsdb_0.5.0
262     ACTIVE      org.opendaylight.ovsdb.neutron_0.5.0

Note that there are three OVSDB bundles running (the OVSDB neutron bundle has not been removed in this case).

Disable the OVSDB neutron bundle and then list the OVSDB bundles again:

osgi> stop 262
osgi> ss ovs
Framework is launched.

id      State       Bundle
106     ACTIVE      org.opendaylight.ovsdb.northbound_0.5.0
112     ACTIVE      org.opendaylight.ovsdb_0.5.0
262     RESOLVED    org.opendaylight.ovsdb.neutron_0.5.0

Now the OVSDB neutron bundle is in the RESOLVED state, which means that it is not active.


Appendix B BNG as an Appliance

Please download the latest BNG application from https://01.org/intel-data-plane-performance-demonstrators/downloads. More details about how the BNG works can be found at https://01.org/intel-data-plane-performance-demonstrators/quick-overview.


Appendix C Glossary

Acronym Description

ATR Application Targeted Routing

COTS Commercial Off‐The-Shelf

DPI Deep Packet Inspection

FCS Frame Check Sequence

GRE Generic Routing Encapsulation

GRO Generic Receive Offload

IOMMU InputOutput Memory Management Unit

Kpps Kilo packets per second

KVM Kernel-based Virtual Machine

LRO Large Receive Offload

MSI Message Signaled Interrupt

MPLS Multi-protocol Label Switching

Mpps Million packets per second

NIC Network Interface Card

pps Packets per second

QAT Quick Assist Technology

QinQ VLAN stacking (802.1ad)

RA Reference Architecture

RSC Receive Side Coalescing

RSS Receive Side Scaling

SP Service Provider

SR-IOV Single Root I/O Virtualization

TCO Total Cost of Ownership

TSO TCP Segmentation Offload


Appendix D References

Document Name Source

Internet Protocol version 4: http://www.ietf.org/rfc/rfc791.txt

Internet Protocol version 6: http://www.faqs.org/rfc/rfc2460.txt

Intel® 82599 10 Gigabit Ethernet Controller Datasheet: http://www.intel.com/content/www/us/en/ethernet-controllers/82599-10-gbe-controller-datasheet.html

Intel DDIO: https://www-ssl.intel.com/content/www/us/en/io/direct-data-i-o.html

Bandwidth Sharing Fairness: http://www.intel.com/content/www/us/en/network-adapters/10-gigabit-network-adapters/10-gbe-ethernet-flexible-port-partitioning-brief.html

Design Considerations for Efficient Network Applications with Intel® Multi-core Processor-based Systems on Linux: http://download.intel.com/design/intarch/papers/324176.pdf

OpenFlow with Intel 82599: http://ftp.sunet.se/pub/Linux/distributions/bifrost/seminars/workshop-2011-03-31/Openflow_1103031.pdf

Wu, W., DeMar, P., & Crawford, M. (2012). A Transport-Friendly NIC for Multicore/Multiprocessor Systems. IEEE Transactions on Parallel and Distributed Systems, vol. 23, no. 4, April 2012: http://lss.fnal.gov/archive/2010/pub/fermilab-pub-10-327-cd.pdf

Why Does Flow Director Cause Packet Reordering?: http://arxiv.org/ftp/arxiv/papers/1106/1106.0443.pdf

IA packet processing: http://www.intel.com/p/en_US/embedded/hwsw/technology/packet-processing

High Performance Packet Processing on Cloud Platforms using Linux with Intel® Architecture: http://networkbuilders.intel.com/docs/network_builders_RA_packet_processing.pdf

Packet Processing Performance of Virtualized Platforms with Linux and Intel® Architecture: http://networkbuilders.intel.com/docs/network_builders_RA_NFV.pdf

DPDK: http://www.intel.com/go/dpdk

Intel® DPDK Accelerated vSwitch: https://01.org/packet-processing


LEGAL

By using this document in addition to any agreements you have with Intel you accept the terms set forth below

You may not use or facilitate the use of this document in connection with any infringement or other legal analysis concerning Intel products described herein You agree to grant Intel a non-exclusive royalty-free license to any patent claim thereafter drafted which includes subject matter disclosed herein

INFORMATION IN THIS DOCUMENT IS PROVIDED IN CONNECTION WITH INTEL PRODUCTS NO LICENSE EXPRESS OR IMPLIED BY ESTOPPEL OR OTHERWISE TO ANY INTELLECTUAL PROPERTY RIGHTS IS GRANTED BY THIS DOCUMENT EXCEPT AS PROVIDED IN INTELS TERMS AND CONDITIONS OF SALE FOR SUCH PRODUCTS INTEL ASSUMES NO LIABILITY WHATSOEVER AND INTEL DISCLAIMS ANY EXPRESS OR IMPLIED WARRANTY RELATING TO SALE ANDOR USE OF INTEL PRODUCTS INCLUDING LIABILITY OR WARRANTIES RELATING TO FITNESS FOR A PARTICULAR PURPOSE MERCHANTABILITY OR INFRINGEMENT OF ANY PATENT COPYRIGHT OR OTHER INTELLECTUAL PROPERTY RIGHT

Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors

Performance tests such as SYSmark and MobileMark are measured using specific computer systems components software operations and functions Any change to any of those factors may cause the results to vary You should consult other information and performance tests to assist you in fully evaluating your contemplated purchases including the performance of that product when combined with other products

The products described in this document may contain design defects or errors known as errata which may cause the product to deviate from published specifications Current characterized errata are available on request Contact your local Intel sales office or your distributor to obtain the latest specifications and before placing your product order

Intel technologies may require enabled hardware specific software or services activation Check with your system manufacturer or retailer Tests document performance of components on a particular test in specific systems Differences in hardware software or configuration will affect actual performance Consult other sources of information to evaluate performance as you consider your purchase For more complete information about performance and benchmark results visit httpwwwintelcomperformance

All products computer systems dates and figures specified are preliminary based on current expectations and are subject to change without notice Results have been estimated or simulated using internal Intel analysis or architecture simulation or modeling and provided to you for informational purposes Any differences in your system hardware software or configuration may affect your actual performance

No computer system can be absolutely secure Intel does not assume any liability for lost or stolen data or systems or any damages resulting from such losses

Intel does not control or audit third-party web sites referenced in this document You should visit the referenced web site and confirm whether referenced data are accurate

Intel Corporation may have patents or pending patent applications trademarks copyrights or other intellectual property rights that relate to the presented subject matter The furnishing of documents and other materials and information does not provide any license express or implied by estoppel or otherwise to any such patents trademarks copyrights or other intellectual property rights

copy 2014 Intel Corporation All rights reserved Intel the Intel logo Core Xeon and others are trademarks of Intel Corporation in the US andor other countries Other names and brands may be claimed as the property of others

Page 27: Intel Open Network Platform Server Reference Architecture ... · Intel® ONP Server Reference Architecture Solutions Guide 1.0 Audience and Purpose The primary audiences for this

27

Intelreg ONP Server Reference ArchitectureSolutions Guide

60 Testing the Setup

This section describes how to bring up the VMs in a compute node connect them to the virtual network(s) verify the functionality

Note Currently it is not possible to have more than one virtual network in a multi-compute node setup Although it is possible to have more than one virtual network in a single compute node setup

61 Preparation with OpenStack

611 Deploying Virtual Machines

6111 Default Settings

OpenStack comes with the following default settings

bull Tenant (Project) admin demo

bull Network

mdash Private network (virtual network) 1000024

mdash Public network (external network) 172244024

bull Image cirros-031-x86_64

bull Flavor nano micro tiny small medium large xlarge

To deploy new instances (VMs) with different setups (such as a different VM image flavor or network) users must create their own See below for details of how to create them

To access the OpenStack dashboard use a web browser (Firefox Internet Explorer or others) and the controllers IP address (management network) For example

http1011121

Login information is defined in the localconf file In the examples that follow password is the password for both admin and demo users

Intelreg ONP Server Reference ArchitectureSolutions Guide

28

6112 Customer Settings

The following examples describe how to create a custom VM image flavor and aggregateavailability zone using OpenStack commands The examples assume the IP address of the controller is 1011121

1 Create a credential file admin-cred for admin user The file contains the following lines

export OS_USERNAME=adminexport OS_TENANT_NAME=adminexport OS_PASSWORD=passwordexport OS_AUTH_URL=http101112135357v20

2 Source admin-cred to the shell environment for actions of creating glance image aggregateavailability zone and flavor

source admin-cred

3 Create an OpenStack glance image A VM image file should be ready in a location accessible by OpenStack

glance image-create --name ltimage-name-to-creategt --is-public=true --container-format=bare --disk-format=ltformatgt --file=ltimage-file-path-namegt

The following example shows the image file fedora20-x86_64-basicqcow2 is located in a NFS share and mounted at mntnfsopenstackimages to the controller host The following command creates a glance image named fedora-basic with qcow2 format for public use (such as any tenant can use this glance image)

glance image-create --name fedora-basic --is-public=true --container-format=bare --disk-format=qcow2 --file=mntnfsopenstackimagesfedora20-x86_64-basicqcow2

4 Create host aggregate and availability zone

First find out the available hypervisors and then use the information for creating aggregateavailability zone

nova hypervisor-listnova aggregate-create ltaggregate-namegt ltzone-namegtnova aggregate-add-host ltaggregate-namegt lthypervisor-namegt

The following example creates an aggregate named aggr-g06 with one availability zone named zone-g06 and the aggregate contains one hypervisor named sdnlab-g06

nova aggregate-create aggr-g06 zone-g06nova aggregate-add-host aggr-g06 sdnlab-g06

5 Create flavor Flavor is a virtual hardware configuration for the VMs it defines the number of virtual CPUs size of virtual memory and disk space among others

The following command creates a flavor named onps-flavor with an ID of 1001 1024 Mb virtual memory 4 Gb virtual disk space and 1 virtual CPU

nova flavor-create onps-flavor 1001 1024 4 1

29

Intelreg ONP Server Reference ArchitectureSolutions Guide

6.1.1.3 Example — VM Deployment

The following example describes how to use a custom VM image, flavor, and aggregate to launch a VM for the demo tenant using OpenStack commands. Again, the example assumes the IP address of the controller is 10.1.1.121.

1 Create a credential file demo-cred for a demo user The file contains the following lines

export OS_USERNAME=demo
export OS_TENANT_NAME=demo
export OS_PASSWORD=password
export OS_AUTH_URL=http://10.1.1.121:35357/v2.0

2 Source demo-cred into the shell environment for the actions of creating the tenant network and instance (VM):

source demo-cred

3 Create network for tenant demo Take the following steps

a Get tenant demo

keystone tenant-list | grep -Fw demo
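
The output should be a single table row; the first column is the tenant ID used in the commands that follow. An illustrative example (using the tenant ID shown below):

| 10618268adb64f17b266fd8fb83c960d | demo | True |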

The following creates a network with a name of net-demo for tenant with ID 10618268adb64f17b266fd8fb83c960d

neutron net-create --tenant-id 10618268adb64f17b266fd8fb83c960d net-demo

b Create subnet

neutron subnet-create --tenant-id <demo-tenant-id> --name <subnet-name> <network-name> <net-ip-range>

The following creates a subnet named sub-demo with CIDR address 192.168.2.0/24 for network net-demo:

neutron subnet-create --tenant-id 10618268adb64f17b266fd8fb83c960d --name sub-demo net-demo 192.168.2.0/24

4 Create instance (VM) for tenant demo Take the following steps

a Get the name and/or ID of the image, flavor, and availability zone to be used for creating the instance:

glance image-list
nova flavor-list
nova aggregate-list
neutron net-list

b Launch an instance (VM) using information obtained from previous step

nova boot --image <image-id> --flavor <flavor-id> --availability-zone <zone-name> --nic net-id=<network-id> <instance-name>
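
For example, using the fedora-basic image, onps-flavor flavor, and zone-g06 availability zone created earlier (the network ID comes from neutron net-list; the instance name demo-vm1 is illustrative):

nova boot --image fedora-basic --flavor onps-flavor --availability-zone zone-g06 --nic net-id=<net-demo-id> demo-vm1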

c The new VM should be up and running in a few minutes

5 Log into the OpenStack dashboard using the demo user credential and click Instances under Project in the left pane; the new VM should show in the right pane. Click the instance name to open the Instance Details view, then click Console in the top menu to access the VM as follows.


6.1.1.4 Local vIPS

Configuration

1 OpenStack brings up the VMs and connects them to the vSwitch

2 IP addresses of the VMs get configured using the DHCP server. VM1 belongs to one subnet and VM3 to a different one; VM2 has ports on both subnets.

3 Flows get programmed to the vSwitch by the OpenDaylight controller (Section 6.2)

Data Path (Numbers Matching Red Circles)

1 VM1 sends a flow to VM3 through the vSwitch

2 The vSwitch forwards the flow to the first vPort of VM2 (active IPS)

3 The IPS receives the flow inspects it and (if not malicious) sends it out through its second vPort

4 The vSwitch forwards it to VM3

Figure 6-1 Local vIPS


6.1.1.5 Remote vIPS

Configuration

1 OpenStack brings up the VMs and connects them to the vSwitch

2 The IP addresses of the VMs get configured using the DHCP server

Data Path (Numbers Matching Red Circles)

1 VM1 sends a flow to VM3 through the vSwitch inside compute node 1

2 The vSwitch forwards the flow out of the first 82599 port to the first 82599 port of compute node 2

3 The vSwitch of compute node 2 forwards the flow to the first port of the vHost where the traffic gets consumed by VM1

4 The IPS receives the flow inspects it and (provided it is not malicious) sends it out through its second port of the vHost into the vSwitch of compute node 2

5 The vSwitch forwards the flow out of the second 82599 port of compute node 2 into the second port of the 82599 in compute node 1

6 The vSwitch of compute node 1 forwards the flow into the port of the vHost of VM3 where the flow gets terminated

Figure 6-2 Remote vIPS


6.1.2 Non-uniform Memory Access (NUMA) Placement and SR-IOV Pass-through for OpenStack

NUMA placement was implemented as a new feature in the OpenStack Juno release. NUMA placement enables an OpenStack administrator to pin guest systems to particular NUMA nodes for optimization. With an SR-IOV enabled network interface card, each SR-IOV port is associated with a Virtual Function (VF). OpenStack SR-IOV pass-through enables a guest to access a VF directly.

6.1.2.1 Prepare Compute Node for SR-IOV Pass-through

To enable these features, follow these steps to configure the compute node:

1 The server hardware must support IOMMU (Intel VT-d). To check whether IOMMU is supported, run the following command; the output should show IOMMU entries:

dmesg | grep -e IOMMU

Note: IOMMU can be enabled/disabled through a BIOS setting under Advanced and then Processor.

2 Enable kernel IOMMU in grub For Fedora 20 run commands

sed -i 's/rhgb quiet/rhgb quiet intel_iommu=on/g' /etc/default/grub
grub2-mkconfig -o /boot/grub2/grub.cfg

3 Install necessary packages

yum install -y yajl-devel device-mapper-devel libpciaccess-devel libnl-devel dbus-devel numactl-devel python-devel

4 Install libvirt v1.2.8 or newer. The following example uses v1.2.9.

systemctl stop libvirtd

yum remove libvirt
yum remove libvirtd

wget http://libvirt.org/sources/libvirt-1.2.9.tar.gz
tar zxvf libvirt-1.2.9.tar.gz

cd libvirt-1.2.9
./autogen.sh --system --with-dbus
make
make install

systemctl start libvirtd

Make sure libvirtd is running v1.2.9:

libvirtd --version

5 Install libvirt-python. The example below uses v1.2.9 to match the libvirt version.

yum remove libvirt-python

wget https://pypi.python.org/packages/source/l/libvirt-python/libvirt-python-1.2.9.tar.gz
tar zxvf libvirt-python-1.2.9.tar.gz

cd libvirt-python-1.2.9
python setup.py install


6 Modify /etc/libvirt/qemu.conf; add

/dev/vfio/vfio

to

cgroup_device_acl list

An example follows

cgroup_device_acl = [
    "/dev/null", "/dev/full", "/dev/zero",
    "/dev/random", "/dev/urandom",
    "/dev/ptmx", "/dev/kvm", "/dev/kqemu",
    "/dev/rtc", "/dev/hpet", "/dev/net/tun",
    "/dev/vfio/vfio"
]

7 Enable the SR-IOV virtual function for an 82599 interface The following example enables 2 VFs for interface p1p1

echo 2 > /sys/class/net/p1p1/device/sriov_numvfs

To check that virtual functions are enabled

lspci -nn | grep 82599

The screen output should display the physical function and two virtual functions
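
An illustrative output (the bus addresses depend on the system) might look like the following, with one physical function (8086:10fb) and two virtual functions (8086:10ed):

08:00.0 Ethernet controller [0200]: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection [8086:10fb] (rev 01)
08:10.0 Ethernet controller [0200]: Intel Corporation 82599 Ethernet Controller Virtual Function [8086:10ed] (rev 01)
08:10.2 Ethernet controller [0200]: Intel Corporation 82599 Ethernet Controller Virtual Function [8086:10ed] (rev 01)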

6.1.2.2 DevStack Configurations

In the following text, the example uses a controller with IP address 10.1.1.121 and a compute node with 10.1.1.124. The PCI device vendor ID (8086) and the product IDs of the 82599 (10fb for the physical function and 10ed for the VF) can be obtained from the output of:

lspci -nn | grep 82599

On Controller node

1 Edit the controller local.conf. Note that the same local.conf file of Section 5.2.1.3 is used here, adding the following:

Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch,sriovnicswitch

[[post-config|$NOVA_CONF]]
[DEFAULT]
scheduler_default_filters=RamFilter,ComputeFilter,AvailabilityZoneFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,PciPassthroughFilter,NUMATopologyFilter
pci_alias={"name":"niantic","product_id":"10ed","vendor_id":"8086"}

[[post-config|$Q_PLUGIN_CONF_FILE]]
[ml2_sriov]
supported_pci_vendor_devs = 8086:10fb 8086:10ed

2 Run ./stack.sh

On Compute node

1 Edit /opt/stack/nova/requirements.txt and add "libvirt-python>=1.2.8":

echo "libvirt-python>=1.2.8" >> /opt/stack/nova/requirements.txt

2 Edit the compute local.conf for accelerated OVS. Note that the same local.conf file of Section 5.3.1.1 is used here.


3 Add the following:

[[post-config|$NOVA_CONF]]
[DEFAULT]
pci_passthrough_whitelist={"address":"0000:08:00.0","vendor_id":"8086","physical_network":"physnet1"}

pci_passthrough_whitelist={"address":"0000:08:10.0","vendor_id":"8086","physical_network":"physnet1"}

pci_passthrough_whitelist={"address":"0000:08:10.2","vendor_id":"8086","physical_network":"physnet1"}

4 Remove (or comment out) the following. Note that currently SR-IOV pass-through is only supported with a standard OVS:

OVS_NUM_HUGEPAGES=8192
OVS_DATAPATH_TYPE=netdev
OVS_GIT_TAG=b35839f3855e3b812709c6ad1c9278f498aa9935

Run ./stack.sh on both the controller and compute nodes to complete the DevStack installation.

6.1.2.3 Create VM with NUMA Placement and SR-IOV

1 After stacking is successful on both the controller and compute nodes, verify that the PCI pass-through device(s) are in the OpenStack database:

mysql -uroot -ppassword -h 10.1.1.121 nova -e 'select * from pci_devices'

2 The output should show entry(ies) of PCI device(s) similar to the following

| 2014-11-18 194114 | NULL | NULL | 0 | 1 | 3 | 000008100 | 10ed | 8086 | type-VF | pci_0000_08_10_0 | label_8086_10ed | available | phys_function 000008000 | NULL | NULL | 0 |

3 Next, create a flavor, for example:

nova flavor-create numa-flavor 1001 1024 4 1

where

flavor name = numa-flavor
id = 1001
virtual memory = 1024 MB
virtual disk size = 4 GB
number of virtual CPUs = 1

4 Modify the flavor for NUMA placement with PCI pass-through:

nova flavor-key 1001 set "pci_passthrough:alias"="niantic:1" hw:numa_nodes=1 hw:numa_cpus.0=0 hw:numa_mem.0=1024

5 To show detailed information of the flavor

nova flavor-show 1001

6 Create a VM numa-vm1 with the flavor numa-flavor under the default project demo. Note that the following example assumes an image fedora-basic and an availability zone zone-04 are already in place (see Section 6.1.1.2), and that private is the default network for the demo project.

nova boot --image <image-id> --flavor <flavor-id> --availability-zone <zone-name> --nic net-id=<network-id> numa-vm1

where numa-vm1 is the name of the VM instance to be booted.
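
For example, using the names from this section (the network ID placeholder must be replaced with the ID of the private network from neutron net-list):

nova boot --image fedora-basic --flavor numa-flavor --availability-zone zone-04 --nic net-id=<private-net-id> numa-vm1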


Access the VM from the OpenStack Horizon dashboard; the new VM shows two virtual network interfaces. The interface with an SR-IOV VF should show a name of ensX, where X is a number (for example, ens5). If a DHCP server is available for the physical interface (p1p1 in this example), the VF gets an IP address automatically; otherwise, users can assign an IP address to the interface the same way as for a standard network interface.

To verify network connectivity through a VF users can set up two compute hosts and create a VM on each node After obtaining IP addresses the VMs should communicate with each other as with a normal network

6.2 Using OpenDaylight

This section describes how to download, install, and set up an OpenDaylight controller.

6.2.1 Preparing the OpenDaylight Controller

1 Download the pre-built OpenDaylight Helium-SR1 distribution:

wget http://nexus.opendaylight.org/content/repositories/opendaylight.release/org/opendaylight/integration/distribution-karaf/0.2.1-Helium-SR1/distribution-karaf-0.2.1-Helium-SR1.tar.gz

2 Extract the archive and cd into it

tar xf distribution-karaf-0.2.1-Helium-SR1.tar.gz
cd distribution-karaf-0.2.1-Helium-SR1

3 Use the ./bin/karaf executable to start the Karaf shell.
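
For example, from the extracted distribution directory:

./bin/karaf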


4 Install the required features

Karaf might take a long time to start, or feature installation might fail, if the host does not have network access. You'll need to set up the appropriate proxy settings.
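
The exact feature set depends on the intended use. As an illustration only (this list is an assumption, not taken from this guide), a typical set for OVSDB/OpenStack integration on Helium can be installed from the Karaf shell with:

feature:install odl-base-all odl-aaa-authn odl-restconf odl-nsf-all odl-adsal-northbound odl-mdsal-apidocs odl-ovsdb-openstack odl-ovsdb-northbound odl-dlux-core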

6.3 Border Network Gateway

This section describes how to install and run a Border Network Gateway (BNG) on a compute node that is prepared as described in Section 5.1 and Section 5.3. The example interface names from these sections have been maintained here too. Also, for simplicity, the BNG uses the handle_none configuration mode, which makes it work as an L2 forwarding engine. The BNG is more complex than this, and users interested in exploring more of its capabilities should read https://01.org/intel-data-plane-performance-demonstrators/quick-overview.

The setup to test the functionality of the vBNG follows


6.3.1 Installation and Configuration Inside the VM

1 Execute the following command:

yum -y update

2 Disable SELinux

setenforce 0
vi /etc/selinux/config

and change the setting to SELINUX=disabled.

3 Disable the firewall

systemctl disable firewalld.service
reboot

4 Edit grub default configuration

vi /etc/default/grub

Add hugepages to it

… noirqbalance intel_idle.max_cstate=0 processor.max_cstate=0 ipv6.disable=1 default_hugepagesz=1G hugepagesz=1G hugepages=2 isolcpus=1,2,3,4

5 Rebuild grub config and reboot the system

grub2-mkconfig -o /boot/grub2/grub.cfg
reboot

6 Verify that hugepages are available in the VM

cat /proc/meminfo
HugePages_Total:    2
HugePages_Free:     2
Hugepagesize:       1048576 kB

7 Add the following to the end of the ~/.bashrc file:

---------------------------------------------
export RTE_SDK=/root/dpdk
export RTE_TARGET=x86_64-native-linuxapp-gcc
export OVS_DIR=/root/ovs

export RTE_UNBIND=$RTE_SDK/tools/dpdk_nic_bind.py
export DPDK_DIR=$RTE_SDK
export DPDK_BUILD=$DPDK_DIR/$RTE_TARGET
---------------------------------------------

8 Re-login or source that file

source ~/.bashrc

9 Install DPDK

git clone http://dpdk.org/git/dpdk
cd dpdk
git checkout v1.7.1
make install T=$RTE_TARGET
modprobe uio
insmod $RTE_SDK/$RTE_TARGET/kmod/igb_uio.ko


10 Check the PCI addresses of the 82599 cards

lspci | grep Network
00:04.0 Ethernet controller: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection (rev 01)
00:05.0 Ethernet controller: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection (rev 01)
00:06.0 Ethernet controller: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection (rev 01)
00:07.0 Ethernet controller: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection (rev 01)

11 Make sure that the correct PCI addresses are listed in the script bind_to_igb_uio.sh.
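
If the script is not available inside the VM, the ports can be bound manually with the DPDK binding tool; for example (the PCI addresses are the ones shown in the previous step and must match your system):

$RTE_UNBIND --bind=igb_uio 0000:00:04.0 0000:00:05.0 0000:00:06.0 0000:00:07.0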

12 Download BNG packages

wget https://01.org/sites/default/files/downloads/intel-data-plane-performance-demonstrators/dppd-bng-v013.zip

13 Extract DPPD BNG sources

unzip dppd-bng-v013.zip

14 Build BNG DPPD application

yum -y install ncurses-devel
cd dppd-BNG-v013
make

15 Refer to Section 6.3.3, "Extra Preparations on the Compute Node," before running the BNG application in the VM inside the compute node.

16 Make sure that the application starts

./build/dppd -f config/handle_none.cfg

The handle_none configuration should pass all traffic through between ports, which is essentially similar to the L2 forwarding test. The config directory contains additional, more complex BNG configurations and Pktgen scripts. Additional BNG-specific workloads can be found in the dppd-BNG-v013/pktgen-scripts directory.

Following is a sample graphic of the BNG running in a VM with 2 ports


Exit the application by pressing ESC or CTRL-C

Refer to Section 6.3.2 regarding installation and running of the software traffic generator.

For a sanity check, users can use the pktgen wrapper script onps_pktgen-64bytes-UDP-2ports.sh to run Pktgen (on its dedicated server) in order to test the handle-none throughput for two physical and two virtual ports. You'll need to update the PKTGEN_DIR variable at the top of the file to point to the right directory, which is the following (referring to Section 6.3.2):

PKTGEN_DIR=/home/stack/git/Pktgen-DPDK/pktgen-64bytes.sh

6.3.2 Installation and Configuration of the Back-to-Back Host (Packet Generator)

The back-to-back host can be any Intel® Xeon® processor-based system, or it can be any compute node that has been prepared using the instructions in Section 5.1 and Section 5.3. For simplicity, Intel assumes the latter is the case. Also assume that the git directory for the stack user is /home/stack/git.

1 In the git directory, get the source from GitHub:

git clone https://github.com/Pktgen/Pktgen-DPDK.git
cd Pktgen-DPDK

2 An extra package must be installed for Pktgen to compile correctly

yum -y install libpcap-devel

Pktgen comes with its own distribution of DPDK sources This bundled version of DPDK must be used Note that it contains some WindRiver specific helper libraries that are not in the default DPDK distribution which Pktgen depends on

3 The $RTE_TARGET variable must be set to a specific value; otherwise, these libraries will not build.

cd
vi .bashrc

Add the following three lines to the end

export RTE_SDK=$HOME/Pktgen-DPDK/dpdk
export RTE_TARGET=x86_64-pktgen-linuxapp-gcc
export PKTGEN_DIR=$HOME/Pktgen-DPDK

4 Re-login or execute the following command

source ~/.bashrc

5 Build the basic DPDK libraries and extra helpers

cd $RTE_SDK
make install T=$RTE_TARGET

6 Build Pktgen

cd examples/pktgen
make

7 Adapt the dpdk_nic_bind.py script according to the actual NICs in use so that both interfaces are bound to igb_uio and DPDK can use them. See the details of the command that follows.

./tools/dpdk_nic_bind.py --status
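
To bind both interfaces to igb_uio, a command of the following form can be used (the PCI addresses are illustrative and must be taken from the --status output):

./tools/dpdk_nic_bind.py -b igb_uio 0000:04:00.0 0000:04:00.1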

8 Use onps_pktgen-64bytes-UDP-2ports.sh from onps_server_1_2.tar.gz.


9 Now run the script as root, after the compute node has been set up as in Section 6.3.3, the VM of the BNG has been prepared as in Section 6.3.1, and the BNG has been run inside the VM.

6.3.3 Extra Preparations on the Compute Node

1 Do the following as the stack user:

cd /home/stack/devstack
vi local.conf

2 Comment out the following

PHYSICAL_NETWORK=physnet1
OVS_PHYSICAL_BRIDGE=br-p1p1

And at the same time add the following line right below the previous commented ones

OVS_BRIDGE_MAPPINGS=default:br-p1p1,physnet1:br-p1p2

3 Run again as stack user

./unstack.sh

./stack.sh

This causes both physical interfaces to come up and get bound to DPDK. Also, a bridge is created on top of each of these interfaces:

ovs-vsctl show
b52bd3ed-0f6c-45b9-ace1-846d901bed64
    Bridge br-p1p1
        Port br-p1p1
            Interface br-p1p1
                type: internal
        Port p1p1
            Interface p1p1
                type: dpdkphy
                options: {port="0"}
        Port phy-br-p1p1
            Interface phy-br-p1p1
                type: patch
                options: {peer=int-br-p1p1}
    Bridge br-int
        fail_mode: secure
        Port int-br-p1p2
            Interface int-br-p1p2
                type: patch
                options: {peer=phy-br-p1p2}
        Port int-br-p1p1
            Interface int-br-p1p1
                type: patch
                options: {peer=phy-br-p1p1}
        Port br-int
            Interface br-int
                type: internal
    Bridge br-p1p2
        Port phy-br-p1p2
            Interface phy-br-p1p2
                type: patch
                options: {peer=int-br-p1p2}
        Port p1p2
            Interface p1p2
                type: dpdkphy
                options: {port="1"}
        Port br-p1p2
            Interface br-p1p2
                type: internal

4 Move the p1p2 physical port under the same bridge as p1p1

ovs-vsctl del-port p1p2
ovs-vsctl add-port br-p1p1 p1p2 -- set Interface p1p2 type=dpdkphy option:port=1

5 Stop the OpenStack agent:

./rejoin-stack.sh
ctrl-a 1
ctrl-c
ctrl-a d

6 Add the dpdkvhost interfaces for the VM

ovs-vsctl --no-wait add-port br-p1p1 port3 -- set Interface port3 type=dpdkvhost ofport_request=3
ovs-vsctl --no-wait add-port br-p1p1 port4 -- set Interface port4 type=dpdkvhost ofport_request=4

7 Find out the OpenFlow port numbers of the interfaces:

ovs-ofctl show br-p1p1

The output should be similar to the following. Note the number to the left of each interface name, because it is the OpenFlow port number.

OFPT_FEATURES_REPLY (xid=0x2): dpid:0000286031010000
n_tables:254, n_buffers:256
capabilities: FLOW_STATS TABLE_STATS QUEUE_STATS ARP_MATCH_IP
actions: OUTPUT SET_VLAN_VID SET_VLAN_PCP STRIP_VLAN SET_DL_SRC SET_DL_DST SET_NW_SRC SET_NW_DST SET_TP_SRC SET_TP_DST
 1(phy-br-p1p1): addr:9e:ae:92:25:3c:c1
     config: 0, state: 0, speed: 0 Mbps now, 0 Mbps max
 2(p1p2): addr:9e:ae:92:25:3c:c1
     config: 0, state: 0, speed: 0 Mbps now, 0 Mbps max
 3(port3): addr:9e:ae:92:25:3c:c1
     config: 0, state: 0, speed: 0 Mbps now, 0 Mbps max
 4(port4): addr:49:04:ff:7f:00:00
     config: 0, state: 0, speed: 0 Mbps now, 0 Mbps max
 16(p1p1): addr:49:04:ff:7f:00:00
     config: 0, state: 0, speed: 0 Mbps now, 0 Mbps max
 LOCAL(br-p1p1): addr:9e:ae:92:25:3c:c1
     config: 0, state: 0, speed: 0 Mbps now, 0 Mbps max
OFPT_GET_CONFIG_REPLY (xid=0x4): frags=normal miss_send_len=0

8 Clean up the flow table of the bridge

ovs-ofctl del-flows br-p1p1

9 Program the flows so each physical interface forwards the packets to a dpdkvhost interface and the other way round

ovs-ofctl add-flow br-p1p1 in_port=16,dl_type=0x0800,idle_timeout=0,action=output:3
ovs-ofctl add-flow br-p1p1 in_port=3,dl_type=0x0800,idle_timeout=0,action=output:16
ovs-ofctl add-flow br-p1p1 in_port=4,dl_type=0x0800,idle_timeout=0,action=output:2
ovs-ofctl add-flow br-p1p1 in_port=2,dl_type=0x0800,idle_timeout=0,action=output:4
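
As a quick check (not part of the original procedure), the installed flows can be listed with:

ovs-ofctl dump-flows br-p1p1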


10 Users can now spawn their vBNG

qemu-kvm -cpu host -enable-kvm -m 4096 -smp 4,cores=4,threads=1,sockets=1 -name VM1 -hda <path to the VM image file> -mem-path /dev/hugepages -mem-prealloc -vnc :2 -daemonize -net nic,model=virtio,macaddr=00:1e:77:68:09:fd -net tap,ifname=tap1,script=no,downscript=no -netdev type=tap,id=net1,script=no,downscript=no,ifname=port3,vhost=on -device virtio-net-pci,netdev=net1,mac=00:00:01:00:00:01,csum=off,gso=off,guest_tso4=off,guest_tso6=off,guest_ecn=off -netdev type=tap,id=net2,script=no,downscript=no,ifname=port4,vhost=on -device virtio-net-pci,netdev=net2,mac=00:00:01:00:00:02,csum=off,gso=off,guest_tso4=off,guest_tso6=off,guest_ecn=off


Appendix A Additional OpenDaylight Information

This section describes how OpenDaylight can be used in a multi-node setup. Two hosts are used: one running OpenDaylight, the OpenStack Controller + Compute, and OVS; the second host is the compute node. This section describes how to create a VXLAN tunnel, create VMs, and ping from one VM to another.

Note: Due to a known defect in ODL (https://bugs.opendaylight.org/show_bug.cgi?id=2469), the multi-node setup could not be verified.

Following is a sample local.conf for the OpenDaylight host:

[[local|localrc]]
FORCE=yes

HOST_NAME=<name of this machine>
HOST_IP=<ip of this machine>
HOST_IP_IFACE=<mgmt ip isolated from internet>

PUBLIC_INTERFACE=<isolated IP could be same as HOST_IP_IFACE>
VLAN_INTERFACE=
FLAT_INTERFACE=

ADMIN_PASSWORD=password
MYSQL_PASSWORD=password
DATABASE_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=no-token-password
HORIZON_PASSWORD=password
RABBIT_PASSWORD=password

disable_service n-net
disable_service n-cpu

enable_service q-svc
enable_service q-agt
enable_service q-dhcp
enable_service q-l3
enable_service q-meta
enable_service neutron
enable_service horizon

LOGFILE=$DEST/stack.sh.log
SCREEN_LOGDIR=$DEST/screen
SYSLOG=True
LOGDAYS=1

# ODL start
Q_HOST=$HOST_IP
enable_service odl-server
enable_service odl-compute
ODL_MGR_IP=10.1.1.107
ENABLED_SERVICES+=,n-api,n-crt,n-obj,n-cpu,n-cond,n-sch,n-novnc,n-cauth,nova
ENABLED_SERVICES+=,cinder,c-api,c-vol,c-sch,c-bak


Q_PLUGIN=ml2

Q_AGENT=openvswitch
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch,opendaylight
Q_ML2_PLUGIN_TYPE_DRIVERS=vlan,flat,local
Q_ML2_TENANT_NETWORK_TYPE=vxlan

ENABLE_TENANT_TUNNELS=True
ENABLE_TENANT_VLANS=True

PHYSICAL_NETWORK=physnet1
ML2_VLAN_RANGES=physnet1:1000:1010
OVS_PHYSICAL_BRIDGE=br-

MULTI_HOST=True

[[post-config|$NOVA_CONF]]

# disable nova security groups
[DEFAULT]
firewall_driver=nova.virt.firewall.NoopFirewallDriver
novncproxy_host=0.0.0.0
novncproxy_port=6080

Here is a sample local.conf for the compute node:

[[local|localrc]]

FORCE=yes
MULTI_HOST=True

HOST_NAME=<name of this machine>
HOST_IP=<ip of this machine>
HOST_IP_IFACE=<isolated interface>
SERVICE_HOST_NAME=<name of the controller machine>
SERVICE_HOST=<ip of controller machine>
Q_HOST=$SERVICE_HOST

MYSQL_HOST=$SERVICE_HOST
RABBIT_HOST=$SERVICE_HOST

GLANCE_HOST=$SERVICE_HOST
GLANCE_HOSTPORT=$SERVICE_HOST:9292
KEYSTONE_AUTH_HOST=$SERVICE_HOST
KEYSTONE_SERVICE_HOST=<ip of controller machine>

ADMIN_PASSWORD=password
MYSQL_PASSWORD=password
DATABASE_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=no-token-password
HORIZON_PASSWORD=password
RABBIT_PASSWORD=password

disable_all_services

enable_service rabbit
enable_service n-cpu
enable_service q-agt
enable_service odl-compute

DEST=/opt/stack
LOGFILE=$DEST/stack.sh.log
SCREEN_LOGDIR=$DEST/screen
SYSLOG=True
LOGDAYS=1


ODL_MGR_IP=<ip of controller machine>

Q_PLUGIN=ml2
Q_AGENT=openvswitch
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch,opendaylight
Q_ML2_PLUGIN_TYPE_DRIVERS=vxlan
OVS_NUM_HUGEPAGES=8192
OVS_DATAPATH_TYPE=netdev

OVDK_OVS_GIT_TAG=

ENABLE_TENANT_TUNNELS=True
ENABLE_TENANT_VLANS=True

Q_ML2_TENANT_NETWORK_TYPE=vxlan
ML2_VLAN_RANGES=physnet1:1000:1010
PHYSICAL_NETWORK=physnet1
OVS_PHYSICAL_BRIDGE=br-p1p1

[[post-config|$NOVA_CONF]]
[DEFAULT]
firewall_driver=nova.virt.firewall.NoopFirewallDriver
vnc_enabled=True
vncserver_listen=0.0.0.0
vncserver_proxyclient_address=$HOST_IP

A.1 Create VMs using DevStack Horizon GUI

After starting the OpenDaylight controller as described in Section 6.2, run stack.sh on the controller and compute nodes.

Log in to http://<control node ip address>:8080 to start the Horizon GUI.

Verify that the node shows up in the following GUI

Create a new Vxlan network

1 Click on the Networks tab

2 Click on the Create Network button

3 Enter the Network name then click Next


4 Enter the subnet information then click Next


5 Add additional information then click Next

6 Click the Create button

7 Create a VM instance by clicking the Launch Instances button


8 Click on the Details tab to enter VM details


9 Click on the Networking tab then enter network information

VMS will now be created

Instead of disabling the OVSDB neutron service by removing the OVSDB neutron bundle file, it is possible to disable the bundle from the OSGi console. However, there does not appear to be a way to make this persistent, so it must be done each time the controller restarts.


Once the controller is up and running, connect to the OSGi console. The ss command displays all of the bundles that are installed and their status; adding a string filters the list of bundles. List the OVSDB bundles:

osgi> ss ovs
Framework is launched

id    State       Bundle
106   ACTIVE      org.opendaylight.ovsdb.northbound_0.5.0
112   ACTIVE      org.opendaylight.ovsdb_0.5.0
262   ACTIVE      org.opendaylight.ovsdb.neutron_0.5.0

Note that there are three OVSDB bundles running (the OVSDB neutron bundle has not been removed in this case)

Disable the OVSDB neutron bundle and then list the OVSDB bundles again

osgi> stop 262
osgi> ss ovs
Framework is launched

id    State       Bundle
106   ACTIVE      org.opendaylight.ovsdb.northbound_0.5.0
112   ACTIVE      org.opendaylight.ovsdb_0.5.0
262   RESOLVED    org.opendaylight.ovsdb.neutron_0.5.0

Now the OVSDB neutron bundle is in the RESOLVED state which means that it is not active


Appendix B BNG as an Appliance

Please download the latest BNG application from https://01.org/intel-data-plane-performance-demonstrators/downloads. More details about how the BNG works can be found at https://01.org/intel-data-plane-performance-demonstrators/quick-overview.


Appendix C Glossary

Acronym Description

ATR Application Targeted Routing

COTS Commercial Off‐The-Shelf

DPI Deep Packet Inspection

FCS Frame Check Sequence

GRE Generic Routing Encapsulation

GRO Generic Receive Offload

IOMMU InputOutput Memory Management Unit

Kpps Kilo packets per second

KVM Kernel-based Virtual Machine

LRO Large Receive Offload

MSI Message Signaling Interrupt

MPLS Multi-protocol Label Switching

Mpps Million packets per second

NIC Network Interface Card

pps Packets per second

QAT Quick Assist Technology

QinQ VLAN stacking (802.1ad)

RA Reference Architecture

RSC Receive Side Coalescing

RSS Receive Side Scaling

SP Service Provider

SR-IOV Single root IO Virtualization

TCO Total Cost of Ownership

TSO TCP Segmentation Offload


Appendix D References

Document Name Source

Internet Protocol version 4 http://www.ietf.org/rfc/rfc791.txt

Internet Protocol version 6 http://www.faqs.org/rfc/rfc2460.txt

Intel® 82599 10 Gigabit Ethernet Controller Datasheet http://www.intel.com/content/www/us/en/ethernet-controllers/82599-10-gbe-controller-datasheet.html

Intel DDIO https://www-ssl.intel.com/content/www/us/en/io/direct-data-i-o.html

Bandwidth Sharing Fairness http://www.intel.com/content/www/us/en/network-adapters/10-gigabit-network-adapters/10-gbe-ethernet-flexible-port-partitioning-brief.html

Design Considerations for efficient network applications with Intel® multi-core processor-based systems on Linux

http://download.intel.com/design/intarch/papers/324176.pdf

OpenFlow with Intel 82599 http://ftp.sunet.se/pub/Linux/distributions/bifrost/seminars/workshop-2011-03-31/Openflow_1103031.pdf

Wu, W., DeMar, P., & Crawford, M. (2012). A Transport-Friendly NIC for Multicore/Multiprocessor Systems.

IEEE Transactions on Parallel and Distributed Systems, vol. 23, no. 4, April 2012. http://lss.fnal.gov/archive/2010/pub/fermilab-pub-10-327-cd.pdf

Why Does Flow Director Cause Packet Reordering? http://arxiv.org/ftp/arxiv/papers/1106/1106.0443.pdf

IA packet processing http://www.intel.com/p/en_US/embedded/hwsw/technology/packet-processing

High Performance Packet Processing on Cloud Platforms using Linux with Intel® Architecture

http://networkbuilders.intel.com/docs/network_builders_RA_packet_processing.pdf

Packet Processing Performance of Virtualized Platforms with Linux and Intel® Architecture

http://networkbuilders.intel.com/docs/network_builders_RA_NFV.pdf

DPDK http://www.intel.com/go/dpdk

Intel® DPDK Accelerated vSwitch https://01.org/packet-processing


LEGAL

By using this document in addition to any agreements you have with Intel you accept the terms set forth below

You may not use or facilitate the use of this document in connection with any infringement or other legal analysis concerning Intel products described herein You agree to grant Intel a non-exclusive royalty-free license to any patent claim thereafter drafted which includes subject matter disclosed herein

INFORMATION IN THIS DOCUMENT IS PROVIDED IN CONNECTION WITH INTEL PRODUCTS NO LICENSE EXPRESS OR IMPLIED BY ESTOPPEL OR OTHERWISE TO ANY INTELLECTUAL PROPERTY RIGHTS IS GRANTED BY THIS DOCUMENT EXCEPT AS PROVIDED IN INTELS TERMS AND CONDITIONS OF SALE FOR SUCH PRODUCTS INTEL ASSUMES NO LIABILITY WHATSOEVER AND INTEL DISCLAIMS ANY EXPRESS OR IMPLIED WARRANTY RELATING TO SALE ANDOR USE OF INTEL PRODUCTS INCLUDING LIABILITY OR WARRANTIES RELATING TO FITNESS FOR A PARTICULAR PURPOSE MERCHANTABILITY OR INFRINGEMENT OF ANY PATENT COPYRIGHT OR OTHER INTELLECTUAL PROPERTY RIGHT

Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors

Performance tests such as SYSmark and MobileMark are measured using specific computer systems components software operations and functions Any change to any of those factors may cause the results to vary You should consult other information and performance tests to assist you in fully evaluating your contemplated purchases including the performance of that product when combined with other products

The products described in this document may contain design defects or errors known as errata which may cause the product to deviate from published specifications Current characterized errata are available on request Contact your local Intel sales office or your distributor to obtain the latest specifications and before placing your product order

Intel technologies may require enabled hardware specific software or services activation Check with your system manufacturer or retailer Tests document performance of components on a particular test in specific systems Differences in hardware software or configuration will affect actual performance Consult other sources of information to evaluate performance as you consider your purchase For more complete information about performance and benchmark results visit httpwwwintelcomperformance

All products computer systems dates and figures specified are preliminary based on current expectations and are subject to change without notice Results have been estimated or simulated using internal Intel analysis or architecture simulation or modeling and provided to you for informational purposes Any differences in your system hardware software or configuration may affect your actual performance

No computer system can be absolutely secure Intel does not assume any liability for lost or stolen data or systems or any damages resulting from such losses

Intel does not control or audit third-party web sites referenced in this document You should visit the referenced web site and confirm whether referenced data are accurate

Intel Corporation may have patents or pending patent applications trademarks copyrights or other intellectual property rights that relate to the presented subject matter The furnishing of documents and other materials and information does not provide any license express or implied by estoppel or otherwise to any such patents trademarks copyrights or other intellectual property rights

copy 2014 Intel Corporation All rights reserved Intel the Intel logo Core Xeon and others are trademarks of Intel Corporation in the US andor other countries Other names and brands may be claimed as the property of others


disable_service n-netdisable_service n-cpu

enable_service q-svcenable_service q-agtenable_service q-dhcpenable_service q-l3enable_service q-metaenable_service neutronenable_service horizon

LOGFILE=$DESTstackshlogSCREEN_LOGDIR=$DESTscreenSYSLOG=TrueLOGDAYS=1

ODL startQ_HOST=$HOST_IPenable_service odl-serverenable_service odl-computeODL_MGR_IP=1011107ENABLED_SERVICES+=n-apin-crtn-objn-cpun-condn-schn-novncn-cauthn-cauthnovaENABLED_SERVICES+=cinderc-apic-volc-schc-bak

Intelreg ONP Server Reference ArchitectureSolutions Guide

44

Q_PLUGIN=ml2

Q_AGENT=openvswitchQ_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitchopendaylightQ_ML2_PLUGIN_TYPE_DRIVERS=vlanflatlocalQ_ML2_TENANT_NETWORK_TYPE=vxlan

ENABLE_TENANT_TUNNELS=TrueENABLE_TENANT_VLANS=True

PHYSICAL_NETWORK=physnet1ML2_VLAN_RANGES=physnet110001010OVS_PHYSICAL_BRIDGE=br-

MULTI_HOST=True

[[post-config|$NOVA_CONF]]

disable nova security groups[DEFAULT]firewall_driver=novavirtfirewallNoopFirewallDrivernovncproxy_host=0000novncproxy_port=6080

Here is a sample localconf for Compute Node

[[local|localrc]]

FORCE=yesMULTI_HOST=True

HOST_NAME=ltname of this machinegtHOST_IP=ltip of this machinegtHOST_IP_IFACE=ltisolated interfacegtSERVICE_HOST_NAME=ltname of the controller machinegtSERVICE_HOST=ltip of controller machinegtQ_HOST=$SERVICE_HOST

MYSQL_HOST=$SERVICE_HOSTRABBIT_HOST=$SERVICE_HOST

GLANCE_HOST=$SERVICE_HOSTGLANCE_HOSTPORT=$SERVICE_HOST9292KEYSTONE_AUTH_HOST=$SERVICE_HOSTKEYSTONE_SERVICE_HOST=ltip of controller machinegt

ADMIN_PASSWORD=passwordMYSQL_PASSWORD=passwordDATABASE_PASSWORD=passwordSERVICE_PASSWORD=passwordSERVICE_TOKEN=no-token-passwordHORIZON_PASSWORD=passwordRABBIT_PASSWORD=password

disable_all_services

enable_service rabbitenable_service n-cpuenable_service q-agtenable_service odl-compute

DEST=optstackLOGFILE=$DESTstackshlogSCREEN_LOGDIR=$DESTscreenSYSLOG=TrueLOGDAYS=1

45

Intelreg ONP Server Reference ArchitectureSolutions Guide

ODL_MGR_IP=ltip of controller machinegt

Q_PLUGIN=ml2Q_AGENT=openvswitchQ_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitchopendaylightQ_ML2_PLUGIN_TYPE_DRIVERS=vxlanOVS_NUM_HUGEPAGES=8192OVS_DATAPATH_TYPE=netdev

OVDK_OVS_GIT_TAG=

ENABLE_TENANT_TUNNELS=TrueENABLE_TENANT_VLANS=True

Q_ML2_TENANT_NETWORK_TYPE=vxlanML2_VLAN_RANGES=physnet110001010PHYSICAL_NETWORK=physnet1OVS_PHYSICAL_BRIDGE=br-p1p1

[[post-config|$NOVA_CONF]][DEFAULT]firewall_driver=novavirtfirewallNoopFirewallDrivervnc_enabled=Truevncserver_listen=0000vncserver_proxyclient_address=$HOST_IP

A1 Create VMs using DevStack Horizon GUI

After starting the OpenDaylight controller as described in Section 61 run a stack on the controller and compute nodes

Login to httpltcontrol node ip addressgt8080 to start horizon gui

Verify that the node shows up in the following GUI

Create a new Vxlan network

1 Click on the Networks tab

2 Click on the Create Network button

3 Enter the Network name then click Next

Intelreg ONP Server Reference ArchitectureSolutions Guide

46

4 Enter the subnet information then click Next

47

Intelreg ONP Server Reference ArchitectureSolutions Guide

5 Add additional information then click Next

6 Click the Create button

7 Create a VM instance by clicking the Launch Instances button

Intelreg ONP Server Reference ArchitectureSolutions Guide

48

8 Click on the Details tab to enter VM details

49

Intelreg ONP Server Reference ArchitectureSolutions Guide

9 Click on the Networking tab then enter network information

VMS will now be created

Instead of disabling the OVSDB neutron service by removing the OVSDB neutron bundle file it is possible to disable the bundle from the OSGi console However there does not appear to be a way to make this persistent so it must be done each time the controller restarts

Intelreg ONP Server Reference ArchitectureSolutions Guide

50

Once the controller is up and running connect to the OSGi console The ss command displays all of the bundles that are installed and their status Adding a string(s) filters the list of bundles List the OVSDB bundles

osgigt ss ovsFramework is launched

id State Bundle106 ACTIVE orgopendaylightovsdbnorthbound_050112 ACTIVE orgopendaylightovsdb_050262 ACTIVE orgopendaylightovsdbneutron_050

Note that there are three OVSDB bundles running (the OVSDB neutron bundle has not been removed in this case)

Disable the OVSDB neutron bundle and then list the OVSDB bundles again

osgigt stop 262osgigt ss ovsFramework is launched

id State Bundle106 ACTIVE orgopendaylightovsdbnorthbound_050112 ACTIVE orgopendaylightovsdb_050262 RESOLVED orgopendaylightovsdbneutron_050

Now the OVSDB neutron bundle is in the RESOLVED state which means that it is not active

51

Intelreg ONP Server Reference ArchitectureSolutions Guide

Appendix B BNG as an Appliance

Please download the latest BNG application from https01orgintel-data-plane-performance-demonstratorsdownloads More details about how BNG works can be found in https01orgintel-data-plane-performance-demonstratorsquick-overview

Intelreg ONP Server Reference ArchitectureSolutions Guide

52

NOTE This page intentionally left blank

53

Intelreg ONP Server Reference ArchitectureSolutions Guide

Appendix C Glossary

Acronym Description

ATR Application Targeted Routing

COTS Commercial Off‐The-Shelf

DPI Deep Packet Inspection

FCS Frame Check Sequence

GRE Generic Routing Encapsulation

GRO Generic Receive Offload

IOMMU InputOutput Memory Management Unit

Kpps Kilo packets per seconds

KVM Kernel-based Virtual Machine

LRO Large Receive Offload

MSI Message Signaling Interrupt

MPLS Multi-protocol Label Switching

Mpps Millions packets per seconds

NIC Network Interface Card

pps Packets per seconds

QAT Quick Assist Technology

QinQ VLAN stacking (8021ad)

RA Reference Architecture

RSC Receive Side Coalescing

RSS Receive Side Scaling

SP Service Provider

SR-IOV Single root IO Virtualization

TCO Total Cost of Ownership

TSO TCP Segmentation Offload

Intelreg ONP Server Reference ArchitectureSolutions Guide

54

NOTE This page intentionally left blank

55

Intelreg ONP Server Reference ArchitectureSolutions Guide

Appendix D References

Document Name Source

Internet Protocol version 4 httpwwwietforgrfcrfc791txt

Internet Protocol version 6 httpwwwfaqsorgrfcrfc2460txt

Intelreg 82599 10 Gigabit Ethernet Controller Datasheet httpwwwintelcomcontentwwwusenethernet-controllers82599-10-gbe-controller-datasheethtml

Intel DDIO httpswww-sslintelcomcontentwwwuseniodirect-data-i-ohtml

Bandwidth Sharing Fairness httpwwwintelcomcontentwwwusennetwork-adapters10-gigabit- network-adapters10-gbe-ethernet-flexible-port-partitioning-briefhtml

Design Considerations for efficient network applications with Intelreg multi-core processor- based systems on Linux

httpdownloadintelcomdesignintarchpapers324176pdf

OpenFlow with Intel 82599 httpftpsunetsepubLinuxdistributionsbifrostseminarsworkshop-2011-03-31Openflow_1103031pdf

Wu W DeMarP amp CrawfordM (2012) A Transport-Friendly NIC for Multicore Multiprocessor Systems

IEEE transactions on parallel and distributed systems vol 23 no 4 April 2012 httplssfnalgovarchive2010pubfermilab-pub-10-327-cdpdf

Why does Flow Director Cause Placket Reordering httparxivorgftparxivpapers110611060443pdf

IA packet processing httpwwwintelcompen_USembeddedhwswtechnologypacket- processing

High Performance Packet Processing on Cloud Platforms using Linux with Intelreg Architecture

httpnetworkbuildersintelcomdocsnetwork_builders_RA_packet_processingpdf

Packet Processing Performance of Virtualized Platforms with Linux and Intelreg Architecture

httpnetworkbuildersintelcomdocsnetwork_builders_RA_NFVpdf

DPDK httpwwwintelcomgodpdk

Intelreg DPDK Accelerated vSwitch https01orgpacket-processing

Intelreg ONP Server Reference ArchitectureSolutions Guide

56

LEGAL

By using this document in addition to any agreements you have with Intel you accept the terms set forth below

You may not use or facilitate the use of this document in connection with any infringement or other legal analysis concerning Intel products described herein You agree to grant Intel a non-exclusive royalty-free license to any patent claim thereafter drafted which includes subject matter disclosed herein

INFORMATION IN THIS DOCUMENT IS PROVIDED IN CONNECTION WITH INTEL PRODUCTS NO LICENSE EXPRESS OR IMPLIED BY ESTOPPEL OR OTHERWISE TO ANY INTELLECTUAL PROPERTY RIGHTS IS GRANTED BY THIS DOCUMENT EXCEPT AS PROVIDED IN INTELS TERMS AND CONDITIONS OF SALE FOR SUCH PRODUCTS INTEL ASSUMES NO LIABILITY WHATSOEVER AND INTEL DISCLAIMS ANY EXPRESS OR IMPLIED WARRANTY RELATING TO SALE ANDOR USE OF INTEL PRODUCTS INCLUDING LIABILITY OR WARRANTIES RELATING TO FITNESS FOR A PARTICULAR PURPOSE MERCHANTABILITY OR INFRINGEMENT OF ANY PATENT COPYRIGHT OR OTHER INTELLECTUAL PROPERTY RIGHT

Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors

Performance tests such as SYSmark and MobileMark are measured using specific computer systems components software operations and functions Any change to any of those factors may cause the results to vary You should consult other information and performance tests to assist you in fully evaluating your contemplated purchases including the performance of that product when combined with other products

The products described in this document may contain design defects or errors known as errata which may cause the product to deviate from published specifications Current characterized errata are available on request Contact your local Intel sales office or your distributor to obtain the latest specifications and before placing your product order

Intel technologies may require enabled hardware specific software or services activation Check with your system manufacturer or retailer Tests document performance of components on a particular test in specific systems Differences in hardware software or configuration will affect actual performance Consult other sources of information to evaluate performance as you consider your purchase For more complete information about performance and benchmark results visit httpwwwintelcomperformance

All products computer systems dates and figures specified are preliminary based on current expectations and are subject to change without notice Results have been estimated or simulated using internal Intel analysis or architecture simulation or modeling and provided to you for informational purposes Any differences in your system hardware software or configuration may affect your actual performance

No computer system can be absolutely secure Intel does not assume any liability for lost or stolen data or systems or any damages resulting from such losses

Intel does not control or audit third-party web sites referenced in this document You should visit the referenced web site and confirm whether referenced data are accurate

Intel Corporation may have patents or pending patent applications trademarks copyrights or other intellectual property rights that relate to the presented subject matter The furnishing of documents and other materials and information does not provide any license express or implied by estoppel or otherwise to any such patents trademarks copyrights or other intellectual property rights

copy 2014 Intel Corporation All rights reserved Intel the Intel logo Core Xeon and others are trademarks of Intel Corporation in the US andor other countries Other names and brands may be claimed as the property of others

  • Intelreg Open Network Platform Server Reference Architecture (Release 12)
    • Revision History
    • Contents
    • 10 Audience and Purpose
    • 20 Summary
      • 21 Network Services Examples
        • 211 Suricata (Next Generation IDSIPS engine)
        • 212 vBNG (Broadband Network Gateway)
            • 30 Hardware Components
            • 40 Software Versions
              • 41 Obtaining Software Ingredients
                • 50 Installation and Configuration Guide
                  • 51 Instructions Common to Compute and Controller Nodes
                    • 511 BIOS Settings
                    • 512 Operating System Installation and Configuration
                      • 5121 Getting the Fedora 20 DVD
                      • 5122 Fedora 20 Installation
                      • 5123 Additional Packages Installation and Upgrade
                      • 5124 Disable and Enable Services
                          • 52 Controller Node Setup
                            • 521 OpenStack (Juno)
                              • 5211 Network Requirements
                              • 5212 Storage Requirements
                              • 5213 OpenStack Installation Procedures
                                  • 53 Compute Node Setup
                                    • 531 Host Configuration
                                      • 5311 Using DevStack to Deploy vSwitch and OpenStack Components
                                          • 54 vIPS
                                            • 541 Network Configuration for non-vIPS Guests
                                                • 60 Testing the Setup
                                                  • 61 Preparation with OpenStack
                                                    • 611 Deploying Virtual Machines
                                                      • 6111 Default Settings
                                                      • 6112 Customer Settings
                                                      • 6113 Example mdash VM Deployment
                                                      • 6114 Local vIPS
                                                      • 6115 Remote vIPS
                                                        • 612 Non-uniform Memory Access (Numa) Placement and SR-IOV Pass-through for OpenStack
                                                          • 6121 Prepare Compute Node for SR-IOV Pass-through
                                                          • 6122 Devstack Configurations
                                                          • 6123 Create VM with Numa Placement and SR-IOV
                                                              • 62 Using OpenDaylight
                                                                • 621 Preparing the OpenDaylightController
                                                                  • 63 Border Network Gateway
                                                                    • 631 Installation and Configuration Inside the VM
                                                                    • 632 Installation and Configuration of the Back-to-Back Host (Packet Generator)
                                                                    • 633 Extra Preparations on the Compute Node
                                                                        • Appendix A Additional OpenDaylight Information
                                                                          • A1 Create VMs using DevStack Horizon GUI
                                                                            • Appendix B BNG as an Appliance
                                                                            • Appendix C Glossary
                                                                            • Appendix D References
                                                                            • LEGAL
Page 29: Intel Open Network Platform Server Reference Architecture ... · Intel® ONP Server Reference Architecture Solutions Guide 1.0 Audience and Purpose The primary audiences for this


6113 Example - VM Deployment

The following example describes how to use a customer VM image, flavor, and aggregate to launch a VM for a demo tenant using OpenStack commands. Again, the example assumes the IP address of the controller is 10.11.12.1.

1 Create a credential file demo-cred for a demo user. The file contains the following lines:

export OS_USERNAME=demo
export OS_TENANT_NAME=demo
export OS_PASSWORD=password
export OS_AUTH_URL=http://10.11.12.1:35357/v2.0

2 Source demo-cred into the shell environment before creating the tenant network and instance (VM):

source demo-cred
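A quick way to confirm the credentials work (this check is an addition, not part of the original procedure) is to request a token with the same environment:

keystone token-get

If a token is returned, the demo user can authenticate against the controller.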

3 Create network for tenant demo Take the following steps

a Get tenant demo

keystone tenant-list | grep -Fw demo

The following creates a network with a name of net-demo for tenant with ID 10618268adb64f17b266fd8fb83c960d

neutron net-create --tenant-id 10618268adb64f17b266fd8fb83c960d net-demo

b Create subnet

neutron subnet-create --tenant-id <demo-tenant-id> --name <subnet_name> <network-name> <net-ip-range>

The following creates a subnet with a name of sub-demo and CIDR address 192.168.2.0/24 for network net-demo:

neutron subnet-create --tenant-id 10618268adb64f17b266fd8fb83c960d --name sub-demo net-demo 192.168.2.0/24

4 Create instance (VM) for tenant demo Take the following steps

a Get the name and/or ID of the image, flavor, and availability zone to be used for creating the instance:

glance image-list
nova flavor-list
nova aggregate-list
neutron net-list

b Launch an instance (VM) using information obtained from the previous step:

nova boot --image <image-id> --flavor <flavor-id> --availability-zone <zone-name> --nic net-id=<network-id> <instance-name>

c The new VM should be up and running in a few minutes
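For illustration, a filled-in version of the boot command from step b might look like the following; the image and zone names match those used elsewhere in this guide (Section 6112), while the flavor and instance names here are only placeholders:

nova boot --image fedora-basic --flavor m1.small --availability-zone zone-04 --nic net-id=<net-demo-id> demo-vm1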

5 Log into the OpenStack dashboard using the demo user credentials. Click Instances under Project in the left pane; the new VM should show in the right pane. Click the instance name to open the Instance Details view, then click Console in the top menu to access the VM.


6114 Local vIPS

Configuration

1 OpenStack brings up the VMs and connects them to the vSwitch

2 IP addresses of the VMs get configured using the DHCP server. VM1 belongs to one subnet and VM3 to a different one; VM2 has ports on both subnets.

3 Flows get programmed to the vSwitch by the OpenDaylight controller (Section 62)

Data Path (Numbers Matching Red Circles)

1 VM1 sends a flow to VM3 through the vSwitch

2 The vSwitch forwards the flow to the first vPort of VM2 (active IPS)

3 The IPS receives the flow inspects it and (if not malicious) sends it out through its second vPort

4 The vSwitch forwards it to VM3

Figure 6-1 Local vIPS
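To see the path described above reflected in the vSwitch, the flows pushed by the controller can be inspected on the compute node (a generic check, assuming the integration bridge is br-int; this is not a step from the original procedure):

ovs-ofctl dump-flows br-int

Each hop in the data path should show up as a flow entry whose packet counters increase while traffic is flowing.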


6115 Remote vIPS

Configuration

1 OpenStack brings up the VMs and connects them to the vSwitch

2 The IP addresses of the VMs get configured using the DHCP server

Data Path (Numbers Matching Red Circles)

1 VM1 sends a flow to VM3 through the vSwitch inside compute node 1

2 The vSwitch forwards the flow out of the first 82599 port to the first 82599 port of compute node 2

3 The vSwitch of compute node 2 forwards the flow to the first port of the vHost where the traffic gets consumed by VM1

4 The IPS receives the flow inspects it and (provided it is not malicious) sends it out through its second port of the vHost into the vSwitch of compute node 2

5 The vSwitch forwards the flow out of the second 82599 port of compute node 2 into the second port of the 82599 in compute node 1

6 The vSwitch of compute node 1 forwards the flow into the port of the vHost of VM3 where the flow gets terminated

Figure 6-2 Remote vIPS


612 Non-uniform Memory Access (Numa) Placement and SR-IOV Pass-through for OpenStack

NUMA placement was introduced as a new feature in the OpenStack Juno release. It enables an OpenStack administrator to pin guests to particular NUMA nodes for optimization. With an SR-IOV enabled network interface card, each SR-IOV port is associated with a Virtual Function (VF); OpenStack SR-IOV pass-through gives a guest direct access to a VF.

6121 Prepare Compute Node for SR-IOV Pass-through

To enable the previous features, follow these steps to configure the compute node:

1 The server hardware must support IOMMU (Intel VT-d). To check whether IOMMU is supported, run the following command; the output should show IOMMU entries:

dmesg | grep -e IOMMU

Note IOMMU can be enabled/disabled through a BIOS setting under Advanced and then Processor

2 Enable kernel IOMMU in grub. For Fedora 20, run the following commands:

sed -i 's/rhgb quiet/rhgb quiet intel_iommu=on/g' /etc/default/grub
grub2-mkconfig -o /boot/grub2/grub.cfg
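After rebuilding grub and rebooting, one way to confirm the kernel picked up the new parameter (a generic check, not part of the original steps):

grep intel_iommu /proc/cmdline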

3 Install necessary packages

yum install -y yajl-devel device-mapper-devel libpciaccess-devel libnl-devel dbus-devel numactl-devel python-devel

4 Install libvirt v1.2.8 or newer. The following example uses v1.2.9:

systemctl stop libvirtd

yum remove libvirt
yum remove libvirtd

wget http://libvirt.org/sources/libvirt-1.2.9.tar.gz
tar zxvf libvirt-1.2.9.tar.gz

cd libvirt-1.2.9
./autogen.sh --system --with-dbus
make
make install

systemctl start libvirtd

Make sure libvirtd is running v1.2.9:

libvirtd --version

5 Install libvirt-python. The example below uses v1.2.9 to match the libvirt version:

yum remove libvirt-python

wget https://pypi.python.org/packages/source/l/libvirt-python/libvirt-python-1.2.9.tar.gz
tar zxvf libvirt-python-1.2.9.tar.gz

cd libvirt-python-1.2.9
python setup.py install


6 Modify /etc/libvirt/qemu.conf: add

/dev/vfio/vfio

to the

cgroup_device_acl list

An example follows:

cgroup_device_acl = [
    "/dev/null", "/dev/full", "/dev/zero",
    "/dev/random", "/dev/urandom",
    "/dev/ptmx", "/dev/kvm", "/dev/kqemu",
    "/dev/rtc", "/dev/hpet", "/dev/net/tun",
    "/dev/vfio/vfio"
]

7 Enable the SR-IOV virtual functions for an 82599 interface. The following example enables 2 VFs for interface p1p1:

echo 2 > /sys/class/net/p1p1/device/sriov_numvfs

To check that virtual functions are enabled

lspci -nn | grep 82599

The screen output should display the physical function and two virtual functions
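Note The sriov_numvfs value does not persist across reboots. If the VFs should be recreated automatically, one simple option (an addition to the original procedure, not part of it) is to append the echo command to /etc/rc.d/rc.local and make that file executable:

echo "echo 2 > /sys/class/net/p1p1/device/sriov_numvfs" >> /etc/rc.d/rc.local
chmod +x /etc/rc.d/rc.local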

6122 Devstack Configurations

In the following text, the example uses a controller with IP address 10.11.12.1 and a compute node with 10.11.12.4. The PCI device vendor ID (8086) and product IDs of the 82599 can be obtained from the output of the command below (10fb for the physical function and 10ed for the VF):

lspci -nn | grep 82599

On Controller node

1 Edit the controller local.conf. Note that the same local.conf file of Section 5213 is used here, adding the following:

Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch,sriovnicswitch

[[post-config|$NOVA_CONF]]
[DEFAULT]
scheduler_default_filters=RamFilter,ComputeFilter,AvailabilityZoneFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,PciPassthroughFilter,NUMATopologyFilter
pci_alias={"name":"niantic","product_id":"10ed","vendor_id":"8086"}

[[post-config|$Q_PLUGIN_CONF_FILE]]
[ml2_sriov]
supported_pci_vendor_devs = 8086:10fb 8086:10ed

2 Run ./stack.sh

On Compute node

1 Edit /opt/stack/nova/requirements.txt to add "libvirt-python>=1.2.8":

echo "libvirt-python>=1.2.8" >> /opt/stack/nova/requirements.txt

2 Edit the compute local.conf for accelerated OVS. Note that the same local.conf file of Section 5311 is used here


3 Add the following:

[[post-config|$NOVA_CONF]]
[DEFAULT]
pci_passthrough_whitelist={"address":"0000:08:00.0","vendor_id":"8086","physical_network":"physnet1"}

pci_passthrough_whitelist={"address":"0000:08:10.0","vendor_id":"8086","physical_network":"physnet1"}

pci_passthrough_whitelist={"address":"0000:08:10.2","vendor_id":"8086","physical_network":"physnet1"}

4 Remove (or comment out) the following. Note that currently SR-IOV pass-through is only supported with a standard OVS:

OVS_NUM_HUGEPAGES=8192
OVS_DATAPATH_TYPE=netdev
OVS_GIT_TAG=b35839f3855e3b812709c6ad1c9278f498aa9935

Run ./stack.sh on both the controller and compute nodes to complete the DevStack installation.

6123 Create VM with Numa Placement and SR-IOV

1 After stacking is successful on both the controller and compute nodes, verify the PCI pass-through device(s) are in the OpenStack database:

mysql -uroot -ppassword -h 10.11.12.1 nova -e 'select * from pci_devices'

2 The output should show entry(ies) of PCI device(s) similar to the following

| 2014-11-18 194114 | NULL | NULL | 0 | 1 | 3 | 000008100 | 10ed | 8086 | type-VF | pci_0000_08_10_0 | label_8086_10ed | available | phys_function 000008000 | NULL | NULL | 0 |

3 Next, create a flavor, for example:

nova flavor-create numa-flavor 1001 1024 4 1

where

flavor name = numa-flavor
id = 1001
virtual memory = 1024 MB
virtual disk size = 4 GB
number of virtual CPUs = 1

4 Modify the flavor for NUMA placement with PCI pass-through:

nova flavor-key 1001 set pci_passthrough:alias=niantic:1 hw:numa_nodes=1 hw:numa_cpus.0=0 hw:numa_mem.0=1024

5 To show detailed information of the flavor

nova flavor-show 1001

6 Create a VM numa-vm1 with the flavor numa-flavor under the default project demo. Note that the following example assumes an image fedora-basic and an availability zone zone-04 are already in place (see Section 6112) and that private is the default network for the demo project:

nova boot --image <image-id> --flavor <flavor-id> --availability-zone <zone-name> --nic net-id=<network-id> numa-vm1

where numa-vm1 is the name of the VM instance to be booted


Access the VM from OpenStack Horizon; the new VM shows two virtual network interfaces. The interface backed by an SR-IOV VF should show a name of ensX, where X is a number, for example ens5. If a DHCP server is available for the physical interface (p1p1 in this example), the VF gets an IP address automatically; otherwise users can assign an IP address to the interface the same way as for a standard network interface.

To verify network connectivity through a VF, users can set up two compute hosts and create a VM on each node. After obtaining IP addresses, the VMs should communicate with each other as over a normal network.
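A minimal check from inside one of the VMs might look like the following; the interface name and the peer address are placeholders:

ip addr show ens5
ping -c 4 <IP address of the VM on the other compute node>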

62 Using OpenDaylight

This section describes how to download, install, and set up an OpenDaylight controller.

621 Preparing the OpenDaylight Controller

1 Download the pre-built OpenDaylight Helium-SR1 distribution:

wget http://nexus.opendaylight.org/content/repositories/opendaylight.release/org/opendaylight/integration/distribution-karaf/0.2.1-Helium-SR1/distribution-karaf-0.2.1-Helium-SR1.tar.gz

2 Extract the archive and cd into it:

tar xf distribution-karaf-0.2.1-Helium-SR1.tar.gz
cd distribution-karaf-0.2.1-Helium-SR1

3 Use the ./bin/karaf executable to start the Karaf shell


4 Install the required features

Karaf might take a long time to start, or the feature install might fail if the host does not have network access. You'll need to set up the appropriate proxy settings in that case.
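The original text does not list the exact feature set here. As a sketch, the OVSDB-based OpenStack integration used in this guide is typically enabled from the Karaf shell with something like the following; the feature names are an assumption for Helium-SR1 and should be verified against the output of feature:list:

feature:install odl-restconf odl-mdsal-apidocs odl-ovsdb-openstack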

63 Border Network Gateway

This section describes how to install and run a Border Network Gateway (BNG) on a compute node that is prepared as described in Section 51 and Section 53. The example interface names from those sections have been kept here too. For simplicity, the BNG uses the handle_none configuration mode, which makes it work as an L2 forwarding engine. The BNG is more complex than this; users interested in exploring more of its capabilities should read https://01.org/intel-data-plane-performance-demonstrators/quick-overview

The setup to test the functionality of the vBNG follows


631 Installation and Configuration Inside the VM

1 Execute the following command:

yum -y update

2 Disable SELinux

setenforce 0
vi /etc/selinux/config

And change the setting to SELINUX=disabled

3 Disable the firewall:

systemctl disable firewalld.service
reboot

4 Edit grub default configuration

vi /etc/default/grub

Add hugepages to it:

… noirqbalance intel_idle.max_cstate=0 processor.max_cstate=0 ipv6.disable=1 default_hugepagesz=1G hugepagesz=1G hugepages=2 isolcpus=1,2,3,4

5 Rebuild grub config and reboot the system:

grub2-mkconfig -o /boot/grub2/grub.cfg
reboot

6 Verify that hugepages are available in the VM:

cat /proc/meminfo
HugePages_Total: 2
HugePages_Free: 2
Hugepagesize: 1048576 kB

7 Add the following to the end of the ~/.bashrc file:

---------------------------------------------
export RTE_SDK=/root/dpdk
export RTE_TARGET=x86_64-native-linuxapp-gcc
export OVS_DIR=/root/ovs

export RTE_UNBIND=$RTE_SDK/tools/dpdk_nic_bind.py
export DPDK_DIR=$RTE_SDK
export DPDK_BUILD=$DPDK_DIR/$RTE_TARGET
---------------------------------------------

8 Re-login or source that file:

. ~/.bashrc

9 Install DPDK:

git clone http://dpdk.org/git/dpdk
cd dpdk
git checkout v1.7.1
make install T=$RTE_TARGET
modprobe uio
insmod $RTE_SDK/$RTE_TARGET/kmod/igb_uio.ko
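A quick way to confirm the uio modules loaded before continuing (not part of the original steps):

lsmod | grep uio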


10 Check the PCI addresses of the 82599 cards

lspci | grep Network
00:04.0 Ethernet controller: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection (rev 01)
00:05.0 Ethernet controller: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection (rev 01)
00:06.0 Ethernet controller: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection (rev 01)
00:07.0 Ethernet controller: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection (rev 01)

11 Make sure that the correct PCI addresses are listed in the script bind_to_igb_uio.sh
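If that script is not available, the ports the BNG will use can be bound directly with the DPDK binding tool, using addresses from step 10; the two addresses below are only an example:

$RTE_SDK/tools/dpdk_nic_bind.py --bind=igb_uio 00:04.0 00:05.0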

12 Download BNG packages

wget https://01.org/sites/default/files/downloads/intel-data-plane-performance-demonstrators/dppd-bng-v013.zip

13 Extract DPPD BNG sources:

unzip dppd-bng-v013.zip

14 Build the BNG DPPD application:

yum -y install ncurses-devel
cd dppd-BNG-v013
make

15 Refer to Section 633, "Extra Preparations on the Compute Node," before running the BNG application in the VM inside the compute node

16 Make sure that the application starts:

./build/dppd -f config/handle_none.cfg

The handle_none configuration should pass all traffic straight through between ports, which is essentially similar to the L2 forwarding test. The config directory contains additional, more complex BNG configurations and Pktgen scripts. Additional BNG-specific workloads can be found in the dppd-BNG-v013/pktgen-scripts directory.

Following is a sample graphic of the BNG running in a VM with 2 ports


Exit the application by pressing ESC or CTRL-C

Refer to Section 632 regarding installing and running the software traffic generator.

For the sanity check test, users can use the pktgen wrapper script onps_pktgen-64bytes-UDP-2ports.sh for running Pktgen (on its dedicated server) in order to test the handle-none throughput for two physical and two virtual ports. You'll need to update the PKTGEN_DIR variable at the top of the file to point to the right directory, which is the following (referring to Section 632):

PKTGEN_DIR=homestackgitPktgen-DPDKpktgen-64bytessh

632 Installation and Configuration of the Back-to-Back Host (Packet Generator)

The back-to-back host can be any Intel® Xeon® processor-based system, or it can be any compute node that has been prepared using the instructions in Section 51 and Section 53. For simplicity, Intel assumes the latter is the case. Also assume that the git directory for the stack user is /home/stack/git.

1 In the git directory, get the source from GitHub:

git clone https://github.com/Pktgen/Pktgen-DPDK.git
cd Pktgen-DPDK

2 An extra package must be installed for Pktgen to compile correctly:

yum -y install libpcap-devel

Pktgen comes with its own distribution of the DPDK sources. This bundled version of DPDK must be used; note that it contains some Wind River specific helper libraries, which Pktgen depends on, that are not in the default DPDK distribution.

3 The $RTE_TARGET variable must be set to a specific value; otherwise these libraries will not build:

cd
vi .bashrc

Add the following three lines to the end:

export RTE_SDK=$HOME/Pktgen-DPDK/dpdk
export RTE_TARGET=x86_64-pktgen-linuxapp-gcc
export PKTGEN_DIR=$HOME/Pktgen-DPDK

4 Re-login or execute the following command:

. ~/.bashrc

5 Build the basic DPDK libraries and extra helpers

cd $RTE_SDK
make install T=$RTE_TARGET

6 Build Pktgen:

cd examples/pktgen
make

7 Adapt the dpdk_nic_bind.py invocation according to the actual NICs in use, so that both interfaces are bound to igb_uio and DPDK can use them. Check the current binding status with the command that follows:

./tools/dpdk_nic_bind.py --status
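Binding itself is then done with the same tool; the PCI addresses below are placeholders for the two ports connected back-to-back to the compute node:

./tools/dpdk_nic_bind.py --bind=igb_uio <PCI address of port 1> <PCI address of port 2>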

8 Use onps_pktgen-64-bytes-UDP-2ports.sh from onps_server_1_2.tar.gz


9 Now run the script as root, after the compute node has been set up as in Section 633, the BNG VM has been prepared as in Section 631, and the BNG has been started inside the VM.

633 Extra Preparations on the Compute Node

1 Do the following as the stack user:

cd /home/stack/devstack
vi local.conf

2 Comment out the following

PHYSICAL_NETWORK=physnet1
OVS_PHYSICAL_BRIDGE=br-p1p1

And at the same time add the following line right below the previously commented ones:

OVS_BRIDGE_MAPPINGS=default:br-p1p1,physnet1:br-p1p2

3 Run again as the stack user:

./unstack.sh

./stack.sh

This causes both physical interfaces to come up and get bound to DPDK. Also, a bridge is created on top of each of these interfaces:

ovs-vsctl show
b52bd3ed-0f6c-45b9-ace1-846d901bed64
    Bridge br-p1p1
        Port br-p1p1
            Interface br-p1p1
                type: internal
        Port p1p1
            Interface p1p1
                type: dpdkphy
                options: port=0
        Port phy-br-p1p1
            Interface phy-br-p1p1
                type: patch
                options: peer=int-br-p1p1
    Bridge br-int
        fail_mode: secure
        Port int-br-p1p2
            Interface int-br-p1p2
                type: patch
                options: peer=phy-br-p1p2
        Port int-br-p1p1
            Interface int-br-p1p1
                type: patch
                options: peer=phy-br-p1p1
        Port br-int
            Interface br-int
                type: internal
    Bridge br-p1p2
        Port phy-br-p1p2
            Interface phy-br-p1p2
                type: patch
                options: peer=int-br-p1p2
        Port p1p2
            Interface p1p2
                type: dpdkphy
                options: port=1
        Port br-p1p2
            Interface br-p1p2
                type: internal

4 Move the p1p2 physical port under the same bridge as p1p1

ovs-vsctl del-port p1p2
ovs-vsctl add-port br-p1p1 p1p2 -- set Interface p1p2 type=dpdkphy options:port=1

5 Delete the OpenStack agent:

./rejoin-stack.sh
ctrl-a 1
ctrl-c
ctrl-a d

6 Add the dpdkvhost interfaces for the VM

ovs-vsctl --no-wait add-port br-p1p1 port3 -- set Interface port3 type=dpdkvhost ofport_request=3
ovs-vsctl --no-wait add-port br-p1p1 port4 -- set Interface port4 type=dpdkvhost ofport_request=4

7 Find out the OpenFlow port numbers of the interfaces on the bridge:

ovs-ofctl show br-p1p1

The output should be similar to the following. Note the number on the left of each interface, because it is the OpenFlow port number used in the flow rules below:

OFPT_FEATURES_REPLY (xid=0x2): dpid:0000286031010000
n_tables:254, n_buffers:256
capabilities: FLOW_STATS TABLE_STATS QUEUE_STATS ARP_MATCH_IP
actions: OUTPUT SET_VLAN_VID SET_VLAN_PCP STRIP_VLAN SET_DL_SRC SET_DL_DST SET_NW_SRC SET_NW_DST SET_TP_SRC SET_TP_DST
 1(phy-br-p1p1): addr:9e:ae:92:25:3c:c1 config: 0 state: 0 speed: 0 Mbps now, 0 Mbps max
 2(p1p2): addr:9e:ae:92:25:3c:c1 config: 0 state: 0 speed: 0 Mbps now, 0 Mbps max
 3(port3): addr:9e:ae:92:25:3c:c1 config: 0 state: 0 speed: 0 Mbps now, 0 Mbps max
 4(port4): addr:49:04:ff:7f:00:00 config: 0 state: 0 speed: 0 Mbps now, 0 Mbps max
 16(p1p1): addr:49:04:ff:7f:00:00 config: 0 state: 0 speed: 0 Mbps now, 0 Mbps max
 LOCAL(br-p1p1): addr:9e:ae:92:25:3c:c1 config: 0 state: 0 speed: 0 Mbps now, 0 Mbps max
OFPT_GET_CONFIG_REPLY (xid=0x4): frags=normal miss_send_len=0

8 Clean up the flow table of the bridge

ovs-ofctl del-flows br-p1p1

9 Program the flows so that each physical interface forwards packets to a dpdkvhost interface and the other way round:

ovs-ofctl add-flow br-p1p1 in_port=16,dl_type=0x0800,idle_timeout=0,action=output:3
ovs-ofctl add-flow br-p1p1 in_port=3,dl_type=0x0800,idle_timeout=0,action=output:16
ovs-ofctl add-flow br-p1p1 in_port=4,dl_type=0x0800,idle_timeout=0,action=output:2
ovs-ofctl add-flow br-p1p1 in_port=2,dl_type=0x0800,idle_timeout=0,action=output:4
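To confirm the four rules are installed, and later to watch their packet counters while the traffic generator is running, the flow table can be dumped; this check is an addition to the original steps:

ovs-ofctl dump-flows br-p1p1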


10 Users can now spawn their vBNG

qemu-kvm -cpu host -enable-kvm -m 4096 -smp 4,cores=4,threads=1,sockets=1 -name VM1 \
 -hda <path to the VM image file> -mem-path /dev/hugepages -mem-prealloc -vnc :2 -daemonize \
 -net nic,model=virtio,macaddr=00:1e:77:68:09:fd \
 -net tap,ifname=tap1,script=no,downscript=no \
 -netdev type=tap,id=net1,script=no,downscript=no,ifname=port3,vhost=on \
 -device virtio-net-pci,netdev=net1,mac=00:00:01:00:00:01,csum=off,gso=off,guest_tso4=off,guest_tso6=off,guest_ecn=off \
 -netdev type=tap,id=net2,script=no,downscript=no,ifname=port4,vhost=on \
 -device virtio-net-pci,netdev=net2,mac=00:00:01:00:00:02,csum=off,gso=off,guest_tso4=off,guest_tso6=off,guest_ecn=off
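The -vnc :2 option in the command above exposes the guest console on VNC display 2 of the compute node (TCP port 5902), so the vBNG screen can also be reached with any VNC client, for example (an access hint added here, not in the original text):

vncviewer <compute node IP>:5902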


Appendix A Additional OpenDaylight Information

This section describes how OpenDaylight can be used in a multi-node setup. Two hosts are used: one running OpenDaylight, OpenStack Controller + Compute, and OVS; the second host is a compute node. This section describes how to create a Vxlan tunnel, create VMs, and ping from one VM to another.

Note Due to a known defect in ODL (https://bugs.opendaylight.org/show_bug.cgi?id=2469), the multi-node setup could not be verified

Following is a sample local.conf for the OpenDaylight host:

[[local|localrc]]
FORCE=yes

HOST_NAME=<name of this machine>
HOST_IP=<ip of this machine>
HOST_IP_IFACE=<mgmt ip isolated from internet>

PUBLIC_INTERFACE=<isolated IP could be same as HOST_IP_IFACE>
VLAN_INTERFACE=
FLAT_INTERFACE=

ADMIN_PASSWORD=password
MYSQL_PASSWORD=password
DATABASE_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=no-token-password
HORIZON_PASSWORD=password
RABBIT_PASSWORD=password

disable_service n-net
disable_service n-cpu

enable_service q-svc
enable_service q-agt
enable_service q-dhcp
enable_service q-l3
enable_service q-meta
enable_service neutron
enable_service horizon

LOGFILE=$DEST/stack.sh.log
SCREEN_LOGDIR=$DEST/screen
SYSLOG=True
LOGDAYS=1

# ODL start
Q_HOST=$HOST_IP
enable_service odl-server
enable_service odl-compute
ODL_MGR_IP=10.11.10.7
ENABLED_SERVICES+=,n-api,n-crt,n-obj,n-cpu,n-cond,n-sch,n-novnc,n-cauth,n-cauth,nova
ENABLED_SERVICES+=,cinder,c-api,c-vol,c-sch,c-bak


Q_PLUGIN=ml2

Q_AGENT=openvswitch
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch,opendaylight
Q_ML2_PLUGIN_TYPE_DRIVERS=vlan,flat,local
Q_ML2_TENANT_NETWORK_TYPE=vxlan

ENABLE_TENANT_TUNNELS=True
ENABLE_TENANT_VLANS=True

PHYSICAL_NETWORK=physnet1
ML2_VLAN_RANGES=physnet1:1000:1010
OVS_PHYSICAL_BRIDGE=br-

MULTI_HOST=True

[[post-config|$NOVA_CONF]]

# disable nova security groups
[DEFAULT]
firewall_driver=nova.virt.firewall.NoopFirewallDriver
novncproxy_host=0.0.0.0
novncproxy_port=6080

Here is a sample localconf for Compute Node

[[local|localrc]]

FORCE=yes
MULTI_HOST=True

HOST_NAME=<name of this machine>
HOST_IP=<ip of this machine>
HOST_IP_IFACE=<isolated interface>
SERVICE_HOST_NAME=<name of the controller machine>
SERVICE_HOST=<ip of controller machine>
Q_HOST=$SERVICE_HOST

MYSQL_HOST=$SERVICE_HOST
RABBIT_HOST=$SERVICE_HOST

GLANCE_HOST=$SERVICE_HOST
GLANCE_HOSTPORT=$SERVICE_HOST:9292
KEYSTONE_AUTH_HOST=$SERVICE_HOST
KEYSTONE_SERVICE_HOST=<ip of controller machine>

ADMIN_PASSWORD=password
MYSQL_PASSWORD=password
DATABASE_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=no-token-password
HORIZON_PASSWORD=password
RABBIT_PASSWORD=password

disable_all_services

enable_service rabbit
enable_service n-cpu
enable_service q-agt
enable_service odl-compute

DEST=/opt/stack
LOGFILE=$DEST/stack.sh.log
SCREEN_LOGDIR=$DEST/screen
SYSLOG=True
LOGDAYS=1


ODL_MGR_IP=<ip of controller machine>

Q_PLUGIN=ml2
Q_AGENT=openvswitch
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch,opendaylight
Q_ML2_PLUGIN_TYPE_DRIVERS=vxlan
OVS_NUM_HUGEPAGES=8192
OVS_DATAPATH_TYPE=netdev

OVDK_OVS_GIT_TAG=

ENABLE_TENANT_TUNNELS=True
ENABLE_TENANT_VLANS=True

Q_ML2_TENANT_NETWORK_TYPE=vxlan
ML2_VLAN_RANGES=physnet1:1000:1010
PHYSICAL_NETWORK=physnet1
OVS_PHYSICAL_BRIDGE=br-p1p1

[[post-config|$NOVA_CONF]]
[DEFAULT]
firewall_driver=nova.virt.firewall.NoopFirewallDriver
vnc_enabled=True
vncserver_listen=0.0.0.0
vncserver_proxyclient_address=$HOST_IP

A1 Create VMs using DevStack Horizon GUI

After starting the OpenDaylight controller as described in Section 61 run a stack on the controller and compute nodes

Log in to http://<control node ip address>:8080 to start the Horizon GUI

Verify that the node shows up in the following GUI

Create a new Vxlan network

1 Click on the Networks tab

2 Click on the Create Network button

3 Enter the Network name then click Next


4 Enter the subnet information then click Next


5 Add additional information then click Next

6 Click the Create button

7 Create a VM instance by clicking the Launch Instances button


8 Click on the Details tab to enter VM details


9 Click on the Networking tab then enter network information

VMs will now be created

Instead of disabling the OVSDB neutron service by removing the OVSDB neutron bundle file, it is possible to disable the bundle from the OSGi console. However, there does not appear to be a way to make this persistent, so it must be done each time the controller restarts.


Once the controller is up and running, connect to the OSGi console. The ss command displays all of the bundles that are installed and their status; adding a string (or strings) filters the list of bundles. List the OVSDB bundles:

osgi> ss ovs
Framework is launched

id      State       Bundle
106     ACTIVE      org.opendaylight.ovsdb.northbound_0.5.0
112     ACTIVE      org.opendaylight.ovsdb_0.5.0
262     ACTIVE      org.opendaylight.ovsdb.neutron_0.5.0

Note that there are three OVSDB bundles running (the OVSDB neutron bundle has not been removed in this case)

Disable the OVSDB neutron bundle and then list the OVSDB bundles again:

osgi> stop 262
osgi> ss ovs
Framework is launched

id      State       Bundle
106     ACTIVE      org.opendaylight.ovsdb.northbound_0.5.0
112     ACTIVE      org.opendaylight.ovsdb_0.5.0
262     RESOLVED    org.opendaylight.ovsdb.neutron_0.5.0

Now the OVSDB neutron bundle is in the RESOLVED state which means that it is not active


Appendix B BNG as an Appliance

Please download the latest BNG application from https://01.org/intel-data-plane-performance-demonstrators/downloads. More details about how the BNG works can be found at https://01.org/intel-data-plane-performance-demonstrators/quick-overview


Appendix C Glossary

Acronym Description

ATR Application Targeted Routing

COTS Commercial Off‐The-Shelf

DPI Deep Packet Inspection

FCS Frame Check Sequence

GRE Generic Routing Encapsulation

GRO Generic Receive Offload

IOMMU Input/Output Memory Management Unit

Kpps Kilo packets per second

KVM Kernel-based Virtual Machine

LRO Large Receive Offload

MSI Message Signaling Interrupt

MPLS Multi-protocol Label Switching

Mpps Million packets per second

NIC Network Interface Card

pps Packets per second

QAT Quick Assist Technology

QinQ VLAN stacking (802.1ad)

RA Reference Architecture

RSC Receive Side Coalescing

RSS Receive Side Scaling

SP Service Provider

SR-IOV Single root IO Virtualization

TCO Total Cost of Ownership

TSO TCP Segmentation Offload


Appendix D References

Document Name Source

Internet Protocol version 4: http://www.ietf.org/rfc/rfc791.txt

Internet Protocol version 6: http://www.faqs.org/rfc/rfc2460.txt

Intel® 82599 10 Gigabit Ethernet Controller Datasheet: http://www.intel.com/content/www/us/en/ethernet-controllers/82599-10-gbe-controller-datasheet.html

Intel DDIO: https://www-ssl.intel.com/content/www/us/en/io/direct-data-i-o.html

Bandwidth Sharing Fairness: http://www.intel.com/content/www/us/en/network-adapters/10-gigabit-network-adapters/10-gbe-ethernet-flexible-port-partitioning-brief.html

Design Considerations for efficient network applications with Intel® multi-core processor-based systems on Linux: http://download.intel.com/design/intarch/papers/324176.pdf

OpenFlow with Intel 82599: http://ftp.sunet.se/pub/Linux/distributions/bifrost/seminars/workshop-2011-03-31/Openflow_1103031.pdf

Wu W, DeMar P, & Crawford M (2012), A Transport-Friendly NIC for Multicore/Multiprocessor Systems, IEEE Transactions on Parallel and Distributed Systems, vol 23, no 4, April 2012: http://lss.fnal.gov/archive/2010/pub/fermilab-pub-10-327-cd.pdf

Why does Flow Director Cause Packet Reordering: http://arxiv.org/ftp/arxiv/papers/1106/1106.0443.pdf

IA packet processing: http://www.intel.com/p/en_US/embedded/hwsw/technology/packet-processing

High Performance Packet Processing on Cloud Platforms using Linux with Intel® Architecture: http://networkbuilders.intel.com/docs/network_builders_RA_packet_processing.pdf

Packet Processing Performance of Virtualized Platforms with Linux and Intel® Architecture: http://networkbuilders.intel.com/docs/network_builders_RA_NFV.pdf

DPDK: http://www.intel.com/go/dpdk

Intel® DPDK Accelerated vSwitch: https://01.org/packet-processing


LEGAL

By using this document in addition to any agreements you have with Intel you accept the terms set forth below

You may not use or facilitate the use of this document in connection with any infringement or other legal analysis concerning Intel products described herein You agree to grant Intel a non-exclusive royalty-free license to any patent claim thereafter drafted which includes subject matter disclosed herein

INFORMATION IN THIS DOCUMENT IS PROVIDED IN CONNECTION WITH INTEL PRODUCTS NO LICENSE EXPRESS OR IMPLIED BY ESTOPPEL OR OTHERWISE TO ANY INTELLECTUAL PROPERTY RIGHTS IS GRANTED BY THIS DOCUMENT EXCEPT AS PROVIDED IN INTELS TERMS AND CONDITIONS OF SALE FOR SUCH PRODUCTS INTEL ASSUMES NO LIABILITY WHATSOEVER AND INTEL DISCLAIMS ANY EXPRESS OR IMPLIED WARRANTY RELATING TO SALE ANDOR USE OF INTEL PRODUCTS INCLUDING LIABILITY OR WARRANTIES RELATING TO FITNESS FOR A PARTICULAR PURPOSE MERCHANTABILITY OR INFRINGEMENT OF ANY PATENT COPYRIGHT OR OTHER INTELLECTUAL PROPERTY RIGHT

Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors

Performance tests such as SYSmark and MobileMark are measured using specific computer systems components software operations and functions Any change to any of those factors may cause the results to vary You should consult other information and performance tests to assist you in fully evaluating your contemplated purchases including the performance of that product when combined with other products

The products described in this document may contain design defects or errors known as errata which may cause the product to deviate from published specifications Current characterized errata are available on request Contact your local Intel sales office or your distributor to obtain the latest specifications and before placing your product order

Intel technologies may require enabled hardware specific software or services activation Check with your system manufacturer or retailer Tests document performance of components on a particular test in specific systems Differences in hardware software or configuration will affect actual performance Consult other sources of information to evaluate performance as you consider your purchase For more complete information about performance and benchmark results visit httpwwwintelcomperformance

All products computer systems dates and figures specified are preliminary based on current expectations and are subject to change without notice Results have been estimated or simulated using internal Intel analysis or architecture simulation or modeling and provided to you for informational purposes Any differences in your system hardware software or configuration may affect your actual performance

No computer system can be absolutely secure Intel does not assume any liability for lost or stolen data or systems or any damages resulting from such losses

Intel does not control or audit third-party web sites referenced in this document You should visit the referenced web site and confirm whether referenced data are accurate

Intel Corporation may have patents or pending patent applications trademarks copyrights or other intellectual property rights that relate to the presented subject matter The furnishing of documents and other materials and information does not provide any license express or implied by estoppel or otherwise to any such patents trademarks copyrights or other intellectual property rights

copy 2014 Intel Corporation All rights reserved Intel the Intel logo Core Xeon and others are trademarks of Intel Corporation in the US andor other countries Other names and brands may be claimed as the property of others

  • Intelreg Open Network Platform Server Reference Architecture (Release 12)
    • Revision History
    • Contents
    • 10 Audience and Purpose
    • 20 Summary
      • 21 Network Services Examples
        • 211 Suricata (Next Generation IDSIPS engine)
        • 212 vBNG (Broadband Network Gateway)
            • 30 Hardware Components
            • 40 Software Versions
              • 41 Obtaining Software Ingredients
                • 50 Installation and Configuration Guide
                  • 51 Instructions Common to Compute and Controller Nodes
                    • 511 BIOS Settings
                    • 512 Operating System Installation and Configuration
                      • 5121 Getting the Fedora 20 DVD
                      • 5122 Fedora 20 Installation
                      • 5123 Additional Packages Installation and Upgrade
                      • 5124 Disable and Enable Services
                          • 52 Controller Node Setup
                            • 521 OpenStack (Juno)
                              • 5211 Network Requirements
                              • 5212 Storage Requirements
                              • 5213 OpenStack Installation Procedures
                                  • 53 Compute Node Setup
                                    • 531 Host Configuration
                                      • 5311 Using DevStack to Deploy vSwitch and OpenStack Components
                                          • 54 vIPS
                                            • 541 Network Configuration for non-vIPS Guests
                                                • 60 Testing the Setup
                                                  • 61 Preparation with OpenStack
                                                    • 611 Deploying Virtual Machines
                                                      • 6111 Default Settings
                                                      • 6112 Customer Settings
                                                      • 6113 Example mdash VM Deployment
                                                      • 6114 Local vIPS
                                                      • 6115 Remote vIPS
                                                        • 612 Non-uniform Memory Access (Numa) Placement and SR-IOV Pass-through for OpenStack
                                                          • 6121 Prepare Compute Node for SR-IOV Pass-through
                                                          • 6122 Devstack Configurations
                                                          • 6123 Create VM with Numa Placement and SR-IOV
                                                              • 62 Using OpenDaylight
                                                                • 621 Preparing the OpenDaylightController
                                                                  • 63 Border Network Gateway
                                                                    • 631 Installation and Configuration Inside the VM
                                                                    • 632 Installation and Configuration of the Back-to-Back Host (Packet Generator)
                                                                    • 633 Extra Preparations on the Compute Node
                                                                        • Appendix A Additional OpenDaylight Information
                                                                          • A1 Create VMs using DevStack Horizon GUI
                                                                            • Appendix B BNG as an Appliance
                                                                            • Appendix C Glossary
                                                                            • Appendix D References
                                                                            • LEGAL
Page 30: Intel Open Network Platform Server Reference Architecture ... · Intel® ONP Server Reference Architecture Solutions Guide 1.0 Audience and Purpose The primary audiences for this

Intelreg ONP Server Reference ArchitectureSolutions Guide

30

6114 Local vIPS

Configuration

1 OpenStack brings up the VMs and connects them to the vSwitch

2 IP addresses of the VMs get configured using the DHCP server VM1 belongs to one subnet and VM3 to a different one VM2 has ports on both subnets

3 Flows get programmed to the vSwitch by the OpenDaylight controller (Section 62)

Data Path (Numbers Matching Red Circles)

1 VM1 sends a flow to VM3 through the vSwitch

2 The vSwitch forwards the flow to the first vPort of VM2 (active IPS)

3 The IPS receives the flow, inspects it, and (if not malicious) sends it out through its second vPort.

4 The vSwitch forwards it to VM3

Figure 6-1 Local vIPS


6.1.1.5 Remote vIPS

Configuration

1 OpenStack brings up the VMs and connects them to the vSwitch

2 The IP addresses of the VMs get configured using the DHCP server

Data Path (Numbers Matching Red Circles)

1 VM1 sends a flow to VM3 through the vSwitch inside compute node 1

2 The vSwitch forwards the flow out of the first 82599 port to the first 82599 port of compute node 2

3 The vSwitch of compute node 2 forwards the flow to the first port of the vHost, where the traffic is consumed by VM1 (the IPS VM on compute node 2).

4 The IPS receives the flow, inspects it, and (provided it is not malicious) sends it out through the second port of its vHost into the vSwitch of compute node 2.

5 The vSwitch forwards the flow out of the second 82599 port of compute node 2 into the second port of the 82599 in compute node 1

6 The vSwitch of compute node 1 forwards the flow into the port of the vHost of VM3 where the flow gets terminated

Figure 6-2 Remote vIPS


6.1.2 Non-uniform Memory Access (NUMA) Placement and SR-IOV Pass-through for OpenStack

NUMA placement was introduced as a new feature in the OpenStack Juno release. It enables an OpenStack administrator to pin guests to particular NUMA nodes for optimization. With an SR-IOV-enabled network interface card, each SR-IOV port is associated with a Virtual Function (VF); OpenStack SR-IOV pass-through enables a guest to access a VF directly.

6.1.2.1 Prepare Compute Node for SR-IOV Pass-through

To enable these features, follow these steps to configure the compute node:

1 The server hardware must support IOMMU (Intel VT-d). To check whether IOMMU is supported, run the following command; the output should show IOMMU entries:

dmesg | grep -e IOMMU

Note: IOMMU can be enabled/disabled through a BIOS setting under Advanced and then Processor.

2 Enable kernel IOMMU in grub. For Fedora 20, run the following commands:

sed -i 's/rhgb quiet/rhgb quiet intel_iommu=on/g' /etc/default/grub
grub2-mkconfig -o /boot/grub2/grub.cfg
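
A reboot is required for the new kernel command line to take effect. As an optional check (not part of the original steps), the following commands can confirm after the reboot that the IOMMU is active:

cat /proc/cmdline
dmesg | grep -i -e DMAR -e IOMMU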

3 Install necessary packages

yum install -y yajl-devel device-mapper-devel libpciaccess-devel libnl-devel dbus-devel numactl-devel python-devel

4 Install libvirt v1.2.8 or newer. The following example uses v1.2.9:

systemctl stop libvirtd

yum remove libvirt
yum remove libvirtd

wget http://libvirt.org/sources/libvirt-1.2.9.tar.gz
tar zxvf libvirt-1.2.9.tar.gz

cd libvirt-1.2.9
./autogen.sh --system --with-dbus
make
make install

systemctl start libvirtd

Make sure libvirtd is running v1.2.9:

libvirtd --version

5 Install libvirt-python. The example below uses v1.2.9 to match the libvirt version:

yum remove libvirt-python

wget https://pypi.python.org/packages/source/l/libvirt-python/libvirt-python-1.2.9.tar.gz
tar zxvf libvirt-python-1.2.9.tar.gz

cd libvirt-python-1.2.9
python setup.py install


6 Modify /etc/libvirt/qemu.conf: add

/dev/vfio/vfio

to

the cgroup_device_acl list.

An example follows:

cgroup_device_acl = ["/dev/null", "/dev/full", "/dev/zero",
    "/dev/random", "/dev/urandom",
    "/dev/ptmx", "/dev/kvm", "/dev/kqemu",
    "/dev/rtc", "/dev/hpet", "/dev/net/tun",
    "/dev/vfio/vfio"]
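
After editing /etc/libvirt/qemu.conf, libvirtd typically needs to be restarted for the change to take effect; this restart is an assumption based on standard libvirt behavior rather than an explicit step in this guide:

systemctl restart libvirtd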

7 Enable the SR-IOV virtual function for an 82599 interface. The following example enables 2 VFs for interface p1p1:

echo 2 > /sys/class/net/p1p1/device/sriov_numvfs

To check that virtual functions are enabled

lspci -nn | grep 82599

The screen output should display the physical function and two virtual functions
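
For illustration only, the output is expected to look similar to the following; the PCI addresses are examples that match the 0000:08:... addresses used later in this section and will differ on other systems:

08:00.0 Ethernet controller [0200]: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection [8086:10fb] (rev 01)
08:10.0 Ethernet controller [0200]: Intel Corporation 82599 Ethernet Controller Virtual Function [8086:10ed] (rev 01)
08:10.2 Ethernet controller [0200]: Intel Corporation 82599 Ethernet Controller Virtual Function [8086:10ed] (rev 01)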

6.1.2.2 DevStack Configurations

In the following text, the example uses a controller with IP address 10.11.12.1 and a compute node with IP address 10.11.12.4. The PCI device vendor ID (8086) and the product IDs of the 82599 (10fb for the physical function and 10ed for a VF) can be obtained from the output of:

lspci -nn | grep 82599

On Controller node

1 Edit the controller local.conf. Note that the same local.conf file of Section 5.2.1.3 is used here, with the following added:

Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch,sriovnicswitch

[[post-config|$NOVA_CONF]]
[DEFAULT]
scheduler_default_filters=RamFilter,ComputeFilter,AvailabilityZoneFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,PciPassthroughFilter,NUMATopologyFilter
pci_alias={"name":"niantic","product_id":"10ed","vendor_id":"8086"}

[[post-config|$Q_PLUGIN_CONF_FILE]]
[ml2_sriov]
supported_pci_vendor_devs = 8086:10fb 8086:10ed

2 Run stack.sh.

On Compute node

1 Edit /opt/stack/nova/requirements.txt to add "libvirt-python>=1.2.8":

echo "libvirt-python>=1.2.8" >> /opt/stack/nova/requirements.txt

2 Edit the compute local.conf for accelerated OVS. Note that the same local.conf file of Section 5.3.1.1 is used here.


3 Add the following:

[[post-config|$NOVA_CONF]]
[DEFAULT]
pci_passthrough_whitelist={"address":"0000:08:00.0","vendor_id":"8086","physical_network":"physnet1"}
pci_passthrough_whitelist={"address":"0000:08:10.0","vendor_id":"8086","physical_network":"physnet1"}
pci_passthrough_whitelist={"address":"0000:08:10.2","vendor_id":"8086","physical_network":"physnet1"}

4 Remove (or comment out) the following (note that currently SR-IOV pass-through is only supported with standard OVS):

OVS_NUM_HUGEPAGES=8192
OVS_DATAPATH_TYPE=netdev
OVS_GIT_TAG=b35839f3855e3b812709c6ad1c9278f498aa9935

Run stack.sh on both the controller and compute nodes to complete the DevStack installation.

6.1.2.3 Create VM with NUMA Placement and SR-IOV

1 After stacking is successful on both the controller and compute nodes, verify that the PCI pass-through device(s) are in the OpenStack database:

mysql -uroot -ppassword -h 10.11.12.1 nova -e 'select * from pci_devices;'

2 The output should show entry(ies) of PCI device(s) similar to the following

| 2014-11-18 19:41:14 | NULL | NULL | 0 | 1 | 3 | 0000:08:10.0 | 10ed | 8086 | type-VF | pci_0000_08_10_0 | label_8086_10ed | available | {"phys_function": "0000:08:00.0"} | NULL | NULL | 0 |

3 Next, create a flavor, for example:

nova flavor-create numa-flavor 1001 1024 4 1

where

flavor name = numa-flavor
id = 1001
virtual memory = 1024 MB
virtual disk size = 4 GB
number of virtual CPUs = 1

4 Modify the flavor for NUMA placement with PCI pass-through:

nova flavor-key 1001 set pci_passthrough:alias=niantic:1 hw:numa_nodes=1 hw:numa_cpus.0=0 hw:numa_mem.0=1024

5 To show detailed information about the flavor:

nova flavor-show 1001

6 Create a VM numa-vm1 with the flavor numa-flavor under the default project demo. Note that the following example assumes an image fedora-basic and an availability zone zone-04 are already in place (see Section 6.1.1.2) and that private is the default network for the demo project:

nova boot --image <image-id> --flavor <flavor-id> --availability-zone <zone-name> --nic <network-id> numa-vm1

where numa-vm1 is the name of the VM instance to be booted.
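
As a purely illustrative usage example (assuming the fedora-basic image, the numa-flavor flavor, and the zone-04 availability zone from above; the network ID is a placeholder taken from neutron net-list, and the nova CLI expects it in the form net-id=<id>):

nova boot --image fedora-basic --flavor numa-flavor --availability-zone zone-04 --nic net-id=<private-network-id> numa-vm1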


Access the VM from OpenStack Horizon; the new VM shows two virtual network interfaces. The interface with an SR-IOV VF should show a name of ensX, where X is a number (for example, ens5). If a DHCP server is available for the physical interface (p1p1 in this example), the VF gets an IP address automatically; otherwise, users can assign an IP address to the interface the same way as for a standard network interface.

To verify network connectivity through a VF, users can set up two compute hosts and create a VM on each node. After obtaining IP addresses, the VMs should be able to communicate with each other as over a normal network.
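
A minimal sketch of such a check, assuming the VF interface came up as ens5 and the VM on the other host obtained 10.0.0.12 (both values are examples only):

ip addr show ens5
ping -c 4 10.0.0.12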

6.2 Using OpenDaylight

This section describes how to download, install, and set up an OpenDaylight controller.

6.2.1 Preparing the OpenDaylight Controller

1 Download the pre-built OpenDaylight Helium-SR1 distribution:

wget http://nexus.opendaylight.org/content/repositories/opendaylight.release/org/opendaylight/integration/distribution-karaf/0.2.1-Helium-SR1/distribution-karaf-0.2.1-Helium-SR1.tar.gz

2 Extract the archive and cd into it

tar xf distribution-karaf-0.2.1-Helium-SR1.tar.gz
cd distribution-karaf-0.2.1-Helium-SR1

3 Use the bin/karaf executable to start the Karaf shell.
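
For example (run from the extracted distribution directory; the prompt shown is what Karaf typically displays once it is up):

./bin/karaf
opendaylight-user@root>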


4 Install the required features

Note: Karaf might take a long time to start, or feature installation might fail, if the host does not have network access. You will need to set up the appropriate proxy settings.
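
The guide does not list the specific features at this point; as an illustrative assumption, a feature set commonly used for OVSDB-based OpenStack integration on Helium looks like the following (run at the Karaf prompt and adjust to your deployment):

opendaylight-user@root> feature:install odl-base-all odl-aaa-authn odl-restconf odl-nsf-all odl-adsal-northbound odl-mdsal-apidocs odl-ovsdb-openstack odl-dlux-core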

6.3 Border Network Gateway

This section describes how to install and run a Border Network Gateway (BNG) on a compute node that is prepared as described in Section 5.1 and Section 5.3. The example interface names from those sections are kept in this section as well. For simplicity, the BNG uses the handle_none configuration mode, which makes it work as an L2 forwarding engine. The BNG is more complex than this, and users who want to explore more of its capabilities should read https://01.org/intel-data-plane-performance-demonstrators/quick-overview.

The setup to test the functionality of the vBNG follows


6.3.1 Installation and Configuration Inside the VM

1 Execute the following command:

yum -y update

2 Disable SELinux

setenforce 0
vi /etc/selinux/config

and change it so that SELINUX=disabled.

3 Disable the firewall

systemctl disable firewalld.service
reboot

4 Edit grub default configuration

vi /etc/default/grub

Add hugepages to it

... noirqbalance intel_idle.max_cstate=0 processor.max_cstate=0 ipv6.disable=1 default_hugepagesz=1G hugepagesz=1G hugepages=2 isolcpus=1,2,3,4

5 Rebuild grub config and reboot the system

grub2-mkconfig -o /boot/grub2/grub.cfg
reboot

6 Verify that hugepages are available in the VM

cat /proc/meminfo
HugePages_Total:    2
HugePages_Free:     2
Hugepagesize:       1048576 kB

7 Add the following to the end of the ~/.bashrc file:

---------------------------------------------
export RTE_SDK=/root/dpdk
export RTE_TARGET=x86_64-native-linuxapp-gcc
export OVS_DIR=/root/ovs

export RTE_UNBIND=$RTE_SDK/tools/dpdk_nic_bind.py
export DPDK_DIR=$RTE_SDK
export DPDK_BUILD=$DPDK_DIR/$RTE_TARGET
---------------------------------------------

8 Re-login or source that file

. .bashrc

9 Install DPDK

git clone http://dpdk.org/git/dpdk
cd dpdk
git checkout v1.7.1
make install T=$RTE_TARGET
modprobe uio
insmod $RTE_SDK/$RTE_TARGET/kmod/igb_uio.ko
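
As a quick sanity check (not an original step), verify that the uio and igb_uio modules are loaded before binding the ports:

lsmod | grep uio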


10 Check the PCI addresses of the 82599 cards

lspci | grep Network
00:04.0 Ethernet controller: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection (rev 01)
00:05.0 Ethernet controller: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection (rev 01)
00:06.0 Ethernet controller: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection (rev 01)
00:07.0 Ethernet controller: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection (rev 01)

11 Make sure that the correct PCI addresses are listed in the script bind_to_igb_uio.sh.

12 Download BNG packages

wget https://01.org/sites/default/files/downloads/intel-data-plane-performance-demonstrators/dppd-bng-v013.zip

13 Extract DPPD BNG sources

unzip dppd-bng-v013.zip

14 Build BNG DPPD application

yum -y install ncurses-devel
cd dppd-BNG-v013
make

15 Refer to Section 6.3.3, "Extra Preparations on the Compute Node," before running the BNG application in the VM inside the compute node.

16 Make sure that the application starts

./build/dppd -f config/handle_none.cfg

The handle_none configuration should pass all traffic straight through between ports, which is essentially similar to the L2 forwarding test. The config directory contains additional, more complex BNG configurations and Pktgen scripts. Additional BNG-specific workloads can be found in the dppd-BNG-v013/pktgen-scripts directory.

Following is a sample graphic of the BNG running in a VM with 2 ports


Exit the application by pressing ESC or CTRL-C

Refer to Section 6.3.2 regarding installation and running of the software traffic generator.

For a sanity check, users can use the pktgen wrapper script onps_pktgen-64bytes-UDP-2ports.sh to run Pktgen (on its dedicated server) and test the handle-none throughput for two physical and two virtual ports. You will need to update the PKTGEN_DIR variable at the top of the file to point to the right directory, which, referring to Section 6.3.2, is the following:

PKTGEN_DIR=/home/stack/git/Pktgen-DPDK

6.3.2 Installation and Configuration of the Back-to-Back Host (Packet Generator)

The back-to-back host can be any Intel® Xeon® processor-based system, or it can be any compute node that has been prepared using the instructions in Section 5.1 and Section 5.3. For simplicity, Intel assumes the latter is the case. Also assume that the git directory for the stack user is /home/stack/git.

1 In the git directory get the source from Github

git clone https://github.com/Pktgen/Pktgen-DPDK.git
cd Pktgen-DPDK

2 An extra package must be installed for Pktgen to compile correctly

yum -y install libpcap-devel

Pktgen comes with its own distribution of DPDK sources. This bundled version of DPDK must be used; note that it contains some Wind River-specific helper libraries that are not in the default DPDK distribution and which Pktgen depends on.

3 The $RTE_TARGET variable must be set to a specific value; otherwise, these libraries will not build.

cd
vi .bashrc

Add the following three lines to the end

export RTE_SDK=$HOME/Pktgen-DPDK/dpdk
export RTE_TARGET=x86_64-pktgen-linuxapp-gcc
export PKTGEN_DIR=$HOME/Pktgen-DPDK

4 Re-login or execute the following command

. .bashrc

5 Build the basic DPDK libraries and extra helpers

cd $RTE_SDK
make install T=$RTE_TARGET

6 Build Pktgen

cd examples/pktgen
make

7 Adapt the dpdk_nic_bind.py invocation according to the actual NICs in use so that both interfaces are bound to igb_uio and DPDK can use them. Check the current bindings with the command that follows:

./tools/dpdk_nic_bind.py --status
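
For illustration only, binding both ports might then look like the following; the PCI addresses 04:00.0 and 04:00.1 are placeholders and must be replaced with the addresses reported by --status:

./tools/dpdk_nic_bind.py --bind=igb_uio 04:00.0 04:00.1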

8 Use onps_pktgen-64-bytes-UDP-2ports.sh from onps_server_1_2.tar.gz.


9 Now run the script as root, after the compute node has been set up as in Section 6.3.3, the BNG VM has been prepared as in Section 6.3.1, and the BNG has been started inside the VM.

6.3.3 Extra Preparations on the Compute Node

1 Do the following as the stack user:

cd /home/stack/devstack
vi local.conf

2 Comment out the following

PHYSICAL_NETWORK=physnet1
OVS_PHYSICAL_BRIDGE=br-p1p1

And at the same time add the following line right below the previous commented ones

OVS_BRIDGE_MAPPINGS=default:br-p1p1,physnet1:br-p1p2

3 Run again as the stack user:

./unstack.sh

./stack.sh

This causes both physical interfaces to come up and get bound to DPDK. Also, a bridge is created on top of each of these interfaces:

ovs-vsctl show
b52bd3ed-0f6c-45b9-ace1-846d901bed64
    Bridge br-p1p1
        Port br-p1p1
            Interface br-p1p1
                type: internal
        Port p1p1
            Interface p1p1
                type: dpdkphy
                options: {port=0}
        Port phy-br-p1p1
            Interface phy-br-p1p1
                type: patch
                options: {peer=int-br-p1p1}
    Bridge br-int
        fail_mode: secure
        Port int-br-p1p2
            Interface int-br-p1p2
                type: patch
                options: {peer=phy-br-p1p2}
        Port int-br-p1p1
            Interface int-br-p1p1
                type: patch
                options: {peer=phy-br-p1p1}
        Port br-int
            Interface br-int
                type: internal
    Bridge br-p1p2
        Port phy-br-p1p2
            Interface phy-br-p1p2
                type: patch
                options: {peer=int-br-p1p2}
        Port p1p2
            Interface p1p2
                type: dpdkphy
                options: {port=1}
        Port br-p1p2
            Interface br-p1p2
                type: internal

4 Move the p1p2 physical port under the same bridge as p1p1:

ovs-vsctl del-port p1p2
ovs-vsctl add-port br-p1p1 p1p2 -- set Interface p1p2 type=dpdkphy options:port=1

5 Stop the OpenStack agent: rejoin the DevStack screen session, switch to the agent window, interrupt it, and detach from the session:

./rejoin-stack.sh
ctrl-a 1
ctrl-c
ctrl-a d

6 Add the dpdkvhost interfaces for the VM

ovs-vsctl --no-wait add-port br-p1p1 port3 -- set Interface port3 type=dpdkvhost ofport_request=3
ovs-vsctl --no-wait add-port br-p1p1 port4 -- set Interface port4 type=dpdkvhost ofport_request=4

7 Find out the OpenFlow port numbers of the interfaces:

ovs-ofctl show br-p1p1

The output should be similar to the following. Note the number to the left of each interface name; it is that interface's OpenFlow port number.

OFPT_FEATURES_REPLY (xid=0x2): dpid:0000286031010000
n_tables:254, n_buffers:256
capabilities: FLOW_STATS TABLE_STATS QUEUE_STATS ARP_MATCH_IP
actions: OUTPUT SET_VLAN_VID SET_VLAN_PCP STRIP_VLAN SET_DL_SRC SET_DL_DST SET_NW_SRC SET_NW_DST SET_TP_SRC SET_TP_DST
 1(phy-br-p1p1): addr:9e:ae:92:25:3c:c1
     config: 0    state: 0    speed: 0 Mbps now, 0 Mbps max
 2(p1p2): addr:9e:ae:92:25:3c:c1
     config: 0    state: 0    speed: 0 Mbps now, 0 Mbps max
 3(port3): addr:9e:ae:92:25:3c:c1
     config: 0    state: 0    speed: 0 Mbps now, 0 Mbps max
 4(port4): addr:49:04:ff:7f:00:00
     config: 0    state: 0    speed: 0 Mbps now, 0 Mbps max
 16(p1p1): addr:49:04:ff:7f:00:00
     config: 0    state: 0    speed: 0 Mbps now, 0 Mbps max
 LOCAL(br-p1p1): addr:9e:ae:92:25:3c:c1
     config: 0    state: 0    speed: 0 Mbps now, 0 Mbps max
OFPT_GET_CONFIG_REPLY (xid=0x4): frags=normal miss_send_len=0

8 Clean up the flow table of the bridge

ovs-ofctl del-flows br-p1p1

9 Program the flows so each physical interface forwards the packets to a dpdkvhost interface and the other way round

ovs-ofctl add-flow br-p1p1 in_port=16,dl_type=0x0800,idle_timeout=0,action=output:3
ovs-ofctl add-flow br-p1p1 in_port=3,dl_type=0x0800,idle_timeout=0,action=output:16
ovs-ofctl add-flow br-p1p1 in_port=4,dl_type=0x0800,idle_timeout=0,action=output:2
ovs-ofctl add-flow br-p1p1 in_port=2,dl_type=0x0800,idle_timeout=0,action=output:4
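
To confirm the four flows were installed (an optional check, not part of the original procedure), the flow table can be dumped:

ovs-ofctl dump-flows br-p1p1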


10 Users can now spawn their vBNG

qemu-kvm -cpu host -enable-kvm -m 4096 -smp 4,cores=4,threads=1,sockets=1 -name VM1 \
  -hda <path to the VM image file> -mem-path /dev/hugepages -mem-prealloc -vnc :2 -daemonize \
  -net nic,model=virtio,macaddr=00:1e:77:68:09:fd -net tap,ifname=tap1,script=no,downscript=no \
  -netdev type=tap,id=net1,script=no,downscript=no,ifname=port3,vhost=on \
  -device virtio-net-pci,netdev=net1,mac=00:00:01:00:00:01,csum=off,gso=off,guest_tso4=off,guest_tso6=off,guest_ecn=off \
  -netdev type=tap,id=net2,script=no,downscript=no,ifname=port4,vhost=on \
  -device virtio-net-pci,netdev=net2,mac=00:00:01:00:00:02,csum=off,gso=off,guest_tso4=off,guest_tso6=off,guest_ecn=off


Appendix A Additional OpenDaylight Information

This section describes how OpenDaylight can be used in a multi-node setup. Two hosts are used: one running OpenDaylight, the OpenStack controller + compute services, and OVS; the second host is the compute node. This section describes how to create a VXLAN tunnel, create VMs, and ping from one VM to another.

Note: Due to a known defect in ODL (https://bugs.opendaylight.org/show_bug.cgi?id=2469), the multi-node setup could not be verified.

Following is a sample local.conf for the OpenDaylight host:

[[local|localrc]]
FORCE=yes

HOST_NAME=<name of this machine>
HOST_IP=<ip of this machine>
HOST_IP_IFACE=<mgmt ip isolated from internet>

PUBLIC_INTERFACE=<isolated IP could be same as HOST_IP_IFACE>
VLAN_INTERFACE=
FLAT_INTERFACE=

ADMIN_PASSWORD=password
MYSQL_PASSWORD=password
DATABASE_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=no-token-password
HORIZON_PASSWORD=password
RABBIT_PASSWORD=password

disable_service n-net
disable_service n-cpu

enable_service q-svc
enable_service q-agt
enable_service q-dhcp
enable_service q-l3
enable_service q-meta
enable_service neutron
enable_service horizon

LOGFILE=$DEST/stack.sh.log
SCREEN_LOGDIR=$DEST/screen
SYSLOG=True
LOGDAYS=1

# ODL start
Q_HOST=$HOST_IP
enable_service odl-server
enable_service odl-compute
ODL_MGR_IP=10.11.10.7
ENABLED_SERVICES+=,n-api,n-crt,n-obj,n-cpu,n-cond,n-sch,n-novnc,n-cauth,n-cauth,nova
ENABLED_SERVICES+=,cinder,c-api,c-vol,c-sch,c-bak


Q_PLUGIN=ml2

Q_AGENT=openvswitch
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch,opendaylight
Q_ML2_PLUGIN_TYPE_DRIVERS=vlan,flat,local
Q_ML2_TENANT_NETWORK_TYPE=vxlan

ENABLE_TENANT_TUNNELS=True
ENABLE_TENANT_VLANS=True

PHYSICAL_NETWORK=physnet1
ML2_VLAN_RANGES=physnet1:1000:1010
OVS_PHYSICAL_BRIDGE=br-

MULTI_HOST=True

[[post-config|$NOVA_CONF]]

# disable nova security groups
[DEFAULT]
firewall_driver=nova.virt.firewall.NoopFirewallDriver
novncproxy_host=0.0.0.0
novncproxy_port=6080

Here is a sample local.conf for the compute node:

[[local|localrc]]

FORCE=yes
MULTI_HOST=True

HOST_NAME=<name of this machine>
HOST_IP=<ip of this machine>
HOST_IP_IFACE=<isolated interface>
SERVICE_HOST_NAME=<name of the controller machine>
SERVICE_HOST=<ip of controller machine>
Q_HOST=$SERVICE_HOST

MYSQL_HOST=$SERVICE_HOST
RABBIT_HOST=$SERVICE_HOST

GLANCE_HOST=$SERVICE_HOST
GLANCE_HOSTPORT=$SERVICE_HOST:9292
KEYSTONE_AUTH_HOST=$SERVICE_HOST
KEYSTONE_SERVICE_HOST=<ip of controller machine>

ADMIN_PASSWORD=password
MYSQL_PASSWORD=password
DATABASE_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=no-token-password
HORIZON_PASSWORD=password
RABBIT_PASSWORD=password

disable_all_services

enable_service rabbit
enable_service n-cpu
enable_service q-agt
enable_service odl-compute

DEST=/opt/stack
LOGFILE=$DEST/stack.sh.log
SCREEN_LOGDIR=$DEST/screen
SYSLOG=True
LOGDAYS=1


ODL_MGR_IP=<ip of controller machine>

Q_PLUGIN=ml2
Q_AGENT=openvswitch
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch,opendaylight
Q_ML2_PLUGIN_TYPE_DRIVERS=vxlan
OVS_NUM_HUGEPAGES=8192
OVS_DATAPATH_TYPE=netdev

OVDK_OVS_GIT_TAG=

ENABLE_TENANT_TUNNELS=True
ENABLE_TENANT_VLANS=True

Q_ML2_TENANT_NETWORK_TYPE=vxlan
ML2_VLAN_RANGES=physnet1:1000:1010
PHYSICAL_NETWORK=physnet1
OVS_PHYSICAL_BRIDGE=br-p1p1

[[post-config|$NOVA_CONF]]
[DEFAULT]
firewall_driver=nova.virt.firewall.NoopFirewallDriver
vnc_enabled=True
vncserver_listen=0.0.0.0
vncserver_proxyclient_address=$HOST_IP

A.1 Create VMs using DevStack Horizon GUI

After starting the OpenDaylight controller as described in Section 6.1, run DevStack (stack.sh) on the controller and compute nodes.

Log in to http://<control node ip address>:8080 to start the Horizon GUI.

Verify that the node shows up in the following GUI

Create a new VXLAN network:

1 Click on the Networks tab

2 Click on the Create Network button

3 Enter the Network name then click Next


4 Enter the subnet information then click Next


5 Add additional information then click Next

6 Click the Create button

7 Create a VM instance by clicking the Launch Instances button


8 Click on the Details tab to enter VM details


9 Click on the Networking tab then enter network information

VMs will now be created.

Instead of disabling the OVSDB neutron service by removing the OVSDB neutron bundle file, it is possible to disable the bundle from the OSGi console. However, there does not appear to be a way to make this persistent, so it must be done each time the controller restarts.


Once the controller is up and running, connect to the OSGi console. The ss command displays all of the bundles that are installed and their status; adding a string filters the list of bundles. List the OVSDB bundles:

osgi> ss ovs
Framework is launched

id      State       Bundle
106     ACTIVE      org.opendaylight.ovsdb.northbound_0.5.0
112     ACTIVE      org.opendaylight.ovsdb_0.5.0
262     ACTIVE      org.opendaylight.ovsdb.neutron_0.5.0

Note that there are three OVSDB bundles running (the OVSDB neutron bundle has not been removed in this case)

Disable the OVSDB neutron bundle and then list the OVSDB bundles again:

osgi> stop 262
osgi> ss ovs
Framework is launched

id      State       Bundle
106     ACTIVE      org.opendaylight.ovsdb.northbound_0.5.0
112     ACTIVE      org.opendaylight.ovsdb_0.5.0
262     RESOLVED    org.opendaylight.ovsdb.neutron_0.5.0

Now the OVSDB neutron bundle is in the RESOLVED state which means that it is not active


Appendix B BNG as an Appliance

Please download the latest BNG application from https://01.org/intel-data-plane-performance-demonstrators/downloads. More details about how the BNG works can be found at https://01.org/intel-data-plane-performance-demonstrators/quick-overview.



Appendix C Glossary

Acronym Description

ATR Application Targeted Routing

COTS Commercial Off‐The-Shelf

DPI Deep Packet Inspection

FCS Frame Check Sequence

GRE Generic Routing Encapsulation

GRO Generic Receive Offload

IOMMU InputOutput Memory Management Unit

Kpps Kilo packets per second

KVM Kernel-based Virtual Machine

LRO Large Receive Offload

MSI Message Signaled Interrupt

MPLS Multi-protocol Label Switching

Mpps Million packets per second

NIC Network Interface Card

pps Packets per second

QAT Quick Assist Technology

QinQ VLAN stacking (8021ad)

RA Reference Architecture

RSC Receive Side Coalescing

RSS Receive Side Scaling

SP Service Provider

SR-IOV Single root IO Virtualization

TCO Total Cost of Ownership

TSO TCP Segmentation Offload



Appendix D References

Document Name Source

Internet Protocol version 4: http://www.ietf.org/rfc/rfc791.txt

Internet Protocol version 6: http://www.faqs.org/rfc/rfc2460.txt

Intel® 82599 10 Gigabit Ethernet Controller Datasheet: http://www.intel.com/content/www/us/en/ethernet-controllers/82599-10-gbe-controller-datasheet.html

Intel DDIO: https://www-ssl.intel.com/content/www/us/en/io/direct-data-i-o.html

Bandwidth Sharing Fairness: http://www.intel.com/content/www/us/en/network-adapters/10-gigabit-network-adapters/10-gbe-ethernet-flexible-port-partitioning-brief.html

Design Considerations for efficient network applications with Intel® multi-core processor-based systems on Linux: http://download.intel.com/design/intarch/papers/324176.pdf

OpenFlow with Intel 82599: http://ftp.sunet.se/pub/Linux/distributions/bifrost/seminars/workshop-2011-03-31/Openflow_1103031.pdf

Wu, W., DeMar, P., & Crawford, M. (2012). A Transport-Friendly NIC for Multicore/Multiprocessor Systems. IEEE Transactions on Parallel and Distributed Systems, vol. 23, no. 4, April 2012: http://lss.fnal.gov/archive/2010/pub/fermilab-pub-10-327-cd.pdf

Why does Flow Director Cause Packet Reordering?: http://arxiv.org/ftp/arxiv/papers/1106/1106.0443.pdf

IA packet processing: http://www.intel.com/p/en_US/embedded/hwsw/technology/packet-processing

High Performance Packet Processing on Cloud Platforms using Linux with Intel® Architecture: http://networkbuilders.intel.com/docs/network_builders_RA_packet_processing.pdf

Packet Processing Performance of Virtualized Platforms with Linux and Intel® Architecture: http://networkbuilders.intel.com/docs/network_builders_RA_NFV.pdf

DPDK: http://www.intel.com/go/dpdk

Intel® DPDK Accelerated vSwitch: https://01.org/packet-processing


LEGAL

By using this document in addition to any agreements you have with Intel you accept the terms set forth below

You may not use or facilitate the use of this document in connection with any infringement or other legal analysis concerning Intel products described herein You agree to grant Intel a non-exclusive royalty-free license to any patent claim thereafter drafted which includes subject matter disclosed herein

INFORMATION IN THIS DOCUMENT IS PROVIDED IN CONNECTION WITH INTEL PRODUCTS. NO LICENSE, EXPRESS OR IMPLIED, BY ESTOPPEL OR OTHERWISE, TO ANY INTELLECTUAL PROPERTY RIGHTS IS GRANTED BY THIS DOCUMENT. EXCEPT AS PROVIDED IN INTEL'S TERMS AND CONDITIONS OF SALE FOR SUCH PRODUCTS, INTEL ASSUMES NO LIABILITY WHATSOEVER, AND INTEL DISCLAIMS ANY EXPRESS OR IMPLIED WARRANTY RELATING TO SALE AND/OR USE OF INTEL PRODUCTS, INCLUDING LIABILITY OR WARRANTIES RELATING TO FITNESS FOR A PARTICULAR PURPOSE, MERCHANTABILITY, OR INFRINGEMENT OF ANY PATENT, COPYRIGHT OR OTHER INTELLECTUAL PROPERTY RIGHT.

Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors

Performance tests such as SYSmark and MobileMark are measured using specific computer systems components software operations and functions Any change to any of those factors may cause the results to vary You should consult other information and performance tests to assist you in fully evaluating your contemplated purchases including the performance of that product when combined with other products

The products described in this document may contain design defects or errors known as errata which may cause the product to deviate from published specifications Current characterized errata are available on request Contact your local Intel sales office or your distributor to obtain the latest specifications and before placing your product order

Intel technologies may require enabled hardware, specific software, or services activation. Check with your system manufacturer or retailer. Tests document performance of components on a particular test, in specific systems. Differences in hardware, software, or configuration will affect actual performance. Consult other sources of information to evaluate performance as you consider your purchase. For more complete information about performance and benchmark results, visit http://www.intel.com/performance.

All products computer systems dates and figures specified are preliminary based on current expectations and are subject to change without notice Results have been estimated or simulated using internal Intel analysis or architecture simulation or modeling and provided to you for informational purposes Any differences in your system hardware software or configuration may affect your actual performance

No computer system can be absolutely secure Intel does not assume any liability for lost or stolen data or systems or any damages resulting from such losses

Intel does not control or audit third-party web sites referenced in this document You should visit the referenced web site and confirm whether referenced data are accurate

Intel Corporation may have patents or pending patent applications trademarks copyrights or other intellectual property rights that relate to the presented subject matter The furnishing of documents and other materials and information does not provide any license express or implied by estoppel or otherwise to any such patents trademarks copyrights or other intellectual property rights

© 2014 Intel Corporation. All rights reserved. Intel, the Intel logo, Core, Xeon, and others are trademarks of Intel Corporation in the U.S. and/or other countries. Other names and brands may be claimed as the property of others.

Page 31: Intel Open Network Platform Server Reference Architecture ... · Intel® ONP Server Reference Architecture Solutions Guide 1.0 Audience and Purpose The primary audiences for this

31

Intelreg ONP Server Reference ArchitectureSolutions Guide

6115 Remote vIPS

Configuration

1 OpenStack brings up the VMs and connects them to the vSwitch

2 The IP addresses of the VMs get configured using the DHCP server

Data Path (Numbers Matching Red Circles)

1 VM1 sends a flow to VM3 through the vSwitch inside compute node 1

2 The vSwitch forwards the flow out of the first 82599 port to the first 82599 port of compute node 2

3 The vSwitch of compute node 2 forwards the flow to the first port of the vHost where the traffic gets consumed by VM1

4 The IPS receives the flow inspects it and (provided it is not malicious) sends it out through its second port of the vHost into the vSwitch of compute node 2

5 The vSwitch forwards the flow out of the second 82599 port of compute node 2 into the second port of the 82599 in compute node 1

6 The vSwitch of compute node 1 forwards the flow into the port of the vHost of VM3 where the flow gets terminated

Figure 6-2 Remote iVPS

Intelreg ONP Server Reference ArchitectureSolutions Guide

32

612 Non-uniform Memory Access (Numa) Placement and SR-IOV Pass-through for OpenStack

NUMA was implemented as a new feature in the OpenStack Juno release NUMA placement enables an OpenStack administrator to ping particular NUMA nodes for guest systems optimization With a SR-IOV enabled network interface card each SR-IOV port is associated with a Virtual Function (VF) OpenStack SR-IOV pass-through enables a guest access to a VF directly

6121 Prepare Compute Node for SR-IOV Pass-through

To enable the previous features follow these steps to configure compute node

1 The server hardware support IOMMU or Intel VT-d To check whether IOMMU is supported run the command and the output should show IOMMU entries

dmesg | grep -e IOMMU

Note IOMMU cab be enableddisabled through a BIOS setting under Advanced and then Processor

2 Enable kernel IOMMU in grub For Fedora 20 run commands

sed -i srhgb quietrhgb quite intel_iommu=ong etcdefaultgrubgrub2-mkconfig -o bootgrub2grubcfg

3 Install necessary packages

yum install -y yajl-devel device-mapper-devel libpciaccess-devel libnl-devel dbus-devel numactl-devel python-devel

4 Install Libvirt to v128 or newer The following example uses v129

systemctl stop libvirtd

yum remove libvirtyum remove libvirtd

wget httplibvirtorgsourceslibvirt-129targztar zxvf libvirt-129targz

cd libvirt-129autogensh --system --with-dbusmakemake install

systemctl start libvirtd

Make sure libvirtd is running v129

libvirtd --version

5 Install libvirt-python Example below uses v129 to match libvirt version

yum remove libvirt-python

wget httpspypipythonorgpackagessourcellibvirt-pythonlibvirt-python-129targz tar zxvf libvirt-python-129targz

cd libvirt-python-129 python setuppy instal

33

Intelreg ONP Server Reference ArchitectureSolutions Guide

6 Modify etclibvirtqemuconf add

devvfiovfio

to

cgroup_device_acl list

An example follows

cgroup_device_acl = [devnull devfull devzerodevrandom devurandomdevptmx devkvm devkqemudevrtc devhpet devnettundevvfiovfio]

7 Enable the SR-IOV virtual function for an 82599 interface The following example enables 2 VFs for interface p1p1

echo 2 gt sysclassnetp1p1devicesriov_numvfs

To check that virtual functions are enabled

lspci -nn | grep 82599

The screen output should display the physical function and two virtual functions

6122 Devstack Configurations

In the following text the example uses a controller with IP address 1011121 and compute 1011124 PCI device vendor ID (8086) and product ID of the 82599 can be obtained from output (10fb for physical function and 10ed for VF)

lspci -nn | grep 82599

On Controller node

1 Edit Controller localconf Note that the same localconf file of Section 5213 is used here but adding the following

Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitchsriovnicswitch

[[post-config|$NOVA_CONF]][DEFAULT]scheduler_default_filters=RamFilterComputeFilterAvailabilityZoneFilterComputeCapabilitiesFilterImagePropertiesFilterPciPassthroughFilterNUMATopologyFilter pci_alias=namenianticproduct_id10edvendor_id8086

[[post-config|$Q_PLUGIN_CONF_FILE]][ml2_sriov]supported_pci_vendor_devs = 808610fb 808610ed

2 Run stacksh

On Compute node

1 Edit optstacknovarequirementstxt add ldquolibvirt-pythongt=128rdquo

echo libvirt-pythongt=128 gtgt optstacknovarequirementstxt

2 Edit Compute localconf for accelerated OVS Note that the same localconf file of Section 5311 is used here

Intelreg ONP Server Reference ArchitectureSolutions Guide

34

3 Adding the following

[[post-config|$NOVA_CONF]][DEFAULT]pci_passthrough_whitelist=address000008000vendor_id8086physical_networkphysnet1

pci_passthrough_whitelist=address000008100vendor_id8086physical_networkphysnet1

pci_passthrough_whitelist=address000008102vendor_id8086physical_networkphysnet1

4 Removing (or comment out) the following Note that currently SR-IOV pass-through is only supported with a standard OVS)

OVS_NUM_HUGEPAGES=8192OVS_DATAPATH_TYPE=netdev OVS_GIT_TAG=b35839f3855e3b812709c6ad1c9278f498aa9935

Run stacksh for both controller and compute nodes to complete the Devstack installation

6123 Create VM with Numa Placement and SR-IOV

1 After stacking is successful on both the controller and compute nodes verify the PCI pass-through device(s) are in the OpenStack deatbase

mysql -uroot -ppassword -h 1011121 nova -e select from pci_devices

2 The output should show entry(ies) of PCI device(s) similar to the following

| 2014-11-18 194114 | NULL | NULL | 0 | 1 | 3 | 000008100 | 10ed | 8086 | type-VF | pci_0000_08_10_0 | label_8086_10ed | available | phys_function 000008000 | NULL | NULL | 0 |

3 Next to create a flavor for example

nova flavor-create numa-flavor 1001 1024 4 1

where

flavor name = numa-flavorid = 1001virtual memory = 1024 Mbvirtual disk size = 4Gbnumber of virtual CPU = 1

4 Modify flavor for numa placement with PCI pass-through

nova flavor-key 1001 set pci_passthroughalias=niantic1 hwnuma_nodes=1 hwnuma_cpus0=0 hwnuma_mem0=1024

5 To show detailed information of the flavor

nova flavor-show 1001

6 Create a VM numa-vm1 with the flavor numa-flavor under the default project demo Note that the following example assumes a image fedora-basic and an availability zone zone-04 are already in place (see Section 6112) and the private is the default network for demo project

nova boot --image ltimage-idgt --flavor ltflavor-idgt --availability-zone ltzone-namegt--nic ltnetwork-idgt numa-vm1

where numa-vm1 is the name of instance of the VM to be booted

35

Intelreg ONP Server Reference ArchitectureSolutions Guide

Access the VM from the OpenStack Horizon the new VM shows two virtual network interfaces The interface with a SR-IOV VF should show a name of ensX where X is a numerical number For example ens5 If a DHCP server is available for the physical interface (p1p1 in this example) the VF gets an IP address automatically otherwise users can assign an IP address to the interface the same way as a standard network interface

To verify network connectivity through a VF users can set up two compute hosts and create a VM on each node After obtaining IP addresses the VMs should communicate with each other as with a normal network

62 Using OpenDaylightThis section describes how to download install and setup a OpenDaylight Controller

621 Preparing the OpenDaylightController1 Download the pre-built OpenDaylight Helium-SR1 distribution

wget httpnexusopendaylightorgcontentrepositoriesopendaylightreleaseorgopendaylightintegrationdistribution-karaf021-Helium-SR1distribution-karaf-021-Helium-SR1targz

2 Extract the archive and cd into it

tar xf distribution-karaf-021-Helium-SR1targzcd distribution-karaf-021-Helium-SR1

3 Use the binkaraf executable start the Karaf shell

Intelreg ONP Server Reference ArchitectureSolutions Guide

36

4 Install the required features

Karaf might take a long time to start or feature Install might fail if the host does not have network access Yoursquoll need to setup the appropriate proxy settings

63 Border Network GatewayThis section describes how to install and run a Border Network Gateway on a compute node that is prepared as described in Section 51 and Section 53 The example interface names from these sections have been maintained in this section too Also for simplicity the BNG is using the handle_none configuration mode which makes it work as a L2 forwarding engine The BNG is more complex than this and users who are interested to explore more of its capabilities should read https01orgintel-data-plane-performance-demonstratorsquick-overview

The setup to test the functionality of the vBNG follows

37

Intelreg ONP Server Reference ArchitectureSolutions Guide

631 Installation and Configuration Inside the VM1 Execute the following command

yum -y update

2 Disable SELinux

setenforce 0vi etcselinuxconfig

And change so SELINUX=disabled

3 Disable the firewall

systemctl disable firewalldservicereboot

4 Edit grub default configuration

vi etcdefaultgrub

Add hugepages to it

hellip noirqbalance intel_idlemax_cstate=0 processormax_cstate=0 ipv6disable=1 default_hugepagesz=1G hugepagesz=1G hugepages=2 isolcpus=1234

5 Rebuild grub config and reboot the system

grub2-mkconfig -o bootgrub2grubcfgreboot

6 Verify that hugepages are available in the VM

cat procmeminfoHugePages_Total2HugePages_Free2Hugepagesize1048576 kB

7 Add the following to the end of ~bashrc file

---------------------------------------------export RTE_SDK=rootdpdkexport RTE_TARGET=x86_64-native-linuxapp-gccexport OVS_DIR=rootovs

export RTE_UNBIND=$RTE_SDKtoolsdpdk_nic_bindpyexport DPDK_DIR=$RTE_SDKexport DPDK_BUILD=$DPDK_DIR$RTE_TARGET ---------------------------------------------

8 Re-login or source that file

bashrc

9 Install DPDK

git clone httpdpdkorggitdpdkcd dpdkgit checkout v171make install T=$RTE_TARGETmodprobe uioinsmod $RTE_SDK$RTE_TARGETkmodigb_uioko

Intelreg ONP Server Reference ArchitectureSolutions Guide

38

10 Check the PCI addresses of the 82599 cards

lspci | grep Network00040 Ethernet controller Intel Corporation 82599ES 10-Gigabit SFISFP+ Network Connection (rev 01)00050 Ethernet controller Intel Corporation 82599ES 10-Gigabit SFISFP+ Network Connection (rev 01)00060 Ethernet controller Intel Corporation 82599ES 10-Gigabit SFISFP+ Network Connection (rev 01)00070 Ethernet controller Intel Corporation 82599ES 10-Gigabit SFISFP+ Network Connection (rev 01)

11 Make sure that the correct PCI addresses are listed in the script bind_to_igb_uiosh

12 Download BNG packages

wget https01orgsitesdefaultfilesdownloadsintel-data-plane-performance-demonstratorsdppd-bng-v013zip

13 Extract DPPD BNG sources

unzip dppd-bng-v013zip

14 Build BNG DPPD application

yum -y install ncurses-develcd dppd-BNG-v013make

15 Refer to Section 633 ldquoExtra Preparations on the Compute Noderdquo before running the BNG application in the VM inside the compute node

16 Make sure that the application starts

builddppd -f confighandle_nonecfg

The handle none configuration should be passing all through traffic between ports which is essentially similar to the L2 forwarding test The config directory contains additional complex BNG configurations and Pktgen scripts Additional BNG specific workloads can be found in the dppd-BNGv013pktgen-scripts directory

Following is a sample graphic of the BNG running in a VM with 2 ports

39

Intelreg ONP Server Reference ArchitectureSolutions Guide

Exit the application by pressing ESC or CTRL-C

Refer to Section 632 regarding installation and running the software traffic generator

For the sanity check test users can use the pktgen wrapper script onps_pktgen-64bytes-UDP-2portssh for running PktGen (on its dedicated server) in order to test the handle-none throughput for two physical and two virtual ports yoursquoll need to update the PKTGEN_DIR at the top of the file to point to the right directory which is the following referring to Section 632

PKTGEN_DIR=homestackgitPktgen-DPDKpktgen-64bytessh

632 Installation and Configuration of the Back-to-Back Host (Packet Generator)

The back-to-back host can be any Intelreg Xeonreg processor-based system or it can be any Compute Node that has been prepared using the instructions in Section 51 and Section 53 For simplicity Intel assumes the later was the case Also assume that the git directory for stack user is in homestackgit

1 In the git directory get the source from Github

git clone httpsgithubcomPktgenPktgen-DPDKgitcd Pktgen-DPDK

2 An extra package must be installed for Pktgen to compile correctly

yum -y install libpcap-devel

Pktgen comes with its own distribution of DPDK sources This bundled version of DPDK must be used Note that it contains some WindRiver specific helper libraries that are not in the default DPDK distribution which Pktgen depends on

3 The $RTE_TARGET variable must be set to a specific value Otherwise these libraries will not build

cdvi bashrc

Add the following three lines to the end

export RTE_SDK=$HOMEPktgen-DPDKdpdkexport RTE_TARGET=x86_64-pktgen-linuxapp-gccexport PKTGEN_DIR=$HOMEPktgen-DPDK

4 Re-login or execute the following command

bashrc

5 Build the basic DPDK libraries and extra helpers

cd $RTE_SDKmake install T=$RTE_TARGET

6 Build Pktgen

cd examplespktgenmake

7 Adapt the dpdk_nic_bindpy script accordingly to the actual NICs in use so both interfaces are bound to igb_uio so DPDK can use them See the details of the command the follows

toolsdpdk_nic_bindpy --status

8 Use onps_pktgen-64-bytes-UDP-2portssh from onps_server_1_2targz

Intelreg ONP Server Reference ArchitectureSolutions Guide

40

9 Now run the script as root after the Compute node has been setup as in Section 633 the VM of the BNG has been prepared as in Section 631 inside the VM and the BNG has been run inside the VM

633 Extra Preparations on the Compute Node1 Do the following as a stack user

cd homestackdevstackvi localconf

2 Comment out the following

PHYSICAL_NETWORK=physnet1OVS_PHYSICAL_BRIDGE=br-p1p1

And at the same time add the following line right below the previous commented ones

OVS_BRIDGE_MAPPINGS=defaultbr-p1p1physnet1br-p1p2

3 Run again as stack user

unstacksh

stacksh

This causes both physical interfaces to come up and get bound to the DPDK Also a bridge is created on top of each of these interfaces

ovs-vsctl showb52bd3ed-0f6c-45b9-ace1-846d901bed64 Bridge br-p1p1 Port br-p1p1 Interface br-p1p1 type internal Port p1p1 Interface p1p1 type dpdkphy options port=0 Port phy-br-p1p1 Interface phy-br-p1p1 type patch options peer=int-br-p1p1 Bridge br-int fail_mode secure Port int-br-p1p2 Interface int-br-p1p2 type patch options peer=phy-br-p1p2 Port int-br-p1p1 Interface int-br-p1p1 type patch options peer=phy-br-p1p1 Port br-int Interface br-int type internal Bridge br-p1p2 Port phy-br-p1p2 Interface phy-br-p1p2 type patch options peer=int-br-p1p2 Port p1p2 Interface p1p2 type dpdkphy options port=1 Port br-p1p2 Interface br-p1p2

41

Intelreg ONP Server Reference ArchitectureSolutions Guide

type internal

4 Move the p1p2 physical port under the same bridge as p1p1

ovs-vsctl del-port p1p2ovs-vsctl add-port br-p1p1 p1p2 -- set Interface p1p2 type=dpdkphy optionport=1

5 Delete the agent of OpenStack

rejoin-stackshctrl-a 1ctrl-cctrl-ad

6 Add the dpdkvhost interfaces for the VM

ovs-vsctl --no-wait add-port br-p1p1 port3 -- set Interface port3 type=dpdkvhost ofport_request=3ovs-vsctl --no-wait add-port br-p1p1 port4 -- set Interface port4 type=dpdkvhost ofport_request=4

7 Find out the port number of the obstructed interfaces

ovs-ofctl show br-p1p1

The output should be similar to the following Note the number on the left of the interface because its the obstructed port number

OFPT_FEATURES_REPLY (xid=0x2) dpid0000286031010000n_tables254 n_buffers256capabilities FLOW_STATS TABLE_STATS QUEUE_STATS ARP_MATCH_IPactions OUTPUT SET_VLAN_VID SET_VLAN_PCP STRIP_VLAN SET_DL_SRC SET_DL_DST SET_NW_SRC SET_NW_DST SET_TP_SRC SET_TP_DST 1(phy-br-p1p1) addr9eae92253cc1 config 0 state 0 speed 0 Mbps now 0 Mbps max 2(p1p2) addr9eae92253cc1 config 0 state 0 speed 0 Mbps now 0 Mbps max 3(port3) addr9eae92253cc1 config 0 state 0 speed 0 Mbps now 0 Mbps max 4(port4) addr4904ff7f0000 config 0 state 0 speed 0 Mbps now 0 Mbps max 16(p1p1) addr4904ff7f0000 config 0 state 0 speed 0 Mbps now 0 Mbps max LOCAL(br-p1p1) addr9eae92253cc1 config 0 state 0 speed 0 Mbps now 0 Mbps maxOFPT_GET_CONFIG_REPLY (xid=0x4) frags=normal miss_send_len=0

8 Clean up the flow table of the bridge

ovs-ofctl del-flows br-p1p1

9 Program the flows so each physical interface forwards the packets to a dpdkvhost interface and the other way round

ovs-ofctl add-flow br-p1p1 in_port=16dl_type=0x0800idle_timeout=0action=output3ovs-ofctl add-flow br-p1p1 in_port=3dl_type=0x0800idle_timeout=0action=output16ovs-ofctl add-flow br-p1p1 in_port=4dl_type=0x0800idle_timeout=0action=output2ovs-ofctl add-flow br-p1p1 in_port=2dl_type=0x0800idle_timeout=0action=output4

Intelreg ONP Server Reference ArchitectureSolutions Guide

42

10 Users can now spawn their vBNG

qemu-kvm -cpu host -enable-kvm -m 4096 -smp 4cores=4threads=1sockets=1 -name VM1 -hda ltpath to the VM image filegt -mem-path devhugepages -mem-prealloc -vnc 2 -daemonize -net nicmodel=virtiomacaddr=001e776809fd -net tapifname=tap1script=nodownscript=no -netdev type=tapid=net1script=nodownscript=noifname=port3vhost=on -device virtio-net-pcinetdev=net1mac=000001000001csum=offgso=offguest_tso4=offguest_tso6=offguest_ecn=off -netdev type=tapid=net2script=nodownscript=noifname=port4vhost=on -device virtio-net-pcinetdev=net2mac=000001000002csum=offgso=offguest_tso4=offguest_tso6=offguest_ecn=off

43

Intelreg ONP Server Reference ArchitectureSolutions Guide

Appendix A Additional OpenDaylight Information

This section describes how OpenDaylight can be used in a multi-node setup Two hosts are used one running OpenDaylight OpenStack Controller + Compute and OVS The second host is the compute node This section describes how to create a Vxlan tunnel VMs and ping from one VM to another

Note Due to a known defect in ODL httpsbugsopendaylightorgshow_bugcgiid=2469 multi-node setup could not be verified

Following is a sample localconf for OpenDaylight host

[[local|localrc]]FORCE=yes

HOST_NAME=ltname of this machinegtHOST_IP=ltip of this machinegtHOST_IP_IFACE=ltmgmt ip isolated from internetgt

PUBLIC_INTERFACE=ltisolated IP could be same as HOST_IP_IFACEgtVLAN_INTERFACE=FLAT_INTERFACE=

ADMIN_PASSWORD=passwordMYSQL_PASSWORD=passwordDATABASE_PASSWORD=passwordSERVICE_PASSWORD=passwordSERVICE_TOKEN=no-token-passwordHORIZON_PASSWORD=passwordRABBIT_PASSWORD=password

disable_service n-netdisable_service n-cpu

enable_service q-svcenable_service q-agtenable_service q-dhcpenable_service q-l3enable_service q-metaenable_service neutronenable_service horizon

LOGFILE=$DESTstackshlogSCREEN_LOGDIR=$DESTscreenSYSLOG=TrueLOGDAYS=1

ODL startQ_HOST=$HOST_IPenable_service odl-serverenable_service odl-computeODL_MGR_IP=1011107ENABLED_SERVICES+=n-apin-crtn-objn-cpun-condn-schn-novncn-cauthn-cauthnovaENABLED_SERVICES+=cinderc-apic-volc-schc-bak

Intelreg ONP Server Reference ArchitectureSolutions Guide

44

Q_PLUGIN=ml2

Q_AGENT=openvswitchQ_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitchopendaylightQ_ML2_PLUGIN_TYPE_DRIVERS=vlanflatlocalQ_ML2_TENANT_NETWORK_TYPE=vxlan

ENABLE_TENANT_TUNNELS=TrueENABLE_TENANT_VLANS=True

PHYSICAL_NETWORK=physnet1ML2_VLAN_RANGES=physnet110001010OVS_PHYSICAL_BRIDGE=br-

MULTI_HOST=True

[[post-config|$NOVA_CONF]]

disable nova security groups[DEFAULT]firewall_driver=novavirtfirewallNoopFirewallDrivernovncproxy_host=0000novncproxy_port=6080

Here is a sample localconf for Compute Node

[[local|localrc]]

FORCE=yesMULTI_HOST=True

HOST_NAME=ltname of this machinegtHOST_IP=ltip of this machinegtHOST_IP_IFACE=ltisolated interfacegtSERVICE_HOST_NAME=ltname of the controller machinegtSERVICE_HOST=ltip of controller machinegtQ_HOST=$SERVICE_HOST

MYSQL_HOST=$SERVICE_HOSTRABBIT_HOST=$SERVICE_HOST

GLANCE_HOST=$SERVICE_HOSTGLANCE_HOSTPORT=$SERVICE_HOST9292KEYSTONE_AUTH_HOST=$SERVICE_HOSTKEYSTONE_SERVICE_HOST=ltip of controller machinegt

ADMIN_PASSWORD=passwordMYSQL_PASSWORD=passwordDATABASE_PASSWORD=passwordSERVICE_PASSWORD=passwordSERVICE_TOKEN=no-token-passwordHORIZON_PASSWORD=passwordRABBIT_PASSWORD=password

disable_all_services

enable_service rabbitenable_service n-cpuenable_service q-agtenable_service odl-compute

DEST=optstackLOGFILE=$DESTstackshlogSCREEN_LOGDIR=$DESTscreenSYSLOG=TrueLOGDAYS=1

45

Intelreg ONP Server Reference ArchitectureSolutions Guide

ODL_MGR_IP=ltip of controller machinegt

Q_PLUGIN=ml2Q_AGENT=openvswitchQ_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitchopendaylightQ_ML2_PLUGIN_TYPE_DRIVERS=vxlanOVS_NUM_HUGEPAGES=8192OVS_DATAPATH_TYPE=netdev

OVDK_OVS_GIT_TAG=

ENABLE_TENANT_TUNNELS=TrueENABLE_TENANT_VLANS=True

Q_ML2_TENANT_NETWORK_TYPE=vxlanML2_VLAN_RANGES=physnet110001010PHYSICAL_NETWORK=physnet1OVS_PHYSICAL_BRIDGE=br-p1p1

[[post-config|$NOVA_CONF]][DEFAULT]firewall_driver=novavirtfirewallNoopFirewallDrivervnc_enabled=Truevncserver_listen=0000vncserver_proxyclient_address=$HOST_IP

A1 Create VMs using DevStack Horizon GUI

After starting the OpenDaylight controller as described in Section 61 run a stack on the controller and compute nodes

Login to httpltcontrol node ip addressgt8080 to start horizon gui

Verify that the node shows up in the following GUI

Create a new Vxlan network

1 Click on the Networks tab

2 Click on the Create Network button

3 Enter the Network name then click Next

Intelreg ONP Server Reference ArchitectureSolutions Guide

46

4 Enter the subnet information then click Next

47

Intelreg ONP Server Reference ArchitectureSolutions Guide

5 Add additional information then click Next

6 Click the Create button

7 Create a VM instance by clicking the Launch Instances button

Intelreg ONP Server Reference ArchitectureSolutions Guide

48

8 Click on the Details tab to enter VM details

49

Intelreg ONP Server Reference ArchitectureSolutions Guide

9 Click on the Networking tab then enter network information

VMS will now be created

Instead of disabling the OVSDB neutron service by removing the OVSDB neutron bundle file it is possible to disable the bundle from the OSGi console However there does not appear to be a way to make this persistent so it must be done each time the controller restarts


Once the controller is up and running, connect to the OSGi console. The ss command displays all of the bundles that are installed and their status. Adding a string filters the list of bundles. List the OVSDB bundles:

osgi> ss ovs
Framework is launched.

id      State       Bundle
106     ACTIVE      org.opendaylight.ovsdb.northbound_0.5.0
112     ACTIVE      org.opendaylight.ovsdb_0.5.0
262     ACTIVE      org.opendaylight.ovsdb.neutron_0.5.0

Note that there are three OVSDB bundles running (the OVSDB neutron bundle has not been removed in this case)

Disable the OVSDB neutron bundle and then list the OVSDB bundles again

osgi> stop 262
osgi> ss ovs
Framework is launched.

id      State       Bundle
106     ACTIVE      org.opendaylight.ovsdb.northbound_0.5.0
112     ACTIVE      org.opendaylight.ovsdb_0.5.0
262     RESOLVED    org.opendaylight.ovsdb.neutron_0.5.0

Now the OVSDB neutron bundle is in the RESOLVED state, which means it is not active.
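To bring the bundle back without restarting the controller, start it again from the same console; this is standard Equinox console usage rather than anything OpenDaylight-specific:

osgi> start 262
osgi> ss ovs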


Appendix B BNG as an Appliance

Download the latest BNG application from https://01.org/intel-data-plane-performance-demonstrators/downloads. More details about how the BNG works can be found at https://01.org/intel-data-plane-performance-demonstrators/quick-overview.
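For reference, the appliance can be fetched and built the same way as inside the VM in Section 6.3.1 (DPDK must already be built and RTE_SDK/RTE_TARGET set as described there):

wget https://01.org/sites/default/files/downloads/intel-data-plane-performance-demonstrators/dppd-bng-v013.zip
unzip dppd-bng-v013.zip
yum -y install ncurses-devel
cd dppd-BNG-v013
make
build/dppd -f config/handle_none.cfg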


Appendix C Glossary

Acronym Description

ATR Application Targeted Routing

COTS Commercial Off-The-Shelf

DPI Deep Packet Inspection

FCS Frame Check Sequence

GRE Generic Routing Encapsulation

GRO Generic Receive Offload

IOMMU InputOutput Memory Management Unit

Kpps Kilo packets per second

KVM Kernel-based Virtual Machine

LRO Large Receive Offload

MSI Message Signaled Interrupt

MPLS Multi-protocol Label Switching

Mpps Million packets per second

NIC Network Interface Card

pps Packets per second

QAT Quick Assist Technology

QinQ VLAN stacking (8021ad)

RA Reference Architecture

RSC Receive Side Coalescing

RSS Receive Side Scaling

SP Service Provider

SR-IOV Single Root I/O Virtualization

TCO Total Cost of Ownership

TSO TCP Segmentation Offload


Appendix D References

Document Name: Source

Internet Protocol version 4: http://www.ietf.org/rfc/rfc791.txt

Internet Protocol version 6: http://www.faqs.org/rfc/rfc2460.txt

Intel® 82599 10 Gigabit Ethernet Controller Datasheet: http://www.intel.com/content/www/us/en/ethernet-controllers/82599-10-gbe-controller-datasheet.html

Intel DDIO: https://www-ssl.intel.com/content/www/us/en/io/direct-data-i-o.html

Bandwidth Sharing Fairness: http://www.intel.com/content/www/us/en/network-adapters/10-gigabit-network-adapters/10-gbe-ethernet-flexible-port-partitioning-brief.html

Design Considerations for efficient network applications with Intel® multi-core processor-based systems on Linux: http://download.intel.com/design/intarch/papers/324176.pdf

OpenFlow with Intel 82599: http://ftp.sunet.se/pub/Linux/distributions/bifrost/seminars/workshop-2011-03-31/Openflow_1103031.pdf

Wu, W., DeMar, P. & Crawford, M. (2012). A Transport-Friendly NIC for Multicore/Multiprocessor Systems. IEEE Transactions on Parallel and Distributed Systems, vol. 23, no. 4, April 2012: http://lss.fnal.gov/archive/2010/pub/fermilab-pub-10-327-cd.pdf

Why Does Flow Director Cause Packet Reordering?: http://arxiv.org/ftp/arxiv/papers/1106/1106.0443.pdf

IA packet processing: http://www.intel.com/p/en_US/embedded/hwsw/technology/packet-processing

High Performance Packet Processing on Cloud Platforms using Linux with Intel® Architecture: http://networkbuilders.intel.com/docs/network_builders_RA_packet_processing.pdf

Packet Processing Performance of Virtualized Platforms with Linux and Intel® Architecture: http://networkbuilders.intel.com/docs/network_builders_RA_NFV.pdf

DPDK: http://www.intel.com/go/dpdk

Intel® DPDK Accelerated vSwitch: https://01.org/packet-processing


LEGAL

By using this document in addition to any agreements you have with Intel you accept the terms set forth below

You may not use or facilitate the use of this document in connection with any infringement or other legal analysis concerning Intel products described herein You agree to grant Intel a non-exclusive royalty-free license to any patent claim thereafter drafted which includes subject matter disclosed herein

INFORMATION IN THIS DOCUMENT IS PROVIDED IN CONNECTION WITH INTEL PRODUCTS NO LICENSE EXPRESS OR IMPLIED BY ESTOPPEL OR OTHERWISE TO ANY INTELLECTUAL PROPERTY RIGHTS IS GRANTED BY THIS DOCUMENT EXCEPT AS PROVIDED IN INTELS TERMS AND CONDITIONS OF SALE FOR SUCH PRODUCTS INTEL ASSUMES NO LIABILITY WHATSOEVER AND INTEL DISCLAIMS ANY EXPRESS OR IMPLIED WARRANTY RELATING TO SALE ANDOR USE OF INTEL PRODUCTS INCLUDING LIABILITY OR WARRANTIES RELATING TO FITNESS FOR A PARTICULAR PURPOSE MERCHANTABILITY OR INFRINGEMENT OF ANY PATENT COPYRIGHT OR OTHER INTELLECTUAL PROPERTY RIGHT

Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors

Performance tests such as SYSmark and MobileMark are measured using specific computer systems components software operations and functions Any change to any of those factors may cause the results to vary You should consult other information and performance tests to assist you in fully evaluating your contemplated purchases including the performance of that product when combined with other products

The products described in this document may contain design defects or errors known as errata which may cause the product to deviate from published specifications Current characterized errata are available on request Contact your local Intel sales office or your distributor to obtain the latest specifications and before placing your product order

Intel technologies may require enabled hardware specific software or services activation Check with your system manufacturer or retailer Tests document performance of components on a particular test in specific systems Differences in hardware software or configuration will affect actual performance Consult other sources of information to evaluate performance as you consider your purchase For more complete information about performance and benchmark results visit httpwwwintelcomperformance

All products computer systems dates and figures specified are preliminary based on current expectations and are subject to change without notice Results have been estimated or simulated using internal Intel analysis or architecture simulation or modeling and provided to you for informational purposes Any differences in your system hardware software or configuration may affect your actual performance

No computer system can be absolutely secure Intel does not assume any liability for lost or stolen data or systems or any damages resulting from such losses

Intel does not control or audit third-party web sites referenced in this document You should visit the referenced web site and confirm whether referenced data are accurate

Intel Corporation may have patents or pending patent applications trademarks copyrights or other intellectual property rights that relate to the presented subject matter The furnishing of documents and other materials and information does not provide any license express or implied by estoppel or otherwise to any such patents trademarks copyrights or other intellectual property rights

© 2014 Intel Corporation. All rights reserved. Intel, the Intel logo, Core, Xeon, and others are trademarks of Intel Corporation in the U.S. and/or other countries. Other names and brands may be claimed as the property of others.

Page 32: Intel Open Network Platform Server Reference Architecture ... · Intel® ONP Server Reference Architecture Solutions Guide 1.0 Audience and Purpose The primary audiences for this

Intelreg ONP Server Reference ArchitectureSolutions Guide

32

612 Non-uniform Memory Access (Numa) Placement and SR-IOV Pass-through for OpenStack

NUMA was implemented as a new feature in the OpenStack Juno release NUMA placement enables an OpenStack administrator to ping particular NUMA nodes for guest systems optimization With a SR-IOV enabled network interface card each SR-IOV port is associated with a Virtual Function (VF) OpenStack SR-IOV pass-through enables a guest access to a VF directly

6121 Prepare Compute Node for SR-IOV Pass-through

To enable the previous features follow these steps to configure compute node

1 The server hardware support IOMMU or Intel VT-d To check whether IOMMU is supported run the command and the output should show IOMMU entries

dmesg | grep -e IOMMU

Note IOMMU cab be enableddisabled through a BIOS setting under Advanced and then Processor

2 Enable kernel IOMMU in grub For Fedora 20 run commands

sed -i srhgb quietrhgb quite intel_iommu=ong etcdefaultgrubgrub2-mkconfig -o bootgrub2grubcfg

3 Install necessary packages

yum install -y yajl-devel device-mapper-devel libpciaccess-devel libnl-devel dbus-devel numactl-devel python-devel

4 Install Libvirt to v128 or newer The following example uses v129

systemctl stop libvirtd

yum remove libvirtyum remove libvirtd

wget httplibvirtorgsourceslibvirt-129targztar zxvf libvirt-129targz

cd libvirt-129autogensh --system --with-dbusmakemake install

systemctl start libvirtd

Make sure libvirtd is running v129

libvirtd --version

5 Install libvirt-python Example below uses v129 to match libvirt version

yum remove libvirt-python

wget httpspypipythonorgpackagessourcellibvirt-pythonlibvirt-python-129targz tar zxvf libvirt-python-129targz

cd libvirt-python-129 python setuppy instal

33

Intelreg ONP Server Reference ArchitectureSolutions Guide

6 Modify etclibvirtqemuconf add

devvfiovfio

to

cgroup_device_acl list

An example follows

cgroup_device_acl = [devnull devfull devzerodevrandom devurandomdevptmx devkvm devkqemudevrtc devhpet devnettundevvfiovfio]

7 Enable the SR-IOV virtual function for an 82599 interface The following example enables 2 VFs for interface p1p1

echo 2 gt sysclassnetp1p1devicesriov_numvfs

To check that virtual functions are enabled

lspci -nn | grep 82599

The screen output should display the physical function and two virtual functions

6122 Devstack Configurations

In the following text the example uses a controller with IP address 1011121 and compute 1011124 PCI device vendor ID (8086) and product ID of the 82599 can be obtained from output (10fb for physical function and 10ed for VF)

lspci -nn | grep 82599

On Controller node

1 Edit Controller localconf Note that the same localconf file of Section 5213 is used here but adding the following

Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitchsriovnicswitch

[[post-config|$NOVA_CONF]][DEFAULT]scheduler_default_filters=RamFilterComputeFilterAvailabilityZoneFilterComputeCapabilitiesFilterImagePropertiesFilterPciPassthroughFilterNUMATopologyFilter pci_alias=namenianticproduct_id10edvendor_id8086

[[post-config|$Q_PLUGIN_CONF_FILE]][ml2_sriov]supported_pci_vendor_devs = 808610fb 808610ed

2 Run stacksh

On Compute node

1 Edit optstacknovarequirementstxt add ldquolibvirt-pythongt=128rdquo

echo libvirt-pythongt=128 gtgt optstacknovarequirementstxt

2 Edit Compute localconf for accelerated OVS Note that the same localconf file of Section 5311 is used here

Intelreg ONP Server Reference ArchitectureSolutions Guide

34

3 Adding the following

[[post-config|$NOVA_CONF]][DEFAULT]pci_passthrough_whitelist=address000008000vendor_id8086physical_networkphysnet1

pci_passthrough_whitelist=address000008100vendor_id8086physical_networkphysnet1

pci_passthrough_whitelist=address000008102vendor_id8086physical_networkphysnet1

4 Removing (or comment out) the following Note that currently SR-IOV pass-through is only supported with a standard OVS)

OVS_NUM_HUGEPAGES=8192OVS_DATAPATH_TYPE=netdev OVS_GIT_TAG=b35839f3855e3b812709c6ad1c9278f498aa9935

Run stacksh for both controller and compute nodes to complete the Devstack installation

6123 Create VM with Numa Placement and SR-IOV

1 After stacking is successful on both the controller and compute nodes verify the PCI pass-through device(s) are in the OpenStack deatbase

mysql -uroot -ppassword -h 1011121 nova -e select from pci_devices

2 The output should show entry(ies) of PCI device(s) similar to the following

| 2014-11-18 194114 | NULL | NULL | 0 | 1 | 3 | 000008100 | 10ed | 8086 | type-VF | pci_0000_08_10_0 | label_8086_10ed | available | phys_function 000008000 | NULL | NULL | 0 |

3 Next to create a flavor for example

nova flavor-create numa-flavor 1001 1024 4 1

where

flavor name = numa-flavorid = 1001virtual memory = 1024 Mbvirtual disk size = 4Gbnumber of virtual CPU = 1

4 Modify flavor for numa placement with PCI pass-through

nova flavor-key 1001 set pci_passthroughalias=niantic1 hwnuma_nodes=1 hwnuma_cpus0=0 hwnuma_mem0=1024

5 To show detailed information of the flavor

nova flavor-show 1001

6 Create a VM numa-vm1 with the flavor numa-flavor under the default project demo Note that the following example assumes a image fedora-basic and an availability zone zone-04 are already in place (see Section 6112) and the private is the default network for demo project

nova boot --image ltimage-idgt --flavor ltflavor-idgt --availability-zone ltzone-namegt--nic ltnetwork-idgt numa-vm1

where numa-vm1 is the name of instance of the VM to be booted

35

Intelreg ONP Server Reference ArchitectureSolutions Guide

Access the VM from the OpenStack Horizon the new VM shows two virtual network interfaces The interface with a SR-IOV VF should show a name of ensX where X is a numerical number For example ens5 If a DHCP server is available for the physical interface (p1p1 in this example) the VF gets an IP address automatically otherwise users can assign an IP address to the interface the same way as a standard network interface

To verify network connectivity through a VF users can set up two compute hosts and create a VM on each node After obtaining IP addresses the VMs should communicate with each other as with a normal network

62 Using OpenDaylightThis section describes how to download install and setup a OpenDaylight Controller

621 Preparing the OpenDaylightController1 Download the pre-built OpenDaylight Helium-SR1 distribution

wget httpnexusopendaylightorgcontentrepositoriesopendaylightreleaseorgopendaylightintegrationdistribution-karaf021-Helium-SR1distribution-karaf-021-Helium-SR1targz

2 Extract the archive and cd into it

tar xf distribution-karaf-021-Helium-SR1targzcd distribution-karaf-021-Helium-SR1

3 Use the binkaraf executable start the Karaf shell

Intelreg ONP Server Reference ArchitectureSolutions Guide

36

4 Install the required features

Karaf might take a long time to start or feature Install might fail if the host does not have network access Yoursquoll need to setup the appropriate proxy settings

63 Border Network GatewayThis section describes how to install and run a Border Network Gateway on a compute node that is prepared as described in Section 51 and Section 53 The example interface names from these sections have been maintained in this section too Also for simplicity the BNG is using the handle_none configuration mode which makes it work as a L2 forwarding engine The BNG is more complex than this and users who are interested to explore more of its capabilities should read https01orgintel-data-plane-performance-demonstratorsquick-overview

The setup to test the functionality of the vBNG follows

37

Intelreg ONP Server Reference ArchitectureSolutions Guide

631 Installation and Configuration Inside the VM1 Execute the following command

yum -y update

2 Disable SELinux

setenforce 0vi etcselinuxconfig

And change so SELINUX=disabled

3 Disable the firewall

systemctl disable firewalldservicereboot

4 Edit grub default configuration

vi etcdefaultgrub

Add hugepages to it

hellip noirqbalance intel_idlemax_cstate=0 processormax_cstate=0 ipv6disable=1 default_hugepagesz=1G hugepagesz=1G hugepages=2 isolcpus=1234

5 Rebuild grub config and reboot the system

grub2-mkconfig -o bootgrub2grubcfgreboot

6 Verify that hugepages are available in the VM

cat procmeminfoHugePages_Total2HugePages_Free2Hugepagesize1048576 kB

7 Add the following to the end of ~bashrc file

---------------------------------------------export RTE_SDK=rootdpdkexport RTE_TARGET=x86_64-native-linuxapp-gccexport OVS_DIR=rootovs

export RTE_UNBIND=$RTE_SDKtoolsdpdk_nic_bindpyexport DPDK_DIR=$RTE_SDKexport DPDK_BUILD=$DPDK_DIR$RTE_TARGET ---------------------------------------------

8 Re-login or source that file

bashrc

9 Install DPDK

git clone httpdpdkorggitdpdkcd dpdkgit checkout v171make install T=$RTE_TARGETmodprobe uioinsmod $RTE_SDK$RTE_TARGETkmodigb_uioko

Intelreg ONP Server Reference ArchitectureSolutions Guide

38

10 Check the PCI addresses of the 82599 cards

lspci | grep Network00040 Ethernet controller Intel Corporation 82599ES 10-Gigabit SFISFP+ Network Connection (rev 01)00050 Ethernet controller Intel Corporation 82599ES 10-Gigabit SFISFP+ Network Connection (rev 01)00060 Ethernet controller Intel Corporation 82599ES 10-Gigabit SFISFP+ Network Connection (rev 01)00070 Ethernet controller Intel Corporation 82599ES 10-Gigabit SFISFP+ Network Connection (rev 01)

11 Make sure that the correct PCI addresses are listed in the script bind_to_igb_uiosh

12 Download BNG packages

wget https01orgsitesdefaultfilesdownloadsintel-data-plane-performance-demonstratorsdppd-bng-v013zip

13 Extract DPPD BNG sources

unzip dppd-bng-v013zip

14 Build BNG DPPD application

yum -y install ncurses-develcd dppd-BNG-v013make

15 Refer to Section 633 ldquoExtra Preparations on the Compute Noderdquo before running the BNG application in the VM inside the compute node

16 Make sure that the application starts

builddppd -f confighandle_nonecfg

The handle none configuration should be passing all through traffic between ports which is essentially similar to the L2 forwarding test The config directory contains additional complex BNG configurations and Pktgen scripts Additional BNG specific workloads can be found in the dppd-BNGv013pktgen-scripts directory

Following is a sample graphic of the BNG running in a VM with 2 ports

39

Intelreg ONP Server Reference ArchitectureSolutions Guide

Exit the application by pressing ESC or CTRL-C

Refer to Section 632 regarding installation and running the software traffic generator

For the sanity check test users can use the pktgen wrapper script onps_pktgen-64bytes-UDP-2portssh for running PktGen (on its dedicated server) in order to test the handle-none throughput for two physical and two virtual ports yoursquoll need to update the PKTGEN_DIR at the top of the file to point to the right directory which is the following referring to Section 632

PKTGEN_DIR=homestackgitPktgen-DPDKpktgen-64bytessh

632 Installation and Configuration of the Back-to-Back Host (Packet Generator)

The back-to-back host can be any Intelreg Xeonreg processor-based system or it can be any Compute Node that has been prepared using the instructions in Section 51 and Section 53 For simplicity Intel assumes the later was the case Also assume that the git directory for stack user is in homestackgit

1 In the git directory get the source from Github

git clone httpsgithubcomPktgenPktgen-DPDKgitcd Pktgen-DPDK

2 An extra package must be installed for Pktgen to compile correctly

yum -y install libpcap-devel

Pktgen comes with its own distribution of DPDK sources This bundled version of DPDK must be used Note that it contains some WindRiver specific helper libraries that are not in the default DPDK distribution which Pktgen depends on

3 The $RTE_TARGET variable must be set to a specific value Otherwise these libraries will not build

cdvi bashrc

Add the following three lines to the end

export RTE_SDK=$HOMEPktgen-DPDKdpdkexport RTE_TARGET=x86_64-pktgen-linuxapp-gccexport PKTGEN_DIR=$HOMEPktgen-DPDK

4 Re-login or execute the following command

bashrc

5 Build the basic DPDK libraries and extra helpers

cd $RTE_SDKmake install T=$RTE_TARGET

6 Build Pktgen

cd examplespktgenmake

7 Adapt the dpdk_nic_bindpy script accordingly to the actual NICs in use so both interfaces are bound to igb_uio so DPDK can use them See the details of the command the follows

toolsdpdk_nic_bindpy --status

8 Use onps_pktgen-64-bytes-UDP-2portssh from onps_server_1_2targz

Intelreg ONP Server Reference ArchitectureSolutions Guide

40

9 Now run the script as root after the Compute node has been setup as in Section 633 the VM of the BNG has been prepared as in Section 631 inside the VM and the BNG has been run inside the VM

633 Extra Preparations on the Compute Node1 Do the following as a stack user

cd homestackdevstackvi localconf

2 Comment out the following

PHYSICAL_NETWORK=physnet1OVS_PHYSICAL_BRIDGE=br-p1p1

And at the same time add the following line right below the previous commented ones

OVS_BRIDGE_MAPPINGS=defaultbr-p1p1physnet1br-p1p2

3 Run again as stack user

unstacksh

stacksh

This causes both physical interfaces to come up and get bound to the DPDK Also a bridge is created on top of each of these interfaces

ovs-vsctl showb52bd3ed-0f6c-45b9-ace1-846d901bed64 Bridge br-p1p1 Port br-p1p1 Interface br-p1p1 type internal Port p1p1 Interface p1p1 type dpdkphy options port=0 Port phy-br-p1p1 Interface phy-br-p1p1 type patch options peer=int-br-p1p1 Bridge br-int fail_mode secure Port int-br-p1p2 Interface int-br-p1p2 type patch options peer=phy-br-p1p2 Port int-br-p1p1 Interface int-br-p1p1 type patch options peer=phy-br-p1p1 Port br-int Interface br-int type internal Bridge br-p1p2 Port phy-br-p1p2 Interface phy-br-p1p2 type patch options peer=int-br-p1p2 Port p1p2 Interface p1p2 type dpdkphy options port=1 Port br-p1p2 Interface br-p1p2

41

Intelreg ONP Server Reference ArchitectureSolutions Guide

type internal

4 Move the p1p2 physical port under the same bridge as p1p1

ovs-vsctl del-port p1p2ovs-vsctl add-port br-p1p1 p1p2 -- set Interface p1p2 type=dpdkphy optionport=1

5 Delete the agent of OpenStack

rejoin-stackshctrl-a 1ctrl-cctrl-ad

6 Add the dpdkvhost interfaces for the VM

ovs-vsctl --no-wait add-port br-p1p1 port3 -- set Interface port3 type=dpdkvhost ofport_request=3ovs-vsctl --no-wait add-port br-p1p1 port4 -- set Interface port4 type=dpdkvhost ofport_request=4

7 Find out the port number of the obstructed interfaces

ovs-ofctl show br-p1p1

The output should be similar to the following Note the number on the left of the interface because its the obstructed port number

OFPT_FEATURES_REPLY (xid=0x2) dpid0000286031010000n_tables254 n_buffers256capabilities FLOW_STATS TABLE_STATS QUEUE_STATS ARP_MATCH_IPactions OUTPUT SET_VLAN_VID SET_VLAN_PCP STRIP_VLAN SET_DL_SRC SET_DL_DST SET_NW_SRC SET_NW_DST SET_TP_SRC SET_TP_DST 1(phy-br-p1p1) addr9eae92253cc1 config 0 state 0 speed 0 Mbps now 0 Mbps max 2(p1p2) addr9eae92253cc1 config 0 state 0 speed 0 Mbps now 0 Mbps max 3(port3) addr9eae92253cc1 config 0 state 0 speed 0 Mbps now 0 Mbps max 4(port4) addr4904ff7f0000 config 0 state 0 speed 0 Mbps now 0 Mbps max 16(p1p1) addr4904ff7f0000 config 0 state 0 speed 0 Mbps now 0 Mbps max LOCAL(br-p1p1) addr9eae92253cc1 config 0 state 0 speed 0 Mbps now 0 Mbps maxOFPT_GET_CONFIG_REPLY (xid=0x4) frags=normal miss_send_len=0

8 Clean up the flow table of the bridge

ovs-ofctl del-flows br-p1p1

9 Program the flows so each physical interface forwards the packets to a dpdkvhost interface and the other way round

ovs-ofctl add-flow br-p1p1 in_port=16dl_type=0x0800idle_timeout=0action=output3ovs-ofctl add-flow br-p1p1 in_port=3dl_type=0x0800idle_timeout=0action=output16ovs-ofctl add-flow br-p1p1 in_port=4dl_type=0x0800idle_timeout=0action=output2ovs-ofctl add-flow br-p1p1 in_port=2dl_type=0x0800idle_timeout=0action=output4

Intelreg ONP Server Reference ArchitectureSolutions Guide

42

10 Users can now spawn their vBNG

qemu-kvm -cpu host -enable-kvm -m 4096 -smp 4cores=4threads=1sockets=1 -name VM1 -hda ltpath to the VM image filegt -mem-path devhugepages -mem-prealloc -vnc 2 -daemonize -net nicmodel=virtiomacaddr=001e776809fd -net tapifname=tap1script=nodownscript=no -netdev type=tapid=net1script=nodownscript=noifname=port3vhost=on -device virtio-net-pcinetdev=net1mac=000001000001csum=offgso=offguest_tso4=offguest_tso6=offguest_ecn=off -netdev type=tapid=net2script=nodownscript=noifname=port4vhost=on -device virtio-net-pcinetdev=net2mac=000001000002csum=offgso=offguest_tso4=offguest_tso6=offguest_ecn=off

43

Intelreg ONP Server Reference ArchitectureSolutions Guide

Appendix A Additional OpenDaylight Information

This section describes how OpenDaylight can be used in a multi-node setup Two hosts are used one running OpenDaylight OpenStack Controller + Compute and OVS The second host is the compute node This section describes how to create a Vxlan tunnel VMs and ping from one VM to another

Note Due to a known defect in ODL httpsbugsopendaylightorgshow_bugcgiid=2469 multi-node setup could not be verified

Following is a sample localconf for OpenDaylight host

[[local|localrc]]FORCE=yes

HOST_NAME=ltname of this machinegtHOST_IP=ltip of this machinegtHOST_IP_IFACE=ltmgmt ip isolated from internetgt

PUBLIC_INTERFACE=ltisolated IP could be same as HOST_IP_IFACEgtVLAN_INTERFACE=FLAT_INTERFACE=

ADMIN_PASSWORD=passwordMYSQL_PASSWORD=passwordDATABASE_PASSWORD=passwordSERVICE_PASSWORD=passwordSERVICE_TOKEN=no-token-passwordHORIZON_PASSWORD=passwordRABBIT_PASSWORD=password

disable_service n-netdisable_service n-cpu

enable_service q-svcenable_service q-agtenable_service q-dhcpenable_service q-l3enable_service q-metaenable_service neutronenable_service horizon

LOGFILE=$DESTstackshlogSCREEN_LOGDIR=$DESTscreenSYSLOG=TrueLOGDAYS=1

ODL startQ_HOST=$HOST_IPenable_service odl-serverenable_service odl-computeODL_MGR_IP=1011107ENABLED_SERVICES+=n-apin-crtn-objn-cpun-condn-schn-novncn-cauthn-cauthnovaENABLED_SERVICES+=cinderc-apic-volc-schc-bak

Intelreg ONP Server Reference ArchitectureSolutions Guide

44

Q_PLUGIN=ml2

Q_AGENT=openvswitchQ_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitchopendaylightQ_ML2_PLUGIN_TYPE_DRIVERS=vlanflatlocalQ_ML2_TENANT_NETWORK_TYPE=vxlan

ENABLE_TENANT_TUNNELS=TrueENABLE_TENANT_VLANS=True

PHYSICAL_NETWORK=physnet1ML2_VLAN_RANGES=physnet110001010OVS_PHYSICAL_BRIDGE=br-

MULTI_HOST=True

[[post-config|$NOVA_CONF]]

disable nova security groups[DEFAULT]firewall_driver=novavirtfirewallNoopFirewallDrivernovncproxy_host=0000novncproxy_port=6080

Here is a sample localconf for Compute Node

[[local|localrc]]

FORCE=yesMULTI_HOST=True

HOST_NAME=ltname of this machinegtHOST_IP=ltip of this machinegtHOST_IP_IFACE=ltisolated interfacegtSERVICE_HOST_NAME=ltname of the controller machinegtSERVICE_HOST=ltip of controller machinegtQ_HOST=$SERVICE_HOST

MYSQL_HOST=$SERVICE_HOSTRABBIT_HOST=$SERVICE_HOST

GLANCE_HOST=$SERVICE_HOSTGLANCE_HOSTPORT=$SERVICE_HOST9292KEYSTONE_AUTH_HOST=$SERVICE_HOSTKEYSTONE_SERVICE_HOST=ltip of controller machinegt

ADMIN_PASSWORD=passwordMYSQL_PASSWORD=passwordDATABASE_PASSWORD=passwordSERVICE_PASSWORD=passwordSERVICE_TOKEN=no-token-passwordHORIZON_PASSWORD=passwordRABBIT_PASSWORD=password

disable_all_services

enable_service rabbitenable_service n-cpuenable_service q-agtenable_service odl-compute

DEST=optstackLOGFILE=$DESTstackshlogSCREEN_LOGDIR=$DESTscreenSYSLOG=TrueLOGDAYS=1

45

Intelreg ONP Server Reference ArchitectureSolutions Guide

ODL_MGR_IP=ltip of controller machinegt

Q_PLUGIN=ml2Q_AGENT=openvswitchQ_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitchopendaylightQ_ML2_PLUGIN_TYPE_DRIVERS=vxlanOVS_NUM_HUGEPAGES=8192OVS_DATAPATH_TYPE=netdev

OVDK_OVS_GIT_TAG=

ENABLE_TENANT_TUNNELS=TrueENABLE_TENANT_VLANS=True

Q_ML2_TENANT_NETWORK_TYPE=vxlanML2_VLAN_RANGES=physnet110001010PHYSICAL_NETWORK=physnet1OVS_PHYSICAL_BRIDGE=br-p1p1

[[post-config|$NOVA_CONF]][DEFAULT]firewall_driver=novavirtfirewallNoopFirewallDrivervnc_enabled=Truevncserver_listen=0000vncserver_proxyclient_address=$HOST_IP

A1 Create VMs using DevStack Horizon GUI

After starting the OpenDaylight controller as described in Section 61 run a stack on the controller and compute nodes

Login to httpltcontrol node ip addressgt8080 to start horizon gui

Verify that the node shows up in the following GUI

Create a new Vxlan network

1 Click on the Networks tab

2 Click on the Create Network button

3 Enter the Network name then click Next

Intelreg ONP Server Reference ArchitectureSolutions Guide

46

4 Enter the subnet information then click Next

47

Intelreg ONP Server Reference ArchitectureSolutions Guide

5 Add additional information then click Next

6 Click the Create button

7 Create a VM instance by clicking the Launch Instances button

Intelreg ONP Server Reference ArchitectureSolutions Guide

48

8 Click on the Details tab to enter VM details

49

Intelreg ONP Server Reference ArchitectureSolutions Guide

9 Click on the Networking tab then enter network information

VMS will now be created

Instead of disabling the OVSDB neutron service by removing the OVSDB neutron bundle file it is possible to disable the bundle from the OSGi console However there does not appear to be a way to make this persistent so it must be done each time the controller restarts

Intelreg ONP Server Reference ArchitectureSolutions Guide

50

Once the controller is up and running connect to the OSGi console The ss command displays all of the bundles that are installed and their status Adding a string(s) filters the list of bundles List the OVSDB bundles

osgigt ss ovsFramework is launched

id State Bundle106 ACTIVE orgopendaylightovsdbnorthbound_050112 ACTIVE orgopendaylightovsdb_050262 ACTIVE orgopendaylightovsdbneutron_050

Note that there are three OVSDB bundles running (the OVSDB neutron bundle has not been removed in this case)

Disable the OVSDB neutron bundle and then list the OVSDB bundles again

osgigt stop 262osgigt ss ovsFramework is launched

id State Bundle106 ACTIVE orgopendaylightovsdbnorthbound_050112 ACTIVE orgopendaylightovsdb_050262 RESOLVED orgopendaylightovsdbneutron_050

Now the OVSDB neutron bundle is in the RESOLVED state which means that it is not active

51

Intelreg ONP Server Reference ArchitectureSolutions Guide

Appendix B BNG as an Appliance

Please download the latest BNG application from https01orgintel-data-plane-performance-demonstratorsdownloads More details about how BNG works can be found in https01orgintel-data-plane-performance-demonstratorsquick-overview

Intelreg ONP Server Reference ArchitectureSolutions Guide

52

NOTE This page intentionally left blank

53

Intelreg ONP Server Reference ArchitectureSolutions Guide

Appendix C Glossary

Acronym Description

ATR Application Targeted Routing

COTS Commercial Off‐The-Shelf

DPI Deep Packet Inspection

FCS Frame Check Sequence

GRE Generic Routing Encapsulation

GRO Generic Receive Offload

IOMMU InputOutput Memory Management Unit

Kpps Kilo packets per seconds

KVM Kernel-based Virtual Machine

LRO Large Receive Offload

MSI Message Signaling Interrupt

MPLS Multi-protocol Label Switching

Mpps Millions packets per seconds

NIC Network Interface Card

pps Packets per seconds

QAT Quick Assist Technology

QinQ VLAN stacking (8021ad)

RA Reference Architecture

RSC Receive Side Coalescing

RSS Receive Side Scaling

SP Service Provider

SR-IOV Single root IO Virtualization

TCO Total Cost of Ownership

TSO TCP Segmentation Offload

Intelreg ONP Server Reference ArchitectureSolutions Guide

54

NOTE This page intentionally left blank

55

Intelreg ONP Server Reference ArchitectureSolutions Guide

Appendix D References

Document Name Source

Internet Protocol version 4 httpwwwietforgrfcrfc791txt

Internet Protocol version 6 httpwwwfaqsorgrfcrfc2460txt

Intelreg 82599 10 Gigabit Ethernet Controller Datasheet httpwwwintelcomcontentwwwusenethernet-controllers82599-10-gbe-controller-datasheethtml

Intel DDIO httpswww-sslintelcomcontentwwwuseniodirect-data-i-ohtml

Bandwidth Sharing Fairness httpwwwintelcomcontentwwwusennetwork-adapters10-gigabit- network-adapters10-gbe-ethernet-flexible-port-partitioning-briefhtml

Design Considerations for efficient network applications with Intelreg multi-core processor- based systems on Linux

httpdownloadintelcomdesignintarchpapers324176pdf

OpenFlow with Intel 82599 httpftpsunetsepubLinuxdistributionsbifrostseminarsworkshop-2011-03-31Openflow_1103031pdf

Wu W DeMarP amp CrawfordM (2012) A Transport-Friendly NIC for Multicore Multiprocessor Systems

IEEE transactions on parallel and distributed systems vol 23 no 4 April 2012 httplssfnalgovarchive2010pubfermilab-pub-10-327-cdpdf

Why does Flow Director Cause Placket Reordering httparxivorgftparxivpapers110611060443pdf

IA packet processing httpwwwintelcompen_USembeddedhwswtechnologypacket- processing

High Performance Packet Processing on Cloud Platforms using Linux with Intelreg Architecture

httpnetworkbuildersintelcomdocsnetwork_builders_RA_packet_processingpdf

Packet Processing Performance of Virtualized Platforms with Linux and Intelreg Architecture

httpnetworkbuildersintelcomdocsnetwork_builders_RA_NFVpdf

DPDK httpwwwintelcomgodpdk

Intelreg DPDK Accelerated vSwitch https01orgpacket-processing

Intelreg ONP Server Reference ArchitectureSolutions Guide

56

LEGAL

By using this document in addition to any agreements you have with Intel you accept the terms set forth below

You may not use or facilitate the use of this document in connection with any infringement or other legal analysis concerning Intel products described herein You agree to grant Intel a non-exclusive royalty-free license to any patent claim thereafter drafted which includes subject matter disclosed herein

INFORMATION IN THIS DOCUMENT IS PROVIDED IN CONNECTION WITH INTEL PRODUCTS NO LICENSE EXPRESS OR IMPLIED BY ESTOPPEL OR OTHERWISE TO ANY INTELLECTUAL PROPERTY RIGHTS IS GRANTED BY THIS DOCUMENT EXCEPT AS PROVIDED IN INTELS TERMS AND CONDITIONS OF SALE FOR SUCH PRODUCTS INTEL ASSUMES NO LIABILITY WHATSOEVER AND INTEL DISCLAIMS ANY EXPRESS OR IMPLIED WARRANTY RELATING TO SALE ANDOR USE OF INTEL PRODUCTS INCLUDING LIABILITY OR WARRANTIES RELATING TO FITNESS FOR A PARTICULAR PURPOSE MERCHANTABILITY OR INFRINGEMENT OF ANY PATENT COPYRIGHT OR OTHER INTELLECTUAL PROPERTY RIGHT

Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors

Performance tests such as SYSmark and MobileMark are measured using specific computer systems components software operations and functions Any change to any of those factors may cause the results to vary You should consult other information and performance tests to assist you in fully evaluating your contemplated purchases including the performance of that product when combined with other products

The products described in this document may contain design defects or errors known as errata which may cause the product to deviate from published specifications Current characterized errata are available on request Contact your local Intel sales office or your distributor to obtain the latest specifications and before placing your product order

Intel technologies may require enabled hardware specific software or services activation Check with your system manufacturer or retailer Tests document performance of components on a particular test in specific systems Differences in hardware software or configuration will affect actual performance Consult other sources of information to evaluate performance as you consider your purchase For more complete information about performance and benchmark results visit httpwwwintelcomperformance

All products computer systems dates and figures specified are preliminary based on current expectations and are subject to change without notice Results have been estimated or simulated using internal Intel analysis or architecture simulation or modeling and provided to you for informational purposes Any differences in your system hardware software or configuration may affect your actual performance

No computer system can be absolutely secure Intel does not assume any liability for lost or stolen data or systems or any damages resulting from such losses

Intel does not control or audit third-party web sites referenced in this document You should visit the referenced web site and confirm whether referenced data are accurate

Intel Corporation may have patents or pending patent applications trademarks copyrights or other intellectual property rights that relate to the presented subject matter The furnishing of documents and other materials and information does not provide any license express or implied by estoppel or otherwise to any such patents trademarks copyrights or other intellectual property rights

copy 2014 Intel Corporation All rights reserved Intel the Intel logo Core Xeon and others are trademarks of Intel Corporation in the US andor other countries Other names and brands may be claimed as the property of others

  • Intelreg Open Network Platform Server Reference Architecture (Release 12)
    • Revision History
    • Contents
    • 10 Audience and Purpose
    • 20 Summary
      • 21 Network Services Examples
        • 211 Suricata (Next Generation IDSIPS engine)
        • 212 vBNG (Broadband Network Gateway)
            • 30 Hardware Components
            • 40 Software Versions
              • 41 Obtaining Software Ingredients
                • 50 Installation and Configuration Guide
                  • 51 Instructions Common to Compute and Controller Nodes
                    • 511 BIOS Settings
                    • 512 Operating System Installation and Configuration
                      • 5121 Getting the Fedora 20 DVD
                      • 5122 Fedora 20 Installation
                      • 5123 Additional Packages Installation and Upgrade
                      • 5124 Disable and Enable Services
                          • 52 Controller Node Setup
                            • 521 OpenStack (Juno)
                              • 5211 Network Requirements
                              • 5212 Storage Requirements
                              • 5213 OpenStack Installation Procedures
                                  • 53 Compute Node Setup
                                    • 531 Host Configuration
                                      • 5311 Using DevStack to Deploy vSwitch and OpenStack Components
                                          • 54 vIPS
                                            • 541 Network Configuration for non-vIPS Guests
                                                • 60 Testing the Setup
                                                  • 61 Preparation with OpenStack
                                                    • 611 Deploying Virtual Machines
                                                      • 6111 Default Settings
                                                      • 6112 Customer Settings
                                                      • 6113 Example mdash VM Deployment
                                                      • 6114 Local vIPS
                                                      • 6115 Remote vIPS
                                                        • 612 Non-uniform Memory Access (Numa) Placement and SR-IOV Pass-through for OpenStack
                                                          • 6121 Prepare Compute Node for SR-IOV Pass-through
                                                          • 6122 Devstack Configurations
                                                          • 6123 Create VM with Numa Placement and SR-IOV
                                                              • 62 Using OpenDaylight
                                                                • 621 Preparing the OpenDaylightController
                                                                  • 63 Border Network Gateway
                                                                    • 631 Installation and Configuration Inside the VM
                                                                    • 632 Installation and Configuration of the Back-to-Back Host (Packet Generator)
                                                                    • 633 Extra Preparations on the Compute Node
                                                                        • Appendix A Additional OpenDaylight Information
                                                                          • A1 Create VMs using DevStack Horizon GUI
                                                                            • Appendix B BNG as an Appliance
                                                                            • Appendix C Glossary
                                                                            • Appendix D References
                                                                            • LEGAL
Page 33: Intel Open Network Platform Server Reference Architecture ... · Intel® ONP Server Reference Architecture Solutions Guide 1.0 Audience and Purpose The primary audiences for this

33

Intelreg ONP Server Reference ArchitectureSolutions Guide

6 Modify etclibvirtqemuconf add

devvfiovfio

to

cgroup_device_acl list

An example follows

cgroup_device_acl = [devnull devfull devzerodevrandom devurandomdevptmx devkvm devkqemudevrtc devhpet devnettundevvfiovfio]

7 Enable the SR-IOV virtual function for an 82599 interface The following example enables 2 VFs for interface p1p1

echo 2 gt sysclassnetp1p1devicesriov_numvfs

To check that virtual functions are enabled

lspci -nn | grep 82599

The screen output should display the physical function and two virtual functions

6122 Devstack Configurations

In the following text the example uses a controller with IP address 1011121 and compute 1011124 PCI device vendor ID (8086) and product ID of the 82599 can be obtained from output (10fb for physical function and 10ed for VF)

lspci -nn | grep 82599

On Controller node

1 Edit Controller localconf Note that the same localconf file of Section 5213 is used here but adding the following

Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitchsriovnicswitch

[[post-config|$NOVA_CONF]][DEFAULT]scheduler_default_filters=RamFilterComputeFilterAvailabilityZoneFilterComputeCapabilitiesFilterImagePropertiesFilterPciPassthroughFilterNUMATopologyFilter pci_alias=namenianticproduct_id10edvendor_id8086

[[post-config|$Q_PLUGIN_CONF_FILE]][ml2_sriov]supported_pci_vendor_devs = 808610fb 808610ed

2 Run stacksh

On Compute node

1 Edit optstacknovarequirementstxt add ldquolibvirt-pythongt=128rdquo

echo libvirt-pythongt=128 gtgt optstacknovarequirementstxt

2 Edit Compute localconf for accelerated OVS Note that the same localconf file of Section 5311 is used here

Intelreg ONP Server Reference ArchitectureSolutions Guide

34

3 Adding the following

[[post-config|$NOVA_CONF]][DEFAULT]pci_passthrough_whitelist=address000008000vendor_id8086physical_networkphysnet1

pci_passthrough_whitelist=address000008100vendor_id8086physical_networkphysnet1

pci_passthrough_whitelist=address000008102vendor_id8086physical_networkphysnet1

4 Removing (or comment out) the following Note that currently SR-IOV pass-through is only supported with a standard OVS)

OVS_NUM_HUGEPAGES=8192OVS_DATAPATH_TYPE=netdev OVS_GIT_TAG=b35839f3855e3b812709c6ad1c9278f498aa9935

Run stacksh for both controller and compute nodes to complete the Devstack installation

6123 Create VM with Numa Placement and SR-IOV

1 After stacking is successful on both the controller and compute nodes verify the PCI pass-through device(s) are in the OpenStack deatbase

mysql -uroot -ppassword -h 1011121 nova -e select from pci_devices

2 The output should show entry(ies) of PCI device(s) similar to the following

| 2014-11-18 194114 | NULL | NULL | 0 | 1 | 3 | 000008100 | 10ed | 8086 | type-VF | pci_0000_08_10_0 | label_8086_10ed | available | phys_function 000008000 | NULL | NULL | 0 |

3 Next to create a flavor for example

nova flavor-create numa-flavor 1001 1024 4 1

where

flavor name = numa-flavorid = 1001virtual memory = 1024 Mbvirtual disk size = 4Gbnumber of virtual CPU = 1

4 Modify flavor for numa placement with PCI pass-through

nova flavor-key 1001 set pci_passthroughalias=niantic1 hwnuma_nodes=1 hwnuma_cpus0=0 hwnuma_mem0=1024

5 To show detailed information of the flavor

nova flavor-show 1001

6 Create a VM numa-vm1 with the flavor numa-flavor under the default project demo Note that the following example assumes a image fedora-basic and an availability zone zone-04 are already in place (see Section 6112) and the private is the default network for demo project

nova boot --image ltimage-idgt --flavor ltflavor-idgt --availability-zone ltzone-namegt--nic ltnetwork-idgt numa-vm1

where numa-vm1 is the name of instance of the VM to be booted

35

Intelreg ONP Server Reference ArchitectureSolutions Guide

Access the VM from the OpenStack Horizon the new VM shows two virtual network interfaces The interface with a SR-IOV VF should show a name of ensX where X is a numerical number For example ens5 If a DHCP server is available for the physical interface (p1p1 in this example) the VF gets an IP address automatically otherwise users can assign an IP address to the interface the same way as a standard network interface

To verify network connectivity through a VF users can set up two compute hosts and create a VM on each node After obtaining IP addresses the VMs should communicate with each other as with a normal network

62 Using OpenDaylightThis section describes how to download install and setup a OpenDaylight Controller

621 Preparing the OpenDaylightController1 Download the pre-built OpenDaylight Helium-SR1 distribution

wget httpnexusopendaylightorgcontentrepositoriesopendaylightreleaseorgopendaylightintegrationdistribution-karaf021-Helium-SR1distribution-karaf-021-Helium-SR1targz

2 Extract the archive and cd into it

tar xf distribution-karaf-021-Helium-SR1targzcd distribution-karaf-021-Helium-SR1

3 Use the binkaraf executable start the Karaf shell

Intelreg ONP Server Reference ArchitectureSolutions Guide

36

4 Install the required features

Karaf might take a long time to start or feature Install might fail if the host does not have network access Yoursquoll need to setup the appropriate proxy settings

63 Border Network GatewayThis section describes how to install and run a Border Network Gateway on a compute node that is prepared as described in Section 51 and Section 53 The example interface names from these sections have been maintained in this section too Also for simplicity the BNG is using the handle_none configuration mode which makes it work as a L2 forwarding engine The BNG is more complex than this and users who are interested to explore more of its capabilities should read https01orgintel-data-plane-performance-demonstratorsquick-overview

The setup to test the functionality of the vBNG follows

37

Intelreg ONP Server Reference ArchitectureSolutions Guide

631 Installation and Configuration Inside the VM1 Execute the following command

yum -y update

2 Disable SELinux

setenforce 0vi etcselinuxconfig

And change so SELINUX=disabled

3 Disable the firewall

systemctl disable firewalldservicereboot

4 Edit grub default configuration

vi etcdefaultgrub

Add hugepages to it

hellip noirqbalance intel_idlemax_cstate=0 processormax_cstate=0 ipv6disable=1 default_hugepagesz=1G hugepagesz=1G hugepages=2 isolcpus=1234

5 Rebuild grub config and reboot the system

grub2-mkconfig -o bootgrub2grubcfgreboot

6 Verify that hugepages are available in the VM

cat procmeminfoHugePages_Total2HugePages_Free2Hugepagesize1048576 kB

7 Add the following to the end of ~bashrc file

---------------------------------------------export RTE_SDK=rootdpdkexport RTE_TARGET=x86_64-native-linuxapp-gccexport OVS_DIR=rootovs

export RTE_UNBIND=$RTE_SDKtoolsdpdk_nic_bindpyexport DPDK_DIR=$RTE_SDKexport DPDK_BUILD=$DPDK_DIR$RTE_TARGET ---------------------------------------------

8 Re-login or source that file

bashrc

9 Install DPDK

git clone httpdpdkorggitdpdkcd dpdkgit checkout v171make install T=$RTE_TARGETmodprobe uioinsmod $RTE_SDK$RTE_TARGETkmodigb_uioko

Intelreg ONP Server Reference ArchitectureSolutions Guide

38

10 Check the PCI addresses of the 82599 cards

lspci | grep Network00040 Ethernet controller Intel Corporation 82599ES 10-Gigabit SFISFP+ Network Connection (rev 01)00050 Ethernet controller Intel Corporation 82599ES 10-Gigabit SFISFP+ Network Connection (rev 01)00060 Ethernet controller Intel Corporation 82599ES 10-Gigabit SFISFP+ Network Connection (rev 01)00070 Ethernet controller Intel Corporation 82599ES 10-Gigabit SFISFP+ Network Connection (rev 01)

11 Make sure that the correct PCI addresses are listed in the script bind_to_igb_uiosh

12 Download BNG packages

wget https01orgsitesdefaultfilesdownloadsintel-data-plane-performance-demonstratorsdppd-bng-v013zip

13 Extract DPPD BNG sources

unzip dppd-bng-v013zip

14 Build BNG DPPD application

yum -y install ncurses-develcd dppd-BNG-v013make

15 Refer to Section 633 ldquoExtra Preparations on the Compute Noderdquo before running the BNG application in the VM inside the compute node

16 Make sure that the application starts

builddppd -f confighandle_nonecfg

The handle none configuration should be passing all through traffic between ports which is essentially similar to the L2 forwarding test The config directory contains additional complex BNG configurations and Pktgen scripts Additional BNG specific workloads can be found in the dppd-BNGv013pktgen-scripts directory

Following is a sample graphic of the BNG running in a VM with 2 ports

39

Intelreg ONP Server Reference ArchitectureSolutions Guide

Exit the application by pressing ESC or CTRL-C

Refer to Section 632 regarding installation and running the software traffic generator

For the sanity check test users can use the pktgen wrapper script onps_pktgen-64bytes-UDP-2portssh for running PktGen (on its dedicated server) in order to test the handle-none throughput for two physical and two virtual ports yoursquoll need to update the PKTGEN_DIR at the top of the file to point to the right directory which is the following referring to Section 632

PKTGEN_DIR=homestackgitPktgen-DPDKpktgen-64bytessh

632 Installation and Configuration of the Back-to-Back Host (Packet Generator)

The back-to-back host can be any Intelreg Xeonreg processor-based system or it can be any Compute Node that has been prepared using the instructions in Section 51 and Section 53 For simplicity Intel assumes the later was the case Also assume that the git directory for stack user is in homestackgit

1 In the git directory get the source from Github

git clone httpsgithubcomPktgenPktgen-DPDKgitcd Pktgen-DPDK

2 An extra package must be installed for Pktgen to compile correctly

yum -y install libpcap-devel

Pktgen comes with its own distribution of DPDK sources This bundled version of DPDK must be used Note that it contains some WindRiver specific helper libraries that are not in the default DPDK distribution which Pktgen depends on

3 The $RTE_TARGET variable must be set to a specific value Otherwise these libraries will not build

cdvi bashrc

Add the following three lines to the end

export RTE_SDK=$HOMEPktgen-DPDKdpdkexport RTE_TARGET=x86_64-pktgen-linuxapp-gccexport PKTGEN_DIR=$HOMEPktgen-DPDK

4 Re-login or execute the following command

bashrc

5 Build the basic DPDK libraries and extra helpers

cd $RTE_SDKmake install T=$RTE_TARGET

6 Build Pktgen

cd examplespktgenmake

7 Adapt the dpdk_nic_bindpy script accordingly to the actual NICs in use so both interfaces are bound to igb_uio so DPDK can use them See the details of the command the follows

toolsdpdk_nic_bindpy --status

8 Use onps_pktgen-64-bytes-UDP-2portssh from onps_server_1_2targz

Intelreg ONP Server Reference ArchitectureSolutions Guide

40

9 Now run the script as root after the Compute node has been setup as in Section 633 the VM of the BNG has been prepared as in Section 631 inside the VM and the BNG has been run inside the VM

633 Extra Preparations on the Compute Node1 Do the following as a stack user

cd homestackdevstackvi localconf

2 Comment out the following

PHYSICAL_NETWORK=physnet1OVS_PHYSICAL_BRIDGE=br-p1p1

And at the same time add the following line right below the previous commented ones

OVS_BRIDGE_MAPPINGS=defaultbr-p1p1physnet1br-p1p2

3 Run again as stack user

unstacksh

stacksh

This causes both physical interfaces to come up and get bound to the DPDK Also a bridge is created on top of each of these interfaces

ovs-vsctl show
b52bd3ed-0f6c-45b9-ace1-846d901bed64
    Bridge br-p1p1
        Port br-p1p1
            Interface br-p1p1
                type: internal
        Port p1p1
            Interface p1p1
                type: dpdkphy
                options: {port=0}
        Port phy-br-p1p1
            Interface phy-br-p1p1
                type: patch
                options: {peer=int-br-p1p1}
    Bridge br-int
        fail_mode: secure
        Port int-br-p1p2
            Interface int-br-p1p2
                type: patch
                options: {peer=phy-br-p1p2}
        Port int-br-p1p1
            Interface int-br-p1p1
                type: patch
                options: {peer=phy-br-p1p1}
        Port br-int
            Interface br-int
                type: internal
    Bridge br-p1p2
        Port phy-br-p1p2
            Interface phy-br-p1p2
                type: patch
                options: {peer=int-br-p1p2}
        Port p1p2
            Interface p1p2
                type: dpdkphy
                options: {port=1}
        Port br-p1p2
            Interface br-p1p2
                type: internal

4 Move the p1p2 physical port under the same bridge as p1p1

ovs-vsctl del-port p1p2
ovs-vsctl add-port br-p1p1 p1p2 -- set Interface p1p2 type=dpdkphy option:port=1

5 Stop the OpenStack agent. Rejoin the DevStack screen session, switch to window 1, stop the process with Ctrl-C, and detach:

./rejoin-stack.sh
ctrl-a 1
ctrl-c
ctrl-a d

6 Add the dpdkvhost interfaces for the VM

ovs-vsctl --no-wait add-port br-p1p1 port3 -- set Interface port3 type=dpdkvhost ofport_request=3
ovs-vsctl --no-wait add-port br-p1p1 port4 -- set Interface port4 type=dpdkvhost ofport_request=4

7 Find out the OpenFlow port numbers of the attached interfaces:

ovs-ofctl show br-p1p1

The output should be similar to the following. Note the number to the left of each interface name; it is the OpenFlow port number used in the flow rules below.

OFPT_FEATURES_REPLY (xid=0x2): dpid:0000286031010000
n_tables:254, n_buffers:256
capabilities: FLOW_STATS TABLE_STATS QUEUE_STATS ARP_MATCH_IP
actions: OUTPUT SET_VLAN_VID SET_VLAN_PCP STRIP_VLAN SET_DL_SRC SET_DL_DST SET_NW_SRC SET_NW_DST SET_TP_SRC SET_TP_DST
 1(phy-br-p1p1): addr:9e:ae:92:25:3c:c1
     config: 0, state: 0, speed: 0 Mbps now, 0 Mbps max
 2(p1p2): addr:9e:ae:92:25:3c:c1
     config: 0, state: 0, speed: 0 Mbps now, 0 Mbps max
 3(port3): addr:9e:ae:92:25:3c:c1
     config: 0, state: 0, speed: 0 Mbps now, 0 Mbps max
 4(port4): addr:49:04:ff:7f:00:00
     config: 0, state: 0, speed: 0 Mbps now, 0 Mbps max
 16(p1p1): addr:49:04:ff:7f:00:00
     config: 0, state: 0, speed: 0 Mbps now, 0 Mbps max
 LOCAL(br-p1p1): addr:9e:ae:92:25:3c:c1
     config: 0, state: 0, speed: 0 Mbps now, 0 Mbps max
OFPT_GET_CONFIG_REPLY (xid=0x4): frags=normal miss_send_len=0

8 Clean up the flow table of the bridge

ovs-ofctl del-flows br-p1p1

9 Program the flows so each physical interface forwards the packets to a dpdkvhost interface and the other way round

ovs-ofctl add-flow br-p1p1 in_port=16,dl_type=0x0800,idle_timeout=0,action=output:3
ovs-ofctl add-flow br-p1p1 in_port=3,dl_type=0x0800,idle_timeout=0,action=output:16
ovs-ofctl add-flow br-p1p1 in_port=4,dl_type=0x0800,idle_timeout=0,action=output:2
ovs-ofctl add-flow br-p1p1 in_port=2,dl_type=0x0800,idle_timeout=0,action=output:4
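To confirm that the four rules are installed (and, once traffic is flowing, that their packet counters increase), the flow table can be dumped; this quick check is not part of the original procedure:

ovs-ofctl dump-flows br-p1p1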


10 Users can now spawn their vBNG

qemu-kvm -cpu host -enable-kvm -m 4096 -smp 4,cores=4,threads=1,sockets=1 -name VM1 \
  -hda <path to the VM image file> -mem-path /dev/hugepages -mem-prealloc -vnc :2 -daemonize \
  -net nic,model=virtio,macaddr=00:1e:77:68:09:fd \
  -net tap,ifname=tap1,script=no,downscript=no \
  -netdev type=tap,id=net1,script=no,downscript=no,ifname=port3,vhost=on \
  -device virtio-net-pci,netdev=net1,mac=00:00:01:00:00:01,csum=off,gso=off,guest_tso4=off,guest_tso6=off,guest_ecn=off \
  -netdev type=tap,id=net2,script=no,downscript=no,ifname=port4,vhost=on \
  -device virtio-net-pci,netdev=net2,mac=00:00:01:00:00:02,csum=off,gso=off,guest_tso4=off,guest_tso6=off,guest_ecn=off
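The -vnc :2 option exposes the VM console on VNC display :2 of the compute node (TCP port 5902). As a quick check, one can connect from a workstation with any VNC viewer; the hostname below is a placeholder:

vncviewer <compute-node-ip>:2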


Appendix A Additional OpenDaylight Information

This section describes how OpenDaylight can be used in a multi-node setup. Two hosts are used: one runs OpenDaylight, the OpenStack Controller + Compute services, and OVS; the second host is the compute node. This section describes how to create a VXLAN tunnel, create VMs, and ping from one VM to another.

Note: Due to a known defect in ODL (https://bugs.opendaylight.org/show_bug.cgi?id=2469), the multi-node setup could not be verified.

Following is a sample local.conf for the OpenDaylight host:

[[local|localrc]]
FORCE=yes

HOST_NAME=<name of this machine>
HOST_IP=<ip of this machine>
HOST_IP_IFACE=<mgmt ip isolated from internet>

PUBLIC_INTERFACE=<isolated IP could be same as HOST_IP_IFACE>
VLAN_INTERFACE=
FLAT_INTERFACE=

ADMIN_PASSWORD=password
MYSQL_PASSWORD=password
DATABASE_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=no-token-password
HORIZON_PASSWORD=password
RABBIT_PASSWORD=password

disable_service n-net
disable_service n-cpu

enable_service q-svc
enable_service q-agt
enable_service q-dhcp
enable_service q-l3
enable_service q-meta
enable_service neutron
enable_service horizon

LOGFILE=$DEST/stack.sh.log
SCREEN_LOGDIR=$DEST/screen
SYSLOG=True
LOGDAYS=1

# ODL start
Q_HOST=$HOST_IP
enable_service odl-server
enable_service odl-compute
ODL_MGR_IP=10.11.10.7
ENABLED_SERVICES+=,n-api,n-crt,n-obj,n-cpu,n-cond,n-sch,n-novnc,n-cauth,n-cauth,nova
ENABLED_SERVICES+=,cinder,c-api,c-vol,c-sch,c-bak

Q_PLUGIN=ml2

Q_AGENT=openvswitch
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch,opendaylight
Q_ML2_PLUGIN_TYPE_DRIVERS=vlan,flat,local
Q_ML2_TENANT_NETWORK_TYPE=vxlan

ENABLE_TENANT_TUNNELS=True
ENABLE_TENANT_VLANS=True

PHYSICAL_NETWORK=physnet1
ML2_VLAN_RANGES=physnet1:1000:1010
OVS_PHYSICAL_BRIDGE=br-

MULTI_HOST=True

[[post-config|$NOVA_CONF]]
# disable nova security groups
[DEFAULT]
firewall_driver=nova.virt.firewall.NoopFirewallDriver
novncproxy_host=0.0.0.0
novncproxy_port=6080

Here is a sample local.conf for the compute node:

[[local|localrc]]

FORCE=yes
MULTI_HOST=True

HOST_NAME=<name of this machine>
HOST_IP=<ip of this machine>
HOST_IP_IFACE=<isolated interface>
SERVICE_HOST_NAME=<name of the controller machine>
SERVICE_HOST=<ip of controller machine>
Q_HOST=$SERVICE_HOST

MYSQL_HOST=$SERVICE_HOST
RABBIT_HOST=$SERVICE_HOST

GLANCE_HOST=$SERVICE_HOST
GLANCE_HOSTPORT=$SERVICE_HOST:9292
KEYSTONE_AUTH_HOST=$SERVICE_HOST
KEYSTONE_SERVICE_HOST=<ip of controller machine>

ADMIN_PASSWORD=password
MYSQL_PASSWORD=password
DATABASE_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=no-token-password
HORIZON_PASSWORD=password
RABBIT_PASSWORD=password

disable_all_services

enable_service rabbit
enable_service n-cpu
enable_service q-agt
enable_service odl-compute

DEST=/opt/stack
LOGFILE=$DEST/stack.sh.log
SCREEN_LOGDIR=$DEST/screen
SYSLOG=True
LOGDAYS=1

ODL_MGR_IP=<ip of controller machine>

Q_PLUGIN=ml2
Q_AGENT=openvswitch
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch,opendaylight
Q_ML2_PLUGIN_TYPE_DRIVERS=vxlan
OVS_NUM_HUGEPAGES=8192
OVS_DATAPATH_TYPE=netdev

OVDK_OVS_GIT_TAG=

ENABLE_TENANT_TUNNELS=True
ENABLE_TENANT_VLANS=True

Q_ML2_TENANT_NETWORK_TYPE=vxlan
ML2_VLAN_RANGES=physnet1:1000:1010
PHYSICAL_NETWORK=physnet1
OVS_PHYSICAL_BRIDGE=br-p1p1

[[post-config|$NOVA_CONF]]
[DEFAULT]
firewall_driver=nova.virt.firewall.NoopFirewallDriver
vnc_enabled=True
vncserver_listen=0.0.0.0
vncserver_proxyclient_address=$HOST_IP

A.1 Create VMs using DevStack Horizon GUI

After starting the OpenDaylight controller as described in Section 6.1, run stack.sh on the controller and compute nodes.

Log in to http://<control node ip address>:8080 to start the Horizon GUI.

Verify that the node shows up in the following GUI

Create a new Vxlan network

1 Click on the Networks tab

2 Click on the Create Network button

3 Enter the Network name then click Next


4 Enter the subnet information then click Next


5 Add additional information then click Next

6 Click the Create button

7 Create a VM instance by clicking the Launch Instances button


8 Click on the Details tab to enter VM details


9 Click on the Networking tab then enter network information

VMs will now be created.
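For reference, the same network and VM can also be created from the command line. The following is a rough, hedged equivalent of the GUI steps above; the network name, CIDR, and flavor are illustrative, and the fedora-basic image is the one referenced elsewhere in this guide:

neutron net-create vxlan-net
neutron subnet-create vxlan-net 10.1.1.0/24 --name vxlan-subnet
# Grab the network id from the listing and boot a VM attached to it.
NET_ID=$(neutron net-list | awk '/vxlan-net/ {print $2}')
nova boot --flavor m1.small --image fedora-basic --nic net-id=$NET_ID vm1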

Instead of disabling the OVSDB neutron service by removing the OVSDB neutron bundle file, it is possible to disable the bundle from the OSGi console. However, there does not appear to be a way to make this persistent, so it must be done each time the controller restarts.


Once the controller is up and running, connect to the OSGi console. The ss command displays all of the bundles that are installed and their status; adding a string filters the list of bundles. List the OVSDB bundles:

osgi> ss ovs
Framework is launched

id    State     Bundle
106   ACTIVE    org.opendaylight.ovsdb.northbound_0.5.0
112   ACTIVE    org.opendaylight.ovsdb_0.5.0
262   ACTIVE    org.opendaylight.ovsdb.neutron_0.5.0

Note that there are three OVSDB bundles running (the OVSDB neutron bundle has not been removed in this case)

Disable the OVSDB neutron bundle and then list the OVSDB bundles again

osgi> stop 262
osgi> ss ovs
Framework is launched

id    State     Bundle
106   ACTIVE    org.opendaylight.ovsdb.northbound_0.5.0
112   ACTIVE    org.opendaylight.ovsdb_0.5.0
262   RESOLVED  org.opendaylight.ovsdb.neutron_0.5.0

Now the OVSDB neutron bundle is in the RESOLVED state, which means that it is not active.
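Should the bundle need to be re-enabled later, it can be started again by its id from the same console and its state re-checked; this is shown only for completeness and is not required for this setup:

osgi> start 262
osgi> ss ovs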


Appendix B BNG as an Appliance

Please download the latest BNG application from https://01.org/intel-data-plane-performance-demonstrators/downloads. More details about how the BNG works can be found at https://01.org/intel-data-plane-performance-demonstrators/quick-overview.
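As an example, the v013 package referenced in Section 6.3.1 can be fetched and built on the host in the same way it is built inside the VM (paths and package version as used earlier in this guide; a newer release may supersede them):

wget https://01.org/sites/default/files/downloads/intel-data-plane-performance-demonstrators/dppd-bng-v013.zip
unzip dppd-bng-v013.zip
cd dppd-BNG-v013
yum -y install ncurses-devel
make
# Start the BNG in the simple L2-forwarding (handle_none) configuration.
./build/dppd -f config/handle_none.cfg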



Appendix C Glossary

Acronym Description

ATR Application Targeted Routing

COTS Commercial Off‐The-Shelf

DPI Deep Packet Inspection

FCS Frame Check Sequence

GRE Generic Routing Encapsulation

GRO Generic Receive Offload

IOMMU InputOutput Memory Management Unit

Kpps Kilo packets per second

KVM Kernel-based Virtual Machine

LRO Large Receive Offload

MSI Message Signaled Interrupt

MPLS Multi-protocol Label Switching

Mpps Million packets per second

NIC Network Interface Card

pps Packets per second

QAT Quick Assist Technology

QinQ VLAN stacking (802.1ad)

RA Reference Architecture

RSC Receive Side Coalescing

RSS Receive Side Scaling

SP Service Provider

SR-IOV Single root IO Virtualization

TCO Total Cost of Ownership

TSO TCP Segmentation Offload



Appendix D References

Document Name / Source

Internet Protocol version 4: http://www.ietf.org/rfc/rfc791.txt

Internet Protocol version 6: http://www.faqs.org/rfc/rfc2460.txt

Intel® 82599 10 Gigabit Ethernet Controller Datasheet: http://www.intel.com/content/www/us/en/ethernet-controllers/82599-10-gbe-controller-datasheet.html

Intel DDIO: https://www-ssl.intel.com/content/www/us/en/io/direct-data-i-o.html

Bandwidth Sharing Fairness: http://www.intel.com/content/www/us/en/network-adapters/10-gigabit-network-adapters/10-gbe-ethernet-flexible-port-partitioning-brief.html

Design Considerations for Efficient Network Applications with Intel® Multi-Core Processor-Based Systems on Linux: http://download.intel.com/design/intarch/papers/324176.pdf

OpenFlow with Intel 82599: http://ftp.sunet.se/pub/Linux/distributions/bifrost/seminars/workshop-2011-03-31/Openflow_1103031.pdf

Wu, W., DeMar, P., & Crawford, M. (2012). A Transport-Friendly NIC for Multicore/Multiprocessor Systems. IEEE Transactions on Parallel and Distributed Systems, vol. 23, no. 4, April 2012: http://lss.fnal.gov/archive/2010/pub/fermilab-pub-10-327-cd.pdf

Why Does Flow Director Cause Packet Reordering?: http://arxiv.org/ftp/arxiv/papers/1106/1106.0443.pdf

IA Packet Processing: http://www.intel.com/p/en_US/embedded/hwsw/technology/packet-processing

High Performance Packet Processing on Cloud Platforms using Linux with Intel® Architecture: http://networkbuilders.intel.com/docs/network_builders_RA_packet_processing.pdf

Packet Processing Performance of Virtualized Platforms with Linux and Intel® Architecture: http://networkbuilders.intel.com/docs/network_builders_RA_NFV.pdf

DPDK: http://www.intel.com/go/dpdk

Intel® DPDK Accelerated vSwitch: https://01.org/packet-processing


LEGAL

By using this document, in addition to any agreements you have with Intel, you accept the terms set forth below.

You may not use or facilitate the use of this document in connection with any infringement or other legal analysis concerning Intel products described herein. You agree to grant Intel a non-exclusive, royalty-free license to any patent claim thereafter drafted which includes subject matter disclosed herein.

INFORMATION IN THIS DOCUMENT IS PROVIDED IN CONNECTION WITH INTEL PRODUCTS. NO LICENSE, EXPRESS OR IMPLIED, BY ESTOPPEL OR OTHERWISE, TO ANY INTELLECTUAL PROPERTY RIGHTS IS GRANTED BY THIS DOCUMENT, EXCEPT AS PROVIDED IN INTEL'S TERMS AND CONDITIONS OF SALE FOR SUCH PRODUCTS. INTEL ASSUMES NO LIABILITY WHATSOEVER, AND INTEL DISCLAIMS ANY EXPRESS OR IMPLIED WARRANTY RELATING TO SALE AND/OR USE OF INTEL PRODUCTS, INCLUDING LIABILITY OR WARRANTIES RELATING TO FITNESS FOR A PARTICULAR PURPOSE, MERCHANTABILITY, OR INFRINGEMENT OF ANY PATENT, COPYRIGHT, OR OTHER INTELLECTUAL PROPERTY RIGHT.

Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors.

Performance tests, such as SYSmark and MobileMark, are measured using specific computer systems, components, software, operations and functions. Any change to any of those factors may cause the results to vary. You should consult other information and performance tests to assist you in fully evaluating your contemplated purchases, including the performance of that product when combined with other products.

The products described in this document may contain design defects or errors known as errata, which may cause the product to deviate from published specifications. Current characterized errata are available on request. Contact your local Intel sales office or your distributor to obtain the latest specifications and before placing your product order.

Intel technologies may require enabled hardware, specific software, or services activation. Check with your system manufacturer or retailer. Tests document performance of components on a particular test, in specific systems. Differences in hardware, software, or configuration will affect actual performance. Consult other sources of information to evaluate performance as you consider your purchase. For more complete information about performance and benchmark results, visit http://www.intel.com/performance.

All products, computer systems, dates and figures specified are preliminary based on current expectations, and are subject to change without notice. Results have been estimated or simulated using internal Intel analysis or architecture simulation or modeling, and provided to you for informational purposes. Any differences in your system hardware, software or configuration may affect your actual performance.

No computer system can be absolutely secure. Intel does not assume any liability for lost or stolen data or systems or any damages resulting from such losses.

Intel does not control or audit third-party web sites referenced in this document. You should visit the referenced web site and confirm whether referenced data are accurate.

Intel Corporation may have patents or pending patent applications, trademarks, copyrights, or other intellectual property rights that relate to the presented subject matter. The furnishing of documents and other materials and information does not provide any license, express or implied, by estoppel or otherwise, to any such patents, trademarks, copyrights, or other intellectual property rights.

© 2014 Intel Corporation. All rights reserved. Intel, the Intel logo, Core, Xeon, and others are trademarks of Intel Corporation in the U.S. and/or other countries. Other names and brands may be claimed as the property of others.

Intelreg ONP Server Reference ArchitectureSolutions Guide

Exit the application by pressing ESC or CTRL-C

Refer to Section 632 regarding installation and running the software traffic generator

For the sanity check test users can use the pktgen wrapper script onps_pktgen-64bytes-UDP-2portssh for running PktGen (on its dedicated server) in order to test the handle-none throughput for two physical and two virtual ports yoursquoll need to update the PKTGEN_DIR at the top of the file to point to the right directory which is the following referring to Section 632

PKTGEN_DIR=homestackgitPktgen-DPDKpktgen-64bytessh

632 Installation and Configuration of the Back-to-Back Host (Packet Generator)

The back-to-back host can be any Intelreg Xeonreg processor-based system or it can be any Compute Node that has been prepared using the instructions in Section 51 and Section 53 For simplicity Intel assumes the later was the case Also assume that the git directory for stack user is in homestackgit

1 In the git directory get the source from Github

git clone httpsgithubcomPktgenPktgen-DPDKgitcd Pktgen-DPDK

2 An extra package must be installed for Pktgen to compile correctly

yum -y install libpcap-devel

Pktgen comes with its own distribution of DPDK sources This bundled version of DPDK must be used Note that it contains some WindRiver specific helper libraries that are not in the default DPDK distribution which Pktgen depends on

3 The $RTE_TARGET variable must be set to a specific value Otherwise these libraries will not build

cdvi bashrc

Add the following three lines to the end

export RTE_SDK=$HOMEPktgen-DPDKdpdkexport RTE_TARGET=x86_64-pktgen-linuxapp-gccexport PKTGEN_DIR=$HOMEPktgen-DPDK

4 Re-login or execute the following command

bashrc

5 Build the basic DPDK libraries and extra helpers

cd $RTE_SDKmake install T=$RTE_TARGET

6 Build Pktgen

cd examplespktgenmake

7 Adapt the dpdk_nic_bindpy script accordingly to the actual NICs in use so both interfaces are bound to igb_uio so DPDK can use them See the details of the command the follows

toolsdpdk_nic_bindpy --status

8 Use onps_pktgen-64-bytes-UDP-2portssh from onps_server_1_2targz

Intelreg ONP Server Reference ArchitectureSolutions Guide

40

9 Now run the script as root after the Compute node has been setup as in Section 633 the VM of the BNG has been prepared as in Section 631 inside the VM and the BNG has been run inside the VM

633 Extra Preparations on the Compute Node1 Do the following as a stack user

cd homestackdevstackvi localconf

2 Comment out the following

PHYSICAL_NETWORK=physnet1OVS_PHYSICAL_BRIDGE=br-p1p1

And at the same time add the following line right below the previous commented ones

OVS_BRIDGE_MAPPINGS=defaultbr-p1p1physnet1br-p1p2

3 Run again as stack user

unstacksh

stacksh

This causes both physical interfaces to come up and get bound to the DPDK Also a bridge is created on top of each of these interfaces

ovs-vsctl showb52bd3ed-0f6c-45b9-ace1-846d901bed64 Bridge br-p1p1 Port br-p1p1 Interface br-p1p1 type internal Port p1p1 Interface p1p1 type dpdkphy options port=0 Port phy-br-p1p1 Interface phy-br-p1p1 type patch options peer=int-br-p1p1 Bridge br-int fail_mode secure Port int-br-p1p2 Interface int-br-p1p2 type patch options peer=phy-br-p1p2 Port int-br-p1p1 Interface int-br-p1p1 type patch options peer=phy-br-p1p1 Port br-int Interface br-int type internal Bridge br-p1p2 Port phy-br-p1p2 Interface phy-br-p1p2 type patch options peer=int-br-p1p2 Port p1p2 Interface p1p2 type dpdkphy options port=1 Port br-p1p2 Interface br-p1p2

41

Intelreg ONP Server Reference ArchitectureSolutions Guide

type internal

4 Move the p1p2 physical port under the same bridge as p1p1

ovs-vsctl del-port p1p2ovs-vsctl add-port br-p1p1 p1p2 -- set Interface p1p2 type=dpdkphy optionport=1

5 Delete the agent of OpenStack

rejoin-stackshctrl-a 1ctrl-cctrl-ad

6 Add the dpdkvhost interfaces for the VM

ovs-vsctl --no-wait add-port br-p1p1 port3 -- set Interface port3 type=dpdkvhost ofport_request=3ovs-vsctl --no-wait add-port br-p1p1 port4 -- set Interface port4 type=dpdkvhost ofport_request=4

7 Find out the port number of the obstructed interfaces

ovs-ofctl show br-p1p1

The output should be similar to the following Note the number on the left of the interface because its the obstructed port number

OFPT_FEATURES_REPLY (xid=0x2) dpid0000286031010000n_tables254 n_buffers256capabilities FLOW_STATS TABLE_STATS QUEUE_STATS ARP_MATCH_IPactions OUTPUT SET_VLAN_VID SET_VLAN_PCP STRIP_VLAN SET_DL_SRC SET_DL_DST SET_NW_SRC SET_NW_DST SET_TP_SRC SET_TP_DST 1(phy-br-p1p1) addr9eae92253cc1 config 0 state 0 speed 0 Mbps now 0 Mbps max 2(p1p2) addr9eae92253cc1 config 0 state 0 speed 0 Mbps now 0 Mbps max 3(port3) addr9eae92253cc1 config 0 state 0 speed 0 Mbps now 0 Mbps max 4(port4) addr4904ff7f0000 config 0 state 0 speed 0 Mbps now 0 Mbps max 16(p1p1) addr4904ff7f0000 config 0 state 0 speed 0 Mbps now 0 Mbps max LOCAL(br-p1p1) addr9eae92253cc1 config 0 state 0 speed 0 Mbps now 0 Mbps maxOFPT_GET_CONFIG_REPLY (xid=0x4) frags=normal miss_send_len=0

8 Clean up the flow table of the bridge

ovs-ofctl del-flows br-p1p1

9 Program the flows so each physical interface forwards the packets to a dpdkvhost interface and the other way round

ovs-ofctl add-flow br-p1p1 in_port=16dl_type=0x0800idle_timeout=0action=output3ovs-ofctl add-flow br-p1p1 in_port=3dl_type=0x0800idle_timeout=0action=output16ovs-ofctl add-flow br-p1p1 in_port=4dl_type=0x0800idle_timeout=0action=output2ovs-ofctl add-flow br-p1p1 in_port=2dl_type=0x0800idle_timeout=0action=output4

Intelreg ONP Server Reference ArchitectureSolutions Guide

42

10 Users can now spawn their vBNG

qemu-kvm -cpu host -enable-kvm -m 4096 -smp 4cores=4threads=1sockets=1 -name VM1 -hda ltpath to the VM image filegt -mem-path devhugepages -mem-prealloc -vnc 2 -daemonize -net nicmodel=virtiomacaddr=001e776809fd -net tapifname=tap1script=nodownscript=no -netdev type=tapid=net1script=nodownscript=noifname=port3vhost=on -device virtio-net-pcinetdev=net1mac=000001000001csum=offgso=offguest_tso4=offguest_tso6=offguest_ecn=off -netdev type=tapid=net2script=nodownscript=noifname=port4vhost=on -device virtio-net-pcinetdev=net2mac=000001000002csum=offgso=offguest_tso4=offguest_tso6=offguest_ecn=off

43

Intelreg ONP Server Reference ArchitectureSolutions Guide

Appendix A Additional OpenDaylight Information

This section describes how OpenDaylight can be used in a multi-node setup Two hosts are used one running OpenDaylight OpenStack Controller + Compute and OVS The second host is the compute node This section describes how to create a Vxlan tunnel VMs and ping from one VM to another

Note Due to a known defect in ODL httpsbugsopendaylightorgshow_bugcgiid=2469 multi-node setup could not be verified

Following is a sample localconf for OpenDaylight host

[[local|localrc]]FORCE=yes

HOST_NAME=ltname of this machinegtHOST_IP=ltip of this machinegtHOST_IP_IFACE=ltmgmt ip isolated from internetgt

PUBLIC_INTERFACE=ltisolated IP could be same as HOST_IP_IFACEgtVLAN_INTERFACE=FLAT_INTERFACE=

ADMIN_PASSWORD=passwordMYSQL_PASSWORD=passwordDATABASE_PASSWORD=passwordSERVICE_PASSWORD=passwordSERVICE_TOKEN=no-token-passwordHORIZON_PASSWORD=passwordRABBIT_PASSWORD=password

disable_service n-netdisable_service n-cpu

enable_service q-svcenable_service q-agtenable_service q-dhcpenable_service q-l3enable_service q-metaenable_service neutronenable_service horizon

LOGFILE=$DESTstackshlogSCREEN_LOGDIR=$DESTscreenSYSLOG=TrueLOGDAYS=1

ODL startQ_HOST=$HOST_IPenable_service odl-serverenable_service odl-computeODL_MGR_IP=1011107ENABLED_SERVICES+=n-apin-crtn-objn-cpun-condn-schn-novncn-cauthn-cauthnovaENABLED_SERVICES+=cinderc-apic-volc-schc-bak

Intelreg ONP Server Reference ArchitectureSolutions Guide

44

Q_PLUGIN=ml2

Q_AGENT=openvswitchQ_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitchopendaylightQ_ML2_PLUGIN_TYPE_DRIVERS=vlanflatlocalQ_ML2_TENANT_NETWORK_TYPE=vxlan

ENABLE_TENANT_TUNNELS=TrueENABLE_TENANT_VLANS=True

PHYSICAL_NETWORK=physnet1ML2_VLAN_RANGES=physnet110001010OVS_PHYSICAL_BRIDGE=br-

MULTI_HOST=True

[[post-config|$NOVA_CONF]]

disable nova security groups[DEFAULT]firewall_driver=novavirtfirewallNoopFirewallDrivernovncproxy_host=0000novncproxy_port=6080

Here is a sample localconf for Compute Node

[[local|localrc]]

FORCE=yesMULTI_HOST=True

HOST_NAME=ltname of this machinegtHOST_IP=ltip of this machinegtHOST_IP_IFACE=ltisolated interfacegtSERVICE_HOST_NAME=ltname of the controller machinegtSERVICE_HOST=ltip of controller machinegtQ_HOST=$SERVICE_HOST

MYSQL_HOST=$SERVICE_HOSTRABBIT_HOST=$SERVICE_HOST

GLANCE_HOST=$SERVICE_HOSTGLANCE_HOSTPORT=$SERVICE_HOST9292KEYSTONE_AUTH_HOST=$SERVICE_HOSTKEYSTONE_SERVICE_HOST=ltip of controller machinegt

ADMIN_PASSWORD=passwordMYSQL_PASSWORD=passwordDATABASE_PASSWORD=passwordSERVICE_PASSWORD=passwordSERVICE_TOKEN=no-token-passwordHORIZON_PASSWORD=passwordRABBIT_PASSWORD=password

disable_all_services

enable_service rabbitenable_service n-cpuenable_service q-agtenable_service odl-compute

DEST=optstackLOGFILE=$DESTstackshlogSCREEN_LOGDIR=$DESTscreenSYSLOG=TrueLOGDAYS=1

45

Intelreg ONP Server Reference ArchitectureSolutions Guide

ODL_MGR_IP=ltip of controller machinegt

Q_PLUGIN=ml2Q_AGENT=openvswitchQ_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitchopendaylightQ_ML2_PLUGIN_TYPE_DRIVERS=vxlanOVS_NUM_HUGEPAGES=8192OVS_DATAPATH_TYPE=netdev

OVDK_OVS_GIT_TAG=

ENABLE_TENANT_TUNNELS=TrueENABLE_TENANT_VLANS=True

Q_ML2_TENANT_NETWORK_TYPE=vxlanML2_VLAN_RANGES=physnet110001010PHYSICAL_NETWORK=physnet1OVS_PHYSICAL_BRIDGE=br-p1p1

[[post-config|$NOVA_CONF]][DEFAULT]firewall_driver=novavirtfirewallNoopFirewallDrivervnc_enabled=Truevncserver_listen=0000vncserver_proxyclient_address=$HOST_IP

A1 Create VMs using DevStack Horizon GUI

After starting the OpenDaylight controller as described in Section 61 run a stack on the controller and compute nodes

Login to httpltcontrol node ip addressgt8080 to start horizon gui

Verify that the node shows up in the following GUI

Create a new Vxlan network

1 Click on the Networks tab

2 Click on the Create Network button

3 Enter the Network name then click Next

Intelreg ONP Server Reference ArchitectureSolutions Guide

46

4 Enter the subnet information then click Next

47

Intelreg ONP Server Reference ArchitectureSolutions Guide

5 Add additional information then click Next

6 Click the Create button

7 Create a VM instance by clicking the Launch Instances button

Intelreg ONP Server Reference ArchitectureSolutions Guide

48

8 Click on the Details tab to enter VM details

49

Intelreg ONP Server Reference ArchitectureSolutions Guide

9 Click on the Networking tab then enter network information

VMS will now be created

Instead of disabling the OVSDB neutron service by removing the OVSDB neutron bundle file it is possible to disable the bundle from the OSGi console However there does not appear to be a way to make this persistent so it must be done each time the controller restarts

Intelreg ONP Server Reference ArchitectureSolutions Guide

50

Once the controller is up and running connect to the OSGi console The ss command displays all of the bundles that are installed and their status Adding a string(s) filters the list of bundles List the OVSDB bundles

osgigt ss ovsFramework is launched

id State Bundle106 ACTIVE orgopendaylightovsdbnorthbound_050112 ACTIVE orgopendaylightovsdb_050262 ACTIVE orgopendaylightovsdbneutron_050

Note that there are three OVSDB bundles running (the OVSDB neutron bundle has not been removed in this case)

Disable the OVSDB neutron bundle and then list the OVSDB bundles again

osgigt stop 262osgigt ss ovsFramework is launched

id State Bundle106 ACTIVE orgopendaylightovsdbnorthbound_050112 ACTIVE orgopendaylightovsdb_050262 RESOLVED orgopendaylightovsdbneutron_050

Now the OVSDB neutron bundle is in the RESOLVED state which means that it is not active

51

Intelreg ONP Server Reference ArchitectureSolutions Guide

Appendix B BNG as an Appliance

Please download the latest BNG application from https01orgintel-data-plane-performance-demonstratorsdownloads More details about how BNG works can be found in https01orgintel-data-plane-performance-demonstratorsquick-overview

Intelreg ONP Server Reference ArchitectureSolutions Guide

52

NOTE This page intentionally left blank

53

Intelreg ONP Server Reference ArchitectureSolutions Guide

Appendix C Glossary

Acronym Description

ATR Application Targeted Routing

COTS Commercial Off‐The-Shelf

DPI Deep Packet Inspection

FCS Frame Check Sequence

GRE Generic Routing Encapsulation

GRO Generic Receive Offload

IOMMU InputOutput Memory Management Unit

Kpps Kilo packets per seconds

KVM Kernel-based Virtual Machine

LRO Large Receive Offload

MSI Message Signaling Interrupt

MPLS Multi-protocol Label Switching

Mpps Millions packets per seconds

NIC Network Interface Card

pps Packets per seconds

QAT Quick Assist Technology

QinQ VLAN stacking (8021ad)

RA Reference Architecture

RSC Receive Side Coalescing

RSS Receive Side Scaling

SP Service Provider

SR-IOV Single root IO Virtualization

TCO Total Cost of Ownership

TSO TCP Segmentation Offload

Intelreg ONP Server Reference ArchitectureSolutions Guide

54

NOTE This page intentionally left blank

55

Intelreg ONP Server Reference ArchitectureSolutions Guide

Appendix D References

Document Name Source

Internet Protocol version 4 httpwwwietforgrfcrfc791txt

Internet Protocol version 6 httpwwwfaqsorgrfcrfc2460txt

Intelreg 82599 10 Gigabit Ethernet Controller Datasheet httpwwwintelcomcontentwwwusenethernet-controllers82599-10-gbe-controller-datasheethtml

Intel DDIO httpswww-sslintelcomcontentwwwuseniodirect-data-i-ohtml

Bandwidth Sharing Fairness httpwwwintelcomcontentwwwusennetwork-adapters10-gigabit- network-adapters10-gbe-ethernet-flexible-port-partitioning-briefhtml

Design Considerations for efficient network applications with Intelreg multi-core processor- based systems on Linux

httpdownloadintelcomdesignintarchpapers324176pdf

OpenFlow with Intel 82599 httpftpsunetsepubLinuxdistributionsbifrostseminarsworkshop-2011-03-31Openflow_1103031pdf

Wu W DeMarP amp CrawfordM (2012) A Transport-Friendly NIC for Multicore Multiprocessor Systems

IEEE transactions on parallel and distributed systems vol 23 no 4 April 2012 httplssfnalgovarchive2010pubfermilab-pub-10-327-cdpdf

Why does Flow Director Cause Placket Reordering httparxivorgftparxivpapers110611060443pdf

IA packet processing httpwwwintelcompen_USembeddedhwswtechnologypacket- processing

High Performance Packet Processing on Cloud Platforms using Linux with Intelreg Architecture

httpnetworkbuildersintelcomdocsnetwork_builders_RA_packet_processingpdf

Packet Processing Performance of Virtualized Platforms with Linux and Intelreg Architecture

httpnetworkbuildersintelcomdocsnetwork_builders_RA_NFVpdf

DPDK httpwwwintelcomgodpdk

Intelreg DPDK Accelerated vSwitch https01orgpacket-processing

Intelreg ONP Server Reference ArchitectureSolutions Guide

56

LEGAL

By using this document in addition to any agreements you have with Intel you accept the terms set forth below

You may not use or facilitate the use of this document in connection with any infringement or other legal analysis concerning Intel products described herein You agree to grant Intel a non-exclusive royalty-free license to any patent claim thereafter drafted which includes subject matter disclosed herein

INFORMATION IN THIS DOCUMENT IS PROVIDED IN CONNECTION WITH INTEL PRODUCTS NO LICENSE EXPRESS OR IMPLIED BY ESTOPPEL OR OTHERWISE TO ANY INTELLECTUAL PROPERTY RIGHTS IS GRANTED BY THIS DOCUMENT EXCEPT AS PROVIDED IN INTELS TERMS AND CONDITIONS OF SALE FOR SUCH PRODUCTS INTEL ASSUMES NO LIABILITY WHATSOEVER AND INTEL DISCLAIMS ANY EXPRESS OR IMPLIED WARRANTY RELATING TO SALE ANDOR USE OF INTEL PRODUCTS INCLUDING LIABILITY OR WARRANTIES RELATING TO FITNESS FOR A PARTICULAR PURPOSE MERCHANTABILITY OR INFRINGEMENT OF ANY PATENT COPYRIGHT OR OTHER INTELLECTUAL PROPERTY RIGHT

Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors

Performance tests such as SYSmark and MobileMark are measured using specific computer systems components software operations and functions Any change to any of those factors may cause the results to vary You should consult other information and performance tests to assist you in fully evaluating your contemplated purchases including the performance of that product when combined with other products

The products described in this document may contain design defects or errors known as errata which may cause the product to deviate from published specifications Current characterized errata are available on request Contact your local Intel sales office or your distributor to obtain the latest specifications and before placing your product order

Intel technologies may require enabled hardware specific software or services activation Check with your system manufacturer or retailer Tests document performance of components on a particular test in specific systems Differences in hardware software or configuration will affect actual performance Consult other sources of information to evaluate performance as you consider your purchase For more complete information about performance and benchmark results visit httpwwwintelcomperformance

All products computer systems dates and figures specified are preliminary based on current expectations and are subject to change without notice Results have been estimated or simulated using internal Intel analysis or architecture simulation or modeling and provided to you for informational purposes Any differences in your system hardware software or configuration may affect your actual performance

No computer system can be absolutely secure Intel does not assume any liability for lost or stolen data or systems or any damages resulting from such losses

Intel does not control or audit third-party web sites referenced in this document You should visit the referenced web site and confirm whether referenced data are accurate

Intel Corporation may have patents or pending patent applications trademarks copyrights or other intellectual property rights that relate to the presented subject matter The furnishing of documents and other materials and information does not provide any license express or implied by estoppel or otherwise to any such patents trademarks copyrights or other intellectual property rights

copy 2014 Intel Corporation All rights reserved Intel the Intel logo Core Xeon and others are trademarks of Intel Corporation in the US andor other countries Other names and brands may be claimed as the property of others

  • Intelreg Open Network Platform Server Reference Architecture (Release 12)
    • Revision History
    • Contents
    • 10 Audience and Purpose
    • 20 Summary
      • 21 Network Services Examples
        • 211 Suricata (Next Generation IDSIPS engine)
        • 212 vBNG (Broadband Network Gateway)
            • 30 Hardware Components
            • 40 Software Versions
              • 41 Obtaining Software Ingredients
                • 50 Installation and Configuration Guide
                  • 51 Instructions Common to Compute and Controller Nodes
                    • 511 BIOS Settings
                    • 512 Operating System Installation and Configuration
                      • 5121 Getting the Fedora 20 DVD
                      • 5122 Fedora 20 Installation
                      • 5123 Additional Packages Installation and Upgrade
                      • 5124 Disable and Enable Services
                          • 52 Controller Node Setup
                            • 521 OpenStack (Juno)
                              • 5211 Network Requirements
                              • 5212 Storage Requirements
                              • 5213 OpenStack Installation Procedures
                                  • 53 Compute Node Setup
                                    • 531 Host Configuration
                                      • 5311 Using DevStack to Deploy vSwitch and OpenStack Components
                                          • 54 vIPS
                                            • 541 Network Configuration for non-vIPS Guests
                                                • 60 Testing the Setup
                                                  • 61 Preparation with OpenStack
                                                    • 611 Deploying Virtual Machines
                                                      • 6111 Default Settings
                                                      • 6112 Customer Settings
                                                      • 6113 Example mdash VM Deployment
                                                      • 6114 Local vIPS
                                                      • 6115 Remote vIPS
                                                        • 612 Non-uniform Memory Access (Numa) Placement and SR-IOV Pass-through for OpenStack
                                                          • 6121 Prepare Compute Node for SR-IOV Pass-through
                                                          • 6122 Devstack Configurations
                                                          • 6123 Create VM with Numa Placement and SR-IOV
                                                              • 62 Using OpenDaylight
                                                                • 621 Preparing the OpenDaylightController
                                                                  • 63 Border Network Gateway
                                                                    • 631 Installation and Configuration Inside the VM
                                                                    • 632 Installation and Configuration of the Back-to-Back Host (Packet Generator)
                                                                    • 633 Extra Preparations on the Compute Node
                                                                        • Appendix A Additional OpenDaylight Information
                                                                          • A1 Create VMs using DevStack Horizon GUI
                                                                            • Appendix B BNG as an Appliance
                                                                            • Appendix C Glossary
                                                                            • Appendix D References
                                                                            • LEGAL
Page 35: Intel Open Network Platform Server Reference Architecture ... · Intel® ONP Server Reference Architecture Solutions Guide 1.0 Audience and Purpose The primary audiences for this

35

Intelreg ONP Server Reference ArchitectureSolutions Guide

Access the VM from the OpenStack Horizon the new VM shows two virtual network interfaces The interface with a SR-IOV VF should show a name of ensX where X is a numerical number For example ens5 If a DHCP server is available for the physical interface (p1p1 in this example) the VF gets an IP address automatically otherwise users can assign an IP address to the interface the same way as a standard network interface

To verify network connectivity through a VF users can set up two compute hosts and create a VM on each node After obtaining IP addresses the VMs should communicate with each other as with a normal network

62 Using OpenDaylightThis section describes how to download install and setup a OpenDaylight Controller

621 Preparing the OpenDaylightController1 Download the pre-built OpenDaylight Helium-SR1 distribution

wget httpnexusopendaylightorgcontentrepositoriesopendaylightreleaseorgopendaylightintegrationdistribution-karaf021-Helium-SR1distribution-karaf-021-Helium-SR1targz

2 Extract the archive and cd into it

tar xf distribution-karaf-021-Helium-SR1targzcd distribution-karaf-021-Helium-SR1

3 Use the binkaraf executable start the Karaf shell

Intelreg ONP Server Reference ArchitectureSolutions Guide

36

4 Install the required features

Karaf might take a long time to start or feature Install might fail if the host does not have network access Yoursquoll need to setup the appropriate proxy settings

63 Border Network GatewayThis section describes how to install and run a Border Network Gateway on a compute node that is prepared as described in Section 51 and Section 53 The example interface names from these sections have been maintained in this section too Also for simplicity the BNG is using the handle_none configuration mode which makes it work as a L2 forwarding engine The BNG is more complex than this and users who are interested to explore more of its capabilities should read https01orgintel-data-plane-performance-demonstratorsquick-overview

The setup to test the functionality of the vBNG follows

37

Intelreg ONP Server Reference ArchitectureSolutions Guide

631 Installation and Configuration Inside the VM1 Execute the following command

yum -y update

2 Disable SELinux

setenforce 0vi etcselinuxconfig

And change so SELINUX=disabled

3 Disable the firewall

systemctl disable firewalldservicereboot

4 Edit grub default configuration

vi etcdefaultgrub

Add hugepages to it

hellip noirqbalance intel_idlemax_cstate=0 processormax_cstate=0 ipv6disable=1 default_hugepagesz=1G hugepagesz=1G hugepages=2 isolcpus=1234

5 Rebuild grub config and reboot the system

grub2-mkconfig -o bootgrub2grubcfgreboot

6 Verify that hugepages are available in the VM

cat procmeminfoHugePages_Total2HugePages_Free2Hugepagesize1048576 kB

7 Add the following to the end of ~bashrc file

---------------------------------------------export RTE_SDK=rootdpdkexport RTE_TARGET=x86_64-native-linuxapp-gccexport OVS_DIR=rootovs

export RTE_UNBIND=$RTE_SDKtoolsdpdk_nic_bindpyexport DPDK_DIR=$RTE_SDKexport DPDK_BUILD=$DPDK_DIR$RTE_TARGET ---------------------------------------------

8 Re-login or source that file

bashrc

9 Install DPDK

git clone httpdpdkorggitdpdkcd dpdkgit checkout v171make install T=$RTE_TARGETmodprobe uioinsmod $RTE_SDK$RTE_TARGETkmodigb_uioko

Intelreg ONP Server Reference ArchitectureSolutions Guide

38

10 Check the PCI addresses of the 82599 cards

lspci | grep Network00040 Ethernet controller Intel Corporation 82599ES 10-Gigabit SFISFP+ Network Connection (rev 01)00050 Ethernet controller Intel Corporation 82599ES 10-Gigabit SFISFP+ Network Connection (rev 01)00060 Ethernet controller Intel Corporation 82599ES 10-Gigabit SFISFP+ Network Connection (rev 01)00070 Ethernet controller Intel Corporation 82599ES 10-Gigabit SFISFP+ Network Connection (rev 01)

11 Make sure that the correct PCI addresses are listed in the script bind_to_igb_uiosh

12 Download BNG packages

wget https01orgsitesdefaultfilesdownloadsintel-data-plane-performance-demonstratorsdppd-bng-v013zip

13 Extract DPPD BNG sources

unzip dppd-bng-v013zip

14 Build BNG DPPD application

yum -y install ncurses-develcd dppd-BNG-v013make

15 Refer to Section 633 ldquoExtra Preparations on the Compute Noderdquo before running the BNG application in the VM inside the compute node

16 Make sure that the application starts

builddppd -f confighandle_nonecfg

The handle none configuration should be passing all through traffic between ports which is essentially similar to the L2 forwarding test The config directory contains additional complex BNG configurations and Pktgen scripts Additional BNG specific workloads can be found in the dppd-BNGv013pktgen-scripts directory

Following is a sample graphic of the BNG running in a VM with 2 ports

39

Intelreg ONP Server Reference ArchitectureSolutions Guide

Exit the application by pressing ESC or CTRL-C

Refer to Section 632 regarding installation and running the software traffic generator

For the sanity check test users can use the pktgen wrapper script onps_pktgen-64bytes-UDP-2portssh for running PktGen (on its dedicated server) in order to test the handle-none throughput for two physical and two virtual ports yoursquoll need to update the PKTGEN_DIR at the top of the file to point to the right directory which is the following referring to Section 632

PKTGEN_DIR=homestackgitPktgen-DPDKpktgen-64bytessh

632 Installation and Configuration of the Back-to-Back Host (Packet Generator)

The back-to-back host can be any Intelreg Xeonreg processor-based system or it can be any Compute Node that has been prepared using the instructions in Section 51 and Section 53 For simplicity Intel assumes the later was the case Also assume that the git directory for stack user is in homestackgit

1 In the git directory get the source from Github

git clone httpsgithubcomPktgenPktgen-DPDKgitcd Pktgen-DPDK

2 An extra package must be installed for Pktgen to compile correctly

yum -y install libpcap-devel

Pktgen comes with its own distribution of DPDK sources This bundled version of DPDK must be used Note that it contains some WindRiver specific helper libraries that are not in the default DPDK distribution which Pktgen depends on

3 The $RTE_TARGET variable must be set to a specific value Otherwise these libraries will not build

cdvi bashrc

Add the following three lines to the end

export RTE_SDK=$HOMEPktgen-DPDKdpdkexport RTE_TARGET=x86_64-pktgen-linuxapp-gccexport PKTGEN_DIR=$HOMEPktgen-DPDK

4 Re-login or execute the following command

bashrc

5 Build the basic DPDK libraries and extra helpers

cd $RTE_SDKmake install T=$RTE_TARGET

6 Build Pktgen

cd examplespktgenmake

7 Adapt the dpdk_nic_bindpy script accordingly to the actual NICs in use so both interfaces are bound to igb_uio so DPDK can use them See the details of the command the follows

toolsdpdk_nic_bindpy --status

8 Use onps_pktgen-64-bytes-UDP-2portssh from onps_server_1_2targz

Intelreg ONP Server Reference ArchitectureSolutions Guide

40

9 Now run the script as root after the Compute node has been setup as in Section 633 the VM of the BNG has been prepared as in Section 631 inside the VM and the BNG has been run inside the VM

633 Extra Preparations on the Compute Node1 Do the following as a stack user

cd homestackdevstackvi localconf

2 Comment out the following

PHYSICAL_NETWORK=physnet1OVS_PHYSICAL_BRIDGE=br-p1p1

And at the same time add the following line right below the previous commented ones

OVS_BRIDGE_MAPPINGS=defaultbr-p1p1physnet1br-p1p2

3 Run again as stack user

unstacksh

stacksh

This causes both physical interfaces to come up and get bound to the DPDK Also a bridge is created on top of each of these interfaces

ovs-vsctl show
b52bd3ed-0f6c-45b9-ace1-846d901bed64
    Bridge br-p1p1
        Port br-p1p1
            Interface br-p1p1
                type: internal
        Port p1p1
            Interface p1p1
                type: dpdkphy
                options: {port=0}
        Port phy-br-p1p1
            Interface phy-br-p1p1
                type: patch
                options: {peer=int-br-p1p1}
    Bridge br-int
        fail_mode: secure
        Port int-br-p1p2
            Interface int-br-p1p2
                type: patch
                options: {peer=phy-br-p1p2}
        Port int-br-p1p1
            Interface int-br-p1p1
                type: patch
                options: {peer=phy-br-p1p1}
        Port br-int
            Interface br-int
                type: internal
    Bridge br-p1p2
        Port phy-br-p1p2
            Interface phy-br-p1p2
                type: patch
                options: {peer=int-br-p1p2}
        Port p1p2
            Interface p1p2
                type: dpdkphy
                options: {port=1}
        Port br-p1p2
            Interface br-p1p2
                type: internal

4. Move the p1p2 physical port under the same bridge as p1p1:

ovs-vsctl del-port p1p2
ovs-vsctl add-port br-p1p1 p1p2 -- set Interface p1p2 type=dpdkphy option:port=1

5. Stop the OpenStack agent (from inside the DevStack screen session):

./rejoin-stack.sh
ctrl-a 1
ctrl-c
ctrl-a d

6. Add the dpdkvhost interfaces for the VM:

ovs-vsctl --no-wait add-port br-p1p1 port3 -- set Interface port3 type=dpdkvhost ofport_request=3
ovs-vsctl --no-wait add-port br-p1p1 port4 -- set Interface port4 type=dpdkvhost ofport_request=4
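
At this point the bridge should contain both physical ports and both vhost ports; a quick way to confirm (an optional check, not in the original steps):

ovs-vsctl list-ports br-p1p1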

7. Find out the OpenFlow port numbers of the interfaces on the bridge:

ovs-ofctl show br-p1p1

The output should be similar to the following. Note the number to the left of each interface name; it is that interface's OpenFlow port number.

OFPT_FEATURES_REPLY (xid=0x2): dpid:0000286031010000
n_tables:254, n_buffers:256
capabilities: FLOW_STATS TABLE_STATS QUEUE_STATS ARP_MATCH_IP
actions: OUTPUT SET_VLAN_VID SET_VLAN_PCP STRIP_VLAN SET_DL_SRC SET_DL_DST SET_NW_SRC SET_NW_DST SET_TP_SRC SET_TP_DST
 1(phy-br-p1p1): addr:9e:ae:92:25:3c:c1
     config: 0, state: 0, speed: 0 Mbps now, 0 Mbps max
 2(p1p2): addr:9e:ae:92:25:3c:c1
     config: 0, state: 0, speed: 0 Mbps now, 0 Mbps max
 3(port3): addr:9e:ae:92:25:3c:c1
     config: 0, state: 0, speed: 0 Mbps now, 0 Mbps max
 4(port4): addr:49:04:ff:7f:00:00
     config: 0, state: 0, speed: 0 Mbps now, 0 Mbps max
 16(p1p1): addr:49:04:ff:7f:00:00
     config: 0, state: 0, speed: 0 Mbps now, 0 Mbps max
 LOCAL(br-p1p1): addr:9e:ae:92:25:3c:c1
     config: 0, state: 0, speed: 0 Mbps now, 0 Mbps max
OFPT_GET_CONFIG_REPLY (xid=0x4): frags=normal miss_send_len=0

8. Clean up the flow table of the bridge:

ovs-ofctl del-flows br-p1p1

9. Program the flows so that each physical interface forwards packets to a dpdkvhost interface and vice versa:

ovs-ofctl add-flow br-p1p1 in_port=16,dl_type=0x0800,idle_timeout=0,action=output:3
ovs-ofctl add-flow br-p1p1 in_port=3,dl_type=0x0800,idle_timeout=0,action=output:16
ovs-ofctl add-flow br-p1p1 in_port=4,dl_type=0x0800,idle_timeout=0,action=output:2
ovs-ofctl add-flow br-p1p1 in_port=2,dl_type=0x0800,idle_timeout=0,action=output:4
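
To verify that the four flows are in place (and, once traffic is running, to watch their packet counters), the flow table can be dumped (an optional check, not in the original steps):

ovs-ofctl dump-flows br-p1p1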

10. Users can now spawn the vBNG:

qemu-kvm -cpu host -enable-kvm -m 4096 -smp 4,cores=4,threads=1,sockets=1 -name VM1 \
 -hda <path to the VM image file> -mem-path /dev/hugepages -mem-prealloc -vnc :2 -daemonize \
 -net nic,model=virtio,macaddr=00:1e:77:68:09:fd \
 -net tap,ifname=tap1,script=no,downscript=no \
 -netdev type=tap,id=net1,script=no,downscript=no,ifname=port3,vhost=on \
 -device virtio-net-pci,netdev=net1,mac=00:00:01:00:00:01,csum=off,gso=off,guest_tso4=off,guest_tso6=off,guest_ecn=off \
 -netdev type=tap,id=net2,script=no,downscript=no,ifname=port4,vhost=on \
 -device virtio-net-pci,netdev=net2,mac=00:00:01:00:00:02,csum=off,gso=off,guest_tso4=off,guest_tso6=off,guest_ecn=off
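
The -vnc :2 option exposes the VM console on VNC display 2, that is, TCP port 5902 of the compute node; any VNC client can reach it (an example, assuming a standard vncviewer client is installed on your workstation):

vncviewer <compute node ip>:5902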

Appendix A Additional OpenDaylight Information

This section describes how OpenDaylight can be used in a multi-node setup. Two hosts are used: one runs OpenDaylight, the OpenStack controller plus compute services, and OVS; the second host is a compute node. This section describes how to create a VXLAN tunnel, create VMs, and ping from one VM to another.

Note: Due to a known defect in ODL (https://bugs.opendaylight.org/show_bug.cgi?id=2469), the multi-node setup could not be verified.

Following is a sample local.conf for the OpenDaylight host:

[[local|localrc]]
FORCE=yes

HOST_NAME=<name of this machine>
HOST_IP=<ip of this machine>
HOST_IP_IFACE=<mgmt ip isolated from internet>

PUBLIC_INTERFACE=<isolated IP could be same as HOST_IP_IFACE>
VLAN_INTERFACE=
FLAT_INTERFACE=

ADMIN_PASSWORD=password
MYSQL_PASSWORD=password
DATABASE_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=no-token-password
HORIZON_PASSWORD=password
RABBIT_PASSWORD=password

disable_service n-net
disable_service n-cpu

enable_service q-svc
enable_service q-agt
enable_service q-dhcp
enable_service q-l3
enable_service q-meta
enable_service neutron
enable_service horizon

LOGFILE=$DEST/stack.sh.log
SCREEN_LOGDIR=$DEST/screen
SYSLOG=True
LOGDAYS=1

# ODL start
Q_HOST=$HOST_IP
enable_service odl-server
enable_service odl-compute
ODL_MGR_IP=10.11.10.7
ENABLED_SERVICES+=,n-api,n-crt,n-obj,n-cpu,n-cond,n-sch,n-novnc,n-cauth,n-cauth,nova
ENABLED_SERVICES+=,cinder,c-api,c-vol,c-sch,c-bak

Q_PLUGIN=ml2

Q_AGENT=openvswitch
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch,opendaylight
Q_ML2_PLUGIN_TYPE_DRIVERS=vlan,flat,local
Q_ML2_TENANT_NETWORK_TYPE=vxlan

ENABLE_TENANT_TUNNELS=True
ENABLE_TENANT_VLANS=True

PHYSICAL_NETWORK=physnet1
ML2_VLAN_RANGES=physnet1:1000:1010
OVS_PHYSICAL_BRIDGE=br-

MULTI_HOST=True

[[post-config|$NOVA_CONF]]

# disable nova security groups
[DEFAULT]
firewall_driver=nova.virt.firewall.NoopFirewallDriver
novncproxy_host=0.0.0.0
novncproxy_port=6080

Here is a sample local.conf for the compute node:

[[local|localrc]]

FORCE=yes
MULTI_HOST=True

HOST_NAME=<name of this machine>
HOST_IP=<ip of this machine>
HOST_IP_IFACE=<isolated interface>
SERVICE_HOST_NAME=<name of the controller machine>
SERVICE_HOST=<ip of controller machine>
Q_HOST=$SERVICE_HOST

MYSQL_HOST=$SERVICE_HOST
RABBIT_HOST=$SERVICE_HOST

GLANCE_HOST=$SERVICE_HOST
GLANCE_HOSTPORT=$SERVICE_HOST:9292
KEYSTONE_AUTH_HOST=$SERVICE_HOST
KEYSTONE_SERVICE_HOST=<ip of controller machine>

ADMIN_PASSWORD=password
MYSQL_PASSWORD=password
DATABASE_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=no-token-password
HORIZON_PASSWORD=password
RABBIT_PASSWORD=password

disable_all_services

enable_service rabbit
enable_service n-cpu
enable_service q-agt
enable_service odl-compute

DEST=/opt/stack
LOGFILE=$DEST/stack.sh.log
SCREEN_LOGDIR=$DEST/screen
SYSLOG=True
LOGDAYS=1

ODL_MGR_IP=<ip of controller machine>

Q_PLUGIN=ml2
Q_AGENT=openvswitch
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch,opendaylight
Q_ML2_PLUGIN_TYPE_DRIVERS=vxlan
OVS_NUM_HUGEPAGES=8192
OVS_DATAPATH_TYPE=netdev

OVDK_OVS_GIT_TAG=

ENABLE_TENANT_TUNNELS=True
ENABLE_TENANT_VLANS=True

Q_ML2_TENANT_NETWORK_TYPE=vxlan
ML2_VLAN_RANGES=physnet1:1000:1010
PHYSICAL_NETWORK=physnet1
OVS_PHYSICAL_BRIDGE=br-p1p1

[[post-config|$NOVA_CONF]]
[DEFAULT]
firewall_driver=nova.virt.firewall.NoopFirewallDriver
vnc_enabled=True
vncserver_listen=0.0.0.0
vncserver_proxyclient_address=$HOST_IP

A.1 Create VMs using DevStack Horizon GUI

After starting the OpenDaylight controller as described in Section 6.1, run stack.sh on the controller and compute nodes.

Log in to http://<control node ip address>:8080 to start the Horizon GUI.

Verify that the node shows up in the following GUI.

Create a new VXLAN network:

1. Click on the Networks tab.

2. Click on the Create Network button.

3. Enter the network name, then click Next.

4. Enter the subnet information, then click Next.

5. Add additional information, then click Next.

6. Click the Create button.

7. Create a VM instance by clicking the Launch Instances button.

8. Click on the Details tab to enter VM details.

9. Click on the Networking tab, then enter network information.

VMs will now be created.
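
For reference, roughly the same result can be obtained from the command line on the controller (a sketch, not part of the original procedure; the network and VM names are examples, and the DevStack openrc credentials file is assumed):

source openrc admin demo
neutron net-create vxlan-net
neutron subnet-create vxlan-net 10.0.100.0/24 --name vxlan-subnet
nova boot --flavor m1.small --image <image name> \
  --nic net-id=$(neutron net-list | awk '/vxlan-net/ {print $2}') vm1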

Instead of disabling the OVSDB neutron service by removing the OVSDB neutron bundle file, it is possible to disable the bundle from the OSGi console. However, there does not appear to be a way to make this persistent, so it must be done each time the controller restarts.

Once the controller is up and running, connect to the OSGi console. The ss command displays all of the installed bundles and their status; adding one or more strings filters the list of bundles. List the OVSDB bundles:

osgi> ss ovs
Framework is launched

id      State       Bundle
106     ACTIVE      org.opendaylight.ovsdb.northbound_0.5.0
112     ACTIVE      org.opendaylight.ovsdb_0.5.0
262     ACTIVE      org.opendaylight.ovsdb.neutron_0.5.0

Note that there are three OVSDB bundles running (the OVSDB neutron bundle has not been removed in this case).

Disable the OVSDB neutron bundle and then list the OVSDB bundles again:

osgi> stop 262
osgi> ss ovs
Framework is launched

id      State       Bundle
106     ACTIVE      org.opendaylight.ovsdb.northbound_0.5.0
112     ACTIVE      org.opendaylight.ovsdb_0.5.0
262     RESOLVED    org.opendaylight.ovsdb.neutron_0.5.0

Now the OVSDB neutron bundle is in the RESOLVED state, which means that it is not active.

Appendix B BNG as an Appliance

Please download the latest BNG application from https://01.org/intel-data-plane-performance-demonstrators/downloads. More details about how the BNG works can be found at https://01.org/intel-data-plane-performance-demonstrators/quick-overview.

Appendix C Glossary

Acronym Description

ATR Application Targeted Routing

COTS Commercial Off‐The-Shelf

DPI Deep Packet Inspection

FCS Frame Check Sequence

GRE Generic Routing Encapsulation

GRO Generic Receive Offload

IOMMU InputOutput Memory Management Unit

Kpps Kilo packets per second

KVM Kernel-based Virtual Machine

LRO Large Receive Offload

MSI Message Signaled Interrupt

MPLS Multi-protocol Label Switching

Mpps Million packets per second

NIC Network Interface Card

pps Packets per second

QAT Quick Assist Technology

QinQ VLAN stacking (8021ad)

RA Reference Architecture

RSC Receive Side Coalescing

RSS Receive Side Scaling

SP Service Provider

SR-IOV Single Root I/O Virtualization

TCO Total Cost of Ownership

TSO TCP Segmentation Offload

Appendix D References

Document Name Source

Internet Protocol version 4: http://www.ietf.org/rfc/rfc791.txt

Internet Protocol version 6: http://www.faqs.org/rfc/rfc2460.txt

Intel® 82599 10 Gigabit Ethernet Controller Datasheet: http://www.intel.com/content/www/us/en/ethernet-controllers/82599-10-gbe-controller-datasheet.html

Intel DDIO: https://www-ssl.intel.com/content/www/us/en/io/direct-data-i-o.html

Bandwidth Sharing Fairness: http://www.intel.com/content/www/us/en/network-adapters/10-gigabit-network-adapters/10-gbe-ethernet-flexible-port-partitioning-brief.html

Design Considerations for efficient network applications with Intel® multi-core processor-based systems on Linux: http://download.intel.com/design/intarch/papers/324176.pdf

OpenFlow with Intel 82599: http://ftp.sunet.se/pub/Linux/distributions/bifrost/seminars/workshop-2011-03-31/Openflow_1103031.pdf

Wu, W., DeMar, P. & Crawford, M. (2012). A Transport-Friendly NIC for Multicore/Multiprocessor Systems. IEEE Transactions on Parallel and Distributed Systems, vol. 23, no. 4, April 2012: http://lss.fnal.gov/archive/2010/pub/fermilab-pub-10-327-cd.pdf

Why Does Flow Director Cause Packet Reordering?: http://arxiv.org/ftp/arxiv/papers/1106/1106.0443.pdf

IA packet processing: http://www.intel.com/p/en_US/embedded/hwsw/technology/packet-processing

High Performance Packet Processing on Cloud Platforms using Linux with Intel® Architecture: http://networkbuilders.intel.com/docs/network_builders_RA_packet_processing.pdf

Packet Processing Performance of Virtualized Platforms with Linux and Intel® Architecture: http://networkbuilders.intel.com/docs/network_builders_RA_NFV.pdf

DPDK: http://www.intel.com/go/dpdk

Intel® DPDK Accelerated vSwitch: https://01.org/packet-processing

LEGAL

By using this document in addition to any agreements you have with Intel you accept the terms set forth below

You may not use or facilitate the use of this document in connection with any infringement or other legal analysis concerning Intel products described herein You agree to grant Intel a non-exclusive royalty-free license to any patent claim thereafter drafted which includes subject matter disclosed herein

INFORMATION IN THIS DOCUMENT IS PROVIDED IN CONNECTION WITH INTEL PRODUCTS NO LICENSE EXPRESS OR IMPLIED BY ESTOPPEL OR OTHERWISE TO ANY INTELLECTUAL PROPERTY RIGHTS IS GRANTED BY THIS DOCUMENT EXCEPT AS PROVIDED IN INTELS TERMS AND CONDITIONS OF SALE FOR SUCH PRODUCTS INTEL ASSUMES NO LIABILITY WHATSOEVER AND INTEL DISCLAIMS ANY EXPRESS OR IMPLIED WARRANTY RELATING TO SALE ANDOR USE OF INTEL PRODUCTS INCLUDING LIABILITY OR WARRANTIES RELATING TO FITNESS FOR A PARTICULAR PURPOSE MERCHANTABILITY OR INFRINGEMENT OF ANY PATENT COPYRIGHT OR OTHER INTELLECTUAL PROPERTY RIGHT

Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors

Performance tests such as SYSmark and MobileMark are measured using specific computer systems components software operations and functions Any change to any of those factors may cause the results to vary You should consult other information and performance tests to assist you in fully evaluating your contemplated purchases including the performance of that product when combined with other products

The products described in this document may contain design defects or errors known as errata which may cause the product to deviate from published specifications Current characterized errata are available on request Contact your local Intel sales office or your distributor to obtain the latest specifications and before placing your product order

Intel technologies may require enabled hardware specific software or services activation Check with your system manufacturer or retailer Tests document performance of components on a particular test in specific systems Differences in hardware software or configuration will affect actual performance Consult other sources of information to evaluate performance as you consider your purchase For more complete information about performance and benchmark results visit httpwwwintelcomperformance

All products computer systems dates and figures specified are preliminary based on current expectations and are subject to change without notice Results have been estimated or simulated using internal Intel analysis or architecture simulation or modeling and provided to you for informational purposes Any differences in your system hardware software or configuration may affect your actual performance

No computer system can be absolutely secure Intel does not assume any liability for lost or stolen data or systems or any damages resulting from such losses

Intel does not control or audit third-party web sites referenced in this document You should visit the referenced web site and confirm whether referenced data are accurate

Intel Corporation may have patents or pending patent applications trademarks copyrights or other intellectual property rights that relate to the presented subject matter The furnishing of documents and other materials and information does not provide any license express or implied by estoppel or otherwise to any such patents trademarks copyrights or other intellectual property rights

copy 2014 Intel Corporation All rights reserved Intel the Intel logo Core Xeon and others are trademarks of Intel Corporation in the US andor other countries Other names and brands may be claimed as the property of others

Page 36: Intel Open Network Platform Server Reference Architecture ... · Intel® ONP Server Reference Architecture Solutions Guide 1.0 Audience and Purpose The primary audiences for this

Intelreg ONP Server Reference ArchitectureSolutions Guide

36

4 Install the required features

Karaf might take a long time to start or feature Install might fail if the host does not have network access Yoursquoll need to setup the appropriate proxy settings

63 Border Network GatewayThis section describes how to install and run a Border Network Gateway on a compute node that is prepared as described in Section 51 and Section 53 The example interface names from these sections have been maintained in this section too Also for simplicity the BNG is using the handle_none configuration mode which makes it work as a L2 forwarding engine The BNG is more complex than this and users who are interested to explore more of its capabilities should read https01orgintel-data-plane-performance-demonstratorsquick-overview

The setup to test the functionality of the vBNG follows

37

Intelreg ONP Server Reference ArchitectureSolutions Guide

631 Installation and Configuration Inside the VM1 Execute the following command

yum -y update

2 Disable SELinux

setenforce 0vi etcselinuxconfig

And change so SELINUX=disabled

3 Disable the firewall

systemctl disable firewalldservicereboot

4 Edit grub default configuration

vi etcdefaultgrub

Add hugepages to it

hellip noirqbalance intel_idlemax_cstate=0 processormax_cstate=0 ipv6disable=1 default_hugepagesz=1G hugepagesz=1G hugepages=2 isolcpus=1234

5 Rebuild grub config and reboot the system

grub2-mkconfig -o bootgrub2grubcfgreboot

6 Verify that hugepages are available in the VM

cat procmeminfoHugePages_Total2HugePages_Free2Hugepagesize1048576 kB

7 Add the following to the end of ~bashrc file

---------------------------------------------export RTE_SDK=rootdpdkexport RTE_TARGET=x86_64-native-linuxapp-gccexport OVS_DIR=rootovs

export RTE_UNBIND=$RTE_SDKtoolsdpdk_nic_bindpyexport DPDK_DIR=$RTE_SDKexport DPDK_BUILD=$DPDK_DIR$RTE_TARGET ---------------------------------------------

8 Re-login or source that file

bashrc

9 Install DPDK

git clone httpdpdkorggitdpdkcd dpdkgit checkout v171make install T=$RTE_TARGETmodprobe uioinsmod $RTE_SDK$RTE_TARGETkmodigb_uioko

Intelreg ONP Server Reference ArchitectureSolutions Guide

38

10 Check the PCI addresses of the 82599 cards

lspci | grep Network00040 Ethernet controller Intel Corporation 82599ES 10-Gigabit SFISFP+ Network Connection (rev 01)00050 Ethernet controller Intel Corporation 82599ES 10-Gigabit SFISFP+ Network Connection (rev 01)00060 Ethernet controller Intel Corporation 82599ES 10-Gigabit SFISFP+ Network Connection (rev 01)00070 Ethernet controller Intel Corporation 82599ES 10-Gigabit SFISFP+ Network Connection (rev 01)

11 Make sure that the correct PCI addresses are listed in the script bind_to_igb_uiosh

12 Download BNG packages

wget https01orgsitesdefaultfilesdownloadsintel-data-plane-performance-demonstratorsdppd-bng-v013zip

13 Extract DPPD BNG sources

unzip dppd-bng-v013zip

14 Build BNG DPPD application

yum -y install ncurses-develcd dppd-BNG-v013make

15 Refer to Section 633 ldquoExtra Preparations on the Compute Noderdquo before running the BNG application in the VM inside the compute node

16 Make sure that the application starts

builddppd -f confighandle_nonecfg

The handle none configuration should be passing all through traffic between ports which is essentially similar to the L2 forwarding test The config directory contains additional complex BNG configurations and Pktgen scripts Additional BNG specific workloads can be found in the dppd-BNGv013pktgen-scripts directory

Following is a sample graphic of the BNG running in a VM with 2 ports

39

Intelreg ONP Server Reference ArchitectureSolutions Guide

Exit the application by pressing ESC or CTRL-C

Refer to Section 632 regarding installation and running the software traffic generator

For the sanity check test users can use the pktgen wrapper script onps_pktgen-64bytes-UDP-2portssh for running PktGen (on its dedicated server) in order to test the handle-none throughput for two physical and two virtual ports yoursquoll need to update the PKTGEN_DIR at the top of the file to point to the right directory which is the following referring to Section 632

PKTGEN_DIR=homestackgitPktgen-DPDKpktgen-64bytessh

632 Installation and Configuration of the Back-to-Back Host (Packet Generator)

The back-to-back host can be any Intelreg Xeonreg processor-based system or it can be any Compute Node that has been prepared using the instructions in Section 51 and Section 53 For simplicity Intel assumes the later was the case Also assume that the git directory for stack user is in homestackgit

1 In the git directory get the source from Github

git clone httpsgithubcomPktgenPktgen-DPDKgitcd Pktgen-DPDK

2 An extra package must be installed for Pktgen to compile correctly

yum -y install libpcap-devel

Pktgen comes with its own distribution of DPDK sources This bundled version of DPDK must be used Note that it contains some WindRiver specific helper libraries that are not in the default DPDK distribution which Pktgen depends on

3 The $RTE_TARGET variable must be set to a specific value Otherwise these libraries will not build

cdvi bashrc

Add the following three lines to the end

export RTE_SDK=$HOMEPktgen-DPDKdpdkexport RTE_TARGET=x86_64-pktgen-linuxapp-gccexport PKTGEN_DIR=$HOMEPktgen-DPDK

4 Re-login or execute the following command

bashrc

5 Build the basic DPDK libraries and extra helpers

cd $RTE_SDKmake install T=$RTE_TARGET

6 Build Pktgen

cd examplespktgenmake

7 Adapt the dpdk_nic_bindpy script accordingly to the actual NICs in use so both interfaces are bound to igb_uio so DPDK can use them See the details of the command the follows

toolsdpdk_nic_bindpy --status

8 Use onps_pktgen-64-bytes-UDP-2portssh from onps_server_1_2targz

Intelreg ONP Server Reference ArchitectureSolutions Guide

40

9 Now run the script as root after the Compute node has been setup as in Section 633 the VM of the BNG has been prepared as in Section 631 inside the VM and the BNG has been run inside the VM

633 Extra Preparations on the Compute Node1 Do the following as a stack user

cd homestackdevstackvi localconf

2 Comment out the following

PHYSICAL_NETWORK=physnet1OVS_PHYSICAL_BRIDGE=br-p1p1

And at the same time add the following line right below the previous commented ones

OVS_BRIDGE_MAPPINGS=defaultbr-p1p1physnet1br-p1p2

3 Run again as stack user

unstacksh

stacksh

This causes both physical interfaces to come up and get bound to the DPDK Also a bridge is created on top of each of these interfaces

ovs-vsctl showb52bd3ed-0f6c-45b9-ace1-846d901bed64 Bridge br-p1p1 Port br-p1p1 Interface br-p1p1 type internal Port p1p1 Interface p1p1 type dpdkphy options port=0 Port phy-br-p1p1 Interface phy-br-p1p1 type patch options peer=int-br-p1p1 Bridge br-int fail_mode secure Port int-br-p1p2 Interface int-br-p1p2 type patch options peer=phy-br-p1p2 Port int-br-p1p1 Interface int-br-p1p1 type patch options peer=phy-br-p1p1 Port br-int Interface br-int type internal Bridge br-p1p2 Port phy-br-p1p2 Interface phy-br-p1p2 type patch options peer=int-br-p1p2 Port p1p2 Interface p1p2 type dpdkphy options port=1 Port br-p1p2 Interface br-p1p2

41

Intelreg ONP Server Reference ArchitectureSolutions Guide

type internal

4 Move the p1p2 physical port under the same bridge as p1p1

ovs-vsctl del-port p1p2ovs-vsctl add-port br-p1p1 p1p2 -- set Interface p1p2 type=dpdkphy optionport=1

5 Delete the agent of OpenStack

rejoin-stackshctrl-a 1ctrl-cctrl-ad

6 Add the dpdkvhost interfaces for the VM

ovs-vsctl --no-wait add-port br-p1p1 port3 -- set Interface port3 type=dpdkvhost ofport_request=3ovs-vsctl --no-wait add-port br-p1p1 port4 -- set Interface port4 type=dpdkvhost ofport_request=4

7 Find out the port number of the obstructed interfaces

ovs-ofctl show br-p1p1

The output should be similar to the following Note the number on the left of the interface because its the obstructed port number

OFPT_FEATURES_REPLY (xid=0x2) dpid0000286031010000n_tables254 n_buffers256capabilities FLOW_STATS TABLE_STATS QUEUE_STATS ARP_MATCH_IPactions OUTPUT SET_VLAN_VID SET_VLAN_PCP STRIP_VLAN SET_DL_SRC SET_DL_DST SET_NW_SRC SET_NW_DST SET_TP_SRC SET_TP_DST 1(phy-br-p1p1) addr9eae92253cc1 config 0 state 0 speed 0 Mbps now 0 Mbps max 2(p1p2) addr9eae92253cc1 config 0 state 0 speed 0 Mbps now 0 Mbps max 3(port3) addr9eae92253cc1 config 0 state 0 speed 0 Mbps now 0 Mbps max 4(port4) addr4904ff7f0000 config 0 state 0 speed 0 Mbps now 0 Mbps max 16(p1p1) addr4904ff7f0000 config 0 state 0 speed 0 Mbps now 0 Mbps max LOCAL(br-p1p1) addr9eae92253cc1 config 0 state 0 speed 0 Mbps now 0 Mbps maxOFPT_GET_CONFIG_REPLY (xid=0x4) frags=normal miss_send_len=0

8 Clean up the flow table of the bridge

ovs-ofctl del-flows br-p1p1

9 Program the flows so each physical interface forwards the packets to a dpdkvhost interface and the other way round

ovs-ofctl add-flow br-p1p1 in_port=16dl_type=0x0800idle_timeout=0action=output3ovs-ofctl add-flow br-p1p1 in_port=3dl_type=0x0800idle_timeout=0action=output16ovs-ofctl add-flow br-p1p1 in_port=4dl_type=0x0800idle_timeout=0action=output2ovs-ofctl add-flow br-p1p1 in_port=2dl_type=0x0800idle_timeout=0action=output4

Intelreg ONP Server Reference ArchitectureSolutions Guide

42

10 Users can now spawn their vBNG

qemu-kvm -cpu host -enable-kvm -m 4096 -smp 4cores=4threads=1sockets=1 -name VM1 -hda ltpath to the VM image filegt -mem-path devhugepages -mem-prealloc -vnc 2 -daemonize -net nicmodel=virtiomacaddr=001e776809fd -net tapifname=tap1script=nodownscript=no -netdev type=tapid=net1script=nodownscript=noifname=port3vhost=on -device virtio-net-pcinetdev=net1mac=000001000001csum=offgso=offguest_tso4=offguest_tso6=offguest_ecn=off -netdev type=tapid=net2script=nodownscript=noifname=port4vhost=on -device virtio-net-pcinetdev=net2mac=000001000002csum=offgso=offguest_tso4=offguest_tso6=offguest_ecn=off

43

Intelreg ONP Server Reference ArchitectureSolutions Guide

Appendix A Additional OpenDaylight Information

This section describes how OpenDaylight can be used in a multi-node setup Two hosts are used one running OpenDaylight OpenStack Controller + Compute and OVS The second host is the compute node This section describes how to create a Vxlan tunnel VMs and ping from one VM to another

Note Due to a known defect in ODL httpsbugsopendaylightorgshow_bugcgiid=2469 multi-node setup could not be verified

Following is a sample localconf for OpenDaylight host

[[local|localrc]]FORCE=yes

HOST_NAME=ltname of this machinegtHOST_IP=ltip of this machinegtHOST_IP_IFACE=ltmgmt ip isolated from internetgt

PUBLIC_INTERFACE=ltisolated IP could be same as HOST_IP_IFACEgtVLAN_INTERFACE=FLAT_INTERFACE=

ADMIN_PASSWORD=passwordMYSQL_PASSWORD=passwordDATABASE_PASSWORD=passwordSERVICE_PASSWORD=passwordSERVICE_TOKEN=no-token-passwordHORIZON_PASSWORD=passwordRABBIT_PASSWORD=password

disable_service n-netdisable_service n-cpu

enable_service q-svcenable_service q-agtenable_service q-dhcpenable_service q-l3enable_service q-metaenable_service neutronenable_service horizon

LOGFILE=$DESTstackshlogSCREEN_LOGDIR=$DESTscreenSYSLOG=TrueLOGDAYS=1

ODL startQ_HOST=$HOST_IPenable_service odl-serverenable_service odl-computeODL_MGR_IP=1011107ENABLED_SERVICES+=n-apin-crtn-objn-cpun-condn-schn-novncn-cauthn-cauthnovaENABLED_SERVICES+=cinderc-apic-volc-schc-bak

Intelreg ONP Server Reference ArchitectureSolutions Guide

44

Q_PLUGIN=ml2

Q_AGENT=openvswitchQ_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitchopendaylightQ_ML2_PLUGIN_TYPE_DRIVERS=vlanflatlocalQ_ML2_TENANT_NETWORK_TYPE=vxlan

ENABLE_TENANT_TUNNELS=TrueENABLE_TENANT_VLANS=True

PHYSICAL_NETWORK=physnet1ML2_VLAN_RANGES=physnet110001010OVS_PHYSICAL_BRIDGE=br-

MULTI_HOST=True

[[post-config|$NOVA_CONF]]

disable nova security groups[DEFAULT]firewall_driver=novavirtfirewallNoopFirewallDrivernovncproxy_host=0000novncproxy_port=6080

Here is a sample localconf for Compute Node

[[local|localrc]]

FORCE=yesMULTI_HOST=True

HOST_NAME=ltname of this machinegtHOST_IP=ltip of this machinegtHOST_IP_IFACE=ltisolated interfacegtSERVICE_HOST_NAME=ltname of the controller machinegtSERVICE_HOST=ltip of controller machinegtQ_HOST=$SERVICE_HOST

MYSQL_HOST=$SERVICE_HOSTRABBIT_HOST=$SERVICE_HOST

GLANCE_HOST=$SERVICE_HOSTGLANCE_HOSTPORT=$SERVICE_HOST9292KEYSTONE_AUTH_HOST=$SERVICE_HOSTKEYSTONE_SERVICE_HOST=ltip of controller machinegt

ADMIN_PASSWORD=passwordMYSQL_PASSWORD=passwordDATABASE_PASSWORD=passwordSERVICE_PASSWORD=passwordSERVICE_TOKEN=no-token-passwordHORIZON_PASSWORD=passwordRABBIT_PASSWORD=password

disable_all_services

enable_service rabbitenable_service n-cpuenable_service q-agtenable_service odl-compute

DEST=optstackLOGFILE=$DESTstackshlogSCREEN_LOGDIR=$DESTscreenSYSLOG=TrueLOGDAYS=1

45

Intelreg ONP Server Reference ArchitectureSolutions Guide

ODL_MGR_IP=ltip of controller machinegt

Q_PLUGIN=ml2Q_AGENT=openvswitchQ_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitchopendaylightQ_ML2_PLUGIN_TYPE_DRIVERS=vxlanOVS_NUM_HUGEPAGES=8192OVS_DATAPATH_TYPE=netdev

OVDK_OVS_GIT_TAG=

ENABLE_TENANT_TUNNELS=TrueENABLE_TENANT_VLANS=True

Q_ML2_TENANT_NETWORK_TYPE=vxlanML2_VLAN_RANGES=physnet110001010PHYSICAL_NETWORK=physnet1OVS_PHYSICAL_BRIDGE=br-p1p1

[[post-config|$NOVA_CONF]][DEFAULT]firewall_driver=novavirtfirewallNoopFirewallDrivervnc_enabled=Truevncserver_listen=0000vncserver_proxyclient_address=$HOST_IP

A1 Create VMs using DevStack Horizon GUI

After starting the OpenDaylight controller as described in Section 61 run a stack on the controller and compute nodes

Login to httpltcontrol node ip addressgt8080 to start horizon gui

Verify that the node shows up in the following GUI

Create a new Vxlan network

1 Click on the Networks tab

2 Click on the Create Network button

3 Enter the Network name then click Next

Intelreg ONP Server Reference ArchitectureSolutions Guide

46

4 Enter the subnet information then click Next

47

Intelreg ONP Server Reference ArchitectureSolutions Guide

5 Add additional information then click Next

6 Click the Create button

7 Create a VM instance by clicking the Launch Instances button

Intelreg ONP Server Reference ArchitectureSolutions Guide

48

8 Click on the Details tab to enter VM details

49

Intelreg ONP Server Reference ArchitectureSolutions Guide

9 Click on the Networking tab then enter network information

VMS will now be created

Instead of disabling the OVSDB neutron service by removing the OVSDB neutron bundle file it is possible to disable the bundle from the OSGi console However there does not appear to be a way to make this persistent so it must be done each time the controller restarts

Intelreg ONP Server Reference ArchitectureSolutions Guide

50

Once the controller is up and running connect to the OSGi console The ss command displays all of the bundles that are installed and their status Adding a string(s) filters the list of bundles List the OVSDB bundles

osgigt ss ovsFramework is launched

id State Bundle106 ACTIVE orgopendaylightovsdbnorthbound_050112 ACTIVE orgopendaylightovsdb_050262 ACTIVE orgopendaylightovsdbneutron_050

Note that there are three OVSDB bundles running (the OVSDB neutron bundle has not been removed in this case)

Disable the OVSDB neutron bundle and then list the OVSDB bundles again

osgigt stop 262osgigt ss ovsFramework is launched

id State Bundle106 ACTIVE orgopendaylightovsdbnorthbound_050112 ACTIVE orgopendaylightovsdb_050262 RESOLVED orgopendaylightovsdbneutron_050

Now the OVSDB neutron bundle is in the RESOLVED state which means that it is not active

51

Intelreg ONP Server Reference ArchitectureSolutions Guide

Appendix B BNG as an Appliance

Please download the latest BNG application from https01orgintel-data-plane-performance-demonstratorsdownloads More details about how BNG works can be found in https01orgintel-data-plane-performance-demonstratorsquick-overview

Intelreg ONP Server Reference ArchitectureSolutions Guide

52

NOTE This page intentionally left blank

53

Intelreg ONP Server Reference ArchitectureSolutions Guide

Appendix C Glossary

Acronym Description

ATR Application Targeted Routing

COTS Commercial Off‐The-Shelf

DPI Deep Packet Inspection

FCS Frame Check Sequence

GRE Generic Routing Encapsulation

GRO Generic Receive Offload

IOMMU InputOutput Memory Management Unit

Kpps Kilo packets per seconds

KVM Kernel-based Virtual Machine

LRO Large Receive Offload

MSI Message Signaling Interrupt

MPLS Multi-protocol Label Switching

Mpps Millions packets per seconds

NIC Network Interface Card

pps Packets per seconds

QAT Quick Assist Technology

QinQ VLAN stacking (8021ad)

RA Reference Architecture

RSC Receive Side Coalescing

RSS Receive Side Scaling

SP Service Provider

SR-IOV Single root IO Virtualization

TCO Total Cost of Ownership

TSO TCP Segmentation Offload

Intelreg ONP Server Reference ArchitectureSolutions Guide

54

NOTE This page intentionally left blank

55

Intelreg ONP Server Reference ArchitectureSolutions Guide

Appendix D References

Document Name Source

Internet Protocol version 4 httpwwwietforgrfcrfc791txt

Internet Protocol version 6 httpwwwfaqsorgrfcrfc2460txt

Intelreg 82599 10 Gigabit Ethernet Controller Datasheet httpwwwintelcomcontentwwwusenethernet-controllers82599-10-gbe-controller-datasheethtml

Intel DDIO httpswww-sslintelcomcontentwwwuseniodirect-data-i-ohtml

Bandwidth Sharing Fairness httpwwwintelcomcontentwwwusennetwork-adapters10-gigabit- network-adapters10-gbe-ethernet-flexible-port-partitioning-briefhtml

Design Considerations for efficient network applications with Intelreg multi-core processor- based systems on Linux

httpdownloadintelcomdesignintarchpapers324176pdf

OpenFlow with Intel 82599 httpftpsunetsepubLinuxdistributionsbifrostseminarsworkshop-2011-03-31Openflow_1103031pdf

Wu W DeMarP amp CrawfordM (2012) A Transport-Friendly NIC for Multicore Multiprocessor Systems

IEEE transactions on parallel and distributed systems vol 23 no 4 April 2012 httplssfnalgovarchive2010pubfermilab-pub-10-327-cdpdf

Why does Flow Director Cause Placket Reordering httparxivorgftparxivpapers110611060443pdf

IA packet processing httpwwwintelcompen_USembeddedhwswtechnologypacket- processing

High Performance Packet Processing on Cloud Platforms using Linux with Intelreg Architecture

httpnetworkbuildersintelcomdocsnetwork_builders_RA_packet_processingpdf

Packet Processing Performance of Virtualized Platforms with Linux and Intelreg Architecture

httpnetworkbuildersintelcomdocsnetwork_builders_RA_NFVpdf

DPDK httpwwwintelcomgodpdk

Intelreg DPDK Accelerated vSwitch https01orgpacket-processing

Intelreg ONP Server Reference ArchitectureSolutions Guide

56

LEGAL

By using this document in addition to any agreements you have with Intel you accept the terms set forth below

You may not use or facilitate the use of this document in connection with any infringement or other legal analysis concerning Intel products described herein You agree to grant Intel a non-exclusive royalty-free license to any patent claim thereafter drafted which includes subject matter disclosed herein

INFORMATION IN THIS DOCUMENT IS PROVIDED IN CONNECTION WITH INTEL PRODUCTS NO LICENSE EXPRESS OR IMPLIED BY ESTOPPEL OR OTHERWISE TO ANY INTELLECTUAL PROPERTY RIGHTS IS GRANTED BY THIS DOCUMENT EXCEPT AS PROVIDED IN INTELS TERMS AND CONDITIONS OF SALE FOR SUCH PRODUCTS INTEL ASSUMES NO LIABILITY WHATSOEVER AND INTEL DISCLAIMS ANY EXPRESS OR IMPLIED WARRANTY RELATING TO SALE ANDOR USE OF INTEL PRODUCTS INCLUDING LIABILITY OR WARRANTIES RELATING TO FITNESS FOR A PARTICULAR PURPOSE MERCHANTABILITY OR INFRINGEMENT OF ANY PATENT COPYRIGHT OR OTHER INTELLECTUAL PROPERTY RIGHT

Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors

Performance tests such as SYSmark and MobileMark are measured using specific computer systems components software operations and functions Any change to any of those factors may cause the results to vary You should consult other information and performance tests to assist you in fully evaluating your contemplated purchases including the performance of that product when combined with other products

The products described in this document may contain design defects or errors known as errata which may cause the product to deviate from published specifications Current characterized errata are available on request Contact your local Intel sales office or your distributor to obtain the latest specifications and before placing your product order

Intel technologies may require enabled hardware specific software or services activation Check with your system manufacturer or retailer Tests document performance of components on a particular test in specific systems Differences in hardware software or configuration will affect actual performance Consult other sources of information to evaluate performance as you consider your purchase For more complete information about performance and benchmark results visit httpwwwintelcomperformance

All products computer systems dates and figures specified are preliminary based on current expectations and are subject to change without notice Results have been estimated or simulated using internal Intel analysis or architecture simulation or modeling and provided to you for informational purposes Any differences in your system hardware software or configuration may affect your actual performance

No computer system can be absolutely secure Intel does not assume any liability for lost or stolen data or systems or any damages resulting from such losses

Intel does not control or audit third-party web sites referenced in this document You should visit the referenced web site and confirm whether referenced data are accurate

Intel Corporation may have patents or pending patent applications trademarks copyrights or other intellectual property rights that relate to the presented subject matter The furnishing of documents and other materials and information does not provide any license express or implied by estoppel or otherwise to any such patents trademarks copyrights or other intellectual property rights

copy 2014 Intel Corporation All rights reserved Intel the Intel logo Core Xeon and others are trademarks of Intel Corporation in the US andor other countries Other names and brands may be claimed as the property of others

  • Intelreg Open Network Platform Server Reference Architecture (Release 12)
    • Revision History
    • Contents
    • 10 Audience and Purpose
    • 20 Summary
      • 21 Network Services Examples
        • 211 Suricata (Next Generation IDSIPS engine)
        • 212 vBNG (Broadband Network Gateway)
            • 30 Hardware Components
            • 40 Software Versions
              • 41 Obtaining Software Ingredients
                • 50 Installation and Configuration Guide
                  • 51 Instructions Common to Compute and Controller Nodes
                    • 511 BIOS Settings
                    • 512 Operating System Installation and Configuration
                      • 5121 Getting the Fedora 20 DVD
                      • 5122 Fedora 20 Installation
                      • 5123 Additional Packages Installation and Upgrade
                      • 5124 Disable and Enable Services
                          • 52 Controller Node Setup
                            • 521 OpenStack (Juno)
                              • 5211 Network Requirements
                              • 5212 Storage Requirements
                              • 5213 OpenStack Installation Procedures
                                  • 53 Compute Node Setup
                                    • 531 Host Configuration
                                      • 5311 Using DevStack to Deploy vSwitch and OpenStack Components
                                          • 54 vIPS
                                            • 541 Network Configuration for non-vIPS Guests
                                                • 60 Testing the Setup
                                                  • 61 Preparation with OpenStack
                                                    • 611 Deploying Virtual Machines
                                                      • 6111 Default Settings
                                                      • 6112 Customer Settings
                                                      • 6113 Example mdash VM Deployment
                                                      • 6114 Local vIPS
                                                      • 6115 Remote vIPS
                                                        • 612 Non-uniform Memory Access (Numa) Placement and SR-IOV Pass-through for OpenStack
                                                          • 6121 Prepare Compute Node for SR-IOV Pass-through
                                                          • 6122 Devstack Configurations
                                                          • 6123 Create VM with Numa Placement and SR-IOV
                                                              • 62 Using OpenDaylight
                                                                • 621 Preparing the OpenDaylightController
                                                                  • 63 Border Network Gateway
                                                                    • 631 Installation and Configuration Inside the VM
                                                                    • 632 Installation and Configuration of the Back-to-Back Host (Packet Generator)
                                                                    • 633 Extra Preparations on the Compute Node
                                                                        • Appendix A Additional OpenDaylight Information
                                                                          • A1 Create VMs using DevStack Horizon GUI
                                                                            • Appendix B BNG as an Appliance
                                                                            • Appendix C Glossary
                                                                            • Appendix D References
                                                                            • LEGAL
Page 37: Intel Open Network Platform Server Reference Architecture ... · Intel® ONP Server Reference Architecture Solutions Guide 1.0 Audience and Purpose The primary audiences for this

37

Intelreg ONP Server Reference ArchitectureSolutions Guide

631 Installation and Configuration Inside the VM1 Execute the following command

yum -y update

2 Disable SELinux

setenforce 0vi etcselinuxconfig

And change so SELINUX=disabled

3 Disable the firewall

systemctl disable firewalldservicereboot

4 Edit grub default configuration

vi etcdefaultgrub

Add hugepages to it

hellip noirqbalance intel_idlemax_cstate=0 processormax_cstate=0 ipv6disable=1 default_hugepagesz=1G hugepagesz=1G hugepages=2 isolcpus=1234

5 Rebuild grub config and reboot the system

grub2-mkconfig -o bootgrub2grubcfgreboot

6 Verify that hugepages are available in the VM

cat procmeminfoHugePages_Total2HugePages_Free2Hugepagesize1048576 kB

7 Add the following to the end of ~bashrc file

---------------------------------------------export RTE_SDK=rootdpdkexport RTE_TARGET=x86_64-native-linuxapp-gccexport OVS_DIR=rootovs

export RTE_UNBIND=$RTE_SDKtoolsdpdk_nic_bindpyexport DPDK_DIR=$RTE_SDKexport DPDK_BUILD=$DPDK_DIR$RTE_TARGET ---------------------------------------------

8 Re-login or source that file

bashrc

9 Install DPDK

git clone httpdpdkorggitdpdkcd dpdkgit checkout v171make install T=$RTE_TARGETmodprobe uioinsmod $RTE_SDK$RTE_TARGETkmodigb_uioko

Intelreg ONP Server Reference ArchitectureSolutions Guide

38

10 Check the PCI addresses of the 82599 cards

lspci | grep Network00040 Ethernet controller Intel Corporation 82599ES 10-Gigabit SFISFP+ Network Connection (rev 01)00050 Ethernet controller Intel Corporation 82599ES 10-Gigabit SFISFP+ Network Connection (rev 01)00060 Ethernet controller Intel Corporation 82599ES 10-Gigabit SFISFP+ Network Connection (rev 01)00070 Ethernet controller Intel Corporation 82599ES 10-Gigabit SFISFP+ Network Connection (rev 01)

11 Make sure that the correct PCI addresses are listed in the script bind_to_igb_uiosh

12 Download BNG packages

wget https01orgsitesdefaultfilesdownloadsintel-data-plane-performance-demonstratorsdppd-bng-v013zip

13 Extract DPPD BNG sources

unzip dppd-bng-v013zip

14 Build BNG DPPD application

yum -y install ncurses-develcd dppd-BNG-v013make

15 Refer to Section 633 ldquoExtra Preparations on the Compute Noderdquo before running the BNG application in the VM inside the compute node

16 Make sure that the application starts

builddppd -f confighandle_nonecfg

The handle none configuration should be passing all through traffic between ports which is essentially similar to the L2 forwarding test The config directory contains additional complex BNG configurations and Pktgen scripts Additional BNG specific workloads can be found in the dppd-BNGv013pktgen-scripts directory

Following is a sample graphic of the BNG running in a VM with 2 ports

39

Intelreg ONP Server Reference ArchitectureSolutions Guide

Exit the application by pressing ESC or CTRL-C

Refer to Section 632 regarding installation and running the software traffic generator

For the sanity check test users can use the pktgen wrapper script onps_pktgen-64bytes-UDP-2portssh for running PktGen (on its dedicated server) in order to test the handle-none throughput for two physical and two virtual ports yoursquoll need to update the PKTGEN_DIR at the top of the file to point to the right directory which is the following referring to Section 632

PKTGEN_DIR=homestackgitPktgen-DPDKpktgen-64bytessh

632 Installation and Configuration of the Back-to-Back Host (Packet Generator)

The back-to-back host can be any Intelreg Xeonreg processor-based system or it can be any Compute Node that has been prepared using the instructions in Section 51 and Section 53 For simplicity Intel assumes the later was the case Also assume that the git directory for stack user is in homestackgit

1 In the git directory get the source from Github

git clone httpsgithubcomPktgenPktgen-DPDKgitcd Pktgen-DPDK

2 An extra package must be installed for Pktgen to compile correctly

yum -y install libpcap-devel

Pktgen comes with its own distribution of DPDK sources This bundled version of DPDK must be used Note that it contains some WindRiver specific helper libraries that are not in the default DPDK distribution which Pktgen depends on

3 The $RTE_TARGET variable must be set to a specific value Otherwise these libraries will not build

cdvi bashrc

Add the following three lines to the end

export RTE_SDK=$HOMEPktgen-DPDKdpdkexport RTE_TARGET=x86_64-pktgen-linuxapp-gccexport PKTGEN_DIR=$HOMEPktgen-DPDK

4 Re-login or execute the following command

bashrc

5 Build the basic DPDK libraries and extra helpers

cd $RTE_SDKmake install T=$RTE_TARGET

6 Build Pktgen

cd examplespktgenmake

7 Adapt the dpdk_nic_bindpy script accordingly to the actual NICs in use so both interfaces are bound to igb_uio so DPDK can use them See the details of the command the follows

toolsdpdk_nic_bindpy --status

8 Use onps_pktgen-64-bytes-UDP-2portssh from onps_server_1_2targz

Intelreg ONP Server Reference ArchitectureSolutions Guide

40

9 Now run the script as root after the Compute node has been setup as in Section 633 the VM of the BNG has been prepared as in Section 631 inside the VM and the BNG has been run inside the VM

633 Extra Preparations on the Compute Node1 Do the following as a stack user

cd homestackdevstackvi localconf

2 Comment out the following

PHYSICAL_NETWORK=physnet1OVS_PHYSICAL_BRIDGE=br-p1p1

And at the same time add the following line right below the previous commented ones

OVS_BRIDGE_MAPPINGS=defaultbr-p1p1physnet1br-p1p2

3 Run again as stack user

unstacksh

stacksh

This causes both physical interfaces to come up and get bound to the DPDK Also a bridge is created on top of each of these interfaces

ovs-vsctl showb52bd3ed-0f6c-45b9-ace1-846d901bed64 Bridge br-p1p1 Port br-p1p1 Interface br-p1p1 type internal Port p1p1 Interface p1p1 type dpdkphy options port=0 Port phy-br-p1p1 Interface phy-br-p1p1 type patch options peer=int-br-p1p1 Bridge br-int fail_mode secure Port int-br-p1p2 Interface int-br-p1p2 type patch options peer=phy-br-p1p2 Port int-br-p1p1 Interface int-br-p1p1 type patch options peer=phy-br-p1p1 Port br-int Interface br-int type internal Bridge br-p1p2 Port phy-br-p1p2 Interface phy-br-p1p2 type patch options peer=int-br-p1p2 Port p1p2 Interface p1p2 type dpdkphy options port=1 Port br-p1p2 Interface br-p1p2

41

Intelreg ONP Server Reference ArchitectureSolutions Guide

type internal

4 Move the p1p2 physical port under the same bridge as p1p1

ovs-vsctl del-port p1p2ovs-vsctl add-port br-p1p1 p1p2 -- set Interface p1p2 type=dpdkphy optionport=1

5 Delete the agent of OpenStack

rejoin-stackshctrl-a 1ctrl-cctrl-ad

6 Add the dpdkvhost interfaces for the VM

ovs-vsctl --no-wait add-port br-p1p1 port3 -- set Interface port3 type=dpdkvhost ofport_request=3ovs-vsctl --no-wait add-port br-p1p1 port4 -- set Interface port4 type=dpdkvhost ofport_request=4

7 Find out the port number of the obstructed interfaces

ovs-ofctl show br-p1p1

The output should be similar to the following Note the number on the left of the interface because its the obstructed port number

OFPT_FEATURES_REPLY (xid=0x2) dpid0000286031010000n_tables254 n_buffers256capabilities FLOW_STATS TABLE_STATS QUEUE_STATS ARP_MATCH_IPactions OUTPUT SET_VLAN_VID SET_VLAN_PCP STRIP_VLAN SET_DL_SRC SET_DL_DST SET_NW_SRC SET_NW_DST SET_TP_SRC SET_TP_DST 1(phy-br-p1p1) addr9eae92253cc1 config 0 state 0 speed 0 Mbps now 0 Mbps max 2(p1p2) addr9eae92253cc1 config 0 state 0 speed 0 Mbps now 0 Mbps max 3(port3) addr9eae92253cc1 config 0 state 0 speed 0 Mbps now 0 Mbps max 4(port4) addr4904ff7f0000 config 0 state 0 speed 0 Mbps now 0 Mbps max 16(p1p1) addr4904ff7f0000 config 0 state 0 speed 0 Mbps now 0 Mbps max LOCAL(br-p1p1) addr9eae92253cc1 config 0 state 0 speed 0 Mbps now 0 Mbps maxOFPT_GET_CONFIG_REPLY (xid=0x4) frags=normal miss_send_len=0

8 Clean up the flow table of the bridge

ovs-ofctl del-flows br-p1p1

9 Program the flows so each physical interface forwards the packets to a dpdkvhost interface and the other way round

ovs-ofctl add-flow br-p1p1 in_port=16dl_type=0x0800idle_timeout=0action=output3ovs-ofctl add-flow br-p1p1 in_port=3dl_type=0x0800idle_timeout=0action=output16ovs-ofctl add-flow br-p1p1 in_port=4dl_type=0x0800idle_timeout=0action=output2ovs-ofctl add-flow br-p1p1 in_port=2dl_type=0x0800idle_timeout=0action=output4

Intelreg ONP Server Reference ArchitectureSolutions Guide

42

10 Users can now spawn their vBNG

qemu-kvm -cpu host -enable-kvm -m 4096 -smp 4cores=4threads=1sockets=1 -name VM1 -hda ltpath to the VM image filegt -mem-path devhugepages -mem-prealloc -vnc 2 -daemonize -net nicmodel=virtiomacaddr=001e776809fd -net tapifname=tap1script=nodownscript=no -netdev type=tapid=net1script=nodownscript=noifname=port3vhost=on -device virtio-net-pcinetdev=net1mac=000001000001csum=offgso=offguest_tso4=offguest_tso6=offguest_ecn=off -netdev type=tapid=net2script=nodownscript=noifname=port4vhost=on -device virtio-net-pcinetdev=net2mac=000001000002csum=offgso=offguest_tso4=offguest_tso6=offguest_ecn=off

43

Intelreg ONP Server Reference ArchitectureSolutions Guide

Appendix A Additional OpenDaylight Information

This section describes how OpenDaylight can be used in a multi-node setup Two hosts are used one running OpenDaylight OpenStack Controller + Compute and OVS The second host is the compute node This section describes how to create a Vxlan tunnel VMs and ping from one VM to another

Note Due to a known defect in ODL httpsbugsopendaylightorgshow_bugcgiid=2469 multi-node setup could not be verified

Following is a sample localconf for OpenDaylight host

[[local|localrc]]FORCE=yes

HOST_NAME=ltname of this machinegtHOST_IP=ltip of this machinegtHOST_IP_IFACE=ltmgmt ip isolated from internetgt

PUBLIC_INTERFACE=ltisolated IP could be same as HOST_IP_IFACEgtVLAN_INTERFACE=FLAT_INTERFACE=

ADMIN_PASSWORD=passwordMYSQL_PASSWORD=passwordDATABASE_PASSWORD=passwordSERVICE_PASSWORD=passwordSERVICE_TOKEN=no-token-passwordHORIZON_PASSWORD=passwordRABBIT_PASSWORD=password

disable_service n-netdisable_service n-cpu

enable_service q-svcenable_service q-agtenable_service q-dhcpenable_service q-l3enable_service q-metaenable_service neutronenable_service horizon

LOGFILE=$DESTstackshlogSCREEN_LOGDIR=$DESTscreenSYSLOG=TrueLOGDAYS=1

ODL startQ_HOST=$HOST_IPenable_service odl-serverenable_service odl-computeODL_MGR_IP=1011107ENABLED_SERVICES+=n-apin-crtn-objn-cpun-condn-schn-novncn-cauthn-cauthnovaENABLED_SERVICES+=cinderc-apic-volc-schc-bak


Q_PLUGIN=ml2
Q_AGENT=openvswitch
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch,opendaylight
Q_ML2_PLUGIN_TYPE_DRIVERS=vlan,flat,local
Q_ML2_TENANT_NETWORK_TYPE=vxlan

ENABLE_TENANT_TUNNELS=True
ENABLE_TENANT_VLANS=True

PHYSICAL_NETWORK=physnet1
ML2_VLAN_RANGES=physnet1:1000:1010
OVS_PHYSICAL_BRIDGE=br-

MULTI_HOST=True

[[post-config|$NOVA_CONF]]
# disable nova security groups
[DEFAULT]
firewall_driver=nova.virt.firewall.NoopFirewallDriver
novncproxy_host=0.0.0.0
novncproxy_port=6080
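After saving this file as local.conf in the DevStack directory (assumed here to be /home/stack/devstack, matching the paths used elsewhere in this guide), DevStack is run as the stack user in the usual way:

cd /home/stack/devstack
./stack.sh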

Here is a sample local.conf for the compute node:

[[local|localrc]]

FORCE=yes
MULTI_HOST=True

HOST_NAME=<name of this machine>
HOST_IP=<ip of this machine>
HOST_IP_IFACE=<isolated interface>
SERVICE_HOST_NAME=<name of the controller machine>
SERVICE_HOST=<ip of controller machine>
Q_HOST=$SERVICE_HOST

MYSQL_HOST=$SERVICE_HOST
RABBIT_HOST=$SERVICE_HOST

GLANCE_HOST=$SERVICE_HOST
GLANCE_HOSTPORT=$SERVICE_HOST:9292
KEYSTONE_AUTH_HOST=$SERVICE_HOST
KEYSTONE_SERVICE_HOST=<ip of controller machine>

ADMIN_PASSWORD=password
MYSQL_PASSWORD=password
DATABASE_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=no-token-password
HORIZON_PASSWORD=password
RABBIT_PASSWORD=password

disable_all_services

enable_service rabbit
enable_service n-cpu
enable_service q-agt
enable_service odl-compute

DEST=/opt/stack
LOGFILE=$DEST/stack.sh.log
SCREEN_LOGDIR=$DEST/screen
SYSLOG=True
LOGDAYS=1


ODL_MGR_IP=<ip of controller machine>

Q_PLUGIN=ml2
Q_AGENT=openvswitch
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch,opendaylight
Q_ML2_PLUGIN_TYPE_DRIVERS=vxlan
OVS_NUM_HUGEPAGES=8192
OVS_DATAPATH_TYPE=netdev

OVDK_OVS_GIT_TAG=

ENABLE_TENANT_TUNNELS=True
ENABLE_TENANT_VLANS=True

Q_ML2_TENANT_NETWORK_TYPE=vxlan
ML2_VLAN_RANGES=physnet1:1000:1010
PHYSICAL_NETWORK=physnet1
OVS_PHYSICAL_BRIDGE=br-p1p1

[[post-config|$NOVA_CONF]]
[DEFAULT]
firewall_driver=nova.virt.firewall.NoopFirewallDriver
vnc_enabled=True
vncserver_listen=0.0.0.0
vncserver_proxyclient_address=$HOST_IP
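Once stack.sh completes on the compute node, a quick check (a suggestion, not part of the original guide) is to confirm that the odl-compute service registered Open vSwitch with the OpenDaylight controller; ODL manages OVSDB on TCP port 6640 by default:

ovs-vsctl show | grep -A1 Manager
# Expect a Manager entry of the form "tcp:<ip of controller machine>:6640" with is_connected: true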

A1 Create VMs using DevStack Horizon GUI

After starting the OpenDaylight controller as described in Section 6.1, run a stack (stack.sh) on the controller and compute nodes.

Log in to http://<control node ip address>:8080 to start the Horizon GUI.

Verify that the node shows up in the following GUI

Create a new VXLAN network:

1 Click on the Networks tab

2 Click on the Create Network button

3 Enter the Network name then click Next


4 Enter the subnet information then click Next


5 Add additional information then click Next

6 Click the Create button

7 Create a VM instance by clicking the Launch Instances button


8 Click on the Details tab to enter VM details


9 Click on the Networking tab then enter network information

VMs will now be created.
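The same network and instance can also be created from the command line instead of the Horizon GUI. The following sketch is illustrative only (the network name, subnet, flavor, and image are placeholders chosen for this example) and assumes the standard neutron and nova clients with a sourced openrc:

source openrc admin demo
neutron net-create vxlan-net
neutron subnet-create vxlan-net 10.0.0.0/24 --name vxlan-subnet
NET_ID=$(neutron net-show vxlan-net | awk '/ id /{print $4}')
nova boot --flavor m1.small --image <image name> --nic net-id=$NET_ID vm1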

Instead of disabling the OVSDB neutron service by removing the OVSDB neutron bundle file, it is possible to disable the bundle from the OSGi console. However, there does not appear to be a way to make this persistent, so it must be done each time the controller restarts.


Once the controller is up and running, connect to the OSGi console. The ss command displays all of the bundles that are installed and their status; adding a string filters the list of bundles. List the OVSDB bundles:

osgi> ss ovs
Framework is launched.

id      State       Bundle
106     ACTIVE      org.opendaylight.ovsdb.northbound_0.5.0
112     ACTIVE      org.opendaylight.ovsdb_0.5.0
262     ACTIVE      org.opendaylight.ovsdb.neutron_0.5.0

Note that there are three OVSDB bundles running (the OVSDB neutron bundle has not been removed in this case)

Disable the OVSDB neutron bundle and then list the OVSDB bundles again

osgi> stop 262
osgi> ss ovs
Framework is launched.

id      State       Bundle
106     ACTIVE      org.opendaylight.ovsdb.northbound_0.5.0
112     ACTIVE      org.opendaylight.ovsdb_0.5.0
262     RESOLVED    org.opendaylight.ovsdb.neutron_0.5.0

Now the OVSDB neutron bundle is in the RESOLVED state, which means that it is not active.


Appendix B BNG as an Appliance

Please download the latest BNG application from https://01.org/intel-data-plane-performance-demonstrators/downloads. More details about how the BNG works can be found at https://01.org/intel-data-plane-performance-demonstrators/quick-overview.
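For convenience, the following sequence mirrors the download and build steps from Section 6.3.1 of this guide; the version number shown is illustrative, so check the downloads page above for the current release:

wget https://01.org/sites/default/files/downloads/intel-data-plane-performance-demonstrators/dppd-bng-v013.zip
unzip dppd-bng-v013.zip
yum -y install ncurses-devel
cd dppd-BNG-v013
make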


Appendix C Glossary

Acronym Description

ATR Application Targeted Routing

COTS Commercial Off‐The-Shelf

DPI Deep Packet Inspection

FCS Frame Check Sequence

GRE Generic Routing Encapsulation

GRO Generic Receive Offload

IOMMU InputOutput Memory Management Unit

Kpps Kilo packets per second

KVM Kernel-based Virtual Machine

LRO Large Receive Offload

MSI Message Signaled Interrupt

MPLS Multi-protocol Label Switching

Mpps Million packets per second

NIC Network Interface Card

pps Packets per second

QAT QuickAssist Technology

QinQ VLAN stacking (8021ad)

RA Reference Architecture

RSC Receive Side Coalescing

RSS Receive Side Scaling

SP Service Provider

SR-IOV Single Root I/O Virtualization

TCO Total Cost of Ownership

TSO TCP Segmentation Offload



Appendix D References

Document Name Source

Internet Protocol version 4: http://www.ietf.org/rfc/rfc791.txt

Internet Protocol version 6: http://www.faqs.org/rfc/rfc2460.txt

Intel® 82599 10 Gigabit Ethernet Controller Datasheet: http://www.intel.com/content/www/us/en/ethernet-controllers/82599-10-gbe-controller-datasheet.html

Intel DDIO: https://www-ssl.intel.com/content/www/us/en/io/direct-data-i-o.html

Bandwidth Sharing Fairness: http://www.intel.com/content/www/us/en/network-adapters/10-gigabit-network-adapters/10-gbe-ethernet-flexible-port-partitioning-brief.html

Design Considerations for Efficient Network Applications with Intel® Multi-Core Processor-Based Systems on Linux: http://download.intel.com/design/intarch/papers/324176.pdf

OpenFlow with Intel 82599: http://ftp.sunet.se/pub/Linux/distributions/bifrost/seminars/workshop-2011-03-31/Openflow_1103031.pdf

Wu, W., DeMar, P., & Crawford, M. (2012). A Transport-Friendly NIC for Multicore/Multiprocessor Systems. IEEE Transactions on Parallel and Distributed Systems, vol. 23, no. 4, April 2012: http://lss.fnal.gov/archive/2010/pub/fermilab-pub-10-327-cd.pdf

Why Does Flow Director Cause Packet Reordering?: http://arxiv.org/ftp/arxiv/papers/1106/1106.0443.pdf

IA Packet Processing: http://www.intel.com/p/en_US/embedded/hwsw/technology/packet-processing

High Performance Packet Processing on Cloud Platforms using Linux with Intel® Architecture: http://networkbuilders.intel.com/docs/network_builders_RA_packet_processing.pdf

Packet Processing Performance of Virtualized Platforms with Linux and Intel® Architecture: http://networkbuilders.intel.com/docs/network_builders_RA_NFV.pdf

DPDK: http://www.intel.com/go/dpdk

Intel® DPDK Accelerated vSwitch: https://01.org/packet-processing


LEGAL

By using this document in addition to any agreements you have with Intel you accept the terms set forth below

You may not use or facilitate the use of this document in connection with any infringement or other legal analysis concerning Intel products described herein You agree to grant Intel a non-exclusive royalty-free license to any patent claim thereafter drafted which includes subject matter disclosed herein

INFORMATION IN THIS DOCUMENT IS PROVIDED IN CONNECTION WITH INTEL PRODUCTS. NO LICENSE, EXPRESS OR IMPLIED, BY ESTOPPEL OR OTHERWISE, TO ANY INTELLECTUAL PROPERTY RIGHTS IS GRANTED BY THIS DOCUMENT. EXCEPT AS PROVIDED IN INTEL'S TERMS AND CONDITIONS OF SALE FOR SUCH PRODUCTS, INTEL ASSUMES NO LIABILITY WHATSOEVER AND INTEL DISCLAIMS ANY EXPRESS OR IMPLIED WARRANTY RELATING TO SALE AND/OR USE OF INTEL PRODUCTS, INCLUDING LIABILITY OR WARRANTIES RELATING TO FITNESS FOR A PARTICULAR PURPOSE, MERCHANTABILITY, OR INFRINGEMENT OF ANY PATENT, COPYRIGHT, OR OTHER INTELLECTUAL PROPERTY RIGHT.

Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors

Performance tests such as SYSmark and MobileMark are measured using specific computer systems components software operations and functions Any change to any of those factors may cause the results to vary You should consult other information and performance tests to assist you in fully evaluating your contemplated purchases including the performance of that product when combined with other products

The products described in this document may contain design defects or errors known as errata which may cause the product to deviate from published specifications Current characterized errata are available on request Contact your local Intel sales office or your distributor to obtain the latest specifications and before placing your product order

Intel technologies may require enabled hardware, specific software, or services activation. Check with your system manufacturer or retailer. Tests document performance of components on a particular test, in specific systems. Differences in hardware, software, or configuration will affect actual performance. Consult other sources of information to evaluate performance as you consider your purchase. For more complete information about performance and benchmark results, visit http://www.intel.com/performance.

All products computer systems dates and figures specified are preliminary based on current expectations and are subject to change without notice Results have been estimated or simulated using internal Intel analysis or architecture simulation or modeling and provided to you for informational purposes Any differences in your system hardware software or configuration may affect your actual performance

No computer system can be absolutely secure Intel does not assume any liability for lost or stolen data or systems or any damages resulting from such losses

Intel does not control or audit third-party web sites referenced in this document You should visit the referenced web site and confirm whether referenced data are accurate

Intel Corporation may have patents or pending patent applications trademarks copyrights or other intellectual property rights that relate to the presented subject matter The furnishing of documents and other materials and information does not provide any license express or implied by estoppel or otherwise to any such patents trademarks copyrights or other intellectual property rights

© 2014 Intel Corporation. All rights reserved. Intel, the Intel logo, Core, Xeon, and others are trademarks of Intel Corporation in the U.S. and/or other countries. Other names and brands may be claimed as the property of others.

                                                          • 6121 Prepare Compute Node for SR-IOV Pass-through
                                                          • 6122 Devstack Configurations
                                                          • 6123 Create VM with Numa Placement and SR-IOV
                                                              • 62 Using OpenDaylight
                                                                • 621 Preparing the OpenDaylightController
                                                                  • 63 Border Network Gateway
                                                                    • 631 Installation and Configuration Inside the VM
                                                                    • 632 Installation and Configuration of the Back-to-Back Host (Packet Generator)
                                                                    • 633 Extra Preparations on the Compute Node
                                                                        • Appendix A Additional OpenDaylight Information
                                                                          • A1 Create VMs using DevStack Horizon GUI
                                                                            • Appendix B BNG as an Appliance
                                                                            • Appendix C Glossary
                                                                            • Appendix D References
                                                                            • LEGAL

9. Now run the script as root, after the compute node has been set up as described in Section 6.3.3, the BNG VM has been prepared as described in Section 6.3.1, and the BNG has been started inside the VM.

6.3.3 Extra Preparations on the Compute Node

1. Do the following as the stack user:

cd /home/stack/devstack
vi local.conf

2. Comment out the following:

PHYSICAL_NETWORK=physnet1
OVS_PHYSICAL_BRIDGE=br-p1p1

Then add the following line directly below the commented-out lines:

OVS_BRIDGE_MAPPINGS=default:br-p1p1,physnet1:br-p1p2
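After step 2, the relevant fragment of local.conf should therefore look roughly as follows (a sketch only; all other settings are left unchanged):

# PHYSICAL_NETWORK=physnet1
# OVS_PHYSICAL_BRIDGE=br-p1p1
OVS_BRIDGE_MAPPINGS=default:br-p1p1,physnet1:br-p1p2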

3. Run the following again as the stack user:

./unstack.sh

./stack.sh

This causes both physical interfaces to come up and be bound to DPDK. A bridge is also created on top of each of these interfaces:

ovs-vsctl show
b52bd3ed-0f6c-45b9-ace1-846d901bed64
    Bridge br-p1p1
        Port br-p1p1
            Interface br-p1p1
                type: internal
        Port p1p1
            Interface p1p1
                type: dpdkphy
                options: port=0
        Port phy-br-p1p1
            Interface phy-br-p1p1
                type: patch
                options: peer=int-br-p1p1
    Bridge br-int
        fail_mode: secure
        Port int-br-p1p2
            Interface int-br-p1p2
                type: patch
                options: peer=phy-br-p1p2
        Port int-br-p1p1
            Interface int-br-p1p1
                type: patch
                options: peer=phy-br-p1p1
        Port br-int
            Interface br-int
                type: internal
    Bridge br-p1p2
        Port phy-br-p1p2
            Interface phy-br-p1p2
                type: patch
                options: peer=int-br-p1p2
        Port p1p2
            Interface p1p2
                type: dpdkphy
                options: port=1
        Port br-p1p2
            Interface br-p1p2
                type: internal

4. Move the p1p2 physical port under the same bridge as p1p1:

ovs-vsctl del-port p1p2
ovs-vsctl add-port br-p1p1 p1p2 -- set Interface p1p2 type=dpdkphy option:port=1

5. Stop the OpenStack agent. Rejoin the DevStack screen session, switch to the agent window, stop it with Ctrl-C, and then detach:

./rejoin-stack.sh
ctrl-a 1
ctrl-c
ctrl-a d

6. Add the dpdkvhost interfaces for the VM:

ovs-vsctl --no-wait add-port br-p1p1 port3 -- set Interface port3 type=dpdkvhost ofport_request=3
ovs-vsctl --no-wait add-port br-p1p1 port4 -- set Interface port4 type=dpdkvhost ofport_request=4

7. Find the OpenFlow port numbers of the interfaces:

ovs-ofctl show br-p1p1

The output should be similar to the following. Note the number to the left of each interface name; it is that interface's OpenFlow port number.

OFPT_FEATURES_REPLY (xid=0x2): dpid:0000286031010000
n_tables:254, n_buffers:256
capabilities: FLOW_STATS TABLE_STATS QUEUE_STATS ARP_MATCH_IP
actions: OUTPUT SET_VLAN_VID SET_VLAN_PCP STRIP_VLAN SET_DL_SRC SET_DL_DST SET_NW_SRC SET_NW_DST SET_TP_SRC SET_TP_DST
 1(phy-br-p1p1): addr:9e:ae:92:25:3c:c1
     config: 0, state: 0, speed: 0 Mbps now, 0 Mbps max
 2(p1p2): addr:9e:ae:92:25:3c:c1
     config: 0, state: 0, speed: 0 Mbps now, 0 Mbps max
 3(port3): addr:9e:ae:92:25:3c:c1
     config: 0, state: 0, speed: 0 Mbps now, 0 Mbps max
 4(port4): addr:49:04:ff:7f:00:00
     config: 0, state: 0, speed: 0 Mbps now, 0 Mbps max
 16(p1p1): addr:49:04:ff:7f:00:00
     config: 0, state: 0, speed: 0 Mbps now, 0 Mbps max
 LOCAL(br-p1p1): addr:9e:ae:92:25:3c:c1
     config: 0, state: 0, speed: 0 Mbps now, 0 Mbps max
OFPT_GET_CONFIG_REPLY (xid=0x4): frags=normal miss_send_len=0
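If only the numbered port lines are of interest, the same output can be filtered. This is just a convenience, assuming a standard grep is available on the host:

ovs-ofctl show br-p1p1 | grep -E '^[[:space:]]*[0-9]+\('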

8. Clean up the flow table of the bridge:

ovs-ofctl del-flows br-p1p1

9. Program the flows so that each physical interface forwards packets to a dpdkvhost interface, and vice versa:

ovs-ofctl add-flow br-p1p1 in_port=16,dl_type=0x0800,idle_timeout=0,action=output:3
ovs-ofctl add-flow br-p1p1 in_port=3,dl_type=0x0800,idle_timeout=0,action=output:16
ovs-ofctl add-flow br-p1p1 in_port=4,dl_type=0x0800,idle_timeout=0,action=output:2
ovs-ofctl add-flow br-p1p1 in_port=2,dl_type=0x0800,idle_timeout=0,action=output:4
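To confirm that the four rules were installed as intended, the flow table can be dumped again. This is a quick sanity check, not part of the original procedure:

ovs-ofctl dump-flows br-p1p1

Each physical port (2 and 16) should show one flow forwarding IPv4 traffic to its paired dpdkvhost port (4 and 3), and each dpdkvhost port one flow back to its physical port.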


10. Users can now spawn their vBNG:

qemu-kvm -cpu host -enable-kvm -m 4096 -smp 4,cores=4,threads=1,sockets=1 -name VM1 \
 -hda <path to the VM image file> -mem-path /dev/hugepages -mem-prealloc -vnc :2 -daemonize \
 -net nic,model=virtio,macaddr=00:1e:77:68:09:fd \
 -net tap,ifname=tap1,script=no,downscript=no \
 -netdev type=tap,id=net1,script=no,downscript=no,ifname=port3,vhost=on \
 -device virtio-net-pci,netdev=net1,mac=00:00:01:00:00:01,csum=off,gso=off,guest_tso4=off,guest_tso6=off,guest_ecn=off \
 -netdev type=tap,id=net2,script=no,downscript=no,ifname=port4,vhost=on \
 -device virtio-net-pci,netdev=net2,mac=00:00:01:00:00:02,csum=off,gso=off,guest_tso4=off,guest_tso6=off,guest_ecn=off
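Because the command above starts the guest with a VNC server on display :2, the vBNG console can be reached with any VNC client pointed at that display on the compute node (TCP port 5902). For example, assuming a vncviewer client is available on a machine that can reach the node:

vncviewer <compute node ip>:2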


Appendix A Additional OpenDaylight Information

This appendix describes how OpenDaylight can be used in a multi-node setup. Two hosts are used: one running OpenDaylight, the OpenStack controller + compute services, and OVS; the second host is a compute node. This section describes how to create a VXLAN tunnel, create VMs, and ping from one VM to another.

Note: Due to a known defect in ODL (https://bugs.opendaylight.org/show_bug.cgi?id=2469), the multi-node setup could not be verified.

The following is a sample local.conf for the OpenDaylight host:

[[local|localrc]]
FORCE=yes

HOST_NAME=<name of this machine>
HOST_IP=<ip of this machine>
HOST_IP_IFACE=<mgmt ip isolated from internet>

PUBLIC_INTERFACE=<isolated IP could be same as HOST_IP_IFACE>
VLAN_INTERFACE=
FLAT_INTERFACE=

ADMIN_PASSWORD=password
MYSQL_PASSWORD=password
DATABASE_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=no-token-password
HORIZON_PASSWORD=password
RABBIT_PASSWORD=password

disable_service n-net
disable_service n-cpu

enable_service q-svc
enable_service q-agt
enable_service q-dhcp
enable_service q-l3
enable_service q-meta
enable_service neutron
enable_service horizon

LOGFILE=$DEST/stack.sh.log
SCREEN_LOGDIR=$DEST/screen
SYSLOG=True
LOGDAYS=1

# ODL start
Q_HOST=$HOST_IP
enable_service odl-server
enable_service odl-compute
ODL_MGR_IP=1011107
ENABLED_SERVICES+=,n-api,n-crt,n-obj,n-cpu,n-cond,n-sch,n-novnc,n-cauth,n-cauth,nova
ENABLED_SERVICES+=,cinder,c-api,c-vol,c-sch,c-bak

Q_PLUGIN=ml2
Q_AGENT=openvswitch
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch,opendaylight
Q_ML2_PLUGIN_TYPE_DRIVERS=vlan,flat,local
Q_ML2_TENANT_NETWORK_TYPE=vxlan

ENABLE_TENANT_TUNNELS=True
ENABLE_TENANT_VLANS=True

PHYSICAL_NETWORK=physnet1
ML2_VLAN_RANGES=physnet1:1000:1010
OVS_PHYSICAL_BRIDGE=br-

MULTI_HOST=True

[[post-config|$NOVA_CONF]]
# disable nova security groups
[DEFAULT]
firewall_driver=nova.virt.firewall.NoopFirewallDriver
novncproxy_host=0.0.0.0
novncproxy_port=6080
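With this local.conf in place, the controller is deployed in the usual DevStack way (assuming the file lives in the stack user's devstack checkout, as in Section 6.3.3):

cd /home/stack/devstack
./stack.sh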

Here is a sample local.conf for the compute node:

[[local|localrc]]

FORCE=yes
MULTI_HOST=True

HOST_NAME=<name of this machine>
HOST_IP=<ip of this machine>
HOST_IP_IFACE=<isolated interface>
SERVICE_HOST_NAME=<name of the controller machine>
SERVICE_HOST=<ip of controller machine>
Q_HOST=$SERVICE_HOST

MYSQL_HOST=$SERVICE_HOST
RABBIT_HOST=$SERVICE_HOST

GLANCE_HOST=$SERVICE_HOST
GLANCE_HOSTPORT=$SERVICE_HOST:9292
KEYSTONE_AUTH_HOST=$SERVICE_HOST
KEYSTONE_SERVICE_HOST=<ip of controller machine>

ADMIN_PASSWORD=password
MYSQL_PASSWORD=password
DATABASE_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=no-token-password
HORIZON_PASSWORD=password
RABBIT_PASSWORD=password

disable_all_services

enable_service rabbit
enable_service n-cpu
enable_service q-agt
enable_service odl-compute

DEST=/opt/stack
LOGFILE=$DEST/stack.sh.log
SCREEN_LOGDIR=$DEST/screen
SYSLOG=True
LOGDAYS=1

ODL_MGR_IP=<ip of controller machine>

Q_PLUGIN=ml2
Q_AGENT=openvswitch
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch,opendaylight
Q_ML2_PLUGIN_TYPE_DRIVERS=vxlan
OVS_NUM_HUGEPAGES=8192
OVS_DATAPATH_TYPE=netdev

OVDK_OVS_GIT_TAG=

ENABLE_TENANT_TUNNELS=True
ENABLE_TENANT_VLANS=True

Q_ML2_TENANT_NETWORK_TYPE=vxlan
ML2_VLAN_RANGES=physnet1:1000:1010
PHYSICAL_NETWORK=physnet1
OVS_PHYSICAL_BRIDGE=br-p1p1

[[post-config|$NOVA_CONF]]
[DEFAULT]
firewall_driver=nova.virt.firewall.NoopFirewallDriver
vnc_enabled=True
vncserver_listen=0.0.0.0
vncserver_proxyclient_address=$HOST_IP
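Once stack.sh has completed on both nodes, it is worth confirming that the compute node registered with the controller. A minimal check from the controller might look like the following (a sketch, assuming the standard DevStack credentials file and the Juno-era command-line clients):

source openrc admin admin
nova service-list        # nova-compute on the compute node should be enabled and "up"
neutron agent-list       # the Open vSwitch agent on the compute node should report as alive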

A.1 Create VMs using DevStack Horizon GUI

After starting the OpenDaylight controller as described in Section 6.1, run stack.sh on both the controller and compute nodes.

Log in to http://<control node ip address>:8080 to start the Horizon GUI.

Verify that the node shows up in the following GUI

Create a new VXLAN network:

1. Click the Networks tab.

2. Click the Create Network button.

3. Enter the network name, then click Next.

4. Enter the subnet information, then click Next.

5. Add any additional information, then click Next.

6. Click the Create button.

7. Create a VM instance by clicking the Launch Instances button.

8. Click the Details tab to enter the VM details.

9. Click the Networking tab, then enter the network information.

VMs will now be created.
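For reference, the same VXLAN network, subnet, and VM can also be created from the command line. The following is only a sketch: the network name, CIDR, image, and flavor are illustrative placeholders, not values taken from this guide:

source openrc admin demo
neutron net-create vxlan-net
neutron subnet-create vxlan-net 10.0.10.0/24 --name vxlan-subnet
NET_ID=$(neutron net-list | awk '/vxlan-net/ {print $2}')
nova boot --flavor m1.small --image fedora-20 --nic net-id=$NET_ID vm1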

Instead of disabling the OVSDB neutron service by removing the OVSDB neutron bundle file, it is possible to disable the bundle from the OSGi console. However, there does not appear to be a way to make this persistent, so it must be done each time the controller restarts.


Once the controller is up and running, connect to the OSGi console. The ss command displays all of the bundles that are installed and their status; adding a string filters the list of bundles. List the OVSDB bundles:

osgi> ss ovs
Framework is launched

id    State    Bundle
106   ACTIVE   org.opendaylight.ovsdb.northbound_0.5.0
112   ACTIVE   org.opendaylight.ovsdb_0.5.0
262   ACTIVE   org.opendaylight.ovsdb.neutron_0.5.0

Note that there are three OVSDB bundles running (the OVSDB neutron bundle has not been removed in this case)

Disable the OVSDB neutron bundle and then list the OVSDB bundles again

osgi> stop 262
osgi> ss ovs
Framework is launched

id    State      Bundle
106   ACTIVE     org.opendaylight.ovsdb.northbound_0.5.0
112   ACTIVE     org.opendaylight.ovsdb_0.5.0
262   RESOLVED   org.opendaylight.ovsdb.neutron_0.5.0

Now the OVSDB neutron bundle is in the RESOLVED state which means that it is not active
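Should the bundle need to be re-enabled later in the same session, it can simply be started again by id from the same console (using the same bundle id shown above):

osgi> start 262
osgi> ss ovs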


Appendix B BNG as an Appliance

Please download the latest BNG application from https://01.org/intel-data-plane-performance-demonstrators/downloads. More details about how the BNG works can be found at https://01.org/intel-data-plane-performance-demonstrators/quick-overview.


Appendix C Glossary

Acronym Description

ATR Application Targeted Routing

COTS Commercial Off‐The-Shelf

DPI Deep Packet Inspection

FCS Frame Check Sequence

GRE Generic Routing Encapsulation

GRO Generic Receive Offload

IOMMU InputOutput Memory Management Unit

Kpps Kilo packets per second

KVM Kernel-based Virtual Machine

LRO Large Receive Offload

MSI Message Signaled Interrupt

MPLS Multi-protocol Label Switching

Mpps Million packets per second

NIC Network Interface Card

pps Packets per second

QAT Quick Assist Technology

QinQ VLAN stacking (802.1ad)

RA Reference Architecture

RSC Receive Side Coalescing

RSS Receive Side Scaling

SP Service Provider

SR-IOV Single Root I/O Virtualization

TCO Total Cost of Ownership

TSO TCP Segmentation Offload


Appendix D References

Document Name Source

Internet Protocol version 4: http://www.ietf.org/rfc/rfc791.txt

Internet Protocol version 6: http://www.faqs.org/rfc/rfc2460.txt

Intel® 82599 10 Gigabit Ethernet Controller Datasheet: http://www.intel.com/content/www/us/en/ethernet-controllers/82599-10-gbe-controller-datasheet.html

Intel DDIO: https://www-ssl.intel.com/content/www/us/en/io/direct-data-i-o.html

Bandwidth Sharing Fairness: http://www.intel.com/content/www/us/en/network-adapters/10-gigabit-network-adapters/10-gbe-ethernet-flexible-port-partitioning-brief.html

Design Considerations for Efficient Network Applications with Intel® Multi-Core Processor-Based Systems on Linux: http://download.intel.com/design/intarch/papers/324176.pdf

OpenFlow with Intel 82599: http://ftp.sunet.se/pub/Linux/distributions/bifrost/seminars/workshop-2011-03-31/Openflow_1103031.pdf

Wu, W., DeMar, P., & Crawford, M. (2012). A Transport-Friendly NIC for Multicore/Multiprocessor Systems. IEEE Transactions on Parallel and Distributed Systems, vol. 23, no. 4, April 2012: http://lss.fnal.gov/archive/2010/pub/fermilab-pub-10-327-cd.pdf

Why Does Flow Director Cause Packet Reordering?: http://arxiv.org/ftp/arxiv/papers/1106/1106.0443.pdf

IA packet processing: http://www.intel.com/p/en_US/embedded/hwsw/technology/packet-processing

High Performance Packet Processing on Cloud Platforms using Linux with Intel® Architecture: http://networkbuilders.intel.com/docs/network_builders_RA_packet_processing.pdf

Packet Processing Performance of Virtualized Platforms with Linux and Intel® Architecture: http://networkbuilders.intel.com/docs/network_builders_RA_NFV.pdf

DPDK: http://www.intel.com/go/dpdk

Intel® DPDK Accelerated vSwitch: https://01.org/packet-processing


LEGAL

By using this document in addition to any agreements you have with Intel you accept the terms set forth below

You may not use or facilitate the use of this document in connection with any infringement or other legal analysis concerning Intel products described herein You agree to grant Intel a non-exclusive royalty-free license to any patent claim thereafter drafted which includes subject matter disclosed herein

INFORMATION IN THIS DOCUMENT IS PROVIDED IN CONNECTION WITH INTEL PRODUCTS. NO LICENSE, EXPRESS OR IMPLIED, BY ESTOPPEL OR OTHERWISE, TO ANY INTELLECTUAL PROPERTY RIGHTS IS GRANTED BY THIS DOCUMENT. EXCEPT AS PROVIDED IN INTEL'S TERMS AND CONDITIONS OF SALE FOR SUCH PRODUCTS, INTEL ASSUMES NO LIABILITY WHATSOEVER AND INTEL DISCLAIMS ANY EXPRESS OR IMPLIED WARRANTY RELATING TO SALE AND/OR USE OF INTEL PRODUCTS, INCLUDING LIABILITY OR WARRANTIES RELATING TO FITNESS FOR A PARTICULAR PURPOSE, MERCHANTABILITY, OR INFRINGEMENT OF ANY PATENT, COPYRIGHT OR OTHER INTELLECTUAL PROPERTY RIGHT.

Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors

Performance tests such as SYSmark and MobileMark are measured using specific computer systems components software operations and functions Any change to any of those factors may cause the results to vary You should consult other information and performance tests to assist you in fully evaluating your contemplated purchases including the performance of that product when combined with other products

The products described in this document may contain design defects or errors known as errata which may cause the product to deviate from published specifications Current characterized errata are available on request Contact your local Intel sales office or your distributor to obtain the latest specifications and before placing your product order

Intel technologies may require enabled hardware, specific software, or services activation. Check with your system manufacturer or retailer. Tests document performance of components on a particular test, in specific systems. Differences in hardware, software, or configuration will affect actual performance. Consult other sources of information to evaluate performance as you consider your purchase. For more complete information about performance and benchmark results, visit http://www.intel.com/performance.

All products computer systems dates and figures specified are preliminary based on current expectations and are subject to change without notice Results have been estimated or simulated using internal Intel analysis or architecture simulation or modeling and provided to you for informational purposes Any differences in your system hardware software or configuration may affect your actual performance

No computer system can be absolutely secure Intel does not assume any liability for lost or stolen data or systems or any damages resulting from such losses

Intel does not control or audit third-party web sites referenced in this document You should visit the referenced web site and confirm whether referenced data are accurate

Intel Corporation may have patents or pending patent applications trademarks copyrights or other intellectual property rights that relate to the presented subject matter The furnishing of documents and other materials and information does not provide any license express or implied by estoppel or otherwise to any such patents trademarks copyrights or other intellectual property rights

© 2014 Intel Corporation. All rights reserved. Intel, the Intel logo, Core, Xeon, and others are trademarks of Intel Corporation in the U.S. and/or other countries. Other names and brands may be claimed as the property of others.

                                                          • 6121 Prepare Compute Node for SR-IOV Pass-through
                                                          • 6122 Devstack Configurations
                                                          • 6123 Create VM with Numa Placement and SR-IOV
                                                              • 62 Using OpenDaylight
                                                                • 621 Preparing the OpenDaylightController
                                                                  • 63 Border Network Gateway
                                                                    • 631 Installation and Configuration Inside the VM
                                                                    • 632 Installation and Configuration of the Back-to-Back Host (Packet Generator)
                                                                    • 633 Extra Preparations on the Compute Node
                                                                        • Appendix A Additional OpenDaylight Information
                                                                          • A1 Create VMs using DevStack Horizon GUI
                                                                            • Appendix B BNG as an Appliance
                                                                            • Appendix C Glossary
                                                                            • Appendix D References
                                                                            • LEGAL
Page 43: Intel Open Network Platform Server Reference Architecture ... · Intel® ONP Server Reference Architecture Solutions Guide 1.0 Audience and Purpose The primary audiences for this

43

Intelreg ONP Server Reference ArchitectureSolutions Guide

Appendix A Additional OpenDaylight Information

This section describes how OpenDaylight can be used in a multi-node setup. Two hosts are used: one runs OpenDaylight, the OpenStack controller + compute services, and OVS; the second host is the compute node. This section describes how to create a VXLAN tunnel, create VMs, and ping from one VM to another.

Note: Due to a known defect in ODL (https://bugs.opendaylight.org/show_bug.cgi?id=2469), the multi-node setup could not be verified.

Following is a sample local.conf for the OpenDaylight host:

[[local|localrc]]
FORCE=yes

HOST_NAME=<name of this machine>
HOST_IP=<ip of this machine>
HOST_IP_IFACE=<mgmt ip isolated from internet>

PUBLIC_INTERFACE=<isolated IP could be same as HOST_IP_IFACE>
VLAN_INTERFACE=
FLAT_INTERFACE=

ADMIN_PASSWORD=password
MYSQL_PASSWORD=password
DATABASE_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=no-token-password
HORIZON_PASSWORD=password
RABBIT_PASSWORD=password

disable_service n-net
disable_service n-cpu

enable_service q-svc
enable_service q-agt
enable_service q-dhcp
enable_service q-l3
enable_service q-meta
enable_service neutron
enable_service horizon

LOGFILE=$DEST/stack.sh.log
SCREEN_LOGDIR=$DEST/screen
SYSLOG=True
LOGDAYS=1

# ODL start
Q_HOST=$HOST_IP
enable_service odl-server
enable_service odl-compute
ODL_MGR_IP=<ip of this machine>
ENABLED_SERVICES+=,n-api,n-crt,n-obj,n-cpu,n-cond,n-sch,n-novnc,n-cauth,nova
ENABLED_SERVICES+=,cinder,c-api,c-vol,c-sch,c-bak

Q_PLUGIN=ml2

Q_AGENT=openvswitch
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch,opendaylight
Q_ML2_PLUGIN_TYPE_DRIVERS=vlan,flat,local
Q_ML2_TENANT_NETWORK_TYPE=vxlan

ENABLE_TENANT_TUNNELS=True
ENABLE_TENANT_VLANS=True

PHYSICAL_NETWORK=physnet1
ML2_VLAN_RANGES=physnet1:1000:1010
OVS_PHYSICAL_BRIDGE=br-

MULTI_HOST=True

[[post-config|$NOVA_CONF]]

# disable nova security groups
[DEFAULT]
firewall_driver=nova.virt.firewall.NoopFirewallDriver
novncproxy_host=0.0.0.0
novncproxy_port=6080
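
With this local.conf in place, the controller stack can be brought up with DevStack's stack.sh. The following is a minimal sketch only; the DevStack checkout location and the non-root "stack" user are assumptions and may differ in your setup.

# On the controller node, as the non-root "stack" user (paths are assumptions)
cd ~/devstack
cp <path to the controller local.conf shown above> local.conf
./stack.sh
# When stack.sh finishes, Horizon and the OpenDaylight services run on this host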

Here is a sample local.conf for the compute node:

[[local|localrc]]

FORCE=yes
MULTI_HOST=True

HOST_NAME=<name of this machine>
HOST_IP=<ip of this machine>
HOST_IP_IFACE=<isolated interface>
SERVICE_HOST_NAME=<name of the controller machine>
SERVICE_HOST=<ip of controller machine>
Q_HOST=$SERVICE_HOST

MYSQL_HOST=$SERVICE_HOST
RABBIT_HOST=$SERVICE_HOST

GLANCE_HOST=$SERVICE_HOST
GLANCE_HOSTPORT=$SERVICE_HOST:9292
KEYSTONE_AUTH_HOST=$SERVICE_HOST
KEYSTONE_SERVICE_HOST=<ip of controller machine>

ADMIN_PASSWORD=password
MYSQL_PASSWORD=password
DATABASE_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=no-token-password
HORIZON_PASSWORD=password
RABBIT_PASSWORD=password

disable_all_services

enable_service rabbit
enable_service n-cpu
enable_service q-agt
enable_service odl-compute

DEST=/opt/stack
LOGFILE=$DEST/stack.sh.log
SCREEN_LOGDIR=$DEST/screen
SYSLOG=True
LOGDAYS=1

ODL_MGR_IP=<ip of controller machine>

Q_PLUGIN=ml2
Q_AGENT=openvswitch
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch,opendaylight
Q_ML2_PLUGIN_TYPE_DRIVERS=vxlan
OVS_NUM_HUGEPAGES=8192
OVS_DATAPATH_TYPE=netdev

OVDK_OVS_GIT_TAG=

ENABLE_TENANT_TUNNELS=True
ENABLE_TENANT_VLANS=True

Q_ML2_TENANT_NETWORK_TYPE=vxlan
ML2_VLAN_RANGES=physnet1:1000:1010
PHYSICAL_NETWORK=physnet1
OVS_PHYSICAL_BRIDGE=br-p1p1

[[post-config|$NOVA_CONF]]
[DEFAULT]
firewall_driver=nova.virt.firewall.NoopFirewallDriver
vnc_enabled=True
vncserver_listen=0.0.0.0
vncserver_proxyclient_address=$HOST_IP
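
After stack.sh completes on both nodes, a quick sanity check can confirm that the compute node registered with the controller and that the OVS bridges were created. This is a minimal sketch; the exact output and bridge names depend on the local.conf values above.

# On the compute node: the integration bridge and br-p1p1 should be present
sudo ovs-vsctl show
# On the controller node: the compute service and Neutron agent should be listed
source openrc admin admin
nova service-list
neutron agent-list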

A.1 Create VMs using DevStack Horizon GUI

After starting the OpenDaylight controller as described in Section 6.1, run a stack on the controller and compute nodes.

Log in to http://<control node ip address>:8080 to start the Horizon GUI.

Verify that the node shows up in the following GUI.

Create a new VXLAN network:

1. Click on the Networks tab.

2. Click on the Create Network button.

3. Enter the Network name, then click Next.

4. Enter the subnet information, then click Next.

5. Add additional information, then click Next.

6. Click the Create button.

7. Create a VM instance by clicking the Launch Instances button.

8. Click on the Details tab to enter VM details.

9. Click on the Networking tab, then enter network information.

The VMs will now be created.
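
The same network and VMs can also be created from the OpenStack CLI on the controller node instead of through Horizon. The following is a minimal sketch; the network name, CIDR, image, and flavor are illustrative placeholders only.

source openrc admin demo
# Create the VXLAN tenant network and a subnet
neutron net-create vxlan-net
neutron subnet-create vxlan-net 10.100.0.0/24 --name vxlan-subnet
# Boot two test VMs on that network (note the network ID from the net-create output)
nova boot --flavor m1.small --image <image name> --nic net-id=<network id> vm1
nova boot --flavor m1.small --image <image name> --nic net-id=<network id> vm2
nova list

Once both VMs are ACTIVE, ping from one VM to the other to verify connectivity over the VXLAN network.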

Instead of disabling the OVSDB neutron service by removing the OVSDB neutron bundle file, it is possible to disable the bundle from the OSGi console. However, there does not appear to be a way to make this persistent, so it must be done each time the controller restarts.

Once the controller is up and running, connect to the OSGi console. The ss command displays all of the bundles that are installed and their status; adding a string filters the list of bundles. List the OVSDB bundles:

osgi> ss ovs
Framework is launched

id    State      Bundle
106   ACTIVE     org.opendaylight.ovsdb.northbound_0.5.0
112   ACTIVE     org.opendaylight.ovsdb_0.5.0
262   ACTIVE     org.opendaylight.ovsdb.neutron_0.5.0

Note that there are three OVSDB bundles running (the OVSDB neutron bundle has not been removed in this case).

Disable the OVSDB neutron bundle, and then list the OVSDB bundles again:

osgi> stop 262
osgi> ss ovs
Framework is launched

id    State      Bundle
106   ACTIVE     org.opendaylight.ovsdb.northbound_0.5.0
112   ACTIVE     org.opendaylight.ovsdb_0.5.0
262   RESOLVED   org.opendaylight.ovsdb.neutron_0.5.0

Now the OVSDB neutron bundle is in the RESOLVED state, which means that it is not active.

Appendix B BNG as an Appliance

Please download the latest BNG application from https://01.org/intel-data-plane-performance-demonstrators/downloads. More details about how the BNG works can be found at https://01.org/intel-data-plane-performance-demonstrators/quick-overview.

Appendix C Glossary

Acronym Description

ATR Application Targeted Routing

COTS Commercial Off‐The-Shelf

DPI Deep Packet Inspection

FCS Frame Check Sequence

GRE Generic Routing Encapsulation

GRO Generic Receive Offload

IOMMU InputOutput Memory Management Unit

Kpps Kilo packets per second

KVM Kernel-based Virtual Machine

LRO Large Receive Offload

MSI Message Signaled Interrupt

MPLS Multi-protocol Label Switching

Mpps Millions of packets per second

NIC Network Interface Card

pps Packets per second

QAT Quick Assist Technology

QinQ VLAN stacking (8021ad)

RA Reference Architecture

RSC Receive Side Coalescing

RSS Receive Side Scaling

SP Service Provider

SR-IOV Single Root I/O Virtualization

TCO Total Cost of Ownership

TSO TCP Segmentation Offload

Appendix D References

Document Name Source

Internet Protocol version 4 http://www.ietf.org/rfc/rfc791.txt

Internet Protocol version 6 http://www.faqs.org/rfc/rfc2460.txt

Intel® 82599 10 Gigabit Ethernet Controller Datasheet http://www.intel.com/content/www/us/en/ethernet-controllers/82599-10-gbe-controller-datasheet.html

Intel DDIO https://www-ssl.intel.com/content/www/us/en/io/direct-data-i-o.html

Bandwidth Sharing Fairness http://www.intel.com/content/www/us/en/network-adapters/10-gigabit-network-adapters/10-gbe-ethernet-flexible-port-partitioning-brief.html

Design Considerations for efficient network applications with Intel® multi-core processor-based systems on Linux http://download.intel.com/design/intarch/papers/324176.pdf

OpenFlow with Intel 82599 http://ftp.sunet.se/pub/Linux/distributions/bifrost/seminars/workshop-2011-03-31/Openflow_1103031.pdf

Wu, W., DeMar, P. & Crawford, M. (2012). A Transport-Friendly NIC for Multicore/Multiprocessor Systems. IEEE Transactions on Parallel and Distributed Systems, vol. 23, no. 4, April 2012. http://lss.fnal.gov/archive/2010/pub/fermilab-pub-10-327-cd.pdf

Why does Flow Director Cause Packet Reordering? http://arxiv.org/ftp/arxiv/papers/1106/1106.0443.pdf

IA packet processing http://www.intel.com/p/en_US/embedded/hwsw/technology/packet-processing

High Performance Packet Processing on Cloud Platforms using Linux with Intel® Architecture http://networkbuilders.intel.com/docs/network_builders_RA_packet_processing.pdf

Packet Processing Performance of Virtualized Platforms with Linux and Intel® Architecture http://networkbuilders.intel.com/docs/network_builders_RA_NFV.pdf

DPDK http://www.intel.com/go/dpdk

Intel® DPDK Accelerated vSwitch https://01.org/packet-processing

LEGAL

By using this document in addition to any agreements you have with Intel you accept the terms set forth below

You may not use or facilitate the use of this document in connection with any infringement or other legal analysis concerning Intel products described herein You agree to grant Intel a non-exclusive royalty-free license to any patent claim thereafter drafted which includes subject matter disclosed herein

INFORMATION IN THIS DOCUMENT IS PROVIDED IN CONNECTION WITH INTEL PRODUCTS NO LICENSE EXPRESS OR IMPLIED BY ESTOPPEL OR OTHERWISE TO ANY INTELLECTUAL PROPERTY RIGHTS IS GRANTED BY THIS DOCUMENT EXCEPT AS PROVIDED IN INTEL'S TERMS AND CONDITIONS OF SALE FOR SUCH PRODUCTS INTEL ASSUMES NO LIABILITY WHATSOEVER AND INTEL DISCLAIMS ANY EXPRESS OR IMPLIED WARRANTY RELATING TO SALE AND/OR USE OF INTEL PRODUCTS INCLUDING LIABILITY OR WARRANTIES RELATING TO FITNESS FOR A PARTICULAR PURPOSE MERCHANTABILITY OR INFRINGEMENT OF ANY PATENT COPYRIGHT OR OTHER INTELLECTUAL PROPERTY RIGHT

Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors

Performance tests such as SYSmark and MobileMark are measured using specific computer systems components software operations and functions Any change to any of those factors may cause the results to vary You should consult other information and performance tests to assist you in fully evaluating your contemplated purchases including the performance of that product when combined with other products

The products described in this document may contain design defects or errors known as errata which may cause the product to deviate from published specifications Current characterized errata are available on request Contact your local Intel sales office or your distributor to obtain the latest specifications and before placing your product order

Intel technologies may require enabled hardware specific software or services activation Check with your system manufacturer or retailer Tests document performance of components on a particular test in specific systems Differences in hardware software or configuration will affect actual performance Consult other sources of information to evaluate performance as you consider your purchase For more complete information about performance and benchmark results visit http://www.intel.com/performance

All products computer systems dates and figures specified are preliminary based on current expectations and are subject to change without notice Results have been estimated or simulated using internal Intel analysis or architecture simulation or modeling and provided to you for informational purposes Any differences in your system hardware software or configuration may affect your actual performance

No computer system can be absolutely secure Intel does not assume any liability for lost or stolen data or systems or any damages resulting from such losses

Intel does not control or audit third-party web sites referenced in this document You should visit the referenced web site and confirm whether referenced data are accurate

Intel Corporation may have patents or pending patent applications trademarks copyrights or other intellectual property rights that relate to the presented subject matter The furnishing of documents and other materials and information does not provide any license express or implied by estoppel or otherwise to any such patents trademarks copyrights or other intellectual property rights

© 2014 Intel Corporation. All rights reserved. Intel, the Intel logo, Core, Xeon, and others are trademarks of Intel Corporation in the U.S. and/or other countries. Other names and brands may be claimed as the property of others.

Page 44: Intel Open Network Platform Server Reference Architecture ... · Intel® ONP Server Reference Architecture Solutions Guide 1.0 Audience and Purpose The primary audiences for this

Intelreg ONP Server Reference ArchitectureSolutions Guide

44

Q_PLUGIN=ml2

Q_AGENT=openvswitchQ_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitchopendaylightQ_ML2_PLUGIN_TYPE_DRIVERS=vlanflatlocalQ_ML2_TENANT_NETWORK_TYPE=vxlan

ENABLE_TENANT_TUNNELS=TrueENABLE_TENANT_VLANS=True

PHYSICAL_NETWORK=physnet1ML2_VLAN_RANGES=physnet110001010OVS_PHYSICAL_BRIDGE=br-

MULTI_HOST=True

[[post-config|$NOVA_CONF]]

disable nova security groups[DEFAULT]firewall_driver=novavirtfirewallNoopFirewallDrivernovncproxy_host=0000novncproxy_port=6080

Here is a sample localconf for Compute Node

[[local|localrc]]

FORCE=yesMULTI_HOST=True

HOST_NAME=ltname of this machinegtHOST_IP=ltip of this machinegtHOST_IP_IFACE=ltisolated interfacegtSERVICE_HOST_NAME=ltname of the controller machinegtSERVICE_HOST=ltip of controller machinegtQ_HOST=$SERVICE_HOST

MYSQL_HOST=$SERVICE_HOSTRABBIT_HOST=$SERVICE_HOST

GLANCE_HOST=$SERVICE_HOSTGLANCE_HOSTPORT=$SERVICE_HOST9292KEYSTONE_AUTH_HOST=$SERVICE_HOSTKEYSTONE_SERVICE_HOST=ltip of controller machinegt

ADMIN_PASSWORD=passwordMYSQL_PASSWORD=passwordDATABASE_PASSWORD=passwordSERVICE_PASSWORD=passwordSERVICE_TOKEN=no-token-passwordHORIZON_PASSWORD=passwordRABBIT_PASSWORD=password

disable_all_services

enable_service rabbitenable_service n-cpuenable_service q-agtenable_service odl-compute

DEST=optstackLOGFILE=$DESTstackshlogSCREEN_LOGDIR=$DESTscreenSYSLOG=TrueLOGDAYS=1

45

Intelreg ONP Server Reference ArchitectureSolutions Guide

ODL_MGR_IP=ltip of controller machinegt

Q_PLUGIN=ml2Q_AGENT=openvswitchQ_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitchopendaylightQ_ML2_PLUGIN_TYPE_DRIVERS=vxlanOVS_NUM_HUGEPAGES=8192OVS_DATAPATH_TYPE=netdev

OVDK_OVS_GIT_TAG=

ENABLE_TENANT_TUNNELS=TrueENABLE_TENANT_VLANS=True

Q_ML2_TENANT_NETWORK_TYPE=vxlanML2_VLAN_RANGES=physnet110001010PHYSICAL_NETWORK=physnet1OVS_PHYSICAL_BRIDGE=br-p1p1

[[post-config|$NOVA_CONF]][DEFAULT]firewall_driver=novavirtfirewallNoopFirewallDrivervnc_enabled=Truevncserver_listen=0000vncserver_proxyclient_address=$HOST_IP

A1 Create VMs using DevStack Horizon GUI

After starting the OpenDaylight controller as described in Section 61 run a stack on the controller and compute nodes

Login to httpltcontrol node ip addressgt8080 to start horizon gui

Verify that the node shows up in the following GUI

Create a new Vxlan network

1 Click on the Networks tab

2 Click on the Create Network button

3 Enter the Network name then click Next

Intelreg ONP Server Reference ArchitectureSolutions Guide

46

4 Enter the subnet information then click Next

47

Intelreg ONP Server Reference ArchitectureSolutions Guide

5 Add additional information then click Next

6 Click the Create button

7 Create a VM instance by clicking the Launch Instances button

Intelreg ONP Server Reference ArchitectureSolutions Guide

48

8 Click on the Details tab to enter VM details

49

Intelreg ONP Server Reference ArchitectureSolutions Guide

9 Click on the Networking tab then enter network information

VMS will now be created

Instead of disabling the OVSDB neutron service by removing the OVSDB neutron bundle file it is possible to disable the bundle from the OSGi console However there does not appear to be a way to make this persistent so it must be done each time the controller restarts

Intelreg ONP Server Reference ArchitectureSolutions Guide

50

Once the controller is up and running connect to the OSGi console The ss command displays all of the bundles that are installed and their status Adding a string(s) filters the list of bundles List the OVSDB bundles

osgigt ss ovsFramework is launched

id State Bundle106 ACTIVE orgopendaylightovsdbnorthbound_050112 ACTIVE orgopendaylightovsdb_050262 ACTIVE orgopendaylightovsdbneutron_050

Note that there are three OVSDB bundles running (the OVSDB neutron bundle has not been removed in this case)

Disable the OVSDB neutron bundle and then list the OVSDB bundles again

osgigt stop 262osgigt ss ovsFramework is launched

id State Bundle106 ACTIVE orgopendaylightovsdbnorthbound_050112 ACTIVE orgopendaylightovsdb_050262 RESOLVED orgopendaylightovsdbneutron_050

Now the OVSDB neutron bundle is in the RESOLVED state which means that it is not active

51

Intelreg ONP Server Reference ArchitectureSolutions Guide

Appendix B BNG as an Appliance

Please download the latest BNG application from https01orgintel-data-plane-performance-demonstratorsdownloads More details about how BNG works can be found in https01orgintel-data-plane-performance-demonstratorsquick-overview

Intelreg ONP Server Reference ArchitectureSolutions Guide

52

NOTE This page intentionally left blank

53

Intelreg ONP Server Reference ArchitectureSolutions Guide

Appendix C Glossary

Acronym Description

ATR Application Targeted Routing

COTS Commercial Off‐The-Shelf

DPI Deep Packet Inspection

FCS Frame Check Sequence

GRE Generic Routing Encapsulation

GRO Generic Receive Offload

IOMMU InputOutput Memory Management Unit

Kpps Kilo packets per seconds

KVM Kernel-based Virtual Machine

LRO Large Receive Offload

MSI Message Signaling Interrupt

MPLS Multi-protocol Label Switching

Mpps Millions packets per seconds

NIC Network Interface Card

pps Packets per seconds

QAT Quick Assist Technology

QinQ VLAN stacking (8021ad)

RA Reference Architecture

RSC Receive Side Coalescing

RSS Receive Side Scaling

SP Service Provider

SR-IOV Single root IO Virtualization

TCO Total Cost of Ownership

TSO TCP Segmentation Offload

Intelreg ONP Server Reference ArchitectureSolutions Guide

54

NOTE This page intentionally left blank

55

Intelreg ONP Server Reference ArchitectureSolutions Guide

Appendix D References

Document Name Source

Internet Protocol version 4 httpwwwietforgrfcrfc791txt

Internet Protocol version 6 httpwwwfaqsorgrfcrfc2460txt

Intelreg 82599 10 Gigabit Ethernet Controller Datasheet httpwwwintelcomcontentwwwusenethernet-controllers82599-10-gbe-controller-datasheethtml

Intel DDIO httpswww-sslintelcomcontentwwwuseniodirect-data-i-ohtml

Bandwidth Sharing Fairness httpwwwintelcomcontentwwwusennetwork-adapters10-gigabit- network-adapters10-gbe-ethernet-flexible-port-partitioning-briefhtml

Design Considerations for efficient network applications with Intelreg multi-core processor- based systems on Linux

httpdownloadintelcomdesignintarchpapers324176pdf

OpenFlow with Intel 82599 httpftpsunetsepubLinuxdistributionsbifrostseminarsworkshop-2011-03-31Openflow_1103031pdf

Wu W DeMarP amp CrawfordM (2012) A Transport-Friendly NIC for Multicore Multiprocessor Systems

IEEE transactions on parallel and distributed systems vol 23 no 4 April 2012 httplssfnalgovarchive2010pubfermilab-pub-10-327-cdpdf

Why does Flow Director Cause Placket Reordering httparxivorgftparxivpapers110611060443pdf

IA packet processing httpwwwintelcompen_USembeddedhwswtechnologypacket- processing

High Performance Packet Processing on Cloud Platforms using Linux with Intelreg Architecture

httpnetworkbuildersintelcomdocsnetwork_builders_RA_packet_processingpdf

Packet Processing Performance of Virtualized Platforms with Linux and Intelreg Architecture

httpnetworkbuildersintelcomdocsnetwork_builders_RA_NFVpdf

DPDK httpwwwintelcomgodpdk

Intelreg DPDK Accelerated vSwitch https01orgpacket-processing

Intelreg ONP Server Reference ArchitectureSolutions Guide

56

LEGAL

By using this document in addition to any agreements you have with Intel you accept the terms set forth below

You may not use or facilitate the use of this document in connection with any infringement or other legal analysis concerning Intel products described herein You agree to grant Intel a non-exclusive royalty-free license to any patent claim thereafter drafted which includes subject matter disclosed herein

INFORMATION IN THIS DOCUMENT IS PROVIDED IN CONNECTION WITH INTEL PRODUCTS NO LICENSE EXPRESS OR IMPLIED BY ESTOPPEL OR OTHERWISE TO ANY INTELLECTUAL PROPERTY RIGHTS IS GRANTED BY THIS DOCUMENT EXCEPT AS PROVIDED IN INTELS TERMS AND CONDITIONS OF SALE FOR SUCH PRODUCTS INTEL ASSUMES NO LIABILITY WHATSOEVER AND INTEL DISCLAIMS ANY EXPRESS OR IMPLIED WARRANTY RELATING TO SALE ANDOR USE OF INTEL PRODUCTS INCLUDING LIABILITY OR WARRANTIES RELATING TO FITNESS FOR A PARTICULAR PURPOSE MERCHANTABILITY OR INFRINGEMENT OF ANY PATENT COPYRIGHT OR OTHER INTELLECTUAL PROPERTY RIGHT

Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors

Performance tests such as SYSmark and MobileMark are measured using specific computer systems components software operations and functions Any change to any of those factors may cause the results to vary You should consult other information and performance tests to assist you in fully evaluating your contemplated purchases including the performance of that product when combined with other products

The products described in this document may contain design defects or errors known as errata which may cause the product to deviate from published specifications Current characterized errata are available on request Contact your local Intel sales office or your distributor to obtain the latest specifications and before placing your product order

Intel technologies may require enabled hardware specific software or services activation Check with your system manufacturer or retailer Tests document performance of components on a particular test in specific systems Differences in hardware software or configuration will affect actual performance Consult other sources of information to evaluate performance as you consider your purchase For more complete information about performance and benchmark results visit httpwwwintelcomperformance

All products computer systems dates and figures specified are preliminary based on current expectations and are subject to change without notice Results have been estimated or simulated using internal Intel analysis or architecture simulation or modeling and provided to you for informational purposes Any differences in your system hardware software or configuration may affect your actual performance

No computer system can be absolutely secure Intel does not assume any liability for lost or stolen data or systems or any damages resulting from such losses

Intel does not control or audit third-party web sites referenced in this document You should visit the referenced web site and confirm whether referenced data are accurate

Intel Corporation may have patents or pending patent applications trademarks copyrights or other intellectual property rights that relate to the presented subject matter The furnishing of documents and other materials and information does not provide any license express or implied by estoppel or otherwise to any such patents trademarks copyrights or other intellectual property rights

copy 2014 Intel Corporation All rights reserved Intel the Intel logo Core Xeon and others are trademarks of Intel Corporation in the US andor other countries Other names and brands may be claimed as the property of others

  • Intelreg Open Network Platform Server Reference Architecture (Release 12)
    • Revision History
    • Contents
    • 10 Audience and Purpose
    • 20 Summary
      • 21 Network Services Examples
        • 211 Suricata (Next Generation IDSIPS engine)
        • 212 vBNG (Broadband Network Gateway)
            • 30 Hardware Components
            • 40 Software Versions
              • 41 Obtaining Software Ingredients
                • 50 Installation and Configuration Guide
                  • 51 Instructions Common to Compute and Controller Nodes
                    • 511 BIOS Settings
                    • 512 Operating System Installation and Configuration
                      • 5121 Getting the Fedora 20 DVD
                      • 5122 Fedora 20 Installation
                      • 5123 Additional Packages Installation and Upgrade
                      • 5124 Disable and Enable Services
                          • 52 Controller Node Setup
                            • 521 OpenStack (Juno)
                              • 5211 Network Requirements
                              • 5212 Storage Requirements
                              • 5213 OpenStack Installation Procedures
                                  • 53 Compute Node Setup
                                    • 531 Host Configuration
                                      • 5311 Using DevStack to Deploy vSwitch and OpenStack Components
                                          • 54 vIPS
                                            • 541 Network Configuration for non-vIPS Guests
                                                • 60 Testing the Setup
                                                  • 61 Preparation with OpenStack
                                                    • 611 Deploying Virtual Machines
                                                      • 6111 Default Settings
                                                      • 6112 Customer Settings
                                                      • 6113 Example mdash VM Deployment
                                                      • 6114 Local vIPS
                                                      • 6115 Remote vIPS
                                                        • 612 Non-uniform Memory Access (Numa) Placement and SR-IOV Pass-through for OpenStack
                                                          • 6121 Prepare Compute Node for SR-IOV Pass-through
                                                          • 6122 Devstack Configurations
                                                          • 6123 Create VM with Numa Placement and SR-IOV
                                                              • 62 Using OpenDaylight
                                                                • 621 Preparing the OpenDaylightController
                                                                  • 63 Border Network Gateway
                                                                    • 631 Installation and Configuration Inside the VM
                                                                    • 632 Installation and Configuration of the Back-to-Back Host (Packet Generator)
                                                                    • 633 Extra Preparations on the Compute Node
                                                                        • Appendix A Additional OpenDaylight Information
                                                                          • A1 Create VMs using DevStack Horizon GUI
                                                                            • Appendix B BNG as an Appliance
                                                                            • Appendix C Glossary
                                                                            • Appendix D References
                                                                            • LEGAL
Page 45: Intel Open Network Platform Server Reference Architecture ... · Intel® ONP Server Reference Architecture Solutions Guide 1.0 Audience and Purpose The primary audiences for this

45

Intelreg ONP Server Reference ArchitectureSolutions Guide

ODL_MGR_IP=ltip of controller machinegt

Q_PLUGIN=ml2Q_AGENT=openvswitchQ_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitchopendaylightQ_ML2_PLUGIN_TYPE_DRIVERS=vxlanOVS_NUM_HUGEPAGES=8192OVS_DATAPATH_TYPE=netdev

OVDK_OVS_GIT_TAG=

ENABLE_TENANT_TUNNELS=TrueENABLE_TENANT_VLANS=True

Q_ML2_TENANT_NETWORK_TYPE=vxlanML2_VLAN_RANGES=physnet110001010PHYSICAL_NETWORK=physnet1OVS_PHYSICAL_BRIDGE=br-p1p1

[[post-config|$NOVA_CONF]][DEFAULT]firewall_driver=novavirtfirewallNoopFirewallDrivervnc_enabled=Truevncserver_listen=0000vncserver_proxyclient_address=$HOST_IP

A1 Create VMs using DevStack Horizon GUI

After starting the OpenDaylight controller as described in Section 61 run a stack on the controller and compute nodes

Login to httpltcontrol node ip addressgt8080 to start horizon gui

Verify that the node shows up in the following GUI

Create a new Vxlan network

1 Click on the Networks tab

2 Click on the Create Network button

3 Enter the Network name then click Next

Intelreg ONP Server Reference ArchitectureSolutions Guide

46

4 Enter the subnet information then click Next

47

Intelreg ONP Server Reference ArchitectureSolutions Guide

5 Add additional information then click Next

6 Click the Create button

7 Create a VM instance by clicking the Launch Instances button

Intelreg ONP Server Reference ArchitectureSolutions Guide

48

8 Click on the Details tab to enter VM details

49

Intelreg ONP Server Reference ArchitectureSolutions Guide

9 Click on the Networking tab then enter network information

VMS will now be created

Instead of disabling the OVSDB neutron service by removing the OVSDB neutron bundle file it is possible to disable the bundle from the OSGi console However there does not appear to be a way to make this persistent so it must be done each time the controller restarts

Intelreg ONP Server Reference ArchitectureSolutions Guide

50

Once the controller is up and running connect to the OSGi console The ss command displays all of the bundles that are installed and their status Adding a string(s) filters the list of bundles List the OVSDB bundles

osgigt ss ovsFramework is launched

id State Bundle106 ACTIVE orgopendaylightovsdbnorthbound_050112 ACTIVE orgopendaylightovsdb_050262 ACTIVE orgopendaylightovsdbneutron_050

Note that there are three OVSDB bundles running (the OVSDB neutron bundle has not been removed in this case)

Disable the OVSDB neutron bundle and then list the OVSDB bundles again

osgigt stop 262osgigt ss ovsFramework is launched

id State Bundle106 ACTIVE orgopendaylightovsdbnorthbound_050112 ACTIVE orgopendaylightovsdb_050262 RESOLVED orgopendaylightovsdbneutron_050

Now the OVSDB neutron bundle is in the RESOLVED state which means that it is not active

51

Intelreg ONP Server Reference ArchitectureSolutions Guide

Appendix B BNG as an Appliance

Please download the latest BNG application from https01orgintel-data-plane-performance-demonstratorsdownloads More details about how BNG works can be found in https01orgintel-data-plane-performance-demonstratorsquick-overview

Intelreg ONP Server Reference ArchitectureSolutions Guide

52

NOTE This page intentionally left blank

53

Intelreg ONP Server Reference ArchitectureSolutions Guide

Appendix C Glossary

Acronym Description

ATR Application Targeted Routing

COTS Commercial Off‐The-Shelf

DPI Deep Packet Inspection

FCS Frame Check Sequence

GRE Generic Routing Encapsulation

GRO Generic Receive Offload

IOMMU InputOutput Memory Management Unit

Kpps Kilo packets per seconds

KVM Kernel-based Virtual Machine

LRO Large Receive Offload

MSI Message Signaling Interrupt

MPLS Multi-protocol Label Switching

Mpps Millions packets per seconds

NIC Network Interface Card

pps Packets per seconds

QAT Quick Assist Technology

QinQ VLAN stacking (8021ad)

RA Reference Architecture

RSC Receive Side Coalescing

RSS Receive Side Scaling

SP Service Provider

SR-IOV Single root IO Virtualization

TCO Total Cost of Ownership

TSO TCP Segmentation Offload

Intelreg ONP Server Reference ArchitectureSolutions Guide

54

NOTE This page intentionally left blank

55

Intelreg ONP Server Reference ArchitectureSolutions Guide

Appendix D References

Document Name Source

Internet Protocol version 4 httpwwwietforgrfcrfc791txt

Internet Protocol version 6 httpwwwfaqsorgrfcrfc2460txt

Intelreg 82599 10 Gigabit Ethernet Controller Datasheet httpwwwintelcomcontentwwwusenethernet-controllers82599-10-gbe-controller-datasheethtml

Intel DDIO httpswww-sslintelcomcontentwwwuseniodirect-data-i-ohtml

Bandwidth Sharing Fairness httpwwwintelcomcontentwwwusennetwork-adapters10-gigabit- network-adapters10-gbe-ethernet-flexible-port-partitioning-briefhtml

Design Considerations for efficient network applications with Intelreg multi-core processor- based systems on Linux

httpdownloadintelcomdesignintarchpapers324176pdf

OpenFlow with Intel 82599 httpftpsunetsepubLinuxdistributionsbifrostseminarsworkshop-2011-03-31Openflow_1103031pdf

Wu W DeMarP amp CrawfordM (2012) A Transport-Friendly NIC for Multicore Multiprocessor Systems

IEEE transactions on parallel and distributed systems vol 23 no 4 April 2012 httplssfnalgovarchive2010pubfermilab-pub-10-327-cdpdf

Why does Flow Director Cause Placket Reordering httparxivorgftparxivpapers110611060443pdf

IA packet processing httpwwwintelcompen_USembeddedhwswtechnologypacket- processing

High Performance Packet Processing on Cloud Platforms using Linux with Intelreg Architecture

httpnetworkbuildersintelcomdocsnetwork_builders_RA_packet_processingpdf

Packet Processing Performance of Virtualized Platforms with Linux and Intelreg Architecture

httpnetworkbuildersintelcomdocsnetwork_builders_RA_NFVpdf

DPDK httpwwwintelcomgodpdk

Intelreg DPDK Accelerated vSwitch https01orgpacket-processing

Intelreg ONP Server Reference ArchitectureSolutions Guide

56

LEGAL

By using this document in addition to any agreements you have with Intel you accept the terms set forth below

You may not use or facilitate the use of this document in connection with any infringement or other legal analysis concerning Intel products described herein You agree to grant Intel a non-exclusive royalty-free license to any patent claim thereafter drafted which includes subject matter disclosed herein

INFORMATION IN THIS DOCUMENT IS PROVIDED IN CONNECTION WITH INTEL PRODUCTS NO LICENSE EXPRESS OR IMPLIED BY ESTOPPEL OR OTHERWISE TO ANY INTELLECTUAL PROPERTY RIGHTS IS GRANTED BY THIS DOCUMENT EXCEPT AS PROVIDED IN INTELS TERMS AND CONDITIONS OF SALE FOR SUCH PRODUCTS INTEL ASSUMES NO LIABILITY WHATSOEVER AND INTEL DISCLAIMS ANY EXPRESS OR IMPLIED WARRANTY RELATING TO SALE ANDOR USE OF INTEL PRODUCTS INCLUDING LIABILITY OR WARRANTIES RELATING TO FITNESS FOR A PARTICULAR PURPOSE MERCHANTABILITY OR INFRINGEMENT OF ANY PATENT COPYRIGHT OR OTHER INTELLECTUAL PROPERTY RIGHT

Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors

Performance tests such as SYSmark and MobileMark are measured using specific computer systems components software operations and functions Any change to any of those factors may cause the results to vary You should consult other information and performance tests to assist you in fully evaluating your contemplated purchases including the performance of that product when combined with other products

The products described in this document may contain design defects or errors known as errata which may cause the product to deviate from published specifications Current characterized errata are available on request Contact your local Intel sales office or your distributor to obtain the latest specifications and before placing your product order

Intel technologies may require enabled hardware specific software or services activation Check with your system manufacturer or retailer Tests document performance of components on a particular test in specific systems Differences in hardware software or configuration will affect actual performance Consult other sources of information to evaluate performance as you consider your purchase For more complete information about performance and benchmark results visit httpwwwintelcomperformance

All products computer systems dates and figures specified are preliminary based on current expectations and are subject to change without notice Results have been estimated or simulated using internal Intel analysis or architecture simulation or modeling and provided to you for informational purposes Any differences in your system hardware software or configuration may affect your actual performance

No computer system can be absolutely secure Intel does not assume any liability for lost or stolen data or systems or any damages resulting from such losses

Intel does not control or audit third-party web sites referenced in this document You should visit the referenced web site and confirm whether referenced data are accurate

Intel Corporation may have patents or pending patent applications trademarks copyrights or other intellectual property rights that relate to the presented subject matter The furnishing of documents and other materials and information does not provide any license express or implied by estoppel or otherwise to any such patents trademarks copyrights or other intellectual property rights

copy 2014 Intel Corporation All rights reserved Intel the Intel logo Core Xeon and others are trademarks of Intel Corporation in the US andor other countries Other names and brands may be claimed as the property of others

  • Intelreg Open Network Platform Server Reference Architecture (Release 12)
    • Revision History
    • Contents
    • 10 Audience and Purpose
    • 20 Summary
      • 21 Network Services Examples
        • 211 Suricata (Next Generation IDSIPS engine)
        • 212 vBNG (Broadband Network Gateway)
            • 30 Hardware Components
            • 40 Software Versions
              • 41 Obtaining Software Ingredients
                • 50 Installation and Configuration Guide
                  • 51 Instructions Common to Compute and Controller Nodes
                    • 511 BIOS Settings
                    • 512 Operating System Installation and Configuration
                      • 5121 Getting the Fedora 20 DVD
                      • 5122 Fedora 20 Installation
                      • 5123 Additional Packages Installation and Upgrade
                      • 5124 Disable and Enable Services
                          • 52 Controller Node Setup
                            • 521 OpenStack (Juno)
                              • 5211 Network Requirements
                              • 5212 Storage Requirements
                              • 5213 OpenStack Installation Procedures
                                  • 53 Compute Node Setup
                                    • 531 Host Configuration
                                      • 5311 Using DevStack to Deploy vSwitch and OpenStack Components
                                          • 54 vIPS
                                            • 541 Network Configuration for non-vIPS Guests
                                                • 60 Testing the Setup
                                                  • 61 Preparation with OpenStack
                                                    • 611 Deploying Virtual Machines
                                                      • 6111 Default Settings
                                                      • 6112 Customer Settings
                                                      • 6113 Example mdash VM Deployment
                                                      • 6114 Local vIPS
                                                      • 6115 Remote vIPS
                                                        • 612 Non-uniform Memory Access (Numa) Placement and SR-IOV Pass-through for OpenStack
                                                          • 6121 Prepare Compute Node for SR-IOV Pass-through
                                                          • 6122 Devstack Configurations
                                                          • 6123 Create VM with Numa Placement and SR-IOV
                                                              • 62 Using OpenDaylight
                                                                • 621 Preparing the OpenDaylightController
                                                                  • 63 Border Network Gateway
                                                                    • 631 Installation and Configuration Inside the VM
                                                                    • 632 Installation and Configuration of the Back-to-Back Host (Packet Generator)
                                                                    • 633 Extra Preparations on the Compute Node
                                                                        • Appendix A Additional OpenDaylight Information
                                                                          • A1 Create VMs using DevStack Horizon GUI
                                                                            • Appendix B BNG as an Appliance
                                                                            • Appendix C Glossary
                                                                            • Appendix D References
                                                                            • LEGAL
Page 46: Intel Open Network Platform Server Reference Architecture ... · Intel® ONP Server Reference Architecture Solutions Guide 1.0 Audience and Purpose The primary audiences for this

Intelreg ONP Server Reference ArchitectureSolutions Guide

46

4 Enter the subnet information then click Next

47

Intelreg ONP Server Reference ArchitectureSolutions Guide

5 Add additional information then click Next

6 Click the Create button

7 Create a VM instance by clicking the Launch Instances button

Intelreg ONP Server Reference ArchitectureSolutions Guide

48

8 Click on the Details tab to enter VM details

49

Intelreg ONP Server Reference ArchitectureSolutions Guide

9 Click on the Networking tab then enter network information

VMS will now be created

Instead of disabling the OVSDB neutron service by removing the OVSDB neutron bundle file it is possible to disable the bundle from the OSGi console However there does not appear to be a way to make this persistent so it must be done each time the controller restarts

Intelreg ONP Server Reference ArchitectureSolutions Guide

50

Once the controller is up and running connect to the OSGi console The ss command displays all of the bundles that are installed and their status Adding a string(s) filters the list of bundles List the OVSDB bundles

osgigt ss ovsFramework is launched

id State Bundle106 ACTIVE orgopendaylightovsdbnorthbound_050112 ACTIVE orgopendaylightovsdb_050262 ACTIVE orgopendaylightovsdbneutron_050

Note that there are three OVSDB bundles running (the OVSDB neutron bundle has not been removed in this case)

Disable the OVSDB neutron bundle and then list the OVSDB bundles again

osgigt stop 262osgigt ss ovsFramework is launched

id State Bundle106 ACTIVE orgopendaylightovsdbnorthbound_050112 ACTIVE orgopendaylightovsdb_050262 RESOLVED orgopendaylightovsdbneutron_050

Now the OVSDB neutron bundle is in the RESOLVED state which means that it is not active

51

Intelreg ONP Server Reference ArchitectureSolutions Guide

Appendix B BNG as an Appliance

Please download the latest BNG application from https01orgintel-data-plane-performance-demonstratorsdownloads More details about how BNG works can be found in https01orgintel-data-plane-performance-demonstratorsquick-overview

Intelreg ONP Server Reference ArchitectureSolutions Guide

52

NOTE This page intentionally left blank

53

Intelreg ONP Server Reference ArchitectureSolutions Guide

Appendix C Glossary

Acronym Description

ATR Application Targeted Routing

COTS Commercial Off‐The-Shelf

DPI Deep Packet Inspection

FCS Frame Check Sequence

GRE Generic Routing Encapsulation

GRO Generic Receive Offload

IOMMU InputOutput Memory Management Unit

Kpps Kilo packets per seconds

KVM Kernel-based Virtual Machine

LRO Large Receive Offload

MSI Message Signaling Interrupt

MPLS Multi-protocol Label Switching

Mpps Millions packets per seconds

NIC Network Interface Card

pps Packets per seconds

QAT Quick Assist Technology

QinQ VLAN stacking (8021ad)

RA Reference Architecture

RSC Receive Side Coalescing

RSS Receive Side Scaling

SP Service Provider

SR-IOV Single root IO Virtualization

TCO Total Cost of Ownership

TSO TCP Segmentation Offload

Intelreg ONP Server Reference ArchitectureSolutions Guide

54

NOTE This page intentionally left blank

55

Intelreg ONP Server Reference ArchitectureSolutions Guide

Appendix D References

Document Name Source

Internet Protocol version 4 httpwwwietforgrfcrfc791txt

Internet Protocol version 6 httpwwwfaqsorgrfcrfc2460txt

Intelreg 82599 10 Gigabit Ethernet Controller Datasheet httpwwwintelcomcontentwwwusenethernet-controllers82599-10-gbe-controller-datasheethtml

Intel DDIO httpswww-sslintelcomcontentwwwuseniodirect-data-i-ohtml

Bandwidth Sharing Fairness httpwwwintelcomcontentwwwusennetwork-adapters10-gigabit- network-adapters10-gbe-ethernet-flexible-port-partitioning-briefhtml

Design Considerations for efficient network applications with Intelreg multi-core processor- based systems on Linux

httpdownloadintelcomdesignintarchpapers324176pdf

OpenFlow with Intel 82599 httpftpsunetsepubLinuxdistributionsbifrostseminarsworkshop-2011-03-31Openflow_1103031pdf

Wu W DeMarP amp CrawfordM (2012) A Transport-Friendly NIC for Multicore Multiprocessor Systems

IEEE transactions on parallel and distributed systems vol 23 no 4 April 2012 httplssfnalgovarchive2010pubfermilab-pub-10-327-cdpdf

Why does Flow Director Cause Placket Reordering httparxivorgftparxivpapers110611060443pdf

IA packet processing httpwwwintelcompen_USembeddedhwswtechnologypacket- processing

High Performance Packet Processing on Cloud Platforms using Linux with Intelreg Architecture

httpnetworkbuildersintelcomdocsnetwork_builders_RA_packet_processingpdf

Packet Processing Performance of Virtualized Platforms with Linux and Intelreg Architecture

httpnetworkbuildersintelcomdocsnetwork_builders_RA_NFVpdf

DPDK httpwwwintelcomgodpdk

Intelreg DPDK Accelerated vSwitch https01orgpacket-processing

Intelreg ONP Server Reference ArchitectureSolutions Guide

56

LEGAL

By using this document, in addition to any agreements you have with Intel, you accept the terms set forth below.

You may not use or facilitate the use of this document in connection with any infringement or other legal analysis concerning Intel products described herein. You agree to grant Intel a non-exclusive, royalty-free license to any patent claim thereafter drafted which includes subject matter disclosed herein.

INFORMATION IN THIS DOCUMENT IS PROVIDED IN CONNECTION WITH INTEL PRODUCTS. NO LICENSE, EXPRESS OR IMPLIED, BY ESTOPPEL OR OTHERWISE, TO ANY INTELLECTUAL PROPERTY RIGHTS IS GRANTED BY THIS DOCUMENT. EXCEPT AS PROVIDED IN INTEL'S TERMS AND CONDITIONS OF SALE FOR SUCH PRODUCTS, INTEL ASSUMES NO LIABILITY WHATSOEVER, AND INTEL DISCLAIMS ANY EXPRESS OR IMPLIED WARRANTY, RELATING TO SALE AND/OR USE OF INTEL PRODUCTS, INCLUDING LIABILITY OR WARRANTIES RELATING TO FITNESS FOR A PARTICULAR PURPOSE, MERCHANTABILITY, OR INFRINGEMENT OF ANY PATENT, COPYRIGHT, OR OTHER INTELLECTUAL PROPERTY RIGHT.

Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors.

Performance tests, such as SYSmark and MobileMark, are measured using specific computer systems, components, software, operations, and functions. Any change to any of those factors may cause the results to vary. You should consult other information and performance tests to assist you in fully evaluating your contemplated purchases, including the performance of that product when combined with other products.

The products described in this document may contain design defects or errors known as errata, which may cause the product to deviate from published specifications. Current characterized errata are available on request. Contact your local Intel sales office or your distributor to obtain the latest specifications and before placing your product order.

Intel technologies may require enabled hardware, specific software, or services activation. Check with your system manufacturer or retailer. Tests document performance of components on a particular test, in specific systems. Differences in hardware, software, or configuration will affect actual performance. Consult other sources of information to evaluate performance as you consider your purchase. For more complete information about performance and benchmark results, visit http://www.intel.com/performance.

All products, computer systems, dates, and figures specified are preliminary based on current expectations and are subject to change without notice. Results have been estimated or simulated using internal Intel analysis or architecture simulation or modeling and are provided to you for informational purposes. Any differences in your system hardware, software, or configuration may affect your actual performance.

No computer system can be absolutely secure. Intel does not assume any liability for lost or stolen data or systems or any damages resulting from such losses.

Intel does not control or audit third-party web sites referenced in this document. You should visit the referenced web site and confirm whether referenced data are accurate.

Intel Corporation may have patents or pending patent applications, trademarks, copyrights, or other intellectual property rights that relate to the presented subject matter. The furnishing of documents and other materials and information does not provide any license, express or implied, by estoppel or otherwise, to any such patents, trademarks, copyrights, or other intellectual property rights.

© 2014 Intel Corporation. All rights reserved. Intel, the Intel logo, Core, Xeon, and others are trademarks of Intel Corporation in the U.S. and/or other countries. Other names and brands may be claimed as the property of others.

Page 47: Intel Open Network Platform Server Reference Architecture ... · Intel® ONP Server Reference Architecture Solutions Guide 1.0 Audience and Purpose The primary audiences for this

47

Intelreg ONP Server Reference ArchitectureSolutions Guide

5 Add additional information then click Next

6 Click the Create button

7 Create a VM instance by clicking the Launch Instances button

Intelreg ONP Server Reference ArchitectureSolutions Guide

48

8 Click on the Details tab to enter VM details

49

Intelreg ONP Server Reference ArchitectureSolutions Guide

9 Click on the Networking tab then enter network information

VMS will now be created

Instead of disabling the OVSDB neutron service by removing the OVSDB neutron bundle file it is possible to disable the bundle from the OSGi console However there does not appear to be a way to make this persistent so it must be done each time the controller restarts

Intelreg ONP Server Reference ArchitectureSolutions Guide

50

Once the controller is up and running connect to the OSGi console The ss command displays all of the bundles that are installed and their status Adding a string(s) filters the list of bundles List the OVSDB bundles

osgigt ss ovsFramework is launched

id State Bundle106 ACTIVE orgopendaylightovsdbnorthbound_050112 ACTIVE orgopendaylightovsdb_050262 ACTIVE orgopendaylightovsdbneutron_050

Note that there are three OVSDB bundles running (the OVSDB neutron bundle has not been removed in this case)

Disable the OVSDB neutron bundle and then list the OVSDB bundles again

osgigt stop 262osgigt ss ovsFramework is launched

id State Bundle106 ACTIVE orgopendaylightovsdbnorthbound_050112 ACTIVE orgopendaylightovsdb_050262 RESOLVED orgopendaylightovsdbneutron_050

Now the OVSDB neutron bundle is in the RESOLVED state which means that it is not active

51

Intelreg ONP Server Reference ArchitectureSolutions Guide

Appendix B BNG as an Appliance

Please download the latest BNG application from https01orgintel-data-plane-performance-demonstratorsdownloads More details about how BNG works can be found in https01orgintel-data-plane-performance-demonstratorsquick-overview

Intelreg ONP Server Reference ArchitectureSolutions Guide

52

NOTE This page intentionally left blank

53

Intelreg ONP Server Reference ArchitectureSolutions Guide

Appendix C Glossary

Acronym Description

ATR Application Targeted Routing

COTS Commercial Off‐The-Shelf

DPI Deep Packet Inspection

FCS Frame Check Sequence

GRE Generic Routing Encapsulation

GRO Generic Receive Offload

IOMMU InputOutput Memory Management Unit

Kpps Kilo packets per seconds

KVM Kernel-based Virtual Machine

LRO Large Receive Offload

MSI Message Signaling Interrupt

MPLS Multi-protocol Label Switching

Mpps Millions packets per seconds

NIC Network Interface Card

pps Packets per seconds

QAT Quick Assist Technology

QinQ VLAN stacking (8021ad)

RA Reference Architecture

RSC Receive Side Coalescing

RSS Receive Side Scaling

SP Service Provider

SR-IOV Single root IO Virtualization

TCO Total Cost of Ownership

TSO TCP Segmentation Offload

Intelreg ONP Server Reference ArchitectureSolutions Guide

54

NOTE This page intentionally left blank

55

Intelreg ONP Server Reference ArchitectureSolutions Guide

Appendix D References

Document Name Source

Internet Protocol version 4 httpwwwietforgrfcrfc791txt

Internet Protocol version 6 httpwwwfaqsorgrfcrfc2460txt

Intelreg 82599 10 Gigabit Ethernet Controller Datasheet httpwwwintelcomcontentwwwusenethernet-controllers82599-10-gbe-controller-datasheethtml

Intel DDIO httpswww-sslintelcomcontentwwwuseniodirect-data-i-ohtml

Bandwidth Sharing Fairness httpwwwintelcomcontentwwwusennetwork-adapters10-gigabit- network-adapters10-gbe-ethernet-flexible-port-partitioning-briefhtml

Design Considerations for efficient network applications with Intelreg multi-core processor- based systems on Linux

httpdownloadintelcomdesignintarchpapers324176pdf

OpenFlow with Intel 82599 httpftpsunetsepubLinuxdistributionsbifrostseminarsworkshop-2011-03-31Openflow_1103031pdf

Wu W DeMarP amp CrawfordM (2012) A Transport-Friendly NIC for Multicore Multiprocessor Systems

IEEE transactions on parallel and distributed systems vol 23 no 4 April 2012 httplssfnalgovarchive2010pubfermilab-pub-10-327-cdpdf

Why does Flow Director Cause Placket Reordering httparxivorgftparxivpapers110611060443pdf

IA packet processing httpwwwintelcompen_USembeddedhwswtechnologypacket- processing

High Performance Packet Processing on Cloud Platforms using Linux with Intelreg Architecture

httpnetworkbuildersintelcomdocsnetwork_builders_RA_packet_processingpdf

Packet Processing Performance of Virtualized Platforms with Linux and Intelreg Architecture

httpnetworkbuildersintelcomdocsnetwork_builders_RA_NFVpdf

DPDK httpwwwintelcomgodpdk

Intelreg DPDK Accelerated vSwitch https01orgpacket-processing

Intelreg ONP Server Reference ArchitectureSolutions Guide

56

LEGAL

By using this document in addition to any agreements you have with Intel you accept the terms set forth below

You may not use or facilitate the use of this document in connection with any infringement or other legal analysis concerning Intel products described herein You agree to grant Intel a non-exclusive royalty-free license to any patent claim thereafter drafted which includes subject matter disclosed herein

INFORMATION IN THIS DOCUMENT IS PROVIDED IN CONNECTION WITH INTEL PRODUCTS NO LICENSE EXPRESS OR IMPLIED BY ESTOPPEL OR OTHERWISE TO ANY INTELLECTUAL PROPERTY RIGHTS IS GRANTED BY THIS DOCUMENT EXCEPT AS PROVIDED IN INTELS TERMS AND CONDITIONS OF SALE FOR SUCH PRODUCTS INTEL ASSUMES NO LIABILITY WHATSOEVER AND INTEL DISCLAIMS ANY EXPRESS OR IMPLIED WARRANTY RELATING TO SALE ANDOR USE OF INTEL PRODUCTS INCLUDING LIABILITY OR WARRANTIES RELATING TO FITNESS FOR A PARTICULAR PURPOSE MERCHANTABILITY OR INFRINGEMENT OF ANY PATENT COPYRIGHT OR OTHER INTELLECTUAL PROPERTY RIGHT

Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors

Performance tests such as SYSmark and MobileMark are measured using specific computer systems components software operations and functions Any change to any of those factors may cause the results to vary You should consult other information and performance tests to assist you in fully evaluating your contemplated purchases including the performance of that product when combined with other products

The products described in this document may contain design defects or errors known as errata which may cause the product to deviate from published specifications Current characterized errata are available on request Contact your local Intel sales office or your distributor to obtain the latest specifications and before placing your product order

Intel technologies may require enabled hardware specific software or services activation Check with your system manufacturer or retailer Tests document performance of components on a particular test in specific systems Differences in hardware software or configuration will affect actual performance Consult other sources of information to evaluate performance as you consider your purchase For more complete information about performance and benchmark results visit httpwwwintelcomperformance

All products computer systems dates and figures specified are preliminary based on current expectations and are subject to change without notice Results have been estimated or simulated using internal Intel analysis or architecture simulation or modeling and provided to you for informational purposes Any differences in your system hardware software or configuration may affect your actual performance

No computer system can be absolutely secure Intel does not assume any liability for lost or stolen data or systems or any damages resulting from such losses

Intel does not control or audit third-party web sites referenced in this document You should visit the referenced web site and confirm whether referenced data are accurate

Intel Corporation may have patents or pending patent applications trademarks copyrights or other intellectual property rights that relate to the presented subject matter The furnishing of documents and other materials and information does not provide any license express or implied by estoppel or otherwise to any such patents trademarks copyrights or other intellectual property rights

copy 2014 Intel Corporation All rights reserved Intel the Intel logo Core Xeon and others are trademarks of Intel Corporation in the US andor other countries Other names and brands may be claimed as the property of others

  • Intelreg Open Network Platform Server Reference Architecture (Release 12)
    • Revision History
    • Contents
    • 10 Audience and Purpose
    • 20 Summary
      • 21 Network Services Examples
        • 211 Suricata (Next Generation IDSIPS engine)
        • 212 vBNG (Broadband Network Gateway)
            • 30 Hardware Components
            • 40 Software Versions
              • 41 Obtaining Software Ingredients
                • 50 Installation and Configuration Guide
                  • 51 Instructions Common to Compute and Controller Nodes
                    • 511 BIOS Settings
                    • 512 Operating System Installation and Configuration
                      • 5121 Getting the Fedora 20 DVD
                      • 5122 Fedora 20 Installation
                      • 5123 Additional Packages Installation and Upgrade
                      • 5124 Disable and Enable Services
                          • 52 Controller Node Setup
                            • 521 OpenStack (Juno)
                              • 5211 Network Requirements
                              • 5212 Storage Requirements
                              • 5213 OpenStack Installation Procedures
                                  • 53 Compute Node Setup
                                    • 531 Host Configuration
                                      • 5311 Using DevStack to Deploy vSwitch and OpenStack Components
                                          • 54 vIPS
                                            • 541 Network Configuration for non-vIPS Guests
                                                • 60 Testing the Setup
                                                  • 61 Preparation with OpenStack
                                                    • 611 Deploying Virtual Machines
                                                      • 6111 Default Settings
                                                      • 6112 Customer Settings
                                                      • 6113 Example mdash VM Deployment
                                                      • 6114 Local vIPS
                                                      • 6115 Remote vIPS
                                                        • 612 Non-uniform Memory Access (Numa) Placement and SR-IOV Pass-through for OpenStack
                                                          • 6121 Prepare Compute Node for SR-IOV Pass-through
                                                          • 6122 Devstack Configurations
                                                          • 6123 Create VM with Numa Placement and SR-IOV
                                                              • 62 Using OpenDaylight
                                                                • 621 Preparing the OpenDaylightController
                                                                  • 63 Border Network Gateway
                                                                    • 631 Installation and Configuration Inside the VM
                                                                    • 632 Installation and Configuration of the Back-to-Back Host (Packet Generator)
                                                                    • 633 Extra Preparations on the Compute Node
                                                                        • Appendix A Additional OpenDaylight Information
                                                                          • A1 Create VMs using DevStack Horizon GUI
                                                                            • Appendix B BNG as an Appliance
                                                                            • Appendix C Glossary
                                                                            • Appendix D References
                                                                            • LEGAL
Page 48: Intel Open Network Platform Server Reference Architecture ... · Intel® ONP Server Reference Architecture Solutions Guide 1.0 Audience and Purpose The primary audiences for this

Intelreg ONP Server Reference ArchitectureSolutions Guide

48

8 Click on the Details tab to enter VM details

49

Intelreg ONP Server Reference ArchitectureSolutions Guide

9 Click on the Networking tab then enter network information

VMS will now be created

Instead of disabling the OVSDB neutron service by removing the OVSDB neutron bundle file it is possible to disable the bundle from the OSGi console However there does not appear to be a way to make this persistent so it must be done each time the controller restarts

Intelreg ONP Server Reference ArchitectureSolutions Guide

50

Once the controller is up and running connect to the OSGi console The ss command displays all of the bundles that are installed and their status Adding a string(s) filters the list of bundles List the OVSDB bundles

osgigt ss ovsFramework is launched

id State Bundle106 ACTIVE orgopendaylightovsdbnorthbound_050112 ACTIVE orgopendaylightovsdb_050262 ACTIVE orgopendaylightovsdbneutron_050

Note that there are three OVSDB bundles running (the OVSDB neutron bundle has not been removed in this case)

Disable the OVSDB neutron bundle and then list the OVSDB bundles again

osgigt stop 262osgigt ss ovsFramework is launched

id State Bundle106 ACTIVE orgopendaylightovsdbnorthbound_050112 ACTIVE orgopendaylightovsdb_050262 RESOLVED orgopendaylightovsdbneutron_050

Now the OVSDB neutron bundle is in the RESOLVED state which means that it is not active

51

Intelreg ONP Server Reference ArchitectureSolutions Guide

Appendix B BNG as an Appliance

Please download the latest BNG application from https01orgintel-data-plane-performance-demonstratorsdownloads More details about how BNG works can be found in https01orgintel-data-plane-performance-demonstratorsquick-overview

Intelreg ONP Server Reference ArchitectureSolutions Guide

52

NOTE This page intentionally left blank

53

Intelreg ONP Server Reference ArchitectureSolutions Guide

Appendix C Glossary

Acronym Description

ATR Application Targeted Routing

COTS Commercial Off‐The-Shelf

DPI Deep Packet Inspection

FCS Frame Check Sequence

GRE Generic Routing Encapsulation

GRO Generic Receive Offload

IOMMU InputOutput Memory Management Unit

Kpps Kilo packets per seconds

KVM Kernel-based Virtual Machine

LRO Large Receive Offload

MSI Message Signaling Interrupt

MPLS Multi-protocol Label Switching

Mpps Millions packets per seconds

NIC Network Interface Card

pps Packets per seconds

QAT Quick Assist Technology

QinQ VLAN stacking (8021ad)

RA Reference Architecture

RSC Receive Side Coalescing

RSS Receive Side Scaling

SP Service Provider

SR-IOV Single root IO Virtualization

TCO Total Cost of Ownership

TSO TCP Segmentation Offload

Intelreg ONP Server Reference ArchitectureSolutions Guide

54

NOTE This page intentionally left blank

55

Intelreg ONP Server Reference ArchitectureSolutions Guide

Appendix D References

Document Name Source

Internet Protocol version 4 httpwwwietforgrfcrfc791txt

Internet Protocol version 6 httpwwwfaqsorgrfcrfc2460txt

Intelreg 82599 10 Gigabit Ethernet Controller Datasheet httpwwwintelcomcontentwwwusenethernet-controllers82599-10-gbe-controller-datasheethtml

Intel DDIO httpswww-sslintelcomcontentwwwuseniodirect-data-i-ohtml

Bandwidth Sharing Fairness httpwwwintelcomcontentwwwusennetwork-adapters10-gigabit- network-adapters10-gbe-ethernet-flexible-port-partitioning-briefhtml

Design Considerations for efficient network applications with Intelreg multi-core processor- based systems on Linux

httpdownloadintelcomdesignintarchpapers324176pdf

OpenFlow with Intel 82599 httpftpsunetsepubLinuxdistributionsbifrostseminarsworkshop-2011-03-31Openflow_1103031pdf

Wu W DeMarP amp CrawfordM (2012) A Transport-Friendly NIC for Multicore Multiprocessor Systems

IEEE transactions on parallel and distributed systems vol 23 no 4 April 2012 httplssfnalgovarchive2010pubfermilab-pub-10-327-cdpdf

Why does Flow Director Cause Placket Reordering httparxivorgftparxivpapers110611060443pdf

IA packet processing httpwwwintelcompen_USembeddedhwswtechnologypacket- processing

High Performance Packet Processing on Cloud Platforms using Linux with Intelreg Architecture

httpnetworkbuildersintelcomdocsnetwork_builders_RA_packet_processingpdf

Packet Processing Performance of Virtualized Platforms with Linux and Intelreg Architecture

httpnetworkbuildersintelcomdocsnetwork_builders_RA_NFVpdf

DPDK httpwwwintelcomgodpdk

Intelreg DPDK Accelerated vSwitch https01orgpacket-processing

Intelreg ONP Server Reference ArchitectureSolutions Guide

56

LEGAL

By using this document in addition to any agreements you have with Intel you accept the terms set forth below

You may not use or facilitate the use of this document in connection with any infringement or other legal analysis concerning Intel products described herein You agree to grant Intel a non-exclusive royalty-free license to any patent claim thereafter drafted which includes subject matter disclosed herein

INFORMATION IN THIS DOCUMENT IS PROVIDED IN CONNECTION WITH INTEL PRODUCTS NO LICENSE EXPRESS OR IMPLIED BY ESTOPPEL OR OTHERWISE TO ANY INTELLECTUAL PROPERTY RIGHTS IS GRANTED BY THIS DOCUMENT EXCEPT AS PROVIDED IN INTELS TERMS AND CONDITIONS OF SALE FOR SUCH PRODUCTS INTEL ASSUMES NO LIABILITY WHATSOEVER AND INTEL DISCLAIMS ANY EXPRESS OR IMPLIED WARRANTY RELATING TO SALE ANDOR USE OF INTEL PRODUCTS INCLUDING LIABILITY OR WARRANTIES RELATING TO FITNESS FOR A PARTICULAR PURPOSE MERCHANTABILITY OR INFRINGEMENT OF ANY PATENT COPYRIGHT OR OTHER INTELLECTUAL PROPERTY RIGHT

Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors

Performance tests such as SYSmark and MobileMark are measured using specific computer systems components software operations and functions Any change to any of those factors may cause the results to vary You should consult other information and performance tests to assist you in fully evaluating your contemplated purchases including the performance of that product when combined with other products

The products described in this document may contain design defects or errors known as errata which may cause the product to deviate from published specifications Current characterized errata are available on request Contact your local Intel sales office or your distributor to obtain the latest specifications and before placing your product order

Intel technologies may require enabled hardware specific software or services activation Check with your system manufacturer or retailer Tests document performance of components on a particular test in specific systems Differences in hardware software or configuration will affect actual performance Consult other sources of information to evaluate performance as you consider your purchase For more complete information about performance and benchmark results visit httpwwwintelcomperformance

All products computer systems dates and figures specified are preliminary based on current expectations and are subject to change without notice Results have been estimated or simulated using internal Intel analysis or architecture simulation or modeling and provided to you for informational purposes Any differences in your system hardware software or configuration may affect your actual performance

No computer system can be absolutely secure Intel does not assume any liability for lost or stolen data or systems or any damages resulting from such losses

Intel does not control or audit third-party web sites referenced in this document You should visit the referenced web site and confirm whether referenced data are accurate

Intel Corporation may have patents or pending patent applications trademarks copyrights or other intellectual property rights that relate to the presented subject matter The furnishing of documents and other materials and information does not provide any license express or implied by estoppel or otherwise to any such patents trademarks copyrights or other intellectual property rights

copy 2014 Intel Corporation All rights reserved Intel the Intel logo Core Xeon and others are trademarks of Intel Corporation in the US andor other countries Other names and brands may be claimed as the property of others

  • Intelreg Open Network Platform Server Reference Architecture (Release 12)
    • Revision History
    • Contents
    • 10 Audience and Purpose
    • 20 Summary
      • 21 Network Services Examples
        • 211 Suricata (Next Generation IDSIPS engine)
        • 212 vBNG (Broadband Network Gateway)
            • 30 Hardware Components
            • 40 Software Versions
              • 41 Obtaining Software Ingredients
                • 50 Installation and Configuration Guide
                  • 51 Instructions Common to Compute and Controller Nodes
                    • 511 BIOS Settings
                    • 512 Operating System Installation and Configuration
                      • 5121 Getting the Fedora 20 DVD
                      • 5122 Fedora 20 Installation
                      • 5123 Additional Packages Installation and Upgrade
                      • 5124 Disable and Enable Services
                          • 52 Controller Node Setup
                            • 521 OpenStack (Juno)
                              • 5211 Network Requirements
                              • 5212 Storage Requirements
                              • 5213 OpenStack Installation Procedures
                                  • 53 Compute Node Setup
                                    • 531 Host Configuration
                                      • 5311 Using DevStack to Deploy vSwitch and OpenStack Components
                                          • 54 vIPS
                                            • 541 Network Configuration for non-vIPS Guests
                                                • 60 Testing the Setup
                                                  • 61 Preparation with OpenStack
                                                    • 611 Deploying Virtual Machines
                                                      • 6111 Default Settings
                                                      • 6112 Customer Settings
                                                      • 6113 Example mdash VM Deployment
                                                      • 6114 Local vIPS
                                                      • 6115 Remote vIPS
                                                        • 612 Non-uniform Memory Access (Numa) Placement and SR-IOV Pass-through for OpenStack
                                                          • 6121 Prepare Compute Node for SR-IOV Pass-through
                                                          • 6122 Devstack Configurations
                                                          • 6123 Create VM with Numa Placement and SR-IOV
                                                              • 62 Using OpenDaylight
                                                                • 621 Preparing the OpenDaylightController
                                                                  • 63 Border Network Gateway
                                                                    • 631 Installation and Configuration Inside the VM
                                                                    • 632 Installation and Configuration of the Back-to-Back Host (Packet Generator)
                                                                    • 633 Extra Preparations on the Compute Node
                                                                        • Appendix A Additional OpenDaylight Information
                                                                          • A1 Create VMs using DevStack Horizon GUI
                                                                            • Appendix B BNG as an Appliance
                                                                            • Appendix C Glossary
                                                                            • Appendix D References
                                                                            • LEGAL
Page 49: Intel Open Network Platform Server Reference Architecture ... · Intel® ONP Server Reference Architecture Solutions Guide 1.0 Audience and Purpose The primary audiences for this

49

Intelreg ONP Server Reference ArchitectureSolutions Guide

9 Click on the Networking tab then enter network information

VMS will now be created

Instead of disabling the OVSDB neutron service by removing the OVSDB neutron bundle file it is possible to disable the bundle from the OSGi console However there does not appear to be a way to make this persistent so it must be done each time the controller restarts

Intelreg ONP Server Reference ArchitectureSolutions Guide

50

Once the controller is up and running connect to the OSGi console The ss command displays all of the bundles that are installed and their status Adding a string(s) filters the list of bundles List the OVSDB bundles

osgigt ss ovsFramework is launched

id State Bundle106 ACTIVE orgopendaylightovsdbnorthbound_050112 ACTIVE orgopendaylightovsdb_050262 ACTIVE orgopendaylightovsdbneutron_050

Note that there are three OVSDB bundles running (the OVSDB neutron bundle has not been removed in this case)

Disable the OVSDB neutron bundle and then list the OVSDB bundles again

osgigt stop 262osgigt ss ovsFramework is launched

id State Bundle106 ACTIVE orgopendaylightovsdbnorthbound_050112 ACTIVE orgopendaylightovsdb_050262 RESOLVED orgopendaylightovsdbneutron_050

Now the OVSDB neutron bundle is in the RESOLVED state which means that it is not active

51

Intelreg ONP Server Reference ArchitectureSolutions Guide

Appendix B BNG as an Appliance

Please download the latest BNG application from https01orgintel-data-plane-performance-demonstratorsdownloads More details about how BNG works can be found in https01orgintel-data-plane-performance-demonstratorsquick-overview

Intelreg ONP Server Reference ArchitectureSolutions Guide

52

NOTE This page intentionally left blank

53

Intelreg ONP Server Reference ArchitectureSolutions Guide

Appendix C Glossary

Acronym Description

ATR Application Targeted Routing

COTS Commercial Off‐The-Shelf

DPI Deep Packet Inspection

FCS Frame Check Sequence

GRE Generic Routing Encapsulation

GRO Generic Receive Offload

IOMMU InputOutput Memory Management Unit

Kpps Kilo packets per seconds

KVM Kernel-based Virtual Machine

LRO Large Receive Offload

MSI Message Signaling Interrupt

MPLS Multi-protocol Label Switching

Mpps Millions packets per seconds

NIC Network Interface Card

pps Packets per seconds

QAT Quick Assist Technology

QinQ VLAN stacking (8021ad)

RA Reference Architecture

RSC Receive Side Coalescing

RSS Receive Side Scaling

SP Service Provider

SR-IOV Single root IO Virtualization

TCO Total Cost of Ownership

TSO TCP Segmentation Offload

Intelreg ONP Server Reference ArchitectureSolutions Guide

54

NOTE This page intentionally left blank

55

Intelreg ONP Server Reference ArchitectureSolutions Guide

Appendix D References

Document Name Source

Internet Protocol version 4 httpwwwietforgrfcrfc791txt

Internet Protocol version 6 httpwwwfaqsorgrfcrfc2460txt

Intelreg 82599 10 Gigabit Ethernet Controller Datasheet httpwwwintelcomcontentwwwusenethernet-controllers82599-10-gbe-controller-datasheethtml

Intel DDIO httpswww-sslintelcomcontentwwwuseniodirect-data-i-ohtml

Bandwidth Sharing Fairness httpwwwintelcomcontentwwwusennetwork-adapters10-gigabit- network-adapters10-gbe-ethernet-flexible-port-partitioning-briefhtml

Design Considerations for efficient network applications with Intelreg multi-core processor- based systems on Linux

httpdownloadintelcomdesignintarchpapers324176pdf

OpenFlow with Intel 82599 httpftpsunetsepubLinuxdistributionsbifrostseminarsworkshop-2011-03-31Openflow_1103031pdf

Wu W DeMarP amp CrawfordM (2012) A Transport-Friendly NIC for Multicore Multiprocessor Systems

IEEE transactions on parallel and distributed systems vol 23 no 4 April 2012 httplssfnalgovarchive2010pubfermilab-pub-10-327-cdpdf

Why does Flow Director Cause Placket Reordering httparxivorgftparxivpapers110611060443pdf

IA packet processing httpwwwintelcompen_USembeddedhwswtechnologypacket- processing

High Performance Packet Processing on Cloud Platforms using Linux with Intelreg Architecture

httpnetworkbuildersintelcomdocsnetwork_builders_RA_packet_processingpdf

Packet Processing Performance of Virtualized Platforms with Linux and Intelreg Architecture

httpnetworkbuildersintelcomdocsnetwork_builders_RA_NFVpdf

DPDK httpwwwintelcomgodpdk

Intelreg DPDK Accelerated vSwitch https01orgpacket-processing

Intelreg ONP Server Reference ArchitectureSolutions Guide

56

LEGAL

By using this document in addition to any agreements you have with Intel you accept the terms set forth below

You may not use or facilitate the use of this document in connection with any infringement or other legal analysis concerning Intel products described herein You agree to grant Intel a non-exclusive royalty-free license to any patent claim thereafter drafted which includes subject matter disclosed herein

INFORMATION IN THIS DOCUMENT IS PROVIDED IN CONNECTION WITH INTEL PRODUCTS NO LICENSE EXPRESS OR IMPLIED BY ESTOPPEL OR OTHERWISE TO ANY INTELLECTUAL PROPERTY RIGHTS IS GRANTED BY THIS DOCUMENT EXCEPT AS PROVIDED IN INTELS TERMS AND CONDITIONS OF SALE FOR SUCH PRODUCTS INTEL ASSUMES NO LIABILITY WHATSOEVER AND INTEL DISCLAIMS ANY EXPRESS OR IMPLIED WARRANTY RELATING TO SALE ANDOR USE OF INTEL PRODUCTS INCLUDING LIABILITY OR WARRANTIES RELATING TO FITNESS FOR A PARTICULAR PURPOSE MERCHANTABILITY OR INFRINGEMENT OF ANY PATENT COPYRIGHT OR OTHER INTELLECTUAL PROPERTY RIGHT

Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors

Performance tests such as SYSmark and MobileMark are measured using specific computer systems components software operations and functions Any change to any of those factors may cause the results to vary You should consult other information and performance tests to assist you in fully evaluating your contemplated purchases including the performance of that product when combined with other products

The products described in this document may contain design defects or errors known as errata which may cause the product to deviate from published specifications Current characterized errata are available on request Contact your local Intel sales office or your distributor to obtain the latest specifications and before placing your product order

Intel technologies may require enabled hardware specific software or services activation Check with your system manufacturer or retailer Tests document performance of components on a particular test in specific systems Differences in hardware software or configuration will affect actual performance Consult other sources of information to evaluate performance as you consider your purchase For more complete information about performance and benchmark results visit httpwwwintelcomperformance

All products computer systems dates and figures specified are preliminary based on current expectations and are subject to change without notice Results have been estimated or simulated using internal Intel analysis or architecture simulation or modeling and provided to you for informational purposes Any differences in your system hardware software or configuration may affect your actual performance

No computer system can be absolutely secure Intel does not assume any liability for lost or stolen data or systems or any damages resulting from such losses

Intel does not control or audit third-party web sites referenced in this document You should visit the referenced web site and confirm whether referenced data are accurate

Intel Corporation may have patents or pending patent applications trademarks copyrights or other intellectual property rights that relate to the presented subject matter The furnishing of documents and other materials and information does not provide any license express or implied by estoppel or otherwise to any such patents trademarks copyrights or other intellectual property rights

copy 2014 Intel Corporation All rights reserved Intel the Intel logo Core Xeon and others are trademarks of Intel Corporation in the US andor other countries Other names and brands may be claimed as the property of others

  • Intelreg Open Network Platform Server Reference Architecture (Release 12)
    • Revision History
    • Contents
    • 10 Audience and Purpose
    • 20 Summary
      • 21 Network Services Examples
        • 211 Suricata (Next Generation IDSIPS engine)
        • 212 vBNG (Broadband Network Gateway)
            • 30 Hardware Components
            • 40 Software Versions
              • 41 Obtaining Software Ingredients
                • 50 Installation and Configuration Guide
                  • 51 Instructions Common to Compute and Controller Nodes
                    • 511 BIOS Settings
                    • 512 Operating System Installation and Configuration
                      • 5121 Getting the Fedora 20 DVD
                      • 5122 Fedora 20 Installation
                      • 5123 Additional Packages Installation and Upgrade
                      • 5124 Disable and Enable Services
                          • 52 Controller Node Setup
                            • 521 OpenStack (Juno)
                              • 5211 Network Requirements
                              • 5212 Storage Requirements
                              • 5213 OpenStack Installation Procedures
                                  • 53 Compute Node Setup
                                    • 531 Host Configuration
                                      • 5311 Using DevStack to Deploy vSwitch and OpenStack Components
                                          • 54 vIPS
                                            • 541 Network Configuration for non-vIPS Guests
                                                • 60 Testing the Setup
                                                  • 61 Preparation with OpenStack
                                                    • 611 Deploying Virtual Machines
                                                      • 6111 Default Settings
                                                      • 6112 Customer Settings
                                                      • 6113 Example mdash VM Deployment
                                                      • 6114 Local vIPS
                                                      • 6115 Remote vIPS
                                                        • 612 Non-uniform Memory Access (Numa) Placement and SR-IOV Pass-through for OpenStack
                                                          • 6121 Prepare Compute Node for SR-IOV Pass-through
                                                          • 6122 Devstack Configurations
                                                          • 6123 Create VM with Numa Placement and SR-IOV
                                                              • 62 Using OpenDaylight
                                                                • 621 Preparing the OpenDaylightController
                                                                  • 63 Border Network Gateway
                                                                    • 631 Installation and Configuration Inside the VM
                                                                    • 632 Installation and Configuration of the Back-to-Back Host (Packet Generator)
                                                                    • 633 Extra Preparations on the Compute Node
                                                                        • Appendix A Additional OpenDaylight Information
                                                                          • A1 Create VMs using DevStack Horizon GUI
                                                                            • Appendix B BNG as an Appliance
                                                                            • Appendix C Glossary
                                                                            • Appendix D References
                                                                            • LEGAL
Page 50: Intel Open Network Platform Server Reference Architecture ... · Intel® ONP Server Reference Architecture Solutions Guide 1.0 Audience and Purpose The primary audiences for this

Intelreg ONP Server Reference ArchitectureSolutions Guide

50

Once the controller is up and running connect to the OSGi console The ss command displays all of the bundles that are installed and their status Adding a string(s) filters the list of bundles List the OVSDB bundles

osgigt ss ovsFramework is launched

id State Bundle106 ACTIVE orgopendaylightovsdbnorthbound_050112 ACTIVE orgopendaylightovsdb_050262 ACTIVE orgopendaylightovsdbneutron_050

Note that there are three OVSDB bundles running (the OVSDB neutron bundle has not been removed in this case)

Disable the OVSDB neutron bundle and then list the OVSDB bundles again

osgigt stop 262osgigt ss ovsFramework is launched

id State Bundle106 ACTIVE orgopendaylightovsdbnorthbound_050112 ACTIVE orgopendaylightovsdb_050262 RESOLVED orgopendaylightovsdbneutron_050

Now the OVSDB neutron bundle is in the RESOLVED state which means that it is not active

51

Intelreg ONP Server Reference ArchitectureSolutions Guide

Appendix B BNG as an Appliance

Please download the latest BNG application from https01orgintel-data-plane-performance-demonstratorsdownloads More details about how BNG works can be found in https01orgintel-data-plane-performance-demonstratorsquick-overview

Intelreg ONP Server Reference ArchitectureSolutions Guide

52

NOTE This page intentionally left blank

53

Intelreg ONP Server Reference ArchitectureSolutions Guide

Appendix C Glossary

Acronym Description

ATR Application Targeted Routing

COTS Commercial Off‐The-Shelf

DPI Deep Packet Inspection

FCS Frame Check Sequence

GRE Generic Routing Encapsulation

GRO Generic Receive Offload

IOMMU InputOutput Memory Management Unit

Kpps Kilo packets per seconds

KVM Kernel-based Virtual Machine

LRO Large Receive Offload

MSI Message Signaling Interrupt

MPLS Multi-protocol Label Switching

Mpps Millions packets per seconds

NIC Network Interface Card

pps Packets per seconds

QAT Quick Assist Technology

QinQ VLAN stacking (8021ad)

RA Reference Architecture

RSC Receive Side Coalescing

RSS Receive Side Scaling

SP Service Provider

SR-IOV Single root IO Virtualization

TCO Total Cost of Ownership

TSO TCP Segmentation Offload

Intelreg ONP Server Reference ArchitectureSolutions Guide

54

NOTE This page intentionally left blank

55

Intelreg ONP Server Reference ArchitectureSolutions Guide

Appendix D References

Document Name Source

Internet Protocol version 4 httpwwwietforgrfcrfc791txt

Internet Protocol version 6 httpwwwfaqsorgrfcrfc2460txt

Intelreg 82599 10 Gigabit Ethernet Controller Datasheet httpwwwintelcomcontentwwwusenethernet-controllers82599-10-gbe-controller-datasheethtml

Intel DDIO httpswww-sslintelcomcontentwwwuseniodirect-data-i-ohtml

Bandwidth Sharing Fairness httpwwwintelcomcontentwwwusennetwork-adapters10-gigabit- network-adapters10-gbe-ethernet-flexible-port-partitioning-briefhtml

Design Considerations for efficient network applications with Intelreg multi-core processor- based systems on Linux

httpdownloadintelcomdesignintarchpapers324176pdf

OpenFlow with Intel 82599 httpftpsunetsepubLinuxdistributionsbifrostseminarsworkshop-2011-03-31Openflow_1103031pdf

Wu W DeMarP amp CrawfordM (2012) A Transport-Friendly NIC for Multicore Multiprocessor Systems

IEEE transactions on parallel and distributed systems vol 23 no 4 April 2012 httplssfnalgovarchive2010pubfermilab-pub-10-327-cdpdf

Why does Flow Director Cause Placket Reordering httparxivorgftparxivpapers110611060443pdf

IA packet processing httpwwwintelcompen_USembeddedhwswtechnologypacket- processing

High Performance Packet Processing on Cloud Platforms using Linux with Intelreg Architecture

httpnetworkbuildersintelcomdocsnetwork_builders_RA_packet_processingpdf

Packet Processing Performance of Virtualized Platforms with Linux and Intelreg Architecture

httpnetworkbuildersintelcomdocsnetwork_builders_RA_NFVpdf

DPDK httpwwwintelcomgodpdk

Intelreg DPDK Accelerated vSwitch https01orgpacket-processing

Intelreg ONP Server Reference ArchitectureSolutions Guide

56

LEGAL

By using this document in addition to any agreements you have with Intel you accept the terms set forth below

You may not use or facilitate the use of this document in connection with any infringement or other legal analysis concerning Intel products described herein You agree to grant Intel a non-exclusive royalty-free license to any patent claim thereafter drafted which includes subject matter disclosed herein

INFORMATION IN THIS DOCUMENT IS PROVIDED IN CONNECTION WITH INTEL PRODUCTS NO LICENSE EXPRESS OR IMPLIED BY ESTOPPEL OR OTHERWISE TO ANY INTELLECTUAL PROPERTY RIGHTS IS GRANTED BY THIS DOCUMENT EXCEPT AS PROVIDED IN INTELS TERMS AND CONDITIONS OF SALE FOR SUCH PRODUCTS INTEL ASSUMES NO LIABILITY WHATSOEVER AND INTEL DISCLAIMS ANY EXPRESS OR IMPLIED WARRANTY RELATING TO SALE ANDOR USE OF INTEL PRODUCTS INCLUDING LIABILITY OR WARRANTIES RELATING TO FITNESS FOR A PARTICULAR PURPOSE MERCHANTABILITY OR INFRINGEMENT OF ANY PATENT COPYRIGHT OR OTHER INTELLECTUAL PROPERTY RIGHT

Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors

Performance tests such as SYSmark and MobileMark are measured using specific computer systems components software operations and functions Any change to any of those factors may cause the results to vary You should consult other information and performance tests to assist you in fully evaluating your contemplated purchases including the performance of that product when combined with other products

The products described in this document may contain design defects or errors known as errata which may cause the product to deviate from published specifications Current characterized errata are available on request Contact your local Intel sales office or your distributor to obtain the latest specifications and before placing your product order

Intel technologies may require enabled hardware specific software or services activation Check with your system manufacturer or retailer Tests document performance of components on a particular test in specific systems Differences in hardware software or configuration will affect actual performance Consult other sources of information to evaluate performance as you consider your purchase For more complete information about performance and benchmark results visit httpwwwintelcomperformance

All products computer systems dates and figures specified are preliminary based on current expectations and are subject to change without notice Results have been estimated or simulated using internal Intel analysis or architecture simulation or modeling and provided to you for informational purposes Any differences in your system hardware software or configuration may affect your actual performance

No computer system can be absolutely secure Intel does not assume any liability for lost or stolen data or systems or any damages resulting from such losses

Intel does not control or audit third-party web sites referenced in this document You should visit the referenced web site and confirm whether referenced data are accurate

Intel Corporation may have patents or pending patent applications trademarks copyrights or other intellectual property rights that relate to the presented subject matter The furnishing of documents and other materials and information does not provide any license express or implied by estoppel or otherwise to any such patents trademarks copyrights or other intellectual property rights

copy 2014 Intel Corporation All rights reserved Intel the Intel logo Core Xeon and others are trademarks of Intel Corporation in the US andor other countries Other names and brands may be claimed as the property of others

  • Intelreg Open Network Platform Server Reference Architecture (Release 12)
    • Revision History
    • Contents
    • 10 Audience and Purpose
    • 20 Summary
      • 21 Network Services Examples
        • 211 Suricata (Next Generation IDSIPS engine)
        • 212 vBNG (Broadband Network Gateway)
            • 30 Hardware Components
            • 40 Software Versions
              • 41 Obtaining Software Ingredients
                • 50 Installation and Configuration Guide
                  • 51 Instructions Common to Compute and Controller Nodes
                    • 511 BIOS Settings
                    • 512 Operating System Installation and Configuration
                      • 5121 Getting the Fedora 20 DVD
                      • 5122 Fedora 20 Installation
                      • 5123 Additional Packages Installation and Upgrade
                      • 5124 Disable and Enable Services
                          • 52 Controller Node Setup
                            • 521 OpenStack (Juno)
                              • 5211 Network Requirements
                              • 5212 Storage Requirements
                              • 5213 OpenStack Installation Procedures
                                  • 53 Compute Node Setup
                                    • 531 Host Configuration
                                      • 5311 Using DevStack to Deploy vSwitch and OpenStack Components
                                          • 54 vIPS
                                            • 541 Network Configuration for non-vIPS Guests
                                                • 60 Testing the Setup
                                                  • 61 Preparation with OpenStack
                                                    • 611 Deploying Virtual Machines
                                                      • 6111 Default Settings
                                                      • 6112 Customer Settings
                                                      • 6113 Example mdash VM Deployment
                                                      • 6114 Local vIPS
                                                      • 6115 Remote vIPS
                                                        • 612 Non-uniform Memory Access (Numa) Placement and SR-IOV Pass-through for OpenStack
                                                          • 6121 Prepare Compute Node for SR-IOV Pass-through
                                                          • 6122 Devstack Configurations
                                                          • 6123 Create VM with Numa Placement and SR-IOV
                                                              • 62 Using OpenDaylight
                                                                • 621 Preparing the OpenDaylightController
                                                                  • 63 Border Network Gateway
                                                                    • 631 Installation and Configuration Inside the VM
                                                                    • 632 Installation and Configuration of the Back-to-Back Host (Packet Generator)
                                                                    • 633 Extra Preparations on the Compute Node
                                                                        • Appendix A Additional OpenDaylight Information
                                                                          • A1 Create VMs using DevStack Horizon GUI
                                                                            • Appendix B BNG as an Appliance
                                                                            • Appendix C Glossary
                                                                            • Appendix D References
                                                                            • LEGAL
Page 51: Intel Open Network Platform Server Reference Architecture ... · Intel® ONP Server Reference Architecture Solutions Guide 1.0 Audience and Purpose The primary audiences for this

51

Intelreg ONP Server Reference ArchitectureSolutions Guide

Appendix B BNG as an Appliance

Please download the latest BNG application from https01orgintel-data-plane-performance-demonstratorsdownloads More details about how BNG works can be found in https01orgintel-data-plane-performance-demonstratorsquick-overview

Intelreg ONP Server Reference ArchitectureSolutions Guide

52

NOTE This page intentionally left blank

53

Intelreg ONP Server Reference ArchitectureSolutions Guide

Appendix C Glossary

Acronym Description

ATR Application Targeted Routing

COTS Commercial Off‐The-Shelf

DPI Deep Packet Inspection

FCS Frame Check Sequence

GRE Generic Routing Encapsulation

GRO Generic Receive Offload

IOMMU InputOutput Memory Management Unit

Kpps Kilo packets per seconds

KVM Kernel-based Virtual Machine

LRO Large Receive Offload

MSI Message Signaling Interrupt

MPLS Multi-protocol Label Switching

Mpps Millions packets per seconds

NIC Network Interface Card

pps Packets per seconds

QAT Intel® QuickAssist Technology

QinQ VLAN stacking (802.1ad)

RA Reference Architecture

RSC Receive Side Coalescing

RSS Receive Side Scaling

SP Service Provider

SR-IOV Single Root I/O Virtualization

TCO Total Cost of Ownership

TSO TCP Segmentation Offload

Appendix D References

Document Name Source

Internet Protocol version 4 http://www.ietf.org/rfc/rfc791.txt

Internet Protocol version 6 http://www.faqs.org/rfc/rfc2460.txt

Intel® 82599 10 Gigabit Ethernet Controller Datasheet http://www.intel.com/content/www/us/en/ethernet-controllers/82599-10-gbe-controller-datasheet.html

Intel DDIO https://www-ssl.intel.com/content/www/us/en/io/direct-data-i-o.html

Bandwidth Sharing Fairness http://www.intel.com/content/www/us/en/network-adapters/10-gigabit-network-adapters/10-gbe-ethernet-flexible-port-partitioning-brief.html

Design Considerations for Efficient Network Applications with Intel® Multi-core Processor-based Systems on Linux http://download.intel.com/design/intarch/papers/324176.pdf

OpenFlow with Intel 82599 http://ftp.sunet.se/pub/Linux/distributions/bifrost/seminars/workshop-2011-03-31/Openflow_1103031.pdf

Wu, W., DeMar, P., & Crawford, M. (2012). A Transport-Friendly NIC for Multicore/Multiprocessor Systems. IEEE Transactions on Parallel and Distributed Systems, vol. 23, no. 4, April 2012. http://lss.fnal.gov/archive/2010/pub/fermilab-pub-10-327-cd.pdf

Why Does Flow Director Cause Packet Reordering? http://arxiv.org/ftp/arxiv/papers/1106/1106.0443.pdf

IA Packet Processing http://www.intel.com/p/en_US/embedded/hwsw/technology/packet-processing

High Performance Packet Processing on Cloud Platforms using Linux with Intel® Architecture http://networkbuilders.intel.com/docs/network_builders_RA_packet_processing.pdf

Packet Processing Performance of Virtualized Platforms with Linux and Intel® Architecture http://networkbuilders.intel.com/docs/network_builders_RA_NFV.pdf

DPDK http://www.intel.com/go/dpdk

Intel® DPDK Accelerated vSwitch https://01.org/packet-processing

LEGAL

By using this document, in addition to any agreements you have with Intel, you accept the terms set forth below.

You may not use or facilitate the use of this document in connection with any infringement or other legal analysis concerning Intel products described herein. You agree to grant Intel a non-exclusive, royalty-free license to any patent claim thereafter drafted which includes subject matter disclosed herein.

INFORMATION IN THIS DOCUMENT IS PROVIDED IN CONNECTION WITH INTEL PRODUCTS. NO LICENSE, EXPRESS OR IMPLIED, BY ESTOPPEL OR OTHERWISE, TO ANY INTELLECTUAL PROPERTY RIGHTS IS GRANTED BY THIS DOCUMENT. EXCEPT AS PROVIDED IN INTEL'S TERMS AND CONDITIONS OF SALE FOR SUCH PRODUCTS, INTEL ASSUMES NO LIABILITY WHATSOEVER, AND INTEL DISCLAIMS ANY EXPRESS OR IMPLIED WARRANTY RELATING TO SALE AND/OR USE OF INTEL PRODUCTS, INCLUDING LIABILITY OR WARRANTIES RELATING TO FITNESS FOR A PARTICULAR PURPOSE, MERCHANTABILITY, OR INFRINGEMENT OF ANY PATENT, COPYRIGHT, OR OTHER INTELLECTUAL PROPERTY RIGHT.

Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors.

Performance tests, such as SYSmark and MobileMark, are measured using specific computer systems, components, software, operations, and functions. Any change to any of those factors may cause the results to vary. You should consult other information and performance tests to assist you in fully evaluating your contemplated purchases, including the performance of that product when combined with other products.

The products described in this document may contain design defects or errors known as errata, which may cause the product to deviate from published specifications. Current characterized errata are available on request. Contact your local Intel sales office or your distributor to obtain the latest specifications and before placing your product order.

Intel technologies may require enabled hardware, specific software, or services activation. Check with your system manufacturer or retailer. Tests document performance of components on a particular test, in specific systems. Differences in hardware, software, or configuration will affect actual performance. Consult other sources of information to evaluate performance as you consider your purchase. For more complete information about performance and benchmark results, visit http://www.intel.com/performance.

All products, computer systems, dates, and figures specified are preliminary based on current expectations and are subject to change without notice. Results have been estimated or simulated using internal Intel analysis or architecture simulation or modeling, and provided to you for informational purposes. Any differences in your system hardware, software, or configuration may affect your actual performance.

No computer system can be absolutely secure. Intel does not assume any liability for lost or stolen data or systems or any damages resulting from such losses.

Intel does not control or audit third-party web sites referenced in this document. You should visit the referenced web site and confirm whether referenced data are accurate.

Intel Corporation may have patents or pending patent applications, trademarks, copyrights, or other intellectual property rights that relate to the presented subject matter. The furnishing of documents and other materials and information does not provide any license, express or implied, by estoppel or otherwise, to any such patents, trademarks, copyrights, or other intellectual property rights.

© 2014 Intel Corporation. All rights reserved. Intel, the Intel logo, Core, Xeon, and others are trademarks of Intel Corporation in the U.S. and/or other countries. Other names and brands may be claimed as the property of others.
