Dell EMC Ready Solution for VMware vCloud NFV 3.0 OpenStack Edition Platform
Deployment Manual Architecture Guide for VMware NFV 3.0 with VMware Integrated OpenStack 5.0 with Kubernetes
Dell Engineering February 2019
2 Dell EMC Ready Solution for VMware vCloud NFV 3.0 OpenStack Edition Platform
Revisions
Date Description
October 2018 Initial release
February 2019 Updated
The information in this publication is provided “as is.” Dell Inc. makes no representations or warranties of any kind with respect to the information in this
publication, and specifically disclaims implied warranties of merchantability or fitness for a particular purpose.
Use, copying, and distribution of any software described in this publication requires an applicable software license.
Copyright © 2018 Dell Inc. or its subsidiaries. All Rights Reserved. Dell, EMC, and other trademarks are trademarks of Dell Inc. or its subsidiaries. Other
trademarks may be the property of their respective owners. Published in the USA.
Dell believes the information in this document is accurate as of its publication date. The information is subject to change without notice.
Table of contents
Revisions............................................................................................................................................................................. 2
Overview ............................................................................................................................................................................. 4
1 Hardware support ......................................................................................................................................................... 5
1.1 Servers ............................................................................................................................................................... 5
1.1.1 Dell EMC PowerEdge R640 / R740 servers ....................................................................................................... 5
1.1.2 Dell EMC PowerEdge R740xd servers .............................................................................................................. 5
1.2 iDRAC ................................................................................................................................................................. 5
2 Solution architecture ..................................................................................................................................................... 6
2.1 Architecture design ............................................................................................................................................. 6
3 Two-pod configuration .................................................................................................................................................. 7
3.1 Management pod ................................................................................................................................................ 7
3.1.1 Components ....................................................................................................................................................... 8
3.2 Edge and resource pod .................................................................................................................................... 10
3.2.1 Components ..................................................................................................................................................... 11
4 Solution bundle network, virtual network design, and topology ................................................................................. 12
4.1 Solution bundle physical network design and topology .................................................................................... 12
4.2 Solution bundle virtual network design and topology ....................................................................................... 14
5 Additional solution bundle features ............................................................................................................................ 17
5.1 Infrastructure monitoring using vROps and vRLI ............................................................................................. 17
5.1.1 vRealize Operations Manager .......................................................................................................................... 17
5.1.2 vRealize Log Insight ......................................................................................................................................... 17
5.2 vSphere HA ...................................................................................................................................................... 18
5.3 DRS and anti-affinity rules ................................................................................................................................ 18
5.4 EVC .................................................................................................................................................................. 18
5.5 vSAN performance service ............................................................................................................................... 18
A Reference Documents ................................................................................................................................................ 19
Overview
This architecture guide describes the design and creation of a greenfield Network Functions Virtualization (NFV) environment using VMware vCloud NFV OpenStack Edition 3.0 with NSX-T and Dell EMC PowerEdge servers. The current edition of VMware vCloud NFV OpenStack Edition provides:
- An NFV Infrastructure (NFVI) with VMware Integrated OpenStack serving as the Virtualized Infrastructure Manager (VIM)
- VMware Integrated OpenStack with open Application Programming Interfaces (APIs) on top of the VMware vCloud NFVI, creating a platform that supports Communication Service Providers (CSPs) in modernizing their networks and transforming their business
1 Hardware support
1.1 Servers
The following servers are supported in this solution:
- Dell EMC PowerEdge R640 and R740
- Dell EMC PowerEdge R740xd
1.1.1 Dell EMC PowerEdge R640 / R740 servers
The Dell EMC PowerEdge R640 and R740 are dense, two-socket rack servers (1U and 2U, respectively) that are ideal for cloud computing. With the R640 / R740 servers, you can create an NVMe cache pool and combine it with 2.5-inch or 3.5-inch drives for data storage.
1.1.2 Dell EMC PowerEdge R740xd servers
The Dell EMC PowerEdge R740xd servers balance storage scalability and performance. The R740xd can combine any type of drive to create the best possible SSD or HDD configuration for performance, capacity, or both.
This configuration uses the following switches:
- Dell EMC Networking S4048-ON: one switch serving as the top-of-rack (ToR) switch
- Dell EMC Networking Z9100-ON or S6010-ON: a pair of either model serving as the leaf switches
1.2 iDRAC
The integrated Dell Remote Access Controller (iDRAC) is an intelligent controller that improves the overall availability of Dell systems. It benefits System Administrators by:
- Alerting them to system issues
- Enabling remote system management
- Reducing the need for physical access to the system
Note: Dell EMC recommends using iDRAC9 for remote server administration.
2 Solution architecture
2.1 Architecture design
Architecture design
The term pod is used in this document to group hosts by role and streamline NFV environment operations. vCloud NFV uses two pods:
- Management pod
- Edge and resource pod
This arrangement is referred to as a two-pod configuration.
Management pod: This pod manages the NFVI, the Virtual Network Functions (VNFs), and their components. The management pod hosts:
- vCenter Server Appliance
- NSX Manager
- NSX-Controller
- VMware Integrated OpenStack
- AD/DNS
Analytics components such as vRealize Operations Manager and vRealize Log Insight are also
deployed in the management pod.
Edge and resource pod: This pod provides logical networking between VNFs and external networks. It hosts the NSX Edge as a virtual machine (VM) and manages all connectivity to the physical domain within the architecture. The VNFs and VNF Managers (VNFMs) are placed in this pod.
3 Two-pod configuration
This architecture uses a two-pod configuration. A two-pod configuration has two clusters:
- Management pod cluster
- Edge and resource pod cluster
Clusters are the vSphere objects used to access virtual-domain resources and manage resource allocation. As requirements grow, clusters can scale up by adding ESXi hosts, and pods can scale up by adding new clusters within the existing pods.
During the initial deployment, Dell EMC recommends:
- Four Dell EMC PowerEdge R640 or R740 servers in the management pod
- Four Dell EMC PowerEdge R740xd servers in the edge and resource pod, extendable to up to 64 servers
3.1 Management pod
The management pod hosts and manages all of the NFV management components.
Management pod
3.1.1 Components
3.1.1.1 VMware vCenter Server
In this configuration, two VMware vCenter Servers are deployed:
- One vCenter Server that manages the management pod
- One vCenter Server that manages the edge and resource pod
Both vCenter Servers are deployed within the management pod and are implemented as a cluster.
In this configuration, the vCenter with Platform Services Controller (PSC) is deployed and has the PSC
configured with common infrastructure security services such as:
- VMware vCenter Single Sign-On
- VMware Certificate Authority
- Licensing
- Service registration
- Certificate management services
vCenter Server Appliances are deployed with an embedded PSC and related services. In this configuration, vCenter HA ensures the availability of the vCenter Server Appliance. The vCenter HA cluster consists of the following nodes:
- Active node: serves client requests
- Passive node: acts as a backup if the Active node fails
- Quorum node: serves as a witness
vCenter High Availability (vCHA) replicates the vCenter Server Appliance data between the nodes over a dedicated network, ensuring that the data is continuously synchronized and up to date. Anti-affinity rules place all of the nodes on different hosts, protecting against host failures.
VMware vCenter Server
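The anti-affinity placement described above can be expressed as a quick pre-check. This is an illustrative sketch with hypothetical node and host names, not a vSphere API call:

```python
# Sketch: verify that the Active, Passive, and Witness nodes of a
# vCenter HA deployment land on distinct ESXi hosts, as required by
# the anti-affinity rules described above. Names are illustrative.

def vcha_placement_ok(placement: dict) -> bool:
    """placement maps vCHA node role -> ESXi host name."""
    required = {"active", "passive", "witness"}
    if set(placement) != required:
        return False                      # all three roles must be placed
    hosts = list(placement.values())
    return len(set(hosts)) == len(hosts)  # every node on its own host

# A valid layout: three nodes spread across three hosts
good = {"active": "esxi-01", "passive": "esxi-02", "witness": "esxi-03"}
# An invalid layout: the Passive and Witness nodes share a host
bad = {"active": "esxi-01", "passive": "esxi-02", "witness": "esxi-02"}
```

A check like this mirrors what the anti-affinity rule enforces: losing any single host takes down at most one vCHA node.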
3.1.1.2 NSX-T Manager
NSX-T Manager acts as the management plane for the NSX-T system. It creates, configures, and monitors NSX-T components such as:
- Logical switches
- NSX Edge nodes
NSX-T Manager is integrated with the management vCenter Server. It is deployed in the management pod as a single node, leveraging vSphere HA for high availability. NSX-T Manager uses a common management network to communicate with the NSX-T Controllers and the edge clusters.
NSX-T Manager
3.1.1.3 NSX-T Controller
The NSX-T Controller is a distributed state-management system that regulates virtual networks and overlay transport tunnels. In this configuration, three NSX-T Controllers are deployed in the management pod to achieve high availability. Anti-affinity rules ensure that all of the controller instances run on different hosts, protecting against host failures.
3.1.1.4 VMware Integrated OpenStack Manager
The VMware Integrated OpenStack Manager runs in the management pod and is connected to the vCenter Server instance. It uses a VM template to deploy, and then manage the operations of, the VMware Integrated OpenStack control-plane components deployed in the management pod.
The deployed VMware Integrated OpenStack Manager connects to the following:
- The vCenter Server, to manage the workloads
- The NSX Manager, for tenant networking
VMware Integrated OpenStack is integrated with NSX-T and provides tenants with enhanced features and capabilities to manage VNF networking requirements through the Horizon interface and APIs. Tenants are provided with NSX-T logical switches for east-west connectivity between VNF components, and with the ability to deploy NSX-T Edge nodes for north-south traffic. The latter is required when connecting tenants to external networks.
VMware Integrated OpenStack with Kubernetes uses a common infrastructure management layer to deliver both VMs and containers. It builds highly available Kubernetes clusters that support scalability and multi-tenancy. The HA Kubernetes cluster consists of:
- Load-balanced master nodes
- Replicated API servers
- Clustered services
To meet variable capacity demands, scale the worker nodes in the Kubernetes clusters up or down.
VMware Integrated OpenStack
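The scaling decision above amounts to simple capacity arithmetic. The following is an illustrative helper, not a VMware Integrated OpenStack API; the pods-per-worker figure is an assumed sizing input:

```python
import math

# Sketch: decide how many Kubernetes worker nodes are needed for the
# current demand. The pods-per-worker capacity is an assumed figure;
# substitute the value sized for your own deployment.

def workers_needed(demand_pods: int, pods_per_worker: int,
                   min_workers: int = 1, max_workers: int = 64) -> int:
    """Return the worker count to scale to, clamped to cluster limits."""
    needed = math.ceil(demand_pods / pods_per_worker)
    return max(min_workers, min(needed, max_workers))

# e.g. 250 pods at an assumed 30 pods per worker requires 9 workers
```

Clamping to a minimum and maximum keeps the scale action within the cluster's supported size range.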
3.1.1.5 VMware vRealize Log Insight
VMware vRealize Log Insight is deployed in a single-cluster configuration consisting of three nodes, to leverage the Log Insight Integrated Load Balancer (ILB). Each log message exists only once within the cluster, in a single location. If a node becomes temporarily unavailable, the cluster remains up and available to accept data and answer queries.
Data is collected using the syslog protocol or an API. The NSX Manager syslog information, distributed firewall logs, and NSX Edge syslog information are sent to vRealize Log Insight. In this configuration, anti-affinity rules keep the nodes on separate hosts.
3.1.1.6 VMware vRealize Operations Manager
In this configuration, VMware vRealize Operations Manager is installed in the management pod and supports high availability. To protect the management functions of vRealize Operations Manager, HA creates a replica of the master node. Enabling HA requires deploying at least one data node in addition to the master node. In this configuration, anti-affinity rules keep the nodes on separate hosts.
3.1.1.7 Big Cloud Fabric
Big Cloud Fabric (BCF) is a single solution that manages all of the leaf and ToR switches used in this configuration. The BCF Controller has a user-friendly, web-based GUI and a familiar CLI. Users can also perform custom orchestration through the industry-standard REST API.
For fault analysis, BCF supports traditional debugging tools, such as ping, traceroute, and show commands, and can redirect packets using port mirroring. The BCF Controller also provides unique troubleshooting tools, such as Fabric Test Path and Fabric Analytics, to quickly isolate, identify, and resolve forwarding and application faults.
3.2 Edge and resource pod
The edge and resource pod provides the virtualized runtime environment (compute, network, and storage) for workloads. It also hosts the NSX Edge nodes, which participate in east-west connectivity and provide connectivity to the physical infrastructure for north-south traffic management. Edge nodes can be deployed in a VM form factor only.
Edge and resource pod
3.2.1 Components
3.2.1.1 Edge nodes
The edge node connects the virtual domain to the physical infrastructure through physical NICs, providing both connectivity and network functionality. Edge nodes are deployed as VMs and leverage the Data Plane Development Kit (DPDK) for faster packet processing and high performance.
3.2.1.2 Edge cluster
Edge nodes are deployed as a cluster pool dedicated to running network services; they cannot be distributed across the hypervisors. This provides scale-out, redundancy, and high throughput for the gateway functionality of logical networks. Each edge node can host only one Tier-0 logical router (LR), but can host multiple Tier-1 LRs. A maximum of eight edge nodes can be grouped together in an edge cluster.
Edge cluster
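The sizing rules above (at most eight edge nodes per cluster, at most one Tier-0 LR per edge node) can be captured in a small validation helper. This is an illustrative sketch, not an NSX-T API; the plan structure and names are hypothetical:

```python
# Sketch: validate an edge cluster plan against the limits described
# above: at most eight edge nodes per cluster, and at most one Tier-0
# logical router per edge node (Tier-1 routers are unrestricted).

MAX_EDGE_NODES = 8

def edge_cluster_ok(plan: dict) -> bool:
    """plan maps edge node name -> list of logical routers ('t0'/'t1')."""
    if len(plan) > MAX_EDGE_NODES:
        return False
    return all(routers.count("t0") <= 1 for routers in plan.values())

# Valid: one Tier-0 plus several Tier-1 LRs per node
ok_plan = {"edge-01": ["t0", "t1", "t1"], "edge-02": ["t1"]}
# Invalid: two Tier-0 LRs on the same edge node
bad_plan = {"edge-01": ["t0", "t0"]}
```

Checking a plan like this before deployment avoids hitting the cluster limits at provisioning time.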
4 Solution bundle network, virtual network design, and
topology
4.1 Solution bundle physical network design and topology
Figure 8 shows the network topology used in this guide for the deployment server:
- ToR switch: top-of-rack S4048T-ON
- Leaf switches: Z9100-ON or S6010-ON
- ToR switch 10G interface for BCF (p-switch management network): vmnic0
- ToR switch 10G interface for the ESXi management network: vmnic1
- ToR switch 10G interface for the OOB management network: vmnic2
- ToR switch 10G interface for the customer management private network (stamp): vmnic3
- In-band 1, 10G interface: vmnic4
- In-band 2, 10G interface: vmnic5
- In-band 3, 10G interface: vmnic6
- In-band 4, 10G interface: vmnic7
- Leaf1 switch 10G/25G interface for the physical switch network
- Leaf2 switch 10G/25G interface for the physical switch network
Network connectivity configuration table of the deployment server

                     LOM/NDC ports                      NIC Slot 1        NIC Slot 2
Port number          1        2        3        4       1       2         1       2
VMNIC reference      vmnic0   vmnic1   vmnic2   vmnic3  vmnic4  vmnic5    vmnic6  vmnic7
R640/R740 servers    10G      10G      10G      10G     10G     10G       10G     10G
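For automation, the NIC assignments above can be captured as a lookup table. This is an illustrative sketch; the data structure is an assumption for scripting purposes, not part of the solution software:

```python
# Sketch: the deployment-server NIC assignments listed above, captured
# as a lookup table that automation or validation scripts can use.

VMNIC_MAP = {
    "vmnic0": "BCF p-switch management network (ToR, 10G)",
    "vmnic1": "ESXi management network (ToR, 10G)",
    "vmnic2": "OOB management network (ToR, 10G)",
    "vmnic3": "Customer management private network (ToR, 10G)",
    "vmnic4": "In-band 1 (10G)",
    "vmnic5": "In-band 2 (10G)",
    "vmnic6": "In-band 3 (10G)",
    "vmnic7": "In-band 4 (10G)",
}

def network_for(vmnic: str) -> str:
    """Look up the network role wired to a given vmnic."""
    return VMNIC_MAP[vmnic]
```

Keeping the cabling plan in one table makes it easy to cross-check host configurations against the intended topology.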
Deployment server network connections
Figure 9 depicts the Dell EMC PowerEdge 14G node network topology.
Note: Big Cloud Fabric (BCF) sits on top of the leaf switches, running SWL-OS-BCF-4.7.0(0).
Dell EMC PowerEdge 14G nodes network topology
4.2 Solution bundle virtual network design and topology
The vCloud NFV platform consists of two network types:
- Infrastructure network
- Virtual machine network
Infrastructure networks are host-level networks used to connect hypervisors with the physical networks. Each
ESXi host has multiple port groups configured on each infrastructure network.
The VMware vSphere Distributed Switch (VDS) is configured on the hosts in each pod, providing a consistent network configuration across multiple hosts. One VDS manages the infrastructure networks and another manages the VM networks. Additionally, an N-VDS is used as the transport for telco workload traffic.
The ESXi hypervisor uses the infrastructure network for Edge overlay, vMotion, and vSAN traffic. The VMs use the VM network to communicate with each other. In this configuration, two distributed switches (one for the infrastructure network, the other for the VM network) create this separation.
Each distributed switch has a separate uplink connection to the physical data center network, separating uplink traffic from other network traffic. The uplinks are mapped to a pair of physical NICs on each ESXi host for best performance and resiliency.
NSX-T creates VLAN-backed logical switches on which the VNF components and VMs run. On the ESXi hosts, physical NICs act as uplinks connecting the host virtual switches to the physical ToR switch.
The following infrastructure networks are used in the pods:
- ESXi management network: for ESXi host management traffic
- Edge overlay network: for edge overlay Tunnel End Point (TEP) connectivity
- vMotion network: for VMware vSphere vMotion traffic
- vSAN network: for vSAN shared-storage traffic
Management pod networking consists of the infrastructure and VM networks previously described. The edge and resource pod virtual network depends on the network topology that the VNF workloads require. In general, the edge and resource pod has the infrastructure networks, networks for management and control-plane connectivity, and networks for the workloads.
Solution bundle virtual network design and topology
5 Additional solution bundle features
5.1 Infrastructure monitoring using vROps and vRLI
5.1.1 vRealize Operations Manager
The virtual infrastructure relies on the vRealize Operations Manager monitoring solution to collect health, capacity, availability, and performance data. It provides a robust, integrated monitoring platform that sits at the center of the NFV environment.
vRealize Operations Manager is deployed in the management pod to monitor and adjust host and storage needs. As metrics and objects accumulate over time, additional storage and compute capacity is required. To address these growing requirements, add more hosts to the management cluster or add more storage.
The vRealize Operations component design is based on centralized management collection, with optional remote collectors for a distributed topology. vRealize Operations Manager supports HA across its components. To protect the management functions of vRealize Operations Manager, HA creates a replica of the master node.
In smaller deployments, the master node can also act as a data node. In larger deployments, data nodes host adapters and are responsible for collecting data and scaling to meet additional capacity needs. Enabling HA requires deploying at least one data node in addition to the master node. In this configuration, anti-affinity rules keep the nodes on separate hosts.
5.1.2 vRealize Log Insight
VMware vRealize Log Insight is used to:
- Collect log data from the ESXi hosts
- Connect to the vCenter Servers to collect data on server events, tasks, and alarms
vRealize Log Insight works with vRealize Operations Manager to send event notifications. Because vRealize Log Insight collects unstructured log data in real time, elements of the NFV environment can be configured to send their log data to it, giving the NFV environment a single log collector.
VMware vRealize Log Insight is deployed in a single-cluster configuration of three nodes that leverage the Log Insight ILB. Each log message exists only once within the cluster, in a single location. If a node becomes temporarily unavailable, the cluster remains up and available to collect data and answer queries.
The data is collected using the syslog protocol or an API. The NSX Manager syslog information, distributed firewall logs, and NSX Edge syslog information are sent to vRealize Log Insight. In this configuration, anti-affinity rules keep the nodes on separate hosts.
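Because log data arrives over the syslog protocol, each message carries a PRI field computed as (facility × 8) + severity, per the syslog specification (RFC 5424). A minimal sketch of that calculation, with illustrative host and message values:

```python
# Sketch: the PRI value that prefixes every syslog message sent to a
# collector such as vRealize Log Insight is (facility * 8) + severity,
# per RFC 5424. Facility 16 (local0) at severity 6 (informational)
# therefore yields PRI 134.

def syslog_pri(facility: int, severity: int) -> int:
    if not (0 <= facility <= 23 and 0 <= severity <= 7):
        raise ValueError("facility must be 0-23, severity 0-7")
    return facility * 8 + severity

def format_message(facility: int, severity: int, host: str, text: str) -> str:
    """Minimal BSD-style syslog line, e.g. '<134>esxi-01 ...'."""
    return f"<{syslog_pri(facility, severity)}>{host} {text}"
```

Knowing the PRI arithmetic is useful when building filters or alerts on the collected logs.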
5.2 vSphere HA
Redundancy with vSphere uses VM-level replicas in conjunction with the VNF HA architecture. A redundant pair for the VNF (and its VNF-C dependencies) is managed by vSphere HA, which instantiates a new VM in the event of a host failure. vSphere HA can be fully automated, recovering from failures without human intervention.
5.3 DRS and anti-affinity rules
A VMware Distributed Resource Scheduler (DRS) cluster is a cluster of ESXi hosts with associated VMs that share resources and a management interface. You must create a DRS cluster before you can take advantage of cluster-level resource management. When a host is added to a DRS cluster, the host's resources become part of the cluster's resources. In addition, a DRS cluster supports cluster-wide resource pools and enforces cluster-level resource allocation policies.
vSphere VM-VM affinity rules specify whether selected individual VMs should run on the same host or be kept on separate hosts. Set the following:
- A DRS affinity rule to keep related VMs on the same host
- A DRS anti-affinity rule to require that each VM runs on its own host
- A DRS anti-affinity rule to achieve host-level HA: when the same job runs on different hosts and one host goes down, the remaining nodes are still running and working
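The effect of an anti-affinity rule can be sketched as a toy placement routine in which each VM of the protected group gets its own host. This illustrates the policy only; it is not the DRS placement algorithm, and the VM and host names are hypothetical:

```python
# Sketch: a toy placement routine mimicking what a DRS anti-affinity
# rule enforces: each VM in the rule's group runs on its own host.

def place_anti_affinity(vms: list, hosts: list) -> dict:
    """Assign each VM to a distinct host; fail if hosts run out."""
    if len(vms) > len(hosts):
        raise ValueError("anti-affinity needs at least one host per VM")
    return dict(zip(vms, hosts))

# Three protected nodes spread across a four-host cluster
placement = place_anti_affinity(["vrli-1", "vrli-2", "vrli-3"],
                                ["esxi-01", "esxi-02", "esxi-03", "esxi-04"])
# Every VM lands on a different host, so losing one host takes down
# at most one node of the protected group.
```

This is exactly why the vCHA, NSX-T Controller, and vRealize components described earlier each pair their multi-node deployments with anti-affinity rules.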
5.4 EVC
Enhanced vMotion Compatibility (EVC) smooths over vMotion compatibility issues across CPU generations. It automatically configures server CPUs with the Intel Flex Migration or AMD-V Extended Migration technologies to make them compatible with older servers.
Once EVC is enabled in the vCenter Server inventory, all hosts in the cluster are configured to present identical CPU features, ensuring CPU compatibility for vMotion. The predefined EVC baseline determines the features presented by each host. vCenter Server does not allow hosts that cannot be automatically configured to be compatible with the EVC baseline to join the cluster.
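Conceptually, an EVC baseline exposes only the CPU features that every host in the cluster can present, i.e. the intersection of the hosts' feature sets. The sketch below illustrates that idea with hypothetical feature flags; in practice the baseline is selected from predefined EVC modes rather than computed this way:

```python
# Sketch: the common CPU feature set across a mixed-generation cluster
# is the intersection of each host's features. Feature names are
# illustrative, not an actual EVC mode definition.

def evc_baseline(host_features: dict) -> set:
    """host_features maps host name -> set of CPU feature flags."""
    sets = list(host_features.values())
    return set.intersection(*sets) if sets else set()

hosts = {
    "esxi-01": {"sse4_2", "aes", "avx", "avx2"},
    "esxi-02": {"sse4_2", "aes", "avx"},  # older CPU generation
}
# The common baseline drops avx2, so vMotion works across both
# generations at the cost of hiding the newer feature.
```

This is the trade-off EVC makes: masking newer instructions on newer hosts in exchange for live-migration compatibility with older ones.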
5.5 vSAN performance service
The vSAN performance service monitors the performance of the vSAN cluster, hosts, disks, and VMs. When a vSAN cluster is created in vSphere, the performance service is disabled; it can be enabled and configured later. To collect statistical data, vSAN uses a statistics database object, which is a namespace object within the vSAN cluster datastore. To enable the vSAN performance service, all hosts in the cluster must be running ESXi 6.5 or later, and the cluster must be correctly configured with no unresolved health problems.
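The prerequisites above can be pre-checked before enabling the service. This is an illustrative sketch with hypothetical host data, not a vSAN API call:

```python
# Sketch: pre-check the prerequisites named above before enabling the
# vSAN performance service: every host on ESXi 6.5 or later, and no
# unresolved health problems in the cluster.

MIN_VERSION = (6, 5)

def perf_service_ready(host_versions: dict, health_issues: list) -> bool:
    """host_versions maps host name -> ESXi version string like '6.7.0'."""
    versions_ok = all(
        tuple(int(p) for p in v.split(".")[:2]) >= MIN_VERSION
        for v in host_versions.values()
    )
    return versions_ok and not health_issues

cluster = {"esxi-01": "6.7.0", "esxi-02": "6.5.0"}
```

Comparing version strings as integer tuples avoids the classic pitfall of string comparison (where "6.10" would sort before "6.5").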
A Reference Documents
See the following documentation for more system information:
- Dell EMC Ready Solution for VMware vCloud NFV 3.0 OpenStack Edition Platform – Deployment Manual Hardware Guide
- Dell EMC Ready Solution for VMware vCloud NFV 3.0 OpenStack Edition Platform – Deployment Manual Operations Guide
- Dell EMC Ready Solution for VMware vCloud NFV 3.0 OpenStack Edition Platform – Deployment Manual Software Configuration Guide
- VMware vCloud NFV OpenStack Edition Reference Architecture for VMware vCloud NFV OpenStack Edition 3.0
- Enhanced vMotion Compatibility (EVC) processor support
- Dell EMC Big Cloud Fabric Deployment and Best Practices Guide with VMware vSAN