
10 Gigabit Ethernet Virtual Data Center Architectures

    A Dell Technical White Paper


THIS WHITE PAPER IS FOR INFORMATIONAL PURPOSES ONLY, AND MAY CONTAIN TYPOGRAPHICAL ERRORS AND TECHNICAL INACCURACIES. THE CONTENT IS PROVIDED AS IS, WITHOUT EXPRESS OR IMPLIED WARRANTIES OF ANY KIND.

© 2011 Dell Inc. All rights reserved. Reproduction of this material in any manner whatsoever without the express written permission of Dell Inc. is strictly forbidden. For more information, contact Dell.

Dell, the DELL logo, and the DELL badge, PowerConnect, and PowerVault are trademarks of Dell Inc. Symantec and the SYMANTEC logo are trademarks or registered trademarks of Symantec Corporation or its affiliates in the US and other countries. Microsoft, Windows, Windows Server, and Active Directory are either trademarks or registered trademarks of Microsoft Corporation in the United States and/or other countries. Other trademarks and trade names may be used in this document to refer to either the entities claiming the marks and names or their products. Dell Inc. disclaims any proprietary interest in trademarks and trade names other than its own.

    September 2011


Contents

Introduction
The Foundation for a Service Oriented Architecture
Design Principles for the Next Generation Virtual Data Centers
Network Interface Controller (NIC) Teaming
NIC Virtualization
Layer 2 Aggregation/Access Switching
Layer 3 Aggregation/Access Switching
Layer 4-7 Aggregation/Access Switching
Resource Virtualization Within and Across PODs
Resource Virtualization Across Data Centers
Migration from Legacy Data Center Architectures
Summary
References

Figures

Figure 1. Legacy three-tier data center architecture
Figure 2. Simplified view of virtual machine technology
Figure 3. Consolidation of data center aggregation and access layers
Figure 4. Reference design for the virtual data center
Figure 5. NIC teaming for data center servers
Figure 6. VMware virtual switch tagging with NIC teaming
Figure 7. VMware virtual switch tagging with NIC teaming
Figure 8. Logical topology for Internet flows in the POD
Figure 9. Re-allocation of server resources within the POD
Figure 10. Multiple VLANs per trunk
Figure 11. Global virtualization


Introduction

Consolidation of data center resources offers an opportunity for architectural transformation based on the use of scalable, high-density, high-availability technology solutions, such as high port-density 10 GbE switch/routers, cluster and grid computing, blade or rack servers, and network attached storage. Consolidation also opens doors for virtualization of applications, servers, storage, and networks. This suite of highly complementary technologies has now matured to the point where mainstream adoption in large data centers has been occurring for some time.

According to a recent Yankee Group survey of both large and smaller enterprises, 62% of respondents already have a server virtualization solution at least partially in place, while another 21% plan to deploy the technology over the next 12 months.

A consolidated and virtualized 10 GbE data center offers numerous benefits:

• Lower OPEX/CAPEX and TCO through reduced complexity, reductions in the number of physical servers and switches, improved lifecycle management, and better human and capital resource utilization
• Increased adaptability of the network to meet changing business requirements
• Reduced requirements for space, power, cooling, and cabling. For example, in power/cooling (P/C) alone, the following savings are possible (a rough worked example follows this list):
  o Server consolidation via virtualization: up to 50-60% of server P/C
  o Server consolidation via blade or rack servers: up to an additional 20-30% of server P/C
  o Switch consolidation with high density switching: up to 50% of switch P/C
• Improved business continuance and compliance with regulatory security standards
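To make the compounding of these percentages concrete, the following Python sketch walks through how the per-server and per-switch savings might combine. The baseline wattages are illustrative assumptions, not measured figures, and the reading of "additional 20-30%" as additive against the original server P/C is also an assumption:

```python
# Hedged worked example: compounding power/cooling (P/C) savings.
# Baseline wattages are assumptions for illustration; only the
# percentage ranges come from the list above.

server_pc_baseline = 100_000  # watts of server P/C (assumed)
switch_pc_baseline = 20_000   # watts of switch P/C (assumed)

# Step 1: virtualization removes up to 50-60% of server P/C (midpoint used).
after_virtualization = server_pc_baseline * (1 - 0.55)

# Step 2: blade/rack consolidation removes an additional 20-30% of
# server P/C (reading "additional" as additive against the baseline).
after_blades = after_virtualization - server_pc_baseline * 0.25

# Step 3: high-density switching removes up to 50% of switch P/C.
after_switch_consolidation = switch_pc_baseline * (1 - 0.50)

total_before = server_pc_baseline + switch_pc_baseline
total_after = after_blades + after_switch_consolidation
print(f"P/C before: {total_before:,} W, after: {total_after:,} W "
      f"({1 - total_after / total_before:.0%} saved)")
```

With these assumed numbers, the combined data center P/C reduction lands around 75%, showing how the individual percentages compound rather than simply add.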

The virtualized 10 GbE data center also provides the foundation for a service oriented architecture (SOA). From an application perspective, SOA is a virtual application architecture where the application is composed of a set of component services (e.g., implemented with web services) that may be distributed throughout the data center or across multiple data centers. SOA's emphasis on application modularity and re-use of application component modules enables enterprises to readily create high-level application services that encapsulate existing business processes and functions, or address new business requirements.

From an infrastructure perspective, SOA is a resource architecture where applications and services draw on a shared pool of resources rather than having physical resources rigidly dedicated to specific applications. The application and infrastructure aspects of SOA are highly complementary. In terms of applications, SOA offers a methodology to dramatically increase productivity in application creation/modification, while the SOA-enabled infrastructure, embodied by the 10 GbE virtual data center, dramatically improves the flexibility, productivity, and manageability of delivering application results to end users by drawing on a shared pool of virtualized computing, storage, and networking resources.

This document provides guidance in designing consolidated, virtualized, and SOA-enabled data centers based on the ultra high port-density 10 GbE switch/router products of Force10 Networks in conjunction with other specialized hardware and software components provided by Force10 technology partners, including those offering:


• Server virtualization and server management software
• iSCSI storage area networks
• GbE and 10 GbE server NICs featuring I/O virtualization and protocol acceleration
• Application delivery switching, load balancers, and firewalls

    The Foundation for a Service Oriented Architecture

Over the last several years, data center managers have had to deal with the problem of server sprawl to meet the demand for application capacity. As a result, the prevalent legacy enterprise data center architecture has evolved as a multi-tier structure patterned after high-volume websites. Servers are organized into three separate tiers of the data center network, comprised of web or front-end servers, application servers, and database/back-end servers, as shown in Figure 1. This architecture has been widely adapted to enterprise applications, such as ERP and CRM, that support web-based user access.

Multiple tiers of physically segregated servers as shown in Figure 1 are frequently employed because a single tier of aggregation and access switches may lack the scalability to provide the connectivity and aggregate performance needed to support large numbers of servers. The ladder structure of the network shown in Figure 1 also minimizes the traffic load on the data center core switches because it isolates intra-tier traffic, web-to-application traffic, and application-to-database traffic from the data center core.

Figure 1. Legacy three-tier data center architecture


While this legacy architecture has performed fairly well, it has some significant drawbacks. The physical segregation of the tiers requires a large number of devices, including three sets of Layer 2 access switches, three sets of Layer 2/Layer 3 aggregation switches, and three sets of appliances such as load balancers, firewalls, IDS/IPS devices, and SSL offload devices that are not shown in the figure.

The proliferation of devices is further exacerbated by dedicating a separate data center module similar to that shown in Figure 1 to each enterprise application, with each server running a single application or application component. This physical application/server segregation typically results in servers that are, on average, only 20% utilized, wasting 80% of server capital investment and support costs. As a result, the inefficiency of dedicated physical resources per application is the driving force behind ongoing efforts to virtualize the data center.

The overall complexity of the legacy design has a number of undesirable side-effects:

• The infrastructure is difficult to manage, especially when additional applications or application capacity is required
• Optimizing performance requires fairly complex traffic engineering to ensure that traffic flows follow predictable paths
• When load balancers, firewalls, and other appliances are integrated within the aggregation switch/router to reduce box count, it may be necessary to use active-passive redundancy configurations rather than the more efficient active-active redundancy more readily achieved with standalone appliances. Designs calling for active-passive redundancy for appliances and switches in the aggregation layer require twice as much throughput capacity as active-active redundancy designs
• The total cost of ownership (TCO) is high due to low resource utilization levels combined with the impact of complexity on downtime and on the requirements for power, cooling, space, and management time

Design Principles for the Next Generation Virtual Data Centers

The Force10 Networks approach to next generation data center designs is to build on the legacy architecture's concept of modularity, but to greatly simplify the network while significantly improving its efficiency, scalability, reliability, and flexibility, resulting in a much lower total cost of ownership. This is accomplished by consolidating and virtualizing the network, computing, and storage resources, resulting in an SOA-enabled data center infrastructure.

Following are the key principles of data center consolidation and virtualization upon which the Virtual Data Center Architecture is based:

POD Modularity: A POD (point of delivery) is a group of compute, storage, network, and application software components that work together to deliver a service or application. The POD is a repeatable construct, and its components must be consolidated and virtualized to maximize the modularity, scalability, and manageability of data centers. Depending on the architectural model for applications, a POD may deliver a high level application service or it may provide a single component of an SOA application, such as a web front end or a database service. Although the POD modules share a common architecture, they can be customized to support a tiered services model. For example, the security, resiliency/availability, and QoS capabilities of an individual POD can be adjusted to meet the service level requirements of the specific application or service that it delivers. Thus, an eCommerce POD would be adapted to deliver the higher levels of security/availability/QoS required vs. those suitable for lower tier applications, such as email.


Server Consolidation and Virtualization: Server virtualization based on virtual machine (VM) technology, such as VMware ESX Server, allows numerous virtual servers to run on a single physical server, as shown in Figure 2.

    Figure 2. Simplified view of virtual machine technology

Virtualization provides the stability of running a single application per (virtual) server, while greatly reducing the number of physical servers required and improving utilization of server resources. VM technology also greatly facilitates the mobility of applications among virtual servers and the provisioning of additional server resources to satisfy fluctuations in demand for critical applications.

Server virtualization and cluster computing are highly complementary technologies for fully exploiting emerging multi-core CPU microprocessors. VM technology provides robustness in running multiple applications per core plus facilitating mobility of applications across VMs and cores. Cluster computing middleware allows multiple VMs or multiple cores to collaborate in the execution of a single application. For example, VMware Virtual SMP enables a single virtual machine to span multiple physical cores, virtualizing processor-intensive enterprise applications such as ERP and CRM. The VMware Virtual Machine File System (VMFS) is a high-performance cluster file system that allows clustering of virtual machines spanning multiple physical servers. By 2010, the number of cores per server CPU is projected to be in the range of 16-64, with network I/O requirements in the 100 Gbps range. Since most near-term growth in chip-based CPU performance will come from higher core count rather than increased clock rate, data centers requiring higher application performance will need to place increasing emphasis on technologies such as cluster computing and Virtual SMP.

NIC Virtualization: With numerous VMs per physical server, network virtualization has to be extended to the server and its network interface. Each VM is configured with a virtual NIC that shares the resources of the server's array of real NICs. This level of virtualization, together with a virtual switch capability providing inter-VM switching on a physical server, is provided by VMware Infrastructure software. Higher performance I/O virtualization is possible using intelligent NICs that provide hardware support for I/O virtualization, offloading the processing supporting protocol stacks, virtual NICs, and virtual switching from the server CPUs. NICs that support I/O virtualization as well as protocol offload (e.g., TCP/IP, RDMA, iSCSI) are available from Force10 technology partners, including NetXen, Neterion, Chelsio, NetEffect, and various server vendors. Benchmark results have shown that protocol offload NICs can dramatically improve network throughput and latency for both data applications (e.g., HPC, clustered databases, and web servers) and network storage access (NAS and iSCSI SANs).

Network Consolidation and Virtualization: Highly scalable and resilient 10 Gigabit Ethernet switch/routers, exemplified by the Force10 E-Series, provide the opportunity to greatly simplify the network design of the POD module, as well as the data center core. Leveraging VLAN technology together with the E-Series' scalability and resiliency allows the distinct aggregation and access layers of the legacy data center design to be collapsed into a single aggregation/access layer of switch/routing, as shown in Figure 3.


    Figure 3. Consolidation of data center aggregation and access layers

The integrated aggregation/access switch becomes the basic network switching element upon which a POD is built.

The benefits of a single layer of switch/routing within the POD include reduced switch count, simplified traffic flow patterns, elimination of Layer 2 loops and STP scalability issues, and improved overall reliability. The ultra high density, reliability, and performance of the E-Series switch/router maximize the scalability of the design model both within PODs and across the data center core. The scalability of the E-Series often enables network consolidations with a >3:1 reduction in the number of data center switches. This high reduction factor is due to the combination of the following factors (a brief worked example follows the list):

• Elimination of the access switching layer
• More servers per POD aggregation switch, resulting in fewer aggregation switches
• More POD aggregation switches per core switch, resulting in fewer core switches
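As a rough illustration of where a >3:1 reduction can come from, the Python sketch below multiplies out the three factors for a hypothetical data center. All of the device counts and fan-out ratios are assumptions chosen for illustration, not figures from this paper:

```python
# Hedged sketch: how eliminating a tier and increasing fan-out can
# compound into a >3:1 switch-count reduction. All ratios are assumed.

servers = 1200

# Legacy design (assumed fan-outs): access + aggregation + core tiers.
legacy_access = -(-servers // 40)     # ceil: ~40 servers per access switch
legacy_agg = -(-legacy_access // 6)   # ceil: ~6 access switches per agg switch
legacy_core = 4
legacy_total = legacy_access + legacy_agg + legacy_core

# Consolidated design: servers connect straight to high-density
# aggregation/access POD switches; fewer, larger switches per layer.
pod_agg = -(-servers // 300)          # ceil: ~300 servers per POD switch
core = 2
consolidated_total = pod_agg + core

print(f"legacy: {legacy_total} switches, consolidated: {consolidated_total}")
print(f"reduction: {legacy_total / consolidated_total:.1f}:1")
```

With these assumed fan-outs, 39 legacy switches collapse to 6, comfortably exceeding the 3:1 figure; real ratios depend on port densities and oversubscription targets.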

Storage Resource Consolidation and Virtualization: Storage resources accessible over the Ethernet/IP data network further simplify the data center LAN by minimizing the number of separate switching fabrics that must be deployed and managed. 10 GbE switching in the POD provides ample bandwidth for accessing unified NAS/iSCSI IP storage devices, especially when compared to the bandwidth available for Fibre Channel SANs. Consolidated, shared, and virtualized storage also facilitates VM-based application provisioning and mobility, since each physical server has shared access to the necessary virtual machine images and required application data. The VMFS provides multiple VMware ESX Servers with concurrent read-write access to the same virtual machine storage. The cluster file system thus enables live migration of running virtual machines from one physical server to another, automatic restart of failed virtual machines on a different physical server, and the clustering of virtual machines.

Global Virtualization: Virtualization should not be constrained to the confines of the POD, but should be capable of being extended to support a pool of shared resources spanning not only a single POD, but also multiple PODs, the entire data center, or even multiple data centers. Virtualization of the infrastructure allows the PODs to be readily adapted to an SOA application model where the resource pool is called upon to respond rapidly to changes in demand for services and to new services being installed on the network.

Ultra Resiliency/Reliability: As data centers are consolidated and virtualized, resiliency and reliability become even more critical aspects of the network design. This is because the impact of a failed physical resource is now more likely to extend to multiple applications and larger numbers of user flows. Therefore, the virtual data center requires the combination of ultra high resiliency devices, such as the E-Series switch/routers, and an end-to-end network design that takes maximum advantage of active-active redundancy configurations, with rapid fail-over to standby resources.

Security: Consolidation and virtualization also place increased emphasis on data center network security. With virtualization, application or administrative domains may share a pool of common resources, creating the requirement that the logical segregation among virtual resources be even stronger than the physical segregation featured in the legacy data center architecture. This level of segregation is achieved by having multiple levels of security at the logical boundaries of the resources being protected within the PODs and throughout the data center. In the virtual data center, security is provided by:

• Full virtual machine isolation to prevent ill-behaved or compromised applications from impacting any other virtual machine/application in the environment
• Application and control VLANs to provide traffic segregation
• Wire-rate switch/router ACLs applied to intra-POD and inter-POD traffic
• Stateful virtual firewall capability that can be customized to specific application requirements within the POD
• Security-aware appliances for load balancing and other traffic management and acceleration functions
• IDS/IPS appliance functionality at full wire-rate for real-time protection of critical POD resources from both known intrusion methodologies and day-one attacks
• AAA for controlled user access to the network and network devices to enforce policies defining user authentication and authorization profiles

Figure 4 provides an overview of the architecture of a consolidated data center based on 10 Gigabit Ethernet switch/routers providing an integrated layer of aggregation and access switching, with Layer 4-7 services provided by standalone appliances. The consolidation of the data center network simplifies deployment of the virtualization technologies that will be described in more detail in subsequent sections of this document.

Overall data center scalability is addressed by configuring multiple PODs connected to a common set of data center core switches to meet application/service capacity, organizational, and policy requirements. In addition to server connectivity, the basic network design of the POD can be utilized to provide other services on the network, such as ISP connectivity, WAN access, etc.

Within an application POD, multiple servers running the same application are placed in the same application VLAN, with appropriate load balancing and security services provided by the appliances. Enterprise applications, such as ERP, that are based on distinct, segregated sets of web, application, and database servers can be implemented within a single tier of scalable L2/L3 switching using server clustering and distinct VLANs for segregation of web servers, application servers, and database servers. Alternatively, where greater scalability is required, the application could be distributed across a web server POD, an application server POD, and a database POD.


Further simplification of the design is achieved using IP/Ethernet storage attachment technologies, such as NAS and iSCSI, with each application's storage resources incorporated within the application-specific VLAN.

    Figure 4. Reference design for the virtual data center

This section of the document focuses on the various design aspects of the consolidated and virtualized data center POD module.

Network Interface Controller (NIC) Teaming

As noted earlier, physical and virtual servers dedicated to a specific application are placed in a VLAN reserved for that application. This simplifies the logical design of the network and satisfies the requirement of many clustered applications for Layer 2 adjacency among nodes participating in the cluster.

In order to avoid single points of failure (SPOF) in the access portion of the network, NIC teaming is recommended to allow each physical server to be connected to two different aggregation/access switches. For example, a server with two teamed NICs, sharing a common IP address and MAC address, can be connected to both POD switches, as shown in Figure 5. The primary NIC is in the active state, and the secondary NIC is in standby mode, ready to be activated in the event of failure in the primary path to the POD.


    Figure 5. NIC teaming for data center servers

NIC teaming can also be used for bonding several GbE NICs to form a higher speed link aggregation group (LAG) connected to one of the POD switches. As 10 GbE interfaces continue to ride the volume/cost curve, GbE NIC teaming will become a relatively less cost-effective means of increasing bandwidth per server.
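The primary/secondary behavior described above can be sketched in a few lines of Python. This is a simplified model of the failover decision only; the NIC names are hypothetical, and the link-state table stands in for what a real teaming driver learns by polling the PHY:

```python
# Minimal sketch of active/standby NIC teaming failover logic.
# link_up simulates driver-observed link states; NIC names are hypothetical.

link_up = {"nic0": True, "nic1": True}

class NicTeam:
    """Two NICs share one IP/MAC address; only the active NIC carries traffic."""

    def __init__(self, primary: str, secondary: str):
        self.primary = primary      # cabled to POD switch 1
        self.secondary = secondary  # cabled to POD switch 2
        self.active = primary

    def poll(self) -> str:
        # Fail over when the primary link drops; fail back once the
        # primary path to the POD recovers.
        if self.active == self.primary and not link_up[self.primary]:
            self.active = self.secondary
        elif self.active == self.secondary and link_up[self.primary]:
            self.active = self.primary
        return self.active

team = NicTeam("nic0", "nic1")
link_up["nic0"] = False   # primary path to the POD fails
print(team.poll())        # -> nic1 (standby NIC activated)
```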

    NIC Virtualization

When server virtualization is deployed, a number of VMs generally share a physical NIC. Where the VMs are spread across multiple applications, the physical NIC needs to support traffic for multiple VLANs. An elegant solution for multiple VMs and VLANs sharing a physical NIC is provided by VMware ESX Server Virtual Switch Tagging (VST). As shown in Figure 6, each VM's virtual NICs are attached to a port group on the ESX Server virtual switch that corresponds to the VLAN associated with the VM's application. The virtual switch then adds 802.1Q VLAN tags to all outbound frames, extending 802.1Q trunking to the server and allowing multiple VMs to share a single physical NIC.
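To make the tagging operation concrete, the sketch below shows in Python how a 4-byte 802.1Q tag is spliced into an Ethernet frame on egress. It illustrates the IEEE 802.1Q frame format only and is not VMware's implementation:

```python
import struct

# Sketch of 802.1Q tag insertion as a virtual switch performs it on
# egress (frame format per IEEE 802.1Q; not VMware's actual code).
TPID = 0x8100  # Tag Protocol Identifier for 802.1Q

def tag_frame(frame: bytes, vlan_id: int, priority: int = 0) -> bytes:
    """Insert a 4-byte 802.1Q tag after the 12-byte dst/src MAC header."""
    assert 0 < vlan_id < 4095, "VLAN ID must fit in 12 bits (1-4094 usable)"
    tci = (priority << 13) | vlan_id      # PCP (3b) + DEI (1b) + VID (12b)
    tag = struct.pack("!HH", TPID, tci)
    return frame[:12] + tag + frame[12:]  # MACs, tag, original EtherType/payload

frame = bytes(12) + b"\x08\x00" + b"payload"  # dummy MACs + IPv4 EtherType
print(tag_frame(frame, vlan_id=10).hex())
```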

The overall benefits of NIC teaming and I/O virtualization can be combined with VMware Infrastructure's ESX Server V3.0 by configuring multiple virtual NICs per VM and multiple real NICs per physical server. ESX Server V3.0 NIC teaming supports a variety of fault tolerant and load sharing operational modes in addition to the simple primary/secondary teaming model described at the beginning of this section. Figure 7 shows how VST, together with simple primary/secondary NIC teaming, supports red and green VLANs while eliminating SPOFs in a POD employing server virtualization.


    Figure 6. VMware virtual switch tagging with NIC teaming

As noted earlier, for improved server and I/O performance, the virtualization of NICs, virtual switching, and VLAN tagging can be offloaded to intelligent Ethernet adapters that provide hardware support for protocol processing, virtual networking, and virtual I/O.

Layer 2 Aggregation/Access Switching

With a collapsed aggregation/access layer of switching, the Layer 2 topology of the POD is extremely simple, with servers in each application VLAN evenly distributed across the two POD switches. This distributes the traffic across the POD switches, which form an active-active redundant pair. The Layer 2 topology is free from loops for intra-POD traffic. Nevertheless, for extra robustness, it is recommended that application VLANs be protected from loops that could be formed by configuration errors or other faults, using standard practices for MSTP/RSTP.

    Figure 7. VMware virtual switch tagging with NIC teaming


The simplicity of the Layer 2 network makes it feasible for the POD to support large numbers of real and virtual servers, and also makes it feasible to extend application VLANs through the data center core switch/router to other PODs in the data center, or even to PODs in other data centers. When VLANs are extended beyond the POD, per-VLAN MSTP/RSTP is required to deal with possible loops in the core of the network.

In addition, it may also be desirable to allocate applications to PODs in a manner that minimizes data flows between distinct application VLANs within the POD. This preserves the POD's horizontal bandwidth for intra-VLAN communications between clustered servers and for Ethernet/IP-based storage access.

Layer 3 Aggregation/Access Switching

Figure 8 shows the logical flow of application traffic through a POD. Web traffic from the Internet is routed in the following way:

1. Internet flows are routed with OSPF from the core to a VLAN/security zone for untrusted traffic based on public, virtual IP addresses (VIPs).
2. Load balancers (LBs) route the traffic to another untrusted VLAN, balancing the traffic based on the private, real IP addresses of the servers. Redundant load balancers are configured with VRRP for gateway redundancy. For load balanced applications, the LBs function as the default virtual gateway.
3. Finally, traffic is routed by firewalls (FWs) to the trusted application VLANs on which the servers reside. The firewalls also use VRRP for gateway redundancy. For applications requiring stateful inspection of flows but no load balancing, the firewalls function as the default virtual gateway. (A simplified sketch of the VRRP election used in steps 2 and 3 follows Figure 8.)

    Figure 8. Logical topology for Internet flows in the POD
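The VRRP behavior relied on in steps 2 and 3 reduces to a simple rule: each redundant pair elects one master to own the shared virtual gateway address, and a backup takes over when the master's advertisements stop. The Python sketch below models that election only; it is not an implementation of the VRRP protocol itself, and the device names and priorities are assumed:

```python
# Simplified model of VRRP gateway redundancy (election rule only):
# the highest-priority router still advertising owns the virtual
# gateway IP shared by the redundant LB or FW pair.

def elect_master(routers: dict, alive: set):
    """routers maps name -> VRRP priority; alive holds routers whose
    advertisements are still being heard. Returns the master, or None."""
    candidates = {name: prio for name, prio in routers.items() if name in alive}
    if not candidates:
        return None
    return max(candidates, key=candidates.get)

pair = {"fw-1": 120, "fw-2": 100}            # hypothetical priorities
print(elect_master(pair, {"fw-1", "fw-2"}))  # fw-1 is master
print(elect_master(pair, {"fw-2"}))          # fw-1 fails -> fw-2 takes over
```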


Intranet traffic would be routed through a somewhat different set of VLAN security zones, based on whether load balancing is needed and the degree of trust placed in the source/destination for that particular application flow. In many cases, Intranet traffic would bypass untrusted security zones, with switch/router ACLs providing ample security to allow Intranet traffic to be routed through the data center core directly from one application VLAN to another without traversing load balancing or firewall appliances.

In addition to application VLANs, control VLANs are configured to isolate control traffic among the network devices from application traffic. For example, control VLANs carry routing updates among switch/routers. In addition, a redundant pair of load balancers or stateful firewalls would share a control VLAN to permit traffic flows to fail over from the primary to the secondary appliance without loss of state or session continuity. In a typical network design, trunk links carry a combination of traffic for application VLANs and link-specific control VLANs.

From the campus core switches through the data center core switches, there are at least two equal cost routes to the server subnets. This permits the core switches to load balance Layer 3 traffic to each POD switch using OSPF ECMP routing. Where application VLANs are extended beyond the POD, the trunks to and among the data center core switches will carry a combination of Layer 2 and Layer 3 traffic.
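The ECMP load balancing described here typically amounts to hashing a flow's addressing fields and using the result to pick one of the equal-cost next hops, so all packets of a flow stay on one path. A minimal Python sketch follows; the next-hop names are illustrative, and real switch ASICs use their own hardware hash functions (CRC32 is only a stand-in):

```python
import zlib

# Minimal sketch of flow-based ECMP next-hop selection: hash the flow's
# addressing fields so every packet of a flow takes the same path.

next_hops = ["pod-switch-1", "pod-switch-2"]  # equal-cost routes (assumed names)

def pick_next_hop(src_ip: str, dst_ip: str, src_port: int, dst_port: int) -> str:
    key = f"{src_ip}|{dst_ip}|{src_port}|{dst_port}".encode()
    return next_hops[zlib.crc32(key) % len(next_hops)]

# Packets of the same flow always hash to the same POD switch:
print(pick_next_hop("10.1.1.5", "10.8.0.20", 33512, 443))
```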

Layer 4-7 Aggregation/Access Switching

Because the POD design is based on standalone appliances for Layer 4-7 services (including server load balancing, SSL termination/acceleration, VPN termination, and firewalls), data center designers are free to deploy devices with best-in-class functionality and performance that meet the particular application requirements within each POD. For example, Layer 4-7 devices may support a number of advanced features, including:

• Integrated functionality: For example, load balancing, SSL acceleration, and packet filtering functionality may be integrated within a single device, reducing box count while improving the reliability and manageability of the POD
• Device virtualization: Load balancers and firewalls that support virtualization allow physical device resources to be partitioned into multiple virtual devices, each with its own configuration. Device virtualization within the POD allows virtual appliances to be devoted to each application, with the configuration corresponding to the optimum device behavior for that application type and its domain of administration
• Active/active redundancy: Virtual appliances also facilitate high availability configurations where pairs of physical devices provide active-active redundancy. For example, a pair of physical firewalls can be configured with one set of virtual firewalls customized to each of the red VLANs and a second set customized for each of the green VLANs. The physical firewall attached to a POD switch would have the red firewalls in an active state and its green firewalls in a standby state. The second physical firewall (connected to the second POD switch) would have the complementary configuration. In the event of an appliance or link failure, all of the active virtual firewalls on the failed device would fail over to the standby virtual firewalls on the remaining device (see the sketch following this list)

Resource Virtualization Within and Across PODs

One of the keys to server virtualization within and across PODs is a server management environment for virtual servers that automates operational procedures and optimizes availability and efficiency in utilization of the resource pool.


The VMware Virtual Center provides the server management function for VMware Infrastructure, including ESX Server, VMFS, and Virtual SMP. With Virtual Center, virtual machines can be provisioned, configured, started, stopped, deleted, relocated, and remotely accessed. In addition, Virtual Center supports high availability by allowing a virtual machine to automatically fail over to another physical server in the event of host failure. All of these operations are simplified because virtual machines are completely encapsulated in virtual disk files stored centrally using shared NAS or iSCSI SAN storage. The Virtual Machine File System allows a server resource pool to concurrently access the same files to boot and run virtual machines, effectively virtualizing VM storage.

Virtual Center also supports the organization of ESX Servers and their virtual machines into clusters, allowing multiple servers and virtual machines to be managed as a single entity. Virtual machines can be provisioned to a cluster rather than linked to a specific physical host, adding another layer of virtualization to the pool of computing resources.

VMware VMotion enables the live migration of running virtual machines from one physical server to another with zero downtime, continuous service availability, complete transaction integrity, and continuity of network connectivity via the appropriate application VLAN. Live migration of virtual machines enables hardware maintenance without scheduling downtime and the resulting disruption of business operations. VMotion also allows virtual machines to be continuously and automatically optimized within resource pools for maximum hardware utilization, flexibility, and availability.

VMware Distributed Resource Scheduler (DRS) works with VMware Infrastructure to continuously automate the balancing of virtual machine workloads across a cluster in the virtual infrastructure. When a guaranteed resource allocation cannot be met on a physical server, DRS will use VMotion to migrate the virtual machine to another host in the cluster that has the needed resources.
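The balancing decision DRS automates can be caricatured in a few lines: when a host can no longer cover its VMs' demands, pick a VM and live-migrate it to the host with the most headroom. The Python sketch below is a deliberate simplification of that idea, not VMware's algorithm; all host names, capacities, and loads are assumed:

```python
# Caricature of DRS-style rebalancing (not VMware's actual algorithm):
# if a host is overcommitted, "VMotion" one VM to the host with the
# most spare capacity. Loads/capacities are in arbitrary assumed units.

hosts = {"esx-1": {"capacity": 100, "vms": {"vm-a": 50, "vm-b": 45, "vm-c": 20}},
         "esx-2": {"capacity": 100, "vms": {"vm-d": 30}}}

def headroom(name: str) -> int:
    return hosts[name]["capacity"] - sum(hosts[name]["vms"].values())

def rebalance() -> None:
    for name, host in hosts.items():
        if headroom(name) >= 0:
            continue  # resource guarantees can be met; nothing to do
        vm, _ = min(host["vms"].items(), key=lambda kv: kv[1])
        target = max(hosts, key=headroom)
        if target != name:  # migrate; the VM's VLAN and shared storage follow it
            hosts[target]["vms"][vm] = host["vms"].pop(vm)

rebalance()
print({h: sorted(hosts[h]["vms"]) for h in hosts})  # vm-c moved to esx-2
```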

Figure 9 shows an example of server resource re-allocation within a POD. In this scenario, a group of virtual and/or physical servers currently participating in cluster A is re-allocated to a second cluster B running another application. Virtual Center and VMotion are used to de-install the cluster A software images from the servers being transferred and then install the required cluster B image, including application, middleware, operating system, and network configuration. As part of the process, the VLAN membership of the transferred servers is changed from VLAN A to VLAN B.

Virtualization of server resources, including VMotion-enabled automated VM failovers and resource re-allocation as described above, can readily be extended across PODs simply by extending the application VLANs across the data center core trunks using 802.1Q VLAN trunking. Therefore, the two clusters shown in Figure 9 could just as well be located in distinct physical PODs. With VLAN extension, a virtual POD can be defined that spans multiple physical PODs. Without this form of POD virtualization, it would be necessary to use patch cabling between physical PODs in order to extend the computing resources available to a given application. Patch cabling among physical PODs is an awkward solution for ad hoc connectivity, especially when the physical PODs are on separate floors of the data center facility.


    Figure 9. Re-allocation of server resources within the POD

As noted earlier, the simplicity of the POD Layer 2 network makes this VLAN extension feasible without running the risk of STP-related instabilities. With application VLANs and cluster membership extended throughout the data center, the data center trunks carry a combination of Layer 3 and Layer 2 traffic, potentially with multiple VLANs per trunk, as shown in Figure 10. The 10 GbE links between the PODs provide ample bandwidth to support VM clustering, VMotion transfers and failovers, as well as access to shared storage resources.

    Figure 10. Multiple VLANs per trunk


Resource Virtualization Across Data Centers

Resource virtualization can also be leveraged among data centers sharing the same virtual architecture. As a result, Virtual Center management of VMotion-based backup and restore operations can provide redundancy and disaster recovery capabilities among enterprise data center sites. This form of global virtualization is based on an N x 10 GbE inter-data center backbone, which carries a combination of Layer 2 and Layer 3 traffic resulting from extending application and control VLANs from the data center cores across the 10 GbE MAN/WAN network, as shown in Figure 11.

    Figure 11. Global virtualization

In this scenario, policy routing and other techniques would be employed to keep traffic as local as possible, using remote resources only when local alternatives are not appropriate or not currently available. Redundant Virtual Center server management operations centers ensure the availability and efficient operation of the globally virtualized resource pool even if entire data centers are disrupted by catastrophic events.

Migration from Legacy Data Center Architectures

The best general approach to migrating from a legacy 3-tier data center architecture to a virtual data center architecture is to start at the server level and follow a step-by-step procedure, replacing access switches, distribution/aggregation switches, and finally data center core switches. One possible blueprint for such a migration is as follows:

1. Select an application for migration. Upgrade and virtualize the application's servers with VMware ESX Server software and NICs as required to support the desired NIC teaming functionality and/or NIC virtualization. Install VMware Virtual Center in the NOC.
2. Replace existing access switches specific to the chosen application with E-Series switch/routers. Establish a VLAN for the application if necessary and configure the E-Series switch to conform to the existing access networking model.
3. Migrate any remaining applications supported by the set of legacy distribution switches in question to E-Series access switches.


4. Transition load balancing and firewall VLAN connectivity to the E-Series, along with OSPF routing among the application VLANs. Existing distribution switches still provide connectivity to the data center core.
5. Introduce new E-Series data center core switch/routers with OSPF and 10 GbE, keeping the existing core routers in place. If necessary, configure OSPF in the old core switches and redistribute routes from OSPF to the legacy routing protocol and vice versa.
6. Remove the set of legacy distribution switches and use the E-Series switches for all aggregation/access functions. At this point, a single virtualized POD has been created.
7. Repeat the process until all applications and servers in the data center have been migrated to integrated PODs. The legacy data center core switches can be removed either before or after full POD migration.

Summary

As enterprise data centers move through consolidation phases toward next generation architectures that increasingly leverage virtualization technologies, the importance of very high performance Ethernet switch/routers will continue to grow. Switch/routers with ultra high capacity coupled with ultra high reliability/resiliency contribute significantly to the simplicity and attractive TCO of the virtual data center. In particular, the E-Series offers a number of advantages for this emerging architecture:

• Smallest footprint per GbE port or per 10 GbE port due to the highest port densities
• Ultra-high power efficiency, requiring only 4.7 watts per GbE port, simplifying high density configurations and minimizing the growing costs of power and cooling
• Ample aggregate bandwidth to support unification of the aggregation and access layers of the data center network plus unification of data and storage fabrics
• System architecture providing a future-proof migration path to the next generation of Ethernet consolidation/virtualization/unification at 100 Gbps
• Unparalleled system reliability and resiliency featuring:
  o multi-processor control plane
  o control plane and switching fabric redundancy
  o modular switch/router operating system (OS) supporting hitless software updates and restarts

A high performance 10 GbE switched data center infrastructure provides the ideal complement for local and global resource virtualization. The combination of these fundamental technologies as described in this guide provides the basic SOA-enabled modular infrastructure needed to fully support the next wave of SOA application development, where an application's component services may be transparently distributed throughout the enterprise data center or even among data centers.

References

General discussion of Data Center Consolidation and Virtualization: www.force10networks.com/products/pdf/wp_datacenter_convirt.pdf

E-Series Reliability and Resiliency: www.force10networks.com/products/highavail.asp

Next Generation Terabit Switch/Routers: www.force10networks.com/products/nextgenterabit.asp

High Performance Network Security (IPS): www.force10networks.com/products/hp