Virtualization in 4-4 1-4 Data Center Network.


VIRTUALIZATION IN 4-4 1-4 DATA CENTER NETWORK

PRESENTATION BY: ANKITA MAHAJAN

AGENDA

Introduction

Previous work

Proposed Plan

Experimental setup

Results

Conclusion

INTRODUCTION

• Data center network

• Traditional architecture

• Agility

• Virtualization

• 4-4 1-4 Data center network

Fig. 1: Traditional Data Center

Fig. 3: 4-4 1-4 Data Center

Large clusters of servers, interconnected by network switches, concurrently provide a large number of different services for different client organizations.

Design Goals:

• Availability and fault tolerance

• Scalability

• Throughput

• Economies of scale

• Load balancing

• Low Opex

A number of virtual servers are consolidated onto a single physical server.

Advantages:

• Each customer gets their own VM.

• Virtualization provides agility.

• In case of hardware failure, a VM can be cloned and migrated to a different server.

• Synchronized, replicated VM images instead of redundant servers.

• Easier to test, upgrade and move virtual servers across locations.

• Virtual devices in a DCN.

• Reduced Capex and Opex.

4-4 1-4 ARCHITECTURE

• 4-4 1-4 is a location-based forwarding architecture for DCNs that utilizes the IP address hierarchy.

• Forwarding of packets is done by masking the destination IP address bits (see the sketch after this list).

• No routing or forwarding tables are maintained at switches.

• No convergence overhead.

• Uses statically assigned, location-based IP addresses for all network nodes.
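To illustrate the mask-based forwarding named above, here is a minimal sketch in which one address octet encodes a node's position at each level of the hierarchy. The real 4-4 1-4 bit layout is defined in reference [1]; the field widths here are assumptions for illustration only.

```python
# Minimal sketch of table-free, mask-based forwarding: the switch derives
# its output port directly from destination-address bits, so there is no
# forwarding table to look up or keep converged.

def next_hop_port(switch_level: int, dst_ip: str) -> int:
    """A switch at a given level reads the octet selected by its level
    straight out of the destination address (illustrative layout)."""
    octets = [int(x) for x in dst_ip.split(".")]
    return octets[switch_level]

# A level-1 switch forwarding toward 10.2.3.4 extracts octet index 1:
print(next_hop_port(1, "10.2.3.4"))   # 2 -> forward on port 2
```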

A 3-Level 4-4 1-4 Data Center Network

LOCATION-IP BASED ROUTING

MOTIVATION FOR THIS WORK

4-4 1-4 delivers great performance guarantees in a traditional (non-virtualized) setting, due to location-based static IP address allocation to all network elements.

Agility is essential in current data centers, run by cloud service providers, to reduce cost by increasing infrastructure utilization.

Server Virtualization provides the required agility.

Whether the 4-4 1-4 network can deliver these performance guarantees in a virtualized setting, as required by modern data centers, is the major motivation for this work.

PROBLEM STATEMENT

How to virtualize the 4-4 1-4 data center network under the following constraints:

• Use static IP allocation along with dynamic VMs.

• No modification of network elements or end hosts.

Design Goals: To design a virtualized data center using the 4-4 1-4 topology that is

• Agile

• Scalable and robust

• Low in overhead incurred due to virtualization

• Low in end-to-end latency and high in throughput

• Suitable for all kinds of data center usage scenarios:

  • Compute intensive: HPC

  • Data intensive: video and file streaming

  • Balanced: geographic information systems

PROPOSED SOLUTION

• Separation of Location-IP and VM-IP

• Tunneling at source (see the sketch after the figures below)

• Directory structure

• Query process

• Directory Update mechanism

Packet tunneled through physical network using location-IP header

Packet sending at a server running a type-1 hypervisor
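A minimal sketch of the source-side tunneling just illustrated, assuming an IP-in-IP style encapsulation in which the hypervisor resolves the destination VM-IP to a location-IP and wraps the VM's packet in an outer header. `Packet` and `directory_lookup` are illustrative names, not the paper's API.

```python
from dataclasses import dataclass

@dataclass
class Packet:
    src: str
    dst: str
    payload: object   # the encapsulated inner packet, or raw data

def directory_lookup(vm_ip: str) -> str:
    """Resolve VM-IP -> location-IP of the hosting PM. In the proposal
    this is a query to a Directory Server; a hypothetical static map
    stands in for it here."""
    return {"172.16.0.7": "10.2.3.4"}[vm_ip]

def tunnel_at_source(inner: Packet, host_location_ip: str) -> Packet:
    # The outer header carries only location-IPs, so the physical
    # 4-4 1-4 switches mask-forward it without knowing about VMs at all.
    return Packet(src=host_location_ip,
                  dst=directory_lookup(inner.dst),
                  payload=inner)

inner = Packet("172.16.0.5", "172.16.0.7", b"app data")   # VM-IP addresses
outer = tunnel_at_source(inner, host_location_ip="10.1.1.2")
print(outer.dst)   # 10.2.3.4: the destination PM's hypervisor
                   # decapsulates and delivers to VM 172.16.0.7
```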


Directory structure (the arithmetic is spelled out in the sketch below):

• Physical Machines (PMs) = 2^16

• Virtual Machines = 2^17 (at 2 VMs/PM) up to 2^20 (at 16 VMs/PM)

• Directory Servers (DSs) = 64

• Update Servers (USs) = 16

Hence, one DS per 1024 PMs and one US per (4 × 1024) PMs. This implies 64 DSs for a minimum of 131072 VMs.
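The provisioning arithmetic above, spelled out as a check:

```python
# Directory sizing from the slide: servers are provisioned per
# fixed-size blocks of physical machines.
PMS = 2**16                            # physical machines
VMS_MIN = 2 * PMS                      # 2 VMs/PM  -> 2**17 = 131072 VMs
VMS_MAX = 16 * PMS                     # 16 VMs/PM -> 2**20 VMs
DIRECTORY_SERVERS = PMS // 1024        # one DS per 1024 PMs
UPDATE_SERVERS = PMS // (4 * 1024)     # one US per 4096 PMs
print(DIRECTORY_SERVERS, UPDATE_SERVERS, VMS_MIN)   # 64 16 131072
```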


Data Structure of Directory
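A minimal sketch of one Directory Server, assuming the directory simply maps a VM-IP to the location-IP of its hosting PM; the actual record layout in the figure may carry more fields. The update path serves the directory update mechanism (VM creation and migration), the query path serves source-side tunneling.

```python
class DirectoryServer:
    def __init__(self) -> None:
        self._table: dict[str, str] = {}   # VM-IP -> location-IP

    def update(self, vm_ip: str, location_ip: str) -> None:
        """Invoked via the update mechanism when a VM boots or migrates."""
        self._table[vm_ip] = location_ip

    def query(self, vm_ip: str) -> str | None:
        """Invoked by a source hypervisor before tunneling a packet."""
        return self._table.get(vm_ip)

ds = DirectoryServer()
ds.update("172.16.0.7", "10.2.3.4")   # VM placed on PM 10.2.3.4
ds.update("172.16.0.7", "10.3.1.1")   # ...and later migrated
print(ds.query("172.16.0.7"))         # 10.3.1.1
```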

EXPERIMENTAL SETUP

Simulation environment: Extension of NS2/Greencloud:

• Packet-level simulation of communications in DCN, unlike CloudSim, MDCSim, etc.

• DCN entities are modeled as C++ and OTcl objects.

DCN Workloads: Categories of experiments

• Computation Intensive Workload (CIW): The servers are considerably loaded, but there is negligible inter-server communication.

• Data Intensive Workload (DIW): Huge inter-server data transfers, but negligible load on the computing servers.

• Balanced Workload (BW): Communication links and computing servers are proportionally loaded.

EXPERIMENTAL SETUP

• In CIW and BW, tasks are scheduled in a round-robin fashion by the Data Center object (DCobject) onto VMs on servers fulfilling the task's resource requirement (see the sketches after this list).

• A task is sent to its allocated VM by the DCobject through the core switches; output is returned to the same core switch, which then forwards it to the DCobject.

• In DIW and BW, intra-DCN communication (data transfer) is modelled by 1:1:1 TCP flows between servers, with source-destination pairs chosen by one of three patterns (also sketched after this list):

• S: Source and destination within the same Level-0.

• D: Source and destination in different Level-0s but the same Level-1.

• R: Random selection of source and destination pairs inside a Level-1.
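To make the scheduling concrete, here is a minimal sketch of round-robin placement onto VMs that satisfy a task's resource requirement, as used for CIW and BW. It assumes each VM advertises a single free-capacity number; all names are illustrative, not taken from the simulator.

```python
def schedule_round_robin(tasks, vms, start=0):
    """tasks: list of resource demands; vms: list of {'free': capacity}."""
    placements, i = [], start
    for demand in tasks:
        for j in range(len(vms)):              # scan at most one full cycle
            k = (i + j) % len(vms)
            if vms[k]["free"] >= demand:       # resource-requirement check
                vms[k]["free"] -= demand
                placements.append((demand, k))
                i = (k + 1) % len(vms)         # next task continues the cycle
                break
        else:
            placements.append((demand, None))  # no VM can host this task
    return placements

vms = [{"free": 4}, {"free": 2}, {"free": 4}]
print(schedule_round_robin([2, 2, 3, 4], vms))
# [(2, 0), (2, 1), (3, 2), (4, None)]
```

And a sketch of how the S, D and R patterns can pick TCP source-destination pairs, assuming each server is identified by a (level1, level0, host) location tuple; the helper names are ours.

```python
import random

def pick_pair(pattern, servers):
    """Return a (src, dst) pair following pattern S, D or R."""
    src = random.choice(servers)
    if pattern == "S":    # same Level-0 as the source
        pool = [s for s in servers if s[:2] == src[:2] and s != src]
    elif pattern == "D":  # different Level-0, same Level-1
        pool = [s for s in servers if s[0] == src[0] and s[1] != src[1]]
    else:                 # "R": any other server inside the Level-1
        pool = [s for s in servers if s != src]
    return src, random.choice(pool)

# 16 servers in one Level-1: 4 Level-0 subtrees of 4 hosts each.
servers = [(0, l0, h) for l0 in range(4) for h in range(4)]
print(pick_pair("D", servers))
```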

SIMULATION PARAMETERS

NAM SCREEN SNAPSHOT

64-SERVER DCN

PERFORMANCE METRICS

• Average packet delay

• Network Throughput

• End-to-end aggregate/data throughput

• Average hop count

• Packet drop rate

• Normalized Routing overhead
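For concreteness, here is one way the listed metrics could be computed from a per-packet trace; the record fields (`send_t`, `recv_t`, `bits`, `hops`, `dropped`, `control`) are assumed for illustration and are not NS2 trace-format fields.

```python
def summarize(trace, duration_s):
    """trace: list of per-packet dicts; duration_s: simulated seconds."""
    delivered = [p for p in trace if not p["dropped"]]
    data = [p for p in delivered if not p["control"]]
    return {
        "avg_packet_delay_s": sum(p["recv_t"] - p["send_t"] for p in delivered) / len(delivered),
        "network_throughput_bps": sum(p["bits"] for p in delivered) / duration_s,
        "avg_hop_count": sum(p["hops"] for p in delivered) / len(delivered),
        "packet_drop_rate": sum(1 for p in trace if p["dropped"]) / len(trace),
        # routing/control packets per delivered data packet; identically
        # zero under location-based routing, which sends no routing updates
        "normalized_routing_overhead": sum(1 for p in delivered if p["control"]) / max(len(data), 1),
    }
```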

RESULTS: AVERAGE HOP COUNT

RESULTS: COMPUTE INTENSIVE WORKLOAD

• DVR vs LocR on 16 servers: LocR shows 50% less delay and higher throughput.

• 16 vs 64 servers: almost the same.

• Routing overhead in DVR increases with the number of servers.

RESULTS: DATA INTENSIVE WORKLOAD

• Average packet delay: less in LocR than in DVR; reduces by 54% from 16 to 64 servers.

• Network throughput: more in LocR than in DVR; increases by 54% from 16 to 64 servers.

• End-to-end aggregate throughput: more in LocR than in DVR; increases by 53% from 16 to 64 servers.

RESULTS: BALANCED WORKLOAD

• Average packet delay: less in LocR than in DVR; reduces by 42% from 16 to 64 servers.

• Network throughput: more in LocR than in DVR; increases by 42% from 16 to 64 servers.

• End-to-end aggregate throughput: more in LocR than in DVR; increases by 41% from 16 to 64 servers.

CONCLUSION

Creation of a packet-level simulation prototype in NS2/Greencloud for the 4-4 1-4 DCN.

Modelling of compute-intensive, data-intensive and balanced workloads.

We conclude that our framework for virtualization in the 4-4 1-4 DCN has the following significance:

• Routing overhead: No convergence overhead in location-based routing.

• Networking loops: The network is free from forwarding loops.

• Faster hop-by-hop forwarding: A per-packet, per-hop mask operation is faster than table lookup and update operations.

• Efficiency: Location-IP based routing delivers two to ten times more throughput than DVR with the same traffic and the same topology.

• Scalability: In DIW and BW, performance increases by about 50% when the number of servers is increased fourfold.

LIMITATION

4-4 1-4 is highly scalable for data-intensive and balanced-workload data centers, but only moderately so for heavy-computing data centers.

In computation-intensive workloads, the performance of the 4-4 1-4 DCN with location-based routing either remains the same or increases marginally.

FUTURE WORK

Simulation test-bed is ready

Trace-driven workload

Dynamic VM migration

Optimum task Scheduling for 4-4 1-4

Energy consumption

REFERENCES

1. A. Kumar, S. V. Rao, and D. Goswami, “4-4, 1-4: Architecture for Data Center Network Based on IP Address Hierarchy for Efficient Routing," in Parallel and Distributed Computing (ISPDC), 2012 11th International Symposium on, 2012, pp. 235-242.

2. D. Chisnall, The Definitive Guide to the Xen Hypervisor, 1st ed. Upper Saddle River, NJ, USA: Prentice Hall Press, 2007.

3. D. Kliazovich, P. Bouvry, and S. Khan, “Greencloud: a packet-level simulator of energy-aware cloud computing data centers," The Journal of Supercomputing, pp. 1-21, 2010, doi: 10.1007/s11227-010-0504-1. Available: http://dx.doi.org/10.1007/s11227-010-0504-1

4. “The Network Simulator NS-2," http://www.isi.edu/nsnam/ns/.

DISCUSSION

THANK YOU

There are mysteries in the universe,

We were never meant to solve,

But who we are, and why we are here,

Are not one of them.

Those answers we carry inside.
