Virtualization Acceleration

Transcript of Virtualization Acceleration
Motti Beck, Director, Enterprise Market Development
VMworld 2014 | San Francisco


Virtualization Acceleration is Next

[Chart: functionality (Management, Efficiency, Acceleration) across three virtualization generations: 1st Generation Server Virtualization, 2nd Generation Network Virtualization, 3rd Generation Storage Virtualization & Mobility; covering 1GbE/10GbE/40GbE/100GbE, VMDirectPath, SR-IOV, offloads (network protocols, VXLAN), on-dashboard management for compute and storage, Software Defined Network, and Software Defined Storage]


IO Acceleration

[Diagram: TCP/IP data path vs. Remote Direct Memory Access (RDMA) data path]
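What the RDMA path changes is that application buffers are registered with the adapter up front, so the NIC can move data directly between those buffers without per-message kernel copies. Below is a minimal, illustrative sketch using the libibverbs API (the buffer size is arbitrary; it assumes an RDMA-capable adapter with rdma-core installed, and a real transfer would also need a connected queue pair, which is omitted here):

```c
/* Minimal libibverbs sketch: register a buffer for RDMA access.
 * Illustrative only; connection setup (e.g., via librdmacm) is omitted.
 * Build: gcc rdma_reg.c -libverbs
 */
#include <stdio.h>
#include <stdlib.h>
#include <infiniband/verbs.h>

int main(void)
{
    int num;
    struct ibv_device **devs = ibv_get_device_list(&num);
    if (!devs || num == 0) {
        fprintf(stderr, "no RDMA devices found\n");
        return 1;
    }

    struct ibv_context *ctx = ibv_open_device(devs[0]);
    struct ibv_pd *pd = ibv_alloc_pd(ctx);

    /* Buffer the NIC will read/write directly, without kernel copies. */
    size_t len = 4096;
    void *buf = malloc(len);

    struct ibv_mr *mr = ibv_reg_mr(pd, buf, len,
                                   IBV_ACCESS_LOCAL_WRITE |
                                   IBV_ACCESS_REMOTE_READ |
                                   IBV_ACCESS_REMOTE_WRITE);
    if (!mr) {
        perror("ibv_reg_mr");
        return 1;
    }

    /* Work requests carry these keys instead of copying the data itself. */
    printf("registered %zu bytes on %s: lkey=0x%x rkey=0x%x\n",
           len, ibv_get_device_name(devs[0]), mr->lkey, mr->rkey);

    ibv_dereg_mr(mr);
    free(buf);
    ibv_dealloc_pd(pd);
    ibv_close_device(ctx);
    ibv_free_device_list(devs);
    return 0;
}
```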


RDMA over Converged Ethernet - RoCE

• RDMA transport over Ethernet
  - Efficient, lightweight transport layered directly over Ethernet
  - Takes advantage of PFC (Priority Flow Control) in DCB Ethernet
  - IBTA standard
  - Supported in OFED 1.5.1, RHEL 6.x, and Windows Server 2012 R2

• Lowest latency in the Ethernet industry
  - 1.3µs end-to-end RDMA latency
  - Faster application completion, better server utilization, higher scalability

• Tremendous support momentum across the ecosystem
  - Cloud service providers, database vendors, financial ISVs, server & storage OEMs
  - The entire Ethernet management ecosystem is available
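Because RoCE exposes the same verbs interface as InfiniBand, a quick way to confirm that an adapter port can carry RoCE traffic is to check whether its link layer reports Ethernet. A small libibverbs sketch (port number 1 and GID index 0 are assumptions; adjust for your adapter):

```c
/* List RDMA devices and flag ports whose link layer is Ethernet (RoCE).
 * Build: gcc roce_check.c -libverbs
 */
#include <stdio.h>
#include <infiniband/verbs.h>

int main(void)
{
    int num;
    struct ibv_device **devs = ibv_get_device_list(&num);
    if (!devs)
        return 1;

    for (int i = 0; i < num; i++) {
        struct ibv_context *ctx = ibv_open_device(devs[i]);
        if (!ctx)
            continue;

        struct ibv_port_attr port;
        if (ibv_query_port(ctx, 1, &port) == 0) {
            const char *ll =
                port.link_layer == IBV_LINK_LAYER_ETHERNET ? "Ethernet (RoCE)" :
                port.link_layer == IBV_LINK_LAYER_INFINIBAND ? "InfiniBand" :
                "unspecified";
            printf("%s port 1: link layer %s, state %d\n",
                   ibv_get_device_name(devs[i]), ll, port.state);

            /* On a RoCE port, GID 0 is derived from the port's MAC/IP address. */
            union ibv_gid gid;
            if (ibv_query_gid(ctx, 1, 0, &gid) == 0)
                printf("  GID[0] starts %02x%02x, ends %02x%02x\n",
                       gid.raw[0], gid.raw[1], gid.raw[14], gid.raw[15]);
        }
        ibv_close_device(ctx);
    }

    ibv_free_device_list(devs);
    return 0;
}
```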


vMotion over RoCE Accelerates vMotion

[Chart: total vMotion time in seconds: TCP/IP 70.63 vs. RDMA 45.31, 36% faster]
[Chart: destination CPU utilization (%) over the course of the migration, roughly 90% less with RDMA]

• Destination CPU utilization 92% lower
• Source CPU utilization 84% lower

*Source: VMware CTO office, VMworld 2012: http://cto.vmware.com/wp-content/uploads/2012/09/RDMAonvSphere.pdf
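For reference, the 36% figure follows directly from the two bar values:

\[
\frac{70.63 - 45.31}{70.63} \approx 0.358 \approx 36\%
\]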


10X Boost of Live Migration over Hyper-V with SMB Direct

Source: TechEd 2013 opening keynote by Satya Nadella, with Jose Barreto

[Chart: live migration times in seconds (lower is better) for TCP/IP, Compression, and SMB with RDMA (no compression); SMB Direct is roughly 10X faster]
[Chart: normalized send bandwidth (higher is better) and % CPU #1 / % CPU #2 (lower is better) for TCP/IP, Compression, and SMB/RDMA; 150Gb/sec across 3 links]


Virtualized Storage Acceleration Running iSER* over ESXi

• 10X bandwidth performance advantage vs. TCP/IP
• 2.5X IOPS performance with the iSER initiator

Test setup: ESXi 5.0, 2 VMs, 2 LUNs per VM

RDMA superior across the board:
• Throughput & IOPS
• Efficiency & CPU utilization
• Scalability

* iSER: iSCSI over RDMA


vSphere Storage Access Acceleration over RoCE running iSER

Source: https://www.youtube.com/watch?v=uw7UHWWAtig

Dell Fluid Cache for SAN

Boost Performance with Server-based Caching over RoCE


Maximize VDI Efficiency over RDMA

• RDMA eliminates storage bottlenecks in VDI deployments
• Mellanox ConnectX®-3 with RoCE accelerates access to the cache over RDMA
• 150 virtual desktops over RoCE vs. 60 virtual desktops over TCP/IP

[Diagram: two active-active storage nodes, each with a ConnectX adapter running iSCSI over RDMA and a Nytro MegaRAID flash cache]

http://www.mellanox.com/related-docs/whitepapers/SB_Virtual_Desktop_Infrastructure_Storage_Acceleration_Final.pdf

[Chart: number of virtual desktop VMs supported with Intel 10GbE iSCSI/TCP, ConnectX-3 10GbE iSCSI/RDMA (iSER), and ConnectX-3 40GbE iSCSI/RDMA (iSER); iSER enables 2X more virtual desktops]


Proven Deployment Over Azure

“To make storage cheaper we use lots more network! How do we make Azure Storage scale? RoCE (RDMA over Ethernet) enabled at 40GbE for Windows Azure Storage, achieving massive COGS savings”

RDMA at 40GbE Enables Massive Cloud Savings for Microsoft Azure

Microsoft keynote: Albert Greenberg, "SDN in Azure Infrastructure," Open Networking Summit 2014


Mellanox Accelerates OpenStack Storage

RDMA Accelerates iSCSI Storage

[Diagram: compute servers run the KVM hypervisor with guest VMs and Open-iSCSI with iSER on the adapter; a switching fabric connects them to storage servers running an iSCSI/iSER target (tgt) with local disks and an RDMA cache; volumes are provisioned through OpenStack (Cinder)]

[Chart: OpenStack storage performance*: iSCSI over TCP 1.3 GBytes/s vs. iSER 5.5 GBytes/s, more than 4X faster]

* iSER patches are available on OpenStack branch:  https://github.com/mellanox/openstack

Built-in OpenStack Components/Management & Cinder/iSER to Accelerate Storage Access


Leadership in Strategic Markets with an End-to-End Portfolio

ICs | Adapter Cards | Switches/Gateways | Cables/Modules | Host/Fabric Software

End-to-End - Cloud, Enterprise, and Storage Optimized - InfiniBand and Ethernet Portfolio

Metro / WAN

DB/Enterprise | Cloud | Web 2.0 | Storage | Big Data

Thank You