OpenNebula TechDay Waterloo 2015 - Hyperconvergence and OpenNebula

Transcript of OpenNebula TechDay Waterloo 2015 - Hyperconvergence and OpenNebula

Page 1: Hyperconvergence and OpenNebula

Varadarajan Narayanan, Wayz Infratek, Canada

OpenNebula TechDay, Waterloo, 10 July 2015

Page 2: Hyperconvergence - Definition

“Hyperconvergence is a type of infrastructure system with a software-defined architecture that integrates compute, storage, networking, virtualization and other resources from scratch in a commodity hardware box supported by a single vendor.”

Page 3: The players

Page 4: HC - What does it offer?

● Use of commodity x86 hardware
● Scalability
● Enhanced performance
● Centralised management
● Reliability
● Software focused
● VM focused
● Shared resources
● Data protection

Page 5: There is nothing special about storage servers

A storage server is a regular server with CPU, RAM, network interfaces, disk controllers and drives. As far as drives are concerned, there are only three manufacturers in the world. There is really nothing special about the hardware; it is all about software.

Page 6: Mission impossible?

● Scale out - add compute + storage nodes
● Asymmetric scaling - add only compute nodes
● Asymmetric scaling - add only storage nodes
● Fault tolerance and high availability
● Add / remove drives on the fly
● Take out drives and insert them in any other server
● Drive agnostic - any mix of SSD and spinning drives
● Add servers on the fly; servers need not be identical
● Performance increases as capacity increases
● Handle the IO blender effect - any application on any server
● No special skills required to manage

Page 7: Storage backend technologies

Distributed block storage: Ceph, Gluster, LVM, ZFS, Sheepdog, Amazon EBS

Distributed file systems: MooseFS, LizardFS, XtremeFS, HDFS

StorPool, OpenVStorage, and other systems.

Page 8: StorPool overview

StorPool is storage software that is installed on every server and controls the drives (both hard disks and SSDs) in that server. The servers communicate with each other to aggregate the capacity and performance of all the drives. StorPool provides standard block devices, and users can create one or more volumes through its volume manager.

Data is replicated and striped across all drives in all servers to provide redundancy and performance. The replication level can be chosen by the user. There are no central management or metadata servers; the cluster uses a shared-nothing architecture. Performance scales with every added server or drive.

The system is managed through a CLI and a JSON API.
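
As a rough illustration of that management workflow, here is a minimal Python sketch that drives the storpool CLI to create and then list volumes. The volume name, size and the exact argument layout are assumptions for illustration only; the StorPool documentation is the authoritative reference for the real command syntax.

    import subprocess

    def storpool(*args):
        # Run a `storpool` CLI command on a cluster node and return its
        # output. Assumes the CLI is installed and the node is part of
        # a running StorPool cluster.
        result = subprocess.run(
            ["storpool", *args],
            capture_output=True, text=True, check=True,
        )
        return result.stdout

    # Hypothetical example: create a 200 GB volume kept in 3 copies.
    # The argument layout is an assumption, not verbatim StorPool syntax.
    print(storpool("volume", "testvol", "create",
                   "size", "200G", "replication", "3"))

    # List the volumes the cluster knows about.
    print(storpool("volume", "list"))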

Page 9: StorPool - Features

● Fully integrated with OpenNebula
● Runs on commodity hardware
● Clean code built from the ground up, not a fork of something else
● End-to-end data integrity with a 64-bit checksum for each sector
● No metadata servers to slow down operations
● Own network protocol designed for efficiency and performance
● Suitable for hyperconvergence, as it uses ~10% of the resources of a typical server
● Shared-nothing architecture for maximum scalability and performance
● SSD support
● In-service rolling upgrades
● Snapshots, clones, QoS, synchronous replication

Page 10: StorPool

Page 11: Performance

[Performance summary chart]

Page 12: Performance

Page 13: Comparison with Ceph

Each test run consists of:
1. Configuring and starting a StorPool or Ceph cluster (3 nodes, 12 HDDs, 3 SSDs)
2. Creating one 200 GB volume
3. Filling the volume with incompressible data
4. Performing all test cases by running FIO on a client
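
A single test case of this kind can be reproduced from Python by shelling out to FIO, as in the sketch below. The device path, block size, queue depth and runtime are illustrative assumptions rather than the exact parameters used in these runs.

    import subprocess

    # Minimal sketch of one random-read FIO case against the 200 GB
    # test volume. Assumes fio is installed and the volume is exposed
    # as a block device (the path below is an assumption).
    subprocess.run([
        "fio",
        "--name=randread-4k",
        "--filename=/dev/storpool/testvol",  # assumed device path
        "--rw=randread",        # random reads
        "--bs=4k",              # 4 KiB blocks
        "--iodepth=32",         # outstanding IOs per job
        "--direct=1",           # bypass the page cache
        "--ioengine=libaio",    # Linux asynchronous IO engine
        "--runtime=60",
        "--time_based",
        "--group_reporting",
    ], check=True)

Because FIO fills its write buffers with random data by default, a plain write pass with the same tool also covers the incompressible fill in step 3.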

Page 14: Comparison with Ceph

[CPU load comparison charts]

Page 15: OpenNebula Integration

Catalog: http://opennebula.org/addons/catalog/

Docs: https://github.com/OpenNebula/addon-storpool/blob/master/README.md

GitHub: https://github.com/OpenNebula/addon-storpool

Page 16: OpenNebula - StorPool

Datastore – all common image operations, including defining the datastore, creating images, importing images from the Marketplace, and cloning images.

Virtual Machines – instantiate a VM with raw disk images on an attached StorPool block device; stop, suspend, start, migrate and migrate-live VMs; take hot snapshots of VM disks.

The add-on is implemented as StorPool drivers for datastore_mad and tm_mad, plus a patch to Sunstone's datastores-tab.js for the UI.
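
To make the datastore wiring concrete, the sketch below registers a StorPool-backed image datastore through OpenNebula's stock onedatastore CLI from Python. DS_MAD and TM_MAD name the drivers the add-on provides; the datastore name and the minimal template are illustrative assumptions, so the add-on README should be consulted for the full recommended template.

    import subprocess
    import tempfile

    # Minimal image-datastore template selecting the StorPool drivers.
    # DS_MAD/TM_MAD = "storpool" follow the add-on; NAME is an
    # illustrative assumption.
    template = """
    NAME   = "storpool_images"
    TYPE   = "IMAGE_DS"
    DS_MAD = "storpool"
    TM_MAD = "storpool"
    """

    with tempfile.NamedTemporaryFile("w", suffix=".tmpl", delete=False) as f:
        f.write(template)
        path = f.name

    # onedatastore is the standard OpenNebula CLI for datastore management.
    subprocess.run(["onedatastore", "create", path], check=True)

Once such a datastore exists, the usual oneimage and onevm commands operate on StorPool-backed images and disks through these drivers.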

Page 17: Built-in HA mode

Page 18: Live demo

We will try to connect to a working OpenNebula installation with StorPool integrated.

Page 19: Summary

● Wish list of features for a hyperconverged solution based on OpenNebula
● Search for a scalable software-defined storage solution for OpenNebula
● Test results with StorPool SDS
● Built-in high availability
● Live demo of the OpenNebula integration

Page 20: Thank you!
