Transcript of “Ceph in the Cloud” (2012-11-02)
Ceph in the Cloud
Using RBD in the cloud
Again, use #cephday :-)
Implemented in CloudStack, OpenStack and Proxmox
Wido den Hollander (42on)
Ceph quick overview
RBD (RADOS Block Device)
● 4 MB stripes over RADOS objects
● Sparse allocation (TRIM/discard support)
– Qemu with the SCSI driver supports TRIM
– VirtIO lacks the necessary functions
● Snapshotting
● Layering
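The 4 MB striping can be sketched as follows. This is an illustration of the offset-to-object mapping, not librbd code; the `rbd_data.<id>` object-name pattern is an assumption modelled on one of RBD's naming schemes, and real object names vary by image format:

```python
# Sketch: each byte offset in an RBD image falls inside exactly one
# 4 MB RADOS object. Because allocation is sparse, an object only
# exists once something has actually been written to its range.
OBJECT_SIZE = 4 * 1024 * 1024  # default RBD object size

def object_for_offset(prefix, offset):
    """Return (object name, offset inside that object) for a byte offset."""
    index = offset // OBJECT_SIZE
    return "%s.%016x" % (prefix, index), offset % OBJECT_SIZE
```

A 5 MB offset therefore lands 1 MB into the second object, which is why many clients can hit many OSDs in parallel.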
RBD
RBD with multiple VMs
TRIM/discard
● Filesystems like ext4 or btrfs tell the block device which blocks can be discarded
● Only works with Qemu and SCSI drives
● Qemu will inform librbd about which blocks can be discarded
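Inside the guest, discard still has to be enabled on the filesystem before any of this happens; a sketch (device and mount point are examples):

```shell
# Mount ext4 with online discard, so deletes are passed down as TRIM
mount -o discard /dev/sda1 /mnt

# Alternatively, trim in one batch (e.g. from cron) instead of on every delete
fstrim /mnt
```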
Using TRIM/discard with Qemu
● Add the 'discard_granularity=N' option, where N is usually 512 (the sector size)
– This sets the QUEUE_FLAG_DISCARD flag inside the guest, indicating that the device supports discard
● Only supported by SCSI with Qemu
– The feature set of SCSI is bigger than that of VirtIO
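A hedged sketch of what this looks like on the Qemu command line; the pool and image names are made up, and the exact option spelling differs between Qemu versions, so check your version's documentation:

```shell
# SCSI drive backed by RBD, advertising discard to the guest
qemu-system-x86_64 \
    -drive file=rbd:rbd/vm-disk,if=scsi,discard_granularity=512
```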
Snapshotting
● Normal snapshotting, as we are used to
– Copy-on-Write (CoW) snapshots
● Snapshots can be created using either libvirt or the rbd tool
● Only integrated into OpenStack, not in CloudStack or Proxmox
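With the rbd tool this looks like the following (pool, image, and snapshot names are examples):

```shell
# Create a snapshot, list snapshots, roll back, and remove it again
rbd snap create rbd/vm-disk@before-upgrade
rbd snap ls rbd/vm-disk
rbd snap rollback rbd/vm-disk@before-upgrade
rbd snap rm rbd/vm-disk@before-upgrade
```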
Layering
● One parent / golden image
● Each child records its own changes; reads for unchanged data come from the parent image
● Writes go into separate objects
● Easily deploy hundreds of identical virtual machines in a short timeframe without using a lot of space
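With the rbd tool, deploying a child from a golden image looks roughly like this. The names and size are examples; layering requires format 2 images, and in the rbd releases of this era the parent snapshot had to be protected before cloning:

```shell
# Prepare the golden image, snapshot it, protect the snapshot,
# then clone as many children as you need
rbd create --format 2 --size 10240 rbd/golden-image
rbd snap create rbd/golden-image@base
rbd snap protect rbd/golden-image@base
rbd clone rbd/golden-image@base rbd/vm-0001
```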
Layering – Writes
Layering – Reads
RBD in the Cloud?
● High parallel performance due to object striping
● Discard for removing discarded data by virtual machines
● Snapshotting for rollback points in case of problems inside a virtual machine
● Layering for easy and quick deployment
– It also saves space!
RBD integrations
● CloudStack
● OpenStack
● Proxmox
RBD in Proxmox
● Does not use libvirt
● RBD integrated in v2.2, but not in the GUI yet
● Snapshotting
● No layering at this point
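Since Proxmox does not go through libvirt, the pool is configured directly in /etc/pve/storage.cfg; a sketch, in which the storage ID, monitor address, and pool are all examples (check the Proxmox storage documentation for the exact keys your version supports):

```
rbd: ceph-rbd
        monhost 192.168.0.1
        pool rbd
        username admin
        content images
```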
Proxmox demo
● Show a small demo of Proxmox
● Adding the pool
● Creating a VM with an RBD disk
RBD in CloudStack
● Has been integrated in version 4.0
● Relies on libvirt
● Basic RBD implementation
– No snapshotting
– No layering
– No TRIM/discard
● Still need NFS for SystemVMs
CloudStack demo
● Show a CloudStack demo
● Show the RBD pool in CloudStack
● Create an instance with RBD storage
RBD in OpenStack
● Can use RBD for disk images, both boot and data
● Glance has RBD support for storing images
● A lot of new RBD work went into Cinder
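A hedged example of the configuration pieces involved; the option names follow the Folsom-era documentation and the pool names are examples, so verify them against the docs for your release:

```
# glance-api.conf: store images in RBD
default_store = rbd
rbd_store_pool = images

# cinder.conf: create volumes in RBD
volume_driver = cinder.volume.driver.RBDDriver
rbd_pool = volumes
```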
Using RBD in the Cloud
● Virtual machines have a random I/O pattern
● Roughly 70% write, 30% read disk I/O
– Reads are cached by the OSD and by the virtual machine itself, so the disks mostly handle writes
– 2x replication means each write is stored twice, so divide your available write I/O by 2
● Use journaling! (Gregory will tell you more later)
● Enable the RBD cache (rbd_cache=1)
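The cache can be enabled on the client side in ceph.conf; a sketch, where the cache size is just an example value:

```
[client]
    rbd cache = true
    rbd cache size = 33554432   ; 32 MB
```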
Is it production ready?
● We think it is!
● There are large-scale deployments out there
– Big OpenStack clusters backed by Ceph
– CloudStack deployments known to be running with Ceph
● It is not a simple “1” or “0”; you will have to evaluate it for yourself
Commodity hardware #1
Commodity hardware #2
Thank you!
I hope to see a lot of RBD-powered clouds in the future!
Questions?