Ceph Day Amsterdam 2015 - Ceph backing the first Government Cloud in the Netherlands
• In 2011, the Government of the Netherlands announced a plan to consolidate its 64 datacenters into 4 or 5 datacenters
• The plan includes the following stages:
• Housing
• Hosting
• Application delivery
• Overall goal: reduce costs
• In 2013 the Ministry of Education, Culture and Science opened the first ODC (Overheidsdatacenter, government datacenter) in Groningen
Introduction
Ceph day Amsterdam 2015
• January 2014: insourced the hosting of the Line of Business applications of the Ministry of Education, Culture and Science
• Business requirements:
• 98% availability
• Single site hosting
• Lower the hosting costs
• Improve the quality of service
• Scope:
• 200 VMs (all pets)
• 20 TB
New assignment
• Migrate the non-cloud workload to a cloud infrastructure
• Why?
• Cloud is cheaper
• Cloud is scalable
• Cloud is the future
• Next steps (after migrating OC&W):
• Migrate workloads of other Government departments
• Introduce new services
The opportunity to start with stage 2 (hosting) and build a solid foundation for stage 3 (application delivery)
Vision
• OpenStack integration
• Commodity hardware
• Block storage and object storage
• Always on
• No SPOF
• No downtime for expansion, upgrades etc.
• Maintenance during office hours
• Low cost per GB (few cents per month, similar to Amazon)
• Swift, GlusterFS or Ceph?
• Ceph: good out-of-the-box performance, meets our requirements
Storage requirements
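The OpenStack integration for block storage is typically done through Cinder's RBD driver. A minimal sketch of the backend section in cinder.conf; the pool name, user and secret UUID below are illustrative placeholders, not taken from the talk:

```ini
[ceph]
# Cinder RBD backend pointing at the Ceph cluster.
# Pool, user and UUID values are placeholders for illustration.
volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_pool = volumes
rbd_user = cinder
rbd_secret_uuid = 00000000-0000-0000-0000-000000000000
```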
• Set up Ceph cluster
• 3 monitors
• 12 nodes with 8 drives
• SATADOM
• 3 SSDs, Samsung 840pro 512GB
• 5 SATA drives, HGST 4TB
• E3-1230v3
• 32GB memory
• RAID controller, 8 ports, 1GB write-back cache
• Dual-port 10Gbit SFP+
• Smooth deployment, good performance, but…
• Nodes are 2U high, yet hold only 8 disks…
• SATA performance is not 100% predictable
• A lot of storage space; let's use it… performance impact…
Implementation Q4 2014
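A quick sanity check of the initial cluster's SATA capacity: a minimal sketch using the node and drive counts above, assuming Ceph's default 3× replication (the replication factor is not stated in the talk):

```python
# Capacity sketch for the Q4 2014 cluster: 12 nodes, 5 x 4TB SATA each.
# The 3x replication factor is an assumption (Ceph's default pool size).
NODES = 12
SATA_PER_NODE = 5
SATA_TB = 4
REPLICATION = 3  # assumed, not stated in the talk

raw_tb = NODES * SATA_PER_NODE * SATA_TB
usable_tb = raw_tb // REPLICATION

print(f"raw: {raw_tb} TB, usable after replication: {usable_tb} TB")
# → raw: 240 TB, usable after replication: 80 TB
```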
• Set up production Ceph cluster
• 3 monitors
• 24 nodes with 8 drives
• SATADOM
• 8 SSDs, Samsung 850pro 1TB
• E3-1230v3
• 32GB memory
• Dual-port 10Gbit SFP+
• Migration from old nodes to new nodes during office hours
• No impact for the user
• One issue (after a couple of days): time drift…
• After adjusting the NTP settings, Ceph started to heal
• For a couple of hours the pets were not available (no data was lost)
Redesign Q1 2015
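Time drift like this surfaces as monitor clock-skew warnings. Besides fixing NTP, the monitors' skew tolerance can be tuned in ceph.conf; a sketch, where the 0.1 s value is illustrative (Ceph's default is 0.05 s):

```ini
[mon]
# Maximum allowed clock drift between monitors before a clock-skew
# health warning is raised (Ceph default: 0.05 seconds).
mon clock drift allowed = 0.1
```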
• Set up preflight Ceph cluster
• 3 monitors
• 12 nodes with 8 drives
• SATADOM
• 3 SSDs, Samsung 840pro 512GB
• 5 SATA drives, HGST 4TB
• E3-1230v3
• 32GB memory
• RAID controller, 8 ports, 1GB write-back cache
• Dual-port 10Gbit SFP+
• Preflight is the acceptance environment for ODCN (ODC Noord)
Preflight
• Ceph meets the requirements:
• OpenStack integration
• Block storage
• Always on
• No SPOF
• No downtime for expansion, upgrades etc.
• Maintenance during office hours
• Ceph is self healing
• Ceph is very robust; it takes a lot of effort to bring Ceph down
• Day-to-day management is limited to a check in the morning: is health OK?
Experience until now
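The morning check can be scripted. A minimal sketch that only inspects the leading status word of `ceph health` output (`HEALTH_OK`, `HEALTH_WARN` or `HEALTH_ERR`); the `morning_check` helper is our name for illustration, not part of Ceph:

```python
def morning_check(health_output: str) -> bool:
    """Return True when the cluster reports HEALTH_OK.

    `health_output` is the text printed by `ceph health`, e.g.
    "HEALTH_OK" or "HEALTH_WARN clock skew detected on mon.b".
    """
    words = health_output.split()
    return bool(words) and words[0] == "HEALTH_OK"

# Example: feed it `ceph health` output (hard-coded here):
print(morning_check("HEALTH_OK"))                                # True
print(morning_check("HEALTH_WARN clock skew detected on mon.b")) # False
```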
• New Ceph cluster (backup storage, S3 storage)
• 5 monitors
• 189 nodes with 12 drives
• SATADOM
• 2 SSDs, Samsung 850pro 128GB
• 10 SATA drives, mix of brands, all 4TB
• Atom C2750
• 64GB memory
• Dual-port 10Gbit SFP+
• Expansion of the production cluster for VDI purposes
Next steps Q2 2015
Overview situation mid 2015
• Backup/S3 storage: 5 monitors; 189 nodes, 1,890 OSDs; SATA only, > 1,800TB
• Block storage: 3 monitors; 24 nodes, 192 OSDs; SSD only, 48TB
• Preflight: 3 monitors; 12 nodes, 96 OSDs; SSD and SATA, > 200TB
• Backup/S3 storage: 3 monitors; 12 nodes, 120 OSDs; SATA only, 120TB
• Spread across two sites: Liverpoolweg and Zernikelaan
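The OSD counts in this overview follow directly from nodes × data drives per node. A small check using the figures from the slides (the 10-drives-per-node value for the smaller backup cluster is inferred from its 12 nodes and 120 OSDs):

```python
# OSDs per cluster = nodes * data drives per node (figures from the slides).
backup_s3_large = 189 * 10  # 10 SATA drives per node -> 1890 OSDs
block_storage = 24 * 8      # 8 SSDs per node         -> 192 OSDs
preflight = 12 * 8          # 3 SSDs + 5 SATA drives  -> 96 OSDs
backup_s3_small = 12 * 10   # inferred 10 drives/node -> 120 OSDs

print(backup_s3_large, block_storage, preflight, backup_s3_small)
# → 1890 192 96 120
```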