Whitepaper

HYPER-CONVERGED INFRASTRUCTURE 2.0
OPTIMIZING YOUR DATACENTER FOR THE NEW ERA OF HYBRID COMPUTING

HCI | WHERE IT HAS BEEN

In enterprise datacenters, hyper-converged infrastructure (HCI) is today quite commonplace. Since HCI entered the mainstream of datacenter thought a decade ago, many enterprises have chosen to implement it as a de facto standard approach to common compute tasks.

Today, this approach – colloquially known as hyper-converged 1.0 – is well known for its ability to reduce system administration complexity, as well as to provide a simpler method of applying processor, memory, network, and storage resources to workloads, relative to the 'three-tier' SAN structure that was common before it. The enterprise world is, without question, much better off for having adopted 1.0.

Moreover, today, enterprises need an approach to implementing what is commonly called the 'hybrid cloud' model of hyper-convergence, in which workloads can move between an on-premises datacenter and a public cloud provider. Hyper-converged 1.0 does provide a modicum of support for this model, with the ability to run virtual machines in public cloud instances, provided the same hypervisor used inside the on-premises datacenter is managing those instances. The ability of 1.0 to 'burst' to the cloud is becoming better understood, and thus the possibility of committing workloads to a full-on hybrid approach is becoming a reality.


HCI | WHERE IT IS GOING

Still, there is a crucial segment of enterprise computing that is not addressed by HCI 1.0 – specifically, the ability to run data-intensive workloads that also require significant compute resources. These workloads place high stress and critical demands on storage infrastructure in particular, as well as on the relationship between memory resources and storage. Many datacenters today shy away from running these data-intensive workloads on 1.0 infrastructure, for fear that such workloads will degrade themselves, as well as degrade workloads of lesser importance to the enterprise that are today homed on 1.0. Based on years of experience, the industry has come to realize that this fear is, unfortunately, well-founded – and thus the need to critically examine 1.0 infrastructure approaches is before us.

The latter, in particular, is paramount to enabling mission-critical, data-intensive workloads to run efficiently and cost-effectively. Most enterprise architects understand the criticality of storage, but have failed to implement storage efficiently due to the shortcomings of the 1.0 approach.

HCI 2.0 addresses this critical need by applying intelligent design principles to the fundamental 'building blocks' of running workloads – most importantly, the understanding of the relationship between processors, memory, and storage.

IT IS TIME FOR HCI 2.0.


HCI 2.0 | HARDWARE ARCHITECTURE

Specifically, 2.0 incorporates significantly more 'raw' local storage than does 1.0, on a per-node basis. Today's data-intensive workloads demand nothing less – because local storage, and access to it, is the most critical determining factor for overall workload efficiency and performance. Put another way, fetching data across a network – while processors wait for a response – is a fundamental flaw in the 1.0 approach. This is the self-limiting factor that prevents 1.0 from running high-intensity workloads efficiently.

In addition, 2.0 incorporates the concept of strictly using what the industry calls 'high-bin' processors – those deploying a maximal number of cores per processor, or 'socket', along with relatively high clock rates. In contrast, the 1.0 approach is commonly deployed using 'mid-bin' or even 'low-bin' processors, which inhibits the system's ability to handle workloads and virtual machines. Still, one may ask: why is the use of high-bin sockets necessary? What is it about 2.0 that makes it so?

The answer is straightforward. Hyper-converged 2.0 incorporates strictly SSDs, in particular those using NVM Express (NVMe) interfaces – and significantly more of them per node than 1.0, up to 72 per node (for example). Those who understand the relationship between cores and NVMe SSDs will quickly realize that running copious amounts of such devices requires, in turn, copious amounts of processing. A real-world 'rule of thumb' regarding efficient use of NVMe SSDs is this: such an SSD, servicing significant I/O activity, can easily consume nearly 100% of a core's cycles.
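The arithmetic behind this rule of thumb can be sketched briefly. The numbers below – the fraction of drives busy at once and the socket core counts – are hypothetical illustrations, not vendor specifications; only the 'one busy NVMe SSD ≈ one core' rule of thumb comes from the discussion above.

```python
# Illustrative sketch of the core-to-SSD rule of thumb described above.
# All specific figures here are hypothetical examples.

def cores_for_nvme(ssd_count: int, busy_fraction: float,
                   cores_per_ssd: float = 1.0) -> float:
    """Estimate cores consumed servicing NVMe I/O, applying the rule of
    thumb that one NVMe SSD under heavy I/O consumes roughly one core."""
    return ssd_count * busy_fraction * cores_per_ssd

# A 2.0-style node with 72 NVMe SSDs, half of them under heavy I/O at once:
io_cores = cores_for_nvme(ssd_count=72, busy_fraction=0.5)  # 36.0 cores

# Two hypothetical high-bin 32-core sockets leave headroom for compute;
# two hypothetical mid-bin 12-core sockets are overrun by storage alone.
high_bin_total = 2 * 32   # 64 cores -> 28 cores left for workloads
mid_bin_total = 2 * 12    # 24 cores -> a 12-core deficit

print(io_cores, high_bin_total - io_cores, mid_bin_total - io_cores)
```

Under these assumed figures, storage servicing alone would consume a mid-bin node entirely, which is precisely the imbalance the high-bin requirement is meant to avoid.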


HCI 2.0 | HARDWARE ARCHITECTURE CONTINUED

This fundamental fact concerning the relationship between sockets and storage devices reveals a major reason why 1.0 infrastructure cannot handle data-intensive workloads. Such infrastructure is designed to carry minimal NVMe SSDs, if any – and those present are typically used only as caching devices, not for direct data storage. Instead, 1.0 systems commonly use SAS and/or SATA devices, which are significantly slower, and thus not nearly so core-intense. Given this, it is easy to see why 1.0 infrastructure can (and does) process mid-level workloads at reasonable efficiencies. However, upon executing significant, data-intensive workloads, the flaw in the 1.0 approach is revealed: legacy storage devices combined with mid-bin sockets yield mediocre efficiency and workload performance.

In addition, the use of (for example) 72 SSDs per node, delivering (again for example) half a petabyte of raw capacity, gives the sockets a larger pool of devices across which to balance the local workload of reads and writes. This enables highly data-intensive workloads to process efficiently and scale easily. The use of high-bin sockets, with high core counts, also enables compute tasks to process in parallel with extremely high I/O loads – something that is not possible with mid-bin sockets.
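The half-petabyte figure can be checked with simple arithmetic. The per-drive capacity below is an assumption chosen for illustration (a common enterprise NVMe size), not a figure stated in this paper.

```python
# Back-of-envelope check of the 'half a petabyte per node' figure above.
# The per-drive capacity is a hypothetical assumption.

SSD_TB = 7.68          # assumed raw capacity per NVMe SSD, in TB
SSDS_PER_NODE = 72     # the example drive count from the text

raw_tb = SSDS_PER_NODE * SSD_TB   # 552.96 TB of raw capacity
raw_pb = raw_tb / 1000            # roughly half a petabyte

print(f"{raw_tb:.2f} TB per node, about {raw_pb:.2f} PB")
```

With drives of that assumed size, 72 per node does indeed land at roughly half a petabyte of raw capacity.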

The only way to improve the situation under the 1.0 approach is to buy more nodes, upgrade the network, and spread the workload more thinly across the node pool.

This is why the 2.0 approach is optimal – it maintains proper architectural balance between processors, memory, and storage.


HCI 2.0 | ECONOMIC BENEFITS

Finally, 2.0 has a very beneficial economic effect. At first, the use of high-bin sockets and significant SSD counts per node may seem counter-productive to economies of scale – after all, such infrastructure is expensive, right? As it happens, the way hypervisors are typically licensed provides insight into why the 2.0 approach is actually more cost-effective, in both acquisition and operation, than 1.0.

Since licenses are typically sold on a per-socket basis, it behooves one to use the most powerful socket available that offers an efficient cost-per-core ratio once the cost of the license(s) is considered. Some licenses, as well, cover only the basic hypervisor; virtual storage capability requires a separate license, which again is sold on a per-socket basis.

There is also a secondary economic effect of using high-bin sockets with significant SSD counts per node – the 2.0 approach. This effect is the reduction in the number of nodes necessary to provide a given amount of workload processing power. Fewer nodes means:
• fewer NICs, which in turn means
• fewer switch ports, which potentially means
• fewer switches

The reduction in overall networking costs, both acquisition and operation (maintenance, support, etc.), is significant. The 2.0 approach can, for example, provide 'horsepower' in a 3-node cluster equal to that of a 1.0 cluster of 10 or more nodes. The savings in software licensing, support, maintenance, and networking can often be significant, both at acquisition and in ongoing expense.

Thus, reducing the overall number of sockets in a given datacenter, while using more powerful sockets, can often reduce overall cost.
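The per-socket licensing argument can be made concrete with a small sketch. The license prices below are hypothetical placeholders, not quotes from any vendor; the 3-node versus 10-node comparison follows the example in the text, assuming two sockets per node in both clusters.

```python
# Hedged cost sketch: a 3-node 2.0 cluster versus a 10-node 1.0 cluster
# under per-socket licensing. All prices are illustrative placeholders.

def cluster_license_cost(nodes: int, sockets_per_node: int,
                         hypervisor_per_socket: float,
                         vstorage_per_socket: float) -> float:
    """Total per-socket hypervisor plus virtual-storage licensing cost."""
    sockets = nodes * sockets_per_node
    return sockets * (hypervisor_per_socket + vstorage_per_socket)

HYPERVISOR = 4_000   # hypothetical per-socket hypervisor license
VSTORAGE = 2_500     # hypothetical per-socket virtual-storage license

cost_2_0 = cluster_license_cost(3, 2, HYPERVISOR, VSTORAGE)    # 6 sockets
cost_1_0 = cluster_license_cost(10, 2, HYPERVISOR, VSTORAGE)   # 20 sockets

print(cost_2_0, cost_1_0, cost_1_0 - cost_2_0)
# Fewer nodes also means fewer NICs and switch ports, so the networking
# savings described above come on top of this licensing difference.
```

Even before networking is counted, licensing 6 sockets instead of 20 cuts that line item by the ratio of socket counts, which is the core of the economic case.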


SUMMARY

It is time for enterprises to consider and implement the hyper-converged 2.0 approach – the new era that builds on the decade-long foundation of 1.0. The 2.0 approach enables far more powerful, mission-critical, data-intensive workloads to be executed efficiently, significantly beyond what 1.0 provides. Truth be told, 2.0 represents what HCI should have been all along – leveraging the best of processor, memory, and storage technology in a dense, powerful, balanced design that enables optimal datacenter economics.


Copyright © 2019 Axellio Inc. | 2375 Telstar Dr, Suite 150, Colorado Springs, CO 80920 | 800.463.0297

ABOUT AXELLIO INC.

Axellio Inc. is a leading innovator in all-NVMe Flash Hyper-Converged Infrastructure (HCI) and Edge Computing systems uniquely designed to run tier 1, storage-intensive workloads. As a spin-off of X-IO Technologies, Axellio carries with it a legacy of twenty years of innovation in enterprise IT infrastructure systems and solutions, providing the highest reliability, quality, service standards, and system performance in enterprise infrastructure.

Axellio delivers computing platforms built with its advanced FabricXpress™ (FX) architecture to deliver superior performance, economics, capacity, scalability, space and power utilization, and flexibility. Axellio's FX design enables organizations to shift data-intensive, tier 1, performance-critical applications and workloads that previously required large infrastructures deployed in big, expensive datacenters to modern, efficient HCI solutions that are simple to manage and scale, with seamless cloud integration, Edge MicroCloud, or Edge MicroDataCenter solution designs – for faster response times and greater operational efficiency. Take Axellio to remote, small, distributed edge locations, take your simplicity on-premises to a whole new level, or greatly reduce your space and power in a co-lo. No matter your organizational goal – if storage density, performance, and flexibility help drive that goal – Axellio has a platform to help you exceed expectations.