Insider's Guide: Full Business Value of Storage Assets

STORAGE VIRTUALIZATION: AN INSIDER'S GUIDE

Jon William Toigo, CEO, Toigo Partners International
Chairman, Data Management Institute

Copyright © 2013 by the Data Management Institute LLC. All Rights Reserved. Trademarks and tradenames for products discussed in this document are the property of their respective owners. Opinions expressed here are those of the author.

Transcript of Insider's Guide: Full Business Value of Storage Assets



STORAGE VIRTUALIZATION: AN INSIDER'S GUIDE

Part 5: Using Virtualization to Deliver the Full Business Value of Storage Assets

Storage technology is quickly becoming the largest component of business IT hardware budgets, accounting for between 33 and 70 percent of every dollar spent on infrastructure annually. Annual operating expenses related to storage infrastructure (administering capacity, backing up data, performing maintenance, supplying gear with utility power, and so on) are a significant cost multiplier.

Increasingly, IT management is being challenged to demonstrate the business value of storage investments. They need to show that they are containing the costs of storage while, at the same time, minimizing risk and improving productivity.

The good news is that storage virtualization can help achieve these goals.


DEFINING THE BUSINESS VALUE OF STORAGE

The simplistic justification for storage infrastructure investments is that they are required to meet the ever-growing burden of data. Capacity must be added to provide a location for storing the growing volumes of data created by business applications and end users, and storage costs will continue to trend "high and to the right" until business management finds ways to reduce the amount of data that is being stored.

The argument that storage cost increases are, like death and taxes, inevitable, while containing a kernel of truth, is not a popular one in contemporary business. Management measures the business value of investments in terms of cost containment, risk reduction, and improved productivity or top-line growth. From this perspective, storage seems like an investment with exceedingly poor business value.

The cost of disk-based storage has fallen consistently on a per-GB basis since the early 1980s. About every 18 months, the capacity of a disk drive has doubled, while every 12 months, the cost of a GB of storage has fallen by half. This dynamic should be making storage of the same capacity less expensive every year.
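The halving dynamic above can be sketched in a couple of lines of arithmetic; the starting price below is a hypothetical figure chosen only to illustrate the trend, not one taken from the text:

```python
def price_per_gb(initial_price: float, years: int) -> float:
    """Price of a GB of disk after `years`, assuming it halves every 12 months."""
    return initial_price * 0.5 ** years

# Starting from a hypothetical $10/GB:
# year 0 -> $10.00, year 1 -> $5.00, year 3 -> $1.25
```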


Instead, the cost of a storage array (essentially, an aggregation of shelves of disk drives, a backplane or interconnect, and one or more "controllers") has actually caused the price per GB of hard disk storage to double year after year.

One culprit for the increased cost of storage arrays is specialized on-controller software, sometimes called "value-add" software. This software is said to add value in the form of the special services it enables across the disk subsystem comprising the hardware array, making the array better suited to specific storage purposes. Whatever the rationale, this software adds cost to the array, first in its initial acquisition price and then, over time, in the form of annualized software licenses and software maintenance agreements.

Specialized storage has long been a fact of life in IT infrastructure. Early on, storage was designed and priced to handle the access characteristics of the data that was to be stored on the device. Categories of storage devices, defined as primary, secondary, and tertiary, correlated both with the access times delivered by the product and with its cost per unit of capacity. The less frequently data was being accessed, the more appropriate it was to store that data on a low-cost, low-speed, high-capacity device rather than on an expensive, high-speed, low-capacity device.
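The primary/secondary/tertiary split can be illustrated with a toy tier-selection rule; the access-age thresholds below are hypothetical, chosen only to show the pattern, not drawn from the text:

```python
def select_tier(days_since_last_access: int) -> str:
    """Map access frequency to a storage tier (illustrative thresholds)."""
    if days_since_last_access <= 30:
        return "primary"    # expensive, high-speed, low-capacity device
    if days_since_last_access <= 365:
        return "secondary"  # mid-range disk
    return "tertiary"       # low-cost, low-speed, high-capacity (tape, optical)
```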

These lines became blurred, however, as vendors sought to provide "hybrid devices" to the consumer, ostensibly to minimize the amount of equipment in the data center, but also to increase the vendor's margin on otherwise commodity wares. Examples of hybrid devices are arrays featuring "on-array tiering," which enabled data migration from faster/lower-capacity disk to slower/higher-capacity disk within the same physical cabinet, and arrays featuring functionality like de-duplication, intended to replace tertiary storage (tape and optical disc) altogether.

Other cost drivers in storage hardware were associated with topologies and interconnection models that became popular from time to time. As shown below, the storage industry delivered appliance and interconnection models over the years that were designed to enable specific theories of storage deployment and usage. From internal disk, storage expanded outside of the server into storage systems optimized for block or file data storage. Then, these independent storage systems found new interconnection models, such as switched Fibre Channel fabrics, switched iSCSI, and Network File System interconnects, that contributed to the deployment of huge quantities of storage capacity, and to huge cost in the form of additional connectivity software and element management software solutions.

Bottom line: these value-add software components need to be added to the hardware acquisition cost of storage (and to the hardware maintenance agreement that is typically purchased with the gear) to arrive at a calculation of the Capital Expense, or CAPEX, cost of storage. This, in turn, is an important number if you want to know what storage is costing your company. Depending on the analyst one reads, storage hardware today represents between 33 and 70 cents of every dollar spent on IT hardware.

CAPEX costs, however, are not a full measure of the cost of storage infrastructure. Total cost of ownership (TCO) in storage includes both the CAPEX cost (divided by the number of years that the system will be used) and the annualized cost to manage and operate storage infrastructure over the same timeframe. According to leading industry analysts, the cost to manage storage (its OPEX, or operational expenditure) is between four and six times the annualized CAPEX cost of storage. OPEX must be combined with CAPEX to achieve a more complete picture of what storage infrastructure costs a firm to own and to operate.
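A minimal sketch of that TCO arithmetic, using hypothetical figures (the 5x OPEX multiplier sits inside the four-to-six-times range cited above):

```python
def annual_storage_tco(capex: float, service_years: int, opex_multiplier: float) -> float:
    """Annual TCO = annualized CAPEX plus OPEX, modeled as a multiple of annualized CAPEX."""
    annualized_capex = capex / service_years
    annual_opex = annualized_capex * opex_multiplier
    return annualized_capex + annual_opex

# Hypothetical: a $300,000 array kept in service three years, OPEX at 5x:
# annualized CAPEX = $100,000; OPEX = $500,000; annual TCO = $600,000
```

Note that at a 5x multiplier, OPEX works out to five-sixths of the annual total, roughly consistent with the "nearly 80%" OPEX share cited later in this section.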

The operating expenses associated with storage infrastructure are mainly a function of labor expense, management software expense (including capacity, performance, and data protection management), utility power, and downtime (scheduled and unscheduled). These costs are indirectly related to two additional, and interconnected, issues: equipment heterogeneity (fielding equipment from different vendors) and a lack of unified management of storage infrastructure.

Deploying gear from multiple vendors, where each array supports a different set of isolated services and each product features its own element management software kit, makes operations less efficient. One typical outcome is that specialized operators, skilled in the operational nuances of a specific brand and type of array, must be hired when a value-add array is acquired, increasing labor costs for storage. Another impact of the special "value-add" services on each vendor's array is that they tend to create barriers to unified management approaches, again increasing the labor cost for storage. Thirdly, the value-add software embedded on arrays is rarely fully tested for interoperability, creating additional risk of downtime.

Not surprisingly, one strategy recommended by analysts for redressing these problems is to move from heterogeneous infrastructure to homogeneous (that is, buying all storage infrastructure from a single vendor). The idea is that single-sourcing infrastructure enables common management by fewer personnel. This assumes, however, that a vendor exists who offers all of the types of storage sought by a business, and that all of these products share a common management approach (despite the fact that some may have come to be offered by the vendor through acquisitions of third-party developers). The approach also locks the consumer into a particular vendor's storage vision and roadmap, which may not match the consumer's own.

Another option is to virtualize storage: that is, to abstract the heterogeneous storage infrastructure into a virtual resource pool that can be allocated dynamically to meet the needs of business workloads and managed coherently, irrespective of the vendor name on the bezel of each storage array.


While there are several approaches to virtualizing storage, the latest involves the deployment of a "storage hypervisor": a type of infrastructure software that aggregates the capacity of physical storage devices and provides a unified means to manage the resulting resource pool. This makes it easier to supply applications and end users with the storage resources they require, and their data with the protective services it needs, in a manner that reduces overall labor requirements and infrastructure costs. DataCore Software™ SANsymphony™-V is a leader in this technology.

THE ROLE OF STORAGE VIRTUALIZATION IN REALIZING BUSINESS VALUE

Storage virtualization software, like DataCore's SANsymphony-V storage hypervisor, provides a means to contain storage costs, reduce storage-related risk, and improve storage-related productivity. To validate this assertion, each element of the business case for virtualized storage needs to be examined briefly.

Cost Containment

How does virtualized storage deliver significant reductions in both the CAPEX and OPEX components of storage total cost of ownership? The following survey provides an overview:

By virtualizing storage, companies can keep gear in service for a longer period of time. Businesses seeking to "bend the cost curve" in storage are attempting to keep arrays in service for twice the "useful life" previously sought from infrastructure components: from three years to between five and seven years. Doing this reduces the annualized cost of hardware acquisition by half, except for vendor warranty and maintenance contract expense. To achieve reductions in warranty costs, consumers may need to consider third-party maintenance contracts to replace vendor contracts, since vendors typically charge as much to re-up a three-year standard maintenance agreement as they charge for a new array!
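The useful-life claim above is simple straight-line annualization; the purchase price here is a hypothetical figure used only to show that doubling the service life halves the annual cost:

```python
def annualized_acquisition_cost(purchase_price: float, service_years: int) -> float:
    """Straight-line annualized hardware acquisition cost."""
    return purchase_price / service_years

# A hypothetical $120,000 array:
# kept 3 years -> $40,000/year; kept 6 years -> $20,000/year (half)
```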

By virtualizing storage, companies can break vendor lock-ins on storage equipment. Virtualized storage infrastructure delivers greater flexibility to storage planners by enabling them to leverage the commodity components and technology common to nearly all storage arrays, while eschewing the on-array value-add software that tends to drive up array costs. (As a rule, most "value-add" services can be delivered more cost-effectively across all infrastructure by being hosted at the storage hypervisor layer, rather than being isolated to a particular array. See below.) This ability, at the very least, could enable consumers to negotiate better deals on equipment from their preferred equipment vendors.

By virtualizing storage, companies may be able to buy less expensive gear with fewer value-add services. Array controllers have evolved from single-purpose RAID adapters into full-fledged servers running an operating system kernel and value-add service software. One impact is a dramatic increase in the cost of an otherwise commodity array. With storage virtualization, value-add services can be delivered across all infrastructure in a more manageable way, and in a more granular way to specific data workloads. Thus, organizations may well be able to deploy very inexpensive storage arrays that deliver as robust a set of capabilities as much more expensive value-add rigs.

By virtualizing storage, companies may be able to obtain the benefits of new and expensive hardware components, like Flash SSD, without the expense. Storage virtualization, hosted on a server, leverages server DRAM to provide a performance enhancement to the physical storage arrays that belong to a virtual pool. This may reduce the need to adopt new and expensive technologies like Flash SSD drives on storage arrays, where they serve as a significant cost accelerator. Storage virtualization also enables the selection of arrays from vendors that do not require "drive signing" (customized formatting of drives available only from the vendor, and at a huge cost mark-up): another plus!

A best-of-breed storage hypervisor, like DataCore's SANsymphony-V, can drive down the CAPEX costs for storage overall, while turning the value-add software functions that companies are paying so much to obtain on individual arrays (functions like thin provisioning, continuous data replication, tiering, synchronous and asynchronous replication, and so forth) into scalable services that can be extended to all storage in the infrastructure. But a virtualized storage solution can also minimize OPEX costs, which represent nearly 80% of annualized storage costs in most organizations today. This value is mainly obtained by centralizing storage management so that capacity can scale independently of the number of human administrators required to manage it.

By virtualizing storage, capacity can be managed holistically as one or more storage resource pools. A storage hypervisor, like DataCore Software SANsymphony-V, enables all storage resources to be aggregated into one or more storage pools that can be allocated and de-allocated, expanded and downsized, and thinly provisioned by a lean staff, often by server or application administrators themselves! The dilemma commonly associated with heterogeneous storage infrastructure (the need to hire an expert array administrator for each product that is deployed) is a thing of the past.

By virtualizing storage, performance is managed automatically. In addition to aggregating and automating the tasks of capacity management, storage virtualization also aggregates and automates back-end storage interconnects and balances I/O load. SANsymphony-V's adaptive queuing is a state-of-the-art load balancer, optimizing I/O performance between the storage hypervisor and physical spindles. Applications are insulated from this queuing process in any case, as writes are handled directly from memory on the storage hypervisor host. This is why most users observe a 200% or greater improvement in application performance after storage is virtualized.

By virtualizing storage, data protection management becomes a much less labor-intensive task. Data protection in a non-virtual environment tends to involve a set of manual processes demanding considerable administrator time. Protection schemes may comprise a variety of processes delivered in hardware or via third-party software. The former, hardware-based mirrors, are often difficult to monitor, test, and manage, while the latter, synchronous and asynchronous data replication processes delivered via third-party software, may require a cadre of operators with specialized skills and knowledge. With a storage hypervisor like SANsymphony-V, defense-in-depth strategies can be applied to virtual volumes and the data they contain simply and conveniently using check boxes. Mirrored and replicated datasets can be validated at any time, reducing the traditional difficulty and time requirements associated with these tasks.

By virtualizing storage, "scheduled downtime" can finally be relegated to the lexicon of historical tech terminology. With storage configured as virtual pools using a storage hypervisor such as SANsymphony-V, and replicated synchronously and/or asynchronously using DataCore services, storage operations can be redirected to mirrored infrastructure while maintenance is performed on production infrastructure. Once the maintenance is completed and validated, mirrored infrastructure can be synchronized with production infrastructure and I/O can be directed back to production equipment. Applications and end users will not notice the difference.


By virtualizing storage, power and cooling requirements and costs can be contained and optimized. According to Dell Computer Corporation and other datacenter operators, storage is becoming the biggest power consumer in the datacenter. Not only is power becoming more difficult to source in some regions of the US power grid (especially where there are a lot of data centers), utility power is also increasing in cost per kilowatt-hour at a rate of about 22% per year. Clearly, getting a handle on storage power consumption requires buying only the storage capacity that is needed and using all of it as efficiently as possible. With physical storage infrastructure, this is a daunting task. With a virtualized storage infrastructure, and capabilities like thin provisioning across all spindles, the need for spare capacity and installed spare disks is greatly reduced.

Adding up the potential cost-containment possibilities, some compelling and realizable TCO reduction goals come into focus. With the ability a storage hypervisor provides to break vendor locks, retain older gear longer, lose the cost of on-array controller value-add software, and use less expensive gear, a reduction in storage CAPEX spending of at least 10% is a realistic possibility. Moreover, early adopters of storage hypervisor products like DataCore SANsymphony-V are realizing as much as a 40% reduction in storage OPEX costs, based on the lower labor costs, reduced downtime, and optimized management capabilities that virtualized storage delivers.
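Taken together, and assuming (per the share cited earlier in this section) that OPEX makes up roughly 80% of annual storage TCO, those two figures imply an overall reduction on the order of a third. A quick weighted-average sketch:

```python
def overall_tco_reduction(capex_share: float, capex_cut: float, opex_cut: float) -> float:
    """Weighted overall TCO reduction from separate CAPEX and OPEX cuts."""
    opex_share = 1.0 - capex_share
    return capex_share * capex_cut + opex_share * opex_cut

# A 10% cut on a 20% CAPEX share plus a 40% cut on an 80% OPEX share
# works out to roughly a 34% reduction in overall annual storage TCO.
```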

CONCLUSION OF PART 5

From the TCO perspective, the simplistic notion that increased storage costs are inevitable and keyed to data growth can be challenged. Truth be told, there are ways to contain storage-related spending that address both CAPEX and OPEX costs. Storage virtualization can play a key role.


Storage virtualization also delivers full business value in terms of risk reduction (by enabling better data protection and by facilitating archive through pool-based tiering) and improved productivity (by expediting the provisioning of storage to applications, and by minimizing or eliminating planned downtime). In short, at a time when management is looking for a business value case to justify its IT investments, virtualized storage is key to meeting this requirement.