Microsoft Cloud/Virtualized Infrastructure
Executive Summary
Windows Server 2012* marks an important milestone for Microsoft in server virtualization: the company has now largely filled the capability gap between its hypervisor (Hyper-V*) and that of its chief rival, VMware. Hyper-V represents only a portion of the virtualization
capabilities of Windows Server 2012. It makes possible network
virtualization, a means by which physical network resources
can be virtually segregated to isolate network traffic going to
virtual machines (VMs). Windows Server 2012 also provides for
pooling of disjoint physical storage disks to create virtual disks
that can increase storage availability and performance. These
features are enterprise-ready and provide a crucial platform to
address the needs of virtual machines in the cloud to be highly
mobile and available.
Microsoft Virtualization and the Private Cloud
A private cloud refers to a particular on-premises configuration
of virtualized server infrastructure that affords elasticity,
resource pooling, broad network access, metered service,
and user self-service. Services in this cloud infrastructure can be
scaled and assigned resources rapidly, in a transparent manner,
often without intervention by the IT department.
Microsoft’s private-cloud solution does not require Hyper-V to
be installed on the physical servers hosting the cloud. Microsoft
cloud-management tools work across Hyper-V, VMware ESX* and
ESXi*, and Citrix XenServer* hypervisors, even in environments
that combine two or more of these virtualization platforms.
Moreover, Hyper-V supports a range of guest operating systems
(that is, those running in virtual machines), including several
versions of Windows*, Windows Server, and Linux*.
As with its improvements in server virtualization, Microsoft has
made strides in managing virtualized environments running
heterogeneous hypervisors. This is essential for enterprise private
clouds because few large organizations have built private clouds
exclusively using Microsoft products. But though much improved,
Microsoft still trails VMware in this category: VMware tools can
readily manage both Microsoft and VMware hypervisors, and many
management tasks on VMware hypervisors remain easier to perform
with VMware tools than with Microsoft's (more on this later).
Figure 1. Microsoft virtualization stack1
Management for the Microsoft private-cloud solution is
provided by the Microsoft System Center 2012* suite of
applications, specifically:
• System Center 2012 Virtual Machine Manager (VMM),
which provides the fundamental services for creating and
managing clouds. It also provides the technologies used to
deploy and update virtual machines and applications.
• System Center 2012 App Controller, which is a self-service
portal for requests made directly to a private cloud created
with VMM.
• System Center 2012 Service Manager, which provides
automated IT service management and an associated self-
service portal.
• System Center 2012 Orchestrator, which provides a way
to automate interactions among other management tools
such as VMM and Service Manager.
• System Center 2012 Operations Manager, which can
monitor virtual machines, applications, and other aspects
of a private cloud, and then initiate actions to fix problems
it detects.
Contents
Executive Summary
Microsoft Virtualization and the Private Cloud
Hyper-V* versus VMware
Managing Public and Private Clouds
Conclusion
Microsoft Virtualization Stack and Components
  Market Positioning
Components of the Microsoft Virtualized Infrastructure
  Hyper-V
  Networking
  Storage
  Live Migration in Windows Server 2012*
The Microsoft Cloud
  Microsoft Private Cloud Capabilities
  Licensing
  Support for Heterogeneous Virtualized Environments
  Implications for IT—Managing VMware Hypervisors
  Implications for IT—Heterogeneous Guest Operating System Management
  Public and Hybrid Clouds: Windows Azure*
Cloud Attributes
  Public Cloud Security
  Private Cloud Security
  Public Cloud Identity Management
  Other Cloud Attributes
Conclusion
Appendix A: Microsoft and VMware Licensing Breakdown
Notes
Hyper-V* versus VMware
A traditional theme in the marketing thrust and parry between
Microsoft and VMware has been that of cost, particularly as
Microsoft bundles Hyper-V with Windows Server. Microsoft
has long positioned itself to compete against VMware on
price. While license bundles like the Enrollment for Core
Infrastructure (ECI) license, which combines the Windows
Server and System Center licenses at a considerable
discount, can still allow large Hyper-V deployments to come in
at roughly half the cost of VMware, much of this price advantage
disappears on small to medium deployments; and nowhere
outside of cherry-picked licensing assumptions is VMware
several times more expensive than Microsoft. Microsoft also
touts performance features in Hyper-V that support cloud
deployments: storage and network virtualization and migration
capabilities. These features underscore a unique aspect of the
Microsoft virtualization offering: beyond providing the tools to
build clouds, it also hosts the Windows Azure* public cloud
in addition to large cloud-based services such as Microsoft
Outlook*, Microsoft Exchange Online*, and Microsoft Office 365*.
Management is a key differentiator between Microsoft and
VMware, however. Microsoft has good options for managing
varied assortments of guest operating systems, but VMware’s
management tools continue to be more efficient for managing
environments running both Microsoft and VMware hypervisors.
The differences between the companies’ management offerings
are even more striking for public-cloud management.
Managing Public and Private Clouds
Public clouds are the extension of cloud-defining attributes
such as elasticity, resource pooling, and user self-service from
proprietary, on-premises cloud deployments to multi-tenant,
pay-per-use configurations open to the general public. Using
a public cloud or connecting a private cloud to a public cloud
can provide companies a way to quickly and economically add
additional capacity as needed.
Figure 2. System Center 2012 and the Microsoft private cloud1
System Center provides management tools to span hybrid
configurations of connected private and public clouds, but only
for public clouds that also use System Center for management
(principal among these being Windows Azure). VMware has its
own public cloud offering, but it also has tools to extend the
reach of its management suite into Amazon Web Services*
(AWS*), the market-leading public cloud provider. System Center
Operations Manager does provide a management pack that
enables administrators to monitor the health of applications
running on AWS, but it cannot control those workloads.
Conclusion
Microsoft has incorporated virtualization at the heart of Windows
Server 2012 and has made great strides in catching up with
market-leader VMware. Microsoft’s parity with VMware is
less clear in the case of management tools for private cloud
deployments: System Center 2012 can manage heterogeneous
environments, but VMware virtual machines are still easier
to manage with VMware tools. In the public cloud arena,
scalability might be more of a challenge for Microsoft. Microsoft
has put tremendous effort into making its on-premises server
applications ready for the cloud, but it is unclear that these
applications can scale to the dimensions necessary for cloud
workloads. This was the primary reason that Gartner, in its
August 2013 Magic Quadrant survey of infrastructure-as-
a-service (IaaS) providers, placed Microsoft second only to
Amazon Web Services for completeness of vision but third from
last for ability to execute.2
VMware and Amazon are formidable competitors for Microsoft
that dominate different aspects of cloud computing and
infrastructure. This would not be the first time that Microsoft
has successfully taken on and bested entrenched incumbents.
However, the nature of the cloud arena
poses different challenges in the competitive landscape than
Microsoft has heretofore faced. Public clouds might yet prove
ultimately too unreliable for a number of enterprise workloads
or undesirable to many private and public organizations.
European companies, universities, and governments face
far stricter regulations on data that can be hosted in multi-
tenant environments than organizations in other developed
economies, potentially restricting this Microsoft advantage in
a large IT market. Even where the law permits wider use of
public-cloud infrastructure, revelations of large-scale monitoring
and penetration of large technology companies by national
intelligence agencies might cause large enterprises to choose
to stay with private clouds. More mundane security threats
might also give enterprises pause in embracing public clouds.
Enterprises utilizing public-cloud infrastructure ultimately have to
take their hosts’ word that their security is enterprise class, but
as the February 29, 2012 Windows Azure outage caused by a
leap-year bug with a security certificate demonstrates, even the
majors can make rookie mistakes.
Figure 3. Microsoft public and private cloud continuum1
Microsoft Virtualization Stack and Components
The fundamental building blocks of Microsoft’s virtualized
infrastructure are Hyper-V and the networking and storage
capabilities of Windows Server 2012 and Windows Server
2012 R2. Microsoft is touting many of the capabilities in this
latest version of Windows Server to compete on performance,
particularly against perennial virtualization rival VMware, but
this does not mean that Microsoft has stopped competing on
price as well.
Market Positioning
Microsoft Claim: Microsoft analysis shows that a VMware
private cloud solution can cost approximately six times more
than a comparable Microsoft private cloud solution over a
period of one to three years.
This claim is not patently false, but Microsoft engineers the
scenario on which it is based (published at
http://www.whymicrosoft.com/Pages/vmware.aspx) to arrive
at this figure. The fine print spells out that the scenario assumes 25
physical hosts with two CPUs and six cores each, with 300
virtual machines at a 6-to-1 consolidation ratio, and with three
years for license and support (but does not factor in Client
Access Licenses). Crucially, the scenario also assumes that
the VMware cost includes Windows Server 2012 Datacenter
edition for the guest operating system. While there can be
some technical reasons for going this route, the move adds
over $1 million to the VMware scenario’s costs (totaling
$1,757,292 for VMware versus $252,800 for the Microsoft
setup). Altering the scenario to use Windows Server 2012
Standard for the guest operating system on the VMware side
decreases the cost to $579,192, for a difference of $326,392,
or a bit more than twice the total cost of the Microsoft
solution (as opposed to the 6 times claimed). For a complete
breakdown of these licensing scenarios, see Appendix A.
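These multiples are easy to reproduce from the dollar totals quoted above; a quick sketch (using only the figures in this section) shows what ratios actually result:

```python
# Totals taken from the licensing scenarios quoted in the text.
vmware_with_datacenter = 1_757_292  # VMware + Windows Server 2012 Datacenter guests
vmware_with_standard = 579_192      # VMware + Windows Server 2012 Standard guests
microsoft_solution = 252_800        # all-Microsoft setup

# Ratio of each VMware scenario to the Microsoft solution.
print(round(vmware_with_datacenter / microsoft_solution, 2))  # about 6.95
print(round(vmware_with_standard / microsoft_solution, 2))    # about 2.29
```

The Datacenter-guest assumption is what produces the headline multiple; swapping in Standard-edition guests collapses it to a bit more than 2x.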
The Microsoft price advantage is more pronounced on
larger deployments thanks to Microsoft’s Enrollment for
Core Infrastructure (ECI) license, which bundles in both
the Windows Server and the System Center licenses for a
list price of $5,056 per processor. This licensing strategy
makes little sense for modest deployments; for example, in a
deployment involving three hosts with two processors each
running 20 virtual machines with one year of support, VMware
costs $23,259 versus $18,564 for Microsoft, but $30,336
under ECI. However, ECI can be very advantageous for larger
deployments. Compare that to a mid-range deployment
of 15 hosts with two processors each running 150 virtual
machines with one year of support: VMware costs $178,084
versus $239,745 under conventional Microsoft licensing, but
$151,680 with Microsoft ECI.
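The ECI figures in these scenarios reduce to hosts x processors x the $5,056 list price, since ECI is licensed per physical processor regardless of virtual machine count. A minimal sketch:

```python
ECI_LIST_PRICE_PER_PROCESSOR = 5_056  # Windows Server + System Center bundled

def eci_cost(hosts, processors_per_host=2):
    # ECI is licensed per physical processor, independent of VM count.
    return hosts * processors_per_host * ECI_LIST_PRICE_PER_PROCESSOR

print(eci_cost(3))   # 30336: small deployment, worse than conventional licensing
print(eci_cost(15))  # 151680: mid-range, cheaper than both alternatives quoted
print(eci_cost(34))  # 343808: the 34-host scenario discussed next
```

Because the cost scales only with processors, ECI rewards dense consolidation: the per-VM licensing cost falls as more virtual machines are packed onto each host.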
The picture in larger deployment scenarios can be more
nuanced. The Microsoft solution (which is unlikely to be
homogenously built on Microsoft products, but is assumed
to be for argument’s sake) for a deployment of 34 hosts with
two processors each running 500 virtual machines would
cost $343,808 using ECI, regardless of the virtual RAM
(vRAM) allotted. A VMware deployment using VMware vSphere
Standard* would cost $536,868. However, the VMware
vSphere Standard license would only permit a vRAM allotment
of 2.1 TB, or 4.3 GB per virtual machine if divided evenly.
VMware vSphere Enterprise licensees would be entitled to a
vRAM pool of 4.3 TB (an average of 8.8 GB per virtual machine)
for $691,636. VMware vSphere Enterprise Plus licenses would
provide a vRAM pool of 6.5 TB (averaging about 13.3 GB per
virtual machine) for $744,336.
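The per-VM averages above follow from dividing each edition's vRAM pool across the 500 virtual machines; a quick check (assuming 1 TB = 1,024 GB, as the quoted figures imply):

```python
VMS = 500  # virtual machines in the 34-host scenario

def gb_per_vm(pool_tb):
    # Average vRAM per VM when a pool is divided evenly (1 TB = 1,024 GB).
    return round(pool_tb * 1024 / VMS, 1)

print(gb_per_vm(2.1))  # 4.3 GB  - vSphere Standard
print(gb_per_vm(4.3))  # 8.8 GB  - vSphere Enterprise
print(gb_per_vm(6.5))  # 13.3 GB - vSphere Enterprise Plus
```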
Despite the pronounced price difference on larger deployments,
licensing costs alone are unlikely to lure established VMware
shops to Microsoft. On one hand, many IT departments are likely
to view the potential cost of downtime as greater than any savings
achievable with Microsoft’s pricing; Microsoft has yet to shake
the reputation that it cannot provide seamless functionality as
effectively as VMware (particularly with hardcore VMware users).
On the other hand, while Microsoft provides a number of tools
to convert virtual machines from VMware to Hyper-V, these tools
can be cumbersome to use and might not work well with older
or unsupported guest operating systems. System Center VMM is
probably the easiest tool to use, but among small and medium-
sized companies in particular, the adequate management
capabilities of Windows Server 2012 often mean that they have
not adopted System Center. Outside of VMM, the Microsoft Virtual
Machine Converter is available as a standalone application or as a
plug-in for the VMware vSphere Client, is scalable for data-center-
scale work, and has a Windows PowerShell* automation toolkit
available. A final option is the Virtual Machine Migration Toolkit,
also a large-scale conversion tool.
One market in which price might make a bigger difference for
Microsoft is new deployments. In developed markets, this means
small businesses; in developing markets, this currently means
businesses of all sizes in South America and Asia. And Microsoft
has been gaining market share on VMware: The Wall Street
Journal reported IDC figures in May 2013 indicating that Hyper-V
was installed on 27.6 percent of new servers in 2012, up from
20.3 percent in 2008 (the year the Hyper-V brand was launched).
Though Microsoft is second in the hypervisor market, it is still
well behind VMware, which had a 56.8 percent market share in
2012 (though down from 65.4 percent in 2008).3
Components of the Microsoft Virtualized Infrastructure
Beyond pricing, Hyper-V and the networking and storage
capabilities of Windows Server 2012 and Windows Server 2012
R2 provide some remarkable performance enhancements over
previous versions of Windows Server, though these too have
some notable caveats.
Hyper-V
Microsoft Claim: Hyper-V is now enterprise ready—“Windows
Server 2012 with Hyper-V provides you with massive scale to
transform your datacenter into a cloud platform.”4
During and immediately after its launch in 2008, a cottage
industry sprang up to catalog the ways in which Microsoft
Hyper-V was not ready for the enterprise. However, trade
journals and independent consultancies have been widely
positive about the latest version of Hyper-V, which ships
with Windows Server 2012 and surmounts its previous
deficiencies. Reporting on the results of testing conducted
between November 2012 and June 2013, the Enterprise
Strategy Group (ESG) went so far as to assert that Hyper-V is
ready to virtualize tier-1, business-critical applications.5 Gartner
noted in its June 2013 Magic Quadrant report that Microsoft
“has effectively closed most of the functionality gap with
VMware in terms of x86 server virtualization infrastructure.”6
Comparing raw virtualization features, Microsoft Hyper-V
meets or exceeds the capabilities of both VMware vSphere
Hypervisor—VMware’s free, standalone hypervisor—and VMware
vSphere 5.1 Enterprise Plus, VMware’s top-of-the-line edition.
Feature                    Windows Server 2012    VMware vSphere    VMware vSphere 5.1
                           Hyper-V*               Hypervisor*       Enterprise Plus
Logical Processors         320                    160               160
Physical Memory            4 TB                   32 GB             2 TB
Virtual CPUs per Host      2,048                  2,048             2,048
Virtual CPUs per VM        64                     8                 64
Memory per VM              1 TB                   32 GB             1 TB
Active VMs per Host        1,024                  512               512
Guest NUMA                 Yes                    Yes               Yes
Maximum Nodes              64                     N/A               32
Maximum VMs                8,000                  N/A               4,000

Table 1. Feature comparison between Windows Server 2012 Hyper-V*, VMware vSphere Hypervisor*, and VMware vSphere 5.1 Enterprise Plus
Notwithstanding the strong showing for Windows Server
2012 Hyper-V in terms of features, a true apples-to-apples
comparison between Microsoft and VMware remains tricky.
Microsoft bundles the full gamut of virtualization capabilities
into the Windows Server operating system (or with its free,
stand-alone hypervisor, Hyper-V Server 2012*); however,
anything beyond basic management tasks requires purchasing
System Center 2012. By contrast, VMware vSphere includes
its central management system, VMware vCenter Server*,
as part of its deal. However, while companies can virtualize
servers as much as they wish using VMware’s free ESXi
hypervisor, they need to buy VMware vCenter Server to
implement high availability and unlock features such as live
migration, replication, and the distributed virtual switch. These
capabilities and more are available on a sliding scale as you
move up the ladder of VMware vSphere editions. Moreover,
VMware requires that customers bundle at least one year of
Support and Subscription (SnS) with their purchase. These
differences in default packages from both companies can
lead to confusion, even among IT professionals: InfoWorld
ran contradictory articles in the same April 2013 issue, one
declaring that Hyper-V was not ready for the enterprise, the
other that it had largely caught up with VMware. These articles
were only reconciled by an editor’s note that explained that the
negative tests had not been run with System Center included in
the test deployment.
Networking
Microsoft Claim: Microsoft Software-Defined Networking
(SDN) is ready for enterprise workloads.
SDN in Windows Server 2012 is primarily undergirded by three
key features: the Hyper-V Extensible Switch to enable virtual
machine mobility, network interface controller (NIC) Teaming to
support high availability of network resources in the enterprise
environment, and network virtualization to ensure network
isolation for virtual workloads. First-hand reports about these
features (for the most part in blogs) have been generally
favorable, particularly for the Hyper-V Extensible Switch.
Hyper-V Extensible Switch
The Hyper-V Extensible Switch provides a software switch
for building virtualized network environments. In contrast to
previous versions of Hyper-V, in which the switch port for
any given virtual machine was part of the virtual network,
the Hyper-V Extensible Switch makes the port a property of
the virtual machine itself, meaning that a virtual machine’s
port does not have to be reconfigured every time the virtual
machine is moved to a different host server.
Moreover, the Hyper-V Extensible Switch can be customized
using extensions. This enables Hyper-V virtual networking to be
deeply integrated into existing network infrastructure, such as
to existing monitoring and security tools. In addition, because
these extensions exist within the Hyper-V environment, they can
automatically take advantage of features such as live migration
and can be managed using Hyper-V Manager, Windows
Management Instrumentation* (WMI*), or Windows PowerShell.
The Hyper-V virtual switch extensions are implemented using
the following drivers:
• Network Driver Interface Specification (NDIS) filter drivers are
used to monitor or modify network packets in Windows.
• Windows Filtering Platform (WFP) callout drivers,
introduced in Windows Vista* and Windows Server 2008,
let independent software vendors (ISVs) create drivers
to filter and modify TCP/IP packets, monitor or authorize
connections, filter IP Security (IPsec)-protected traffic, and
filter remote procedure calls (RPCs).
There are three classes of extensions: capturing, filtering, and
forwarding. All of them can be implemented as NDIS filter
drivers. However, filtering extensions can also be implemented
as WFP filter drivers. Table 2 lists the four types of Hyper-V
virtual switch extensions.
Extension                      Purpose                                               Extensible Component
Network Packet Inspection      Inspecting network packets (but not altering them)    NDIS filter driver
Network Packet Filter          Injecting, modifying, and dropping network packets    NDIS filter driver
Network Forwarding             Forwarding based on non-Microsoft technology that     NDIS filter driver
                               bypasses the default forwarding
Firewall/Intrusion Detection   Filtering and modifying TCP/IP packets, monitoring    WFP callout driver
                               or authorizing connections, filtering IPsec-protected
                               traffic, and filtering RPCs
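As a conceptual illustration only (the real extensions are kernel-mode NDIS filter drivers and WFP callout drivers, not Python), the extension classes can be pictured as stages in a packet-processing pipeline; all names here are hypothetical:

```python
def capture(packet):
    # Capturing extension: may inspect packets but never alters them.
    print("inspected:", packet["dst"])
    return packet

def filter_ext(packet):
    # Filtering extension: may modify or drop packets.
    return None if packet.get("blocked") else packet

def forward(packet):
    # Forwarding extension: chooses an egress port, bypassing the default.
    packet["port"] = hash(packet["dst"]) % 4
    return packet

def extensible_switch(packet):
    # Packets traverse the extensions in order; a dropped packet stops here.
    for extension in (capture, filter_ext, forward):
        packet = extension(packet)
        if packet is None:
            return None  # dropped by a filtering extension
    return packet

print(extensible_switch({"dst": "vm1"}))                   # delivered, with a port assigned
print(extensible_switch({"dst": "vm2", "blocked": True}))  # None (dropped)
```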
Microsoft partners have taken advantage of the standard
Windows API framework for the Hyper-V Extensible Switch.
Chief among Microsoft partners extending the Hyper-V virtual
switch is Cisco with its Cisco Nexus 1000V* virtual switch
extension, which includes integration with System Center 2012
SP1 VMM. From an open-source perspective, NEC provides
support based on System Center 2012 SP1 VMM in its PF1000
extension based on OpenFlow*. Additionally, both 5nine and
InMon have in-market offerings based on Windows Server
2012 Hyper-V switch extensions. Figure 4 shows where partner
hooks plug into the Hyper-V Extensible Switch.
NIC Teaming
NIC Teaming allows multiple network interfaces to work together
as one logical entity, with a single IP address. This can prevent
connectivity loss when there is a network adapter failure. For
example, a server with NIC Teaming can tolerate network adapter
and port failure up to the first switch segment. Additionally,
NIC Teaming can aggregate bandwidth from multiple network
adapters. For example, four 1 Gb/s network adapters can
provide an aggregate of 4 Gb/s of throughput. Figure 5
provides a graphic overview of NIC Teaming.
NIC Teaming works with any network interface that works with
Windows Server 2012. It is also remotely configurable through
Server Manager and Windows PowerShell and remains the
same process no matter what NIC is used. Indeed, NICs need
not be the same make and model to be teamed.
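A toy model of this behavior (a hypothetical class, not the Windows implementation) illustrates why a team survives member failure while aggregate bandwidth degrades:

```python
class NicTeam:
    """Minimal sketch: a team presents one logical interface over many NICs."""

    def __init__(self, member_speeds_gbps):
        self.members = list(member_speeds_gbps)

    @property
    def aggregate_gbps(self):
        # Bandwidth aggregation: team throughput is the sum of its members.
        return sum(self.members)

    @property
    def connected(self):
        # Connectivity survives until the last member fails.
        return bool(self.members)

    def fail_member(self):
        if self.members:
            self.members.pop()

team = NicTeam([1, 1, 1, 1])  # four 1 Gb/s adapters
print(team.aggregate_gbps)    # 4 Gb/s aggregate
team.fail_member()
print(team.aggregate_gbps, team.connected)  # 3 Gb/s, still connected
```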
There are some limitations to NIC Teaming. It works with all
networking capabilities in Windows Server 2012 with two
large exceptions: Single Root I/O Virtualization (SR-IOV) and
Remote Direct Memory Access (RDMA). This is because data
is delivered directly to the network adapter without passing
Table 2. Hyper-V Extensible Switch* extensions
Figure 4. Overview of the Hyper-V Extensible Switch*1
Figure 5. Overview of NIC Teaming1
through the networking stack. It is therefore not possible for
the teamed network adapters to look at or redirect the data to
another member of the team.
Network Virtualization
Microsoft Claim: Hyper-V in Windows Server 2012 provides
virtual machine isolation through private virtual LANs (VLANs),
an extensible virtual switch, and network virtualization that can
scale beyond 4,094 VLAN IDs.
An early means of achieving some network isolation between
virtualized workloads was to partition a single layer-2 network
into multiple, distinct broadcast domains. Packets on these
partitions are mutually isolated, passing between these
partitions through one or more routers and forming virtual
local area networks (VLANs). Prior to Windows Server 2012,
Microsoft server virtualization provided isolation between virtual
machines, but the network layer of the data center was still not
fully isolated (implying layer-2 connectivity between different
workloads that ran over the same infrastructure). Private VLANs
(PVLANs) are one means of increasing overall virtual machine
isolation by providing isolation between two virtual machines
on the same VLAN. However, the introduction of PVLANs does
not address the hard limit of 4,094 VLAN IDs set out by the
Institute of Electrical and Electronics Engineers (IEEE) 802.1Q
networking standard (which does not provide enough scope
for scaling in today’s multi-tenant cloud environments).
NVGRE
To deal with this limitation, Microsoft introduced Network
Virtualization using Generic Routing Encapsulation (NVGRE)
in Windows Server 2012. NVGRE is Microsoft’s proposed
standard for overlaying layer-2 virtual networks over a layer-3
physical network without impacting connectivity and while
maintaining existing virtual machine IP addresses and Media
Access Control (MAC) addresses no matter where the virtual
machines are migrated. NVGRE encapsulates Ethernet layer-2
frames into IP packets marked by a new 24-bit identifier that
enables more than 16 million layer-2 logical networks to operate
within the same administrative domain.
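The scaling claim is simple bit arithmetic: 802.1Q carries a 12-bit VLAN ID (with two reserved values), while NVGRE's identifier is 24 bits:

```python
VLAN_ID_BITS = 12   # IEEE 802.1Q tag field
NVGRE_ID_BITS = 24  # NVGRE Virtual Subnet ID

usable_vlans = 2**VLAN_ID_BITS - 2  # IDs 0 and 4095 are reserved
nvgre_subnets = 2**NVGRE_ID_BITS

print(usable_vlans)   # 4094
print(nvgre_subnets)  # 16777216, i.e. "more than 16 million"
```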
A benefit of NVGRE is that it is based on the venerable GRE
protocol, so compatibility is not an issue. However, NVGRE is
not the only overlay network format under consideration by the
Internet Engineering Task Force. VMware has introduced its
own format—Virtual eXtensible Local Area Networks (VXLAN)—
in VMware vSphere 5.1. There is therefore the possibility of a
small-scale standards war erupting around overlay networking
that might lead to future incompatibility.
Moreover, because NVGRE is a new format, the hardware
ecosystem is still maturing. For example, because all traffic sent on
the virtual subnet path with Windows Server 2012 is NVGRE
traffic, encapsulated packets for traditional offloads such
as Large Send Offload (LSO) on the send path and Virtual
Machine Queues (VMQ) on the receive path will not provide
their expected performance and scalability benefits because
they operate only on the outer packet header. (The outer
header for NVGRE traffic makes it appear as though all traffic is
generated using the same IP address and destined for a single
IP address.) To address this issue, Windows Server 2012 uses
GRE Task Offload, whereby the network interface can operate
on the inner packet header for standard offloads like LSO and
VMQ to provide their expected performance and scalability
benefits. Mellanox Technologies and Emulex currently support
NVGRE task offload capability in their NICs, as does Intel in
its Intel® Ethernet Switch FM6000 ICs.7 Microsoft
is also working with Broadcom to support Hyper-V Network
Virtualization in its chipset.8
SR-IOV
Support for SR-IOV is another powerful networking feature
in Windows Server 2012. SR-IOV enables virtual machines
to perform input/output (I/O) directly to the physical network
adapter, bypassing the root partition. This makes SR-IOV ideal
for high I/O workloads that do not require port policies, quality
of service (QoS), or network virtualization enforced at the end
host virtual switch.
SR-IOV–capable networking devices have hardware surfaces
called virtual functions that can be securely assigned to virtual
machines, bypassing the virtual switch in the management
operating system for sending and receiving data. The SR-IOV
standard allows PCI Express (PCIe) devices to be shared
among multiple virtual machines by providing them with
a direct hardware path for I/O. However, not everything is
bypassed: policy and control remains under the management
operating system. Figure 6 provides an overview of SR-IOV.
Because SR-IOV has the virtual machine communicate directly
with the physical hardware, I/O workloads using the SR-IOV
mode cannot take advantage of many aspects of network
virtualization. Moreover, current IPsec Task Offload (IPsecTO)
network interfaces on the market do not support IPsecTO on
the SR-IOV path. This means that if any of the new policy-
based features such as access control lists (ACLs) and QoS in
the Windows Server 2012 Hyper-V Switch are set on a virtual
network interface, the virtual network interface traffic will not
take the SR-IOV path and instead will go through the Hyper-V
Switch path for policy enforcement. This will negate any of the
improvements to CPU utilization—particularly for tasks like live
migration—that SR-IOV can provide.
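The path-selection behavior described here can be sketched as a simple decision rule (a hypothetical function for illustration; the policy names are examples, not the Windows API):

```python
def data_path(sr_iov_available, vnic_policies):
    # ACLs, QoS, and network virtualization are enforced in the Hyper-V
    # Switch, so any such policy pulls traffic off the SR-IOV path.
    if sr_iov_available and not vnic_policies:
        return "SR-IOV (direct to virtual function)"
    return "Hyper-V Switch (software path)"

print(data_path(True, set()))           # direct hardware path
print(data_path(True, {"ACL", "QoS"}))  # policy forces the software switch path
```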
Live migration via SR-IOV can substantially speed up the
movement of virtual machines in a production environment.
Microsoft rightly touts the capabilities of Hyper-V in doing
this. However, it is not the only solution capable of doing so.
Oracle reports that Oracle VM Server for SPARC 3.1* supports
live migration using SR-IOV (and demonstrated this capability
as early as Oracle OpenWorld 2012). Surprisingly, though VMware
introduced support for SR-IOV in VMware vSphere 5.1, VMware
vMotion* specifically does not work with it. This is unusual
because a team from VMware filed a patent for live migration
over SR-IOV in 2010.9 Nevertheless, this current limitation
of VMware vSphere appears to be a result of a fundamental
incompatibility between VMware vMotion and the assignment
of Peripheral Component Interconnect (PCI) devices to virtual
machines (SR-IOV being a PCI standard). It is unclear at this
time when VMware will address this limitation.
VMware NSX
With the beta release of VMware NSX*, the Microsoft-VMware
rivalry also extends to network virtualization. VMware NSX
is a virtual networking and security software product family
created from VMware's vCloud Networking and Security*
(vCNS) and Nicira Network Virtualization Platform (NVP)
intellectual property. Like the SDN in Windows Server 2012,
NSX virtualizes both Layer 2 and Layer 3 network services
and provides a Layer-2 gateway to connect to physical
workloads and legacy VLANs. However, VMware takes
network-virtualization-in-a-box further and also virtualizes
Layers 4 through 7. VMware’s goal is to let IT administrators
(server, virtual machine, or networking) set up and provision
virtual networks in seconds rather than the days or weeks that
physical networking can take and to do so without command-
line interfaces or other direct administrator intervention.
VMware claims that NSX works as an overlay above any physical
network hardware and works with any server hypervisor
platform. Thus NSX can be deployed in multi-hypervisor
environments that run Hyper-V, XenServer, Kernel-based Virtual
Machine (KVM), or VMware ESXi* and in environments that
use any of a variety of cloud-management solutions, such as
VMware vCloud Automation Center*, OpenStack*, and Apache
CloudStack*. NSX uses a software agent to replace the virtual
switch in a hypervisor. The virtual switch operates in the kernel
of the VMware ESX* hypervisor and Linux KVM hypervisor.
Figure 6. Overview of SR-IOV1
Moreover, because the Hyper-V virtual switch is extensible, NSX
should also operate in kernel mode on Hyper-V, although it is not
clear if this is currently the case.
Because NSX is an SDN platform and not an SDN controller,
the NSX-supported hardware ecosystem will be an important
factor in customer uptake. Theoretically, NSX will work
with any SDN controller, though NSX explicitly lists support
for Arista*, Brocade*, Cumulus*, Dell*, HP*, and Juniper*
hardware. Conspicuously absent from this list is Cisco. Cisco
announced earlier this year its own SDN solution with the
Cisco Open Network Environment* (Cisco ONE). The major
fault line in SDN might thus form between VMware and Cisco,
and Microsoft appears to be partnering closely with Cisco in
this, particularly with Hyper-V Extensible Switch support for
the Cisco Nexus 1000V.
NSX will not be in general release until the fourth quarter of
2013, but VMware is already touting customers like Citi and
eBay using NSX in production.
Storage
Microsoft Claim: With Server Message Block (SMB) file-based
storage, virtual storage pooling (storage spaces), and
automatic storage tiering, Windows Server 2012 R2 delivers
high performance storage and availability coupled with efficient
capacity utilization using industry-standard hardware.
The principal storage features of the Windows Server 2012
storage stack are Storage Spaces, enhancements to the SMB
protocol, and Offloaded Data Transfers (ODX).
Storage Spaces
Storage Spaces pools multiple physical hard disks into
storage pools, from which virtual hard disks can be created
with specific provisioning and allocation attributes. These aggregated
physical disk units can take advantage of failover clustering for
high availability and resiliency with commodity disks.
Windows Server 2012 R2 enhances this capability with tiered
Storage Spaces. These allow a mix of solid-state drives (SSD)
and hard disk drives (HDD) in a single Storage Space. Data can
either be moved automatically between the SSD and HDD tiers
depending on the frequency with which it is accessed or can
be specifically stored in the SSD tier for fast access.
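The placement policy described above can be sketched as follows. This is an illustrative model only, not Microsoft's implementation or API; the function name and data shapes are hypothetical. Administrator-pinned files always occupy the SSD tier, and the remaining SSD capacity goes to the most frequently accessed data.

```python
# Hypothetical sketch of tiered Storage Spaces placement: pinned files go to
# the SSD tier unconditionally; leftover SSD capacity is filled with the
# hottest files; everything else lands on the HDD tier.

def assign_tiers(files, ssd_capacity_gb, pinned=()):
    """files: dict of name -> (size_gb, accesses_per_day).
    Returns dict of name -> "SSD" or "HDD"."""
    placement = {}
    remaining = ssd_capacity_gb
    # Administrator-pinned files are always stored in the SSD tier.
    for name in pinned:
        size, _ = files[name]
        placement[name] = "SSD"
        remaining -= size
    # Fill leftover SSD capacity with the most frequently accessed files.
    unpinned = [n for n in files if n not in pinned]
    for name in sorted(unpinned, key=lambda n: files[n][1], reverse=True):
        size, _ = files[name]
        if size <= remaining:
            placement[name] = "SSD"
            remaining -= size
        else:
            placement[name] = "HDD"
    return placement
```

For example, with 100 GB of SSD capacity, a pinned 40 GB disk and a hot 60 GB disk fill the SSD tier, while a rarely accessed archive stays on HDD.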
SMB 3.0
Microsoft introduced SMB 3.0 in Windows Server 2012, which
enables server applications like Hyper-V and SQL Server to
store their data files on a common Windows file share. SMB 3.0
also supports SMB over RDMA-enabled hardware to enable
high-performance storage capabilities on lower-cost SMB file
shares (SMB Direct). SMB Direct is automatically configured
by Windows Server 2012 but the network adapters must
have RDMA capability. Currently, these network adapters are
available in three different types: Internet Wide Area RDMA
Protocol (iWARP), InfiniBand*, or RDMA over Converged
Ethernet (RoCE). See the Assessment for IT section below for
performance figures for SMB Direct.
SMB 3.0 also supports SMB Multichannel, which allows
multiple paths between SMB clients and file servers to facilitate
network bandwidth aggregation and fault tolerance. Moreover,
SMB Multichannel can detect whether a network adapter
has the RDMA capability and then create multiple RDMA
connections for that single session (two per interface). This
allows SMB to use the high-throughput, low-latency, and low-
CPU utilization offered by RDMA-capable network adapters.
It also offers fault tolerance for administrators using multiple
RDMA interfaces. Because SMB Multichannel detects network
adapter capabilities and determines whether a network adapter
is RDMA-capable, SMB Direct cannot be used by the client if
SMB Multichannel is disabled.
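The relationship described above reduces to two rules: two RDMA connections are created per RDMA-capable interface in a session, and SMB Direct is unavailable when SMB Multichannel is disabled. A minimal model of that behavior (an illustrative sketch, not Microsoft's implementation):

```python
# Model of SMB Multichannel's RDMA detection as described in the text:
# two RDMA connections per RDMA-capable interface for a single session,
# and zero if Multichannel (which performs the detection) is disabled.

def rdma_connections(interfaces, multichannel_enabled=True):
    """interfaces: list of dicts like {"name": "NIC1", "rdma": True}.
    Returns the number of RDMA connections for one SMB session."""
    if not multichannel_enabled:
        return 0  # SMB Direct depends on Multichannel's capability detection
    return 2 * sum(1 for nic in interfaces if nic["rdma"])
```

With two RDMA-capable adapters, a session would carry four RDMA connections; disabling Multichannel drops the client back to zero, i.e., no SMB Direct.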
ODX
Windows Server 2012 also supports ODX. ODX offloads data
movement from the server, enabling fast and easy migration
of large data sets or entire virtual machines within or between
storage arrays without going through servers. In non-ODX file
transfers, the data is read from the source storage array and is
transferred across the network from one server to another and
then to the destination storage array. With ODX file transfers,
data is copied directly from the source to the destination,
freeing up server or network load and speeding up copy times.
Assessment for IT
One indication of the performance and reliability of Storage
Spaces is that the Microsoft Windows release team itself uses
the feature. It successfully handles 720 PB of data per week
on 20 file servers, with 10 Gigabit Ethernet (GbE) connections
and twenty 60-bay JBOD (just a bunch of disks) enclosures of
3 TB, 7,200 RPM hard drives (with plans to double the storage
capacity over the coming year). Performance increased with
the move to Storage Spaces even as the team reduced its
number of file servers from 120 to 20. Moreover, storage now
costs the team $450 per TB as opposed to $1,350 per TB.
Storage Spaces can grow on demand and use a wide variety
of storage types in the same pool: Serial Advance Technology
Attachment (SATA), Serial Attached Small Computer System
Interface (SAS), Universal Serial Bus (USB), or Small Computer
System Interface (SCSI). Moreover, drives can be specified
as hot spares, and automatic repair for pools containing hot
spares is possible if there is sufficient storage capacity to cover
what was lost.
Failover clusters can use Storage Spaces. However, failover
clustering only supports Storage Spaces using SAS as a
storage medium. Virtual disks that are created from a storage
pool and used in a failover cluster must use the New
Technology File System (NTFS).
Storage Spaces also have several notable limitations:
• They are not supported on boot, system, or Cluster Shared
Volumes (CSVs).
• Each drive must be 10 GB or larger.
• Only un-formatted or un-partitioned drives should be added
to a storage pool (the contents of drives being added to a
pool will be lost).
• All drives in a pool must use the same sector size.
• Fibre Channel and Internet SCSI (iSCSI) are not supported.
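These limitations amount to a pre-flight check an administrator could run before adding a drive to a pool. The helper below is hypothetical (not a Microsoft API) and simply encodes the constraints from the text:

```python
# Hypothetical pre-flight check for adding a drive to a storage pool,
# following the limitations listed above (not a Microsoft API).

MIN_DRIVE_GB = 10
UNSUPPORTED_BUSES = {"Fibre Channel", "iSCSI"}

def pool_add_errors(drive, pool_sector_size=None):
    """drive: dict with keys size_gb, bus, formatted, sector_size.
    Returns a list of reasons the drive cannot join the pool (empty if OK)."""
    errors = []
    if drive["size_gb"] < MIN_DRIVE_GB:
        errors.append("drive must be 10 GB or larger")
    if drive["bus"] in UNSUPPORTED_BUSES:
        errors.append("Fibre Channel and iSCSI are not supported")
    if drive["formatted"]:
        errors.append("only un-formatted/un-partitioned drives should be "
                      "added (existing contents will be lost)")
    if pool_sector_size is not None and drive["sector_size"] != pool_sector_size:
        errors.append("all drives in a pool must use the same sector size")
    return errors
```

A 3 TB SAS drive with a matching sector size passes; a 5 GB iSCSI target fails on two counts.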
Storage does not need to be local to harness the benefits of
Windows Server 2012. Independent testing by ESG in January
2013 reports that direct-attached storage (DAS) accessed by
SMB Direct clocked I/O operations per second (IOPS) at 94–98
percent of those of local storage pools with the same workload.10
Taken all together, the capabilities of SMB and the Storage
Space capabilities of Windows Server 2012 begin to resemble
those of traditional SANs.
Table 3. Functionality comparison between iSCSI storage arrays and Windows Server* file clusters

Fibre Channel/iSCSI Array                                      Windows Server* File Cluster
Storage Tiering                                                Storage Tiering (new with Windows Server 2012 R2)
Data Deduplication                                             Data Deduplication (Windows Server 2012, enhanced with R2)
Redundant Array of Independent Disks (RAID) Resiliency Groups  Flexible Resiliency Options (enhanced with Windows Server 2012 R2)
Pooling of Disks                                               Pooling of Disks
High Availability                                              Continuous Availability
Copy Offload                                                   SMB Copy Offload
Snapshots                                                      Snapshots
Maximum VMs                                                    8,000
However good Storage Spaces might be, it is still, at root,
software-based fault tolerance. Many enterprises will doubtless
still opt for the hardware-based fault tolerance of a RAID array
in a SAN. Organizations can also choose to use a SAN to take
advantage of ODX with Windows Server 2012. This can provide
for the migration of virtual machines from one place to another
within the array for exceptional transfer speeds.
Comparison with VMware Virtual SAN*
It is tempting to view the beta release of VMware* Virtual
SAN* (VSAN) as an extension of the rivalry between Microsoft
and VMware to storage virtualization. And there are indeed
similarities between VSAN and the storage-virtualization
features in Windows Server 2012. VSAN simplifies pooling of
direct-attached storage (DAS) and can provide software-based
fault tolerance in storage, particularly for virtual machines. But
there are also some significant differences between the two
storage-virtualization offerings.
Feature-wise, VSAN does not have deduplication capabilities,
which could be a significant omission for enterprise customers.
For example, January 2013 testing of deduplication in Windows
Server 2012 yielded 28 percent storage capacity savings for
a file server workload and 98 percent savings for a library of
similar virtual machines (with only a 3 percent increase in time
to open deduplicated files).11 And third-party vendors of hyper-
converged storage and compute solutions like Nutanix and
SimpliVity also offer deduplication.
Moreover, VSAN does not provide tiered storage in the same
way that Windows Server 2012 does. While VSAN does support
mixing SSDs and HDDs for storage (in fact, VSAN requires that
each VMware vSphere host participating in a VSAN cluster
have at least one SSD and one HDD), VSAN uses the attached
SSDs as an automated buffer to cache cluster reads and
writes. In contrast to Windows Server 2012, administrators
cannot store specific files of their choosing in the SSD tier in a
VSAN cluster. VSAN ultimately provides organizations that use
VMware technologies with a simpler means of pooling storage
for virtual machines but it does not go far beyond that. This
feature is limited to creating a virtual, RAID-like storage cluster for
virtual workloads only, and it cannot yet extend the benefits of
virtualized storage to other workloads.
Live Migration in Windows Server 2012*
Microsoft Claim: Hyper-V in Windows Server 2012 supports
unlimited simultaneous live migrations over both 1 GbE and
10 GbE networks, including live storage migration and shared-
nothing live migration.
The original version of Hyper-V only offered quick migration,
which pauses the virtual machine briefly during the switchover
from host to host. It is still available for times when uptime is
not critical—and in many cases a quick migration can be faster
than a live migration. Hyper-V in Windows Server 2008 R2
added live migration but only allowed a single live migration
between two hosts in a cluster. As the maximum number of
hosts in a Windows Server 2008 R2 cluster is 16, this allowed
for a total of eight simultaneous live migrations. Hyper-V would
serialize multiple live migrations from a single host, performing
them one after another, which could be a limitation in large
environments.
Live Migration without Shared Storage: “Shared-Nothing”
Live Migration
Live migration without shared storage (also known as “shared-
nothing” live migration) is new in Windows Server 2012. It
enables users to migrate their virtual machines and associated
storage between servers running Hyper-V within the same
domain. This kind of live migration requires only an Ethernet
connection. However, this type of live migration does not
provide high availability (live migration with failover clusters
offers this benefit with its shared storage). Figure 7 provides an
overview of live migration without shared storage.
Live migration without shared storage can increase flexibility for
virtual machine placement and reduce downtime for migrations
across cluster boundaries. An administrator could use shared-
nothing live migration to quickly move virtual workloads off
of a host server that needed repairing and then move them
back once the problem was solved. In addition, administrators
can use this type of live migration to move virtual machines
between clusters and from a non-clustered server to a failover
cluster. Administrators can also use it to migrate virtual
machines between different storage types.
During a live migration without shared storage, the virtual
machine continues to run while all of its storage
is mirrored across to the destination server running Hyper-V.
Figure 7. Overview of live migration without shared storage1
After the Hyper-V storage is synchronized, the live migration
completes its remaining tasks. Finally, the mirroring stops and
the storage on the source server is deleted. Hyper-V performs
the migration as detailed in Figure 8.
1. Throughout most of the migration operation, disk reads
and writes go to the source virtual hard disk.
2. While reads and writes occur on the source virtual hard
disk, the disk contents are copied over the network to the
new destination virtual hard disk.
3. After the initial disk copy is complete, disk writes are
mirrored to both the source and destination virtual hard
disks, while outstanding disk changes are replicated.
4. After the source and destination virtual hard disks are
synchronized, the virtual machine live migration is initiated,
following the same process that was used for live migration
with shared storage.
5. After the live migration is complete and the virtual machine
is successfully running on the destination server, the files
on the source server are deleted.
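The five steps above can be sketched as a toy simulation. Disks are modeled as dicts of block number to data; this illustrates only the phase ordering (bulk copy, replication of outstanding changes, mirrored writes, cut-over, cleanup), not Hyper-V's actual implementation:

```python
# Toy simulation of the storage phase of a shared-nothing live migration,
# following steps 1-5 above. Not Hyper-V's implementation; an illustration
# of the phase ordering only.

def migrate_storage(source_disk, writes_during_copy, writes_after_copy):
    destination = {}
    # Step 2: bulk-copy the source disk while the VM keeps writing to it.
    destination.update(dict(source_disk))
    outstanding = {}
    for block, data in writes_during_copy:
        source_disk[block] = data          # reads/writes still go to source
        outstanding[block] = data          # changes tracked for step 3
    # Step 3: replicate outstanding changes, then mirror new writes to both.
    destination.update(outstanding)
    for block, data in writes_after_copy:
        source_disk[block] = data
        destination[block] = data          # mirrored write
    # Step 4: source and destination are now synchronized; the VM's
    # memory/state migration proceeds as in a shared-storage live migration.
    assert destination == source_disk
    # Step 5: after cut-over, the files on the source server are deleted.
    source_disk.clear()
    return destination
```

Running it with a write during the bulk copy and another after it shows both changes present on the destination and the source emptied at the end.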
After the virtual machine’s storage is migrated, the virtual machine
migrates while it continues to run and provide network services.
Because shared-nothing live migrations have to copy the entire
virtual hard disk (in addition to disk writes) in the course of the
operation, this type of live migration can be exceptionally taxing
on network connections.
Windows Server 2012 allows as many simultaneous live
migrations as an organization wants. This is something of a
mixed blessing, however: particularly with the large maximum
configurations permitted with Windows Server 2012, it is
possible that moving a few, large, tier-1 application servers could
tax even a 10 gigabit per second (Gbps) network connection.
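A back-of-envelope calculation shows why. The sketch below assumes a VM at the Windows Server 2012 per-VM maximum of 1 TB of RAM, and ignores dirty-page retransfers, protocol overhead, and compression, so real migrations take longer; the point is the order of magnitude:

```python
# Back-of-envelope transfer time for live-migrating VM memory over a
# network link. Ignores dirty-page retransfers, protocol overhead, and
# compression; intended only to show the order of magnitude.

def migration_seconds(ram_gb, link_gbps):
    return ram_gb * 8 / link_gbps   # GB -> gigabits, then gigabits / Gbps

# A single VM with 1 TB (1,024 GB) of RAM, the Windows Server 2012
# per-VM maximum, would occupy a 10 Gbps link for roughly 819 seconds,
# or nearly 14 minutes, even under these ideal assumptions.
```

Even a handful of such tier-1 VMs migrating simultaneously would monopolize a 10 Gbps link for the better part of an hour.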
Assessment of Competitive Position
Hyper-V 2012 also introduces many new features that make
it more attractive to small and midsized companies where
cost is a significant driver. The new capabilities in the SMB
3.0 protocol allow non–storage specialists to stand up a
high availability Hyper-V cluster using low-cost servers and
commodity SAS disk drives. In the past, companies would
have been required to purchase an expensive storage system
to get the same level of performance.
Implications for IT
As might be expected, the requirements to get live migration
to work can be complex. Generally any form of live migration
requires the following:
• Two (or more) servers running Hyper-V that:
* Support hardware virtualization
* Are using processors from the same manufacturer (for
example, all AMD or all Intel)
* Belong to either the same Active Directory Domain
Services* (AD DS) domain or to domains that trust
each other
• Virtual machines must be configured to use virtual hard
disks or virtual Fibre Channel disks (no physical disks).
• Use of a private network is recommended for live migration
network traffic.
Figure 8. Steps of live migration without shared storage1
For live migration in a cluster:
• Windows Failover Clustering is enabled and configured.
• CSV storage in the cluster is enabled.
Physical disks that are directly attached to a virtual machine
(pass-through disks) are supported when all of the following
conditions are met:
• The virtual machine configured with one or more physical
disks is running on a Hyper-V failover cluster.
• The virtual machine configuration file is hosted on a CSV.
• The physical disks are configured as a storage disk
resource under control of the failover cluster and are
properly configured as a dependent resource for the
virtual machine.
Requirements for live migration using shared storage:
• All files that comprise a virtual machine (for example, virtual
hard disks, snapshots, and configuration files) are stored
on an SMB share.
• Permissions on the SMB share have been configured
to grant access to the computer accounts of all servers
running Hyper-V.
No extra requirements exist for live migration with no shared
infrastructure (the shared-nothing live migration). However,
physical disks that are directly attached to a virtual machine
(pass-through disks) are not supported in live migration without
shared storage (shared-nothing live migration).
Live Migration Improvements in Windows Server 2012 R2
Microsoft Claim: Windows Server 2012 R2 improves live
migration transfer speeds for most workloads by roughly two
times with live migration compression; it supports transfer
speeds of up to 56 Gbps by offloading the transfer to hardware
and harnessing the power of RDMA technologies.
Compression during live migration is the default in Windows
Server 2012 R2. Compression uses the host server CPU to
reduce the data sent over the network and can be a good
approach in IT environments that have bandwidth limitations
(and many environments have more spare compute capacity
than unused network bandwidth). Compression can reduce live
migration times by around 80 percent.
SMB Direct (SMB over RDMA), on the other hand, is the
best option to use when there are plenty of available network
resources but limited CPU availability (as it bypasses the
processor entirely). Moreover, SMB Direct can give the greatest
live migration performance: Microsoft reports benchmarking
live migrations at 22 seconds using RDMA versus 38 seconds
with memory compression. Microsoft's guideline is to use
compression at bandwidths of 10 Gbps or less, and SMB or
SMB Direct at bandwidths greater than 10 Gbps.
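Microsoft's guideline maps to a simple decision rule, sketched below (an illustration of the stated guidance, not a Microsoft tool):

```python
# Microsoft's stated guideline as a decision rule: compression at or below
# 10 Gbps; above 10 Gbps, SMB Direct when the adapters are RDMA-capable,
# otherwise plain SMB. Illustrative sketch only.

def live_migration_transport(bandwidth_gbps, rdma_capable):
    if bandwidth_gbps <= 10:
        return "compression"   # trade spare CPU for scarce bandwidth
    return "SMB Direct" if rdma_capable else "SMB"
```

On a 10 GbE network the rule picks compression; on a 40 Gbps RDMA-capable fabric it picks SMB Direct.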
The Microsoft Cloud
In defining its private-cloud solution, Microsoft borrows heavily
from the National Institute of Standards and Technology's
Special Publication 800-145, which defines the essential
characteristics of a cloud to include resource pooling, self-
service, elasticity, and measurement of service.12 Windows
Server 2012 and Hyper-V are the foundations for abstracting
and pooling key hardware resources (compute, storage, and
networking) into units that enable dynamic provisioning and
scaling in the Microsoft cloud. Microsoft in turn relies upon
System Center 2012 SP1 for delivering the other characteristics
of private clouds.
Microsoft Private Cloud Capabilities
Management for the Microsoft private-cloud solution
is provided by the System Center 2012 SP1 suite of
applications, specifically:
• System Center 2012 Virtual Machine Manager (VMM),
which provides the fundamental services for creating
and managing clouds, specifically in deploying and
updating virtual machines and applications. VMM analyzes
performance data and resource requirements for both
workloads and hosts, enabling administrators to fine-
tune placement algorithms to receive optimal deployment
recommendations from the software.
• System Center 2012 App Controller, which is the
self-service portal through which users make requests
directly to a private cloud created with VMM. Data center
administrators can delegate control of applications and
virtual machines to application owners through the web-
based, self-service interface.
• System Center 2012 Service Manager, which provides
automated IT service management and an associated
self-service portal. It can provide IT departments with
processes for incident and problem resolution, change
control, and asset life-cycle management both for physical
and virtual servers.
• System Center 2012 Orchestrator, which provides a way
to automate interactions among other management tools
such as VMM and Service Manager. It integrates both
with Microsoft products and with other products, allowing
administrators to connect different systems without any
knowledge of scripting or programming languages.
• System Center 2012 Operations Manager, which can
monitor virtual machines, applications, and other aspects
of a private cloud, and then initiate actions to fix problems it
detects. Management packs are available for most current
Microsoft server applications and operating systems, in
addition to many from third parties to enable Operations
Manager to monitor specific applications.
Other pieces of the System Center suite, such as System
Center 2012 Configuration Manager, System Center 2012
Data Protection Manager, and System Center 2012 Endpoint
Protection, play an auxiliary role in cloud management
supporting the hardware and storage that underpin the virtual
fabric of the private cloud.
Licensing
Previous versions of products within the System Center family
could be licensed either separately or together. That changed
with System Center 2012, which is only available bundled in
either the Datacenter or Standard editions. Both SKUs are
licensed by the number of physical processors present on
the managed server and differ only in virtualization rights:
System Center 2012 Standard permits the management of
two operating system environments (OSEs), physical or virtual
servers, per license; System Center 2012 Datacenter has no
limit. Thus a four-processor server running no virtual machines
would require two licenses of either Standard or Datacenter,
whereas a two-processor server running three virtual machines
would require two licenses of Standard or a single Datacenter
license, and a four-processor server running six virtual
machines would require four Standard licenses but only two
Datacenter licenses.
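The worked examples above imply a simple counting rule: each license covers up to two physical processors; Standard additionally covers two OSEs per license and stacks in whole processor-covering increments, while Datacenter has no OSE limit. The sketch below reproduces all three examples from the text, but it is one interpretation of those examples, not official Microsoft licensing guidance:

```python
import math

# Counting rule implied by the licensing examples above (an interpretation,
# not official Microsoft licensing guidance): each license covers up to two
# physical processors; Standard also covers two OSEs per license and stacks
# in whole processor-covering increments; Datacenter has no OSE limit.

def datacenter_licenses(processors):
    return math.ceil(processors / 2)

def standard_licenses(processors, oses):
    per_stack = math.ceil(processors / 2)   # licenses to cover all CPUs once
    for_oses = math.ceil(oses / 2)          # licenses to cover all OSEs
    stacks = max(1, math.ceil(for_oses / per_stack))
    return stacks * per_stack

# The three examples from the text:
# 4 processors, 0 VMs  -> 2 Standard or 2 Datacenter
# 2 processors, 3 VMs  -> 2 Standard or 1 Datacenter
# 4 processors, 6 VMs  -> 4 Standard or 2 Datacenter
```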
An additional change to licensing in System Center 2012 is in
licensing the Microsoft SQL Server* technology used by all
System Center products. Licensing terms for System Center
2012 products allow customers to run one instance of SQL
Server in one physical or virtual OSE to support (and only to
support) the System Center application. This instance of SQL
Server cannot be used to run other workloads. Microsoft
does not require SQL Server client access licenses for this
support use.
System Center 2012 Standard and System Center 2012
Datacenter licenses cost $1,323 and $3,607, respectively, for
open, no-level licenses (that is, without volume discounts) and
two years of software assurance. However, as analyzed in the
Market Position versus VMware section of this paper (see page
4), Microsoft’s ECI license, which bundles in both Windows
Server and System Center licenses for a list price of $5,056
per processor, will generally make the most sense for private
cloud-infrastructure deployments.
Support for Heterogeneous Virtualized Environments
Microsoft Claim: The Microsoft private cloud is designed to
let companies comprehensively support your heterogeneous
IT environment, taking advantage of your investments and
maximizing your development skill sets. Organizations can keep
what they have and make the move to a new kind of agility.
Technically speaking, Microsoft cloud-management tools
do work across Hyper-V, VMware ESX and ESXi, and Citrix
XenServer hypervisors, even in environments that combine two
or more of these virtualization platforms. Moreover, Hyper-V
supports a range of guest operating systems (that is, those
running in virtual machines), including several versions of
Windows, Windows Server, and Linux.
On the Citrix side of heterogeneous hypervisor support, VMM
2012 supports Citrix XenServer v6.2.0.
Implications for IT—Managing VMware Hypervisors
The situation managing VMware with VMM is more nuanced
than managing Citrix. Support for ESX 5.0 and ESXi 5.0
hypervisors did not arrive in System Center 2012 VMM until
Service Pack 1. There is much that VMM can manage on
VMware. The VMM command shell is common across all
hypervisors. VMM offers virtual machine placement based on
host ratings during the creation, deployment, and migration
of VMware virtual machines. VMM supports live migration of
VMware virtual machines using vMotion and Storage vMotion
(in addition to transferring virtual machines directly using VMM).
VMM supports and recognizes VMware Paravirtual SCSI
(PVSCSI) storage adapters—when administrators use VMM
to create a new virtual machine on an ESX host, they can add
a SCSI adapter of type “VMware Paravirtual.” Administrators
can make ESX host resources available to a private cloud by
creating private clouds from host groups where ESX hosts
reside or by creating a private cloud from a VMware resource
pool. This extends to configuring quotas for the private cloud
and for self-service user roles that apply to the private cloud.
Even management functions on ESX host servers supported
by VMM can be heavily qualified, however. While ESX host
resources can be made available in a private cloud, VMM
does not integrate with VMware vCloud*. VMM supports the
organization and storage of VMware virtual machines (.vmdk files)
and VMware templates in the VMM library, but does not support
some older types of VMDK files, which can only be converted
to supported file types by VMware conversion tools such as
VMware Virtual Disk Manager. VMM supports the hot-add and
hot-removal of virtual hard disks on VMware virtual machines,
but new VMM storage automation features are not supported for
ESX hosts. All storage must be added to ESX hosts outside
VMM. Finally, VMM uses HTTPS for all file transfers between
ESX hosts and the VMM library; VMM no longer supports
Secure File Transfer Protocol (SFTP) for file transfers, and the use
of a virtual machine delegate is not supported.
There are also a number of VMware management tasks that
simply cannot be performed from VMM, notably those related
to networking and services. VMM supports both standard
and distributed virtual switches (vSwitches) and port groups.
VMM recognizes and uses existing configured vSwitches and
port groups for virtual machine deployment, including those
configured using VMware vCenter Server. Moreover, new
VMM networking management features—such as assigning
logical networks and assigning static IP addresses and MAC
addresses to virtual machines based on Windows—are
supported on ESX hosts. However, VMM does not support
vSwitch and port group configuration on VMware ESX hosts.
Administrators must use VMware vCenter Server to configure
port groups with the necessary VLANs that correspond to the
logical network sites.
The situation is similar with services, sets of virtual machines
that are configured and deployed together and managed as a
single entity (for example, a deployment of a multi-tier line-of-
business application). Administrators can deploy VMM services
to ESX hosts and, because VMM services use a different
model than VMware vApp*, the two methods can coexist.
However, administrators cannot use VMM to deploy vApps and
must instead use VMware vCenter Server*.
A 2012 study by the Edison Group found that using VMM to
manage a VMware vSphere environment was 56 percent less
efficient (in terms of time spent to perform a given task) and
69 percent more complex (in terms of system-affecting steps
it takes to complete a given task) than using VMware vCenter
to manage a comparable set of VMware virtual machines.
Edison estimated the expense of this inefficiency at more than
$32,552 per year in a 1,000-virtual machine data center.13 This
is certainly plausible: Microsoft itself goes so far as to say
on TechNet that it expects administrators to perform more
advanced fabric management (such as the configuration
of port groups and VMware vMotion and Storage vMotion)
through VMware vCenter Server.14 However, it should be noted
that VMware commissioned this study and it is not difficult
to imagine an element of sour grapes to it: some of the need
for heterogeneous-hypervisor management tools has come
from Hyper-V making inroads into VMware shops. For its part,
VMware only started experimenting with the XVP Manager*
plug-in for VMware vCenter in 2011; VMware’s acquisition
of Dynamic Ops in 2012 helped the company in providing
management tools for heterogeneous cloud environments,
particularly AWS. However, VMware’s first formal product to
tackle multi-hypervisor management, VMware vCenter Multi-
Hypervisor Manager, only came out in 2013.
The June 2013 Gartner Magic Quadrant study on x86
server virtualization still put VMware ahead of Microsoft in
the leaders’ quadrant; while the consultancy did not list lack
of heterogeneous-server management tools as a specific
weakness for VMware, Gartner estimated in the same report
that 60 to 70 percent of Windows workloads are already
virtualized as opposed to 35 to 45 percent of Linux ones.15
Much future growth for either VMware or Microsoft will clearly
come from solving business pain in virtualizing and managing
those workloads (both virtual and physical).
Microsoft’s current limitations in managing other hypervisors
are real enough, but it continues to make progress in this area.
Microsoft has also made strides in managing guest operating
systems. Table 4 outlines Microsoft product compatibilities in
this realm.
Table 4. Guest operating system compatibility with Microsoft products

                                               --------------------- Linux* ---------------------   ------------- UNIX* -------------
Product                                        Red Hat   SUSE*   CentOS*   Ubuntu*  Debian*  Oracle  IBM AIX*   HP-UX*     Solaris*
Microsoft System Center 2012*
  Operations Manager                           Y         Y       Y         Y        Y        Y       Y          Y          Y
System Center 2012 Configuration Manager       Y         Y       Y         Y        Y        Y       Y          Y          Y
System Center 2012 Endpoint Protection         Y         Y       Y         Y        Y        Y       No plans   No plans   No plans
System Center 2012 Virtual Machine Manager     Y         Y       Y         Y        Future   Future
Hyper-V*                                       Y         Y       Y         Y        Future   Future
Windows Azure* Infrastructure as a
  Service (IaaS)                               Future    Y       Y         Y        Future   Future
For information about the specific versions supported by
Microsoft, see http://technet.microsoft.com/library/
hh831531.aspx.
New in Windows Server 2012 R2, Hyper-V now offers full
dynamic memory support for Linux guests including:
• Minimum memory setting—being able to set a minimum
value for the memory assigned to a virtual machine that is
lower than the startup memory setting
• Hyper-V smart paging—paging that is used to enable a
virtual machine to reboot while the Hyper-V host is under
extreme memory pressure
• Memory ballooning—the technique used to reclaim unused
memory from a virtual machine to be given to another
virtual machine that has memory needs
Implications for IT—Heterogeneous Guest Operating System Management
Until now, if IT organizations wanted to take advantage of
Linux Integration Services (LIS) for their Hyper-V environments,
they had to go to the Microsoft download center, download
the correct LIS package for their Linux distribution, and then
manually install it on their Hyper-V servers. New for Windows
Server 2012 R2 Hyper-V hosts, key Linux distributions (such as
Red Hat Enterprise Linux*, SUSE*, CentOS*, and Ubuntu*) will
include LIS for Hyper-V in their standard distributions, so no
manual installation step is required to take advantage of the
latest LIS capabilities.
In terms of management with System Center, the management
agent must be installed in the guest OS. Microsoft's agent
for Linux or UNIX* computers used to be based on the
OpenPegasus* CIM server, but is now based on the Open
Management Infrastructure (OMI) stack. OMI is an open-source
CIM server implementation that Microsoft contributed to the
Open Group in 2012. OMI is the successor to the venerable
Windows Management Instrumentation (WMI). However, instead
of using the Distributed Component Object Model (DCOM)
for remote management (as WMI did in the absence of a
standard protocol when it was first released with Windows
NT 4.0), Microsoft has aligned WMI to the current Distributed
Management Task Force (DMTF) standards to produce
OMI. OMI supports both the DMTF CIM and Web Services-
Management (WS-Management) standards. Microsoft Lead
Architect Jeffrey Snover described the motivation behind OMI
as the creation of an abstraction layer for data centers akin to
the hardware abstraction layer (HAL) that came out in Windows
NT.16 However, to date only Cisco (in its Cisco Nexus 6000
Series of switches) and Arista (across all its platforms) provide
support for OMI.
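To make the WS-Management standard concrete, the sketch below builds the SOAP envelope for a WS-Man Enumerate of a single CIM class, the request an OMI (or WinRM) client sends to list instances. The server address is hypothetical; the XML namespaces and the CIM resource URI pattern come from the published standards. A real client would also handle authentication and parse the EnumerateResponse.

```python
# Sketch: a WS-Management "Enumerate" request for one CIM class.
# Endpoint URL is hypothetical; namespace URIs are the published ones.
import xml.etree.ElementTree as ET

SOAP = "http://www.w3.org/2003/05/soap-envelope"
WSA = "http://schemas.xmlsoap.org/ws/2004/08/addressing"
WSMAN = "http://schemas.dmtf.org/wbem/wsman/1/wsman.xsd"
WSEN = "http://schemas.xmlsoap.org/ws/2004/09/enumeration"

def build_enumerate_request(endpoint: str, resource_uri: str) -> str:
    """Build a minimal WS-Man Enumerate envelope for one CIM class."""
    env = ET.Element(f"{{{SOAP}}}Envelope")
    header = ET.SubElement(env, f"{{{SOAP}}}Header")
    ET.SubElement(header, f"{{{WSA}}}To").text = endpoint
    ET.SubElement(header, f"{{{WSA}}}Action").text = WSEN + "/Enumerate"
    ET.SubElement(header, f"{{{WSMAN}}}ResourceURI").text = resource_uri
    body = ET.SubElement(env, f"{{{SOAP}}}Body")
    ET.SubElement(body, f"{{{WSEN}}}Enumerate")
    return ET.tostring(env, encoding="unicode")

request = build_enumerate_request(
    "https://server.example.com:5986/wsman",  # hypothetical OMI/WinRM host
    "http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_OperatingSystem",
)
```

The same envelope shape works against any WS-Man listener, which is the point of OMI: one wire protocol for Windows, Linux, and (per the vendors above) network switches.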
Public and Hybrid Clouds: Windows Azure*
Windows Azure is Microsoft’s multi-tenant cloud-computing
platform. It provides both platform as a service (PaaS) and
infrastructure as a service (IaaS). Windows Azure Virtual
Machines comprise the IaaS offering. These virtual machines
can be based on Windows Server 2012 or built on Ubuntu,
CentOS, SUSE Linux Enterprise, or openSUSE* Linux
distributions. Customers can upload custom virtual machine
images and data disks in .vhd files to Windows Azure or
choose from pre-built Windows or open-source virtual
machine images in the Windows Azure Image Gallery. All
virtual machines in Windows Azure are based on the .vhd
format, which simplifies migrating virtual workloads to and from
Windows Azure (VMware workloads must first be converted to
.vhd before they can be migrated to Windows Azure).
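Because the .vhd format is publicly specified, inspecting an image before upload is straightforward. This sketch decodes two fields of the big-endian, 512-byte footer that ends every VHD file; the demo footer built at the end is synthetic, not taken from a real image.

```python
# Sketch: parse the disk type and current size from a VHD footer,
# per the public VHD Image Format Specification (big-endian fields).
import struct

DISK_TYPES = {2: "fixed", 3: "dynamic", 4: "differencing"}

def parse_vhd_footer(footer: bytes) -> dict:
    """Decode selected fields of the 512-byte VHD footer."""
    if len(footer) != 512 or footer[:8] != b"conectix":
        raise ValueError("not a VHD footer")
    (current_size,) = struct.unpack_from(">Q", footer, 48)  # bytes 48-55
    (disk_type,) = struct.unpack_from(">I", footer, 60)     # bytes 60-63
    return {"current_size": current_size,
            "disk_type": DISK_TYPES.get(disk_type, "unknown")}

# Build a synthetic footer for demonstration.
footer = bytearray(512)
footer[0:8] = b"conectix"                     # VHD footer cookie
struct.pack_into(">Q", footer, 48, 10 * 1024**3)  # 10 GiB current size
struct.pack_into(">I", footer, 60, 2)             # type 2 = fixed disk
info = parse_vhd_footer(bytes(footer))
# info -> {'current_size': 10737418240, 'disk_type': 'fixed'}
```

A real footer also carries a checksum and geometry fields; a pre-upload check like this catches the common mistake of handing Windows Azure a dynamic or differencing disk when a fixed one is expected.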
The Windows Azure PaaS offering is composed of Windows
Azure Web Sites, Windows Azure Cloud Services (which
provides containers for hosted applications, both public web
apps and private line-of-business apps), and Windows Azure
Mobile Services (which provides cloud-based back-end
support for Windows, Google Android*, and Apple iOS* mobile
apps). Windows Azure Web Sites can be built in ASP.NET*,
PHP*, Node.js*, Python*, or Classic ASP* using either SQL
Database or MySQL* for the database and support for Git*,
GitHub*, Bitbucket*, CodePlex*, TFS*, and Dropbox* for source
control. Microsoft offers software development kits
(SDKs) for Windows Azure Cloud Services in Python, Java*,
Node.js, and .NET. For Windows Azure Mobile Services,
Microsoft provides SDKs for Windows, Android, iOS, and
HTML, in addition to a representational state transfer (REST)
API; Windows Azure Mobile Services also supports sending
push notifications to Android and iOS devices.
Microsoft Claim: A business with a Microsoft private cloud
can seamlessly burst into Windows Azure and manage their
private and hybrid environments from a single pane of glass.
System Center does provide management tools to span hybrid
configurations of connected public and private clouds. Beyond
VMM, System Center 2012 SP1 App Controller is a lynchpin in
the Microsoft paradigm for spanning private and public clouds.
App Controller provides a web-based, self-service portal
to configure, deploy, and maintain virtual machines and
services across private and public clouds. The self-service
paradigm is particularly important in the context of cloud
management (especially in hybrid deployments). Given the
administrative overhead inherent in a cloud environment, IT
can easily become a bottleneck, managing who can do
what, where, when, and how much for each individual hybrid
cloud deployment and across on-premises and off-premises
facilities. Delegating authority to business and technical roles
responsible for an application—including solution architects,
release owners, application developers, and technical support
staff—so that they can manage and maintain their own
deployments is an effective and efficient way to
manage a hybrid cloud environment. In System Center 2012
SP1, both VMM and App Controller have the same self-service
model and delegation of authority built in with “user role”
profiles and “run as” accounts.
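The delegation model described above can be pictured as scoped role assignments: a role is allowed to perform certain actions, but only within the clouds delegated to it. The sketch below is a generic illustration under assumed names; it does not mirror VMM's or App Controller's actual "user role" object model.

```python
# Generic sketch of scoped self-service delegation. Role names, cloud
# names, and action strings are illustrative, not VMM's real profiles.
from dataclasses import dataclass, field

@dataclass
class UserRole:
    name: str
    scope: set = field(default_factory=set)    # clouds the role may touch
    actions: set = field(default_factory=set)  # operations it may perform

def is_allowed(role: UserRole, cloud: str, action: str) -> bool:
    """An operation is permitted only inside the role's delegated scope."""
    return cloud in role.scope and action in role.actions

app_owner = UserRole("Application Owner",
                     scope={"Contoso-Private", "Azure-Dev"},
                     actions={"deploy", "stop", "scale"})
```

With the checks expressed this way, IT defines the scopes once, and each application team operates freely inside its own boundaries without filing tickets for every deployment.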
Beyond providing self-service management, App Controller
can act as the sinew linking private and public clouds.
Certificates establish trust between the Windows Azure
management API and App Controller. This authentication
allows App Controller to call on the Windows Azure API when
administrators perform tasks such as deploying services or
changing configuration properties. The service certificate, or
Personal Information Exchange certificate (.pfx file), contains
the private key. App Controller stores this certificate in the App
Controller database. The management certificate (.cer file),
which contains only the public key, is kept in Windows Azure
for accessing the Windows Azure management API. Windows
Azure allows customers to create their own management
certificates, either self-signed certificates or using their
preferred certification authority (CA).
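A rough sketch of the exchange described above, in Python: the client presents the certificate's private key during the TLS handshake and then calls the management API over HTTPS. The subscription ID, certificate path, and API version string below are placeholders, and the PEM file stands in for a key pair converted from the .pfx.

```python
# Sketch: certificate-authenticated call to the Windows Azure Service
# Management API. Subscription ID, cert path, and version header value
# are placeholders; a PEM converted from the .pfx is assumed.
import ssl
import http.client

def build_management_request(subscription_id: str):
    """Return (path, headers) for listing a subscription's hosted services."""
    path = "/%s/services/hostedservices" % subscription_id
    headers = {"x-ms-version": "2012-03-01"}  # placeholder API version
    return path, headers

def call_management_api(subscription_id: str, cert_pem_path: str):
    """Perform the call, authenticating with the certificate's private key."""
    context = ssl.create_default_context()
    context.load_cert_chain(cert_pem_path)    # client cert + private key
    conn = http.client.HTTPSConnection("management.core.windows.net",
                                       context=context)
    path, headers = build_management_request(subscription_id)
    conn.request("GET", path, headers=headers)
    return conn.getresponse()
```

The management side only ever sees the public key (the .cer), which is why compromising the portal does not expose the credential that App Controller holds locally.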
The interaction between App Controller and the public cloud is
similar in the case of connecting to a hosting provider instead
of Windows Azure. Rather than interface with the Windows
Azure management API, App Controller uses certificates to
establish trust with Service Provider Foundation (a component
of System Center 2012 SP1 Orchestrator). The private key is
similarly stored locally and the public key given to the hosting
provider. The only major difference between the two processes
is that administrators must provide the Service Provider
Foundation OData protocol URI for the VMM service.
Once connected to the cloud (or clouds), App Controller
provides not just a single pane of glass, but the only means
of managing many operations on the cloud. Take storing and
copying a virtual machine to the cloud as an example: storing
a virtual machine can be initiated from either App Controller or
the VMM admin console. The process essentially exports the
virtual machine instance from its current location to a network
share specified in the Stored VM path of the associated
cloud library properties. The store process encapsulates the
current state of the virtual machine and makes it portable and
redeployable. However, while one can copy a virtual machine
to Windows Azure with App Controller, it is not possible to do
so from VMM.
VMware is a competitor for Microsoft in this arena as well.
VMware recently unveiled VMware vCloud Hybrid Service*, an
IaaS service built on VMware. It provides many of the same
functions as Windows Azure, with customers using VMware
vCloud Connector* to connect, view, copy, and operate
VMware vSphere virtualized applications across clouds based
on VMware vSphere. (Note: the service is currently restricted to the
United States, with global coverage rolling out in 2014.)
Neither Microsoft nor VMware can yet match the market share
for public cloud services of AWS. AWS is the name of the
collection of web services provided by Amazon, of which the
portion most analogous to Windows Azure or VMware vCloud
Hybrid Service is Amazon Elastic Compute Cloud* (EC2*).
Launched in 2006, EC2 has had much more time to solidify
its place in the cloud market than either Microsoft or VMware.
However, Amazon does not provide its own virtualization (it
uses virtual machines based on Xen* for the service). This
might be more of an advantage for VMware than Microsoft,
however: third-party tools like HotLink Hybrid Express* provide
a VMware vCenter plug-in for managing Amazon and other
public cloud workloads, and VMware’s 2012 acquisition of
DynamicOps provides VMware with its own tool for extending
management into AWS. AWS does have a management
pack for System Center Operations Manager (similar to the
Operations Manager management pack for clouds based
on Microsoft products) to monitor the health of applications
running on AWS. However, this is a monitoring tool and it
does not enable administrators to copy or operate workloads
running on AWS.
Cloud Attributes
The Microsoft virtualization stack and its tools for managing
clouds form the raw building blocks for Microsoft’s public
cloud offering, Windows Azure. But it is Microsoft’s processes
that determine whether or not Microsoft’s public cloud is more
or less than the sum of its parts. A key attribute for a multi-
tenant cloud is security; on this count, Microsoft appears to be
meeting or exceeding expectations.
Public Cloud Security
Microsoft summarizes its commitment to security in Windows
Azure as “excellence in cutting-edge security practices.”17
Publicly available details about security in Windows Azure
focus exclusively on Microsoft’s security practices rather than
on particular technologies. Layers of
Microsoft’s security include:
• Filtering routers to reject attempts to communicate
between addresses and ports not configured as allowed
• Firewalls to restrict data communication to and from
known and authorized ports, protocols, and destination
and source IP addresses
• 128-bit transport-layer security for control messages
between and within the Windows Azure data center with
the option for customers to encrypt traffic between end
users and virtual machines
• Integrated security patch management and deployment
systems to manage the distribution and installation of
security patches for Microsoft software
• Centralized monitoring, correlation, and analysis of
information generated by devices within the environment
• Network segmentation, particularly back-end networks
composed of partitioned local area networks for web
and applications servers, data storage, and
centralized administration
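The first two layers amount to default-deny filtering: traffic is dropped unless its destination port and source address appear on an explicit allow list. A toy illustration follows, using example addresses from the RFC 5737 documentation range; real filtering happens in routers and firewalls, not application code.

```python
# Toy default-deny filter: both the source network and the destination
# port must be explicitly allowed. Addresses/ports here are examples.
from ipaddress import ip_address, ip_network

ALLOWED_PORTS = {443, 3389}                        # HTTPS, RDP
ALLOWED_SOURCES = [ip_network("203.0.113.0/24")]   # RFC 5737 example range

def permit(src: str, dst_port: int) -> bool:
    """Deny unless the source and the port are both on the allow list."""
    src_ok = any(ip_address(src) in net for net in ALLOWED_SOURCES)
    return src_ok and dst_port in ALLOWED_PORTS
```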
Physical security, personnel security, and auditing also,
unsurprisingly, form layers of the Windows Azure security stack.
Microsoft’s claim that it is thoroughly implementing these
important (if mundane) aspects of security is attested by
its ISO 27001:2005 certification. The underlying standard,
published by the International Organization for Standardization
(ISO), formally specifies a system by which information security
is brought under explicit management control. Microsoft’s
current certificate is issued by the British Standards Institution
(BSI).18 Beyond mapping its ISO 27001 certification to the 100
elements of Cloud Security Alliance’s (CSA) Cloud Controls
Matrix (CCM), Microsoft took the further step of having public
auditor Deloitte & Touche LLP assess Windows Azure for a
Service Organization Control (SOC) 2 Type 2 report in security,
availability, and confidentiality trust principles in 2013. Microsoft
claims that Windows Azure is the first cloud services provider
to receive such attestation by an independent, registered
public accounting firm.19
Separately, Microsoft Global Foundation Services (GFS), which
manages and operates the physical infrastructure on which
Windows Azure runs (except for the Windows Azure Content
Delivery Network), is ISO 27001:2005 certified. In addition,
Windows Azure and the GFS infrastructure undergo annual
Statement on Standards for Attestation Engagements (SSAE)
16 and International Standards for Assurance Engagements
(ISAE) 3402 audits by a registered public accounting firm.
What about other aspects of security and quality of service,
such as customers’ ability to specify hardware? This is
where Microsoft’s transparency about Windows Azure ends.
Microsoft confirmed that customer choices for virtual machines
on Windows Azure are limited to their size (number and clock
speed of virtual CPUs and the amount of memory), number,
and operating system (Windows or Linux).20 Windows Azure
customers cannot directly use technologies like Intel® Trusted
Execution Technology (Intel® TXT) to verify that the launch
of the hypervisor hosting their virtual machines was trusted.
(Microsoft presumably uses such technologies, but then again
the use of such technology is not a requirement of third-party
certifications such as the CSA’s CCM.) This is in stark contrast
to AWS, which explicitly states which processors different
Amazon EC2 instances use.21
Private Cloud Security
Security does not feature prominently in Microsoft’s messaging
for Hyper-V or Microsoft’s positioning of Windows Server 2012
as the “Cloud OS.” Where it does arise, it most often focuses
on data protection and identity management. Even sources
such as the Windows Server Security page focus largely on
best practices independent of new features (indeed, the latest
Hyper-V Security Guide was published in March 2009).22
This can make a certain amount of sense: human error and
sloppy processes can subvert even the most robust security
technologies, and many security best practices change slowly.
However, Windows Server 2012 does contain new security
features that are directly applicable to server virtualization
and private clouds. Principal among these are the adoption
of Unified Extensible Firmware Interface (UEFI), Early Launch
Anti-Malware (ELAM), Measured Boot, and improvements to
BitLocker Drive Encryption*.
Cloud environments are target-rich environments for attackers.
Hijacking the hypervisor stack (hyperjacking) on cloud host
servers can provide attackers with a persistent vector for
monitoring and extracting data from multiple virtualized
workloads that is completely invisible to the anti-malware
systems on the individual virtual machines. Moreover, as
anti-malware software has grown better at detecting runtime
malware on host servers, attacks utilizing rootkits that can hide
from detection during the boot cycle are increasingly attractive
to would-be attackers. Thus some of the most important
Windows Server 2012 security features deal with securing host
servers during boot up.
Windows Server 2012 has replaced the traditional Basic Input/
Output System (BIOS) with the UEFI boot standard. Microsoft’s
version of UEFI (2.3.1) prevents boot code updates without
appropriate digital certificates and signatures. Windows Server
2012 also supports the UEFI Secure Boot specification, which
uses a public-key infrastructure (PKI) to verify the integrity of
the operating system to prevent unauthorized programs such
as rootkits from infecting the device while loading the operating
system during boot up.
Complementary to UEFI Secure Boot is ELAM. To detect
malware that starts early in the boot cycle, anti-malware
vendors have often had to create system hacks not supported
by the host operating system. Unfortunately, these workarounds
could actually place the computer in an unstable state. ELAM fixes
this situation. It ensures that only known, digitally signed anti-
malware programs can load right after the Windows kernel
finishes loading. Thus legitimate anti-malware programs
can get into memory and start working before fake antivirus
programs or other malicious code can.
Both UEFI Secure Boot and ELAM work in conjunction with
the new Measured Boot feature in Windows Server 2012.
Measured Boot measures each component, from firmware up
through the boot start drivers, stores those measurements in
the Trusted Platform Module (TPM), and then
makes available a trusted log (that is, one resistant to spoofing
and tampering) of all boot components. Anti-malware software
can use this log to determine whether components that ran
before it are trustworthy or if they are infected with malware.
Server administrators can also examine the log to determine if
remediation is necessary.
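The measurement step can be sketched with the TPM extend operation: each component's hash is folded into a Platform Configuration Register, so the final value commits to the entire ordered boot chain and any tampered component changes it. A minimal simulation (SHA-1, as in TPM 1.2; the component names are illustrative):

```python
# Simulated TPM PCR extend: PCR_new = SHA1(PCR_old || SHA1(component)).
# Component names are illustrative; real measurements hash binaries.
import hashlib

def extend(pcr: bytes, component: bytes) -> bytes:
    """TPM 1.2-style extend of one measurement into a PCR."""
    return hashlib.sha1(pcr + hashlib.sha1(component).digest()).digest()

def measure_boot(components) -> bytes:
    pcr = b"\x00" * 20                 # PCRs start zeroed at power-on
    for c in components:
        pcr = extend(pcr, c)
    return pcr

final = measure_boot([b"firmware", b"bootloader", b"kernel", b"elam-driver"])
tampered = measure_boot([b"firmware", b"rootkit", b"kernel", b"elam-driver"])
assert final != tampered               # any altered component is detectable
```

Because extend is one-way and order-sensitive, an attacker cannot replay or reorder measurements to reproduce a clean PCR value, which is what makes the boot log trustworthy.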
Virtual Hard Disk (VHD) files for Hyper-V virtual machines must
be protected like any other critical file in the cloud. BitLocker
is much easier for administrators to use to encrypt server
files (including VHDs) in Windows Server 2012. Previously,
organizations that wished to implement BitLocker on servers
had to either settle for using the TPM-only mode (the weakest
protector of the many available) or require that a server
administrator be present for each server boot with a PIN,
password, or USB key. New BitLocker protectors added in
Windows Server 2012 allow server administrators to enable
disk encryption without having to be physically present at
boot time. For example, network-protector mode automatically
unlocks the encrypted disk as long as the server is network-
connected and joined to its normal AD DS domain. BitLocker
in Windows Server 2012 also supports hardware-encrypted
disks, AD DS account or group protectors, and cluster-aware
encryption that allows the disk to fail over properly and be
unlocked by any member computer of the same cluster.
Public Cloud Identity Management
A major challenge for IT departments integrating public clouds
with a corporate environment is identity management. If users
have to create their own logon credentials, IT managers have
to worry about things like weak passwords and cutting off
access to users who leave the company. Windows Azure
addresses these challenges with Windows Azure Active
Directory* (Windows Azure AD).
Windows Azure AD provides the core directory and identity
management capabilities for Microsoft cloud services, including
Windows Azure. Much as AD DS serves as the data store for
identities in on-premises environments, Windows Azure AD
provides a repository for all of an organization’s directory data in
the cloud. It is thus available to all of the Microsoft cloud services
to which an organization has subscribed.
Organizations can integrate Windows Azure AD with on-
premises deployments of AD DS. A primary benefit of doing
this is that once an organization has either synchronized its
directories or set up single sign-on (SSO) for one Microsoft
cloud service (such as Windows Azure), it does not need to
do so again for any other Microsoft cloud service. Thus if
an organization has set up SSO for Windows Azure, it does
not need to do so again for its subscription to Office 365 or
Exchange Online. Users connecting to those services from the
corporate network would automatically have access without
presenting their corporate domain credentials again and
without additional configuration from IT staff. Another benefit of
integrating Windows Azure AD with AD DS is that policies and
Group Policy Objects (GPOs) from the corporate domain can
be applied directly to the Windows Azure environment.
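The synchronization benefit can be pictured as a one-way upsert keyed by user principal name: the on-premises directory is authoritative, and each sync pushes additions and changes to the cloud copy. This is a conceptual sketch only, not the behavior of Microsoft's actual directory synchronization tooling.

```python
# Conceptual one-way directory sync: on-premises entries are upserted
# into the cloud directory, keyed by user principal name (UPN).
def sync_directory(on_prem: dict, cloud: dict) -> dict:
    """Copy/refresh every on-premises entry into the cloud directory."""
    for upn, attrs in on_prem.items():
        cloud[upn] = dict(attrs)       # upsert: add new, overwrite stale
    return cloud

on_prem = {"alice@contoso.com": {"displayName": "Alice", "enabled": True}}
cloud = sync_directory(on_prem, {})

# When Alice leaves the company, disabling her on-premises account
# propagates on the next sync, cutting off cloud access as well.
on_prem["alice@contoso.com"]["enabled"] = False
cloud = sync_directory(on_prem, cloud)
```

This is exactly the property IT managers want from integration: deprovisioning happens once, on premises, instead of separately in every cloud service.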
Integrating Windows Azure AD with AD DS does have
tradeoffs, however. For one, security and compliance reasons
might prevent a company from duplicating a full copy of its
directory services infrastructure off premises. And even if a
company can pursue a course of full synchronization with
Windows Azure, the benefits of doing so might be diluted
if it uses other cloud services, like Salesforce.com* or
Box*. A company that chooses to use a third-party identity
management service like Okta* or OneLogin* for its other
cloud services might find it more convenient to federate with
Windows Azure through that same service as well.
Because of the wide use of AD DS in enterprises, directory
synchronization and identity federation are strengths of
Windows Azure. But they are hardly unique strengths. AD DS can
federate with other cloud providers or with third-party identity
management services. And while a single synchronization
or federation for several Microsoft cloud services can be
attractive for some organizations, many others might use only
Windows Azure.
Other Cloud Attributes
Microsoft is much weaker in other cloud attributes. A principal
weakness is interoperability for management of hybrid clouds.
As noted in previous sections, Microsoft technology-based
management of heterogeneous-hypervisor environments,
while technically present, is wanting, particularly for managing
VMware technology-based environments. Moving workloads
to and from Hyper-V and other hypervisors can also be a far-
from-smooth process. Organizations might find that this leads
to vendor lock-in at some level if they use Windows Azure:
switching to another cloud provider (particularly providers like
Terremark, which exclusively use VMware) can prove costly in
time and effort.
Other cloud attributes, such as entitlement management, are
less weaknesses of Windows Azure than arenas in which
Microsoft can do little on its own. In the case of entitlement
management, for example, Microsoft has moved to keep its
licensing up-to-date with the unique requirements of cloud
deployments (particularly with its shift to licensing Windows
Server and System Center by OSE). However, short of
negotiating licensing deals with the makers of widely used line-
of-business (LOB) applications, there is little that Microsoft can
do on its own to make wider entitlement management easier in
Windows Azure.
Conclusion
Windows Server 2012 marks a watershed for Microsoft server
virtualization. It marks the point at which Microsoft has largely
closed—and in some respects exceeded—the functionality
gap with archrival VMware. Hyper-V in Windows Server 2012
supports twice as many logical processors, active virtual
machines per host server, cluster nodes, and virtual machines
per cluster as VMware’s flagship enterprise virtualization
platform, VMware vSphere 5.1 Enterprise Plus. Moreover,
Hyper-V has some capabilities that VMware lacks altogether,
such as live migration of virtual machines that use SR-IOV.
Microsoft is thus ready to compete with VMware on
capability in addition to price. Hyper-V and the broader
Microsoft virtualization vision—including network and storage
virtualization—is ready for the enterprise. In networking, the
Hyper-V Extensible Switch augments virtual machine portability
by making a virtual machine’s software switch port a property
of the virtual machine itself (and doing away with the need to
reconfigure the virtual machine’s port every time it is moved to
a different host server). Hyper-V support for NVGRE permits
it to isolate network traffic to virtual workloads (beyond 4,094
VLAN IDs) at a scale necessary for large-scale private or public
clouds. In storage, support for SMB-based storage, storage
spaces, and automatic storage tiering in Windows Server 2012
R2 gives companies enterprise-quality storage and availability
using industry-standard hardware. In addition,
Windows Server 2012 supports unlimited simultaneous live
migrations, including live storage migration and shared-nothing
live migration.
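The VLAN-scale point can be checked with simple arithmetic: an 802.1Q tag carries a 12-bit VLAN ID (two values reserved), while NVGRE carries a 24-bit Virtual Subnet ID.

```python
# Why NVGRE scales past VLANs: 12-bit VLAN IDs (two reserved) versus
# NVGRE's 24-bit Virtual Subnet ID space.
vlan_ids = 2**12 - 2       # 4,094 usable VLAN IDs
nvgre_vsids = 2**24        # 16,777,216 virtual subnet IDs
assert vlan_ids == 4094
assert nvgre_vsids // vlan_ids > 4000   # roughly 4,000x more tenant networks
```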
And yet virtualization alone does not constitute a cloud any
more than a bare foundation is a building. Microsoft positions
Windows Server 2012 and Windows Azure, coupled with
System Center 2012, as the Cloud OS. Indeed, the majority
of the characteristics that define a cloud—notably user self-
service, elasticity, and measurement of service—all depend
on management. Microsoft portrays its management suite as
a single-pane-of-glass management solution for workloads
spanning private and public clouds. This is not technically
false, but it is only true in a rather narrow sense: System Center
can manage workloads in private-, hybrid-, or public-cloud
environments, but only if the public cloud is also managed by
System Center. Microsoft has a lot of terrain left to cover in its
ability to manage heterogeneous environments.
Microsoft has made great strides in its ability to manage
heterogeneous guest operating systems. Key Linux distributions
Red Hat Enterprise Linux, SUSE, CentOS, and Ubuntu enjoy
full support in Hyper-V and System Center VMM, and SUSE,
CentOS, and Ubuntu are even guest operating systems from
which customers can choose for building virtual machines
in Windows Azure. And new with Windows Server 2012 R2,
Red Hat Enterprise Linux, SUSE, CentOS, and Ubuntu will
include LIS for Hyper-V in their standard distributions. This deep
integration puts Microsoft in a strong position for growth in data
center virtualization: in its June 2013 Magic Quadrant report on
x86 server virtualization, Gartner estimated that only 35 to 45
percent of Linux workloads are virtualized (as opposed to 60 to
70 percent of Windows workloads).23
This strength in guest operating system management does not
carry over as fully to managing heterogeneous hypervisors.
System Center 2012 VMM supports Citrix XenServer v6.2.0
and System Center 2012 SP1 VMM supports VMware ESX
5.0 and ESXi 5.0 hypervisors. To some extent it is only logical
that Microsoft would need to include support for VMware
hypervisors: Microsoft’s increasing market share—particularly
in enterprise virtualization—has come in varying degrees at
the expense of VMware. Hyper-V adoption in many cases
spawned heterogeneous-hypervisor environments. The
wonder, then, is that Microsoft does not manage different
hypervisors better. Many operations involving VMware
hypervisors remain easier and more efficient using VMware
tools rather than VMM; some operations can only be done
on VMware hypervisors using VMware management tools.
It continues to be the case in many situations that it is easier
to use VMware to manage Hyper-V than to use VMM to
manage ESX and ESXi. However, Microsoft’s shortcomings in
interoperability are most pronounced in the public cloud.
Somewhat like Henry Ford famously offering cars in any color
a customer wanted so long as it was black, Microsoft tools
can manage any public cloud so long as it is built on Microsoft
technology. There are a couple of possible explanations for this limitation.
It might only be an incremental step as Microsoft hones the
capabilities of System Center in the cloud before attempting to
build connectors to other cloud-providers’ offerings. It might
also be a strategic move to guide (or push) customers who
do not currently have strong ties to a cloud provider to one
using Microsoft technology (particularly Windows Azure). If
Microsoft’s reasoning is the latter, it might prove to be a short-
sighted and parochial decision: while both Microsoft and
VMware offer public-cloud solutions, neither comes remotely
close to matching the market share of AWS. This is less of a
problem for VMware: a broad selection of third-party tools and
its 2012 acquisition of DynamicOps give VMware management
tools ample means to extend into AWS. Microsoft has no
such tools. System Center Operations Manager does have
a management pack for AWS, but it is a monitoring tool and
does not enable administrators to copy or operate workloads
running on AWS.
However, no wise observer should ever write off Microsoft in
any arena. Even taking these factors into consideration, this
is not the first time that Microsoft has successfully challenged
a heavily entrenched incumbent: Microsoft versus VMware
is just the most recent (if still undecided) example in a series
of contests stretching back to the early days of Microsoft
versus Apple. Microsoft has shown itself in several instances
in the past to have the will and the stamina to break into and
eventually dominate markets.
Microsoft also furnishes a number of its own counterexamples
to this pattern. While nothing indicates that Windows
Azure is another Zune* in the making, Microsoft’s move into
cloud computing (IaaS and PaaS with Windows Azure, SaaS
with Office 365) has to factor in many more variables than
previous campaigns such as Microsoft’s rise to ubiquity in the
enterprise in the 1990s. Microsoft is competing against a range
of tough incumbents: VMware for virtualization and Amazon (to
name just one) in public-cloud services.
Moreover, the entire business model of public cloud computing
is still in its early days. Courts and regulatory bodies in
developed economies might stifle the growth of public clouds
by limiting how many or what types of data or workloads
companies, and especially government organizations, can
house on multi-tenant clouds (especially in privacy-minded
Europe). Concerns about U.S. government surveillance and
information requests might linger and cause non-U.S. and
multinational companies to avoid any cloud provider with ties
to the United States. While this would not harm Microsoft
more than its principal rivals, it could dent growth prospects
for public-cloud usage in general. Outages are another risk:
Windows Azure alone has had three significant outages
since February 2012. While no enterprise has yet faced disastrous
losses to operations from an outage to any public cloud
provider, it is not difficult to imagine that the first incident would
damage the reputation of that provider and possibly sour some
enterprises on the cloud more generally.
The competitive landscape in the cloud, and Microsoft’s place
in that landscape, continues to shift. While Microsoft has
largely caught up with VMware in terms of capabilities, this is
certainly not the beginning of the end of Microsoft’s journey
into the cloud. It might not even be the end of the beginning.
Scenario: 3 hosts, 2 processors each, 20 VMs (6:1 ratio), support for 1 year

VMware
# of Units  Product  Price per Unit  Subtotal
1  VMware vSphere Essentials Plus*  $4,495  $4,495
1  VMware vSphere Essentials Plus SnS Production (1 year)  $1,124  $1,124
20  Windows Server 2012 Standard*  $882  $17,640
Total  $23,259

Microsoft
# of Units  Product  Price per Unit  Subtotal
17  Windows Server 2012 Standard*  $882  $14,994
17  Microsoft Software Assurance  $177  $3,009
Total  $18,003

Microsoft
# of Units  Product  Price per Unit  Subtotal
3  Windows Server 2012 Datacenter*  $4,809  $14,427
3  Microsoft Software Assurance  $1,379  $4,137
Total  $18,564
Scenario: 15 hosts, 2 processors each, 150 VMs (10:1 ratio), support for 1 year

VMware
# of Units  Product  Price per Unit  Subtotal
30  VMware vSphere Standard*  $995  $29,850
30  VMware vSphere SnS Production (1 year)  $323  $9,690
1  VMware vCenter*  $4,995  $4,995
1  VMware vCenter SnS Production (1 year)  $1,249  $1,249
150  Windows Server 2012 Standard*  $882  $132,300
Total  $178,084

Microsoft (Traditional Licensing)
# of Units  Product  Price per Unit  Subtotal
30  Windows Server Datacenter*  $4,809  $144,270
30  Windows Server Datacenter Software Assurance  $1,379  $41,370
15  Microsoft System Center 2012 Datacenter*  $3,607  $54,105
Total  $239,745

Microsoft (Enrollment for Core Infrastructure)
# of Units  Product  Price per Unit  Subtotal
30  Microsoft ECI  $5,056  $151,680
Total  $151,680
Appendix A: Microsoft and VMware Licensing Breakdown
Scenario: 35 hosts, 2 processors each, 500 VMs (15:1 ratio), support for 1 year

VMware
# of Units  Product  Price per Unit  Subtotal
68  VMware vSphere Standard*  $995  $67,660
68  VMware vSphere SnS Production (1 year)  $323  $21,964
1  VMware vCenter*  $4,995  $4,995
1  VMware vCenter SnS Production (1 year)  $1,249  $1,249
500  Windows Server 2012 Standard*  $882  $441,000
Total  $536,868

VMware
# of Units  Product  Price per Unit  Subtotal
68  VMware vSphere Enterprise*  $2,875  $195,500
68  VMware vSphere SnS Production (1 year)  $719  $48,892
1  VMware vCenter*  $4,995  $4,995
1  VMware vCenter SnS Production (1 year)  $1,249  $1,249
500  Windows Server 2012 Standard*  $882  $441,000
Total  $691,636

VMware
# of Units  Product  Price per Unit  Subtotal
68  VMware vSphere Enterprise Plus*  $3,495  $237,660
68  VMware vSphere SnS Production (1 year)  $874  $59,432
1  VMware vCenter*  $4,995  $4,995
1  VMware vCenter SnS Production (1 year)  $1,249  $1,249
500  Windows Server 2012 Standard*  $882  $441,000
Total  $744,336

Microsoft (Traditional Licensing)
# of Units  Product  Price per Unit  Subtotal
68  Windows Server Datacenter*  $4,809  $327,012
68  Windows Server Datacenter Software Assurance  $1,379  $93,772
34  Microsoft System Center 2012 Datacenter*  $3,607  $122,638
Total  $543,422

Microsoft (Enrollment for Core Infrastructure)
# of Units  Product  Price per Unit  Subtotal
68  Microsoft ECI  $5,056  $343,808
Total  $343,808
Scenario: 25 hosts, 2 processors each, 300 VMs (6:1 ratio),
support for 3 years
VMware
# of Units Product Price per Unit Subtotal
50 VMware vSphere Enterprise Plus*
$3,495 $174,750
150 VMware vSphere SnS Production (per year)
$874 $131,100
1 VMware vCenter*
$4,995 $4,995
3 VMware vCenter SnS Production (per year)
$1,249 $3,747
300 Windows Server 2012 Datacenter*
$4,809 $1,442,700
Total $1,757,292
VMware (Windows Server 2012 Standard guest licensing)
# of Units  Product                                    Price per Unit  Subtotal
50          VMware vSphere Enterprise Plus*            $3,495          $174,750
150         VMware vSphere SnS Production (per year)   $874            $131,100
1           VMware vCenter*                            $4,995          $4,995
3           VMware vCenter SnS Production (per year)   $1,249          $3,747
300         Windows Server 2012 Standard*              $882            $264,600
            Total                                                      $579,192
Microsoft (Traditional Licensing)
# of Units  Product                                       Price per Unit  Subtotal
50          Windows Server Datacenter*                    $4,809          $240,450
150         Windows Server Datacenter Software Assurance  $1,379          $206,850
25          Microsoft System Center 2012 Datacenter*      $3,607          $90,175
            Total                                                         $537,475
Microsoft (Enrollment for Core Infrastructure)
# of Units  Product        Price per Unit  Subtotal
50          Microsoft ECI  $5,056          $252,800
            Total                          $252,800
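Every total above is a simple sum of units × list price. As a sanity check (not part of the original analysis), a minimal Python sketch reproducing the second scenario's Microsoft ECI total and the VMware total with Windows Server 2012 Standard guest licensing, using the 2013 list prices quoted in the tables:

```python
# Re-derive two of the scenario totals from the quoted 2013 list prices.

def total(line_items):
    """Sum a list of (units, price_per_unit) pairs."""
    return sum(units * price for units, price in line_items)

# Scenario 2: 25 hosts x 2 processors, 300 VMs, 3 years of support.
hosts, procs_per_host, years, vms = 25, 2, 3, 300
proc_licenses = hosts * procs_per_host  # 50 per-processor licenses

# Microsoft Enrollment for Core Infrastructure: one SKU per processor.
microsoft_eci = total([(proc_licenses, 5056)])  # $252,800

# VMware stack plus Windows Server 2012 Standard licenses for the guests.
vmware = total([
    (proc_licenses, 3495),         # vSphere Enterprise Plus
    (proc_licenses * years, 874),  # vSphere SnS Production, per year
    (1, 4995),                     # vCenter
    (years, 1249),                 # vCenter SnS Production, per year
    (vms, 882),                    # Windows Server 2012 Standard guests
])  # $579,192

print(microsoft_eci, vmware)
```

The same `total` helper reproduces every other table in both scenarios; only the line items change.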
Notes
1 This graphic was originally created by Microsoft and is the property of Microsoft.
2 Gartner. “Magic Quadrant for Cloud Infrastructure as a Service.” August 2013. https://www.gartner.com/technology/reprints.do?id=1-1IMDMZ8&ct=130819&st=sb
3 The Wall Street Journal. “Microsoft Quietly Gains Share in Virtualization.” May 2013. http://blogs.wsj.com/digits/2013/05/31/microsoft-quietly-gains-share-in-virtualization/
4 Microsoft. “Server Virtualization Features.” http://www.microsoft.com/en-us/server-cloud/windows-server/server-virtualization-features.aspx
5 ESG published three separate reports finding that virtualization with Hyper-V is ready for tier-1 enterprise workloads, based on test runs virtualizing Microsoft SharePoint*, Microsoft Exchange*, and Microsoft SQL Server 2012*. See:
• ESG. “Microsoft Windows Server 2012 with Hyper-V and SharePoint 2013.” June 2013. http://www.esg-global.com/lab-reports/microsoft-windows-server-2012-with-hyper-v-and-sharepoint-2013/
• ESG. “Microsoft Windows Server 2012 with Hyper-V and Exchange 2013.” April 2013. http://www.esg-global.com/lab-reports/microsoft-windows-server-2012-with-hyper-v-and-exchange-2013/
• ESG. “Workload Performance Analysis: Microsoft Windows Server 2012 with Hyper-V and SQL Server 2012.” November 2012. http://www.esg-global.com/lab-reports/workload-performance-analysis-microsoft-windows-server-2012-with-hyper-v-and-sql-server-2012/
6 Gartner. “Magic Quadrant for x86 Server Virtualization Infrastructure.” June 2013. http://www.gartner.com/technology/reprints.do?id=1-1GJA88J&ct=130628&st=sb
7 Announcements of NVGRE support by Mellanox Technologies, Emulex, and Intel, respectively, available at:
• Mellanox Technologies. “Mellanox 10GbE NVGRE Adapter Delivers 65 Percent More Bandwidth in Windows Server Hyper-V Network Virtualization Environment.” June 2013. http://ir.mellanox.com/releasedetail.cfm?ReleaseID=768704
• Emulex. “Emulex to Support High Performance Virtual Network Fabrics in Windows Server 2012 Hyper-V Environments.” June 2013. http://www.emulex.com/company/media-center/press-releases/2013/june3-2.html
• Intel. “Chip Shot: Intel Support Microsoft Cloud Effort.” September 2012. http://newsroom.intel.com/community/intel_newsroom/blog/2012/09/04/chip-shot-intel-support-microsoft-cloud-effort
8 Windows Server Blog. “Transforming your Datacenter with Software-Defined Networking (SDN): Part I.” June 2013. http://blogs.technet.com/b/windowsserver/archive/2013/06/06/transforming-your-datacenter-with-software-defined-networking-sdn.aspx
9 United States Patent 8489699. “Live migration of virtual machine during direct access to storage over SR IOV adapter.” http://www.freepatentsonline.com/8489699.html
10 ESG. “Microsoft Windows Server 2012.” http://www.esg-global.com/lab-reports/microsoft-windows-server-2012/
11 Ibid.
12 NIST cloud definition at http://csrc.nist.gov/publications/nistpubs/800-145/SP800-145.pdf; Microsoft spells out its definition for the private cloud here: http://www.microsoft.com/en-us/server-cloud/private-cloud/default.aspx
13 Edison Group. “Comparative Management Cost Study: The Management Cost Penalty of Using Microsoft System Center to Manage VMware vSphere.” August 2012. http://www.theedison.com/pdf/2012_VMware_vSphere_vs_Microsoft_System_Center.pdf
14 Microsoft. “Managing VMware ESX Hosts Overview.” May 2013. http://technet.microsoft.com/en-us/library/gg610683.aspx
15 Gartner. “Magic Quadrant for x86 Server Virtualization Infrastructure.” June 2013. http://www.gartner.com/technology/reprints.do?id=1-1GJA88J&ct=130628&st=sb
16 Windows Server Blog. “Open Management Infrastructure.” June 2012. http://blogs.technet.com/b/windowsserver/archive/2012/06/28/open-management-infrastructure.aspx
17 Windows Azure Trust Center. http://www.windowsazure.com/en-us/support/trust-center/
18 British Standards Institution. http://www.bsiamerica.com/en-us/Assessment-and-Certification-services/Management-systems/Certificate-and-Client-Directory-search/Search/Search-Results/?pg=1&licencenumber=IS+577753&searchkey=licenceXeqX577753
19 Microsoft. “Windows Azure Receives SOC 2 Type 2 report with CSA CCM Attestation.” August 2013. http://blogs.msdn.com/b/azuresecurity/archive/2013/08/22/windows-azure-receives-soc-2-type-2-report-with-csa-ccm-attestation.aspx
20 Confirmed through discussion with Windows Azure sales specialist. September 10, 2013.
21 Amazon. “Amazon EC2 Instances.” http://aws.amazon.com/ec2/instance-types/
22 See the Windows Server Security page at http://technet.microsoft.com/en-us/windowsserver/ff843381.aspx; the Hyper-V Security Guide is available for download at http://technet.microsoft.com/en-us/library/dd569113.aspx
23 Gartner. “Magic Quadrant for x86 Server Virtualization Infrastructure.” June 2013. http://www.gartner.com/technology/reprints.do?id=1-1GJA88J&ct=130628&st=sb
The analysis in this document was done by Prowess Consulting and derived from work done with Intel.
Results have been simulated and are provided for informational purposes only. Any difference in system hardware or software design or
configuration may affect actual performance.
Intel and the Intel logo are trademarks of Intel Corporation in the U.S. and/or other countries.
*Other names and brands may be claimed as the property of others.
Prowess, the Prowess logo, and SmartDeploy are trademarks of Prowess Consulting, LLC.
Copyright © 2014 Prowess Consulting, LLC. All rights reserved.
Other trademarks are the property of their respective owners.