Transcript of "From Virtualization 2.0 to Enterprise Cloud: Usages and Technologies" (IDF session DCCS002)

Page 1:

From Virtualization 2.0 to Enterprise Cloud: Usages and Technologies
Bing Wang, Product Marketing Manager, Virtualization Technologies, Data Center Group, Intel

DCCS002

Page 2:

• Data Center Evolution
• A Framework for Optimization
• Technology Drill-down
• Summary

Agenda

2

Page 3:

• Data Center Evolution
• A Framework for Optimization
• Technology Drill-down
• Summary

Agenda

3

Page 4:

“Cloud Computing is THE critical ingredient for the next wave of IT Growth.”

Frank Gens, IDC

SVP and Chief Analyst

“Virtualization is the bridge to consuming Cloud Computing”

IDC, January 2009

4

Page 5:

Virtualization: Evolving Towards the Enterprise Cloud

Consolidation: Virtualization 1.0
Operational Expense Efficiency

Flexible Resource Management: Virtualization 2.0
Dynamic Resource Allocation

Enterprise Cloud: Virtualization 3.0
Automation and Resource Scalability

5

Presenter
Presentation Notes
In 2006 IDC dubbed the virtualization market for consolidation as Virtualization 1.0. As virtualization technologies such as Intel VT FlexMigration were introduced in 2007, analysts hailed these capabilities as the emergence of the next generation of virtualization. With the emergence of Cloud Computing, many believe this movement and the technologies being developed to be the next generation of virtualization, or Virtualization 3.0. In this environment, the Xeon 5500, with the performance, energy efficiency and intelligence built in, is a critical ingredient; but it is not the only ingredient. If you take a look at what it is going to take to get to this scalable infrastructure, next-generation Internet data centers really need several things: A unified network to combine storage and network traffic onto a single 10Gb Ethernet network, which brings management simplicity and greater workload migration flexibility to IT. Maximum flexibility to pool server resources together and use them in an on-demand fashion; this requires a level of instruction set compatibility. Obviously, we are in a time when energy savings are crucial and many data centers and server rooms around the world are power constrained and space constrained; we need breakthroughs in the ability for those data centers to work at peak power efficiency. Balanced platform performance: we have to have infrastructures that can adapt to the needs of the workloads even when we are migrating workloads across different servers at different points in time as demand changes. Let's discuss some of the usage models in virtualization… (Next slide)
Page 6:

Virtualization Data Center Foundation

[Diagram: data center requirements mapped onto the data center foundation]

Data Center Requirements: Infrastructure Scale (Massive Scaling); Balanced Platform (RAS is critical); Lower TCO (Network consolidation); Multi-Tenancy (Resource Sharing); Security (Isolation)

Data Center Foundation: Unified Network (10Gb Ethernet for Storage and Networking); Intel® Xeon® Processor Based Platform (CPU & I/O performance, New Technologies)

Page 7:

• Data Center Evolution
• A Framework for Optimization
• Technology Drill-down
• Summary

Agenda

7

Page 8:

Intel® VT – Vision and Directions

Intel® Virtualization Technology: comprehensive platform capabilities to achieve native performance, with best-in-class reliability, security, and scalability

• Processor – Intel® VT for IA-32 & Intel® 64 (Intel® VT-x): native performance of virtualized workloads with security and reliability
• Chipset – Intel® VT for Directed I/O (Intel® VT-d): performance, reliability and security through dedication of system resources
• Network – Intel® VT for Connectivity (Intel® VT-c): performance and scalability through a dynamically sharable converged high-capacity interconnect

8

Page 9:

A Framework for Optimizing Virtualization

[Diagram: a host running a VMM with multiple guest OSes – reduce overhead within each VM, and scale out across VMs]

Ways to Reduce Overheads
• VT-x Latency Reductions
• Virtual Processor IDs
• Extended Page Tables
• APIC Virtualization (FlexPriority)
• I/O Assignment via DMA Remapping (VT-d)
• VMDq

Technologies for Scaling
• Scaling from EP to EX
• Hyper-Threading
• PAUSE-loop Exiting
• SR-IOV
• RAS Capabilities

Let's look at the details…

VT-x = Intel® Virtualization Technology for IA-32, Intel® 64 and Intel® Architecture

9

Page 10:

• Data Center Evolution
• A Framework for Optimization
• Technology Drill-down
• Summary

Agenda

10

Page 11:

Optimizing VT Transition Latencies

• Virtual Machine Control Structure (VMCS)
– VMCS holds all essential guest and host register state
– "Backed" by host physical memory
– Accessed via an architectural VMREAD / VMWRITE interface
– Enables CPU implementations to cache VMCS state on-die

• Virtual Processor IDs (VPIDs)
– VMM-specified values used to tag microarchitectural structures (TLBs)
– Used to avoid TLB flushes on VT transitions
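As an aside on the VMREAD / VMWRITE interface above, here is a minimal sketch, added for illustration and not from the deck, of how a VMM running in VMX root operation might read a VMCS field from C with inline assembly. The exit-reason field encoding (0x4402) follows the Intel SDM but should be treated as an assumption here; this code faults if executed outside a VMM.

    #include <stdint.h>

    /* Illustrative only: VMREAD is valid only in VMX root operation with a
     * current VMCS loaded; an ordinary user-space process cannot run this. */
    #define VMCS_EXIT_REASON 0x4402u   /* field encoding per Intel SDM (assumed) */

    static inline uint64_t vmcs_read(uint64_t field)
    {
        uint64_t value;
        /* AT&T syntax: source = field encoding, destination = value read */
        __asm__ volatile("vmread %1, %0" : "=rm"(value) : "r"(field) : "cc");
        return value;
    }

    /* Hypothetical use inside a VM-exit handler:
     *   uint64_t reason = vmcs_read(VMCS_EXIT_REASON) & 0xffff;
     */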

[Diagram: the VMM on a physical CPU (pCPU) accesses the VMCS through VMREAD / VMWRITE; the VMCS has a backing page in memory]

Significant VT latency reductions over time…

[Chart: round-trip VM exit/entry latency in cycles, decreasing from 2007 to 2010]

11

Page 12:

Intel® Hyper-Threading Technology

• Intel® Hyper-Threading Technology (Intel® HT Technology)
– Run 2 threads at the same time per core
– Hide memory latencies
– Higher utilization of 4-wide execution engine

• Significant benefits to virtualization
– Abundant thread parallelism
– Multiple vCPUs per VM
– Multiple VMs per system

Net Result: Increased Scaling on Virtualization Workloads

[Diagram: execution-unit occupancy over time (processor cycles), w/o HT vs. with HT. Note: each box represents a processor execution unit]

12

Page 13:

Lock-holder Preemption

• A given virtual processor (vCPU) may be preempted while holding a lock

• Other vCPUs that try to acquire lock will spin for entire execution quantum
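To make the spinning concrete, here is a minimal sketch, added here for illustration, of the kind of guest spin lock this slide describes: if the vCPU holding the lock is preempted by the VMM, every waiting vCPU burns its whole scheduling quantum in the PAUSE loop below.

    #include <stdatomic.h>
    #include <immintrin.h>   /* _mm_pause() emits the PAUSE instruction */

    typedef struct { atomic_flag locked; } spinlock_t;   /* init with ATOMIC_FLAG_INIT */

    static void spin_lock(spinlock_t *l)
    {
        /* attempt lock acquire; if it fails, PAUSE and retry */
        while (atomic_flag_test_and_set_explicit(&l->locked, memory_order_acquire))
            _mm_pause();   /* tight PAUSE loop: this is what PAUSE-loop exiting detects */
    }

    static void spin_unlock(spinlock_t *l)
    {
        atomic_flag_clear_explicit(&l->locked, memory_order_release);
    }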

13

Page 14:

Solution: PAUSE-loop Exiting

• PAUSE instructions often used in spin locks
– A long sequence of PAUSEs close together signals likely lock-holder preemption

• PAUSE-loop Exiting enables VMM to specify:
– Gap: maximum expected time between successive executions of PAUSE in a loop
– Window: maximum time to spend in a PAUSE loop before causing a VM exit

• Upon a PAUSE-loop exit
– VMM can then regain control and schedule another vCPU to run
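As a sketch of the VMM side, the two parameters above are programmed into VMCS fields. The field encodings below (PLE_Gap = 0x4020, PLE_Window = 0x4022) follow the Intel SDM but are assumptions here, and vmcs_write32() is a hypothetical helper that issues VMWRITE in VMX root operation; real hypervisors wrap this differently.

    #include <stdint.h>

    #define VMCS_PLE_GAP     0x4020u   /* field encodings assumed from the Intel SDM */
    #define VMCS_PLE_WINDOW  0x4022u

    void vmcs_write32(uint32_t field, uint32_t value);   /* hypothetical VMWRITE wrapper */

    static void enable_ple(uint32_t gap_cycles, uint32_t window_cycles)
    {
        /* Gap: max expected cycles between successive PAUSEs in one loop.
         * Window: max cycles to spend in a PAUSE loop before forcing a VM exit. */
        vmcs_write32(VMCS_PLE_GAP, gap_cycles);
        vmcs_write32(VMCS_PLE_WINDOW, window_cycles);
        /* The "PAUSE-loop exiting" secondary processor-based VM-execution control
         * must also be set in the VMCS for these fields to take effect. */
    }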

14

Page 15:

Example: PAUSE-loop Exiting

Executions of PAUSE in instruction stream

    spin_lock:
        attempt lock acquire;
        if fail {
            PAUSE;
            jmp spin_lock
        }

[Diagram: instruction stream over time showing executions of PAUSE. Likely lock-holder preemption: PAUSEs spaced within the Gap keep recurring past the Window. Likely normal locking behavior: the PAUSE sequence ends within the Window]

Page 16:

PAUSE-loop Exiting Performance

16

• PAUSE-loop Exiting (PLE) delivers ~40% performance improvement in the over-committed case

• CPU efficiency stays flat with PLE, which implies better scaling

Page 17:

Server I/O Evolution is Imminent: Flexible Resource Architectures

[Diagram: VMs on a server moving from separate HBA and NIC connections to SAN and LAN toward converged I/O with isolation]

Fungible and secure I/O architecture is essential for a flexible enterprise cloud

17

Page 18:

Infrastructure Scale: I/O Virtualization with SR-IOV

Drive adoption of SR-IOV for I/O scaling

Natively Share I/O device with Multiple VMs

Standards based

Reduces VMM overhead

Works with VM migration

Inherently provides inter-VM data isolation

[Diagram: multiple VMs sharing a single SR-IOV I/O device]
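As one concrete, hedged illustration of turning this on (not from the deck, and newer than the 2010-era hosts it describes): recent Linux kernels expose SR-IOV through sysfs, so a management agent can enable virtual functions roughly as sketched below. The PCI address is a placeholder and root privileges are required.

    #include <stdio.h>

    /* Enable num_vfs virtual functions on an SR-IOV capable adapter via sysfs. */
    int enable_vfs(const char *pci_addr, int num_vfs)
    {
        char path[256];
        snprintf(path, sizeof(path),
                 "/sys/bus/pci/devices/%s/sriov_numvfs", pci_addr);

        FILE *f = fopen(path, "w");
        if (!f)
            return -1;                 /* device missing or insufficient permission */
        fprintf(f, "%d\n", num_vfs);   /* each VF can then be assigned to a VM */
        return fclose(f);
    }

    /* Example (hypothetical adapter address): enable_vfs("0000:04:00.0", 8); */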

18

Page 19:

Citrix* Standards-based I/O Virtualization with Intel® Xeon® Processor 5600

Citrix and Intel Deliver Flexible and Scalable High Performance I/O Virtualization Solution via PCI-SIG* SR-IOV and Intel VT-d

[Chart: aggregated throughput of 2 VPXs in Gbps, Non SR-IOV vs. SR-IOV, showing a 7X improvement with SR-IOV]

Please come to see the demo at Booth 1096

Intel® VT for Directed I/O (Intel® VT-d)

19

Page 20:

RAS at Every Part of the Platform

Socket RAS
• Corrected Machine Check Interrupts (CMCI)
• Recoverable Machine Check Architecture (MCA)

Memory RAS
• Replay on CRC error
• Memory thermal management
• Memory Migration, Mirroring, and hot plug

Intel® QPI RAS
• Link recovery and self-healing
• Poison forwarding
• Hot plug socket, hot plug IOH
• Domain partitioning

IO Hub (IOH) RAS
• PCI Express* Technology hot plug
• PCI Express Technology Advanced Error Reporting (AER)

[Diagram: 4-socket Intel® Xeon® processor 7500 platform: sockets linked by Intel® QPI, memory attached to each socket, dual IOHs with PCI Express* 2.0, and an ICH10]

Intel® QPI = Intel® QuickPath Interconnect

20

Page 21:

21

Machine Check Architecture Recovery

First machine check recovery in Intel® Xeon® processor based systems; previously seen only in RISC, mainframe, and Itanium®-based systems

[Diagram: error-handling flow. HW correctable errors: error corrected, normal status with error prevention. HW un-correctable errors: error detected*, error contained, and error information passed to the OS / VMM; the bad memory location is flagged so its data will not be used by the OS or applications, and the system works in conjunction with the OS or VMM to recover or restart processes and continue normal operation (system recovery with OS)]

*Errors detected using Patrol Scrub or explicit write-back from cache

Allows Recovery From Otherwise Fatal System Errors
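For a sense of what "works in conjunction with the OS" can look like from user space, here is a minimal Linux-specific sketch, added for illustration and not Intel or OS vendor reference code, of a process handling the SIGBUS signal the kernel may deliver when one of its pages is poisoned by an uncorrectable memory error; the BUS_MCEERR_* codes are standard Linux siginfo values.

    #include <signal.h>
    #include <stdio.h>
    #include <stdlib.h>

    /* Simplified handler: a production version would avoid non-async-signal-safe
     * calls and try to rebuild or discard the affected buffer instead of exiting. */
    static void mce_sigbus_handler(int sig, siginfo_t *info, void *ctx)
    {
        (void)sig; (void)ctx;
        if (info->si_code == BUS_MCEERR_AO || info->si_code == BUS_MCEERR_AR) {
            /* AO: error found by scrubbing, action optional;
             * AR: error hit on an actual access, action required. */
            fprintf(stderr, "uncorrectable memory error near %p\n", info->si_addr);
            exit(EXIT_FAILURE);
        }
        abort();   /* some other kind of bus error */
    }

    int main(void)
    {
        struct sigaction sa = { 0 };
        sa.sa_sigaction = mce_sigbus_handler;
        sa.sa_flags = SA_SIGINFO;
        sigemptyset(&sa.sa_mask);
        sigaction(SIGBUS, &sa, NULL);
        /* ... application work ... */
        return 0;
    }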

Presenter
Presentation Notes
Key Point/Purpose of this slide: Explain MCA recovery to show an example of where the silicon and OS work together to keep the system up and running. Machine Check Architecture recovery is a mechanism where the silicon works with the operating system to allow a server to recover from uncorrectable memory errors which would otherwise have caused a system crash in prior generations. This capability has been available on RISC, mainframe, and Itanium systems for some time, but this is the first time it has been implemented in a Xeon-based system. In this first implementation, MCA-r allows the OS to recover when uncorrectable errors are discovered in memory either during an explicit write-back operation from cache or by a patrol scrub, which examines every server memory location daily. Uncorrectable errors are typically multi-bit errors that cannot be corrected by error correcting code (ECC). It should be noted that the occurrence of these errors is rare. When an uncorrectable error is detected, the silicon interrupts the OS and passes it the address of the memory error. The OS then determines whether this memory location is vital to the continued operation of the system. If not, the OS marks the defective memory location so it will not be used again, resets the error condition, and the system keeps running. In cases where the memory location is being used for a critical kernel operation or application, the system or application will not be able to continue and will be shut down by the OS as before. Note: while MCA-r is a big step forward in Xeon RAS capabilities, this generation does not yet equal the full recovery capabilities of Itanium or some RISC systems. For most customers, the addition of this capability as implemented will be significant, even though it is a first step. This presentation does not go into the detailed differences in implementations, but the presenter should acknowledge that there are differences if asked and offer a follow-up discussion to go into more detail.
Page 22:

Making Resource Sharing Safer

[Diagram: a Virtual Machine Monitor isolating multiple VMs]

• Isolate: Intel® Virtualization and Intel® Trusted Execution Technology (Intel® TXT) work together to better isolate VMs
• Measure: Intel® TXT measures the VMM for launch protection
• Encrypt: New instructions in the Intel® Xeon® 5600 series quickly encrypt data in flight and at rest

Intel® TXT and New Instructions in the Intel® Xeon® 5600 Series Make Multi-Tenancy More Secure

DCCS003: Data Center Protection Technologies

22

Page 23:

Intel Virtualization Performance (by processor generations)

Intel® Xeon® Processors Optimized for Virtualization

23

[Chart: VMmark* scores by processor generation; several of the bars are labeled "Intel Estimate"]

VMmark* Scores
• 2S Intel Xeon X5470 3.33 GHz 4-core: 9
• 2S Intel Xeon X5570 2.93 GHz 4-core: 25
• 2S Intel Xeon X5680 3.33 GHz 6-core: 36 (1)
• 4S Intel Xeon X7350 2.93 GHz 4-core: 14
• 4S Intel Xeon X7460 2.66 GHz 6-core: 21
• 4S Intel Xeon X7560 2.26 GHz 8-core: 72 (2)

Unless noted otherwise, source: http://www.vmware.com/products/vmmark/results.html (Best published Intel platform scores as of 3/30/10)
1. Source: http://www.cisco.com/en/US/prod/collateral/ps10265/ucsB250-VMmark.pdf
2. Source: ftp://ftp.software.ibm.com/eserver/benchmarks/IBM_x3850X5_VMmark_Independent_Publication_033010.pdf

Page 24:

Virtualized SQL* Database Performance: Large Healthcare Provider Case Study

HP* DL380 G6 w/ VMware vSphere* 4 and two VMs doubles the native DL380 G5 transaction rate at lower CPU utilization

G6 is much more power efficient at load and at idle

Response time differences are nominal from native G5 to Virtualized G6

Power in Watts (G5 / G6):
• Avg. Test: 308 / 272
• Peak Test: 372 / 294
• Lowest Test: 270 / 252
• Avg. Idle: 254 / 140
• Peak Idle: 256 / 142
• Lowest Idle: 253 / 137

2X Transactions, Lower CPU Utilization, Lower Power Envelope

3

Performance tests and ratings are measured using specific computer systems and/or components and reflect the approximate performance of Intel products as measured by those tests. Any difference in system hardware or software design or configuration may affect actual performance. Buyers should consult other sources of information to evaluate the performance of systems or components they are considering purchasing. For more information on performance tests and on the performance of Intel products, visit Intel Performance Benchmark Limitations -- Source: Intel Labs. See backup for detailed configurations.

24

Presenter
Presentation Notes
Label power. DL380 G6 w/2 VMs shows a decrease in response time averaged across both VMs; DL380 G6 w/1 VM shows a 2 ms increase in response time. Net: the customer should be able to achieve twice the work in a lower power envelope.
Page 25:

Example of Direct Assignment: FedEx* Case Study

[Diagram: two servers based on the Intel® Xeon® Processor X5500 Series, each running VMware* ESX* 4.0 with VM1 (8 vCPU, RHEL* 5.3) running file transfer applications; an Intel 2x10GbE adapter is directly connected between the servers and directly assigned to the VM, with the arrow showing the file transfer direction]

Near-native performance with VMDirectPath utilizing Intel® VT-d

File Copy Apps / Apps limited by Crypto

IOV – Real Value in the Real World

Intel® Virtualization Technology (Intel® VT) for directed I/O (Intel® VT-d); NHM = Intel® microarchitecture, codename Nehalem

Source: Intel Labs, June, 2009. Performance measured using the netperf benchmark and the listed applications running on Intel Xeon X5560 @ 2.8GHz. See test configuration slides for more details. Actual performance may vary.

25

Presenter
Presentation Notes
Opportunities: Solutions vs. benchmarks Management and optimization of I/O
Page 26:

• Data Center Evolution
• A Framework for Optimization
• Technology Drill-down
• Summary

Agenda

26

Page 27:

Summary

• Data Center compute models are evolving

• Evolution from Virtualization 2.0 to Enterprise Cloud drives:
– Need for lower VM overheads
– Increased scaling (VMs, vCPUs)
– Improved reliability
– Effective I/O virtualization
– Security

• Intel® Virtualization provides a rich technology portfolio to meet these needs

27

Page 28:

Session Presentations - PDFs

The PDF for this Session presentation is available from our IDF Content Catalog at the end of the day at:

intel.com/go/idfsessionsBJ

URL is on top of Session Agenda Pages in Pocket Guide

28

Page 29:

Please Fill out the Session Evaluation Form

Give the completed form to the room monitors as you exit!

Thank you for your input; we use it to improve future Intel Developer Forum events

29

Page 30:

Q&A

30

Page 31:

Legal Disclaimer

• INFORMATION IN THIS DOCUMENT IS PROVIDED IN CONNECTION WITH INTEL® PRODUCTS. NO LICENSE, EXPRESS OR IMPLIED, BY ESTOPPEL OR OTHERWISE, TO ANY INTELLECTUAL PROPERTY RIGHTS IS GRANTED BY THIS DOCUMENT. EXCEPT AS PROVIDED IN INTEL'S TERMS AND CONDITIONS OF SALE FOR SUCH PRODUCTS, INTEL ASSUMES NO LIABILITY WHATSOEVER, AND INTEL DISCLAIMS ANY EXPRESS OR IMPLIED WARRANTY, RELATING TO SALE AND/OR USE OF INTEL® PRODUCTS INCLUDING LIABILITY OR WARRANTIES RELATING TO FITNESS FOR A PARTICULAR PURPOSE, MERCHANTABILITY, OR INFRINGEMENT OF ANY PATENT, COPYRIGHT OR OTHER INTELLECTUAL PROPERTY RIGHT. INTEL PRODUCTS ARE NOT INTENDED FOR USE IN MEDICAL, LIFE SAVING, OR LIFE SUSTAINING APPLICATIONS.
• Intel may make changes to specifications and product descriptions at any time, without notice.
• All products, dates, and figures specified are preliminary based on current expectations, and are subject to change without notice.
• Intel, processors, chipsets, and desktop boards may contain design defects or errors known as errata, which may cause the product to deviate from published specifications. Current characterized errata are available on request.
• Nehalem and other code names featured are used internally within Intel to identify products that are in development and not yet publicly announced for release. Customers, licensees and other third parties are not authorized by Intel to use code names in advertising, promotion or marketing of any product or services and any such use of Intel's internal code names is at the sole risk of the user.
• Performance tests and ratings are measured using specific computer systems and/or components and reflect the approximate performance of Intel products as measured by those tests. Any difference in system hardware or software design or configuration may affect actual performance.
• Intel, Xeon, and the Intel logo are trademarks of Intel Corporation in the United States and other countries.
• *Other names and brands may be claimed as the property of others.
• Copyright © 2010 Intel Corporation.

31

Page 32:

Risk Factors

The above statements and any others in this document that refer to plans and expectations for the first quarter, the year and the future are forward-looking statements that involve a number of risks and uncertainties. Many factors could affect Intel's actual results, and variances from Intel's current expectations regarding such factors could cause actual results to differ materially from those expressed in these forward-looking statements. Intel presently considers the following to be the important factors that could cause actual results to differ materially from the corporation's expectations. Demand could be different from Intel's expectations due to factors including changes in business and economic conditions; customer acceptance of Intel's and competitors' products; changes in customer order patterns including order cancellations; and changes in the level of inventory at customers. Intel operates in intensely competitive industries that are characterized by a high percentage of costs that are fixed or difficult to reduce in the short term and product demand that is highly variable and difficult to forecast. Additionally, Intel is in the process of transitioning to its next generation of products on 32nm process technology, and there could be execution issues associated with these changes, including product defects and errata along with lower than anticipated manufacturing yields. Revenue and the gross margin percentage are affected by the timing of new Intel product introductions and the demand for and market acceptance of Intel's products; actions taken by Intel's competitors, including product offerings and introductions, marketing programs and pricing pressures and Intel's response to such actions; defects or disruptions in the supply of materials or resources; and Intel's ability to respond quickly to technological developments and to incorporate new features into its products. The gross margin percentage could vary significantly from expectations based on changes in revenue levels; product mix and pricing; start-up costs, including costs associated with the new 32nm process technology; variations in inventory valuation, including variations related to the timing of qualifying products for sale; excess or obsolete inventory; manufacturing yields; changes in unit costs; impairments of long-lived assets, including manufacturing, assembly/test and intangible assets; the timing and execution of the manufacturing ramp and associated costs; and capacity utilization. Expenses, particularly certain marketing and compensation expenses, as well as restructuring and asset impairment charges, vary depending on the level of demand for Intel's products and the level of revenue and profits. The majority of our non-marketable equity investment portfolio balance is concentrated in companies in the flash memory market segment, and declines in this market segment or changes in management's plans with respect to our investments in this market segment could result in significant impairment charges, impacting restructuring charges as well as gains/losses on equity investments and interest and other. Intel's results could be impacted by adverse economic, social, political and physical/infrastructure conditions in countries where Intel, its customers or its suppliers operate, including military conflict and other security risks, natural disasters, infrastructure disruptions, health concerns and fluctuations in currency exchange rates.

Intel's results could be affected by the timing of closing of acquisitions and divestitures. Intel's results could be affected by adverse effects associated with product defects and errata (deviations from published specifications), and by litigation or regulatory matters involving intellectual property, stockholder, consumer, antitrust and other issues, such as the litigation and regulatory matters described in Intel's SEC reports. An unfavorable ruling could include monetary damages or an injunction prohibiting us from manufacturing or selling one or more products, precluding particular business practices, impacting our ability to design our products, or requiring other remedies such as compulsory licensing of intellectual property. A detailed discussion of these and other risk factors that could affect Intel's results is included in Intel's SEC filings, including the report on Form 10-Q.

Rev. 1/14/10

32

Page 33:

Backup Slides

33

Page 34:

Memory Virtualization with Intel® VT

[Diagram: VM0 through VMn running on a CPU with VT-x and Extended Page Tables (EPT); a hardware EPT walker handles guest page-table updates with no VM exits, while the VMM retains I/O virtualization]

Extended Page Tables (EPT)
• Map guest physical to host address
• New hardware page-table walker

Performance Benefit
• Guest OS can modify its own page tables freely
• Eliminates VM exits

Memory Savings
• Shadow page tables required for each guest user process (w/o EPT)
• A single EPT supports the entire VM
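To illustrate what the hardware walker does with a guest-physical address, here is a small sketch added for illustration; the 4-level, 9-bits-per-level split shown mirrors the x86-64 paging layout that EPT also uses, and the sample address is arbitrary.

    #include <stdint.h>
    #include <stdio.h>

    /* Split a guest-physical address into the four 9-bit indices used at each
     * EPT level (PML4E, PDPTE, PDE, PTE) plus the 12-bit page offset. */
    struct ept_indices { unsigned pml4, pdpt, pd, pt, offset; };

    static struct ept_indices ept_split(uint64_t gpa)
    {
        struct ept_indices ix = {
            .pml4   = (unsigned)((gpa >> 39) & 0x1ff),
            .pdpt   = (unsigned)((gpa >> 30) & 0x1ff),
            .pd     = (unsigned)((gpa >> 21) & 0x1ff),
            .pt     = (unsigned)((gpa >> 12) & 0x1ff),
            .offset = (unsigned)( gpa        & 0xfff),
        };
        return ix;
    }

    int main(void)
    {
        struct ept_indices ix = ept_split(0x12345678ULL);
        printf("pml4=%u pdpt=%u pd=%u pt=%u offset=0x%x\n",
               ix.pml4, ix.pdpt, ix.pd, ix.pt, ix.offset);
        return 0;
    }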

Page 35:

35

Intel® VT FlexPriority: Performance Benefits

Legacy 32-bit guests frequently access MMIO TPR during I/O intensive workloads

Eliminates most VM exits due to guest TPR access, reducing virtualization overhead and improving I/O throughput.

Intel® VT FlexPriority improves performance of 32-bit guests

[Chart: relative performance, Windows* XP SP3 Ntttcp throughput, w/o Intel® VT FlexPriority (1) vs. w/ Intel® VT FlexPriority (10.7), a 10.7x improvement]

Source: Intel. Performance measured using the Ntttcp benchmark comparing network performance of a 2 VP Windows* XP SP3 32-bit guest running with and without the Intel® VT FlexPriority feature. System configuration: 2x 3.20GHz Quad-Core Intel® Xeon® processor X5386 on Stoakley platform with Intel® 5400 chipset, Intel® Pro/1000 MT Dual Port Server Adapter. Actual performance may vary.

Presenter
Presentation Notes
Ntttcp receiver command: ntttcpr.exe -a 6 -l 2048 -n 65536 -m 4,0,<receiver IP> Ntttcp sender command: ntttcps.exe -a 2 -l 2048 -n 65536 -m 4,0, <receiver IP>
Page 36:

36

Intel® VT for Directed I/O: Delivering I/O Performance

[Diagram: emulation-based virtual I/O vs. direct-assigned I/O with Intel VT-d. With emulation, each VM's application, OS, driver, and buffer reach Device A through virtual device emulation in the VMM. With direct assignment, the native guest OS driver and buffer reach Device A through the chipset's Intel VT-d remapping, bypassing the VMM]

• Native guest OS driver
• No HW changes
• Eliminates intermediate paths and VMM overheads
• I/O intensive workloads benefit most
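One mechanism worth a hedged illustration, although it is not spelled out on the slide: the remapping hardware chooses a per-device translation context using the PCI requester ID carried with each DMA request. Decoding that bus/device/function triple is standard PCI and is sketched below; the sample value is arbitrary.

    #include <stdint.h>
    #include <stdio.h>

    /* A PCI requester ID is a 16-bit bus/device/function triple; Intel VT-d uses
     * it to select the translation tables that apply to a device's DMA. */
    struct bdf { unsigned bus, dev, fn; };

    static struct bdf decode_requester_id(uint16_t rid)
    {
        struct bdf b = {
            .bus = (rid >> 8) & 0xff,
            .dev = (rid >> 3) & 0x1f,
            .fn  =  rid       & 0x07,
        };
        return b;
    }

    int main(void)
    {
        struct bdf b = decode_requester_id(0x0420);   /* decodes to 04:04.0 */
        printf("%02x:%02x.%x\n", b.bus, b.dev, b.fn);
        return 0;
    }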

Presenter
Presentation Notes
It benefits workloads that issue a lot of I/O operations rather than workloads that need huge bandwidth. So the message here is that VT-d especially benefits workloads with small-block access, i.e., IOPS-intensive scenarios.
Page 37:

Virtual Machine Device Queues (VMDq): Improved throughput by offloading data sorting to the NIC

[Diagram: a NIC with VMDq (MAC/PHY plus a layer 2 classified sorter and per-VM Tx/Rx queues) connects the LAN to the VMM's layer 2 software switch, which feeds the vNICs of VM1 through VMn]

Transmit Path:

• Round-robin servicing

• Ensures transmit fairness across VMs

• Prevents head-of-line blocking

Receive Path

• Data packets for different VMs get sorted at the Ethernet silicon based on MAC address/VLAN tags

• Sorted data packets get passed to the respective VMs

• Data packets are received by the respective VMs


VMDq enhancements

• Weighted round robin Tx

• Loopback functionality

• Multicast/broadcast support

h/w available since 2008
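As a software model of the receive-path sorting that VMDq performs in silicon (purely illustrative; the structures below are hypothetical and the real classification happens in the Ethernet controller), the per-VM queue selection by destination MAC and VLAN tag looks roughly like this:

    #include <stdint.h>
    #include <string.h>

    struct rx_queue { int id; /* per-VM descriptor ring (details omitted) */ };

    struct vm_filter {
        uint8_t  mac[6];      /* destination MAC assigned to the VM's vNIC */
        uint16_t vlan;        /* VLAN tag, 0 if untagged */
        struct rx_queue *q;   /* receive queue dedicated to that VM */
    };

    /* Pick the per-VM queue for an incoming frame by destination MAC + VLAN. */
    static struct rx_queue *classify(const struct vm_filter *filters, int n,
                                     const uint8_t dst_mac[6], uint16_t vlan)
    {
        for (int i = 0; i < n; i++)
            if (memcmp(filters[i].mac, dst_mac, 6) == 0 && filters[i].vlan == vlan)
                return filters[i].q;
        return NULL;   /* no match: fall back to the default queue / software switch */
    }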

Page 38:

Health Care Provider Test Case Configurations (1)

Common to all configurations: Network 1GbE; Storage Dual 4G FC – 1 100Gb LUN RAID 5? – X 10k spindles for logs, 1 400Gb LUN RAID 5? – X 10k spindles for database; Software SQL 2005 32-bit; Workload Humana in-house application.

• Native DL380 G5 – Operating System: Windows 2003 32-bit; # CPU: 4 cores only; Memory: 12 GB (6 2GB DIMMs)
• DL380 G5 w/ESX 3.5 VM – Operating System: ESX 3.5 w/Windows 2003 32-bit VM; # CPU: 4 vCPU only; Memory: 12 GB (6 2GB DIMMs)
• Native DL380 G6 – Operating System: ESX 4 w/Windows 2003 32-bit VM; # CPU: 4 cores only; Memory: 12 GB (3 4GB DIMMs, 1 per channel)
• DL380 G6 w/ESX 3.5 – Operating System: ESX 3.5 w/Windows 2003 32-bit VM; # CPU: 4 vCPU only; Memory: 12 GB per VM, 24 GB per server (6 4GB DIMMs, 1 per channel)
• DL380 G6 w/ESX 4.0 VM – Operating System: ESX 3.5 w/Windows 2003 32-bit VM; # CPU: 4 vCPU only; Memory: 12 GB per VM, 24 GB per server (6 4GB DIMMs, 1 per channel)
• DL380 G6 w/2 ESX 4.0 VMs – Operating System: vSphere 4 w/Windows 2003 32-bit VM; # CPU: 4 vCPU only per VM; Memory: 12 GB per VM, 24 GB per server (6 4GB DIMMs, 1 per channel)

38

Presenter
Presentation Notes
Need to show the DIMM configuration for each of the configurations. For the G6, we need to depopulate a CPU and put 1 4GB DIMM per channel for the native run. The G5 has 16GB and needs to go to 12GB, so depopulate 2 2GB DIMMs. The G6 has 4GB DIMMs, so depopulate down to 6 4GB DIMMs, or one per channel. SAN/DAS configuration: ESX install: DAS (min 2x spindles R1)*; OS datastore: DAS (min 2x spindles R1) or SAN (min 4x spindles R5)*; SQL tlog datastore: SAN (LUN consistent with Humana's production SQL tlog standard); SQL DB datastore: SAN (LUN consistent with Humana's production SQL tlog standard). Image file 363 GB. Have a 400GB LUN for the database and a 100GB LUN for logs. Day 1 on physical DL380 G5: Kevin G's database for testing SQL on ESX performance relative to the physical config. Bulk inserts, rebuilding rows, backup, etc. WQS 14 – run for appropriate time; WTS 28 – run for appropriate time. Day 2 on ESX 3.5 DL380 G6: Kevin G's database for testing SQL on ESX performance relative to the physical config. Bulk inserts, rebuilding rows, backup, etc. WQS 14 – run for appropriate time; WTS 28 – run for appropriate time. Day 3 on ESX 4 DL380 G6: Kevin G's database for testing SQL on ESX performance relative to the physical config. Bulk inserts, rebuilding rows, backup, etc. WQS 14 – run for appropriate time; WTS 28 – run for appropriate time.
Page 39:

Health Care Provider Test Case Configurations (2)

Configurations: Native Server (Customer Config), ESX 3.5 VM, vSphere VM

• NIC Settings: VMXNET 3.0 network drivers; set the Net.CoalesceLowRxRate and Net.CoalesceLowTxRate on the host to 4; set the Net.vmxnetThroughputWeight on the host to 0
• FC HBA Settings: PVSCSI driver; Virtual Interrupt Coalescing
• BIOS Settings: SMT off, AWE on, PAE on (all configurations)
• SQL Server Settings: 10GB (all configurations)
• VM Settings: Large Pages

39

Presenter
Presentation Notes
Clean up the ESX 3.5 configuration settings… Currently all buffers are set to the default. Need to decide how much memory for the SQL buffer… need to look into enabling the AWE bit… also enable PAE; likely set SQL Server to use 10GB of memory…
Page 40:

FedEx Case Study - Test-bed Configuration

• Hardware: Xeon CPU X5560 @ 2.8 GHz (8 cores, 16 threads); SMT, NUMA, VT-x, VT-d, EIST, Turbo enabled (default in BIOS); 24GB memory; Intel 10GbE CX4 Server Adapter with VMDq

• Test Methodology: Ramdisk used, not disk drives. We are focused on network I/O, not disk I/O

• What is being transferred? Directory structure, part of a Linux repository: ~8G total, ~5000 files, variable file size, average file size ~1.6MB

• Data Collection Tool: ESXTOP used to capture CPU utilization; receive throughput captured with sar in the VM

• VM configuration: 8 VMs and each VM is configured with 1 vCPU, 2GB RAM & RHEL* 5.3 (64-bit)

• Application Tools used in VM: Netperf (common network micro-benchmark); OpenSSH, OpenSSL (standard Linux layers); HPN-SSH (optimized version of OpenSSH); scp, rsync (standard Linux file transfer utilities); bbcp ("bit-torrent-like" file transfer utility); open-iscsi (initiator); iet (iSCSI target)

[Diagram: source and destination servers, each an Intel Xeon X5500 series system running VMware* ESX* 4.0 with VM1 (8 vCPU, RHEL* 5.3) running file transfer applications; Intel Oplin 10GbE CX4 adapters directly connected back-to-back and directly assigned to the VM, with the arrow showing the file transfer direction]

40