
VMware ESX Server 2.1 Software Memory Sizing Guide for Dell

PowerEdge Systems

Version 1.0

Dell, Inc.

June, 2004


CONTENTS

1. INTRODUCTION

2. UNDERSTANDING THE “VMS AS PROCESSES” MODEL

3. UNDERSTANDING VMWARE ESX SERVER GLOBAL MEMORY MANAGEMENT

4. TESTING AND CHARACTERIZATION METHODOLOGY

5. STUDY DESIGN

6. RESULTS: VM MEMORY ALLOCATION (M)

7. RESULTS: PHYSICAL MEMORY SIZE (R)

8. CONCLUSION

9. FUTURE WORK

10. ADDITIONAL RESOURCES


1. Introduction

VMware® ESX Server™ software enables the rapid creation of multiple virtual machines (VMs) on a single physical server, as shown in Figure 1 – VMware ESX Server Architecture Overview. Each VM runs in a resource-isolated, secure environment that includes an operating system (OS) with associated applications. The figure shows four “guest” operating systems of three varieties: Microsoft® Windows® 2000, Microsoft Windows NT®, and Linux®. VMware ESX Server software is the VMware Virtualization Layer of Figure 1, allocating shares of physical Dell server resources to the guest VMs. Whereas VMs normally have no access to or knowledge of the physical platform hosting them, a special privileged VM known as the Service Console is used exclusively to manage the VMware ESX Server system software and hardware.

Figure 1 – VMware ESX Server Architecture Overview

VMware ESX Server software is most often used to help reduce hardware costs by consolidating many server systems (OS and application stacks) onto one physical system. Natural questions to ask are:

How many Virtual Machines can be concurrently hosted on a single ESX Server system?

What will be the performance impact of virtualization overhead on my application?

How well will my application scale using virtualization technology?


The answers to these questions depend on many factors, most of which have parallels in traditional operating system performance.

2. Understanding the “VMs as Processes” Model

The questions posed above have parallels in traditional operating system performance evaluation. Determining the VM capacity of a VMware ESX Server system is essentially the same as asking “How many processes can run concurrently in a multiprogrammed operating system?” Identifying the overhead of the virtualization layer is similar to understanding the overhead imposed by a traditional OS. The difference is that, in a VMware ESX Server system, the virtualization layer, known as the VMkernel, is the “multiprogrammed operating system,” and each guest VM is a “process.” The fact that each VM is itself a hosted operating system with its own set of supported processes is largely immaterial; each VM is effectively a single process from the perspective of the VMkernel.

Using the “VMs as processes” model, it is straightforward to proceed with traditional performance evaluation and sizing experiments. The traditional variables still apply, some with different names that reflect the virtualized nature of the system:

Hardware Capacity: Things such as the number and nature of the processors, the amount of memory, the size and speed of the disk subsystem, and the speeds and protocols used in the network interfaces are critical to understanding overall performance and capacity of any hardware/software solution.

Virtualization Layer Overhead: Like traditional OSs, the virtualization layer consumes physical resources by the mere fact of its existence. This “footprint” of the VMkernel and its attendant Service Console has two components: a “global” component that exists independently of the number of hosted VMs, and a “per-VM” overhead representing the additional resources consumed by the virtualization layer to support each additional VM. As with traditional OSs, these overhead components are not necessarily constant and may vary depending on other factors such as overall system load and user workload characteristics. Like traditional OSs, the VMkernel has tunable parameters that allow some degree of control over its own resource consumption.

Virtual Machine Characteristics: Think of virtual machines as the “processes” managed by the VMkernel. Traditional processes have many characteristics that can be controlled independently of the code the process executes, such as the security context, memory limits, disk quotas, etc. Similarly, VMs can be configured in many different ways, regardless of the guest OS they will be hosting. Configurable VM parameters include memory size, number of processors, number and type of virtual disk and network controllers, resource limitations imposed by the VMkernel (through implementation of administrative policy), and virtual Basic Input/Output System (BIOS) settings. These choices, many made when the VM is created but before the guest operating system is installed, can have a profound impact on the VM’s performance.

Guest Operating System Characteristics: Think of the guest operating system as the “program” being run by the VM. Just as different programs have different performance profiles, so do different guest operating systems. Much work has been done across the industry to determine optimal configurations of these operating systems in traditional physical hardware settings.

User Workload Characteristics: Traditionally, processes execute programs that accept input, perform computation, and produce output. In the world of virtualization, a guest OS and its user applications constitute a VM process with inputs, computations, and outputs. Workloads are typically described with respect to the resource bottleneck; they can be “I/O bound,” “CPU bound,” or “balanced.” All of these concepts apply to VM processes, just as they would to traditional processes.

3. Understanding VMware ESX Server Global Memory Management

VMware ESX Server software has a three-tiered, progressive approach to managing memory across all virtual machines. These mechanisms work in concert to optimize memory utilization and provide for graceful degradation under severe loading conditions. These memory management techniques are completely secure: in no case can one virtual machine access the memory of another virtual machine.

Page Sharing: This feature conserves memory in much the same way as shared code segments and libraries do in traditional operating systems. If two or more VMs have separate but identical memory pages, VMware ESX Server software can remove all but one copy of the page and share that copy between the VMs. Shareable pages are identified by a background scanning process that computes a hash value for every page in the system, then performs a strict byte-for-byte comparison of pages with identical hash values. Shared pages are further managed with a copy-on-write policy, so that if one of the VMs sharing a page writes to it, that VM transparently receives its own private copy. Although page sharing is especially effective when VMs are running the same operating system, it takes time for the VMkernel to find and consolidate shareable pages. Because the scanning rate for shareable pages is low, this feature is designed to have minimal impact on overall machine performance.
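The scan-and-share cycle is easy to picture in code. Below is a minimal, illustrative Python sketch of the hash-then-compare-then-share approach described above; the class, its fields, and the use of SHA-1 over 4KB pages are our assumptions for illustration, not VMware's implementation.

```python
# Illustrative model of content-based page sharing: hash every page, strictly
# compare pages whose hashes collide, share one copy, and break sharing on
# write (copy-on-write). Names and structures here are hypothetical.
import hashlib

PAGE_SIZE = 4096  # assume 4KB pages

class PagePool:
    def __init__(self):
        self.frames = {}    # (vm_id, page_no) -> page contents (bytes)
        self.by_hash = {}   # digest -> canonical shared frame

    def scan_and_share(self):
        """Background pass: collapse identical pages onto one shared frame."""
        for key, frame in self.frames.items():
            digest = hashlib.sha1(frame).digest()
            canonical = self.by_hash.setdefault(digest, frame)
            # The strict compare guards against hash collisions before sharing.
            if canonical is not frame and canonical == frame:
                self.frames[key] = canonical

    def write(self, vm_id, page_no, data):
        """Copy-on-write: a VM writing to a shared page gets a private copy."""
        self.frames[(vm_id, page_no)] = bytes(data)

pool = PagePool()
pool.frames[("vm1", 7)] = bytes(PAGE_SIZE)   # a zeroed page in VM 1
pool.frames[("vm2", 3)] = bytes(PAGE_SIZE)   # an identical page in VM 2
pool.scan_and_share()                        # both keys now share one frame
pool.write("vm2", 3, b"\x01" * PAGE_SIZE)    # write breaks sharing for VM 2
```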

Memory Ballooning: The memory management mechanisms of individual guest operating systems could potentially have a negative impact on the VMware ESX Server global memory management efforts. These undesirable effects occur because the two memory management systems ordinarily do not communicate and may work toward contrary goals. Memory ballooning enables indirect communication between the two by installing a special balloon driver in the guest OS. This driver is aware of the VMware ESX Server software environment and communicates with the VMkernel indirectly, by changing the amount of memory pressure within the guest. The balloon driver can be told to either inflate or deflate. When inflated, the balloon driver requests memory pages from its guest OS, just as any other kernel-mode driver would. These “captured” pages are then made available to the VMkernel for use by other VMs.
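A toy model may help make the inflate/deflate protocol concrete. Every name in the following Python sketch is hypothetical; the real balloon driver is a kernel-mode driver inside the guest, not a user-level object.

```python
# Toy model of memory ballooning: the VMkernel asks the in-guest balloon
# driver to inflate, the driver pins guest pages, and the machine memory
# backing those pages becomes available to other VMs.
class BalloonDriver:
    def __init__(self, guest_ram_mb):
        self.guest_ram_mb = guest_ram_mb
        self.inflated_mb = 0   # guest pages currently pinned by the balloon

    def set_target(self, target_mb):
        """Inflate or deflate toward the VMkernel's requested size."""
        # Clamp: the driver cannot pin more memory than the guest has, and a
        # real driver is further throttled by guest OS memory pressure.
        self.inflated_mb = max(0, min(target_mb, self.guest_ram_mb))
        return self.inflated_mb   # machine memory released to the VMkernel

vm = BalloonDriver(guest_ram_mb=1024)
print(vm.set_target(256))  # inflate: ~256MB reclaimed; guest feels pressure
print(vm.set_target(0))    # deflate: the guest gets its memory back
```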


Paging: As a final mechanism to support memory oversubscription, the VMkernel performs paging on memory at a global level. This paging is completely transparent to the guest operating systems. It is, however, an undesirable situation: it indicates a lack of physical memory in the system and should be avoided.
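Taken together, the three mechanisms form an escalation ladder. The following toy Python sketch (function name, parameters, and numbers are all hypothetical) illustrates the progressive policy described in this section: share first, balloon next, and page only when the first two cannot free enough memory.

```python
# Toy escalation policy for the three-tier approach: page sharing, then
# ballooning, then VMkernel paging as a last resort. All quantities in MB.
def reclaim(needed_mb, shareable_mb, balloonable_mb):
    """Return a list of (mechanism, mb) steps that free `needed_mb`."""
    plan, remaining = [], needed_mb
    for mechanism, available in (("page sharing", shareable_mb),
                                 ("ballooning", balloonable_mb)):
        take = min(remaining, available)
        if take > 0:
            plan.append((mechanism, take))
            remaining -= take
    if remaining > 0:  # undesirable: the host is simply short on RAM
        plan.append(("vmkernel paging", remaining))
    return plan

print(reclaim(needed_mb=900, shareable_mb=300, balloonable_mb=400))
# [('page sharing', 300), ('ballooning', 400), ('vmkernel paging', 200)]
```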

4. Testing and Characterization Methodology

A thorough discussion of VMware ESX Server performance characteristics is complex and beyond the scope of this document. However, it is possible to draw basic conclusions and begin to build a picture of the performance characteristics of the technology by starting with simple scenarios and targeting a limited number of data points that have traditionally proven to be the most critical in system performance studies. The goal of this paper is to present a typical performance study in which we choose a particular hardware platform, a particular configuration of VMware ESX Server software, a particular mix of guest operating systems and applications, and a particular set of inputs to those applications. Although the detailed results are valid only for this instantiation of hardware, software, and workload, there is also broad applicability through the development of:

Methodology: By understanding how this study was conducted and how the results were interpreted, performance practitioners can conduct similar studies using the same methodology for alternate configurations – in particular, for other workloads and software configurations.

General Conclusions: Since the performance of applications running on VMware ESX Server systems is highly workload dependent, no unified formula is likely to emerge that encompasses all workloads and configurations. Instead, heuristics can be discovered that help to guide a sizing and characterization effort for particular workloads, following the methodology used in this study.

The mechanism used to gather metrics is central to the success of any performance study. VMware provides performance monitoring of the VMkernel and the individual hosted VMs through the VMware Management Interface (VMI). The VMI is accessed via a Web browser and provides both physical and virtual resource usage information for purposes of performance study. Similar performance data is available via VMware VirtualCenter.


5. Study Design

[Figure: a PowerEdge 2650 Load Generator running four SQL Load Generator (DBHammer) instances drives a PowerEdge 6650 System Under Test, which hosts the Service Console and four VMs, each running Windows Server and SQL Server.]

Figure 2 – Performance Test Bed Overview

Figure 2 shows an overview of the test environment. Each identical VM hosts a database server and is paired with a dedicated, external workload generator. The workload generator presents a “moderate” workload to its peer server. The goal is to determine, under various conditions such as different hardware resources, VM configurations, etc., how many VMs can be concurrently hosted by the System Under Test (SUT). A Dell PowerEdge™ 2650 system loaded with Microsoft Windows 2000 Advanced Server was used as the workload generation machine, on a network subnet shared with the SUT. A Dell PowerEdge 1750 (not shown in the figure) served as the VMware management server, loaded with Windows Server 2003 running the VMware VirtualCenter 1.0.1 management component. VirtualCenter provided for the mass cloning of VMs. Table 1 shows the detailed configuration of the test bed hardware and software components.


System Under Test (SUT)
  Hardware: 1 Dell PowerEdge 6650 system; 4, 16, or 32GB RAM; 4 2.8 GHz Intel® Xeon™ MP CPUs with 2MB L3 cache; 2 embedded Broadcom® NetXtreme™ BCM5700 NICs; 5 15K RPM 73GB disks configured as RAID 5
  Software: VMware ESX Server software, version 2.1; Service Console memory setting: 512MB

Virtual Machines (VM)
  Virtual hardware: 1 or 2 processors; 0.25, 1, or 2GB memory; vmxnet NIC
  Software: Microsoft Windows 2000 Advanced Server SP3; Microsoft SQL Server 2000 SP3a

Load Generator
  Hardware: 1 Dell PowerEdge 2650 system; 2GB RAM; 1 10K RPM 136GB disk
  Software: Microsoft Windows 2000 Advanced Server; DBHammer¹

Systems Management
  Hardware: 1 Dell PowerEdge 1750 system; 2GB RAM; 1 10K RPM 136GB disk
  Software: Windows Server 2003™ Enterprise Edition; VMware VirtualCenter™ 1.0.1 software

Network Switch
  Hardware: Dell PowerConnect 5224 switch

Table 1 – Test Bed Configuration Details

To effectively design and manage the scope of the test, it is necessary to understand and constrain the test parameter space. First, all parameters in the test environment must be identified, and then the “parameters of interest” must be identified. These are the parameters that are most likely to affect the outcome of the experiment when their values are changed; hence, these parameters are called variables. All of the other parameters must be invariant for the duration of the test and are called constants. Table 2 shows the list of identified parameters. For constants, an invariant value is shown. For variables, an array of values over which the variable will range during the testing is shown.

¹ The Database Hammer (DBHammer) tool is the standard SQL Server benchmark tool included in the Microsoft SQL Server 2000 Resource Kit.


ClientsPerVM (Constant): 50
  The number of DBHammer clients running concurrently, assigned to send requests to a specific VM.

ClientThinkTime (Constant): 100 ms
  The amount of time each DBHammer client pauses between initiating requests.

PhysicalProcessors (Constant): 4
  The number of physical processors in the SUT.

PhysicalRAM (Variable, symbol R): 4, 16, 32 GB (default = 16)
  The amount of physical memory in the SUT.

NumberOfVMs (Variable, symbol X): 1, 2, 3, 4, 6, 8, …
  The number of VMs (and associated workload generators) running concurrently on the SUT. This value is plotted on the X axis of performance graphs and is increased (possibly beyond 8) until Y no longer increases, or the workload generation system is saturated.

VMMemorySize (Variable, symbol M): 0.25, 1, 2 GB (default = 1)
  The amount of virtual memory assigned to each VM.

ProcessorsPerVM (Variable, symbol P): 1, 2 (default = 2)
  The number of virtual SMP processors allocated to each VM.

TransactionsPerSecond (Output, symbol Y): Observed
  The total number of transactions processed by all VMs on the SUT after the system reaches steady state. This value is plotted on the Y axis of performance graphs.

Table 2 – Study Parameters

Even with only four variables ranging over relatively small value sets, the number of data points to capture for all combinations of values is still too large for a tractable study. We therefore define the concept of nth-order variations. Each graph to be generated plots points at an (X,Y) coordinate, where X is the number of concurrent VMs and Y is the aggregate transaction throughput. Therefore, X is always allowed to range, and Y's value is observed. For a first-order variation study, only n=1 of the other three variables is allowed to range; the other two revert to constants by defaulting to a predetermined value. For a second-order variation study, any n=2 variables are allowed to range in all possible combinations, and so on. This study is a first-order variation study only. In this paper, with its focus on memory sizing effects, two graphs were generated showing the effects of changing the following (a sketch of the resulting run matrix appears after the list):

• Virtual machine memory size (M)
• Physical machine memory size (R)
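As a concrete reading of this design, the following Python sketch (names and structure are ours) enumerates the runs a first-order variation study implies: exactly one of R, M, and P ranges over its Table 2 values while the other two hold their defaults, and X is swept within every run.

```python
# Enumerate first-order variation runs from the Table 2 variables: one
# variable ranges, the others default, and X (number of VMs) is swept
# within each resulting configuration to produce one curve per setting.
variables = {
    "R": {"values": [4, 16, 32],  "default": 16},  # physical RAM, GB
    "M": {"values": [0.25, 1, 2], "default": 1},   # VM memory size, GB
    "P": {"values": [1, 2],       "default": 2},   # virtual CPUs per VM
}

def first_order_runs():
    """Yield one {R, M, P} setting per curve; X is swept inside each."""
    for name, spec in variables.items():
        for value in spec["values"]:
            setting = {k: v["default"] for k, v in variables.items()}
            setting[name] = value
            yield setting

for run in first_order_runs():
    print(run)  # the all-default setting recurs; measure it once and reuse it
```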

6. Results: VM Memory Allocation (M)

Figure 3 shows the effect of the virtual machine's memory sizing option in our test.


Figure 3 – Effect of VM Memory Size

Figure 3 clearly shows a decrease in throughput for the 0.25GB (256MB) run after the 2nd VM was added. This implies that 0.25GB is too small for SQL Server to efficiently handle the requested workload; the guest OS has begun paging its own virtual memory because of the constraint.

Performance scales linearly with the larger memory sizes (M = 1.0, 2.0GB) until there are X=10 concurrent VMs. At this point, a somewhat counterintuitive inflection point was encountered: the configuration in which VMs are given the most memory (2.0GB) began to suffer degraded performance, while the 1.0GB run continued to see increased performance until X=14. This implies that, for this software configuration and workload, each individual VM optimally requires somewhere between 0.25GB and 2.0GB of memory; anything more will not improve each VM's individual performance. In fact, giving too much memory to earlier VMs leaves too little memory to add more VMs later. Although the VMware ESX Server software's page sharing and memory ballooning mechanisms help to optimize memory on a global scale, they cannot fully make up for the poor sizing choice. From the VMI, it is clear that at X=12, VMware ESX Server software has already engaged the paging mechanism for the M = 2.0GB case (Balloon Driver = 8.5GB, Swapped = 63.5MB). This gives rise to the first rule of thumb:

Heuristic 1: For moderate workloads with low-to-moderate CPU and I/O utilizations, correctly sizing the memory of each VM is critical to maximizing the number of hosted VMs without degrading cumulative performance.

This should generally hold true for light to moderate workloads where neither CPU nor I/O is the bottleneck. Even though the workload in this study is marginally I/O bound, it does not saturate the I/O subsystem. Such workloads are ideal for hosting with virtual machines, but memory sizing must be done properly to maximize the benefits of server consolidation.

Determining the optimal value for M (denoted hereafter as M′) can be accomplished by running a single VM and observing the memory usage statistics from within the VM, as well as in the VMI. For example, for the software stack used in this study, the Windows Task Manager (Performance tab) can be monitored while the proposed workload runs for a sufficient time, say 24 to 36 hours; from the VMI, the private memory allocated to the VM should be observed. Once M′ is known, and if all VMs are identical, then the maximum number of VMs (X) that can be concurrently hosted before memory ballooning is invoked can be approximated by the following inequality:

Formula 1:

\[ X \;\le\; \frac{R - M_{SC} - M_{VMK}}{M' + M_{VO}} \]

Where:
  X is the number of concurrent VMs
  R is the total physical RAM on the system
  M′ is the optimal memory size for each VM
  M_SC is the amount of memory allocated to the Service Console
  M_VMK is the memory overhead of the VMkernel
  M_VO is the virtualization overhead for each VM

The inequality makes sense: given the system's physical RAM, first deduct the memory allocated to the Service Console, and further deduct the amount of memory required by the VMkernel itself. Next, divide the difference by how much memory each hosted VM will require, which is the sum of its optimal virtual memory allocation and the amount of memory required to support the virtualization overhead. The quotient bounds X, the maximum number of VMs.

Formula 1 is actually a worst-case upper bound. In practice, the maximum value of X could be higher depending on the effect of VMware ESX Server software's page sharing mechanism. Although sharing can have a pronounced effect, especially in circumstances such as this example scenario where the VMs are similar, Formula 1 is a safe first approximation of the maximum number of VMs that can be hosted without over-subscribing memory.

The value of X is computed by the inequality, and the values of R and M′ have already been described. M_SC is determined through configuration of the VMware ESX Server software, typically at installation. Dell recommends 512MB, the value used in this study. M_VMK for the 2.1 version of VMware ESX Server software has been determined by VMware to be 36MB. The value of M_VO depends on two additional parameters: M′, the amount of memory allocated to the VM, and P, the number of virtual processors allocated to the VM. Dual virtual processors (P=2) require an additional 10MB of memory per VM. VMs with over 512MB of requested memory (M′>512) require an additional 10MB plus 32KB for each MB over 512. Therefore:

\[ M_{VO} = \begin{cases} 54 + 10(P-1) & \text{if } M' \le 512 \\ 64 + 10(P-1) + 0.032\,(M'-512) & \text{otherwise} \end{cases} \]


Therefore, for this particular graph, if we assume R = 16GB = 16384MB, P = 2, M_SC = 512MB, M_VMK = 36MB, and M′ = 1GB = 1024MB, then the maximum value of X is computed thus:

\[ X \;\le\; \frac{R - M_{SC} - M_{VMK}}{M' + M_{VO}} = \frac{16384 - 512 - 36}{1024 + 64 + 10(2-1) + 0.032\,(1024-512)} = \frac{15836}{1114.4} \approx 14.2 \]

So at most X = 14 VMs can be hosted, which is corroborated by the graph in Figure 3.
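To make Formula 1 easy to re-apply to other configurations, here is a minimal Python sketch of the calculation. The function names are ours; the constants are the ESX Server 2.1 values quoted above.

```python
# Formula 1 and the M_VO overhead term as code. All quantities are in MB.
# Constants (M_SC = 512, M_VMK = 36, the 54/64MB base overheads, 10MB per
# extra virtual CPU, 32KB per MB above 512) are the ESX 2.1 values cited
# in the text.
def m_vo(m_prime, p):
    """Per-VM virtualization overhead for memory m_prime (MB) and p vCPUs."""
    if m_prime <= 512:
        return 54 + 10 * (p - 1)
    return 64 + 10 * (p - 1) + 0.032 * (m_prime - 512)

def max_vms(r, m_prime, p, m_sc=512, m_vmk=36):
    """Worst-case bound on concurrent VMs before ballooning is invoked."""
    return int((r - m_sc - m_vmk) / (m_prime + m_vo(m_prime, p)))

# Reproduces the worked example: a 16GB host with 1GB dual-vCPU VMs.
print(max_vms(r=16384, m_prime=1024, p=2))   # -> 14, matching Figure 3

# For the 32GB host the bound is 28, but Section 7 observes diminishing
# returns near X=18; the formula is an upper bound, not a throughput model.
print(max_vms(r=32768, m_prime=1024, p=2))   # -> 28
```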

7. Results: Physical Memory Size (R)

Figure 4 – Physical RAM effects

Figure 4 shows the effect of the amount of physical RAM on the number of VMs that can be hosted. The graph shows results for systems with 4GB, 16GB, and 32GB of physical memory running VMs with 1GB of virtual memory and the same workload as described in Section 5.

The maximum values of Y on the 4GB and 16GB plots occur at the values of X predicted by Formula 1. For the 32GB plot, however, there appears to be a point of diminishing returns around X=18.


Although Formula 1 implies there is memory for additional VMs, throughput remained flat or may even have begun to decrease. The true inflection point for the 32GB plot evidently lies to the right of X=18. This premature flattening of the curve, though not surprising, bears further investigation; it could be due to virtualization overhead, saturation of the workload generator², or other workload-dependent bottlenecks such as the network, disk subsystem, or CPU. Regardless, these results are consistent with the worst-case maximum value given by Formula 1 and give rise to two additional heuristics:

Heuristic 2: For moderate workloads with no pronounced CPU or I/O bottlenecks, the number of VMs that can be hosted by a single system is largely dependent upon the amount of physical RAM.

Heuristic 3: Over-subscribing memory – that is, relying on the “page sharing”, “memory ballooning” and “memory paging” features of the virtualization layer to stretch memory resources across a larger group of VMs than can be optimally hosted – is not a good choice for building virtual environments where maximum throughput performance is the dominant goal.

² Because the DBHammer tool was unable to create enough client instances to test beyond X=18, it could not be determined whether the generator had reached its capacity or encountered a programmatic limitation.

8. Conclusion

This paper evaluated the throughput characteristics of VMware ESX Server 2.1 software using a workload representative of “moderately” loaded database servers. This workload is not particularly I/O or CPU bound, and was intended to demonstrate the maximum number of moderately loaded VMs that can be simultaneously hosted on a single PowerEdge 6650 system with various memory and processor configurations. The results are useful for understanding basic sizing guidelines when CPU and I/O are not saturated bottlenecks. In summary, the following guidelines are reasonable in sizing a VMware ESX Server system:

Physical memory is critical. For the PowerEdge 6650 in our testing, 4GB was typically too little. 32GB provided maximum flexibility and imposed no performance penalties, but it may be underutilized for some workloads and does not guarantee more productive VMs than can be accommodated with less memory. For this configuration, 16GB is recommended.

The maximum number of VMs, particularly when moderately or lightly loaded, depends primarily upon available physical RAM, but there is a point of diminishing returns that is workload dependent. For the moderate workload investigated in this study on the PowerEdge 6650, the maximum number of productive VMs was approximately 20.

To optimize use of system resources and maximize throughput, one must understand the resource profile of each VM. The memory requirements of a VM can be determined by running the VM by itself and observing memory usage metrics from within the VM (such as with the Windows Task Manager) and in the VMware Management Interface. Formula 1 of this paper can be used to determine the maximum number of VMs that can be hosted without triggering memory ballooning.

Concurrently hosting VMs with the same software stack can be beneficial, as the VMware ESX Server page sharing mechanism can reduce the overall memory requirements for VMs that have common code pages (e.g., running multiple Windows 2003 virtual machines).

Avoid memory over-subscription in production environments where cumulative performance is the primary goal. Memory ballooning should be thought of as a way to gracefully manage system memory during unanticipated peak demand periods.

9. Future Work

Additional work is required to understand how VMware ESX Server software affects other workloads. The following questions are interesting, and the experiments outlined in this paper could be revised to investigate them further:

• What is different for CPU-intensive workloads? How beneficial is virtual SMP?
• It is hypothesized that CPU-intensive workloads are better candidates for virtualization than I/O-intensive workloads. Is this really the case?
• Does mixing CPU-bound and I/O-bound workloads further increase system utilization and throughput?
• How profound is the effect of memory sharing? Does mixing workloads (to optimize resource usage) but not mixing software stacks (to leverage memory sharing) lead to even further improvements?
• What are some specific values for X (the maximum number of VMs) for particular hardware configurations with well-known enterprise applications?


10. Additional Resources

Complete and current documentation for Dell-supported VMware configurations is available at http://www.dell.com/vmware

Related Documents

1. VMware ESX Server 2.1 Software for Dell PowerEdge Server Deployment Guide
2. Dell VMware ESX Server 2.1 Software Backup and Recovery Guide, Version 1.0

Additional Web Information

1. Dell products: www.dell.com (see servers, then product literature)
2. EMC Support Matrix (ESM) at http://www.emc.com/horizontal/interoperability/
3. VMware VirtualCenter 1.0:
   • Features: http://www.vmware.com/products/vmanage/vc_features.html
   • Documents: http://www.vmware.com/support/vc/
   • Troubleshooting: http://www.vmware.com/support/vc/doc/releasenotes_vc.html
4. VMware ESX 2.1:
   • Features: http://www.vmware.com/products/server/esx_features.html
   • Documentation: http://www.vmware.com/support/esx21 (includes Install Guide, Admin Guide, Scripting Guide, etc.)
   • Troubleshooting: http://www.vmware.com/support/esx21/doc/releasenotes_esx21.html (install, configure, guest OS; also see the KnowledgeBase)
5. VMware compatibility guides for systems, I/O, and SANs:
   • http://www.vmware.com/pdf/esx2_system_guide.pdf
   • http://www.vmware.com/pdf/esx_IO_guide.pdf
   • http://www.vmware.com/pdf/esx_SAN_guide.pdf

Technical Support Resources

1. Dell-specific VMware information at http://www.dell.com/vmware/
2. Link for Dell support and professional services
3. VMware support website at http://www.vmware.com
4. VMware newsgroups at news.vmware.com

THIS PAPER IS FOR INFORMATIONAL PURPOSES ONLY, AND MAY CONTAIN TYPOGRAPHICAL ERRORS AND TECHNICAL INACCURACIES. THE CONTENT IS PROVIDED AS IS, WITHOUT EXPRESS OR IMPLIED WARRANTIES OF ANY KIND.

Information in this document is subject to change without notice.

Trademarks used in this text: Dell, the DELL logo, and PowerEdge are trademarks of Dell Inc. VMware is a registered trademark of VMware, Inc. EMC is a registered trademark of EMC Corporation. Linux is a registered trademark of Linus Torvalds. Netware is a registered trademark of Novell Inc. Intel, Xeon, and Pentium are registered trademarks of Intel Corporation. Microsoft and Windows are registered trademarks of Microsoft Corporation. Other trademarks and trade names may be used in this document to refer to either the entities claiming the marks and names or their products. Dell Inc. disclaims any proprietary interest in trademarks and trade names other than its own.


© 2004 Dell Inc. All rights reserved. Reproduction in any manner whatsoever without the written permission of Dell Inc. is strictly forbidden. For more information contact Dell.

Portions copyrighted ©2004 VMware, Inc.