
VMotion Over Distance for Microsoft, Oracle, and SAP

Enabled by VCE Vblock 1, EMC Symmetrix VMAX, EMC CLARiiON, and EMC VPLEX Metro

An Architectural Overview

Abstract

This white paper describes the design, deployment, and validation of a virtualized application environment incorporating VMware vSphere, Oracle E-Business Suite Release 12, SAP, Microsoft SharePoint 2007 and SQL Server 2008 on-line transaction processing (OLTP) on virtualized storage presented by EMC® VPLEX™ Metro.

May 2010


Copyright © 2010 EMC Corporation. All rights reserved.

EMC believes the information in this publication is accurate as of its publication date. The information is subject to change without notice.

THE INFORMATION IN THIS PUBLICATION IS PROVIDED “AS IS.” EMC CORPORATION MAKES NO REPRESENTATIONS OR WARRANTIES OF ANY KIND WITH RESPECT TO THE INFORMATION IN THIS PUBLICATION, AND SPECIFICALLY DISCLAIMS IMPLIED WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE.

Use, copying, and distribution of any EMC software described in this publication requires an applicable software license.

For the most up-to-date listing of EMC product names, see EMC Corporation Trademarks on EMC.com

All other trademarks used herein are the property of their respective owners.

Part number: H6983


Table of Contents

Executive summary
    Business case
    Product overview
    Key results

Introduction
    Introduction to this white paper
    Purpose
    Scope
    Audience
    Terminology

Configuration
    Overview
    Physical environment
    Hardware resources
    Software resources

Common elements in this distributed virtualized data center test environment
    Introduction to the common elements
    Contents

VMware vSphere
    VMware vSphere overview
    VMware vSphere configuration

EMC Symmetrix VMAX
    EMC Symmetrix VMAX overview
    EMC Symmetrix VMAX configuration

EMC CLARiiON CX4-480
    EMC CLARiiON CX4-480 overview
    EMC CLARiiON CX4-480 configuration

VCE Vblock 1
    VCE Vblock 1 overview
    VCE Vblock 1 configuration

VPLEX Metro
    VPLEX Metro overview
    SAN design for VPLEX Metro
    VPLEX Metro features for storage usage
    Storage best practice – partition alignment
    Distributed mirroring – DR1 device
    VPLEX Metro back-end zoning


    VPLEX Metro front-end zoning
    VPLEX Metro WAN connectivity
    Migration to VPLEX Metro using LUN encapsulation – disruptive to host access
    Migration to VPLEX Metro using VMware Storage VMotion – nondisruptive to host access
    Migration to VPLEX Metro DR1 – disruptive to host access
    Migration from Site A to Site B VPLEX Metro LUN – nondisruptive to host access

VPLEX Metro administration
    Introduction to VPLEX Metro administration
    VPLEX Metro administration procedure

Microsoft Office SharePoint Server 2007
    Microsoft SharePoint Server 2007 overview

Microsoft SharePoint Server 2007 configuration
    Microsoft SharePoint Server 2007 configuration overview
    Microsoft SharePoint Server 2007 design considerations
    Microsoft SharePoint Server 2007 farm virtual machine configurations
    Virtual machine configuration and resource allocation
    Testing approach—SharePoint farm user load profile

Validation of the virtualized SharePoint Server 2007 environment
    Test summary
    Validation without encapsulation to VPLEX
    Validation with VMotion running between local and remote sites
    Validation of cross-site VMotion

Microsoft SQL Server 2008
    Microsoft SQL Server 2008 overview

Microsoft SQL Server 2008 configuration
    Design considerations
    SQL Server test application
    OLTP workloads
    Key components of SQL Server testing
    Partitioning the SQL database
    Broker and customer file groups partitioning
    Broker and customer file groups

Validation of the virtualized SQL Server 2008 environment
    Test summary
    Validation prior to encapsulation
    Validation after encapsulation
    Validation of cross-site VMotion

SAP
    SAP overview

SAP configuration
    SAP ERP 6.0


    SAP BW 7.0
    Business scenario
    Design considerations

Validation of the virtualized SAP environment
    Test objectives
    Test scenario
    Test procedure
    Test results

Oracle
    Oracle overview

Oracle configuration
    Configuration of the Oracle E-Business Suite environment
    Design considerations
    Oracle E-Business Suite Database Server
    Oracle E-Business Suite Application Servers 1 and 2
    Oracle E-Business Suite Infrastructure Server

Validation of the virtualized Oracle environment
    Tuning and baseline tests
    Baseline test
    Encapsulate RDM (Raw Device Mapping) to vStorage
    VMotion migration test
    100 km distance simulation for FC
    Batch process test

Conclusion
    Summary
    Findings
    Next steps

References
    White papers
    Product documentation
    Other documentation


Executive summary

Business case

As companies increasingly realize the business and technical benefits of virtualizing their servers and applications, they are looking to apply the same model to their storage systems as well. Server virtualization allows hardware resources to be pooled into resource groups and dynamically allocated to application workloads, providing a flexible and fluid infrastructure. Storage, too, must evolve beyond simple consolidation into virtual storage, which allows storage resources to be aggregated and virtualized to provide a dynamic storage infrastructure that complements the dynamic virtual server infrastructure.

EMC delivers a virtual storage solution that builds on fully automated storage tiering to address the need for mobility and flexibility in the underlying storage infrastructure. It addresses this need through federation: delivering cooperating pools of storage resources. Federation enables IT to quickly and efficiently support the business through pools of resources that can be dynamically allocated. This flexibility elevates the value IT offers to the business, because applications and data can be moved to better support services. Together, cooperating pools of server applications and storage enable a new model of computing: IT as a service.

To proactively avoid potential disaster threats such as forecasted weather events, IT departments must overcome the challenges that storage virtualization introduces with distance. Until now, this has not been possible without relying on array replication between the data center locations and a site failover process.

Product overview

EMC® VPLEX™ Metro enables disparate storage arrays at two separate locations to appear as a single, shared array to application hosts, allowing for the easy migration and planned relocation of application servers and application data, whether physical or virtual, within and/or between data centers across distances of up to 100 km.

VPLEX Metro enables companies to ensure effective information distribution by sharing and pooling storage resources across multiple hosts over synchronous distances.

VPLEX Metro empowers companies with new ways to manage their virtual environment over synchronous distances so they can:

• Transparently share and balance resources across physical data centers

• Ensure instant, realtime data access for remote users

• Increase protection to reduce unplanned application outages

Transparently share and balance resources within and across physical data centers

Using VPLEX Metro, IT departments can migrate and relocate virtual machines, applications, and data within, across, and between data centers. VPLEX Metro works in conjunction with VMware VMotion and Storage VMotion to:

• Enable administrators to use standard management tools to easily distribute running applications between two sites, making it easy to load balance operations


• Transparently move running applications and data between sites, eliminating service disruption during scheduled downtime events

• Easily add or remove storage so that the actual location of the data on a single array becomes much less important. With virtual storage, incorporating new storage systems into the IT environment is faster and simpler

• Accelerate private cloud deployment by creating a seamless, multi-site storage layer that can easily be hosted onsite or shifted to a hosting provider

Ensure instant, realtime data access for remote users

Using VPLEX Metro, data is distributed and access is shared across sites, enabling IT environments to:

• Provide concurrent read and write access to data by multiple hosts across two locations

• Provide realtime data access to remote physical data centers without local storage

• Share storage in geographically-dispersed environments up to 100 km apart

Increase protection to reduce unplanned application outages

Using VPLEX Metro, IT can increase high availability and workload resiliency across sites, while also proactively avoiding potential disaster threats such as forecasted weather events.

• With an n+1 cluster architecture, VPLEX Metro ensures continuous data access at each site in the event of a component failure within a VPLEX cluster, and provides heterogeneous storage mirroring between array types and the Virtual Computing Environment coalition (VCE) Vblock 1. For more information about supported arrays, refer to the EMC Support Matrix.

• Combined with VMware VMotion between geographically-dispersed VMware clusters, VPLEX Metro enables IT to respond to a potential threat before it becomes a disaster, by proactively moving workloads nondisruptively from one site to another.


Key results

This solution, enabled by VPLEX Metro, solves a major IT challenge in a way that could not easily be achieved before now. Traditionally, customers were tasked with migrating data and applications between geographically-dispersed data centers through a series of manual tasks and activities. Customers would either make physical backups or use data replication services to transfer application data to the alternate site. Applications had to be stopped and could not be restarted until testing and verification were complete.

With VPLEX Metro, these migration challenges can be resolved quickly and easily. Once the distributed RAID 1 device (DR1) is established, applications can be started immediately at the remote site, even before all the data has been copied over.

VPLEX Metro provides companies with a more effective way of managing their virtual storage environments, by enabling transparent integration with existing applications and infrastructure, and providing the ability to migrate data between remote data centers with no interruption in service.

With VPLEX Metro leveraged in this solution, companies can:

• Easily migrate applications in real time from one site to another with no downtime or disruption, using standard infrastructure tools such as VMware VMotion and Storage VMotion.

• Provide an application-transparent and nondisruptive solution for disaster avoidance and data migration, reducing the operational impact of more traditional approaches, such as tape backup and data replication, from days or weeks to minutes and hours.

• Transparently share and balance resources between geographically-dispersed data centers with standard infrastructure tools.


Introduction

Introduction to this white paper

This white paper begins by briefly describing the technology and components used in the environment. Next, the white paper discusses the common elements that supported this distributed virtualized data center test environment. The document goes on to outline the configuration of the Microsoft SharePoint, SQL, SAP, and Oracle applications used in this solution. The white paper closes by summarizing the testing methodology and validated results.

This white paper includes the following sections:

• Configuration

• Common elements in this distributed virtualized data center test environment

• VMware vSphere

• EMC Symmetrix VMAX

• EMC CLARiiON CX4-480

• VCE Vblock 1

• VPLEX Metro

• VPLEX Metro administration

• Microsoft Office SharePoint Server 2007

• Microsoft SQL Server 2008

• SAP

• Oracle

• Conclusion

• References

Purpose

The purpose of this document is to provide readers with an overall understanding of the VPLEX Metro technology and how it can be used with tools such as VMware VMotion and Storage VMotion to provide effective resource distribution and sharing between data centers across distances of up to 100 km with no downtime or disruption.


Scope

The scope of this white paper is to document the:

• Environment configuration for multiple applications utilizing virtualized storage presented by EMC VPLEX Metro

• Migration from directly-accessed, SAN-attached storage to a virtualized storage environment presented by EMC VPLEX Metro

• Application functionality within a geographically-dispersed VPLEX Metro virtualized storage environment

Audience

This white paper is intended for:

• Field personnel who are tasked with implementing a multi-application virtualized data center utilizing VPLEX Metro as the local and distributed federation platform

• Customers, including IT planners, storage architects, and administrators involved in evaluating, acquiring, managing, operating, or designing an EMC multi-application virtualized data center

• EMC staff and partners, for guidance and the development of proposals

Terminology

The following table defines the terms used in this document.

Term Definition

CNA Converged Network Adapter

COM Communication—identifies inter- and intra-cluster communication links

DR Disaster Recovery

FCoE Fibre Channel over Ethernet

HA High Availability

Metro-Plex Multiple clusters connected within metropolitan area network (MAN) distances—for example, the same building, site, or campus with a maximum distance of 100 km apart

OATS Oracle Application Testing Suite server

OLTP On-line transaction processing

SAP ABAP SAP Advanced Business Application Programming

SAP BI SAP Business Intelligence

SAP CI SAP Central Instance

SAP ERP SAP Enterprise Resource Planning

UCS Cisco Unified Computing System


VCE

Virtual Computing Environment coalition, consisting of Cisco and EMC, with VMware, that represents an unprecedented level of joint collaboration, services, and partner enablement, which “de-risks” the infrastructure virtualization journey to the private cloud.

VM Virtual Machine. A software implementation of a machine that executes programs like a physical machine.

VPLEX Metro

Provides distributed federation within, across and between two clusters (within synchronous distances)

VMDK Virtual Machine Disk format. A VMDK file stores the contents of a virtual machine's hard disk drive. The file can be accessed in the same way as a physical hard disk.


Configuration

Overview

The following section identifies and briefly describes the technology and components used in the environment.

Physical environment

The following diagram illustrates the overall physical architecture of the environment.

[Figure: Overall physical architecture. At Data Center A, an EMC Symmetrix VMAX and an EMC CLARiiON CX4 sit behind EMC VPLEX Metro and present virtualized LUNs to virtual machines for SAP (ERP and BI databases and central instances), SharePoint, SQL Server, and Oracle. At Data Center B, a Vblock 1 with an EMC CLARiiON CX4 sits behind EMC VPLEX Metro and serves the corresponding virtual machines. The sites are connected through Fibre Channel SAN switches and a TCP/IP network, with a separate VPLEX management network. BI: Business Intelligence; CI: Central Instance; ERP: Enterprise Resource Planning]

Note: EMC VPLEX Metro back-end storage is provided by the Vblock above.


Hardware resources

The hardware used to validate the solution is listed in the following table.

Equipment Quantity Configuration

Intel x86-based servers 5 Quad CPU, 96 GB RAM, dual 10 Gb Converged Network Adapters (CNAs)

VCE Vblock 1 1 Cisco Unified Computing System (UCS), Cisco Nexus 6120 switches, EMC CLARiiON CX4

EMC Symmetrix VMAX™ 1 Fibre Channel (FC) connectivity, 450 GB/15k FC drive

EMC CLARiiON® CX4-480 1 FC connectivity, 450 GB/15k FC drive

EMC VPLEX Metro 2 VPLEX Metro storage clusters, dual-engine, 4-director midsize configuration

WAN Emulator 1 1 GbE, 100 km distance

Fibre Channel SAN distance emulator 1 1/2/4 Gb FC, 100 km distance

Software resources

The software used to validate the solution is listed in the following table.

Software Version

VMware vSphere 4.0 U1 Enterprise plus Build 208167

VMware vCenter 4.0 U1 Build 186498

EMC PowerPath®/VE 5.4.1 Build 33

Red Hat Enterprise Linux 5.3

DB2 9.1 for Linux, UNIX, and Windows

Microsoft Windows 2008 R2 (Enterprise Edition)

Microsoft SQL Server 2008

Microsoft Office SharePoint Server 2007 (SP1 and cumulative updates)

Microsoft Visual Studio Test Suite 2008

KnowledgeLake Document Loader 1.1

Microsoft TPCE BenchCraft kit MSTPCE 1.9.0-1018

SAP Enterprise Resource Planning 6.0

SAP Business Warehouse 7.0

Oracle E-Business Suite 12.1.1

Oracle RDBMS 11GR1 11.1.0.7.0


Common elements in this distributed virtualized data center test environment

Introduction to the common elements

The virtualized data center environment described in this white paper was designed and deployed with a shared infrastructure in mind. From server to local and distributed federation to network consolidation, all layers of the environment were shared to create the greatest return on infrastructure investment, while achieving the necessary application requirements for functionality and performance.

Using server virtualization, based on VMware vSphere, Intel x86-based servers were shared across applications and clustered to achieve redundancy and failover capability. VPLEX Metro was utilized to present shared data stores across the physical data center locations, enabling VMotion migration of application virtual machines (VMs) between the physical sites. Physical Site A storage consisted of a Symmetrix VMAX Single Engine (SE) for the SAP environment, and a CLARiiON CX4-480 for the Microsoft and Oracle environments. Vblock 1 was used for the physical Site B data center infrastructure and storage.

Contents

This section describes the common elements in this distributed virtualized data center test environment, as listed below.

• VMware vSphere

• EMC Symmetrix VMAX

• EMC CLARiiON CX4-480

• VCE Vblock 1

• VPLEX Metro

• VPLEX Metro administration


VMware vSphere

VMware vSphere overview

VMware vSphere is the industry’s most reliable platform for data center virtualization of the IT infrastructure. It enables the most scalable and efficient use of the x86 server hardware in a robust, highly-available environment.

VMware ESX Server:

• Abstracts server processor, memory, storage, and networking resources into multiple virtual machines, forming the foundation of the VMware vSphere 4 suite.

• Partitions physical servers into multiple virtual machines. Each virtual machine represents a complete system with processors, memory, networking, storage, and BIOS.

• Shares single server resources across multiple virtual machines and clusters ESX Servers for further sharing of resources.

VMware vSphere configuration

In this solution, VMware vSphere was configured as follows:

• Site A—Microsoft and Oracle application environment

• Site A—SAP application environment

• Site B—Microsoft, Oracle, and SAP application environment

Site A – Microsoft and Oracle application environment

The virtual infrastructure at Site A for Microsoft and Oracle consists of the following enterprise-class servers (two in total) running VMware vSphere 4 Update 1:

Part Description

Memory 128 GB RAM

CPUs 4: 6 core, 2.659 GHz X7460 Intel Xeon processors

SAN and network connections

• 2: 10 Gb Emulex LightPulse LP21000 CNAs for Fibre Channel and Ethernet connectivity

• 2: Broadcom 5708 GbE adapters

High Availability networking

• 2: 1 Gb/s physical connections for the VMware service console

• 2: physical 10 Gb/s connections on a VLAN for virtual machine application connectivity and VMotion

VMDKs Virtual machine disks were used for the virtual machines’ boot LUNs, as well as the application data LUNs


Site A – SAP application environment

The virtual infrastructure at Site A for SAP consists of the following enterprise-class servers (two in total) running VMware vSphere 4 Update 1:

Part Description

Memory 96 GB RAM

CPUs 2: Quad core, 2.792 GHz X5560 Intel Xeon processors

SAN and network connections

• 2: 10 Gb Emulex LightPulse LP21000 PCI FCoE CNAs for Fibre Channel and Ethernet connectivity

• 2: Broadcom 5708 GbE adapters

High Availability networking

• 2: 1 Gb/s physical connections for the VMware service console

• 2: physical 10 Gb/s connections on a VLAN for virtual machine application connectivity and VMotion

VMDKs Virtual machine disks were used for the virtual machines’ boot LUNs, as well as the application data LUNs

Site B – Microsoft, Oracle, and SAP application environment

The virtual infrastructure at Site B for all applications consists of the following enterprise-class Cisco UCS Blade Servers, as part of Vblock 1, running VMware vSphere 4 Update 1:

Part Description

Memory 48 GB RAM

CPUs 2: Quad core, 2.526 GHz E5540 Intel Xeon processors

SAN and network connections

2: Cisco UCS CNA M71KR-E-Emulex FCoE CNAs for Fibre Channel and Ethernet connectivity

High Availability networking

2: Physical 10 Gb/s connections for virtual machine application connectivity, VMotion, and VMware Service Console

VMDKs Virtual machine disks were used for the virtual machines’ boot LUNs, as well as the application data LUNs

The following image shows the Site A and Site B clusters.


EMC Symmetrix VMAX

EMC Symmetrix VMAX overview

The EMC Symmetrix VMAX series provides an extensive offering of new features and functionality for the next era of high-availability virtual data centers. With advanced levels of data protection and replication, the Symmetrix VMAX system is at the forefront of enterprise storage area network (SAN) technology. Additionally, the Symmetrix VMAX array has the speed, capacity, and efficiency to transparently optimize service levels without compromising its ability to deliver performance on demand. These capabilities are of the greatest value for large virtualized server deployments such as VMware virtual data centers.

The Symmetrix VMAX system is EMC’s high-end storage array that is purpose-built to deliver infrastructure services within the next-generation data center. Built for reliability, availability, and scalability, Symmetrix VMAX uses specialized engines, each of which includes two redundant director modules providing parallel access and replicated copies of all critical data.

Symmetrix VMAX’s Enginuity™ operating system provides several advanced features, such as:

• Auto-provisioning Groups for simplification of storage management

• Virtual Provisioning™ for ease of use and improved capacity utilization

• Virtual LUN technology for nondisruptive mobility between storage tiers

EMC Symmetrix VMAX configuration

The SAP application environment deployed in this solution used a Symmetrix VMAX array for the primary storage at Site A. Boot and Data LUNs were provisioned as detailed in the following table.

Note: See the SAP section of this white paper for a detailed breakdown of the LUN allocation by virtual machine.

Capacity Number of LUNs RAID type

500 GB 2 RAID 5 (7+1)

250 GB 6 RAID 5 (7+1)

85 GB 8 RAID 5 (7+1)

65 GB 2 RAID 5 (7+1)

32 GB 4 RAID 1/0

All drives were 400 GB 15k FC drives. LUNs were presented from the Symmetrix VMAX through two front-end adapter (FA) directors for redundancy and throughput. After encapsulation into VPLEX Metro, devices of the same size and type were presented as DR1s.
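As a rough illustration of what the RAID notation above means in usable capacity (ignoring formatting overhead, hot spares, and vendor rounding, and assuming the 400 GB drive size stated above), the parity and mirror overhead can be worked out with a few lines of Python:

# Rough usable-capacity arithmetic for the RAID layouts listed above.
# Ignores formatting overhead and hot spares; a sketch, not a sizing tool.
DRIVE_GB = 400  # 400 GB 15k FC drives, per the configuration above

def raid5_usable(data_drives: int, parity_drives: int) -> int:
    # RAID 5 (data+parity): the parity capacity is spread across the group
    return data_drives * DRIVE_GB

def raid10_usable(total_drives: int) -> int:
    # RAID 1/0: half of the drives hold mirror copies
    return (total_drives // 2) * DRIVE_GB

print("RAID 5 (7+1):", raid5_usable(7, 1), "GB usable per group")      # 2800
print("RAID 5 (4+1):", raid5_usable(4, 1), "GB usable per group")      # 1600
print("RAID 1/0, 4 drives:", raid10_usable(4), "GB usable per group")  # 800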


EMC CLARiiON CX4-480

EMC CLARiiON CX4-480 overview

The EMC CLARiiON CX4 series delivers industry-leading innovation in midrange storage with the fourth-generation CLARiiON CX storage platform. The unique combination of flexible, scalable hardware design and advanced software capabilities enables EMC CLARiiON CX4 series systems, powered by Intel Xeon processors, to meet the growing and diverse needs of today’s midsize and large enterprises. Through innovative technologies like Flash drives, UltraFlex™ technology, and CLARiiON Virtual Provisioning, customers can:

• Decrease costs and energy use

• Optimize availability and virtualization

The EMC CLARiiON CX4 model 480 supports up to 256 highly-available, dual-connected hosts and has the capability to scale up to 480 disk drives for a maximum capacity of 939 TB. Delivering up to twice the performance and scaling capacity as the previous CLARiiON generation, CLARiiON CX4 is the leading midrange storage solution to meet a full range of needs, from departmental applications to data-center-class business-critical systems.

EMC CLARiiON CX4-480 configuration

The Oracle and Microsoft application environments deployed in this solution used a CX4-480 array for the primary storage at Site A. Boot and Data LUNs were provisioned as detailed in the following tables.

Note: Refer to the Oracle, Microsoft Office SharePoint Server 2007, and Microsoft SQL Server 2008 sections of this white paper for a detailed breakdown of the LUN allocation by virtual machine.

SQL/SharePoint

Capacity Number of LUNs RAID type

200 GB 2 RAID 5 (4+1)

150 GB 2 RAID 5 (4+1)

125 GB 4 RAID 5 (4+1)

100 GB 16 RAID 5 (4+1)

75 GB 24 RAID 5 (4+1)

50 GB 3 RAID 5 (4+1)

20 GB 12 RAID 1/0

15 GB 4 RAID 5 (4+1)


Oracle

Capacity Number of LUNs RAID type

500 GB 1 RAID 5 (4+1)

150 GB 2 RAID 1/0

80 GB 4 RAID 5 (4+1)

50 GB 1 RAID 5 (4+1)

All drives were 400 GB 15k FC drives. LUNs were presented from the CLARiiON CX4-480 through four storage processor (SP) ports for multipathing support (for redundancy and throughput). After encapsulation into the VPLEX Metro, devices of the same size and type were presented as DR1 devices.


VCE Vblock 1

VCE Vblock 1 overview

Vblocks are pre-engineered, tested, and validated units of IT infrastructure that have a defined performance, capacity, and availability profile. Vblocks grew out of an idea to simplify IT infrastructure acquisition, deployment, and operations. While Vblocks are tightly defined to meet specific performance and availability bounds, their value lies in a combination of efficiency, control, and choice.

In Vblock 1, each Cisco UCS chassis contains B-200 series blades, six with 48 GB RAM and two with 96 GB RAM. This offers good price and performance and supports memory-intensive applications, such as in-memory databases within the Vblock definition. Within a Vblock 1, there are no hard disk drives in the B-200 series blades as all boot services and storage are provided by the SAN, which in the case of Vblock 1, is a CX4-480 storage array.

VCE Vblock 1 configuration

A Vblock 1 was used for the computing and storage resources at Site B. This allowed for workload balancing and disaster avoidance failover capabilities for the applications deployed in the use case. Using a standard minimum configuration for Vblock 1, the compute resources were provided by Cisco UCS B-Series Blade Servers and the storage resources by the CLARiiON CX4-480. For more information about Vblocks, see the Vblock Infrastructure Packages Reference Architecture.

Note: Presenting Vblock storage through VPLEX Metro may reduce certain Vblock management functionality. Consult your EMC representative for information about the potential impact to your Vblock environment.

Four of the 16 blades in Vblock 1 were used in the testing of this environment, as illustrated in the following image.

Two two-node ESX clusters were created at Site B: one to host the Microsoft and Oracle applications, and one to host the SAP applications. The storage provided by the Vblock was sized to duplicate the primary site environment. Devices were configured as part of the DR1 devices created in VPLEX Metro, paired with the primary site LUNs.


VPLEX Metro

VPLEX Metro overview

VPLEX Metro is a storage area network-based (SAN) block local and distributed federation solution that allows the physical storage provided by traditional storage arrays to be virtualized, accessed, and managed across the boundaries between data centers. This new form of access, called AccessAnywhere™, removes many of the constraints of the physical data center boundaries and its storage arrays. AccessAnywhere storage allows data to be moved, accessed, and mirrored transparently between data centers, effectively allowing storage and applications to work between data centers as though those physical boundaries were not there.

Traditional SAN-based storage access

The following image illustrates traditional SAN-based storage access.

Storage access through a storage virtualization layer

The following image illustrates storage access through a storage virtualization layer.


SAN design for VPLEX Metro

The role of the VPLEX Metro in a SAN environment is both as a target and an initiator. From the host perspective, VPLEX Metro is a target, and from the back-end storage array perspective, VPLEX Metro is an initiator. If an environment is configured so that all LUNs are presented to the hosts through a VPLEX Metro, then SAN zoning can be done so that the hosts are in the same SAN as the VPLEX Metro front-end ports, and the storage arrays are in the same SAN as the VPLEX Metro back-end ports. In an environment where hosts need to access the storage arrays directly as well as access VPLEX Metro LUNs (for example, in a migration situation), the hosts, the VPLEX Metro front end and back end, and the storage arrays all need to be in the same SAN, so that the hosts can see the LUNs from both sources.

VPLEX Metro features for storage usage

VPLEX Metro provides the ability to encapsulate and de-encapsulate existing storage devices while preserving their data. It provides data access and mobility between two VPLEX Metro clusters within synchronous distances. With a unique scale-up and scale-out architecture, VPLEX Metro's advanced data-caching and distributed-cache coherency provide workload resiliency, automatic sharing, balancing, and failover of the storage domains. It enables both local and remote data access with predictable service levels.

Note: Any storage volume that is not a multiple of 4 KB cannot be claimed or encapsulated.

Storage best practice – partition alignment

Storage best practices that apply to directly-accessed storage volumes apply to virtual volumes as well. One important best practice to follow is partition alignment for any x86-based OS platform.

Misaligned partitions can consume resources or cause additional work in a storage array, leading to performance loss. With misaligned partitions, I/O operations to an array cross track or cylinder boundaries and lead to multiple read or write requests to satisfy the I/O operation. This can be avoided by aligning partitions on 32 KB boundaries.
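The 32 KB alignment rule can be checked with simple arithmetic on the partition's starting offset. The following is a minimal sketch, assuming a 512-byte sector size; the sector numbers used here are only examples:

ALIGNMENT = 32 * 1024  # align partitions on 32 KB boundaries, as recommended above

def is_aligned(start_sector: int, sector_size: int = 512) -> bool:
    # True if the partition's starting byte offset falls on a 32 KB boundary
    return (start_sector * sector_size) % ALIGNMENT == 0

def next_aligned_sector(start_sector: int, sector_size: int = 512) -> int:
    # Smallest sector at or after start_sector whose byte offset is 32 KB aligned
    offset = start_sector * sector_size
    rounded_up = -(-offset // ALIGNMENT) * ALIGNMENT
    return rounded_up // sector_size

# A classic misaligned default: a partition starting at sector 63
print(is_aligned(63))           # False (63 * 512 = 32256 bytes)
print(next_aligned_sector(63))  # 64    (64 * 512 = 32768 bytes = 32 KB)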

Distributed mirroring – DR1 device

The distributed mirroring feature of EMC VPLEX Metro-Plex provides the ability to create mirrored virtual volumes, where the mirror legs of the volume are supported by physical storage residing at each site of the Metro-Plex. To the hosts, the DR1 device is a single, logical volume with the same volume identity provided by both clusters. I/O to the device can be issued to either VPLEX Metro cluster concurrently. The two VPLEX Metro clusters use advanced data-caching and distributed-cache coherency to provide workload resiliency, automatic sharing, balancing, and failover of storage domains, and enable both local and remote data access with predictable service levels.


VPLEX Metro back-end zoning

Back-end zoning was configured for throughput and redundancy, with each storage array having multiple front-end adapter (FA) connections in the case of Symmetrix VMAX, or SP connections, in the case of a CLARiiON CX4-480 Vblock, to each VPLEX Metro back-end director. The number of VPLEX Metro ports configured depends on the number of LUNs in use and the amount of data transferred from host to array. Each environment should be sized accordingly. Devices were masked to ensure that only the LUNs to be claimed by VPLEX Metro were seen. Back-end zoning was configured on a Cisco MDS 9500 switch. There were two CLARiiON SP ports per one VPLEX Metro back-end zone. The VPLEX Metro back-end ports and COM ports can be validated using the VPLEX Command Line Interface (VPlexcli). After back-end zoning was completed, it was necessary to rediscover the storage array. The storage volumes can be checked using VPlexcli or the Management Console.
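To make the pairing described above concrete, the sketch below generates zone member lists that put two storage-processor (SP) ports into each VPLEX Metro back-end zone. All port names are hypothetical placeholders; real zones would use the actual WWPNs from the fabric.

from itertools import cycle

# Hypothetical port names; substitute the real WWPNs from the MDS fabric.
vplex_backend_ports = ["VPLEX_A0_BE0", "VPLEX_A0_BE1", "VPLEX_B0_BE0", "VPLEX_B0_BE1"]
clariion_sp_ports = ["SPA_0", "SPA_1", "SPB_0", "SPB_1"]

def build_backend_zones(backend_ports, sp_ports, sp_ports_per_zone=2):
    # Pair each VPLEX back-end port with a rotating set of SP ports,
    # two SP ports per zone as in the configuration described above.
    rotation = cycle(sp_ports)
    zones = []
    for be_port in backend_ports:
        members = [be_port] + [next(rotation) for _ in range(sp_ports_per_zone)]
        zones.append({"name": "Z_" + be_port, "members": members})
    return zones

for zone in build_backend_zones(vplex_backend_ports, clariion_sp_ports):
    print(zone["name"], zone["members"])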

VPLEX Metro front-end zoning

Front-end zoning was configured for throughput and redundancy, with each ESX host having two FC adapters (through CNAs) and each adapter being zoned to multiple VPLEX Metro front-end director ports. The number of VPLEX Metro ports configured depends on the number of LUNs in use and the amount of data transferred from host to array. Each environment should be sized accordingly. Each server in the application clusters was configured identically, with access to all of the same LUNs. The front-end ports can be enabled only after VPLEX Metro metavolumes are created. The metavolumes contain critical system configuration data. For more information about metavolumes, refer to the EMC VPLEX Installation and Setup Guide.

VPLEX Metro WAN connectivity

The WAN configuration was designed for redundancy and throughput. WAN ports from each director were connected to the multilayer director switch (MDS) fabric at each simulated location. A two-port inter-switch link (ISL) was configured on the FC switches, and those connections were passed through a WAN emulator to introduce a latency of 100 km.
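The latency the emulator adds can be estimated from the propagation delay of light in optical fibre, roughly 5 microseconds per kilometre one way. A quick sketch of that arithmetic for the simulated 100 km link:

# Propagation-delay estimate for the simulated 100 km link.
# Light in optical fibre travels at roughly 200,000 km/s (about 5 us per km).
FIBRE_KM_PER_SECOND = 200_000

def one_way_delay_ms(distance_km: float) -> float:
    return distance_km / FIBRE_KM_PER_SECOND * 1000

distance_km = 100  # distance introduced by the WAN emulator above
print(f"one way:    {one_way_delay_ms(distance_km):.2f} ms")      # ~0.50 ms
print(f"round trip: {2 * one_way_delay_ms(distance_km):.2f} ms")  # ~1.00 ms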

Migration to VPLEX Metro using LUN encapsulation – disruptive to host access

One method that can be used to migrate LUNs from a directly-accessed storage array to the VPLEX Metro array is encapsulation. In the encapsulation process, the VPLEX Metro takes ownership of a LUN that it sees from the original array. Once it is encapsulated, the host can no longer see the LUN directly from the original array. The host must be configured, through zoning, to see the LUN from the VPLEX Metro. Encapsulation requires that the virtual machine is removed from the inventory in ESX and that the ESX host does a rescan to see the “new” LUN and virtual machine—this is considered a disruptive migration. From a storage utilization perspective, this method requires the least amount of incremental capacity since the original LUNs are being encapsulated and, therefore, do not require a transit LUN that would take up additional capacity.
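The host-side portion of a disruptive encapsulation (power off, unregister, rescan, re-register, power on) can be scripted against vCenter. The following is a minimal pyVmomi sketch under stated assumptions: the vCenter address, credentials, VM, host, datacenter, and datastore path are placeholders, and the VPLEX-side encapsulation, zoning, and masking steps happen outside this script.

import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVim.task import WaitForTask
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.local", user="administrator",
                  pwd="password", sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()

def find_by_name(vimtype, name):
    # Walk the inventory for the first managed object of this type with this name
    view = content.viewManager.CreateContainerView(content.rootFolder, [vimtype], True)
    try:
        return next(obj for obj in view.view if obj.name == name)
    finally:
        view.DestroyView()

vm = find_by_name(vim.VirtualMachine, "oracle-ebs-db")        # placeholder VM name
esx = find_by_name(vim.HostSystem, "esx-a1.example.local")    # placeholder host name

# Step 1: power off the VM and remove it from the vCenter inventory
WaitForTask(vm.PowerOffVM_Task())
vm.UnregisterVM()

# ... the LUN is encapsulated by VPLEX and presented back to the host here ...

# Step 2: rescan HBAs and VMFS so the host sees the "new" (encapsulated) LUN
esx.configManager.storageSystem.RescanAllHba()
esx.configManager.storageSystem.RescanVmfs()

# Step 3: re-register the VM from the datastore now behind VPLEX and power it on
datacenter = find_by_name(vim.Datacenter, "SiteA")            # placeholder name
task = datacenter.vmFolder.RegisterVM_Task(
    path="[vplex_datastore] oracle-ebs-db/oracle-ebs-db.vmx",  # placeholder path
    asTemplate=False, pool=esx.parent.resourcePool, host=esx)
WaitForTask(task)
WaitForTask(find_by_name(vim.VirtualMachine, "oracle-ebs-db").PowerOnVM_Task())
Disconnect(si)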


Migration to VPLEX Metro using VMware Storage VMotion – nondisruptive to host access

VMware Storage VMotion can be used to nondisruptively migrate from a directly-accessed storage array to a LUN presented through VPLEX Metro. This is accomplished by presenting both the original LUN and the new VPLEX Metro LUNs to the hosts at the same time and then executing VMware Storage VMotion from the original LUN to the VPLEX Metro LUN. Assuming there is no need to revert to the original LUN, that original LUN can then be reclaimed by the storage array and the disk capacity made available for other purposes. From a storage utilization perspective, this method requires additional storage capacity during the migration, since the new LUNs need to be created on the VPLEX Metro prior to executing VMotion from the existing LUN. However, the original LUN can then be destroyed and the capacity added back into the unused pool on the array.
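The same relocation can be driven through the vSphere API. Below is a minimal pyVmomi sketch under stated assumptions: the VM and datastore names are placeholders, and find_by_name and WaitForTask are the helpers shown in the earlier encapsulation sketch.

from pyVmomi import vim
from pyVim.task import WaitForTask

vm = find_by_name(vim.VirtualMachine, "sharepoint-wfe-01")   # placeholder VM name
target_ds = find_by_name(vim.Datastore, "vplex_metro_ds01")  # placeholder datastore name

# Storage VMotion: move the running VM's files to the VPLEX-presented datastore;
# the host is left unchanged, so the application keeps running throughout.
spec = vim.vm.RelocateSpec()
spec.datastore = target_ds
WaitForTask(vm.RelocateVM_Task(spec=spec))

# The original LUN can now be reclaimed on the source array.
print("Storage VMotion to", target_ds.name, "complete")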

Migration to VPLEX Metro DR1 – disruptive to host access

If downtime is not a concern, data migration to a VPLEX Metro DR1 device can be done without the need for an extra transit LUN. The migration procedure is as follows:

Step Action

1 Power off the virtual machine and remove it from the vCenter inventory.

2 Encapsulate the original non-VPLEX Metro LUN.

3 Remove the LUNs from the storage group.

4 Add these LUNs to a VPLEX storage group or storage masking.

5 Rescan the storage arrays.

6 Claim the storage volumes with the Application Consistency option.

7 Create an extent and local device.

8 Create the DR1 devices.

9 Add the newly-encapsulated LUN to the DR1 device and create the virtual volumes over the DR1 device.

10 Assign the virtual volume to the host view.

11 Rescan the ESX host to see the new DR1 device.

12 Add the virtual machine to the inventory and power up the virtual machine.

From a storage utilization perspective, this method requires the least amount of incremental capacity since the original LUNs are being encapsulated and so do not require a transit LUN, which takes up additional capacity during the migration. However, additional capacity is needed for the remote device of the DR1 virtual volume.


Migration from Site A to Site B VPLEX Metro LUN – nondisruptive to host access

In some situations, it may be required to migrate from a VPLEX Metro LUN (non-DR1) at one site in a Metro-Plex to a VPLEX Metro LUN (non-DR1) at the other site in a Metro-Plex. This can be accomplished through the use of a transit DR1 device spanning the Metro-Plex. The procedure is as follows:

Step Action

1 Present both the original VPLEX Metro LUN and the VPLEX Metro transit DR1 device to the hosts at both sites.

2 Use VMware Storage VMotion to migrate from the Site A VPLEX Metro LUN to the transit DR1 device.

3 Use VMware VMotion to migrate the virtual machine to the Site B host.

4 Use VMware Storage VMotion to migrate from the transit DR1 device to the Site B VPLEX Metro local LUN.

From a storage utilization perspective, this method requires additional storage capacity during the migration, since a new LUN needs to be created on the VPLEX Metro prior to migrating with VMware Storage VMotion from the existing LUN. However, the original LUN can then be destroyed and the capacity added back into the unused pool on the array.
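Driven through the vSphere API, the procedure above reduces to three calls: two storage relocations and one host migration. A minimal pyVmomi sketch, with placeholder names and the helpers from the earlier sketches:

from pyVmomi import vim
from pyVim.task import WaitForTask

vm = find_by_name(vim.VirtualMachine, "sap-erp-ci")                      # placeholder
transit_ds = find_by_name(vim.Datastore, "vplex_transit_dr1")            # placeholder
siteb_ds = find_by_name(vim.Datastore, "vplex_siteb_local")              # placeholder
siteb_host = find_by_name(vim.HostSystem, "ucs-b1.siteb.example.local")  # placeholder

# Step 2: Storage VMotion from the Site A VPLEX LUN to the transit DR1 device
WaitForTask(vm.RelocateVM_Task(spec=vim.vm.RelocateSpec(datastore=transit_ds)))

# Step 3: VMotion the running VM to a Site B host
WaitForTask(vm.MigrateVM_Task(
    host=siteb_host,
    pool=siteb_host.parent.resourcePool,
    priority=vim.VirtualMachine.MovePriority.defaultPriority))

# Step 4: Storage VMotion from the transit DR1 device to the Site B local LUN
WaitForTask(vm.RelocateVM_Task(spec=vim.vm.RelocateSpec(datastore=siteb_ds)))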


VPLEX Metro administration

Introduction to VPLEX Metro administration

When bringing an existing storage array into a virtualized storage environment, the options are to: • Encapsulate storage volumes from existing storage arrays that have already been

used by hosts

or • Create a new VPLEX Metro LUN and migrate the existing data to that LUN

From a migration time perspective, encapsulation is much faster (approximately 4-5 times faster in this environment) than migration to a new VPLEX Metro LUN via Storage VMotion. The benefit of Storage VMotion is that the application server experiences no downtime, whereas with the encapsulation option the hosts need to rescan and the virtual machines need to be re-registered, which results in downtime. VPLEX Metro provides an option to encapsulate the existing data using VPlexcli. When application consistency is set (using the --appc flag), the claimed volumes are data-protected and no data is lost.

VPLEX Metro administration procedure

In this solution, administration of VPLEX Metro was done primarily through the Management Console, although the same functionality exists with VPlexcli.

On authenticating to the secure web-based GUI, the user is presented with a set of on-screen configuration options, listed in the order of completion. For more information about each step in the workflow, refer to the EMC VPLEX Management Console online help. The following table summarizes the steps to be taken, from the discovery of the arrays up to the storage being visible to the host.

Step Action

1 Discover available storage: VPLEX Metro automatically discovers storage arrays that are connected to the back-end ports. All arrays connected to each director in the cluster are listed in the Storage Arrays view.

2 Claim storage volumes: Storage volumes must be claimed before they can be used in the cluster (with the exception of the metadata volume, which is created from an unclaimed storage volume). Only after a storage volume is claimed can it be used to create extents, devices, and then Virtual Volumes.

3 Create extents: Create extents for the selected storage volumes and specify the capacity.

4 Create devices from extents: A simple device is created from one extent and uses storage in one cluster only.

5 Create a Virtual Volume: Create a Virtual Volume using the device created in the previous step.

6 Register initiators: When initiators (hosts accessing the storage) are connected directly or through a Fibre Channel fabric, VPLEX Metro automatically discovers them and populates the Initiators view. Once discovered, you must register the initiators with VPLEX Metro before they can be added to a storage view and access storage. Registering an initiator gives a meaningful name to the port's WWN, typically the server's DNS name, to allow you to easily identify the host.

7 Create a storage view: For storage to be visible to a host, first create a storage view and then add VPLEX Metro front-end ports and virtual volumes to the view. Virtual volumes are not visible to the hosts until they are in a storage view with associated ports and initiators. The Create Storage View wizard enables you to create a storage view and add initiators, ports, and virtual volumes to the view. Once all the components are added to the view, it automatically becomes active. When a storage view is active, hosts can see the storage and begin I/O to the virtual volumes. After creating a storage view, you can only add or remove virtual volumes through the GUI. To add or remove ports and initiators, use the CLI. For more information, refer to the EMC VPLEX CLI Guide.

For comprehensive information about VPLEX Metro commands, refer to the EMC VPLEX CLI Guide.
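As a host-side cross-check (not part of the documented workflow), the newly exported virtual volumes can be confirmed from an ESX host with a short pyVmomi rescan, sketched below. The vendor/model filter is an assumption about how VPLEX virtual volumes report themselves; adjust it to what your hosts actually show.

```python
from pyVmomi import vim

def show_vplex_devices(esx_host):
    """Rescan an ESX host and print the EMC-presented SCSI disks it can see."""
    storage = esx_host.configManager.storageSystem
    storage.RescanAllHba()          # pick up volumes just added to the storage view
    storage.RefreshStorageSystem()
    for lun in storage.storageDeviceInfo.scsiLun:
        if isinstance(lun, vim.host.ScsiDisk) and lun.vendor.strip() == "EMC":
            gib = lun.capacity.block * lun.capacity.blockSize / 1024**3
            print(f"{lun.canonicalName}  {lun.model.strip()}  {gib:.0f} GiB")
```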


Microsoft Office SharePoint Server 2007

Microsoft SharePoint Server 2007 overview

This section covers the following topics:

• Microsoft SharePoint Server 2007 configuration

• Validation of the virtualized SharePoint Server 2007 environment

Microsoft SharePoint Server 2007 configuration

Microsoft SharePoint Server 2007 configuration overview

With customers increasingly moving their SharePoint environments into virtualized environments, server farms may be built across multiple sites with complex back-end storage to support. This leads to two challenges:

• How can an existing SharePoint Server move between different data centers without interrupting operations on the farm?

• How can storage maintenance costs be reduced?

The virtualized SharePoint Server 2007 environment overcomes these challenges by building on vSphere 4.0 with VPLEX Metro, which enables disparate storage arrays at multiple locations to be presented as a single, shared array to the SharePoint 2007 farm.

Microsoft SharePoint Server 2007 design considerations

In this SharePoint 2007 environment design, the major configuration highlights include:

• The SharePoint farm shared two of the five ESX servers on one site, with virtualized SQL and Oracle environments.

• Web front-ends (WFEs) were also configured as query servers in order to improve query performance through a balanced load (recommended for enterprise-level SharePoint farms).

• User request load was balanced across all available WFEs by using a context-sensitive network switch.

The following sections define the SharePoint Server 2007 application architecture for the virtualized data center.

Multi-server SharePoint Server 2007 farms use a three-tier web application architecture, as follows:

• Web server tier—coordinates user requests and serves web content.


• Application tier—services specific requests including:

− Excel

− Document conversions

− Central administration

− Content indexing

• Database tier—manages document content, SharePoint farm configuration, and search databases.

Microsoft SharePoint Server 2007 farm virtual machine configurations

The following table outlines the virtual machine configurations of the SharePoint Server 2007 farm.

Configuration Description

Three WFE VMs: This division of resources offers the best search performance and redundancy in a virtualized SharePoint farm. As the WFE and query roles are CPU-intensive, the WFE VMs were allocated four virtual CPUs with 4 GB of memory. The query (Search) volume was configured as a 100 GB virtual disk.

Index Server: The Index Server was configured as the sole indexer for the portal along with a dedicated WFE role. This means that while the index virtual machine is crawling for content, it can use itself as the WFE to crawl. This minimizes network traffic and ensures that SharePoint farm performance does not suffer when a user-addressable WFE is affected by the indexing load. Four virtual CPUs and 6 GB of memory were allocated for the Index Server. The indexing process needs to merge index content, which requires double the disk space; therefore, a 150 GB virtual search disk was allocated.

Application and Excel Servers: Two virtual CPUs and 2 GB of memory were allocated for the Application and Excel Servers, as these roles require fewer resources.

SQL Server: Four virtual CPUs and 16 GB of memory were allocated for the SQL Server virtual machine, as CPU utilization and memory requirements for SQL in a SharePoint farm are high. With more memory allocated to the SQL virtual machine, the SQL Server becomes more effective at caching SharePoint user data, leading to fewer required physical IOPS for storage and better performance.


Virtual machine configuration and resource allocation

The following table details the virtual machine configuration of the SharePoint farm with allocated resources.

Server role                          Quantity   vCPUs   Memory (GB)   Boot disk (GB)   Search disk (GB)
WFE Servers                          3          4       4             40               100
Index Servers                        1          4       6             50               150
Application Servers                  1          2       2             40               Not applicable
Excel Server (Host Central Admin)    1          2       2             40               Not applicable
SQL Server 2008                      1          4       16            40               Not applicable

To summarize, in this virtualized environment, SharePoint 2007 infrastructure resource allocations totaled:

• vCPUs: 24

• Memory: 38 GB

• Boot disk: 290 GB

• Search disk: 450 GB
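These totals follow directly from the per-role table above; the short Python check below simply re-derives them, with the quantities and allocations copied from that table.

```python
# Per-VM allocations from the SharePoint farm table (qty, vCPUs, memory GB,
# boot disk GB, search disk GB).
farm = {
    "WFE":         dict(qty=3, vcpu=4, mem=4,  boot=40, search=100),
    "Index":       dict(qty=1, vcpu=4, mem=6,  boot=50, search=150),
    "Application": dict(qty=1, vcpu=2, mem=2,  boot=40, search=0),
    "Excel/CA":    dict(qty=1, vcpu=2, mem=2,  boot=40, search=0),
    "SQL Server":  dict(qty=1, vcpu=4, mem=16, boot=40, search=0),
}
totals = {k: sum(r["qty"] * r[k] for r in farm.values())
          for k in ("vcpu", "mem", "boot", "search")}
print(totals)   # {'vcpu': 24, 'mem': 38, 'boot': 290, 'search': 450}
```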


Testing approach—SharePoint farm user load profile

KnowledgeLake DocLoaderLite was used to populate SharePoint with random user data. It copied and distributed documents to the SharePoint farm's document library according to a load profile, while Microsoft Visual Studio Team System (VSTS) emulated the client user load.

The following table shows the document distribution in the virtualized SharePoint farm.

Document type   No. of documents   Average doc size (KB)   Percentage
.doc            289056             261.6                   15.79%
.docx           285902             110.3                   15.62%
.gif            90514              76.5                    4.94%
.jpg            71566              95.0                    3.91%
.mpp            287140             240.6                   15.69%
.pptx           269118             199.6                   14.70%
.vsd            262014             485.4                   14.31%
.xlsx           275172             27.0                    15.03%
Total           1830482            187.0                   100.00%

During validation, a Microsoft heavy user load profile was used to determine the maximum user count that the Microsoft SharePoint 2007 server farm could sustain while ensuring that average response times remained within acceptable limits. Microsoft standards state that a heavy user performs 60 requests per hour; that is, a request every 60 seconds (See the following article for additional information on user load guidelines: http://technet.microsoft.com/en-us/library/cc261795.aspx).

The user profiles in this testing consisted of three user operations:

• 80 percent browse

• 10 percent search

• 10 percent modify


Validation of the virtualized SharePoint Server 2007 environment

Test summary

In addition to validating the SharePoint Server 2007 operations before and after encapsulation into the VPLEX Metro cluster, the following sections also validate the cross-site testing of VMotion during the run.

The baseline test is performed first to log the SharePoint 2007 farm base performance. The test then validates the performance impact after the CLARiiON FLARE® LUNs for the SharePoint farm are encapsulated into the VPLEX Metro cluster. VMware VMotion is tested between the local site (Site A) and the remote site (Site B) with up to 100 km distance.

During test validation, Visual Studio Team System (VSTS) continuously generated workloads (for example, browsing the portal and sub-sites, searching for random documents, and replacing random documents with others) against the WFEs. These operations kept the WFE CPU utilization at around 80 percent in each test session.

SharePoint 2007, VPLEX Metro, and VMware performance data were logged for analysis during the test run lifecycle. This data presents the results from VSTS 2008, which generated a continuous workload (Browse/Search/Modify) against the WFEs of the SharePoint 2007 farm, while the SQL and Oracle OLTP workloads ran concurrently in the same VMware vSphere 4.0 data center.

Validation without encapsulation to VPLEX

The following image shows the baseline performance of passed tests per second without encapsulating into the VPLEX LUNs on the SharePoint virtual machines.

With a mixed user profile of 80/10/10, the virtualized SharePoint farm can support a maximum of 107,400 users with 1 percent concurrency, while satisfying Microsoft’s acceptable response time criteria, as shown in the following tables.

User activity as percentages (Browse/Search/Modify)   Acceptable response time (seconds)   Baseline response time (seconds)
80/10/10                                              <3/<3/<5                             2.41/1.79/1.48


Content mix (Browse/Search/Modify)   Requests per second (RPS)   Microsoft user profile   Concurrency (%)   Maximum user capacity
80/10/10                             17.4                        Heavy                    1                 107,400

Validation with VMotion running between local and remote sites

This white paper validates the impact of VMware VMotion across local and remote sites when running the VSTS workload against the SharePoint 2007 farm.

VMotion can be triggered from the VMware vCenter Server. Through VMotion, the WFE, Index, and SQL Server virtual machines can be migrated from Site A to Site B, and vice versa. While running VMotion between sites, the transactions per second fluctuate. This is because, when migrating a virtual machine from Site A to Site B, the relatively light workload on Site B temporarily decreases the farm response time (Browse/Search/Modify) and increases the passed tests per second.

While migrating the virtual machine back to its original host, there may be a temporarily increased workload on that host. Therefore, the response time and passed tests per second may fluctuate during the VMotion migration process and slightly impact the passed tests per second. See the previous table showing the RPS and average farm operation response time for this comparison. However, VMotion does not interrupt the running application and gives the data center the capability to manually reallocate resources across sites, as shown in the following image.

The following tables illustrate the test results when running VMotion between Site A and Site B.

User activity as percentages (Browse/Search/Modify)   Acceptable response time (seconds)   Average response time during VMotion (seconds)
80/10/10                                              <3/<3/<5                             2.38/1.47/1.00


Content mix (Browse/Search/Modify)   Requests per second (RPS)   Microsoft user profile   Concurrency (%)   Maximum user capacity
80/10/10                             18.6                        Heavy                    1                 111,600
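The maximum user capacity figures follow from the measured RPS, the Microsoft heavy-user profile (60 requests per user per hour), and 1 percent concurrency. A minimal sketch of that conversion, using the 18.6 RPS run above, is shown below; small differences against the baseline table come from rounding of the measured RPS.

```python
def max_user_capacity(rps, requests_per_user_per_hour=60, concurrency=0.01):
    """Convert a measured request rate into a supportable user population."""
    concurrent_users = rps * 3600 / requests_per_user_per_hour
    return concurrent_users / concurrency

print(round(max_user_capacity(18.6)))   # 111600, matching the table above
```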

Validation of cross-site VMotion

This white paper validates long-distance cross-site VMotion on virtualized VPLEX Metro LUNs. Cross-site VMotion was validated both without distance latency and with a simulated 100 km distance. The same SharePoint client workload was applied, and VMotion of the SharePoint farm server roles was triggered during the test runs.

VMotion transfers the running state of a virtual machine between the underlying VMware ESX Server systems, so the size of that running state affects the duration of VMotion. A 100 km distance between the two data centers lengthens cross-site VMotion for all server roles, roughly in proportion to the resources allocated to the virtual machine.

For example, the average VMotion duration for the WFEs with distance latency is 34 seconds longer than without latency, while the Index Server, which has a larger memory configuration (6 GB) than the WFEs, needs another 5-6 seconds to finish VMotion. The SharePoint SQL Server, which has the most database activity and the largest memory configuration (16 GB) in the farm, takes roughly two and a half times as long as it does without latency.

The following table illustrates VMotion duration, with and without latency, for the SharePoint Farm Server roles.

SharePoint Farm Server role   VMotion duration without distance latency (seconds)   VMotion duration with 100 km distance (seconds)
WFEs                          39                                                     73
Index Server                  37                                                     78
SPS SQL Server                90                                                     217
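To illustrate why duration scales with the virtual machine's memory, the rough back-of-envelope below models the pre-copy phase only. The link speed, dirty-page factor, and fixed overhead are assumptions for illustration; they are not values measured in this testing, and the model is not the tool used to produce the table above.

```python
def rough_vmotion_seconds(memory_gb, link_gbps=10.0, dirty_factor=1.5, fixed_overhead_s=20):
    """Crude estimate: ship active memory (plus re-copied dirty pages) over the VMotion link."""
    bytes_to_move = memory_gb * 2**30 * dirty_factor
    return fixed_overhead_s + bytes_to_move * 8 / (link_gbps * 1e9)

for name, mem_gb in [("WFE", 4), ("Index Server", 6), ("SQL Server", 16)]:
    print(f"{name:12s} ~{rough_vmotion_seconds(mem_gb):4.0f} s")
```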


Microsoft SQL Server 2008

Microsoft SQL Server 2008 overview

This section covers the following topics:

• Microsoft SQL Server 2008 configuration

• Validation of the virtualized SQL Server 2008 environment

Microsoft SQL Server 2008 configuration

Design considerations

The SQL Server test configuration is based on the following profile:

• Number of SQL users supported: 40,000

• Simulated user workload with 1 percent concurrency rate and zero think time, consistent with Microsoft testing methodologies

• User data: 1 TB

SQL Server test application

The SQL load test tool used in this environment simulates an OLTP workload. It comprises a set of transactional operations designed to exercise system functionalities in a manner representative of a complex OLTP application environment.

OLTP workloads

The OLTP application used to generate the user load in this test environment is based on the TPC Benchmark-E (TPC-E) standard. TPC-E testing is composed of a set of transactions that represent the processing activities. The database schema, data population, transactions, and implementation rules have been designed to be broadly representative of modern OLTP systems. The TPC-E application models the activity of a brokerage firm that:

• Manages customer accounts

• Executes customer trade orders

• Tracks customer activity with financial markets

Key components of SQL Server testing

This benchmark is composed of a set of transactions that are executed against three sets of database tables that represent market data, customer data, and broker data. A fourth set of tables contains generic dimension data such as zip codes.


Partitioning the SQL database

SQL table partitioning is used to segment data into smaller, more manageable sections. Table partitioning can lead to better performance through parallel operations. The performance of large-scale operations across extremely large data sets (for instance many millions of rows) can benefit by performing multiple operations against individual subsets in parallel.

The individual subsets can be moved to several disk drives to effectively reduce I/O contention.

The number of table partitions to allocate depends on:

• Table size

• LUN utilization

The broker and customer file groups for this application are the largest and best candidates for partitioning.

Broker and customer file groups partitioning

The broker and customer file groups are each divided into 11 partitions. Each partition is stored in a VMDK file, each on a separate LUN. The first 10 partitions hold data generated during the initial data population. The eleventh partition holds the new data generated during simulated user activity.

Broker and customer file groups

The following table details the file groups used in the test application.

File group name    Tables                                                         Drive (directory with mount point)
broker_fg1-10      CASH_TRANSACTION, SETTLEMENT, TRADE, TRADE_HISTORY             S:\B\B1-B10
customer_fg1-10    HOLDING, HOLDING_HISTORY                                       S:\C\C1-C4
broker_fg          CHARGE, COMMISSION_RATE, TRADE_TYPE, TRADE_REQUEST, BROKER     S:\B\B0
customer_fg        ACCOUNT_PERMISSION, CUSTOMER, CUSTOMER_ACCOUNT,                S:\C\C0
                   CUSTOMER_TAXRATE, HOLDING_SUMMARY
market_fg          EXCHANGE, INDUSTRY, SECTOR, STATUS_TYPE, COMPANY,              S:\TPCE_ROOT
                   COMPANY_COMPETITOR, DAILY_MARKET, FINANCIAL, LAST_TRADE,
                   NEWS_ITEM, NEWS_XREF, SECURITY, WATCH_ITEM, WATCH_LIST
misc_fg            TAXRATE, ZIP_CODE, ADDRESS                                     S:\TPCE_ROOT
Tempdb             Not applicable                                                 G:\, H:\, I:\, J:\
Transaction Log    Not applicable                                                 T:\


Validation of the virtualized SQL Server 2008 environment

Test summary

The OLTP databases were populated with data using the Microsoft TPC-E benchmark kit, which was also used to simulate user load during the validation testing. The following test scenarios were performed on the SQL Server 2008 OLTP environments:

• Baseline testing was performed to establish a comparison point for subsequent testing.

• The application storage was encapsulated into a DR1 configuration using the method described in “VPLEX Metro > Migration to VPLEX Metro DR1 – disruptive to host access”.

• Application availability testing was performed while user workload was generated for the SQL Server OLTP application, SharePoint Server 2007, and the Oracle applications.

Validation prior to encapsulation

The SQL Server OLTP application was running as two instances, one at each site. Each SQL Server instance ran in a guest virtual machine on a VMware ESX server. Both instances used CLARiiON storage attached to the VMware ESX servers, and both performed in a similar manner.

The SQL Server OLTP application displayed a skewed data access pattern: some of the LUNs were used heavily and others lightly. The following table shows the performance of the five busiest storage devices during the validation testing.

LUN          IOPS    Latency
Broker\B10   569.9   9 ms
Broker\B2    712.6   8 ms
Broker\B6    719.3   6 ms
Broker\B1    714.7   5 ms
Broker\B8    714.7   5 ms

Validation after encapsulation

The storage for the SQL Servers was encapsulated and virtualized. The storage was converted to a DR1 configuration across the two sites and made available to the SQL Servers through VPLEX Metro. The OLTP application was then restarted, with the same load as before, to monitor the impact to storage performance. The following table represents the performance of the five busiest storage devices during the validation testing.


             Before encapsulation      After encapsulation
LUN          IOPS     Latency          IOPS     Latency
Broker\B0    569.9    9 ms             550.0    10 ms
Broker\B1    712.6    8 ms             683.5    9 ms
Broker\B2    719.3    6 ms             689.5    7 ms
Broker\B3    714.7    5 ms             712.5    5 ms
Broker\B4    714.7    5 ms             712.5    5 ms

The busiest LUNs showed a slight increase in average latency and a small decrease in IOPS.
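Quantifying that observation from the before/after table above, a small sketch (values copied from the table; the percentages are derived, not separately measured):

```python
rows = {            # LUN: (iops_before, latency_before_ms, iops_after, latency_after_ms)
    "Broker\\B0": (569.9, 9, 550.0, 10),
    "Broker\\B1": (712.6, 8, 683.5, 9),
    "Broker\\B2": (719.3, 6, 689.5, 7),
    "Broker\\B3": (714.7, 5, 712.5, 5),
    "Broker\\B4": (714.7, 5, 712.5, 5),
}
for lun, (ib, lb, ia, la) in rows.items():
    # IOPS change in percent, latency change in whole milliseconds.
    print(f"{lun}: IOPS {100 * (ia - ib) / ib:+.1f}%, latency {la - lb:+d} ms")
```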

Validation of cross-site VMotion

Once the storage is accessed through the VPLEX Metro, long-distance VMotion is validated. Each SQL Server OLTP application’s storage is configured as a DR1 device. A copy of the data resides in both sites of the VPLEX Metro.

In this validation test, the guest virtual machines hosting the SQL Server application were migrated through VMotion between sites while under load. The workload was generated from the other applications as well. Since a copy of the data was available in both sites, the times listed in the following table represent the duration for the running application utilizing 16 GB of memory to be transferred from one VMware ESX server to another at the opposite site. The OLTP application is available while the transfer occurs.

Distance simulation   VMotion duration
0 km                  3 minutes 57 seconds
100 km                5 minutes 17 seconds


SAP

SAP overview

This section covers the following topics:

• SAP configuration

• Validation of the virtualized SAP environment

SAP configuration

SAP ERP 6.0

SAP ECC6 (ERP 2005) is a fully integrated solution that fulfills the core business needs of midsize and large organizations across all industries and market sectors. Powered by the SAP NetWeaver technology platform, SAP ERP 6.0 helps enterprises to perform financial analysis, human capital management, procurement and logistics, product development and manufacturing, and sales and service, supported by functionality for analysis, corporate services, and end-user service delivery. Together with SAP NetWeaver and a repository of enterprise services, SAP ERP 6.0 can serve as a solid business process platform that supports continued growth, innovation, and operational excellence.

SAP BW 7.0

SAP NetWeaver Business Warehouse (SAP NetWeaver BW), also known as SAP BI, is a Business Intelligence (BI), analytical reporting, and data warehousing solution. It is part of the SAP NetWeaver technology stack and is tightly integrated with other SAP applications in an enterprise SAP landscape to support business operations.

Business scenario

VPLEX Metro enables virtualized storage for applications to access LUNs between data center sites and provides the ability to move virtual machines between data centers. This optimizes data center resources and results in zero downtime for data center relocation and server maintenance.

Since SAP applications and modules can be distributed amongst several virtual servers, and normal operations involve extensive communication between them, it is critical to ensure that communication is not disrupted when individual virtual machines are moved from site to site.

To demonstrate the ability of VPLEX Metro to enable the nondisruptive migration of virtual machines from one site to another, an SAP BW extraction process was configured to transfer data from SAP ERP to SAP BI. During the extraction process, the SAP ERP instances, including DB and CI, were moved between sites. Tests show that the extraction process was not interrupted during the move and that the move had little impact on the overall performance of the operation.


Design considerations

The following image shows the configuration of the logical landscape for SAP.

The following table describes the virtual machine resource allocation.

Virtual machine   vCPUs   Memory (GB)
SAP ERP DB        4       16
SAP ERP CI        4       16
SAP BI DB         4       16
SAP BI CI         4       16

The following design considerations were included in the SAP setup.

• The SAP landscape consists of ERP and BI applications that were configured as a distributed environment and deployed on two clusters, one at each site, separated by 100 km. Four virtual machines hosting ERP DB and CI, and BI DB and CI instances were run on two ESX servers in each data center.

• SAP ERP and SAP BI were installed as distributed systems, which means the CI and DB were running on different virtual machines. Note that /sapmnt/<SID> and /usr/sap/trans should be globally accessible. In this case, both directories were configured on NFS and mounted when the virtual machines started.

• All SAP hosts were configured to use a shared VMFS volume under VPLEX Metro management. The disks of the VMs on the VMFS volumes were accessible by both clusters. The VMs at each data center were configured into a cluster with a shared pool of resources.

The following table details the mount points used.

Mount point              Accessibility   RAID type   Capacity

SAP ERP <RED>
/sapmnt/RED              global          RAID-5      10 GB
/usr/sap/RED             local           RAID-5      8 GB
/usr/sap/trans           global          RAID-5      10 GB
/db2/RED                 local           RAID-5      10 GB
/db2/RED/sapdata1        local           RAID-5      82 GB
/db2/RED/sapdata2        local           RAID-5      82 GB
/db2/RED/sapdata3        local           RAID-5      82 GB
/db2/RED/sapdata4        local           RAID-5      82 GB
/db2/RED/saptemp1        local           RAID-5      1 GB
/db2/RED/log_dir         local           RAID-1/0    15 GB
/db2/RED/log_archive     local           RAID-5      15 GB

SAP BI <RBD>
/sapmnt/RBD              global          RAID-5      10 GB
/usr/sap/RBD             local           RAID-5      8 GB
/usr/sap/trans           global          RAID-5      10 GB
/db2/RBD                 local           RAID-5      10 GB
/db2/RBD/sapdata1        local           RAID-5      82 GB
/db2/RBD/sapdata2        local           RAID-5      82 GB
/db2/RBD/sapdata3        local           RAID-5      82 GB
/db2/RBD/sapdata4        local           RAID-5      82 GB
/db2/RBD/saptemp1        local           RAID-5      2 GB
/db2/RBD/log_dir         local           RAID-1/0    15 GB
/db2/RBD/log_archive     local           RAID-5      40 GB


Validation of the virtualized SAP environment

Test objectives

The test objectives were to:

• Validate the nondisruptive movement of SAP DB and CI instances across data centers, enabled by VMotion and VPLEX Metro

• Validate SAP access to virtual storage across data centers

Test scenario

BW extraction was chosen as the test scenario. Approximately 15.8 million records from the ERP system were extracted into the persistent staging area (PSA) of a data source in the BI system. The intention of this scenario was to maintain an active connection between BI and ERP during the extraction, so that business continuity and a federated solution landscape under VMotion could be verified by a successful extraction process.

Note: The systems were not optimized for the test scenario, as SAP system performance was not the key objective in this scenario.

Test procedure

The test was conducted as described in the following table.

Step Action

1 A BW extraction was initiated while SAP BI instances (DB and CI) resided in one data center and SAP ERP instances (DB and CI) were in the other data center.

2 A BW extraction was initiated. During the extraction process, VMotion migrations were performed sequentially on the ERP DB instance and then the ERP CI instance.

3 A BW extraction was initiated while the SAP ERP instances (DB and CI) and SAP BI instances (DB and CI) resided in the same data center.


Test results

The detailed test statistics are shown in the following table.

Scenario                                                   Records transferred   Extraction duration (MM:SS)   VMotion duration (MM:SS)
SAP ERP and SAP BI are distributed in two data centers     15,800,009            7:02                          N/A
SAP ERP migrates to the data center where SAP BI resides   15,800,009            7:35                          01:53* (SAP ERP DB), 00:52 (SAP ERP CI)
SAP ERP and SAP BI are both in the same data center        15,800,009            7:02                          N/A

* Because SAP ERP DB and SAP ERP CI are configured with different data stores, the VMotion time differs between these two instances, depending on the available network bandwidth and FC connectivity.
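A small sketch using the figures in the table puts the VMotion overhead in perspective (durations and record count copied from the table; the throughput and percentage values are derived, not measured separately):

```python
records = 15_800_009
runs = {                                   # extraction duration in seconds
    "distributed (baseline)":    7 * 60 + 2,
    "with VMotion of ERP DB+CI": 7 * 60 + 35,
    "co-located":                7 * 60 + 2,
}
base = runs["distributed (baseline)"]
for name, secs in runs.items():
    print(f"{name}: {records / secs:,.0f} records/s, "
          f"{100 * (secs - base) / base:+.1f}% vs baseline")
```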

The validation test demonstrates that:

• Connections between two interdependent SAP applications in an SAP-federated landscape are uninterrupted during and after the completion of a VMotion operation facilitated by VPLEX Metro.

• SAP applications can access VPLEX Metro managed storage across the boundaries between data centers.

• The VMotion operation has minimal impact on an existing process.


Oracle

Oracle overview

Oracle E-Business Suite 12.1 provides organizations of all sizes, across all industries and regions, with a global business foundation that reduces costs and increases productivity through a portfolio of rapid-value solutions, integrated business processes, and industry-focused solutions.

This section covers the following topics:

• Oracle configuration

• Validation of the virtualized Oracle environment

Oracle configuration

Configuration of the Oracle E-Business Suite environment

The following image illustrates the Oracle configuration in this solution.


Design considerations

The Oracle E-Business Suite installation focuses on providing a basis for business continuity. It does not address all aspects of high availability, but does highlight the benefits of virtualization.

In particular, the following single points of failure need to be addressed in a fully high-availability, production environment:

• The single-instance relational database management system (RDBMS) would be replaced by a multi-node Real Application Clusters (RAC) database.

• The NFS server would be replaced by a multi-node cluster.

• The software load balancing provided by Apache would be replaced by a clustered pair of hardware load balancers.

The following table details the configuration of the Oracle E-Business Suite Infrastructure Server used in this solution.

Part                            Description
Operating system                Red Hat Enterprise Linux 5 (64-bit) release 5.3
Kernel                          2.6.18-128.el5 #1 SMP
CPU                             2 vCPUs
Memory                          4096 MB
Disk configuration              • Root: 80 GB virtual disk
                                • 2 x 150 GB mapped raw LUNs, combined to form the 300 GB /u01/apps file system, which is shared out to all other Oracle E-Business Suite servers
Apache Server (load balancer)   Apache/2.2.3 (Red Hat), http://G2SVOEBSINFRA01.g2sv.emc.com/


Oracle E-Business Suite Database Server

The following table details the configuration of the virtual machine holding the Oracle E-Business Suite Database Server in this solution.

Part                                     Description
Operating system                         Red Hat Enterprise Linux 5 (64-bit) release 5.3
Kernel                                   2.6.18-128.el5 #1 SMP
CPU                                      2 vCPUs
Memory                                   4096 MB
Disk configuration                       • Root: 80 GB virtual disk
                                         • Oracle HOME and DB files: 500 GB, /u01
                                         • Oracle REDO 1: 50 GB, /u02
                                         • Oracle REDO 2: 50 GB, /u03
                                         • Shared APPL_TOP: 300 GB NFS mount from G2SVOEBSINFRA01.g2sv.emc.com:/u01/apps as /u01/apps
Oracle database version                  Oracle Database 11g Enterprise Edition Release 11.1.0.7.0, 64-bit production with the partitioning, OLAP, data mining, and real application testing options
Oracle Applications version              12.1.1
Oracle E-Business Active Tier            • Database
components                               • Concurrent processing
                                         • Forms: as this is a DB node, these are not included in the load balancer, so are not accessible by end users
                                         • Web: as this is a DB node, this is not included in the load balancer, so is not accessible by end users
IAS version                              10.1.3.4.0

From the standard installation of the Oracle E-Business Suite Vision Demo database to a single mount point (/u01), the only changes made after installation were to increase the number of sessions (4,000) and processes (2,000). The online REDO logs were also moved to /u02 and /u03.

Nine log groups were created with two members each, one member on /u02 and the other on /u03. Each log file created was 300 MB in size.


Oracle E-Business Suite Application Servers 1 and 2

The following table details the configuration of the virtual machines for the Oracle E-Business Suite Application Servers 1 and 2 in this solution.

Part                            Description
Operating system                Red Hat Enterprise Linux 5 (64-bit) release 5.3
Kernel                          2.6.18-128.el5 #1 SMP
CPU                             2 vCPUs
Memory                          4096 MB
Disk configuration              • Root: 80 GB virtual disk
                                • Shared APPL_TOP: 300 GB NFS mount from G2SVOEBSINFRA01.g2sv.emc.com:/u01/apps as /u01/apps
Oracle Applications version     12.1.1
Oracle E-Business Active Tier   • Forms
components                      • Web
IAS version                     10.1.3.4.0

Oracle E-Business Suite Infrastructure Server

This server provides the shared APPL_TOP file system, via NFS, to each of the other Oracle E-Business Suite servers. It also performs the function of a software-based network load balancer.


Validation of the virtualized Oracle environment

Tuning and baseline tests

The following tests were completed:

• Availability tests

• Tuning tests

Availability tests

An Oracle Application Testing Suite (OATS) server (version 9.01.0165) was installed and configured on a virtual machine running outside of the test environment, with Microsoft Windows Server 2003 R2 Enterprise Edition Service Pack 2, two virtual CPUs, and 4 GB of RAM.

Using OpenScript, a script was recorded that logged in to the Oracle E-Business Suite, queried the General Ledger journals, and reviewed a journal using T-Accounts. The script then logged off.

The script was amended so that it would repeat the General Ledger journal query and review, performing the login only at startup and the logout only at the finish. Two scripts were created: one for Oracle Application Server mid-tier 1, and another for Oracle Application Server mid-tier 2.

Using Oracle Load Testing for Web Applications, each of the two test scripts ramped up 10 virtual users. This provided a load of 20 users, split evenly with 10 users per mid-tier.

Rather than test performance, the focus of this load was to ensure that the user experience remained the same while the transitions took place, using VMware VMotion and Storage VMotion.

The test was deemed successful if all the virtual users continued without error during the transitions.

Tuning tests

To baseline the original configuration before performing any changes, a long-running concurrent program was selected to perform extensive database I/O. The concurrent program Gather Schema Statistics, which runs on the database virtual machine, is an excellent candidate to place this level of demand on the test infrastructure, and it was run for the entire database.

This concurrent program was submitted from the command line with the following:

$FND_TOP/bin/CONCSUB Apps/Apps SYSADMIN 'System Administrator' SYSADMIN WAIT=N CONCURRENT FND FNDGSCST ALL 10 '""' NOBACKUP '""' LASTRUN GATHER '""' Y

The duration to complete the concurrent program was recorded as the baseline, which was then used as a comparison following each test.

During the running of the concurrent program, the disk, CPU, and network were also charted using the vSphere Client tool.


Baseline test

A baseline Oracle Application Testing Suite (Load Testing for Web Applications) run of 20 virtual users was performed. This produced a steady response time of 1-3 seconds for the hour of the baseline test, as shown in the following image.

Encapsulate RDM (Raw Device Mapping) to vStorage

After encapsulating the volumes into VPLEX Metro, the baseline Oracle Application Testing Suite (Load Testing for Web Applications) run of 20 virtual users was repeated. The response the virtual users experienced remained, almost exclusively, within the 1-3 second range, and the average response time dropped to 1.43 seconds. The response time is illustrated in the following image.


VMotion migration test

The next test performed was to migrate all the virtual machines from Site A to Site B. This was done about 25 minutes after starting the 20 virtual users on OATS. The impact on the virtual users was imperceptible both during and after migration. Each migration took less than 5 minutes, between 25 minutes and 40 minutes into the test cycle. The response time is illustrated in the following image; the highlighted area indicates the period of the VMotion migration. The average response time for the full test was 1.56 seconds. The average response time during the migration period was 1.62 seconds.

The migration times are detailed in the following table.

Virtual machine   Start VMotion   VMotion complete   Duration (mm:ss)
APPS-01           05:28:45        05:32:21           03:36
APPS-02           05:32:21        05:35:59           03:38
INFRA01           05:35:59        05:39:37           03:38
DB01              05:39:37        05:43:29           03:52


100 km distance simulation for FC

The next set of tests simulated the two VPLEX Metro clusters separated by 100 km. The baseline Oracle Application Testing Suite (Load Testing for Web Applications) run of 20 virtual users was repeated. The test consistently gave a response time of 1-3 seconds for the hour of the baseline test, and an average response time of 1.48 seconds, as shown in the following image.

After completion of the baseline test, all the virtual machines were migrated from Site A to Site B using VMotion, while under a load from Oracle Application Testing Suite – Load testing for Web applications with 20 virtual users. The test consistently gave a response time of 1-3 seconds for the hour of the test. The average response time for the full test was 1.59 seconds. The average response time during the migration period was 1.83 seconds, as shown in the following image. The highlighted area indicates the period of the VMotion migration.

The migration times are detailed in the following table.


Virtual machine   Start VMotion   VMotion complete   Duration (mm:ss)
DB-01             07:33:55        07:38:24           04:29
APPS-01           07:38:39        07:42:32           03:53
INFRA-01          07:42:32        07:46:30           03:58
APPS-02           07:46:30        07:50:32           04:02

As expected, the additional 1 ms of network latency slightly increased the time taken to migrate each virtual machine using VMotion. During the migration period, the average response time of transactions also increased but still remained well within acceptable limits.

Batch process test

Before making any changes in the environment, a baseline was taken to see how long it took to gather schema statistics for all the schemas in the database. This test ran from 13:39:15 until 19:26:10, a total of 5 hours 46 minutes and 55 seconds.

With the storage encapsulated, and with the virtual machines migrated from Site A to Site B during the run, the test ran for a total of 4 hours 52 minutes and 52 seconds. The batch run times are illustrated in the following table.

Test name                  Time started   Time finished   Duration   VMotion duration
Baseline                   13:39:15       19:26:10        5:46:55    N/A
After encapsulation        12:07:33       16:24:02        4:16:29    N/A
During standard VMotion    10:46:43       15:39:35        4:52:52    03:28
100 km baseline            18:22:05       23:46:17        5:24:12    N/A
100 km latency, VMotion    08:23:47       13:49:55        5:26:08    04:29

No failures occurred in the batch process during or after the live transition from Site A to Site B, and the transition was transparent to the batch process.

The batch process ran on the same virtual machine as the database, so network latency was not considered a factor in its normal running; therefore, the baseline batch process was not retested for the effects of network latency. The migration time of 4 minutes and 29 seconds confirmed that VMotion takes longer with added network latency while the batch is running.


Conclusion

Summary

To meet the business challenges presented by today's on-demand 24x7 world, data must be highly available—in the right place, at the right time, and at the right cost to the enterprise. This solution demonstrates the new virtual storage capabilities of VPLEX Metro in a virtualized application environment incorporating VMware vSphere 4, SAP, Microsoft SharePoint Server 2007, SQL Server 2008, and Oracle E-Business Suite Release 12.

With EMC VPLEX Metro, companies can manage their virtual storage environments more effectively through:

• Transparent integration with existing applications and infrastructure.

• The ability to migrate data between remote data centers with no disruption in service.

• Increased protection in the event of unplanned application outages.

• The ability to migrate data stores across storage arrays nondisruptively for maintenance and technology refresh operations.

Findings

This solution validated the effectiveness of VPLEX Metro for presenting LUNs to ESX servers spanning multiple data center locations separated by 100 km to enable workload migration, using VMware's VMotion technology. As detailed in the application sections, virtual machine migration times were all well within acceptable ranges, and in all cases allowed for continuous user access during migration.

A distributed mirrored volume was used to place the same data at both locations and maintain cache coherency. Testing validated that it worked well within expected tolerances at 100 km.

Testing proved that a live transfer of virtual machines from Site A to Site B can be achieved quickly with no perceptible impact on end users.

The capabilities of VPLEX Metro demonstrated in this testing highlight its potential to enable true dynamic workload balancing and migration across metropolitan data centers, to support operational and business requirements. VPLEX Metro augments the flexibility introduced into a server infrastructure by VMware vSphere with storage flexibility, to provide a truly scalable, dynamic, virtual data center.

Next steps

To learn more about this and other solutions, contact an EMC representative or visit www.emc.com.


References

White papers

For additional information, see the following white paper.

• EMC Virtual Infrastructure for Microsoft Applications—Data Center Solution Enabled by EMC Symmetrix VMAX and VMware vSphere 4 - Applied Technology

Product documentation

For additional information, see the following product documents.

• EMC Support Matrix

• EMC VPLEX CLI Guide

• EMC VPLEX Installation and Setup Guide

• EMC VPLEX Management Console online help

Other documentation

For additional information, see the following documents.

• Vblock Infrastructure Packages Reference Architecture

• Master Guide: SAP ERP 6.0 - Support Release 3

• Installation Guide SAP ERP 6.0 - EHP4 Ready ABAP on Linux: IBM DB2 for Linux, UNIX, and Windows Based on SAP NetWeaver 7.0 Including Enhancement Package 1

• Using Load-Balancers with Oracle E-Business Suite Release 12

For more information on VMware VMotion and CPU compatibility, see the following document:

• VMware White Paper: Best Practice Guidelines for SAP Solutions on VMware Infrastructure