EMC® VPLEX™ GeoSynchrony® 5.1
Product Guide
P/N 300-013-921-01

EMC Corporation
Corporate Headquarters: Hopkinton, MA 01748-9103
1-508-435-1000
www.EMC.com

Copyright © 2012 EMC Corporation. All rights reserved.

EMC believes the information in this publication is accurate as of its publication date. The information is subject to change without notice.

THE INFORMATION IN THIS PUBLICATION IS PROVIDED “AS IS.” EMC CORPORATION MAKES NO REPRESENTATIONS OR WARRANTIES OF ANY KIND WITH RESPECT TO THE INFORMATION IN THIS PUBLICATION, AND SPECIFICALLY DISCLAIMS IMPLIED WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE.

Use, copying, and distribution of any EMC software described in this publication requires an applicable software license.

For the most up-to-date regulatory document for your product line, go to the Technical Documentation and Advisories section on EMC Powerlink®.

For the most up-to-date listing of EMC product names, see EMC Corporation Trademarks on EMC.com.

All other trademarks used herein are the property of their respective owners.


Contents

Chapter 1 Introducing VPLEX
    VPLEX overview
    Mobility
    Availability
    Collaboration
    VPLEX product offerings
    Robust high availability with VPLEX Witness
    Upgrade paths
    VPLEX management interfaces
    New features in this release

Chapter 2 VPLEX VS2 Hardware Overview
    System components
    The VPLEX engine
    The VPLEX director
    VPLEX cluster architecture
    VPLEX power supply modules
    Power and Environmental monitoring
    VPLEX component failures

Chapter 3 VPLEX Software
    GeoSynchrony
    Management of VPLEX
    Provisioning
    Data mobility
    Mirroring
    Consistency groups
    Cache vaulting

Chapter 4 System Integrity and Resiliency
    Overview
    Cluster
    Path redundancy
    High Availability through VPLEX Witness
    Leveraging ALUA
    Recovery
    Performance monitoring features
    VPLEX security features

Chapter 5 VPLEX Use Cases
    Technology refresh
    Data mobility
    Redundancy with RecoverPoint
    Distributed data collaboration
    VPLEX Metro HA in a campus


Figures

1 High availability infrastructure example
2 Distributed data collaboration example
3 VPLEX offerings
4 Architecture highlights
5 High level VPLEX Witness architecture
6 Quad-engine VS2 VPLEX cluster
7 Dual-engine VS2 VPLEX cluster
8 Single-engine VS2 VPLEX cluster
9 Engine, rear view
10 VPLEX cluster independent power zones
11 Local mirrored volumes
12 Using the GUI to claim storage
13 Local consistency group with global visibility
14 Distributed devices
15 Data mobility
16 Port redundancy
17 Director redundancy
18 Recommended fabric assignments for front-end and back-end ports
19 Engine redundancy
20 Site redundancy
21 VPLEX failure recovery scenarios in VPLEX Metro configurations
22 Failures in the presence of VPLEX Witness
23 Implicit ALUA
24 Explicit ALUA
25 Traditional view of storage arrays
26 VPLEX virtualization layer
27 VPLEX technology refresh
28 RecoverPoint architecture
29 RecoverPoint configurations
30 VPLEX Local and RecoverPoint CDP
31 VPLEX Local and RecoverPoint CLR - remote site is independent VPLEX cluster
32 VPLEX Local and RecoverPoint CLR - remote site is array-based splitter
33 VPLEX Metro and RecoverPoint CDP
34 VPLEX Metro and RecoverPoint CLR - remote site is independent VPLEX cluster
35 VPLEX Metro and RecoverPoint CLR/CRR - remote site is array-based splitter
36 Shared VPLEX splitter
37 Shared RecoverPoint RPA cluster
38 Replication with VPLEX Local and CLARiiON
39 Replication with VPLEX Metro and CLARiiON
40 Support for Site Recovery Manager
41 Data shared with global visibility
42 Asynchronous consistency group for distributed data collaboration
43 Metro HA Cross Connect solution for VMware
44 VMware Metro HA without Cross Connect
45 VMware Metro HA with Cross-Connect
46 VPLEX Metro HA failure handling
47 VPLEX VS1 hardware example: Single-engine configuration
48 VPLEX VS1 hardware example: Dual-engine configuration
49 VPLEX VS1 hardware example: Quad-engine configuration
50 VPLEX VS1 Engine components
51 VPLEX VS2 hardware example: Single-engine cluster
52 VPLEX VS2 hardware example: Dual-engine cluster
53 VPLEX VS2 hardware example: Quad-engine cluster
54 VPLEX VS2 engine modules (front view)
55 Component IP addresses in Cluster 1
56 Component IP addresses in VPLEX Metro or VPLEX Geo Cluster 2
57 Component IP addresses in Cluster 1
58 Component IP addresses in VPLEX Metro or VPLEX Geo Cluster 2
59 Ethernet cabling in a VPLEX VS1 quad-engine configuration
60 Serial cabling in a VPLEX VS1 quad-engine configuration
61 Fibre Channel cabling in a VPLEX VS1 quad-engine configuration
62 AC power cabling in a VPLEX VS1 quad-engine configuration
63 Ethernet cabling in a VPLEX VS1 dual-engine configuration
64 Serial cabling in a VPLEX VS1 dual-engine configuration
65 Fibre Channel cabling in a VPLEX VS1 dual-engine configuration
66 AC power cabling in a VPLEX VS1 dual-engine configuration
67 Ethernet cabling in a VPLEX VS1 single-engine configuration
68 Serial cabling in a VPLEX VS1 single-engine configuration
69 Fibre Channel cabling in a VPLEX VS1 single-engine configuration
70 AC power cabling in a VPLEX VS1 single-engine configuration
71 Fibre Channel WAN COM connections on VS1 VPLEX hardware
72 IP WAN COM connections on VS1 hardware


Tables

1 Document Change History
1 Overview of VPLEX features and benefits
2 Hardware components
3 AccessAnywhere capabilities
4 Provisioning methods
5 Types of data mobility operations
6 How VPLEX Metro HA recovers from failure


Preface

As part of an effort to improve and enhance the performance and capabilities of its product line, EMC® from time to time releases revisions of its hardware and software. Therefore, some functions described in this document may not be supported by all revisions of the software or hardware currently in use. Your product release notes provide the most up-to-date information on product features.

If a product does not function properly or does not function as described in this document, please contact your EMC representative.

About this guide

This document provides a high-level description of the VPLEX™ product and GeoSynchrony™ 5.1 features.

Audience

This document is part of the VPLEX system documentation set and introduces the VPLEX product and its features. The document provides information for customers and prospective customers to understand VPLEX and how it supports their data storage strategies.

Related documentation

Related documentation (available on EMC Powerlink®) includes:

◆ EMC VPLEX with GeoSynchrony 5.1 and Point Releases Release Notes

◆ Implementation and Planning Best Practices for EMC VPLEX Technical Notes

◆ EMC VPLEX Security Configuration Guide

◆ EMC VPLEX Site Preparation Guide

◆ EMC Best Practices Guide for AC Power Connections in Two-PDP Bays

◆ EMC VPLEX Hardware Installation Guide

◆ EMC VPLEX Configuration Worksheet

◆ EMC VPLEX Configuration Guide


◆ EMC VPLEX Administration Guide

◆ EMC VPLEX CLI Guide

◆ VPLEX Procedure Generator

The VPLEX GUI also provides online help.

For additional information on all VPLEX publications, contact the EMC Sales Representative or refer to the EMC Powerlink website at:

http://powerlink.EMC.com

Table 1 Document Change History

Release 5.1 (changes since previous revision):
• Chapter 1: “New features in this release”
• Chapter 2: Added reference to the new VS1 hardware description appendix; updates to “Power failures that cause vault”
• Chapter 3: GUI help renamed to EMC Unisphere for VPLEX online help
• Chapter 5: New use case, “Redundancy with RecoverPoint”
• Appendix: New appendix, “VS1 Hardware Description,” describing the VS1 hardware options

Conventions used in this guide

EMC uses the following conventions for special notices.

Note: A note presents information that is important, but not hazard related.

Typographical conventions

EMC uses the following type style conventions in this document:

Normal In running text:• Interface elements (for example button names, dialog box names) outside of

procedures• Items that user selects outside of procedures• Names of resources, attributes, pools, Boolean expressions, buttons, DQL

statements, keywords, clauses, environment variables, filenames, functions, menunames, utilities

• URLs, pathnames, filenames, directory names, computer names, links, groups,service keys, file systems, environment variables, notifications

Bold In procedures:• Names of dialog boxes, buttons, icons, menus, fields• Selections from the user interface, including menu items and field entries• Key names• Window namesIn running text:• Command names, daemons, options, programs, processes, notifications, system

calls, man pages, services, applications, utilities, kernels

Italic Used for:• Full publications titles referenced in text• Unique word usage in text

Courier Used for:• System output• Filenames,• Complete paths• Command-line entries• URLs

Courier bold Used for:• User entry• Options in command-line syntax

Courier italic Used for:• Arguments used in examples of command-line syntax• Variables in examples of screen or file output• Variables in pathnames

< > Angle brackets enclose parameter or variable values supplied by the user

[ ] Square brackets enclose optional values

| Vertical bar indicates alternate selections - the bar means “or”


Where to get help

EMC support, product, and licensing information can be obtained as follows.

Product information — For documentation, release notes, software updates, or for information about EMC products, licensing, and service, go to the EMC Powerlink website (registration required) at:

http://Powerlink.EMC.com

Technical support — For technical support, go to EMC Powerlink. To open a case, you must be a customer. Information about your site configuration and the circumstances under which the problem occurred is required.

Your comments

Your suggestions will help us continue to improve the accuracy, organization, and overall quality of the user publications. Please send your opinion of this document to:

[email protected]

If you have issues, comments, or questions about specific information or procedures, please include the title and, if available, the part number, the revision (for example, -01), the page numbers, and any other details that will help us locate the subject you are addressing.



Chapter 1 Introducing VPLEX

This chapter provides an overview of the EMC VPLEX product family and covers several key features of the VPLEX system. Topics include:

◆ VPLEX overview
◆ Mobility
◆ Availability
◆ Collaboration
◆ VPLEX product offerings
◆ Robust high availability with VPLEX Witness
◆ Upgrade paths
◆ VPLEX management interfaces
◆ New features in this release


VPLEX overview

EMC VPLEX is a unique virtual storage technology that federates data located on multiple storage systems – EMC and non-EMC – allowing the storage resources in multiple data centers to be pooled together and accessed anywhere. When combined with virtual servers, it is a critical enabler of private and hybrid cloud computing and the delivery of IT as a flexible, efficient, reliable, and resilient service.

The VPLEX family addresses three primary IT needs:

◆ Mobility: The ability to move applications and data across different storage installations, whether within the same data center, across a campus, within a geographical region - and now, with VPLEX Geo, across even greater distances.

◆ Availability: The ability to create high-availability storage infrastructure across these same varied geographies with unmatched resiliency.

◆ Collaboration: The ability to provide efficient real-time data collaboration over distance for such big data applications as video, geographic/oceanographic research, and others.

All of this can be done within or across data centers, located synchronous or asynchronous distances apart, in a heterogeneous environment.

The VPLEX family brings many unique innovations and advantages:

◆ VPLEX technology enables new models of application and data mobility, leveraging distributed/federated virtual storage. For example, VPLEX is specifically optimized for virtual server platforms (e.g., VMware ESX, Hyper-V, Oracle Virtual Machine, AIX VIOS) and can streamline and even accelerate transparent workload relocation over distance, including the movement of virtual machines.

◆ With its unique, highly available, scale-out clustered architecture, VPLEX can be configured with one, two, or four engines, and engines can be added to a VPLEX cluster non-disruptively. All virtual volumes presented by VPLEX are always accessible from every engine in a VPLEX cluster. Similarly, all physical storage connected to VPLEX is accessible from every engine in the VPLEX cluster. Combined, this scale-out architecture uniquely ensures maximum availability, fault tolerance, and scalable performance.

◆ Advanced data collaboration, through AccessAnywhere, provides cache-consistent active-active access to data across two VPLEX clusters over synchronous distances with VPLEX Metro and asynchronous distances with VPLEX Geo.


Mobility

Application and data mobility enables the movement of virtual machines (VMs) without downtime.

Storage administrators can automatically balance loads through VPLEX, using storage and compute resources from either cluster's location. When combined with server virtualization, VPLEX can transparently move and relocate virtual machines and their corresponding applications and data over distance. This provides a unique capability to relocate, share, and balance infrastructure resources between sites, which can be within a campus or between data centers, up to 5 ms round trip time (RTT) latency apart with VPLEX Metro, or further apart (50 ms RTT) across asynchronous distances with VPLEX Geo.


Availability

By providing redundancy, flexibility, and awareness (through VPLEX Witness), GeoSynchrony supports small recovery time objectives (RTO) and recovery point objectives (RPO). Chapter 4, “System Integrity and Resiliency” provides details on the redundancies built into the VPLEX Metro and VPLEX Geo configurations, and describes how these configurations handle failures to reduce the recovery point objective. All of these features allow the highest resiliency possible in the case of an outage like the one shown in Figure 1.


Figure 1 High availability infrastructure example


Figure 1 shows a VPLEX Metro configuration where storage has become unavailable at one of the cluster sites. Because data is being mirrored using the GeoSynchrony AccessAnywhere feature, both sites access identical copies of the same data. At the point of failure, the applications can continue to function using the back-end storage at the unaffected site. This is just one example of the resiliency provided in the VPLEX architecture. VPLEX Metro also supports uninterrupted access even in the event of port, engine, director, cluster, or inter-cluster link failures, as described in Chapter 2, “VPLEX VS2 Hardware Overview” and Chapter 4, “System Integrity and Resiliency.”

Note: Behavior in a VPLEX Geo configuration performing active/active writes differs in its handling of access during these link failures. See Chapter 4, “System Integrity and Resiliency” for a description of how VPLEX Geo handles cluster and inter-cluster failures.


Collaboration

Collaboration increases utilization of passive data recovery assets and provides simultaneous access to data. Figure 2 shows an example of how you can use distributed data collaboration.

Figure 2 Distributed data collaboration example

When a workforce has multiple users at different sites who need to work on the same data and maintain consistency in the dataset, the distributed data collaboration scenario supported by VPLEX provides a solution. A common example would be a geographically separated company where co-development of software requires collaborative workflows among engineering, graphic arts, video, educational programs, design, research, and so forth.

With traditional solutions, when you try to build collaboration across distance, you normally have to save the entire file at one location and then send it to another site using FTP. This is slow, can incur heavy bandwidth costs for large files (or even small files that move regularly), and negatively impacts productivity because the other sites sit idle while they wait to receive the latest data. If teams decide to do their own work independent of each other, the dataset quickly becomes inconsistent, as multiple people are working on it at the same time and are unaware of each other's most recent changes. Bringing all of the changes together in the end is time-consuming, costly, and grows more complicated as your dataset gets larger.

VPLEX provides a scalable solution for collaboration.


VPLEX product offerings

VPLEX first meets high-availability and data mobility requirements and then scales up to the I/O throughput you require for the front-end applications and back-end storage.

The three available VPLEX product offerings are:

◆ VPLEX Local

◆ VPLEX Metro

◆ VPLEX Geo

Figure 3 shows an example of each product offering.

Figure 3 VPLEX offerings

A VPLEX cluster (both VS1 and VS2) consists of one, two, or four engines and a management server. Each engine contains two directors. A dual-engine or quad-engine cluster also contains a pair of Fibre Channel switches for communication between directors and a pair of UPSs (uninterruptible power supplies) for battery backup of the Fibre Channel switches and the management server.

The management server has a public Ethernet port, which provides cluster management services when connected to the customer network.

VPLEX Local

VPLEX Local provides seamless, non-disruptive data mobility and the ability to manage and mirror data between multiple heterogeneous arrays from a single interface within a data center. VPLEX Local consists of a single VPLEX cluster.

VPLEX Local is a next-generation architecture that allows increased availability, simplified management, and improved utilization across multiple arrays.

VPLEX Metro

VPLEX Metro enables active/active, block-level access to data between two sites within synchronous distances. The distance is limited not only by physical distance but also by host and application requirements. Depending on the application, VPLEX clusters should be installed with inter-cluster links that can support no more than 5 ms¹ round trip delay (RTT).


The combination of virtual storage with VPLEX Metro and virtual servers enables the transparent movement of virtual machines and storage across synchronous distances. This technology provides improved utilization and availability across heterogeneous arrays and multiple sites.

VPLEX Geo

VPLEX Geo enables active/active, block-level access to data between two sites within asynchronous distances. VPLEX Geo enables more cost-effective use of resources and power.

VPLEX Geo extends the distance for distributed devices up to and within 50 ms RTT. As with any asynchronous transport medium, you must also consider bandwidth to ensure optimal performance. Due to the asynchronous nature of distributed writes, VPLEX Geo has different availability and performance characteristics than VPLEX Metro.
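To make the distance trade-off concrete: with synchronous (Metro) distribution, a distributed write is not acknowledged to the host until the remote cluster confirms it, so each write absorbs roughly one inter-cluster round trip. The arithmetic below is a simplified illustration under that assumption only; real write latency also depends on array service times, queuing, and protocol overhead.

# Simplified illustration of why synchronous writes do not stretch to Geo
# distances. Assumes one inter-cluster round trip per distributed write.

local_write_ms = 1.0    # hypothetical local array write service time
metro_rtt_ms = 5.0      # maximum supported VPLEX Metro round trip time
geo_rtt_ms = 50.0       # maximum supported VPLEX Geo round trip time

print(f"Metro distributed write: ~{local_write_ms + metro_rtt_ms:.0f} ms")
print(f"Same model at Geo distance: ~{local_write_ms + geo_rtt_ms:.0f} ms")
# The ~51 ms figure is why VPLEX Geo distributes writes asynchronously
# instead of holding the host acknowledgment for the remote round trip.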

Architecture highlights

VPLEX with GeoSynchrony is open and heterogeneous, supporting both EMC storage and arrays from other storage vendors, such as HDS, HP, and IBM. VPLEX conforms to established world wide naming (WWN) guidelines that can be used for zoning.

VPLEX provides storage federation for operating systems and applications that support clustered file systems, including both physical and virtual server environments with VMware ESX and Microsoft Hyper-V. VPLEX supports network fabrics from Brocade and Cisco.

Refer to the EMC Simple Support Matrix, EMC VPLEX and GeoSynchrony, available at http://elabnavigator.EMC.com under the Simple Support Matrix tab.

An example of the architecture is shown in Figure 4.

1. Refer to VPLEX and vendor-specific White Papers for confirmation of latency limitations.


Figure 4 Architecture highlights


Table 1 provides an overview of VPLEX features along with their benefits.

For all VPLEX products, GeoSynchrony:

◆ Presents storage volumes from back-end arrays to VPLEX engines

◆ Federates the storage volumes into hierarchies of VPLEX virtual volumes with user-defined configuration and protection levels

◆ Presents virtual volumes to production hosts in the SAN via the VPLEX front-end

◆ For VPLEX Metro and VPLEX Geo products, presents a global, block-level directory for distributed cache and I/O between VPLEX clusters

Location and distance determine high-availability and data mobility requirements.

When back-end storage arrays or application hosts span two data centers, the AccessAnywhere feature in VPLEX Metro or VPLEX Geo federates storage in an active/active configuration between VPLEX clusters. Choosing between VPLEX Metro and VPLEX Geo depends on distance, availability, and data synchronicity requirements.

Application and back-end storage I/O throughput, along with availability requirements, determine the number of engines in each VPLEX cluster. High-availability features within the VPLEX cluster allow for non-disruptive software upgrades and hardware expansion as I/O throughput increases.

Table 1 Overview of VPLEX features and benefits

Mobility
• Migration: Move data and applications without impact on users.
• Virtual storage federation: Achieve transparent mobility and access in a data center and between data centers.
• Scale-out cluster architecture: Start small and grow larger with predictable service levels.

Availability
• Resiliency: Mirror across arrays within a single data center or between data centers without host impact. This increases availability for critical applications.
• Distributed cache coherency: Automate sharing, balancing, and failover of I/O across the cluster and between clusters whenever possible.
• Advanced data caching: Improve I/O performance and reduce storage array contention.

Collaboration
• Distributed cache coherency: Automate sharing, balancing, and failover of I/O across the cluster and between clusters whenever possible.


Robust high availability with VPLEX Witness

VPLEX uses rule sets to define how a failure should be handled in a VPLEX Metro or VPLEX Geo configuration. If two clusters lose contact or if one cluster fails, the rule set defines which cluster continues operation and which suspends I/O. This works in many cases of link failure or cluster failure. However, there are still cases in which all I/O must be suspended, resulting in data unavailability. VPLEX with GeoSynchrony introduces the new functionality of VPLEX Witness. VPLEX Metro combined with VPLEX Witness provides the following features:

◆ High availability for applications in a VPLEX Metro configuration leveraging synchronous consistency groups (no single points of storage failure)

◆ Fully automatic failure handling of synchronous consistency groups in a VPLEX Metro configuration (provided these consistency groups are configured with a specific preference)

◆ Better resource utilization

When VPLEX Witness is deployed with a VPLEX Geo system, it can be used for diagnostic purposes, but it does not automate any failover decisions for asynchronous consistency groups.

Typically, data centers implement highly available designs within a data center and deploy disaster recovery functionality between data centers. Traditionally, within the data center, components operate in active/active mode (or active/passive with automatic failover). However, between data centers, legacy replication technologies use active/passive techniques and require manual failover to use the passive component.

When using VPLEX Metro active/active replication technology in conjunction with VPLEX Witness, the lines between local high availability and long-distance disaster recovery are somewhat blurred, because high availability is stretched beyond the data center walls. Because replication is a by-product of federated and distributed storage, disaster avoidance is achievable within these geographically dispersed high-availability environments.

VPLEX Witness augments the failure handling for distributed virtual volumes placed into synchronous consistency groups by providing perspective as to the nature of a failure and providing the proper guidance in the event of a cluster failure or inter-cluster link failure.

Note: VPLEX Witness has no effect on failure handling for distributed volumes outside of consistency groups or volumes in asynchronous consistency groups. Witness also has no effect on distributed volumes in synchronous consistency groups when the preference rule is set to no-automatic-winner.

See Chapter 4, “System Integrity and Resiliency” for more information on VPLEX Witness, including the differences in how VPLEX Witness handles failures and recovery.

Figure 5 shows a high-level architecture of VPLEX Witness. The VPLEX Witness server must reside in a failure domain separate from Cluster 1 and Cluster 2.

Note: The VPLEX Witness server supports round trip time latency of 1 second over the management IP network.


Figure 5 High level VPLEX Witness architecture

Because the VPLEX Witness server resides in a failure domain separate from both of the VPLEX clusters, it can gain more perspective as to the nature of a failure and provide correct guidance. It is this perspective that is vital to distinguishing between a site outage and a link outage, because each of these scenarios requires VPLEX to take a different action.
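The value of that separate failure domain can be illustrated with a small decision sketch. The following Python fragment is a conceptual model only; the function and its inputs are illustrative assumptions, not VPLEX source code. It shows, from one cluster's point of view, how the Witness's independent observation separates a site outage from an inter-cluster link outage.

# Conceptual sketch of VPLEX Witness guidance (illustrative only).

def witness_guidance(peer_reachable: bool, witness_sees_peer: bool) -> str:
    """Decide, from one cluster's viewpoint, how to handle a loss of contact.

    peer_reachable    -- this cluster can still reach the peer cluster
    witness_sees_peer -- the Witness still has contact with the peer cluster
    """
    if peer_reachable:
        return "no failure: continue I/O normally"
    if witness_sees_peer:
        # Peer is alive but unreachable from here: inter-cluster link outage.
        # Witness guides exactly one cluster to continue, preventing split-brain.
        return "link outage: continue only if guided as the winner"
    # Neither this cluster nor the Witness can reach the peer: site outage.
    return "site outage: continue I/O (peer cluster presumed failed)"

# Example: a link partition; the Witness still sees both clusters.
print(witness_guidance(peer_reachable=False, witness_sees_peer=True))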


Upgrade paths

VPLEX facilitates application and storage upgrades without disruption.

This flexibility means that VPLEX is always servicing I/O and never has to be completely shut down.

Storage, application, and host upgrades

The mobility features of VPLEX enable the easy addition or removal of storage, applications, and hosts. When VPLEX encapsulates back-end storage, the block-level nature of the coherent cache allows the upgrade of storage, applications, and hosts. You can configure VPLEX so that all devices within VPLEX have uniform access to all storage blocks.

Hardware upgrades

When capacity demands increase in a data center, VPLEX supports hardware upgrades from single-engine VPLEX systems to dual-engine, and from dual-engine to quad-engine systems. These upgrades also increase the availability of front-end and back-end ports in the data center.

Software upgrades

VPLEX features a robust non-disruptive upgrade (NDU) technology to upgrade the software on VPLEX engines. Management server software must be upgraded before running the NDU.

The redundancy of ports, paths, directors, and engines in VPLEX means that GeoSynchrony on a VPLEX Local or VPLEX Metro can be upgraded without interrupting host access to storage. No service window or application disruption is required to upgrade GeoSynchrony on VPLEX Local or VPLEX Metro. On VPLEX Geo, the upgrade script ensures that the application is active/passive before allowing the upgrade.

Simple support matrix

EMC publishes storage array interoperability information in a Simple Support Matrix available on EMC Powerlink. This information details tested, compatible combinations of storage hardware and applications that VPLEX supports. The Simple Support Matrix can be located at:

http://Powerlink.EMC.com


VPLEX management interfaces

GeoSynchrony supports multiple methods of management and monitoring for the VPLEX cluster:

◆ Web-based GUI: for graphical ease of management from a centralized location.

◆ VPLEX CLI: for command line management of clusters.

◆ VPLEX Element Manager API: software developers and other users use the API to create scripts to run VPLEX CLI commands (a brief sketch follows this list).

◆ SNMP support for performance statistics: supports retrieval of performance-related statistics as published in the VPLEX-MIB.mib.
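As a concrete illustration of scripting against the VPLEX Element Manager API, the sketch below issues an HTTPS request to a management server from Python. The host name, URI path, authentication style, and response shape are assumptions made for illustration only; consult the VPLEX Element Manager API documentation for the actual interface.

# Minimal sketch of a script driving the VPLEX Element Manager API.
# Endpoint path and authentication scheme are illustrative assumptions.
import json
import requests

MGMT_SERVER = "https://mgmt-server.example.com"   # hypothetical address

def list_clusters(username: str, password: str):
    """Fetch the cluster contexts exposed by the management server."""
    response = requests.get(
        f"{MGMT_SERVER}/vplex/clusters",          # assumed endpoint
        auth=(username, password),                # assumed auth scheme
        verify=False,  # sketch only; validate certificates in production
    )
    response.raise_for_status()
    return response.json()

if __name__ == "__main__":
    print(json.dumps(list_clusters("service", "password"), indent=2))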


New features in this release

Release 5.1 provides the following new features:

◆ New performance dashboard and CLI-based performance capabilities.

Performance monitoring collects and displays statistics to determine how a port or volume is being used, how much I/O is being processed, CPU usage, and so on. The performance monitoring dashboard provides a customized view into the performance of your VPLEX system. You decide which aspects of the system's performance to view and compare. Alternatively, you can use the CLI to create a toolbox of custom monitors to operate under varying conditions, including debugging, capacity planning, and workload characterization (see the sketch following this list).

◆ An integrated RecoverPoint splitter.

EMC’s RecoverPoint provides comprehensive data protection by continuous replication (splitting) of host writes. With RecoverPoint, applications can be recovered to any point in time. Starting with GeoSynchrony 5.1, VPLEX includes an integrated RecoverPoint splitter. With this splitter, VPLEX volumes can have their I/O replicated by RecoverPoint Appliances (RPAs) to volumes located in VPLEX, or onto one or more heterogeneous storage arrays.

◆ More robust cache vault trigger mechanisms.

New cache vault trigger mechanisms provide better response to situations where data is at risk, with less potential for forcing unnecessary cache vaults.

◆ Battery conditioning.

Battery conditioning verifies the health of the batteries and extends their operational life. Each SPS battery in a VPLEX system is automatically conditioned once a month. In addition to the monthly automatic conditioning cycles, you can manually request and cancel conditioning cycles.

◆ New call home configuration settings.

Call-home notifications are messages sent automatically from VPLEX to EMC Customer Service or customer personnel when a serious problem occurs. Call-home notifications enable EMC to proactively engage the relevant personnel, or use a configured ESRS gateway, to resolve the problem.

◆ Re-organization of event message severities

Event messages notify users of changes in conditions under which the system is operating. Depending on their severity, these messages can also trigger a call home. The new event message severity levels now more closely reflect the true operating conditions of the system.
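For the CLI-based monitoring mentioned above, a script can drive the VPLEX CLI over SSH to build a custom monitor. The sketch below is a hypothetical example: the vplexcli invocation, command names, and options are assumptions loosely modeled on the EMC VPLEX CLI Guide and must be verified against the guide for your release.

# Hypothetical sketch: creating a custom performance monitor by running
# VPLEX CLI commands over SSH. Command names/options are assumptions;
# verify them in the EMC VPLEX CLI Guide for your release.
import paramiko

def create_fe_port_monitor(host: str, user: str, password: str) -> str:
    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    client.connect(host, username=user, password=password)
    commands = [
        # Assumed syntax: sample front-end port statistics every 5 seconds.
        "monitor create --name fe-port-stats --period 5s "
        "--director director-1-1-A --stats fe-prt.*",
        # Assumed syntax: write the samples to a CSV file sink.
        "monitor add-file-sink --monitor fe-port-stats "
        "--file /var/log/VPlex/cli/fe-port-stats.csv",
    ]
    output = []
    for command in commands:
        _, stdout, _ = client.exec_command(f"vplexcli -e '{command}'")
        output.append(stdout.read().decode())
    client.close()
    return "\n".join(output)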


Chapter 2 VPLEX VS2 Hardware Overview

This chapter describes the major VPLEX VS2 hardware components, including the following topics:

◆ System components
◆ The VPLEX engine
◆ The VPLEX director
◆ VPLEX cluster architecture
◆ VPLEX power supply modules
◆ Power and Environmental monitoring
◆ VPLEX component failures


System components

Note: This chapter describes only the VS2 hardware for VPLEX clusters. If you are currently running on a VS1 system, see Appendix A, “VS1 Hardware Description.”

Figure 6 shows the main hardware components in a quad-engine VPLEX cluster. Figure 7 shows the main hardware components in a dual-engine VPLEX cluster. Figure 8 shows the main hardware components in a single-engine VPLEX cluster.

Figure 6 Quad-engine VS2 VPLEX cluster

(SPS = Standby Power Supply; UPS = Uninterruptible Power Supply)

Figure 7 Dual-engine VS2 VPLEX cluster


Figure 8 Single-engine VS2 VPLEX cluster

Table 2 describes the major components and their functions.


Table 2 Hardware components

Engine
  Contains two directors, with each providing front-end and back-end I/O connections.

Director
  Contains:
  • Five I/O modules (IOMs), as identified in Figure 9
  • Management module, for intra-cluster communication
  • Two redundant 400 W power supplies with built-in fans
  • CPU
  • Solid-state disk (SSD) that contains the GeoSynchrony operating environment
  • RAM

Management server
  Provides:
  • Management interface to a public IP network
  • Management interfaces to other VPLEX components in the cluster
  • Event logging service


Fibre Channel COM switches (dual-engine or quad-engine cluster only)
  Provide intra-cluster communication support among the directors. (This is separate from the storage I/O.)

Power subsystem
  Power distribution panels (PDPs) connect to the site's AC power source, and transfer power to the VPLEX components through power distribution units (PDUs). This provides a centralized power interface and distribution control for the power input lines. The PDPs contain manual on/off power switches for their power receptacles.

Standby Power Supply (SPS)
  One SPS assembly (two SPS modules) provides backup power to each engine in the event of an AC power interruption. Each SPS module maintains power for two five-minute periods of AC loss while the engine shuts down.

Uninterruptible Power Supply (UPS) (dual-engine or quad-engine cluster only)
  One UPS provides battery backup for Fibre Channel switch A and the management server, and a second UPS provides battery backup for Fibre Channel switch B. Each UPS module maintains power for two five-minute periods of AC loss while the engine shuts down.


The VPLEX engine

The VPLEX VS2 engine contains two directors, with each providing front-end and back-end I/O connections. Each of these module types is described in more detail in “The VPLEX director.”

Figure 9 identifies the modules in an engine.

Figure 9 Engine, rear view

(Figure 9 shows the engine rear view: Director B on the left and Director A on the right. For each director, IOM 0 is front end, IOM 1 is back end, IOM 2 is WAN COM, IOM 3 is local COM, IOM 4 is reserved, and each director has a management module.)

Depending on the cluster topology, slots A2 and B2 contain one of the following I/O modules (both IOMs must be the same type): a filler module (VPLEX Local only), a 10 Gb/s Ethernet module, or an 8 Gb/s Fibre Channel module.


The VPLEX director

Each director services host I/O. The director hosts the GeoSynchrony operating environment for such VPLEX functions as I/O request processing, distributed cache management, virtual-to-physical translations, and interaction with storage arrays.

Front-end and back-end connectivity

Four 8 Gb/s Fibre Channel ports are provided for front-end connectivity, and four 8 Gb/s ports are provided for back-end connectivity.

The industry-standard Fibre Channel ports connect to host initiators and storage devices.

WAN connectivity in VPLEX Metro and VPLEX Geo

WAN communication between VPLEX Metro or VPLEX Geo clusters is over Fibre Channel (8 Gb/s) for VPLEX Metro, or Gigabit Ethernet (10 GbE) for VPLEX Metro or VPLEX Geo.

CAUTION! The inter-cluster link carries unencrypted user data. To protect the security of the data, secure connections are required between clusters.

Director redundancy

When properly zoned and configured, the front-end and back-end connections provide redundant I/O that can be serviced by any director in the VPLEX configuration.

Director redundancy is provided by connecting ports in dedicated Fibre Channel I/O modules to an internal Fibre Channel network. Directors within an engine are directly connected through an internal communication channel; directors are connected between engines through dual Fibre Channel switches. Through this network, each VPLEX director participates in intra-cluster communications.


VPLEX cluster architecture

The distributed VPLEX hardware components are connected through Ethernet or Fibre Channel cabling and the respective switching hardware.

I/O modules provide front-end and back-end connectivity between SANs and toremote VPLEX clusters in VPLEX Metro or VPLEX Geo configurations.

Management server

The management server in each VPLEX cluster provides management services that are accessible from a public IP network.

The management server coordinates event notification, data collection, VPLEX software upgrades, configuration interfaces, diagnostics, and some director-to-director communication. The management server also forwards VPLEX Witness traffic between directors in the local cluster and the remote VPLEX Witness server.

Both clusters in either VPLEX Metro or VPLEX Geo configuration can be managedfrom a single management server.

The management server is on a dedicated, internal management IP network thatprovides accessibility for all major components in the cluster. The management serverprovides redundant internal management network IP interfaces. In addition to theinternal management IP network, each management server connects to the publicnetwork, which serves as the access point.

Fibre Channel switches

The Fibre Channel switches provide high availability and redundant connectivity between directors and engines in a dual-engine or quad-engine cluster. Each Fibre Channel switch is powered by a UPS and has redundant I/O ports for intra-cluster communication.

The Fibre Channel switches do not connect to the front-end hosts or back-end storage.


VPLEX power supply modules

Two independent power zones in a data center feed each VPLEX cluster, providing a highly available power distribution system. To assure fault-tolerant power in the cabinet system, external AC power must be supplied from independent power distribution units (PDUs) at the customer site, as shown in Figure 10.

Figure 10 VPLEX cluster independent power zones

The PDPs contain manual on/off power switches for their power receptacles. For additional information on power requirements, see the EMC Best Practices Guide for AC Power Connections in Two-PDP Bays.

Each power supply module is a FRU and can be replaced with no disruption to the services provided, as long as only one power supply module is replaced at a time.

Standby power supplies

Each engine is connected to two standby power supplies (SPS) that provide battery backup to each director, both to ride through transient site power failures and to provide sufficient time for the directors to vault their cache if power is not restored within 30 seconds. A single standby power supply provides enough power for the attached engine to ride through two back-to-back 5-minute losses of power. Refer to "VPLEX distributed cache protection and redundancy" in Chapter 4, "System Integrity and Resiliency."

Uninterruptible power supplies

In the event of a power failure in dual- and quad-engine clusters, the management server and Fibre Channel switch A draw power from UPS-A. UPS-B provides battery backup for Fibre Channel switch B. In this way, the Fibre Channel switches and the management server in multi-engine configurations can continue operation through two back-to-back 5-minute losses of power.

[Figure 10 callouts: the EMC cabinet (rear) contains two PDPs fed by customer PDU 1 (power zone A, gray) and customer PDU 2 (power zone B, black). Labels on the customer power lines record the cabinet serial number, PDU number, panel number, and circuit breaker numbers (for example, PDU 1/CB 28 for zone A and PDU 2/CB 9 for zone B).]


Power and Environmental monitoring

A GeoSynchrony service performs the overall health monitoring of the VPLEX cluster and provides environmental monitoring for the VPLEX cluster hardware. It monitors various power and environmental conditions at regular intervals and logs any condition changes into the VPLEX messaging system.

Any condition that indicates a hardware or power fault generates a call-home event to notify the user.


VPLEX component failures

All critical processing components of a VPLEX system use at minimum pair-wise redundancy to maximize data availability. This section describes how VPLEX component failures are handled and the best practices that should be used to allow applications to tolerate these failures.

All component failures that occur within a VPLEX system are reported through events that call back to the EMC Service Center to ensure timely response and repair of these fault conditions.

Storage array outages

To overcome both planned and unplanned storage array outages, VPLEX supports the ability to mirror the data of a virtual volume between two or more storage volumes using a RAID 1 device. Figure 11 shows a virtual volume that is mirrored between two arrays. Should one array experience an outage, planned or unplanned, the VPLEX system can continue processing I/O on the surviving mirror leg. Upon restoration of the failed storage volume, VPLEX synchronizes the data from the surviving volume to the recovered leg.

Figure 11 Local mirrored volumes

Best practices for local mirrored volumes

◆ For critical data, it is recommended to mirror data onto two or more storage volumes that are provided by separate arrays.

◆ For the best performance, these storage volumes should be configured identically and be provided by the same type of array.

Fibre Channel port failures

The small form-factor pluggable (SFP) transceivers that are used for connectivity to VPLEX are serviceable Field Replaceable Units (FRUs).

Best practices for Fibre Channel ports

Follow these best practices to ensure the highest reliability of your configuration:

Front end:


◆ Ensure there is a path from each host to at least one front-end port on director A and at least one front-end port on director B. When the VPLEX cluster has two or more engines, ensure that the host has at least one A-side path on one engine and at least one B-side path on a separate engine. For maximum availability, each host can have a path to at least one front-end port on every director.

◆ Use multi-pathing software on the host servers to ensure timely response and continuous I/O in the presence of path failures.

◆ Ensure that each host has a path to each virtual volume through each fabric.

◆ Ensure that the fabric zoning provides hosts redundant access to the VPLEX front-end ports.

Back end:

◆ Ensure that the LUN mapping and masking for each storage volume presented from a storage array to VPLEX presents the volume out of at least two ports from the array, on at least two different fabrics, from different controllers.

◆ Ensure that the LUN connects to at least two different back-end ports of each director within a VPLEX cluster.

◆ Active/passive arrays must have one active and one passive port zoned to each director, and zoning must provide VPLEX with redundant access to the array ports.

◆ Configure a maximum of eight paths between one director and one LUN (two directors can each have eight paths to a LUN).

Note: On VS2 hardware, only 4 physical ports are available for back-end connections on each director. Refer to the VPLEX Configuration Guide for details on the hardware configuration you are using.
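The path rules above lend themselves to a mechanical check. The following Python sketch is a minimal illustration, assuming a hypothetical inventory of host paths expressed as dictionaries; the inventory format and director naming are invented for the example and are not produced by any VPLEX tool.

def check_host_paths(paths, cluster_engines):
    # paths: list of dicts such as {'director': 'A1', 'engine': 1}
    a_engines = {p['engine'] for p in paths if p['director'].startswith('A')}
    b_engines = {p['engine'] for p in paths if p['director'].startswith('B')}
    if not a_engines or not b_engines:
        return False  # need at least one A-side and one B-side path
    if cluster_engines >= 2:
        # A-side and B-side paths must land on separate engines
        return any(a != b for a in a_engines for b in b_engines)
    return True

# Example: one A-side path on engine 1 and one B-side path on engine 2
print(check_host_paths([{'director': 'A1', 'engine': 1},
                        {'director': 'B2', 'engine': 2}],
                       cluster_engines=2))  # True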

I/O module failure

I/O modules within VPLEX serve dedicated roles. In VS2, each VPLEX director has one front-end I/O module, one back-end I/O module, and one COM I/O module used for intra- and inter-cluster connectivity. Each I/O module is a serviceable FRU. The following sections describe the behavior of the system.

Front-end I/O module

Should a front-end I/O module fail, all paths connected to this I/O module fail and VPLEX calls home. Follow "Best practices for Fibre Channel ports" to ensure that hosts have a redundant path to their data.

During the removal and replacement of an I/O module, the affected director is reset.

Back-end I/O module

Should a back-end I/O module fail, all paths connected to this I/O module fail and VPLEX calls home. Follow "Best practices for Fibre Channel ports" to ensure that each director has a redundant path to each storage volume through a separate I/O module.

During the removal and replacement of an I/O module, the affected director resets.

COM I/O module

Should the local COM I/O module of a director fail, the director resets and all service provided by that director stops. Following "Best practices for Fibre Channel ports" ensures that each host has redundant access to its virtual storage through multiple directors, so the reset of a single director does not cause the host to lose access to its storage.

During the removal and replacement of a local COM I/O module, the affected director is reset.


Director failure

A director failure causes the loss of all service from that director. Each VPLEX engine has a pair of directors for redundancy. VPLEX clusters containing two or more engines benefit from the additional redundancy provided by the additional directors. Each director within a cluster is capable of presenting the same storage. The "Best practices for Fibre Channel ports" allow a host to ride through director failures by placing redundant paths to its virtual storage through ports provided by different directors. The combination of multi-pathing software on the hosts and redundant paths through different directors of the VPLEX system allows the host to ride through the loss of a director.

Each director is a serviceable FRU.

Intra-cluster IP management network failure

Each VPLEX cluster has a pair of private local IP subnets that connect the directors to the management server. These subnets are used for management traffic, protection against intra-cluster partitioning, and communication between the VPLEX Witness server (if it is deployed) and the directors. Link loss on one of these subnets can result in the inability of some subnet members to communicate with other members on that subnet. This causes no loss of service or manageability, due to the presence of the redundant subnet, though it might result in loss of connectivity between the affected director and VPLEX Witness.

Intra-cluster Fibre Channel switch failure

Each VPLEX cluster with two or more engines uses a pair of dedicated Fibre Channel switches for intra-cluster communication between the directors within the cluster. Two redundant Fibre Channel fabrics are created, with each switch serving a different fabric. The loss of a single Fibre Channel switch results in no loss of processing or service.

Inter-cluster WAN links

In VPLEX Metro and VPLEX Geo configurations, the clusters are connected through WAN links that you provide.

Best practices for inter-cluster WAN links

Follow these best practices when setting up your VPLEX clusters:

◆ For VPLEX Metro configurations, latency must be less than 5ms round trip time (RTT).

◆ For VPLEX Geo configurations, latency must be less than 50ms RTT.

◆ Links must support a minimum of 45Mb/s of bandwidth. However, the required bandwidth is dependent on the I/O pattern and must be high enough for all writes to all distributed volumes to be exchanged between clusters (see the sizing sketch after this list).

◆ The switches used to connect the WAN links between both clusters should be configured with a battery-backup UPS.

◆ Use physically independent WAN links for redundancy.

◆ Every WAN port on every director must be able to connect to a WAN port on every director in the other cluster.

◆ Logically isolate VPLEX Metro or VPLEX Geo traffic from other WAN traffic using VSANs or LSANs.

◆ Independent inter-switch links (ISLs) are strongly recommended for redundancy.

◆ Use VPLEX Witness in an independent failure domain to improve availability of the VPLEX Metro solution.
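To make the bandwidth rule concrete, the arithmetic below sizes the inter-cluster link from a peak aggregate write rate. Only the 45Mb/s floor comes from the text above; the workload figure is a placeholder, and real sizing must account for write peaks across all distributed volumes.

MIN_LINK_MBPS = 45  # documented minimum, in megabits per second

def required_link_mbps(peak_write_mb_per_sec):
    # All writes to all distributed volumes cross the inter-cluster link.
    needed = peak_write_mb_per_sec * 8  # convert MB/s to Mb/s
    return max(needed, MIN_LINK_MBPS)

# Example: a 20 MB/s aggregate write peak needs a 160 Mb/s link,
# well above the 45 Mb/s floor.
print(required_link_mbps(20))  # 160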


Power supply failures

Each VPLEX cluster provides two zones of AC power. If one zone loses power, the modules in the cluster can continue to run using power from the other zone. When power is lost in both zones, the engines revert to power from their SPS modules. In multi-engine clusters, the management server and intra-cluster Fibre Channel switches revert to the power supplied by the UPS.

Standby power supply failure

Each SPS is a FRU and can be replaced with no disruption to the services provided by the system. The recharge time for an SPS is up to 5.5 hours, and the batteries in the standby power supply are capable of supporting two sequential outages of no greater than 5 minutes without data loss.

UPS failure

Each UPS is a FRU and can be replaced with no disruption to the services provided by the system. The UPS modules provide up to two sequential 5-minute periods of battery backup power to the Fibre Channel switches in a multi-engine cluster. The batteries require a 6-hour recharge time to reach 90% capacity.

Note: While the batteries can support two 5-minute power losses, the VPLEX Local, VPLEX Metro, or VPLEX Geo cluster vaults after a 30-second power loss to ensure enough battery power remains to complete the cache vault.

Power failures that cause vault

Vaulting is evolving rapidly with each release of GeoSynchrony. The events and conditions that trigger cache vaulting vary by release as follows:

Release 5.0.1:

◆ Vaulting is introduced.

◆ On all configurations, vaulting is triggered if all of the following conditions are present:

• AC power is lost (due to power failure, faulty hardware, or an absent power supply) in power zone A on engine X,

• AC power is lost (due to power failure, faulty hardware, or an absent power supply) in power zone B on engine Y,

(X and Y are the same in a single-engine configuration, but may or may not be the same in dual- or quad-engine configurations.)

• Both conditions persist for more than 30 seconds.

Release 5.0.1 Patch:

◆ On all configurations, vaulting is triggered if all of the following conditions are present:

• AC power is lost (due to power failure or faulty hardware) in power zone A on engine X,

• AC power is lost (due to power failure or faulty hardware) in power zone B on engine Y,

(X and Y are the same in a single-engine configuration, but may or may not be the same in dual- or quad-engine configurations.)

• Both conditions persist for more than 30 seconds.

Release 5.1:

◆ In a VPLEX Geo configuration with asynchronous consistency groups, vaulting is triggered if all of the following conditions are present:

• AC power is lost (due to power failure or faulty hardware) or becomes "unknown" in a director on engine X,


• AC power is lost (due to power failure or faulty hardware) or becomes "unknown" in a director on engine Y,

(X and Y are the same in a single-engine configuration, but may or may not be the same in dual- or quad-engine configurations.)

• Both conditions persist for more than 30 seconds.

◆ In a VPLEX Local or VPLEX Metro configuration, vaulting is triggered if all of the following conditions are present:

• AC power is lost (due to power failure or faulty hardware) or becomes "unknown" in the minimum number of directors required for the cluster to be operational.

• The condition persists for more than 30 seconds.

Note: UPS power conditions do not trigger any vaulting.
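The zone-based trigger used through Release 5.0.1 can be modeled as a simple predicate. The sketch below is illustrative only: the data structures are invented for the example, and the real GeoSynchrony monitor also handles the per-director "unknown" power states introduced in Release 5.1.

VAULT_DELAY_SECONDS = 30

def should_vault(power_states, now):
    # power_states: list of dicts such as
    # {'engine': 1, 'zone': 'A', 'ac_lost': True, 'since': 1000.0}
    def lost_long_enough(s, zone):
        return (s['zone'] == zone and s['ac_lost']
                and now - s['since'] > VAULT_DELAY_SECONDS)

    # Zone A lost on some engine X, and zone B lost on some engine Y;
    # X and Y need not be the same engine in dual or quad configurations.
    zone_a_lost = any(lost_long_enough(s, 'A') for s in power_states)
    zone_b_lost = any(lost_long_enough(s, 'B') for s in power_states)
    return zone_a_lost and zone_b_lost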

VPLEX Witness failure

If VPLEX Witness is deployed, failure of the VPLEX Witness has no impact on I/O as long as the two clusters stay connected with each other. However, if a cluster failure or inter-cluster network partition happens while VPLEX Witness is down, there will be data unavailability on all surviving clusters. If the VPLEX Witness outage is expected to be long, the best practice is to disable VPLEX Witness (while the clusters are still connected) and revert to using the preconfigured detach rules. Once VPLEX Witness recovers, it can be re-enabled with the cluster-witness enable CLI command. Refer to the EMC VPLEX CLI Guide for information about these commands.

VPLEX management server failure

Each VPLEX cluster has a dedicated management server that provides management access to the directors and supports management connectivity for remote access to the peer cluster in a VPLEX Metro or VPLEX Geo environment. Because the I/O processing of the VPLEX directors does not depend on the management servers, in most cases the loss of a management server does not interrupt the I/O processing and virtualization services provided by VPLEX. However, VPLEX Witness traffic is sent through the management server. If the management server fails in a configuration running VPLEX Witness, VPLEX Witness is no longer able to communicate with the cluster. Should the remote VPLEX cluster then fail, data becomes unavailable. If the inter-cluster network partitions, the remote cluster always proceeds with I/O regardless of preference because it is still connected to the Witness. (This applies only to synchronous consistency groups with a rule setting that identifies a specific preference.)

If the failure of the management server is expected to be long, it may be desirable to disable VPLEX Witness functionality while the clusters are still connected. Refer to the EMC VPLEX CLI Guide for information about the commands used to disable and enable VPLEX Witness.


Chapter 3 VPLEX Software

This chapter provides information on various components in the VPLEX software.

◆ GeoSynchrony
◆ Management of VPLEX
◆ Provisioning
◆ Data mobility
◆ Mirroring
◆ Consistency groups
◆ Cache vaulting


GeoSynchrony

GeoSynchrony is the operating system running on VPLEX directors. GeoSynchrony is an intelligent, multitasking, locality-aware operating environment that controls the data flow for virtual storage. GeoSynchrony is:

◆ Optimized for mobility, availability, and collaboration

◆ Designed for highly available, robust operation in geographically distributed environments

◆ Driven by real-time I/O operations

◆ Intelligent about locality of access

◆ Provides the global directory that supports AccessAnywhere

GeoSynchrony supports your mobility, availability and collaboration needs.

Table 3 AccessAnywhere capabilities

Storage volume encapsulation: LUNs on a back-end array can be imported into an instance of VPLEX and used while keeping their data intact.
Considerations: The storage volume retains the existing data on the device and leverages the media protection and device characteristics of the back-end LUN.

RAID 0: VPLEX devices can be aggregated to create a RAID 0 striped device.
Considerations: Improves performance by striping I/Os across LUNs.

RAID-C: VPLEX devices can be concatenated to form a new larger device.
Considerations: Provides a means of creating a larger device by combining two or more smaller devices.

RAID 1: VPLEX devices can be mirrored within a site.
Considerations: Withstands a device failure within the mirrored pair. A device rebuild is a simple copy from the remaining device to the newly repaired device. Rebuilds are done in incremental fashion, whenever possible. The number of required devices is twice the amount required to store data (the actual storage capacity of a mirrored array is 50%). The RAID 1 devices can come from different back-end array LUNs, providing the ability to tolerate the failure of a back-end array.

Distributed RAID 1: VPLEX devices can be mirrored between sites.
Considerations: Provides protection from site disasters and supports the ability to move data between geographically separate locations.

Extents: Storage volumes can be broken into extents and devices created from these extents.
Considerations: Used when LUNs from a back-end storage array are larger than the desired LUN size for a host. This provides a convenient means of allocating what is needed while taking advantage of the dynamic thin allocation capabilities of the back-end array.

Migration: Volumes can be migrated non-disruptively to other storage systems.
Considerations: Use for changing the quality of service of a volume or for performing technology refresh operations.

Global Visibility: The presentation of a volume from one VPLEX cluster where the physical storage for the volume is provided by a remote VPLEX cluster.
Considerations: Use for AccessAnywhere collaboration between locations. The cluster without local storage for the volume uses its local cache to service I/O, but non-cached operations incur remote latencies to write or read the data.


Management of VPLEX

Within the VPLEX cluster, TCP/IP-based management traffic travels through a private management network to the components in one or more clusters. In VPLEX Metro and VPLEX Geo, VPLEX establishes a VPN tunnel between the management servers of both clusters.

Web-based GUI

VPLEX includes a Web-based graphical user interface (GUI) for management. The EMC Unisphere for VPLEX online help provides more information on using this interface.

Figure 12 Using the GUI to claim storage

To perform VPLEX operations that are not available in the GUI, refer to the CLI, which supports full functionality.

VPLEX CLI

The VPLEX CLI is a command line interface (CLI) for configuring and operating VPLEX systems. The CLI is divided into command contexts. Some commands are accessible from all contexts and are referred to as global commands. The remaining commands are arranged in a hierarchical context tree and can only be executed from the appropriate location in the context tree. Example 1 shows a CLI session that performs the same tasks as shown in Figure 12.

Example 1: Find unclaimed storage volumes, claim them as thin storage, and assign names from a CLARiiON hints file:

VPlexcli:/clusters/cluster-1/storage-elements/storage-volumes> claimingwizard --file /home/service/clar.txt --thin-rebuild

Found unclaimed storage-volume VPD83T3:6006016091c50e004f57534d0c17e011 vendor DGC: claiming and naming clar_LUN82.


Found unclaimed storage-volume VPD83T3:6006016091c50e005157534d0c17e011 vendor DGC: claiming and naming clar_LUN84.

Claimed 2 storage-volumes in storage array clar

Claimed 2 storage-volumes in total.

VPlexcli:/clusters/cluster-1/storage-elements/storage-volumes>

The EMC VPLEX CLI Guide provides a comprehensive list of VPLEX commands and detailed instructions on using those commands.

SNMP support for performance statistics

The VPLEX snmpv2c SNMP agent provides performance statistics as follows:

◆ Supports retrieval of performance-related statistics as published in the VPLEX-MIB.mib.

◆ Runs on the management server and fetches performance-related data from individual directors using a firmware-specific interface.

◆ Provides SNMP MIB data for directors for the local cluster only.
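A polling client for these statistics might look like the following sketch, which uses the third-party pysnmp package. The management-server host name and community string are placeholders, and the walk starts at EMC's IANA enterprise OID (1.3.6.1.4.1.1139) on the assumption that the VPLEX-MIB objects live under it; check VPLEX-MIB.mib for the actual object identifiers.

from pysnmp.hlapi import (SnmpEngine, CommunityData, UdpTransportTarget,
                          ContextData, ObjectType, ObjectIdentity, nextCmd)

EMC_ENTERPRISE_OID = '1.3.6.1.4.1.1139'  # EMC's IANA enterprise number

def walk_vplex_stats(host, community='public'):
    # Walk the assumed VPLEX-MIB subtree on the management server (SNMP v2c).
    for err_ind, err_stat, err_idx, var_binds in nextCmd(
            SnmpEngine(),
            CommunityData(community, mpModel=1),
            UdpTransportTarget((host, 161)),
            ContextData(),
            ObjectType(ObjectIdentity(EMC_ENTERPRISE_OID)),
            lexicographicMode=False):  # stop at the end of the subtree
        if err_ind or err_stat:
            raise RuntimeError(str(err_ind or err_stat))
        for oid, value in var_binds:
            yield str(oid), value.prettyPrint()

for oid, value in walk_vplex_stats('mgmt-server-1'):
    print(oid, value)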

LDAP / AD Support

VPLEX offers Lightweight Directory Access Protocol (LDAP) or Active Directory as an authentication directory service.

VPLEX Element Manager API

The VPLEX Element Manager API uses the Representational State Transfer (REST) software architecture for distributed systems such as the World Wide Web. It allows software developers and other users to create scripts that run VPLEX CLI commands.

The VPLEX Element Manager API supports all VPLEX CLI commands that can be executed from the root context on a director.
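Because the API is REST-based, any HTTP client can drive it. The sketch below uses Python's third-party requests package; the URI path, authentication headers, and response layout shown are assumptions made for illustration, so consult the VPLEX Element Manager API documentation for the actual contract.

import requests

MGMT_SERVER = 'https://mgmt-server-1'  # management server address (placeholder)
HEADERS = {'Username': 'service', 'Password': 'password'}  # assumed auth scheme

def get_context(uri):
    # Fetch a CLI context path, for example '/vplex/clusters' (hypothetical URI).
    resp = requests.get(MGMT_SERVER + uri, headers=HEADERS, verify=False)
    resp.raise_for_status()  # raise on HTTP errors
    return resp.json()

print(get_context('/vplex/clusters'))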

Call home

The Call Home feature in GeoSynchrony alerts EMC support personnel to warnings in VPLEX so that they can arrange for proactive remote or on-site service. Certain events trigger the Call Home feature. If the same event on the same component occurs repeatedly, a call-home is generated for the first instance of the event, and not again for 8 hours.
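The 8-hour suppression window can be pictured with a small model. The event keying scheme below is an assumption made for illustration and is not the actual GeoSynchrony implementation.

import time

SUPPRESS_SECONDS = 8 * 60 * 60
_last_sent = {}  # (event_id, component) -> timestamp of last call-home

def should_call_home(event_id, component, now=None):
    # Send on the first occurrence of an event on a component, then
    # suppress repeats of the same pair for eight hours.
    now = time.time() if now is None else now
    key = (event_id, component)
    last = _last_sent.get(key)
    if last is None or now - last >= SUPPRESS_SECONDS:
        _last_sent[key] = now
        return True
    return False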


Provisioning

VPLEX allows easy storage provisioning among heterogeneous storage arrays. After a storage array LUN volume is encapsulated within VPLEX, all of its block-level storage is available in a global directory and coherent cache. Any front-end device that is zoned properly can access the storage blocks.

Table 4 describes the methods available for provisioning.

Table 4 Provisioning methods

EZ provisioning: EZ provisioning capitalizes on a Create Virtual Volumes wizard that claims storage and creates extents, devices, and then virtual volumes on those devices. EZ provisioning uses the entire capacity of the selected storage volume to create a device, and then creates a virtual volume on top of the device.

Advanced provisioning: Advanced provisioning allows you to slice storage volumes into portions, or extents. Extents are then available to create devices, and then virtual volumes on those devices. Advanced provisioning requires manual configuration of each of the provisioning steps and is useful for tasks such as creating complex devices.

Thick and thin storage volumes

Thin provisioning optimizes the efficiency with which available storage space is used in the network. Unlike traditional (thick) provisioning, where storage space is allocated beyond the current requirement in anticipation of a growing need, thin provisioning allocates disk storage capacity only as the application needs it, that is, when it writes. Thinly provisioned volumes are expanded dynamically depending on the amount of data written to them, and they do not consume physical space until written to.

VPLEX automatically discovers storage arrays that are connected to its back-end ports. By default, VPLEX treats all storage volumes as if they were thickly provisioned on the array.

Storage volumes that are thinly provisioned on the array should be claimed with the thin-rebuild parameter in VPLEX. This provides thin-to-thin copies in VPLEX using a different type of rebuild. Unlike a traditional rebuild that copies all the data from the source to the target, in this case VPLEX first reads the storage volume and, if the target is thinly provisioned, does not write unallocated blocks to the target. Writing unallocated blocks to the target would cause VPLEX to convert a thin target to thick, eliminating the efficiency of the thin volume.
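Conceptually, the thin-aware rebuild behaves like the copy loop below: read each block from the source and skip writing blocks that carry no allocated data, so a thin target is not inflated into a thick one. The block size and the all-zeros test are simplifying assumptions for illustration; the actual rebuild logic is internal to GeoSynchrony.

BLOCK_SIZE = 4096
ZERO_BLOCK = b'\x00' * BLOCK_SIZE

def thin_rebuild(source, target):
    # Copy source to target, skipping unallocated (all-zero) blocks.
    offset = 0
    while True:
        block = source.read(BLOCK_SIZE)
        if not block:
            break
        if block != ZERO_BLOCK:  # write only blocks that hold data
            target.seek(offset)
            target.write(block)
        offset += len(block)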

About extents

An extent is a portion of a disk. The ability to provision extents allows you to break a storage volume into smaller pieces. This feature is useful when LUNs from a back-end storage array are larger than the desired LUN size for a host. Extents provide a convenient means of allocating what is needed while taking advantage of the dynamic thin allocation capabilities of the back-end array. Extents can then be combined into devices.

About devices

Devices combine extents or other devices into one large device with specific RAID techniques such as mirroring or striping. Devices can only be created from extents or other devices. A device's storage capacity is not available until you create a virtual volume on the device and export that virtual volume to a host. You can create only one virtual volume per device.

There are two types of devices:

◆ A simple device is configured by using one component — an extent.



◆ A complex device has more than one component, combined by using a specific RAID type. The components can be extents or other devices (both simple and complex).

Device visibility

You can create a virtual volume from extents at one VPLEX Metro cluster and make it available for I/O at the other VPLEX Metro cluster. This is done by making the virtual volume, or the consistency group containing the volume, globally visible. A virtual volume on a top-level device that has global visibility can be exported in storage views on any cluster in a VPLEX Metro. Consistency groups aggregate volumes together to ensure the common application of a set of properties to the entire group. Figure 13 shows a local consistency group with global visibility.

Remote virtual volumes suspend I/O during inter-cluster link outages. As a result, the availability of the data on remote virtual volumes is directly related to the reliability of the inter-cluster link.

Figure 13 Local consistency group with global visibility (W indicates the write path; A indicates the acknowledgement path)

Distributed devices

Distributed devices are present at both clusters for simultaneous active/active read/write access, using AccessAnywhere to ensure consistency of the data between the clusters. Each distributed virtual volume looks to the hosts as if it is a single centralized volume served by a single array located in a centralized location, except that


everything is distributed and nothing is centralized. Because distributed devices are configured using storage from both clusters, they are used only in a VPLEX Metro or VPLEX Geo configuration. Figure 14 shows distributed devices.

Figure 14 Distributed devices

Virtual volumes

A virtual volume is created on a device or a distributed device, and is presented to a host through a storage view. Virtual volumes are created on top-level devices only, and always use the full capacity of the device or distributed device.

You can non-disruptively expand a virtual volume up to 945 times as necessary. However, expanding virtual volumes created on distributed RAID 1 devices is not supported in the current release.

Logging volumes

During initial system setup, GeoSynchrony requires you to create logging volumes (sometimes referred to as dirty region logging volumes or DRLs) for VPLEX Metro and VPLEX Geo configurations to keep track of any blocks changed during a loss of connectivity between clusters. After an inter-cluster link is restored or a peer cluster recovers, VPLEX uses the resulting bitmap on the logging volume to synchronize distributed devices by sending only the contents of the changed blocks over the inter-cluster link. I/O to the distributed volume is allowed in both clusters while the resynchronization works in the background.
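The logging-volume mechanism amounts to a bitmap of dirty regions. The sketch below models it at block granularity, which is a simplifying assumption made for clarity; the leg objects and their methods are invented for the example.

class LoggingVolume:
    def __init__(self, num_blocks):
        self.bits = bytearray((num_blocks + 7) // 8)  # one bit per block

    def record_write(self, block):
        # Called for each write that cannot reach the remote cluster.
        self.bits[block // 8] |= 1 << (block % 8)

    def changed_blocks(self):
        for block in range(len(self.bits) * 8):
            if self.bits[block // 8] & (1 << (block % 8)):
                yield block

def resynchronize(log, surviving_leg, recovered_leg):
    # After the link is restored, send only the blocks marked as changed.
    for block in log.changed_blocks():
        recovered_leg.write_block(block, surviving_leg.read_block(block))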

Back-end load balancing

VPLEX uses all paths to a LUN in a round-robin fashion, thus balancing the load across all paths.
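Round-robin selection itself is straightforward; the sketch below illustrates it over a static list of healthy paths for one LUN. The port names are illustrative placeholders, and a real scheduler must also remove failed paths from the rotation.

from itertools import cycle

class RoundRobinLun:
    def __init__(self, paths):
        self._paths = cycle(paths)  # rotate through every available path

    def next_path(self):
        return next(self._paths)

lun = RoundRobinLun(['A0-FC00', 'A0-FC01', 'B0-FC00', 'B0-FC01'])
for _ in range(4):
    print(lun.next_path())  # successive I/Os use successive paths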


Slower storage hardware can be dedicated to less frequently accessed data, and optimized hardware can be dedicated to applications that require the highest storage response.


Data mobility

Data mobility allows you to non-disruptively move data between extents or devices.

Extent migrations move data between extents in the same cluster. Use extent migrations to:

◆ Move extents from a “hot” storage volume shared by other busy extents

◆ Defragment a storage volume to create more contiguous free space

◆ Migrate data between dissimilar arrays

Device migrations move data between devices (RAID 0, RAID 1, or RAID-C devices built on extents or on other devices) on the same cluster, or between devices on different clusters. Use device migrations to:

◆ Migrate data between dissimilar arrays

◆ Relocate a hot volume to a faster array

◆ Relocate devices to new arrays in a different cluster

Figure 15 shows an example of data mobility.

Figure 15 Data mobility


Mirroring

Mirroring is the writing of data to two or more disks simultaneously. If one of the disk drives fails, the system automatically uses one of the other disks without losing data or service. RAID 1 provides mirroring. VPLEX manages mirroring in the following ways:

◆ Local mirroring on VPLEX Local for RAID 1 virtual volumes within a data center

◆ Distributed RAID 1 volumes on VPLEX Metro and VPLEX Geo configurations across data centers

Mirroring is supported between heterogeneous storage platforms.

Local mirroring

VPLEX RAID 1 devices provide a local full-copy RAID 1 mirror of a device, independent of the host, operating system, application, and database. This mirroring capability allows VPLEX to transparently protect applications from back-end storage array failures and maintenance operations.

RAID 1 data is mirrored using at least two extents to duplicate the data. Read performance is improved because reads can be serviced by either extent. Writing to RAID 1 devices requires one write to each extent in the mirror. Use RAID 1 for applications that require high fault tolerance.

Remote mirroring

VPLEX Metro and VPLEX Geo support distributed mirroring, which protects the data of a virtual volume by mirroring it between the two VPLEX clusters. There are two types of caching used for consistency groups:

◆ Write-through caching

◆ Write-back caching

Distributed volumes with write-through caching

Write-through caching performs a write to back-end storage in both clusters before acknowledging the write to the host. Write-through caching maintains a real-time synchronized mirror of a virtual volume between the two clusters of the VPLEX system, providing an RPO of zero data loss and concurrent access to the volume through either cluster. This form of caching is performed in VPLEX Local and VPLEX Metro configurations. Write-through caching is known as synchronous cache mode in the VPLEX user interface.

Distributed volumes with write-back caching

In write-back caching, the director processing the write stores the data in its cache and also protects it on another director in the local cluster before acknowledging the write to the host. At a later time, the data is written to back-end storage. Because the write to back-end storage is not performed immediately, the data in cache is known as dirty data. Write-back caching provides an RPO that can be as short as a few seconds. This type of caching is performed in VPLEX Geo configurations, where the latency is greater than 5ms. Write-back caching is known as asynchronous cache mode in the VPLEX user interface.
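The difference between the two modes is in when the host acknowledgement happens relative to the back-end write. The schematic below contrasts the two flows; the cluster and director objects and their method names are invented for illustration.

def write_through(data, local_cluster, remote_cluster, host):
    # Synchronous cache mode: commit to back-end storage in BOTH clusters,
    # then acknowledge the host (RPO of zero).
    local_cluster.backend_write(data)
    remote_cluster.backend_write(data)
    host.ack()

def write_back(data, director, peer_director, host):
    # Asynchronous cache mode: cache locally, protect on a peer director,
    # acknowledge the host, and destage to back-end storage later.
    director.cache(data)
    peer_director.cache(data)  # protection copy; the data is now "dirty"
    host.ack()
    director.destage_later(data)  # grouped into deltas and written later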


Consistency groups

Consistency groups aggregate volumes together to ensure the common application of a set of properties to the entire group. Create consistency groups for sets of volumes that require the same I/O behavior in the event of a link failure. In the event of a director, cluster, or inter-cluster link failure, consistency groups prevent possible data corruption.

There are two types of consistency groups:

Synchronous consistency groups — provide a convenient way to apply rule sets and other properties to a group of volumes in a VPLEX Local or VPLEX Metro configuration, simplifying system configuration and administration on large systems. Volumes in a synchronous group have global or local visibility. A synchronous consistency group contains either local or distributed volumes; it cannot contain a mixture of the two. Synchronous consistency groups use write-through caching (known as synchronous cache mode in the VPLEX user interface) and are supported on clusters separated by 5ms of latency or less. Synchronous means that VPLEX sends writes to the back-end storage volumes, and acknowledges the writes to the application as soon as the back-end storage volumes in both clusters acknowledge them. You can configure up to 1024 synchronous consistency groups, and each synchronous consistency group can contain up to 1000 virtual volumes. The optional VPLEX Witness failure recovery semantics apply only to volumes in synchronous consistency groups, and only if a rule identifying a specific preference is configured.

Asynchronous consistency groups — used for distributed volumes in a VPLEX Geo, separated by up to 50ms of latency. All volumes in an asynchronous consistency group share the same detach rule and cache mode, and behave the same way in the event of an inter-cluster link failure. Detach rules define how each cluster should proceed in the event of an inter-cluster link failure or cluster failure. Only distributed volumes can be included in an asynchronous consistency group. Asynchronous consistency groups use write-back caching (known as asynchronous cache mode in the VPLEX user interface). This means the director caches each write and then protects the write on a second director in the same cluster. Writes are acknowledged to the host once the write to both directors is complete. These writes are then grouped into deltas. Deltas are exchanged and combined between both clusters before the data is committed to back-end storage. Writes to the virtual volumes in an asynchronous consistency group are ordered such that all the writes in a given delta are written before writes from the next delta. However, the writes within an individual delta are not ordered. Therefore, if access to the back-end array is lost while the system is writing a delta, the data on disk is no longer consistent and requires automatic recovery when access is restored. Asynchronous cache mode can give better performance, but there is a higher risk that data will be lost if:

◆ Multiple directors fail at the same time

◆ There is an inter-cluster link partition, both clusters are actively writing, and instead of waiting for the link to be restored, the user chooses to accept a data rollback in order to reduce the RTO

◆ The cluster that is actively writing fails

VPLEX supports a maximum of 16 asynchronous consistency groups. Each asynchronous consistency group can contain up to 1000 virtual volumes.


Detach rule

Detach rules are predefined rules that determine which cluster continues I/O during an inter-cluster link failure or cluster failure. In these situations, until communication is restored, most I/O workloads require specific sets of virtual volumes to resume on one cluster and remain suspended on the other cluster, unless the no-active-winner rule is used, in which case both clusters suspend.

In the event of connectivity loss with the remote cluster, the detach rule defined for each consistency group identifies a preferred cluster (if there is one) that can resume I/O to the volumes in the consistency group. In a VPLEX Metro configuration, I/O proceeds on the preferred cluster and is suspended on the non-preferred cluster. In a VPLEX Geo configuration, I/O proceeds on the active cluster only when the remote cluster has no dirty data in cache.

Refer to the "Consistency Groups" chapter in the VPLEX CLI Guide for the available detach rules for synchronous and asynchronous consistency groups.
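The decision a detach rule encodes can be summarized in a few lines. The function below is an illustrative model of the VPLEX Metro behavior described above, not actual VPLEX code; the rule names and semantics are taken from this section.

def io_state_after_partition(cluster, preferred_cluster):
    # Returns 'resume' or 'suspend' for a cluster in a synchronous
    # consistency group after an inter-cluster link failure.
    if preferred_cluster is None:  # no-active-winner: both clusters suspend
        return 'suspend'
    return 'resume' if cluster == preferred_cluster else 'suspend'

assert io_state_after_partition('cluster-1', 'cluster-1') == 'resume'
assert io_state_after_partition('cluster-2', 'cluster-1') == 'suspend'
assert io_state_after_partition('cluster-1', None) == 'suspend'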

Active and passive clusters

Asynchronous consistency groups have the active-cluster-wins rule. When this rule is used, I/O continues at the cluster where the application was actively writing last (provided there was only one such cluster). The active cluster is the preferred cluster.

An active cluster has data in cache that has yet to be written to the back-end storage. This data is referred to as dirty data. A passive cluster is a cluster that has no dirty data in its cache. If both clusters were active during the failure, I/O must suspend at both clusters. I/O suspends because the cache image is inconsistent on both clusters and must be rolled back to a point where both clusters had a consistent image in order to continue I/O. Application restart is required after the rollback. If both clusters were passive and have no dirty data at the time of the failure, the cluster that was active last (before it became passive) proceeds with I/O after the failure. Regardless of the detach rules and preference for asynchronous consistency groups, as long as the remote cluster has dirty data, the local cluster suspends I/O if it observes loss of connectivity with the remote cluster. This gives the application administrator time to stop or restart the application before exposing it to the rolled-back, time-consistent data image. It might also be possible to recover the inter-cluster link and recover without having to perform a rollback.

Note: VPLEX Witness has no bearing on the failover semantics of asynchronous consistency groups. VPLEX Witness still provides its guidance (which can be used for diagnostic purposes later on), but it does not affect actual failover.

The GUI shows which clusters are active in an asynchronous consistency group, that is, which clusters have data in cache that has not been written to the back-end array. It also provides information on the state of the cluster and whether you need to run a recovery command to enable access to the consistency group. You can also query these states through the CLI.


Cache vaulting

Cache vaulting is necessary in VPLEX Geo configurations to safeguard dirty cache data under emergency conditions. Dirty cache pages are pages in a director's memory that have not been written to back-end storage but were acknowledged to the host. Dirty cache pages also include the copies protected on a second director in the cluster. These pages must be preserved through power outages to avoid loss of data already acknowledged to the host.

Although there is no dirty cache data in VPLEX Local or VPLEX Metro configurations, vaulting is still necessary to quiesce all I/O when data is at risk due to power failure. This is done to minimize the risk of metadata corruption.

When the system recovers, VPLEX can unvault the vaulted data, avoiding any data loss.

For information on distributed cache protection, refer to Chapter 4, "System Integrity and Resiliency." For information on the conditions that cause a vault, see Chapter 2, "VPLEX VS2 Hardware Overview."

Vaulting can be used in two scenarios:

◆ Data at risk due to power failure: VPLEX monitors all components that provide power to the VPLEX cluster. If it detects an AC power loss that would put data at risk, it takes a conservative approach and initiates a cluster-wide vault if the power loss exceeds 30 seconds. See Chapter 2 for details of the conditions that cause vaulting.

Note: Power failure of the UPS (in dual- and quad-engine configurations) does not currently trigger any vaulting actions.

◆ Manual emergency cluster shutdown: When unforeseen circumstances require an unplanned and immediate shutdown, it is known as an emergency cluster shutdown. You can use a CLI command to manually start vaulting if an emergency shutdown is required.

WARNING

When performing maintenance activities on a VPLEX Geo system, service personnel must not remove power from one or more engines in a way that would result in the power loss of both directors, unless both directors in those engines have been shut down and are no longer monitoring power. Failure to do so will lead to data unavailability in the affected cluster. To avoid unintended vaults, always follow official maintenance procedures.

For information on the redundant and backup power supplies in VPLEX, refer to Chapter 2, "VPLEX VS2 Hardware Overview."

For information on handling cache vaulting, see the VPLEX CLI Administration Guide.


Chapter 4 System Integrity and Resiliency

VPLEX provides numerous high availability and redundancy features for servicing I/O. The following features allow robust system integrity and resiliency:

◆ Overview
◆ Cluster
◆ Path redundancy
◆ High Availability through VPLEX Witness
◆ Leveraging ALUA
◆ Recovery
◆ Performance monitoring features


Overview

VPLEX clusters are capable of surviving any single hardware failure in any subsystem within the overall storage cluster, including the host connectivity and memory subsystems. A single failure in any subsystem does not affect the availability or integrity of the data. Multiple failures in a single subsystem, and certain combinations of single failures in multiple subsystems, might affect the availability or integrity of data.

VPLEX features fault tolerance for devices and hardware components, continuing operation as long as one device or component survives. This highly available and robust architecture can sustain multiple device and component failures while servicing storage I/O.

VPLEX configurations continue to service I/O through the following classes of faults and service events:

◆ Unplanned and planned storage outages

◆ SAN outages

◆ VPLEX component failures

◆ VPLEX cluster failures

◆ Data center outages

This availability requires that you create redundant host connections and supply hosts with multi-pathing drivers. In the event of a front-end port failure or a director failure, hosts without redundant physical connectivity to a VPLEX cluster and without multi-pathing software installed could be susceptible to data unavailability.


Cluster

A cluster is a collection of one, two, or four engines in a physical cabinet. A cluster serves I/O for one site.

All hardware resources (CPU cycles, I/O ports, and cache memory) are pooled.

Configurations of two clusters in a VPLEX Metro or VPLEX Geo topology provide higher resilience against a site-wide outage.


Path redundancy

The following sections discuss the resilience produced by multiple paths. They include examples of the following:

◆ Path redundancy through different ports

◆ Path redundancy through different directors

◆ Path redundancy through different engines

◆ Path redundancy through site distribution

Different ports

The front-end ports on all directors can provide access to any virtual volume in the cluster. Including multiple front-end ports in each storage view protects against port failures. When a director port goes down for any reason, the host multi-pathing software seamlessly fails over to another path through a different port, as shown in Figure 16.

Figure 16 Port redundancy

Multi-pathing software plus redundant volume presentation yields continuous data availability in the presence of port failures.

Similar redundancies on back-end ports, local COM ports, and WAN COM ports provide additional resilience in the event of failures on these ports.


Different directors

If a director goes down, the other director can completely take over the I/O processing from the host, as shown in Figure 17.

Figure 17 Director redundancy

Multi-pathing software plus volume presentation on different directors yields continuous data availability in the presence of director failures.

Each director can service I/O for any other director in the cluster due to the redundant nature of the global directory and cache coherency.

Best practices

For maximum availability, present virtual volumes through each director so that all directors but one can fail without causing data loss or unavailability. Connect all directors to all storage. To have continuous I/O during a non-disruptive upgrade of VPLEX, it is critical to have a path through an A director and a path through a B director.

If a director loses access to a specific storage volume but other directors at the same cluster have access to that volume, VPLEX can forward back-end I/O to another director that still has access. This condition is known as asymmetric back-end visibility. When this happens, VPLEX is considered to be in a degraded state; it cannot provide high availability, and operations such as NDU are prevented. Asymmetric back-end visibility can also have a performance impact.

When a pair of redundant Fibre Channel fabrics is used with VPLEX, the VPLEX directors should be connected to each fabric, both for the front-end (host-side) connectivity and for the back-end (storage-array-side) connectivity. This deployment, along with the isolation of the fabrics, allows the VPLEX system to ride through failures that take out an entire fabric, and allows the system to provide continuous access to data through this type of fault.


Hosts must be connected to both fabrics and use multi-pathing software to ensure continuous data access in the presence of such failures.

It is recommended that I/O modules be connected to redundant fabrics.

Figure 18 Recommended fabric assignments for front-end and back-end ports

Different engines

In dual- or quad-engine environments on VPLEX Metro, if one engine goes down, another engine completes the host I/O processing, as shown in Figure 19.

Figure 19 Engine redundancy

In VPLEX Geo, directors in the same engine serve as protection targets for each other. If a single director in an engine goes down, the remaining director uses another director in the cluster as its protection pair. Simultaneously losing an engine in an active cluster, though very rare, could result in DLFM. However, the loss of two directors in different engines can be handled, as long as other directors can serve as protection targets for the failed directors. For more information about DLFM, see Chapter 3, "VPLEX Software."

Multi-pathing software plus volume presentation on different engines yields continuous data availability in the presence of engine failures on VPLEX Metro.


Site distribution

VPLEX Metro ensures that if a data center goes down, or even if the link to that data center goes down, the other site can continue processing host I/O, as shown in Figure 20. On a site failure of Data Center B, I/O continues unhindered in Data Center A.

Figure 20 Site redundancy

Optionally, you can install the VPLEX Witness on a server in a separate failure domain to provide further fault tolerance in VPLEX Metro configurations. See "High Availability through VPLEX Witness" for more information.


High Availability through VPLEX Witness

VPLEX GeoSynchrony systems running in a VPLEX Metro or VPLEX Geo configuration can now rely on an optional component called VPLEX Witness. VPLEX Witness is designed to be deployed in customer environments where the regular detach rule sets alone provide insufficient recovery-time-objective (RTO) fail-over in the presence of VPLEX cluster failures or inter-cluster link partitions.

◆ In a VPLEX Metro configuration, VPLEX Witness provides seamless zero-RTO fail-over for synchronous consistency groups (from a storage perspective) in the presence of these failures.

◆ In a VPLEX Geo configuration, VPLEX Witness can be useful for diagnostic purposes.

CAUTION! VPLEX Witness does not automate any fail-over decisions for asynchronous consistency groups.

VPLEX Witness connects to both VPLEX clusters over the management IP network. By reconciling its own observations with the information reported periodically by the clusters, the VPLEX Witness enables the clusters to distinguish between inter-cluster network partition failures and cluster failures and to automatically resume I/O in these situations.

VPLEX Witness installation considerations

The responsibility of VPLEX Witness is to provide improved availability in your storage network. For this reason, you should carefully consider how you install VPLEX Witness with your VPLEX Metro clusters.

WARNING

VPLEX Witness must be deployed in a failure domain independent from either of the VPLEX clusters. If this requirement cannot be met, VPLEX Witness should not be installed.

An external VPLEX Witness is deployed as a virtual machine running on a customer-supplied VMware ESX server deployed in a failure domain separate from either of the VPLEX clusters (to eliminate the possibility of a single fault affecting both a cluster and the VPLEX Witness), and protected by a firewall. The VPLEX Witness software is also loaded as a client on each of the clusters of a VPLEX Metro or VPLEX Geo configuration. Each of the clusters in this configuration should also reside in separate failure domains from each other. A failure domain is a set of entities affected by the same set of faults. The scope of the failure domain depends on the set of fault scenarios that must be tolerated in a given environment. For example, if the two clusters of a VPLEX Metro configuration are deployed on two different floors of the same data center, deploy the VPLEX Witness on a separate floor. If the two clusters are deployed in two different data centers, deploy the VPLEX Witness in a third data center.

VPLEX Metro failure without VPLEX Witness

Without VPLEX Witness, all VPLEX Metro synchronous consistency groups and all distributed volumes rely on configured rule sets to identify the preferred cluster in the presence of a cluster partition or cluster failure. However, if the preferred cluster happens to fail (as the result of a disaster event or similar condition), VPLEX Metro is unable to automatically fail over and allow the non-preferred cluster to continue I/O to the affected distributed volumes. VPLEX Witness has been designed specifically to solve this problem for synchronous consistency groups configured with a specific preference rule set¹.

In all of the scenarios shown in Figure 21, the synchronous consistency group is configured with a rule set that designates Cluster 1 as preferred and allows it to continue if there is a failure in one of the clusters or in the inter-cluster link.

Figure 21 VPLEX failure recovery scenarios in VPLEX Metro configurations

In Scenario 1, the inter-cluster link goes down. In this case, Cluster 1 continues I/O and Cluster 2 suspends I/O. This enables Cluster 1 to continue service (to avoid data unavailability) and Cluster 2 to suspend to avoid split brain. Once the communication link is restored, both clusters can continue I/O while Cluster 2 updates its data to match that of Cluster 1 in the background.

In Scenario 2, Cluster 2 fails. Because the rule set was configured to allow Cluster 1 to continue I/O anyway, the rule set is effective in this case.

In Scenario 3, Cluster 1, the preferred cluster, fails. The rule set is configured to suspend I/O at Cluster 2 in the event of a cluster failure. Without VPLEX Witness, VPLEX has no automatic way of recovering from this failure, and it suspends I/O at the only operating cluster. In some cases, the failed cluster recovers on its own, in which case recovery is automatic. Otherwise, manual intervention is required to re-enable I/O on Cluster 2. This is the scenario that VPLEX Witness solves for VPLEX Metro configurations.

Note: VPLEX Witness has no impact on distributed volumes in synchronous consistency groups configured with the no-automatic-winner rule. In that case, manual intervention is required in any of the failure scenarios described above.
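The three scenarios reduce to a small decision table. The following Python sketch is purely illustrative (the Failure enum and function are invented for this example and are not part of any VPLEX interface); it models the behavior of a synchronous consistency group whose rule set prefers Cluster 1, without VPLEX Witness:

```python
from enum import Enum

class Failure(Enum):
    INTER_CLUSTER_LINK = 1   # Scenario 1: clusters partitioned
    NON_PREFERRED_DOWN = 2   # Scenario 2: non-preferred cluster fails
    PREFERRED_DOWN = 3       # Scenario 3: preferred cluster fails

def outcome_without_witness(failure: Failure) -> dict:
    """Return the I/O state per cluster for a group whose rule set
    prefers Cluster 1, as in Scenarios 1-3 above."""
    if failure is Failure.INTER_CLUSTER_LINK:
        # Preferred cluster continues; the other suspends to avoid split brain.
        return {"cluster-1": "continue", "cluster-2": "suspend"}
    if failure is Failure.NON_PREFERRED_DOWN:
        # The rule set already favors the surviving cluster, so it is effective.
        return {"cluster-1": "continue", "cluster-2": "failed"}
    # Preferred cluster failed: without Witness the survivor suspends and
    # waits for manual intervention (the case VPLEX Witness solves).
    return {"cluster-1": "failed", "cluster-2": "suspend"}

for f in Failure:
    print(f.name, outcome_without_witness(f))
```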

Cluster failures in the presence of VPLEX Witness

Note: The discussion in this section assumes that VPLEX Witness is enabled in your configuration. If VPLEX Witness is disabled, it has no impact on the fail-over semantics.

1. If a synchronous consistency group is configured with the no-winner rule, each cluster suspends if it loses contact with its peer cluster, regardless of whether VPLEX Witness is deployed.


Figure 22 shows four failure types that could occur.

Figure 22 Failures in the presence of VPLEX Witness

Note: In this figure, the terms local and remote are from the perspective of Cluster 1.

Local Cluster Isolation occurs when the local cluster loses contact with both the remote cluster and the VPLEX Witness. If the example shown is a VPLEX Metro configuration, Cluster 1 (the local cluster) unilaterally suspends I/O and the VPLEX Witness guides Cluster 2 to continue I/O, unless the synchronous consistency group is configured with the no-automatic-winner rule set. VPLEX Witness has no impact on synchronous consistency groups configured with the no-automatic-winner rule set. If the example shown is a VPLEX Geo configuration with asynchronous consistency groups, the remote cluster disregards the guidance of VPLEX Witness and failover is handled according to the configured rule set.

In the Remote Cluster Isolation scenario, the remote cluster (Cluster 2) has lost contact with both the local cluster and the VPLEX Witness, while Cluster 1 still has access to the VPLEX Witness. If the example shown is a VPLEX Metro configuration, Cluster 1 continues I/O as it is still in contact with the VPLEX Witness, and Cluster 2 suspends I/O. VPLEX Witness has no impact on synchronous consistency groups configured with the no-automatic-winner rule set. If the example shown is a VPLEX Geo configuration with asynchronous consistency groups and there is no data loss, VPLEX disregards the guidance of VPLEX Witness and failover is handled according to the configured rule set.

In the case of an Inter-Cluster Partition, where both clusters lose contact with each other but still have access to the VPLEX Witness, the action taken by the clusters depends on the type of VPLEX configuration and the detach rule configured.

◆ In a VPLEX Metro configuration with synchronous consistency groups and any other rule set, I/O continues on the preferred cluster.

◆ In a VPLEX Metro configuration with synchronous consistency groups or a VPLEX Geo configuration with asynchronous consistency groups, if the preferred cluster cannot proceed because it has not fully synchronized, the cluster suspends I/O.


◆ In a VPLEX Geo configuration with asynchronous consistency groups and active writer detach rules configured, I/O continues on the active writer cluster, provided the other cluster has no dirty data and the local storage is not out of date. VPLEX disregards the guidance of VPLEX Witness and follows the configured failover semantics.

VPLEX Witness always preserves detach rule semantics in the case of an inter-cluster network partition.

In the scenario Loss of Contact with the VPLEX Witness, if the clusters are still in contact with each other, there is no change in I/O. The cluster that lost connectivity with VPLEX Witness issues a Call Home.

In the case of an inter-cluster partition, VPLEX Witness preserves the semantics of the detach rule. In the case of a cluster failure, VPLEX Witness on a VPLEX Metro configuration might override the detach rule. Overriding the detach rule results in a zero-RTO policy for VPLEX Metro.

If the VPLEX Witness fails, both clusters Call Home. As long as both clusters remain connected with each other, there is no impact on I/O. However, if either of the clusters fails, or if the inter-cluster link were to fail while the VPLEX Witness is down, VPLEX experiences data unavailability in all surviving clusters.

Once the VPLEX Witness or its connectivity recovers, you can re-enable it.

When the VPLEX Witness observes a failure and provides its guidance, it sticks to this guidance until both clusters report complete recovery. This is crucial in order to avoid split-brain and data corruption. As a result, you might see a scenario where Cluster 1 becomes isolated, the VPLEX Witness tells Cluster 2 to continue I/O, and then Cluster 2 becomes isolated as well. Because Cluster 2 has previously received guidance to proceed from the VPLEX Witness, it proceeds even while it is isolated. In the meantime, if Cluster 1 were to reconnect with the VPLEX Witness server, the VPLEX Witness server tells it to suspend. In this case, because of event timing, Cluster 1 is connected to VPLEX Witness but is suspended, while Cluster 2 is isolated but proceeding.
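The following toy model (invented class and method names, not a VPLEX API) illustrates why this “sticky” guidance produces the timing-dependent outcome described above:

```python
class WitnessSketch:
    """Toy model of sticky guidance: the first cluster guided to continue
    keeps that guidance until both clusters report complete recovery."""

    def __init__(self):
        self.winner = None

    def report_isolation(self, isolated, reachable):
        if self.winner is None:
            # First observed failure: guide the reachable cluster to continue.
            self.winner = reachable
        # Guidance is sticky while any failure is outstanding.
        return {c: ("continue" if c == self.winner else "suspend")
                for c in (isolated, reachable)}

    def report_complete_recovery(self):
        self.winner = None   # guidance may be recomputed on the next failure


w = WitnessSketch()
# Cluster 1 becomes isolated: Cluster 2 is told to continue.
print(w.report_isolation("cluster-1", "cluster-2"))
# Later Cluster 2 is isolated too; it proceeds on its earlier guidance.
# If Cluster 1 then reconnects, it is still told to suspend:
print(w.report_isolation("cluster-2", "cluster-1"))
```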

VPLEX Witness in VPLEX Geo configurations

In the case of a VPLEX Geo cluster failure, in this release of GeoSynchrony, the VPLEX Witness provides its guidance, but the clusters ignore this guidance for all asynchronous consistency groups. Instead, the clusters continue leveraging the preference rules set up for each asynchronous consistency group. For asynchronous consistency groups, the surviving cluster might require a data rollback before I/O can proceed. In this situation, the consistency groups continue suspending I/O regardless of the guidance of the Witness and regardless of the pre-configured rule set. Manual intervention is required to force the rollback in order to change (roll back) the current data image to the last crash-consistent image preserved on disk. Since this is done without the application's knowledge, you will likely need to manually restart the application to ensure that it does not continue using the stale data that has been discarded by VPLEX (but still remains in the application's cache).

Similar semantics apply in the presence of a cluster partition. For each consistency group, the VPLEX Witness guides a preferred cluster (because it is the active writer) to continue I/O. However, the cluster ignores this guidance for all asynchronous consistency groups. Instead, the clusters continue leveraging the preference rules set up for each asynchronous consistency group. If data rollback is required, the cluster continues suspending, waiting for manual intervention.


Value of VPLEX Witness in VPLEX Geo

The value of the VPLEX Witness is different with VPLEX Geo than it is with VPLEX Metro. With VPLEX Metro, VPLEX Witness provides a zero-RTO and zero-RPO storage solution in the presence of cluster or inter-cluster connectivity failure. With VPLEX Geo, VPLEX Witness does not automate any failure scenarios. The data presented by the VPLEX Witness CLI context may be helpful to facilitate a manual fail-over. See the VPLEX CLI Reference for details on the commands used to determine the state of the VPLEX Witness and the clusters attached to the Witness.

Higher availability — VPLEX Metro and VPLEX Witness

The use cases detailed in “VPLEX Metro HA in a campus” on page 100 describe the combination of VMware and cross-cluster connection with configurations that contain a VPLEX Witness. Refer to Chapter 5, “VPLEX Use Cases,” for more information on the use of VPLEX Witness with VPLEX Metro and VPLEX Geo configurations.


Leveraging ALUA

Asymmetric Logical Unit Access (ALUA) is a feature provided by many new active/passive arrays.

In active/passive arrays, logical units (LUNs) are normally exposed through several array ports on different paths, and the characteristics of those paths might differ. ALUA calls these path characteristics access states.

The most important access states are ALUA active/optimized and ALUA active/non-optimized.

◆ Active/optimized paths usually provide higher bandwidth than active/non-optimized paths.

Active/optimized paths are paths that go to the service processor of the array that owns the LUN.

◆ I/O that goes to the active/non-optimized ports must be transferred internally to the service processor that owns the LUN.

This transfer increases latency and has an impact on the array.

VPLEX detects the active/optimized paths and the active/non-optimized paths and performs round-robin load balancing across all of the active/optimized paths. Because VPLEX is aware of the active/optimized paths, it can provide better performance to the LUN.
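As a rough illustration of this behavior (hypothetical path names and a simplified selector; not VPLEX internals), round-robin is performed only over the paths currently reported as active/optimized:

```python
import itertools

class PathSelector:
    """Round-robin over active/optimized paths only, as described above."""

    def __init__(self, paths):
        # paths: list of (name, access_state) tuples
        self.paths = paths
        self._cycle = None
        self.refresh()

    def refresh(self):
        """Re-discover access states (for example, after a Unit Attention
        'Asymmetric Access State Changed' is returned by the array)."""
        optimized = [n for n, state in self.paths
                     if state == "active/optimized"]
        self._cycle = itertools.cycle(optimized)

    def next_path(self):
        return next(self._cycle)


sel = PathSelector([("spa-0", "active/optimized"),
                    ("spa-1", "active/optimized"),
                    ("spb-0", "active/non-optimized")])
print([sel.next_path() for _ in range(4)])  # spa-0, spa-1, spa-0, spa-1
```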

With implicit ALUA, the array can change its access states without any command from the host (the VPLEX back end). If the controller that owns the LUN being accessed fails, the array changes the status of the active/non-optimized ports to active/optimized and fails over the LUN from the failed controller to the other controller.

When an array initiates the access state change, it returns a Unit Attention “Asymmetric Access State Changed” to the host on the next I/O.

VPLEX then re-discovers all the paths to get the updated access states.

Figure 23 shows an example of implicit ALUA.

Figure 23 Implicit ALUA

Explicit ALUA is the same as implicit ALUA, except that the array changes its access states in response to commands (Set Target Port Groups) from the host (the VPLEX back end).


If the active/optimized path fails, VPLEX causes the active/non-optimized paths to become active/optimized paths and, as a result, increases performance. I/O can go between the controllers to access the LUN through a very fast bus.

There is no need to fail over the LUN in this case.

Figure 24 shows an example of explicit ALUA.

Figure 24 Explicit ALUA

Implicit/explicit ALUA means that both are true: either the host or the array can initiate the access state change.

An array can support implicit only, explicit only, or both. VPLEX supports all of these.


Recovery

VPLEX stores configuration and metadata on system volumes created from storage devices. The two types of system volumes are metadata volumes and logging volumes.

Metadata volume failure

VPLEX maintains its configuration state, referred to as metadata, on storage volumes provided by storage arrays. Each VPLEX cluster maintains its own metadata, which describes the local configuration information for the cluster as well as any distributed configuration information shared between clusters.

VPLEX uses this persistent metadata on a full system boot and loads the configuration information onto each director. When you make changes to the system configuration, VPLEX writes these changes to the metadata volume. Should VPLEX lose access to the metadata volume, the VPLEX directors continue to provide their virtualization services using the in-memory copy of the configuration information. Should the storage supporting the metadata device remain unavailable, configure a new metadata device. Once you assign a new device, VPLEX records the in-memory copy of the metadata maintained by the cluster on the new metadata device.

VPLEX suspends the ability to perform configuration changes when access to the persistent metadata device is not available.

During normal operations and while configuration changes are taking place, metadata volumes experience light I/O activity. During boot operations and non-disruptive upgrade activities, metadata volumes experience high read I/O.

Best practices for metadata volumes

Follow these best practices to provide the highest resilience in your federated storage area network:

◆ Allocate storage volumes of 78 GB for the metadata volume

◆ Configure the metadata volume for each cluster with multiple back-end storage volumes provided by different storage arrays of the same type

◆ Use the data protection capabilities provided by these storage arrays, such as RAID 1 and RAID 5, to ensure the integrity of the system's metadata

◆ Create backup copies of the metadata whenever configuration changes are made to the system

◆ Perform regular backups of the metadata volumes on storage arrays that are separate from the arrays used for the metadata volume

Dirty region logging volumes

In the event of a link outage between clusters, if you have chosen to continue I/O to distributed volumes at one of the clusters, the updated pages are synchronized with the other cluster after the link is restored. To minimize the traffic across the link after an outage, VPLEX maintains a record of which pages are written at each cluster during the outage so that only the changed pages need be exchanged. This record is referred to as a dirty region log and is stored on the logging volume. Should this volume become inaccessible, the directors mark the entire leg as out-of-date, and a full synchronization of that leg is required once it is reattached to the mirror.
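Conceptually, the dirty region log is a coarse bitmap over the distributed volume. The sketch below is illustrative only (the region size and structures are invented; this is not the VPLEX on-disk format):

```python
class DirtyRegionLog:
    """Track which regions of a fractured mirror leg were written during an
    outage, so that only those regions are resynchronized on reattach."""

    def __init__(self, region_bytes=4096):
        self.region_bytes = region_bytes
        self.dirty = set()   # indices of regions written while fractured

    def record_write(self, offset, length):
        first = offset // self.region_bytes
        last = (offset + length - 1) // self.region_bytes
        self.dirty.update(range(first, last + 1))

    def regions_to_resync(self):
        return sorted(self.dirty)


log = DirtyRegionLog()
log.record_write(offset=8192, length=10000)   # touches regions 2 through 4
print(log.regions_to_resync())                # -> [2, 3, 4]
```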

During normal operations, when both legs of the RAID 1 volume are active and accessible, there is no I/O to the logging volume. When a loss of connectivity occurs and the distributed mirror is fractured, there is high write I/O activity on the cluster that continues operation. When the detached leg of the mirror is reattached, VPLEX performs an incremental synchronization in which it reads this logging volume to determine which writes are necessary to synchronize the reattached volume. During that synchronization, the logging volume experiences high read I/O activity.

Best practices for logging volumes

Use these best practices when configuring logging volumes in a VPLEX Metro or VPLEX Geo configuration:

◆ Create one logging volume for each cluster.

◆ Use the data protection capabilities provided by the storage array, such as RAID 1 and RAID 5, to ensure the integrity of the system's logging volume.

◆ Configure at least 1 GB of logging volume space for every 16 TB of distributed device space, as illustrated in the sketch below. Slightly more space is required if the 16 TB of distributed storage is composed of multiple distributed devices, because a small amount of non-logging information is also stored for each distributed device.
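A worked example of the sizing rule (simple arithmetic only; the small per-device overhead noted above is not modeled):

```python
def min_logging_volume_gb(distributed_tb: float) -> float:
    """At least 1 GB of logging space per 16 TB of distributed device space."""
    return distributed_tb / 16.0

for tb in (16, 48, 100):
    print(f"{tb} TB of distributed devices -> "
          f"at least {min_logging_volume_gb(tb):.2f} GB of logging volume")
```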

VPLEX distributed cache protection and redundancy

VPLEX utilizes the individual directors' memory systems to ensure durability of user data and critical system configuration data. User data is made durable in one of two ways, depending on the cache mode used for the data (both modes are sketched after this list).

◆ Write-through cache mode leverages the durability properties of a back-end array by writing user data to the array and obtaining an acknowledgement for the written data before it acknowledges the write back to the host.

◆ Write-back cache mode ensures data durability by storing user data in the cache memory of the director that received the I/O, then placing a protection copy of this data on another director in the local cluster before acknowledging the write to the host. This ensures the data is protected in two independent memories. The data is later destaged to back-end storage arrays that provide the physical storage media.
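The essential difference is when the host write is acknowledged. The following schematic sketch uses invented stubs (it is not VPLEX firmware logic):

```python
class ArrayStub:
    """Stand-in for a back-end storage array."""
    def write(self, data):
        pass  # assume the array acknowledges the write here


def write_through(data, array):
    # Ack to the host only after the back-end array acknowledges the write.
    array.write(data)
    return "ack-to-host"


def write_back(data, local_cache, peer_cache, destage_queue):
    # Ack once the data is held in two independent director memories;
    # destaging to the back-end array happens later, asynchronously.
    local_cache.append(data)
    peer_cache.append(data)      # protection copy on another director
    destage_queue.append(data)   # to be written to the array later
    return "ack-to-host"


print(write_through(b"block", ArrayStub()))
local, peer, queue = [], [], []
print(write_back(b"block", local, peer, queue))
```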

Global distributed cache protection from power failure

In the event of data at risk due to a power failure lasting longer than 30 seconds in a VPLEX Geo configuration, each VPLEX director copies its dirty cache data to its local solid state storage devices (SSDs). This process, known as cache vaulting, protects user data in cache if that data is at risk due to power loss. After each director vaults its dirty cache pages, VPLEX shuts down the director's firmware.

When you resume operation of the cluster, if any condition is not safe, the system does not resume normal status and calls home for diagnosis and repair. This allows EMC Customer Support to communicate with the VPLEX system and restore normal system operations.

Under normal conditions, the SPS batteries can support two consecutive vaults; this ensures that after the first power failure the system can resume I/O, and that it can vault a second time if there is a second power failure.

See the EMC VPLEX Administration Guide for more information about handling cache vaulting.


Performance monitoring features

Performance monitoring collects and displays statistics to determine how a port or volume is being used, how much I/O is being processed, CPU usage, and so on. Performance monitoring is supported in both the VPLEX CLI and GUI, and falls into three general types:

◆ Current load monitoring allows administrators to watch CPU load during upgrades, I/O load across the inter-cluster WAN link, and front-end versus back-end load during data mining or backup.

Both the CLI and GUI support current load monitoring.

◆ Long term load monitoring collects data for capacity planning and load balancing.

Both the CLI and GUI support long term load monitoring.

◆ Troubleshooting monitoring helps identify bottlenecks and resource hogs.

The CLI supports monitors created to help pinpoint problems.

Performance monitoring using the VPLEX GUI


The performance monitoring dashboard provides a customized view into the performance of your VPLEX system. You decide which aspects of the system's performance to view and compare. Performance information is displayed as a set of charts. For additional information about the statistics available through the Performance Dashboard, see the EMC Unisphere for VPLEX online help available in the VPLEX GUI.

Performance monitoring using the VPLEX CLI

You can use the CLI to create a toolbox of custom monitors to operate under varying conditions, including debugging, capacity planning, and workload characterization.

VPLEX collects and displays performance statistics using two user-defined objects:

monitors - Gather the specified statistic from the specified target at the specified interval.

monitor sinks - Direct the output to the desired destination. Monitor sinks include the console, a file, or a combination of the two.

Note: SNMP statistics do not require a monitor or monitor sink. Use the snmp-agent configure command to configure and start the SNMP agent. Refer to the EMC VPLEX Administration Guide.
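As a rough Python analogue of the monitor/sink split (illustrative objects only; actual monitors and sinks are created with the VPLEX CLI commands documented in the CLI guide, and the target and statistic names below are invented):

```python
class Monitor:
    """Gather a statistic from a target at a fixed interval (sketch)."""

    def __init__(self, name, target, stat_fn, period_s, sinks):
        self.name, self.target = name, target
        self.stat_fn, self.period_s = stat_fn, period_s
        self.sinks = sinks          # where collected samples are sent

    def poll_once(self):
        value = self.stat_fn(self.target)
        for sink in self.sinks:
            sink(f"{self.name} {self.target} {value}")


def console_sink(line):                       # one sink: the console
    print(line)


def file_sink(line, path="monitor.log"):      # another sink: a file
    with open(path, "a") as f:
        f.write(line + "\n")


mon = Monitor("fe-ops", "director-1-1-A",
              stat_fn=lambda target: 4200,    # stand-in for a real statistic
              period_s=5, sinks=[console_sink, file_sink])
mon.poll_once()
```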

Counters, readings, and buckets

There are three types of statistics:

◆ counters - Monotonically increasing value (analogous to a car’s odometer)

Counters are used to count bytes, operations, and errors. They are often reported as a rate, such as counts/second or KB/second.

◆ readings - Instantaneous value (analogous to a car’s speedometer)

Readings are used to display CPU utilization or memory utilization. The value can change every sample.

◆ buckets - Histogram of bucketized counts.


Buckets are used to track latencies and to determine median, mode, percentiles, minimums, and maximums.
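For example, a median or percentile can be recovered from bucketized counts without keeping individual samples. The bucket boundaries below are invented for illustration:

```python
def percentile_from_buckets(buckets, pct):
    """buckets: list of (upper_bound_us, count); returns the bucket upper
    bound at or below which `pct` percent of the samples fall."""
    total = sum(count for _, count in buckets)
    threshold = total * pct / 100.0
    running = 0
    for upper, count in buckets:
        running += count
        if running >= threshold:
            return upper
    return buckets[-1][0]

latency = [(100, 50), (250, 120), (500, 20), (1000, 8), (5000, 2)]
print("median ~<=", percentile_from_buckets(latency, 50), "us")
print("p99    ~<=", percentile_from_buckets(latency, 99), "us")
```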


VPLEX security features

The VPLEX management server operating system (OS) is based on a Novell SUSE Linux Enterprise Server 10 distribution. The operating system has been configured to meet EMC security standards by disabling or removing unused services, and by protecting access to network services through a firewall.

Security features include:

◆ Using SSH to access the management server shell

◆ Using HTTPS to access the VPLEX GUI

◆ Using an IPsec VPN in VPLEX Metro and VPLEX Geo configurations

◆ Using an IPsec VPN to connect each cluster of a VPLEX Metro or VPLEX Geo to the VPLEX Witness server

◆ Using SCP to copy files

◆ Using a tunneled VNC connection to access the management server desktop

◆ Separate networks for all VPLEX cluster communication

◆ Defined user accounts and roles

◆ Defined port usage for cluster communication

◆ Network encryption

◆ Certificate Authority (CA) certificate (default expiration 5 years)

◆ Two host certificates (default expiration 2 years)

◆ Third host certificate for optional VPLEX Witness

CAUTION! The inter-cluster link carries unencrypted user data. To ensure privacy of the data, establish an encrypted VPN tunnel between the two sites.


Chapter 5 VPLEX Use Cases

This section provides examples of VPLEX configurations and how they can be used.

◆ Technology refresh ................................................ 80
◆ Data mobility ..................................................... 83
◆ Redundancy with RecoverPoint ...................................... 85
◆ Distributed data collaboration .................................... 97
◆ VPLEX Metro HA in a campus ........................................ 100


Technology refresh

In current IT environments, servers are often connected to redundant front-end fabrics, and storage is connected to redundant back-end fabrics. Current models do not facilitate federation among storage arrays. Migrations between heterogeneous arrays are often complicated and necessitate the purchase of additional software or functionality. Integrating heterogeneous arrays in a single environment is difficult and requires a staff with a diverse skill set.

Figure 25 shows the traditional view of storage arrays, with servers attached at the redundant front end and storage (Array 1 and Array 2) connected to a redundant fabric at the back end.

Figure 25 Traditional view of storage arrays

VPLEX introduces a virtualization layer into the current IT environment. VPLEX is inserted between the front-end and back-end redundant fabrics. VPLEX appears to be the target to hosts and the initiator to storage. This allows you to change the back-end storage transparently. VPLEX makes it easier to integrate heterogeneous storage arrays on the back end, and migration between storage arrays becomes much simpler as well.


Figure 26 shows the vision of storage when VPLEX is presenting an abstract storage configuration.

Figure 26 VPLEX virtualization layer

This abstract view of storage becomes very powerful when it comes time to replace the physical array that is providing storage to applications. Historically, the data used by the host was copied to a new volume on the new array, and the host was reconfigured to access the new volume. This process requires downtime for the host.

With VPLEX, because the data is in virtual volumes, it can be copied nondisruptively from one array to another without any downtime for the host. The host does not need to be reconfigured; the physical data relocation is performed by VPLEX transparently, and the virtual volumes retain the same identities and the same access points to the host.


Figure 27 shows an example of a technology refresh.

Figure 27 VPLEX technology refresh

In Figure 27, the virtual disk is made up of the disks of Array A and Array B. The site administrator has determined that Array A has become obsolete and should be replaced with a new array. Array C is the new storage array. The administrator adds this array into the VPLEX cluster and, using the Mobility Central functionality in the GUI, assigns a target extent from the new array to each extent from the old array and instructs VPLEX to perform the migration. Copying the data from Array A to Array C occurs while the host continues its access to the virtual volume without disruption. After the copy of Array A to Array C is complete, the administrator can decommission Array A. Because the virtual machine is addressing its data to the abstracted virtual volume, its data continues to flow to the virtual volume with no need to change the address of the data store. Although this example uses virtual machines, the same is true for traditional hosts. Using VPLEX, the administrator can move data used by an application to a different storage array without the application or server being aware of the change.


Data mobility

VPLEX provides direct support for data mobility both within and between data centers, near and far, and enables application mobility, data center relocation, and consolidation.

Data mobility is the relocation of data from one location (the source) to another (the target), after which the data is subsequently accessed only through the target. By contrast, data replication enables applications to continue to access the source data after the target copy is created. Similarly, data mobility is different from data mirroring, which transparently maintains multiple copies of the data for the purposes of data protection and access.

During and after a data mobility operation, applications continue to access the data using its original VPLEX volume identifier. This avoids the need to point applications to a new data location or change the configuration of their storage settings, effectively eliminating the need for application cut-over.

There are many types and reasons for data mobility:

◆ Moving data from one storage device to another. For example, if a device has been deemed “hot,” the data can be moved to a less utilized storage device.

◆ Moving applications from one storage device to another.

◆ Moving operating system files from one storage device to another.

◆ Consolidating data or database instances.

◆ Moving database instances.

◆ Moving storage infrastructure from one physical location to another.

The non-disruptive nature of VPLEX data mobility operations helps to simplify the planning and execution factors that would normally be considered when performing a disruptive migration.

It is still important to consider some of these factors, however, when performing data mobility between data centers and increasing the distance between an application and its data. Considerations include the business impact and the type of data to be moved, site locations, and the total amount of data, as well as time considerations and schedules.

Mobility with the VPLEX migration wizard

The VPLEX GUI supports the ability to easily move the physical location of virtual storage while VPLEX provides continuous access to this storage by the host. Using this wizard, you first display and select the extents (for extent mobility) or devices (for device mobility) to move. The wizard then displays a collection of candidate storage volumes. Once virtual storage is selected, VPLEX automates the process of moving the data to its new location. Throughout the process, the volume retains its volume identity, and continuous access is maintained to the data from the host.

There are three types of mobility jobs:

Table 5 Types of data mobility operations

Extent - Moves data from one extent to another extent (within a cluster).

Device - Moves data from one device to another device (within a cluster).

Batch - Groups extent or device mobility jobs into a batch job, which is then executed as a single job.


Best practices for data mobility

Data mobility must be a planned activity.

All components of the system (virtual machine, software, volumes) must be available and in a running state.

Data mobility can be used for disaster avoidance, planned upgrades, or physical movement of facilities.

How data mobility works

When a mobility job begins, VPLEX creates a temporary RAID 1 device above each device or extent to be migrated. The target extent or device becomes a mirror leg of the temporary device, and synchronization between the source and the target begins. Once synchronization completes, you can commit (or cancel) the mobility job.

Because the data mobility operation is non-disruptive, the application continues to write to the volumes during a mobility operation. The new I/Os are written to both legs of the device.
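In outline, a mobility job behaves like a temporary mirror. The sketch below uses invented names and byte arrays in place of real devices (the actual operation is driven through the VPLEX GUI or CLI):

```python
class MobilityJob:
    """Temporary RAID 1 over source and target: new writes go to both
    legs while the target catches up, then the job is committed."""

    def __init__(self, source, target):
        self.source, self.target = source, target
        self.synchronized = False

    def synchronize(self):
        self.target[:] = self.source          # background copy of existing data
        self.synchronized = True

    def host_write(self, offset, data):
        # While the job is active, host writes land on both legs.
        self.source[offset:offset + len(data)] = data
        self.target[offset:offset + len(data)] = data

    def commit(self):
        if not self.synchronized:
            raise RuntimeError("cannot commit before synchronization completes")
        return self.target                    # target takes over; source retired


src = bytearray(b"OLDDATAOLDDATA")
tgt = bytearray(len(src))
job = MobilityJob(src, tgt)
job.synchronize()
job.host_write(0, b"NEW")
print(job.commit())                           # bytearray(b'NEWDATAOLDDATA')
```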

The following rules apply to mobility operations:

◆ The source device can have active I/O.

◆ The target device cannot be in use (no virtual volumes created on it).

◆ The target extent/device must be the same size or larger than the source extent/device.

◆ The target extent cannot be in use (no devices created on it).

When creating a mobility job, you can control the transfer speed. The higher the speed, the greater the impact on host I/O. A slower transfer speed results in the mobility job taking longer to complete, but has a lower impact on host I/O. You can change the transfer speed of a job while the job is in the queue or in progress. The change takes effect immediately.

In GeoSynchrony 5.0, the thinness of a thinly provisioned storage volume is retained through a mobility operation. By default, VPLEX moves all volumes as thick storage volumes unless you specify that rebuilds should be thin at the time you provision the thin volume. Refer to the EMC VPLEX CLI Guide or the online help for more information on thin provisioning of volumes.


Redundancy with RecoverPoint

EMC RecoverPoint provides comprehensive data protection by continuous replication (splitting) of host writes. With RecoverPoint, applications can be recovered to any point in time. Replicated writes can be written to local volumes to provide recovery from operational disasters, to remote volumes to provide recovery from site disasters, or both.

RecoverPoint supports three types of splitters:

◆ Host OS-based splitters

◆ Intelligent fabric-based splitters (SANTap and Brocade)

◆ Storage-based splitters (CLARiiON CX4, VNX series, and Symmetrix VMAX)

Starting in GeoSynchrony 5.1, VPLEX includes a RecoverPoint splitter. The splitter is built into VPLEX such that VPLEX volumes can have their I/O replicated by RecoverPoint Appliances (RPAs) to volumes located in VPLEX or on one or more heterogeneous storage arrays.

Note: For GeoSynchrony 5.1, RecoverPoint integration is offered for VPLEX Local and VPLEX Metro configurations (not for Geo).

Figure 28 RecoverPoint architecture

The VPLEX splitter enables VPLEX volumes in a VPLEX Local or VPLEX Metro to mirror I/O to a RecoverPoint Appliance (RPA) performing continuous data protection (CDP), continuous remote replication (CRR), or concurrent local and remote data protection (CLR).


RecoverPoint terminology and concepts

RPA

RecoverPoint Appliance. The hardware that manages all aspects of data protection. One RPA can manage multiple storage groups, each with differing policies.

A minimum of two and a maximum of eight RPAs are installed at each site, located in the same facility as the host and storage. The set of RPAs installed at each site is referred to as an “RPA cluster.” If one RPA in a cluster fails, the functions provided by the failed RPA are automatically moved to one or more of the remaining RPAs.

The RPAs at the production site transfer the split I/O to the replica site.

The RPAs at the replica site distribute the data to the replica storage.

In the event of failover, these roles can be reversed. The same RPA can serve as the production RPA for one consistency group and the replica RPA for another.

Volumes

All RecoverPoint volumes can be hosted on VPLEX. In practice, some volumes may be hosted on VPLEX and others hosted on non-VPLEX storage. For example, the repository volume for an existing RPA cluster cannot be moved. If you are installing VPLEX into an existing RecoverPoint configuration, the repository volume is already configured on non-VPLEX storage.

The following types of volumes are required in all RecoverPoint configurations:

◆ Repository volume - A volume dedicated to RecoverPoint for each RPA cluster. The repository volume serves all RPAs of the particular RPA cluster and the splitter associated with that cluster. The repository volume stores configuration information about the RPAs and RecoverPoint consistency groups.

There is one repository volume per RPA cluster.

◆ Production volumes - Volumes that are written to by the host applications. Writes to production volumes are split such that they are sent to both the normally designated volumes and the RPAs simultaneously.

Each production volume must be exactly the same size as the replica volume to which it replicates.

◆ Replica volumes - Volumes to which production volumes replicate. The replica volume must be exactly the same size as its production volume; see the sketch after this list.

◆ Journal volumes - Volumes that contain data waiting to be distributed to target replica volumes, as well as copies of the data previously distributed to the target volumes. Journal volumes allow convenient rollback to any point in time, enabling instantaneous recovery for application environments.
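Because the production/replica size rule is exact, it can be expressed as a trivial configuration check (sketch with invented names):

```python
def validate_replica_pair(production_blocks: int, replica_blocks: int) -> None:
    """Production and replica volumes must be exactly the same size."""
    if production_blocks != replica_blocks:
        raise ValueError(
            f"size mismatch: production={production_blocks} blocks, "
            f"replica={replica_blocks} blocks")

validate_replica_pair(2097152, 2097152)     # OK: sizes match exactly
# validate_replica_pair(2097152, 2097153)   # would raise ValueError
```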

Snapshot/PIT

A point-in-time copy that preserves the state of data at an instant in time by storing only those blocks that are different from an already existing full copy of the data.

Snapshots are also referred to as Points In Time (PITs). Snapshots stored at a replica journal represent the data that has changed on the production storage since the closing of the previous snapshot.

Image access

User operation on a replica journal to enable read/write access to a selected PIT at a replica. There are four image access modes:

◆ Logged (physical) access - Used for production recovery, failover, testing, and cloning a replica.


◆ Direct access - This access mode can only be enabled after logged access, or virtual access with roll, is enabled. Used for extensive processing with a high write rate, when image access is needed for a long period of time (and the journal may not have the space to support all of the data written to the image access log in this time), and when it is not required to save the history in the replica journal (the replica journal is lost after direct access).

◆ Virtual (instant) access - Used for single file recovery or light testing. Used to gain access to the replica data immediately, or when I/O performance is not important.

◆ Virtual (instant) access with roll - Used for production recovery, failover, or processing with a high write rate. Used when the PIT is far from the current PIT (and would take too long to access in logged access mode).

IMPORTANT! In the current release, virtual (instant) access and virtual (instant) access with roll are not supported by the VPLEX splitter.

Bookmark

A label applied to a snapshot so that the snapshot can be explicitly called (identified) during recovery processes (during image access).

Bookmarks are created through the CLI or GUI and can be created manually, by the user, or automatically, by the system. Bookmarks created automatically can be created at pre-defined intervals or in response to specific system events. Parallel bookmarks are bookmarks that are created simultaneously across multiple consistency groups.

RecoverPoint configurations

RecoverPoint supports three replication configurations:

◆ Continuous data protection (CDP)

◆ Continuous remote replication (CRR)

◆ Concurrent local and remote data protection (CLR)

Figure 29 RecoverPoint configurations

Continuous data protection (CDP)


In a CDP configuration, RecoverPoint continuously replicates data within the same site.

Every write is kept in the journal volume, allowing recovery to any point in time. By default, snapshot granularity is set to per second, so the exact data size and contents are determined by the number of writes made by the host application per second. If necessary, the snapshot granularity can be set to per write.

CDP configurations include:

◆ Standard CDP - All components (splitters, storage, RPAs, and hosts) are located at the same site.

◆ Stretch CDP - The production host is located at the local site, splitters and storage are located at both the bunker site and the local site, and the RPAs are located at the bunker site. The repository volume and both the production and local journals are located at the bunker site.

Continuous remote replication (CRR)

In CRR configurations, data is transferred between a local and a remote site over Fibre Channel or a WAN. The RPAs, storage, and splitters are located at both the local and the remote site.

By default, the replication mode is set to asynchronous, and the snapshot granularity is set to dynamic, so the exact data size and contents are determined by the policies set by the user and by system performance. This provides protection to application-consistent and other specific points in time.

Synchronous replication is supported when the local and remote sites are connected using Fibre Channel.

Concurrent local and remote data protection (CLR)

A combination of both CDP and CRR. In a CLR configuration, RecoverPoint replicates data to both a local and a remote site simultaneously, providing concurrent local and remote data protection.

The CDP copy is normally used for operational recovery, while the CRR copy is normally used for disaster recovery.

RecoverPoint/VPLEX configurations

RecoverPoint can be configured on VPLEX Local or Metro systems as follows:

◆ “VPLEX Local and RecoverPoint CDP”

◆ “VPLEX Local and RecoverPoint CRR/CLR”

◆ “VPLEX Metro and RecoverPoint CDP at one site”

◆ “VPLEX Metro and RecoverPoint CRR/CLR”

In VPLEX Local systems, RecoverPoint can replicate local volumes.

In VPLEX Metro systems, RecoverPoint can replicate local volumes and distributed RAID 1 volumes.

IMPORTANT! In VPLEX Metro systems, RecoverPoint can replicate volumes at only one VPLEX cluster. Replication at both VPLEX clusters is not supported.

Virtual volumes can be replicated locally (CDP), remotely (CRR), or both (CLR).


Distances between production sources and replication volumes vary based on the recovery objectives, inter-site bandwidth, latency, and other limitations outlined in the EMC Simple Support Matrix (ESSM) for RecoverPoint.

VPLEX Local and RecoverPoint CDP

In VPLEX Local/RecoverPoint CDP configurations, I/O is split to replica volumes that are located at the same site.

RPAs are deployed with the VPLEX cluster.

This configuration supports unlimited points in time (with granularity up to a single write) for local VPLEX virtual volumes. The CDP replica volume can be a VPLEX virtual volume or any other heterogeneous storage supported by RecoverPoint.

Application event-aware rollback is supported for Microsoft SQL, Microsoft Exchange, and Oracle database applications.

Users can quickly return to any point in time to recover from operational disasters.

Figure 30 VPLEX Local and RecoverPoint CDP

VPLEX Local and RecoverPoint CRR/CLR

In VPLEX Local/RecoverPoint CRR/CLR configurations, I/O is split to replica volumes located both at the site where the VPLEX cluster is located and at a remote site.

RPAs are deployed at both sites.

If the primary site (the VPLEX cluster site) fails, customers can recover to any point in time at the remote site. Recovery can be automated through integration with MSCE and VMware SRM.

This configuration can simulate a disaster at the primary site to test RecoverPoint disaster recovery features at the remote site.

Application event-aware rollback is supported for Microsoft SQL, Microsoft Exchange, and Oracle database applications.


The remote site can be an independent VPLEX cluster:

Figure 31 VPLEX Local and RecoverPoint CLR - remote site is independent VPLEX cluster

Or, the remote site can be an array-based splitter:

Figure 32 VPLEX Local and RecoverPoint CLR - remote site is array-based splitter

VPLEX Metro and RecoverPoint CDP at one site

In VPLEX Metro/RecoverPoint CDP configurations, I/O is split to replica volumes located at only one VPLEX cluster.


RPAs are deployed at one VPLEX cluster:

Figure 33 VPLEX Metro and RecoverPoint CDP

VPLEX Metro/RecoverPoint CDP configurations support unlimited points in time on VPLEX distributed and local virtual volumes.

Users can quickly return to any point in time to recover from operational disasters.

VPLEX Metro and RecoverPoint CRR/CLR

In VPLEX Metro/RecoverPoint CRR/CLR configurations, I/O is:

◆ Written to both VPLEX clusters (as part of normal VPLEX operations),

◆ Split on one VPLEX cluster to replica volumes located both at the cluster and at a remote site.

RPAs are deployed at one VPLEX cluster and at a third site.

The third site can be an independent VPLEX cluster:

Figure 34 VPLEX Metro and RecoverPoint CLR - remote site is independent VPLEX cluster


Or, the remote site can be an array-based splitter:

Figure 35 VPLEX Metro and RecoverPoint CLR/CRR - remote site is array-based splitter

Although the RecoverPoint splitter is resident in all VPLEX clusters, only one cluster in a VPLEX Metro can have RPAs deployed.

This configuration supports unlimited points in time (with granularity up to a single write) for local and distributed VPLEX virtual volumes.

◆ RecoverPoint Appliances can be deployed at only one VPLEX cluster in a Metro configuration.

◆ All RecoverPoint-protected volumes must be on the preferred cluster, as designated by VPLEX consistency-group-level detach rules.

◆ Customers can recover from operational disasters by quickly returning to any PIT on the VPLEX cluster where the RPAs are deployed or at the third site.

◆ Application event-aware rollback is supported on VPLEX Metro distributed/local virtual volumes for Microsoft SQL, Microsoft Exchange, and Oracle database applications.

◆ If the VPLEX cluster fails, customers can recover to any point in time at the remote (third) site. Recovery at the remote site to any point in time can be automated through integration with MSCE and VMware Site Recovery Manager (SRM). See “vCenter Site Recovery Manager support for VPLEX” on page 95.

◆ This configuration can simulate a disaster at the VPLEX cluster to test RecoverPoint disaster recovery features at the remote site.


Shared VPLEX splitter

The VPLEX splitter can be shared by multiple RecoverPoint clusters. This allows data to be replicated from a production VPLEX cluster to multiple RecoverPoint clusters.

Figure 36 Shared VPLEX splitter

Up to four RecoverPoint RPA clusters can share a VPLEX splitter.

Shared RecoverPoint RPA cluster

The RecoverPoint RPA cluster can be shared by multiple VPLEX sites:

Figure 37 Shared RecoverPoint RPA cluster

RecoverPoint replication with CLARiiON

VPLEX and RecoverPoint can be deployed in conjunction with CLARiiON-based RecoverPoint splitters, in both VPLEX Local and VPLEX Metro environments.


In the configuration depicted in Figure 38, host writes to VPLEX Local virtual volumes are written to both legs of RAID 1 devices. The VPLEX splitter “splits” the writes, sending one copy to the usual back-end storage and one copy across a WAN to a CLARiiON array at a remote disaster recovery site:

Figure 38 Replication with VPLEX Local and CLARiiON

In the configuration depicted in Figure 39, host writes to VPLEX Metro distributed virtual volumes are similarly split, sending one copy to each of the VPLEX clusters and a third copy across a WAN to a CLARiiON array at a remote disaster recovery site:



Figure 39 Replication with VPLEX Metro and CLARiiON

Restoring VPLEX virtual volumes with RecoverPoint CRR

Restoration of production volumes starts from the snapshot (PIT) selected by the user. From that point forward, the production source is restored from the replica. Data that is newer than the selected point in time is rolled back and rewritten from the version in the replica.

The replica’s journal is preserved and remains valid.

Data mobility with RecoverPoint

VPLEX mobility between arrays does not impact RecoverPoint.

Mobility between VPLEX-hosted arrays does not require any changes or full sweeps on the part of RecoverPoint.

vCenter Site Recovery Manager support for VPLEX

With RecoverPoint replication, you can add Site Recovery Manager support to VPLEX.


Figure 40 Support for Site Recovery Manager

When an outage occurs in VPLEX Local or VPLEX Metro configurations, the virtual machines can be restarted at the DR site, with automatic synchronization to the VPLEX configuration when the outage is over.


Distributed data collaboration

With VPLEX, the same data can be accessible from either VPLEX cluster at all times, even if the clusters are at different sites. The data is literally shared, not copied, so that a change made at one site is visible at the other site. Current applications for the sharing of information over distance are not suitable for collaboration or for Big Data environments. For example, transferring hundreds of GB or even TB of data across a WAN using FTP is extremely slow, results in duplication of data, and is extremely inefficient if you need to modify a small portion of a huge data set or use it for analysis. With VPLEX, the problem of unnecessarily copying the data is eliminated. Instead, VPLEX makes the data available in both locations, and because VPLEX is smart, it does not need to ship the entire file back and forth like other solutions; it only sends the information that is being accessed but is not available locally, thus greatly reducing bandwidth costs and offering significant savings over other solutions. Deploying VPLEX in conjunction with third-party WAN optimization solutions can deliver even greater benefits. And with VPLEX AccessAnywhere, the data remains online and available.

This is a huge benefit for customers who have always had to rely on shipping large log files and data sets back and forth across sites, then wait for updates to be made in another location before they could resume working on it again.

Figure 41 Data shared with global visibility

One way in which distributed data collaboration can be implemented is in the form of local consistency groups with global visibility. Local consistency groups with global visibility allow the remote cluster to read and write to the consistency group. However, all reads and writes to the consistency group from the remote cluster must pass over the WAN-COM link. This gives the remote cluster instant on-demand access to the consistency group, but adds latency for the remote cluster. Local consistency groups with global visibility are supported in both VPLEX Metro and VPLEX Geo environments; in both cases, the round-trip latency must be 5 ms RTT or less. Only local volumes can be placed into a local consistency group with global visibility. Local consistency groups with global visibility cannot be set to asynchronous cache mode; I/O to them is always synchronous.
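To make the latency trade-off concrete, the following rough model (illustrative numbers, not measured VPLEX figures) charges every remote I/O one WAN-COM round trip on top of the local service time:

def remote_io_latency_ms(local_service_ms, wan_rtt_ms):
    """Approximate service time for I/O issued by the remote cluster
    against a local consistency group with global visibility: every
    read and write crosses WAN-COM, adding one round trip."""
    if wan_rtt_ms > 5.0:
        raise ValueError("global visibility requires 5 ms RTT or less")
    return local_service_ms + wan_rtt_ms

# A 1 ms local I/O issued over a 4 ms RTT link takes roughly 5 ms.
print(remote_io_latency_ms(local_service_ms=1.0, wan_rtt_ms=4.0))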

For distributed data collaboration over greater distances, an asynchronous consistency group can provide mirrored volumes at both locations. In a configuration where the clusters are farther apart, higher network latency would significantly affect synchronous I/O to distributed volumes, because host I/O must be acknowledged at both clusters before being written to the back end. To allow more than 5 ms of latency between clusters, this configuration uses asynchronous I/O, which can result in data loss in the event of an engine, cluster, or link failure. Figure 42 shows how an asynchronous consistency group can support the distributed data collaboration use case.

Figure 42 Asynchronous consistency group for distributed data collaboration
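The difference in write acknowledgment between the two cache modes can be sketched with a simplified model (not GeoSynchrony's actual cache protocol):

def sync_write_ack_ms(local_ms, wan_rtt_ms):
    """Synchronous mode: the host is acknowledged only after both
    clusters hold the write, so every write pays the inter-cluster
    round trip."""
    return local_ms + wan_rtt_ms

def async_write_ack_ms(local_ms):
    """Asynchronous mode: the host is acknowledged once the write is
    protected locally; data is exchanged between clusters later.
    Writes still awaiting exchange are the exposure if an engine,
    cluster, or link fails; that is the price of tolerating more
    than 5 ms of inter-cluster latency."""
    return local_ms

# At 50 ms RTT, a synchronous write would take ~51 ms end to end,
# while an asynchronous write is acknowledged in ~1 ms.
print(sync_write_ack_ms(1.0, 50.0), async_write_ack_ms(1.0))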


Cross connect
You can deploy a VPLEX Metro high availability Cross Connect configuration when two sites are within campus distance of each other (up to 1 ms round-trip-time latency) and the sites are running VMware HA and VMware Distributed Resource Scheduler (DRS). You can then deploy a VPLEX Metro distributed volume across the two sites using a cross-connect front-end configuration and install a VPLEX Witness server in a different failure domain.

Note: Cross Connect is supported in VPLEX Metro deployments only.

Figure 43 shows a high-level schematic of a VPLEX Metro high availability Cross Connect solution for VMware. This type of configuration provides the ability to relocate virtual machines over distance, which is extremely useful in disaster avoidance, load balancing, and cloud infrastructure use cases, all relying on out-of-the-box features and functions. Additional value can be derived from deploying the VPLEX Metro HA Cross Connect solution to ensure total availability.

Figure 43 Metro HA Cross Connect solution for VMware

If a physical host failure occurs at either Site A or Site B, the VMware high availability cluster restarts the affected virtual machines on the surviving ESX servers.

For more information on Metro HA Cross Connect, see “VPLEX Metro HA in a campus” on page 100.


VPLEX Metro HA in a campus
VPLEX Metro HA configurations consist of a VPLEX Metro system deployed in conjunction with VPLEX Witness. There are two types of Metro HA configurations: a generic one that can be stretched up to 5 ms of latency between data centers, and one using Cross Connect between the VPLEX clusters and hosts, which provides a higher level of availability but is constrained to distances with up to 1 ms round-trip-time latency. The key to these environments is AccessAnywhere, which allows both clusters to provide simultaneous coherent read/write access to the same virtual volume. That means that on the remote site, the paths are up and the storage is available even during normal operation, not only after failover. When you combine this with host failover clustering technologies such as VMware HA, you get fully automatic application restart for any site-level disaster. The system rides through component failures within a site, including the failure of an entire array. VPLEX Metro HA configurations:

◆ Ride through any single component failures within the storage subsystem

◆ Provide automatic restart in case of any failure in the environment

◆ Optionally, with DRS enabled, distribute workload spikes between data centers, alleviating the need to purchase more storage

◆ Require no stretching of the Fibre Channel fabric between sites; you can maintain fabric isolation between the two sites

VMware ESX can be deployed at both clusters in a Metro environment to create a high availability environment, with or without Cross Connect. Figure 44 on page 101 shows the Metro HA configuration without Cross Connect. Notice that the two clusters must have less than 5 ms of WAN-COM latency.
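As a rule of thumb, the measured WAN-COM round-trip time determines which Metro HA flavor is viable. A small check, with thresholds taken from the limits quoted above, might look like this:

def metro_ha_options(wan_rtt_ms):
    """Return the Metro HA deployment styles permitted at a given
    WAN-COM round-trip latency, per the limits quoted in this guide."""
    options = []
    if wan_rtt_ms <= 1.0:
        options.append("Metro HA with Cross Connect (campus distance)")
    if wan_rtt_ms <= 5.0:
        options.append("Metro HA without Cross Connect")
    return options

print(metro_ha_options(0.5))  # both styles qualify
print(metro_ha_options(3.0))  # only the configuration without Cross Connect
print(metro_ha_options(8.0))  # neither; consider asynchronous (Geo) designs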


Figure 44 VMware Metro HA without Cross Connect

In this scenario, a virtual machine can write to the same distributed device from either cluster. In other words, if the customer is using VMware Distributed Resource Scheduler (DRS), which allows automatic load distribution of virtual machines across multiple ESX servers, a virtual machine can be moved from an ESX server attached to Cluster-1 to an ESX server attached to Cluster-2 without losing access to the underlying storage. This configuration allows virtual machines to move between two geographically disparate locations with up to 5 ms of latency.

If the Distributed Resource Scheduler moves a virtual machine to the Cluster-2 ESX server, the virtual machine continues to write to its distributed device. In this way, VPLEX supports the mobility of virtual machines across geographic locations.

A data unavailability event can occur when there is not a full site outage, but there is a VPLEX outage on Cluster-1 while the virtual machine is running on the ESX server attached to Cluster-1. If this configuration also contains a VPLEX Witness, the VPLEX Witness recognizes the outage and recommends that Cluster-2 resume I/O rather than following the rule set. The virtual machine running against storage exported at Cluster-1 does not fail, despite all paths to storage having gone away. Because it does not fail, VMware HA has no reason to try to restart it on a server attached to Cluster-2. Cluster-2, in this case, is ready to process reads and writes from the virtual machine, but the virtual machine is hung at Cluster-1. Manual intervention is required at this point to move the virtual machine to the Cluster-2 ESX server and restore access to the data.


Best practices with Metro HA

Follow these best practices when configuring a Metro HA in a campus:

◆ Configure the front end with a stretched layer-2 network so that when a virtual machine moves between sites, its IP address can stay the same.

◆ Use DRS host affinity rules if DRS is enabled. Host affinity rules keep virtual machines running in the preferred site as long as they can run there, and move them to the non-preferred site only if they cannot run in the preferred site.

◆ Deploy the VPLEX Witness with this type of solution to avert some system-wide data unavailability events.

The VMware Metro HA cross-connect environment is very similar to the deployment shown in Figure 44 on page 101, except that each ESX server is connected and zoned to both VPLEX clusters. By cross-connecting ESX servers to VPLEX, the failure of one VPLEX cluster does not result in data unavailability.

Note: The VMware Metro HA cross connect use case is supported at latencies of up to 1 ms RTT.

In Figure 45 on page 103, the virtual machine is deployed at Cluster-1. If Cluster-1 fails, the VPLEX Witness recommends that VPLEX ignore the rule set and use Cluster-2. Immediately after Cluster-2 resumes I/O, the virtual machine can write through the remote VPLEX cluster without being migrated to Cluster-2.


Figure 45 VMware Metro HA with Cross-Connect

Again, it is very important that the VPLEX Witness be deployed in this type of environment to help avert data unavailability events.

Refer to the EMC Simple Support Matrix, EMC VPLEX and GeoSynchrony, available at http://elabnavigator.EMC.com under the Simple Support Matrix tab, for the latest list of software and hardware versions supported by VPLEX with GeoSynchrony.


VPLEX Metro HA failure handling

Figure 46 shows the areas in which a typical VPLEX Metro configuration could (however unlikely) fail.

Figure 46 VPLEX Metro HA failure handling

Table 6 describes how VPLEX Metro HA handles each failure shown in Figure 46.


Table 6 How VPLEX Metro HA recovers from failure

Failure description: Server failure in Data Center 1
Failure handling: VMware HA software restarts the affected applications in Data Center 2 automatically.

Failure description: VPLEX cluster failure in Data Center 2
Failure handling: VPLEX Witness detects the failure and enables all volumes on the surviving cluster.

Failure description: Inter-site link failure
Failure handling: If the cross-connect links leverage a different physical link from that used by the inter-cluster WAN COM, applications are unaffected. Every volume continues to be made available in one data center or the other. However, if the cross-connect links leverage the same physical link as the inter-cluster WAN COM, application restart is required.

Failure description: Storage array failure
Failure handling: Applications are unaffected. VPLEX dynamically redirects I/O to the mirrored copy on the surviving array.
Note: This example assumes that all distributed volumes are also mirrored on the local cluster. If a distributed volume is not mirrored on the local cluster, the application still remains available (because the data can be fetched from, or sent to, the remote cluster); however, each read/write operation then incurs a small performance cost.

Failure description: Failure of VPLEX Witness
Failure handling: After recognizing a loss of connectivity with the VPLEX Witness, both clusters call home. As long as both clusters continue to operate and there is no inter-cluster link partition, applications are unaffected. If either cluster fails, or if there is an inter-cluster link partition, the system is in jeopardy of data unavailability. Therefore, if the VPLEX Witness outage is expected to be long, it is recommended that the VPLEX Witness functionality be disabled to prevent possible data unavailability.


Appendix A: VS1 Hardware Description

This appendix contains a description of the VS1 hardware options.

◆ VPLEX hardware overview ....................................................................................... 106


VPLEX hardware overview
Three implementations of EMC® VPLEX™ are available in the current release:

◆ VPLEX Local™ - single cluster

◆ VPLEX Metro™ - two clusters, separated by synchronous distances

◆ VPLEX Geo™ - two clusters, separated by asynchronous distances

A cluster is a single cabinet that contains one of the hardware configurations shown in “VS1 hardware configurations” or “VS2 hardware configurations.”

Note that in the current GeoSynchrony release, VS1 and VS2 hardware cannot co-exist in a VPLEX implementation, with one exception: both versions can co-exist in a VPLEX Local cluster during a non disruptive hardware upgrade from VS1 to VS2.

VS1 hardware configurations

Note: The component placement shown for single-engine and dual-engine configurations allows for non disruptive upgrades to larger configurations.

Note: Each component labeled SPS in the figures is a standby power supply module.

Figure 47 VPLEX VS1 hardware example: Single-engine configuration


Figure 48 VPLEX VS1 hardware example: Dual-engine configuration


Figure 49 VPLEX VS1 hardware example: Quad-engine configuration

Figure 50 VPLEX VS1 Engine components

Note: The WAN COM ports on IOMs A4 and B4 are used if the inter-cluster connections are over Fibre Channel, and the WAN COM ports on IOMs A5 and B5 are used if the inter-cluster connections are over IP.
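For reference, the port usage described in this note can be summarized in a small lookup table. The port names are transcribed from the WAN COM cabling figures later in this appendix (Figures 71 and 72); the structure itself is illustrative only:

# VS1 WAN COM ports used on each engine, keyed by inter-cluster
# connectivity type (port names transcribed from Figures 71 and 72).
VS1_WAN_COM_PORTS = {
    "fibre-channel": ("A4-FC02", "A4-FC03", "B4-FC02", "B4-FC03"),
    "ip": ("A5-GE00", "A5-GE01", "B5-GE00", "B5-GE01"),
}

print(VS1_WAN_COM_PORTS["fibre-channel"])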


VS2 hardware configurations

Note: The component placement shown for single-engine and dual-engine configurations allows for non disruptive upgrades to larger configurations.

Note: Each component labeled SPS in the figures is a standby power supply module.

Figure 51 VPLEX VS2 hardware example: Single-engine cluster


Figure 52 VPLEX VS2 hardware example: Dual-engine cluster


Figure 53 VPLEX VS2 hardware example: Quad-engine cluster

Figure 54 shows the modules in a VPLEX VS2 engine. Note that IOMs A2 and B2 in the figure each contain four Fibre Channel ports for VPLEX Metro or VPLEX Geo WAN connections. VPLEX Metro and VPLEX Geo also support IP WAN connections, in which case these IOMs each contain two Ethernet ports. In a VPLEX Local configuration, these IOMs contain no ports.


Figure 54 VPLEX VS2 engine modules (front view)


IP addresses and component IDs
The IP addresses of the VPLEX hardware components are determined by a set of formulae that depend on the internal management network (A or B), the Cluster IP Seed, and (for directors) the Enclosure ID (which matches the engine number).

Figure 57 shows the IP addresses in a cluster with a Cluster IP Seed of 1, and Figure 58 on page 116 shows the addresses for a Cluster IP Seed of 2. Note that the Cluster IP Seed is the same as the Cluster ID, which depends on the VPLEX implementation (a sketch of the addressing pattern follows the list below):

◆ VPLEX Local - The Cluster ID is always 1.

◆ VPLEX Metro or VPLEX Geo - The Cluster ID for the first cluster that is set up is 1, and for the second cluster it is 2.
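The addressing pattern visible in Figures 55 through 58 can be captured in a short sketch. The formulae below are inferred from the addresses shown in those figures, not taken from an official EMC specification:

def _subnet(network):
    # Management network A uses 128.221.252.x; network B uses 128.221.253.x.
    return {"A": "128.221.252", "B": "128.221.253"}[network]

def management_port_ip(network, ip_seed):
    """Management server Mgt A/B port: last octet is 32 * seed + 1."""
    return f"{_subnet(network)}.{32 * ip_seed + 1}"

def fc_switch_ip(network, ip_seed):
    """Fibre Channel switch A (network A) or B (network B): 32 * seed + 2."""
    return f"{_subnet(network)}.{32 * ip_seed + 2}"

def director_ip(network, ip_seed, engine, director):
    """Director address: directors fill octets 32*seed+3 upward, two per
    engine, with the A director before the B director."""
    offset = 2 + 2 * (engine - 1) + (1 if director == "A" else 2)
    return f"{_subnet(network)}.{32 * ip_seed + offset}"

# Cluster 1 (seed 1), Engine 1, Director A on network A -> 128.221.252.35
print(director_ip("A", ip_seed=1, engine=1, director="A"))
# Cluster 2 (seed 2), Engine 4, Director B on network B -> 128.221.253.74
print(director_ip("B", ip_seed=2, engine=4, director="B"))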

VPLEX VS1 hardware

Figure 55 Component IP addresses in Cluster 1

Cluster IP Seed = 1; Enclosure IDs = engine numbers. Management network A uses 128.221.252.x addresses, and management network B uses 128.221.253.x addresses.

Management server: Public Ethernet port, customer-assigned; Service port, 128.221.252.2; Mgt A port, 128.221.252.33; Mgt B port, 128.221.253.33
FC switch A: 128.221.252.34; FC switch B: 128.221.253.34

Directors (network A address / network B address):
Engine 1: Director 1A, 128.221.252.35 / 128.221.253.35; Director 1B, 128.221.252.36 / 128.221.253.36
Engine 2: Director 2A, 128.221.252.37 / 128.221.253.37; Director 2B, 128.221.252.38 / 128.221.253.38
Engine 3: Director 3A, 128.221.252.39 / 128.221.253.39; Director 3B, 128.221.252.40 / 128.221.253.40
Engine 4: Director 4A, 128.221.252.41 / 128.221.253.41; Director 4B, 128.221.252.42 / 128.221.253.42

Figure 56 Component IP addresses in VPLEX Metro or VPLEX Geo Cluster 2

Cluster IP Seed = 2; Enclosure IDs = engine numbers.

Management server: Public Ethernet port, customer-assigned; Service port, 128.221.252.2; Mgt A port, 128.221.252.65; Mgt B port, 128.221.253.65
FC switch A: 128.221.252.66; FC switch B: 128.221.253.66

Directors (network A address / network B address):
Engine 1: Director 1A, 128.221.252.67 / 128.221.253.67; Director 1B, 128.221.252.68 / 128.221.253.68
Engine 2: Director 2A, 128.221.252.69 / 128.221.253.69; Director 2B, 128.221.252.70 / 128.221.253.70
Engine 3: Director 3A, 128.221.252.71 / 128.221.253.71; Director 3B, 128.221.252.72 / 128.221.253.72
Engine 4: Director 4A, 128.221.252.73 / 128.221.253.73; Director 4B, 128.221.252.74 / 128.221.253.74

VPLEX VS2 hardware

Figure 57 Component IP addresses in Cluster 1

Cluster IP Seed = 1; Enclosure IDs = engine numbers.

Management server: Public Ethernet port, customer-assigned; Service port, 128.221.252.2; Mgt A port, 128.221.252.33; Mgt B port, 128.221.253.33
FC switch A: 128.221.252.34; FC switch B: 128.221.253.34

Directors (A side / B side):
Engine 1: Director 1A, 128.221.252.35 / 128.221.253.35; Director 1B, 128.221.252.36 / 128.221.253.36
Engine 2: Director 2A, 128.221.252.37 / 128.221.253.37; Director 2B, 128.221.252.38 / 128.221.253.38
Engine 3: Director 3A, 128.221.252.39 / 128.221.253.39; Director 3B, 128.221.252.40 / 128.221.253.40
Engine 4: Director 4A, 128.221.252.41 / 128.221.253.41; Director 4B, 128.221.252.42 / 128.221.253.42

Figure 58 Component IP addresses in VPLEX Metro or VPLEX Geo Cluster 2

Cluster IP Seed = 2; Enclosure IDs = engine numbers.

Management server: Public Ethernet port, customer-assigned; Service port, 128.221.252.2; Mgt A port, 128.221.252.65; Mgt B port, 128.221.253.65
FC switch A: 128.221.252.66; FC switch B: 128.221.253.66

Directors (A side / B side):
Engine 1: Director 1A, 128.221.252.67 / 128.221.253.67; Director 1B, 128.221.252.68 / 128.221.253.68
Engine 2: Director 2A, 128.221.252.69 / 128.221.253.69; Director 2B, 128.221.252.70 / 128.221.253.70
Engine 3: Director 3A, 128.221.252.71 / 128.221.253.71; Director 3B, 128.221.252.72 / 128.221.253.72
Engine 4: Director 4A, 128.221.252.73 / 128.221.253.73; Director 4B, 128.221.252.74 / 128.221.253.74

Internal cabling
The figures in this section show the various cabling inside a VPLEX cabinet. The figures are included here for reference only, because all the cables shown in this section, except the inter-cluster WAN COM cables (VPLEX Metro and VPLEX Geo only), are installed before the unit ships from EMC.

VPLEX VS1 cabling
This section includes the following figures:

Cluster size: Quad-engine
  Ethernet: Figure 59 on page 118
  Serial: Figure 60 on page 119
  Fibre Channel: Figure 61 on page 120
  AC power: Figure 62 on page 121

Cluster size: Dual-engine
  Ethernet: Figure 63 on page 122
  Serial: Figure 64 on page 123
  Fibre Channel: Figure 65 on page 124
  AC power: Figure 66 on page 125

Cluster size: Single-engine
  Ethernet: Figure 67 on page 126
  Serial: Figure 68 on page 126
  Fibre Channel: Figure 69 on page 126
  AC power: Figure 70 on page 127

All cluster sizes, VPLEX Metro or VPLEX Geo only:
  Fibre Channel WAN COM: Figure 71 on page 127
  IP WAN COM: Figure 72 on page 128


VPLEX VS1 quad-engine configuration

Figure 59 Ethernet cabling in a VPLEX VS1 quad-engine configuration


Figure 60 Serial cabling in a VPLEX VS1 quad-engine configuration


Figure 61 Fibre Channel cabling in a VPLEX VS1 quad-engine configuration

Note: All 16 Fibre Channel cables are light blue. However, the cables connected to Fibre Channel switch A have blue labels, and the cables connected to switch B have orange labels.


Figure 62 AC power cabling in a VPLEX VS1 quad-engine configuration


VPLEX VS1 dual-engine configuration

Figure 63 Ethernet cabling in a VPLEX VS1 dual-engine configuration


Figure 64 Serial cabling in a VPLEX VS1 dual-engine configuration


Figure 65 Fibre Channel cabling in a VPLEX VS1 dual-engine configuration

Note: All 16 Fibre Channel cables are light blue. However, the cables connected to Fibre Channel switch A have blue labels, and the cables connected to switch B have orange labels.

(In Figure 65, all 16 Fibre Channel cables are 79 in. long. Eight cables for a quad-engine configuration are included for ease of upgrading, and are tied to the cabinet sidewalls.)

Figure 66 AC power cabling in a VPLEX VS1 dual-engine configuration


VPLEX VS1 single-engine configuration

Figure 67 Ethernet cabling in a VPLEX VS1 single-engine configuration

Figure 68 Serial cabling in a VPLEX VS1 single-engine configuration

Figure 69 Fibre Channel cabling in a VPLEX VS1 single-engine configuration

Note: Both Fibre Channel cables are light blue. However, the “A” side cable has a blue label, and the “B” side cable has an orange label.


Figure 70 AC power cabling in a VPLEX VS1 single-engine configuration

WAN COM cabling on VPLEX Metro and VPLEX Geo
Figure 71 and Figure 72 show the WAN COM connection options for VS1 hardware.

Figure 71 Fibre Channel WAN COM connections on VS1 VPLEX hardware

(In Figure 71, each engine in each cluster connects ports A4-FC02, A4-FC03, B4-FC02, and B4-FC03 to the intercluster COM SAN switches, and inter-switch links join switch 1A to switch 2A and switch 1B to switch 2B. “ISL” is inter-switch link.)

Figure 72 IP WAN COM connections on VS1 hardware

(In Figure 72, each engine in each cluster connects ports A5-GE00, A5-GE01, B5-GE00, and B5-GE01 across two IP subnets, A and B.)