HPE 3PAR Storage Federation
Data mobility between HPE 3PAR StoreServ and third-party storage

Technical white paper

Contents

Executive summary
  Intended audience
  Terminology and abbreviations
Concepts
Benefits of Storage Federation
Comparing Storage Federation with appliance-based storage migration tools
  Ease of installation
  Ease of management
  Ease of licensing
  Performance
  Retain data services
  Reduction of cost
  Comparison summary
Use cases
  Simplicity of migration setup and operation
  Manage multiple systems easily
  Load balancing workloads
  HPE 3PAR asset management made easy
  Legacy HPE 3PAR and third-party asset refresh made easy
  Zero footprint data mobility
  Optimize system utilization
  Accommodate unplanned workloads
  Enhanced scalability
  Federated Thin Provisioning
  Ultimate elasticity for cloud service providers
  Empowering storage and server administrators
  Save disk space
  Save administration time
  Save money
Features
  Federation section in the SSMC main menu
  Federation auto-detection
  Federation filtering
  Improved orchestration
  Managing federated systems
  Unlimited data mobility
  Very large volumes
  Migration resilience
  Configuration synchronization
  Automatic host rescan
  Scheduled migrations
  Concurrent Peer Motion tasks
  Migration throughput control
  Peer links over 16 Gb/s Fibre Channel
  Changing the transport protocol for a Remote Copy target
  Simplified zoning
  Data transfer optimization
  Resilience during migration
  Integration with HPE 3PAR Remote Copy
  LUN ID conflict resolution
  Support for migrating a VVset subset
  Support for migrating a subset of volumes on legacy 3PAR
  Autonomic object creation for third-party source systems
  Support for FIPS 140-2 mode of compliance for HPE 3PAR Online Import
Zoning
Technical prerequisites
Supported federation configurations
Orchestrating data mobility
Best practices
  Migration preparation
  Federation zoning
  Federation creation and management
  Host and volumes
  Peer links and Peer volumes
  Import task queueing
  Import task priority
  Managing Peer link throughput
  Reporting
  Postmigration
  Additional considerations
Logging
Licensing
Delivery model
Troubleshooting
Collecting information before contacting support


Executive summary
Hewlett Packard Enterprise (HPE) 3PAR Storage Federation turns a collection of HPE 3PAR StoreServ systems into a single entity. This scale-out strategy maximizes resource utilization, balances workloads across arrays, and simplifies the management of multiple storage systems. Storage Federation software expands the autonomic features of HPE 3PAR Dynamic Optimization for volume-level tiering and HPE 3PAR Adaptive Optimization for block-level tiering into a system-level tiering experience. Within a federation, single-system boundaries are removed, enabling multiarray, bidirectional data mobility for enterprise block storage between federation members. Concurrent migration of volumes between any federation members in any direction offers the flexibility to store application data on the system with the physical disk and volume provisioning type that suits the current situation, without business disruption or complex planning.

A storage federation can be extended with HPE and third-party storage systems, which are called migration sources. Volumes on migration sources are imported into a federation member unidirectionally and concurrently. The federation framework is integrated into the 3PAR operating system: no external appliance in the data path is required, and no additional overhead is introduced on host resources.

Federation was designed with the customer in mind: it is easy to implement, manage, and modify using the familiar HPE 3PAR StoreServ Management Console (SSMC), eliminating the need for expensive professional services at each change to the federation layout. Predefined workflows in the SSMC empower both experienced and inexperienced administrators to move one, some, or all volumes associated with a host from one 3PAR storage system to another with a single click. Simplified SAN zoning and the one-time, up-front zoning setup between federated systems and their hosts allow migrations between any federation members without SAN changes.

Storage Federation leverages HPE 3PAR Thin Built In technology to power the simple and rapid inline conversion of inefficient, fully provisioned volumes on source arrays to more efficient, higher-utilization thin, thin deduped, or compressed volumes on the destination 3PAR system (where supported).

This white paper concentrates on the concepts, features, and use cases of Storage Federation. Text, screenshots, and tables in this paper are compatible with SSMC 3.3 and all MU versions of HPE 3PAR OS 3.3.1 unless noted otherwise. Features introduced with HPE 3PAR OS 3.3.1 are covered only when they relate to Storage Federation.

Intended audience
The intended audience for this white paper is storage administrators who manage 3PAR storage systems. Readers are assumed to have a good understanding of 3PAR systems and the SSMC. Expertise on legacy 3PAR and supported third-party systems is required when importing data from these systems into federation members.

Terminology and abbreviations
This paper uses the following terminology and abbreviations.

Table 1. Definition of key terms

Asymmetric Logical Unit Access (ALUA): A method for accessing a logical unit number (LUN) over the non-owning storage controller
Command line interface (CLI): A line-driven interface to 3PAR systems
Common provisioning group (CPG): A template for creating 3PAR volumes
Destination system: The storage system that receives the data migrated by HPE 3PAR Peer Motion
Fibre Channel: A physical interconnect between a server and a storage system or between storage systems
Host: The server with volumes under migration from a source to a destination system
Host bus adapter (HBA): The expansion card in a server that implements connectivity to the outside world
HPE 3PAR Online Import Utility (OIU): The console-based tool for migrating block-level storage from third-party storage systems to 3PAR systems
HPE 3PAR Peer Motion Utility (PMU): A command line tool for migrating block-level storage between 3PAR systems
HPE 3PAR Remote Copy: The replication solution for 3PAR systems
Input/output operations per second (IOPS): The number of I/O operations executed in one second
Inquire (INQ): A SCSI command used to retrieve metadata of a target LUN
Logical Volume Manager (LVM): A method for managing space on mass storage devices that offers striping and RAID protection levels across multiple disk drives
Loop Initialization Protocol (LIP): A method to discover devices and their characteristics on a Fibre Channel arbitrated loop
MAINTENANCE_IN (MAINT_IN): A SCSI command for reporting information about a device
Management Console (MC): A near-obsolete graphical user interface for managing a 3PAR system
Minimally disruptive migration (MDM): One of the three migration types in HPE 3PAR Peer Motion and HPE 3PAR Online Import
Multipath input/output (MPIO): A framework designed to provide and manage multiple data paths between storage devices and an operating system
Network Time Protocol (NTP): A framework for clock synchronization between computer systems
Peer link: The logical interconnection between the source and destination systems in a Peer Motion or Online Import migration
Persistent Group Reservation Out (PROUT): A SCSI command by which an initiator host sends reservation information to a target disk
Rotations per minute (RPM): The revolution speed of the spinning platters of a disk drive
Single-initiator, single-target (SIST): A SAN zoning concept where the zone connects a single Fibre Channel port on a host with a single Fibre Channel port on a storage system
Small form-factor pluggable (SFP): A transceiver that converts between optical and electrical signals, used in Fibre Channel environments
Source system: The storage system that contains the data to be migrated
StoreServ Management Console (SSMC): The graphical user interface for managing HPE 3PAR StoreServ systems
Storage area network (SAN): A high-speed, special-purpose network that interconnects different kinds of storage devices with servers executing applications
Target Driven Peer Zoning (TDPZ): A SAN zoning technique where a storage array interfaces with SAN switches to create and maintain zones
Test Unit Ready (TUR): A SCSI command used to determine if a device is ready for read and write operations
Virtual LUN (VLUN): A virtual volume as exported to a host
World Wide Name (WWN): A 64-bit quantity that uniquely identifies a Fibre Channel host bus adapter interface
Zoning operation: A process that creates, modifies, or deletes a logical connection between two Fibre Channel ports

Concepts
An HPE 3PAR Storage Federation is a software-defined logical construct that groups multiple 3PAR systems for concurrent, nondisruptive, bidirectional data mobility. 3PAR systems in a federation are called federated systems or member systems. A federation can be extended by attaching current and legacy 3PAR storage systems, or selected third-party storage systems, to one or more federation members as migration sources for unidirectional migration. Consult the Supported federation configurations section in this paper for details on the anatomy of supported federation layouts and their migration sources. Federation grouping and migration sources are supported with SSMC 2.2 and later.

SSMC is a simple, concise, and easy-to-use graphical console for managing 3PAR systems. SSMC orchestrates one or more HPE 3PAR storage federations and includes a graphical map of the federations it manages and their components. Figure 1, taken from SSMC 3.3, outlines the high-level topology of the Brussels storage federation with two member systems.


Figure 1. Topology of the Brussels storage federation with two member systems

Figure 2 shows a single-member federation with an HPE 3PAR StoreServ 7400 system and a legacy HPE 3PAR T-class system attached as migration sources.

Figure 2. Layout of the Antwerp storage federation with a single member and two migration sources

Figure 3 shows the SSMC mapping of the Antwerp federation components. The map shows the federation at the top level. With a few mouse clicks, users can explore the federation map to the detailed levels of systems, host sets, hosts, and volumes. Clicking any element in the tree shows relationships to other elements in the map. In Figure 3, the federation member named split and the 3PAR migration source named 3par-7400 reside on the same level. Although the legacy T400 system is a migration source as well, it is not displayed in the map. This is because it cannot be managed or inventoried by SSMC. The SSMC provides a quick overview of the layout of a federation and its attached migration sources, as shown in Figures 1, 2, and 3.


Figure 3. Map of the components of the Antwerp storage federation

A 3PAR system can be a member of only one federation. HPE 3PAR SSMC 3.3 allows users to add a federated system to another federation as a unidirectional migration source. This enables data mobility across federations. Figure 4 shows an example of this concept where the system named aperol, belonging to federation Antwerp, was added as a migration source to the system named melba in federation Brussels. Connectivity to multiple members in different federations is supported.

Figure 4. Layout of the expanded storage federation Brussels

The logical interconnection between federation members and their attached migration sources is informal: adding a system to or removing it from a storage federation is live and nondisruptive, both for the system itself and for systems already in or attached to the federation. Storage Federation supports mobility from and to any drive type and any provisioning type, including deduplication and compression where available.

The underlying technology for data mobility between HPE 3PAR Storage Federation members in the SSMC is an enhanced version of HPE 3PAR Peer Motion software. In this version, one pair of Peer links operates in each direction between any two systems that are federation members. This way, a federation member acts simultaneously as a source and a destination system, allowing concurrent, bidirectional, any-to-any volume migration between members.
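To make the link count concrete, here is a minimal Python sketch (member names are hypothetical) of the Peer-link model described above: each ordered source/destination pair of members carries one pair of Peer links, so a federation of n members uses n*(n-1) Peer-link pairs.

```python
from itertools import permutations

def peer_link_pairs(members):
    # One pair of Peer links per direction between any two members:
    # each ordered (source, destination) pair carries one Peer-link pair.
    return list(permutations(members, 2))

members = ["melba", "aperol", "split"]   # hypothetical member names
pairs = peer_link_pairs(members)
print(len(pairs))                        # -> 6, i.e., n*(n-1) for n = 3
```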


Data mobility between storage federation members requires no zone changes when reversing the direction of data migration or when changing the source or destination in a subsequent migration. This is thanks to the "wire once, move anywhere" concept, in which the SAN zoning required for data mobility between federation members is defined and activated up front or at creation time of the federation. SAN administrators do not need to get involved in subsequent migration operations, so application data migration becomes a routine activity that is easy to perform. A migration can be executed on the day and at the time the storage and application administrators choose, not when the SAN administrators can make the requested zoning changes. This SAN zoning can stay in place indefinitely to move volumes at any time during the life cycle of the systems.

HPE Smart SAN for 3PAR is available to create and enable the zoning required for a system to join a federation as a member or as a migration source. Hosts should be zoned to all federation members to enable movement of any volume to any member.
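The "wire once, move anywhere" zone set can be modeled with a short Python sketch. This is an illustration only, with hypothetical WWNs and system names; in practice the SSMC lists the required Peer and host ports, and Smart SAN or the SAN administrator creates the zones. The sketch enumerates single-initiator, single-target (SIST) zones connecting every host HBA port to every federation member's host ports:

```python
from itertools import product

# Hypothetical WWNs and system names, for illustration only; the real values
# come from the SSMC, which lists the ports that must enter the zones.
host_hbas = {"sql01": ["10:00:00:90:fa:00:00:01", "10:00:00:90:fa:00:00:02"]}
member_ports = {
    "melba":  ["20:01:00:02:ac:00:00:10", "21:01:00:02:ac:00:00:10"],
    "aperol": ["20:01:00:02:ac:00:00:20", "21:01:00:02:ac:00:00:20"],
}

def wire_once_zones(host_hbas, member_ports):
    """Enumerate SIST zones that connect every host HBA port to every
    member's host ports. Once these zones are activated, volumes can
    move between any members with no further SAN changes."""
    zones = []
    for host, hbas in host_hbas.items():
        for system, ports in member_ports.items():
            for i, (hba, port) in enumerate(product(hbas, ports), start=1):
                zones.append((f"z_{host}_{system}_{i}", hba, port))
    return zones

for name, initiator, target in wire_once_zones(host_hbas, member_ports):
    print(name, initiator, target)
```

Because the zone set covers every host-to-member combination up front, later migrations in any direction find their paths already in place.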

A volume can be migrated to only one destination, but within one migration definition, different volumes can be migrated to different destinations. Multiple migrations can be initiated concurrently between any federated systems and migration sources, in the same or opposite directions. The data of up to nine volumes, possibly coming from more than one source system, is transferred in parallel to a destination system. The import task for a volume beyond these first nine enters a queue on the destination system and becomes active when a running task completes. Every destination system has its own queue.
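The per-destination queueing behavior can be pictured with a small Python model. It is a toy sketch of the rule stated above (at most nine import tasks transfer in parallel per destination; later tasks wait and start as running ones complete), not 3PAR code:

```python
from collections import deque

MAX_ACTIVE_IMPORTS = 9  # up to nine volumes transfer in parallel per destination

class DestinationQueue:
    """Toy model of the per-destination import task queue: at most nine
    import tasks run at once; later tasks wait in a queue and become
    active as running tasks complete."""
    def __init__(self):
        self.active, self.pending = set(), deque()

    def submit(self, volume):
        if len(self.active) < MAX_ACTIVE_IMPORTS:
            self.active.add(volume)
        else:
            self.pending.append(volume)

    def complete(self, volume):
        self.active.discard(volume)
        if self.pending:
            self.active.add(self.pending.popleft())

q = DestinationQueue()
for i in range(12):                    # 12 requests: 9 run, 3 queue
    q.submit(f"vv{i:02d}")
print(len(q.active), len(q.pending))   # -> 9 3
q.complete("vv00")
print(len(q.active), len(q.pending))   # -> 9 2
```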

Note
A migration definition is the list of all attributes for a Peer Motion or Online Import migration, such as volumes, destination system name and common provisioning group (CPG), priority, and so on. These attributes are selected in the SSMC, as presented in Figure 9.

SSMC provides the option to keep the migrated volumes on the source system after a successful migration; the default is to remove them. Some restrictions to this removal apply; consult the HPE 3PAR Peer Motion and HPE 3PAR Online Import User Guide for more information.

Storage Federation supports online, minimally disruptive, and offline data mobility between its members and attached migration sources. Minimally disruptive migration (MDM) is mainly for Windows® clusters larger than four members. In MDM, Asymmetric Logical Unit Access (ALUA) hosts undergo a single, short outage before the data import occurs to ensure that SCSI reservations taken by the Windows cluster software are migrated properly. For non-ALUA hosts, unzoning the host from the source 3PAR is required. In an offline migration, the migrated volumes remain unpresented to the host for the duration of the migration.

You can select the following objects in the SSMC for migration between storage federation members:

• A single host

• A single host set

• One, some, or all volumes presented to a host or host set

• A single virtual volume set (VVset)

• A Remote Copy group

Selecting a host or a host set includes all volumes presented to it in the migration definition. When you select a Remote Copy group for migration, all volumes of the group are included in the data transfer. If a selected volume is part of a Remote Copy group, you must migrate the entire group it belongs to.

To migrate a cluster, select the host set containing the cluster members. All shared and nonshared volumes exported to the cluster members are selected implicitly and included in the migration. Make sure to create the host set on the destination 3PAR before starting the Online Import operation. To migrate a subset of the shared volumes, select a VVset or an individual volume in the cluster. If no host set is in use, select the VVset the shared volumes are in. Nested host sets are not supported.

VVsets on the source array can be migrated one at a time; the VVset is recreated on the destination HPE 3PAR system. An optional consistency group expedites rollback in case the migration does not complete. If the host operating system supports ALUA and has an ALUA-capable persona on both the source and destination 3PAR, volumes can be selected one by one for migration.
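The selection rules above can be summarized in a short, hedged Python sketch. All names and the inventory layout are hypothetical; the point is how a pick in the SSMC expands to the set of volumes that enters the migration definition, including the rule that a Remote Copy group always travels as a whole:

```python
# Hedged sketch of the selection rules (names are hypothetical). Given what
# the admin picked in the SSMC, expand the pick to the full set of volumes
# that would actually enter the migration definition.
def expand_selection(kind, name, inventory):
    if kind in ("host", "host_set"):
        # Selecting a host or host set pulls in every volume presented to it.
        vols = set(inventory["presented"][name])
    elif kind == "rc_group":
        vols = set(inventory["rc_groups"][name])   # whole group, always
    elif kind == "vvset":
        vols = set(inventory["vvsets"][name])      # one VVset at a time
    else:                                          # individual volume
        vols = {name}
    # A selected volume that belongs to a Remote Copy group drags in the group.
    for members in inventory["rc_groups"].values():
        if vols & set(members):
            vols |= set(members)
    return vols

inventory = {
    "presented": {"sql01": ["vv_data", "vv_log"]},
    "rc_groups": {"rcg_sql": ["vv_data", "vv_log", "vv_tempdb"]},
    "vvsets":    {"vvset_sql": ["vv_data", "vv_log"]},
}
print(sorted(expand_selection("volume", "vv_data", inventory)))
# -> ['vv_data', 'vv_log', 'vv_tempdb']  (whole Remote Copy group included)
```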


With multiple concurrent data migrations possible between storage federation members and their migration sources, the need for a priority scheme became apparent: some migrations are more critical than others and should take precedence even though they entered the import task queue later. Peer Motion in the SSMC implements a three-level priority scheme for every migration task created. The default import task priority for a volume is medium; high and low are the other options. The queue of import tasks scheduled for migration is continuously rearranged to expedite higher-priority tasks that arrived later. Once active, an import task is not preempted by a higher-priority one. The Best practices section in this white paper adds more details about prioritizing Peer Motion import tasks.
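A minimal Python sketch of such a three-level scheme follows; it is a model of the described behavior, not the actual implementation. Later high-priority arrivals jump ahead of earlier low-priority entries in the queue, but a task is dispatched only when a running import completes, so active tasks are never preempted:

```python
import heapq
from itertools import count

PRIORITY = {"high": 0, "medium": 1, "low": 2}  # three levels; medium is default
_seq = count()    # FIFO tie-breaker within the same priority level

pending = []      # the import task queue for one destination system

def enqueue(volume, priority="medium"):
    # Later high-priority tasks sort ahead of earlier low-priority ones,
    # modeling the continuous rearrangement of the queue described above.
    heapq.heappush(pending, (PRIORITY[priority], next(_seq), volume))

def next_task():
    # Called only when a running import completes: an active task is
    # never preempted by a higher-priority arrival.
    return heapq.heappop(pending)[2] if pending else None

enqueue("vv_archive", "low")
enqueue("vv_reports")            # medium (default)
enqueue("vv_oltp", "high")       # arrived last, dispatched first
print(next_task(), next_task(), next_task())
# -> vv_oltp vv_reports vv_archive
```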

Benefits of Storage Federation
The data migration technology behind Storage Federation is the next-generation Peer Motion software, which permits two independent Peer link pairs to connect two 3PAR systems in a federation. SSMC uses this enhanced capability to execute concurrent, bidirectional migrations between members of the same federation with one mouse click.

All SAN zoning used to interconnect systems in a federation is created and enabled before or when the federation is established. As a result, no zoning changes are needed for migrations between federated systems and their migration sources. This empowers less-experienced HPE 3PAR administrators or application managers to execute migrations. The required SAN zones can be created manually using multiple-initiator, multiple-target zones or by using the Smart SAN functionality embedded in SSMC.

SSMC can easily synchronize Lightweight Directory Access Protocol (LDAP), SNMP, Network Time Protocol (NTP), syslog settings, users, domains, hosts, and so forth across federation members and from migration sources to the federation. Without automation, this activity can be time-consuming and error-prone. Together with the easy setup and management of a federation and the effortless data mobility, this is a strong incentive for building storage federations in the data center.

Comparing Storage Federation with appliance-based storage migration tools
The multiarray, bidirectional data mobility of Storage Federation offers unique advantages compared to appliance-based storage migration tools from other vendors.

Ease of installation
The Storage Federation software subsystem is built into HPE 3PAR OS 3.2.2 and later; no postinstallation is needed. A valid Peer Motion license on every system intended to enter the federation is all that is necessary. Installing and configuring an external migration appliance, potentially complemented by virtualization of the storage systems behind it, involves expensive professional services because of its convoluted composition. Storage Federation offers an appliance-less data mobility concept that saves system administrators weeks per year in setup and maintenance time.

Ease of management
SSMC adds a 3PAR system to, or removes it from, a federation nondisruptively for the other federation members. The same applies to migration sources. In contrast, data migration appliances add extra layers of management and complexity. Adding or removing a storage system from a virtualized storage environment is complicated because of the tight coupling between the arrays and the appliance executing the virtualization. Users can initiate Storage Federation migrations in one step using the SSMC interface, which they already use for volume and host management.

Appliances have a user interface that differs from the array management tools, imposing a time-consuming learning curve on storage administration staff. In this context, executing migrations using an appliance is not a routine affair; it is typically reserved for professional services staff.

Ease of licensing
The Peer Motion license for new HPE 3PAR StoreServ 8000, 9000, and 20000 systems is bundled in the All-inclusive Multi-System Software license. This license is frame-based and valid for the lifetime of the array. Peer Motion for HPE 3PAR StoreServ 7000 and 10000 systems is bundled in the HPE 3PAR Data Optimization Software Suite v2. There is no licensing per TiB moved with Storage Federation, unlike with some appliance-based migration tools. If not done at the factory, installing the license key is all it takes to enable Peer Motion on a 3PAR system.


Performance
Migration in a storage federation is point-to-point between the member systems, resulting in the shortest possible path between the participating storage systems. This keeps latency low compared to an external appliance that operates inside the data path.

Retain data services
Appliances and storage system virtualization might render certain advanced or unique functions of the array inaccessible to hosts. When a 3PAR system is a member of a storage federation, it retains all its functions: snapshots, provisioning, replication, tiering, HPE 3PAR Priority Optimization (Quality of Service), and more. In fact, by joining a federation, a system's data services are expanded by the N:M concurrent, bidirectional migration option.

Reduction of cost
Appliance-based solutions use specialized hardware devices, sometimes mixed with industry-standard servers. Because appliances sell in low volumes, a substantial acquisition cost for hardware, software, and licenses is not uncommon. Not always considered during the planning stage is the considerable expense for floor space, power, and cooling that the appliance stack imposes in the data center. Some appliances are closed, mandating the vendor's intervention to make modifications, which decreases agility and raises the cost of ownership.

Comparison summary
Storage Federation requires no installation, is managed from the SSMC, offers native array-to-array Fibre Channel wire-speed performance, and provides unrestricted access to all array features such as hardware-assisted HPE Thin Provisioning and space reclamation, consistency groups, Quality of Service, unique snapshot capabilities, and replication protection during the migration. This makes Storage Federation clearly superior in features, functionality, and cost compared to migration appliances.

Use cases
Storage Federation is a versatile product with applications beyond the mere migration of data between storage systems. This section describes several use cases that deliver novel, Storage Federation-based services to customers.

Simplicity of migration setup and operation
A storage federation is created and managed from the SSMC. The storage administrator moves volumes bidirectionally across federation members. The SAN zoning layout is set up once at creation of the federation. No zoning changes are required for N:M bidirectional data mobility between any federation members. SSMC conveniently lists the World Wide Names (WWNs) of all Peer and host ports that need to enter the federation zones.

Using the SSMC, Smart SAN reduces the zoning effort typically required to add a system to or remove it from a federation. This dramatically decreases the time and errors involved in SAN zoning work.

Manage multiple systems easily
For security and fault isolation reasons, some data centers install multiple storage systems for the same application, such as Microsoft® SharePoint, a web server, or a database. Keeping the configuration settings of multiple storage systems serving the same application identical is a manual job that requires strict bookkeeping. In an environment such as a storage federation, where systems are interlinked for data mobility, identical configuration data across all federation members is vital for unrestricted volume movement. SSMC copies the following items across federation member systems (a sketch of the underlying consistency check follows the list):

• User accounts

• User name and password

• Hosts and host sets

• Domain names with member users and member hosts

• Domain sets and parameters for LDAP, SNMP, and NTP

• Syslog servers
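The consistency check behind this synchronization can be sketched in a few lines of Python (hypothetical member names and a reduced set of categories): items present on one member but missing on another are flagged for copying.

```python
# Minimal sketch of a federation-wide configuration consistency check.
# Identical hosts, users, and so forth on every member are a prerequisite
# for unrestricted volume movement; SSMC performs the real synchronization.
members = {
    "melba":  {"hosts": {"sql01", "esx01"}, "users": {"admin", "ops"}},
    "aperol": {"hosts": {"sql01"},          "users": {"admin"}},
}

def sync_plan(members):
    plan = {}
    for category in ("hosts", "users"):
        union = set().union(*(m[category] for m in members.values()))
        for name, cfg in members.items():
            missing = union - cfg[category]
            if missing:
                plan.setdefault(name, {})[category] = sorted(missing)
    return plan

print(sync_plan(members))
# -> {'aperol': {'hosts': ['esx01'], 'users': ['ops']}}
```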


SSMC 3.3 introduces a checkbox to create a new host on all federation members simultaneously. This reduces the number of steps and potential data entry errors for this type of operation. With SSMC as the central management console, keeping identical configuration data across multiple systems for federated data mobility has never been easier.

Load balancing workloads
Commercial workloads can consume a varying amount of storage system resources depending on the time of day (day or night) and the time of year (end-of-quarter activities, holiday season sales). Resource consumption can also change over the application's lifetime (growth in users or database size). It is common practice to size a storage system for an application's peak resource consumption, possibly exhibited only once a year. This is an inefficient use of financial and other resources.

Storage Federation offers multiarray, bidirectional data mobility without SAN zoning changes or downtime. Moving an application to a more powerful platform once or even a few times a year becomes feasible without extensive forward planning. Applications on a heavily loaded system can easily be moved to another system temporarily to free up resources for an application that just became busier. Load balancing applications across federation members is executed efficiently with one click in SSMC, resulting in reduced capital outlay and improved system utilization.

HPE 3PAR asset management made easy
Customers typically retire storage equipment on three- or four-year cycles, so each year the data on one-quarter to one-third of the storage devices is moved because of system retirement. At large enterprises, several projects for asset renewal run in parallel every day. Even when staying with the same vendor, moving data to a new storage system can be challenging, and the process might require professional services if the new unit is very different from the decommissioned one.

All 3PAR systems (entry level, midrange, and high end) share the same architecture, concepts, and user interface. This commonality, combined with Storage Federation, enables you to replace retiring 3PAR arrays seamlessly. The new 3PAR is inserted into the storage federation in a live and nondisruptive manner that does not affect the existing federation members. Importing the federation's configuration data (hosts, domains, user accounts, LDAP, NTP, and so forth) in SSMC takes a few clicks, expediting the preparation of the new federation member. Migrating data from the retiring system to the new member is nondisruptive for applications and does not add overhead to the hosts. Removing the decommissioned system from the federation is nondisruptive as well. Overall, access to the data is uninterrupted during the entire operation.

Legacy HPE 3PAR and third-party asset refresh made easy
Compared to other vendors, 3PAR storage systems have decisive advantages. These include default wide striping, a single OS and unified user interface across the entire portfolio from entry-level to high-end and all-flash arrays, unique scalability, industry-leading Thin Provisioning with hardware-assisted space reclamation, a broad software suite, and superior replication options. SSMC supports the migration of legacy F- and T-class 3PAR systems to modern ones. It also supports the IBM XIV storage system as a migration source to a federated 3PAR storage array. You can use the graphical interface to relocate data on the IBM XIV to any current 3PAR system in an online, minimally disruptive, or offline fashion. No external appliance is introduced to manage the migration.

No migration process overhead is added to the hosts because the physical migration path is array-to-array, a concept that reduces complexity as well. Additional advantages of moving to 3PAR storage include the simple and intuitive steps for performing a migration in the SSMC, and the presence of a perpetual license for Online Import in the All-inclusive Single System Software licensing scheme for HPE 3PAR 8000, 9000, and 20000 systems.

Zero footprint data mobility
3PAR competitors offer data migration solutions based on an external appliance that fills almost a full-height rack of floor space, includes multiple industry-standard servers complemented by custom-built hardware devices, and consumes many Fibre Channel and network ports in the data center. The Storage Federation subsystem, supported with SSMC 2.2 and later, is built into HPE 3PAR OS 3.2.2 and later; no postinstallation or external appliance is necessary, and no client software must be installed on the migrating hosts. This saves space, power, and cooling, which are of increasing importance in the data center today. Storage Federation is the only migration technology that delivers true zero footprint data mobility.


Optimize system utilization
Workloads can be classified by the resources they consume on the storage system. Example workloads are:

• Random read-intensive (web servers)

• Sequential read-intensive (video streaming)

• Mixed random read/write intensive (OLTP operations to databases)

• Sequential write-intensive (filling transaction logs, disk-to-disk backup, or video recording)

Sequential workloads fit nearline (NL) drives; random read workloads excel on solid-state drives (SSDs). Random write workloads perform well on SAS and solid-state drives. Dynamic tiering between two or three disk types can improve performance of some workloads. Choosing the right mix of drive types is a challenging task when multiple different workloads execute on the storage system. Moreover, although the types and number of disk drives might match the initial set of workloads and their intended performance, this might change after 6 to 12 months.

Setting up a storage federation of multiple 3PAR systems with a mix of drive types across its members solves this type of issue. For example, consider an application that was brought into service one year ago on a system with Fast Class drives only. HPE 3PAR System Reporter Region I/O Density reports showed the application benefited from a two-tiered SSD and Fast Class drive setup administered by Adaptive Optimization. Instead of acquiring SSDs for the system, the application was moved within hours to another federation member that had the two disk tiers installed.

Applications executing in a storage federation framework are placed with minimal admin effort where they best utilize a federation member’s resources regarding controller nodes (node count, CPU power, cache size, and ASIC generation), disk type (SSD, Fast Class, and NL), and I/O bandwidth (number and speed of Fibre Channel interfaces). If the resource consumption or service-level objective of the application changes over time, moving the application to a more appropriate federation member is a routine operation that does not require downtime or an intervention from the SAN zoning administration team. By configuring Storage Federation, the storage administrator gains seamless data mobility on a routine basis.

Accommodate unplanned workloads

Data center planners continually anticipate uncertainty in their environments. Planning application data placement and sizing systems has become more of an art than a calculated strategy, given the ever-growing number of applications to be supported and their ever-increasing consumption of disk space, input/output operations per second (IOPS), and throughput. Storage Federation eases this organizational headache. A federation can consist of different 3PAR storage models, so any current model can enter a federation with any other type of system. This permits a customer to start with the 3PAR model of choice and add a different model to the federation at a later stage. For example, a federation can comprise an entry-level 3PAR 8200 system and an all-flash two-node 3PAR 9450 model. The federation can expand over time by adding a four-node 3PAR 9450 system and a high-end eight-node 3PAR 20840 R2 system containing a mix of spinning and solid-state drives. This federation can handle nearly every conceivable workload efficiently on one of its members. Workload mobility to a federation member with NL drives when project data goes into archive mode, or to SSDs when an application scales up, is seamless and requires no coordination with the application owners or the network and SAN administrators.

Enhanced scalability

The reliability of disk arrays has grown to very high levels thanks to the fault-tolerant features of RAID and power distribution. Application data protection is further enhanced by efficient and reliable methods for volume replication to one or two secondary storage systems. Therefore, it is best practice to consolidate multiple applications serving tens to hundreds of hosts on a single storage system with replication enabled. 3PAR systems exhibit the required scalability for this consolidation exercise: the number of controller nodes can be upgraded from two to four in the midrange models and from two to eight in the high-end ones. CPU power, cache amount, host I/O bandwidth, and data replication capacity scale appropriately with the number of controller nodes. Storage Federation extends this massive scalability by presenting a collection of storage arrays as a single entity, which the customer can manage through a unified interface. This makes managing multiple arrays as easy as managing a single array. The compute, store, and I/O resources across all federation members are available to applications. This way, a storage federation offers painless scalability to a massive capacity of 96 PiB and 15.2 million IOPS with bidirectional data mobility between its members. This level of scaling is many times higher than that of any competitive system.

Federated Thin Provisioning

Thin Provisioning has increased efficiency across the storage industry, driving up resource utilization and cutting capacity purchases, power and cooling costs, and administration time. However, Thin Provisioning remains managed at the level of a single storage array. Most customers happily overprovision the installed capacity of an array, but not its maximum installable capacity. The rationale is that extra capacity can be added as needed to return to a safe buffer, but not more than the array can hold. With Storage Federation, this equation changes: you can quickly and easily migrate overprovisioned volumes to another array in the federation that has enough capacity to accommodate the current and near-future growth rate of the volume. This means that the thin limit is managed at the federation level, allowing you to provision a single array well beyond its maximum installable capacity. This increases utilization to much higher rates on storage federation members while providing the benefits and savings associated with Thin Provisioning.
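As a rough illustration of managing the thin limit at the federation level, the following Python sketch computes each member's overcommit ratio and finds members with enough installable headroom to receive an overprovisioned volume. All array names and capacity figures are hypothetical examples, not measured values.

```python
# Illustrative sketch: thin-limit planning at the federation level.
# All names and capacity figures are hypothetical examples.

members = {
    # installed and maximum installable usable capacity, in TiB
    "3par-8200-a": {"installed": 200, "max_installable": 750, "provisioned": 640},
    "3par-9450-b": {"installed": 900, "max_installable": 1600, "provisioned": 700},
}

def overcommit_ratio(m):
    """Provisioned (thin) capacity relative to physically installed capacity."""
    return m["provisioned"] / m["installed"]

def candidates_for(volume_tib, growth_tib):
    """Members with enough installable headroom for the volume plus near-term growth."""
    need = volume_tib + growth_tib
    return [name for name, m in members.items()
            if m["max_installable"] - m["provisioned"] >= need]

for name, m in members.items():
    print(f"{name}: overcommit {overcommit_ratio(m):.2f}x")
print("Migration candidates for a 50 TiB volume growing 20 TiB/yr:",
      candidates_for(50, 20))
```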

Ultimate elasticity for cloud service providers

Cloud service providers offer a wide variety of server, application, and storage plans to potential customers. Providers always look for innovative data center deployment models to unlock new business opportunities while reducing cost. Storage Federation enables them to grow storage seamlessly to 96 PiB of tiered capacity with the elastic mobility of data to a higher- or lower-level plan type. This way, the service provider can offer superior flexibility to customers so they can change storage plans swiftly and frequently—an advantage that might generate new business in an agile market. At the same time, providers can save days of work for storage administrators on larger migrations.

Empowering storage and server administrators

Today’s trend in data center management is the fusion of task profiles of the server, networking, application, and storage administrator. For example, VMware® administrators can use the HPE Insight Control Storage Module or HPE OneView for VMware vCenter to create new volumes and datastores on a 3PAR system, assign them to new virtual machines, and bring those virtual machines into an HPE 3PAR Peer Persistence relationship. These tasks can be performed from within the familiar vCenter interface. This allows VMware administrators to become self-sufficient when executing tasks on the storage side. Removing the need to consult and coordinate with local or remote SAN and storage administration teams simplifies and accelerates the daily routine of VMware administrators.

Storage Federation takes this independence a step further. When the data mobility direction between two federation members is reversed, or when the source or destination in a subsequent data migration operation is changed, no SAN zoning changes are required, precluding the need to contact the SAN administration team for zoning modifications. This technology empowers server, networking, and application administrators who have some 3PAR experience to decide when to execute migrations without involving storage or SAN team members. Data mobility becomes a routine activity, executable by both experienced and inexperienced 3PAR administrators. This provides the foundation for agile data management.

Save disk space

Modern disk arrays support deduplication and compression. Deduplication is a technique that eliminates the need to store duplicate copies of repeating incoming data while ensuring that data remains correct and complete. Lossless data compression re-encodes a file’s data to require less storage space than the original file. Both techniques make it possible to store more data in less space.

HPE 3PAR OS 3.3.1 supports inline deduplication and compression on SSDs. Peer Motion in SSMC 3.3 supports migrating volumes from legacy 3PAR systems to deduplicated and compressed volumes on 3PAR arrays with SSDs. This reduces the physical disk space needed on the federated destination system, which lowers capital costs, driving the price of solid-state storage down to the spinning-disk level while increasing performance and reducing power consumption of the disk array.

Save administration time

With more petabytes than ever to manage, storage administrators welcome any technology that reduces time spent on daily tasks and larger projects such as migrations for asset refreshes. Storage Federation saves time for storage administrators in the following ways:

• Asset refresh of federation members is orchestrated from the SSMC in a few steps. Administrators are not required to master CLI commands, install software on the hosts under migration, or install an appliance.

• The SAN zoning for storage federation is set up when the federation is created or when a member is added or removed from the federation. No zoning changes are required for N:M data mobility between any federation members. This means that, from a storage array and SAN zoning perspective, the time required to prepare a data migration between federation members is significantly reduced.

• Preset SAN zoning eliminates the need to coordinate with the SAN support team to determine a time window for zone changes. This eliminates unnecessary communication and frees up time for the migration and SAN teams.

• Data mobility between federation members is nondisruptive and transparent for applications. This eliminates the need to negotiate with the application owners regarding when to transition to another storage system.

• SSMC handles synchronization of the configuration parameters between federation members and between a migration source and federation members—a chore that is time-consuming and error-prone.

These features save hours or even days every month for storage administrators.

Save money

With the ever-expanding amount of data that must be kept online, the cost of storage increases. Advances in Thin Provisioning, compression, and deduplication have optimized the consumption of physical disk space. At the data center level, storage systems are frequently overdimensioned when purchased. Some systems never approach their installed capacity, while others need a capacity upgrade using the very drive types that the underused systems hold in excess. The obvious suggestion to relocate disks physically is not always an option because of incompatible architectures or because of an application’s complexity. Migrating the application to a more suitable storage system can be such a formidable task that the application owners decide to purchase new disk hardware rather than undertake a risky and complex application migration maneuver. Storage Federation provides savings in this environment. The following points demonstrate how Storage Federation can reduce capital expenses:

• Storage systems serving applications with a high variation in seasonal storage resource consumption can be sized for their steady state load. An application can be moved temporarily to a more powerful system during the peak season. Capital expenditure decreases because storage systems do not need to be sized for their peak resource consumption. The temporary migration and eventual return to the original system becomes a routine operation for storage system administrators.

• An expensive professional services team from the storage vendor is not required. Moderately skilled storage administrators can migrate data between federated systems.

• Storage Federation offers data mobility without an external vendor appliance. This saves space, power, and cooling costs in the data center. It also reduces time spent on security investigation and administration, and prevents application downtime when bringing new devices into the data path.

• Moving infrequently used data nondisruptively to NL drives on a storage federation system decreases data retention costs.

• Unused storage capacity on overdimensioned systems can be used by relocating applications from overloaded or overprovisioned systems.

• New systems can easily replace end-of-support or end-of-lease systems without complex planning. This results in substantial savings by eliminating extended support and lease costs from delayed migrations.

Features

This section describes the features of SSMC 3.3 and HPE 3PAR OS 3.3.1 that affect Storage Federation.

Federation section in the SSMC main menu

SSMC 3.2 introduced a Federations section in the main menu, as highlighted in Figure 5. The Federation Configurations and Peer Motions options can be accessed directly from the main menu.

Figure 5. Federation menu in SSMC 3.3

Figure 6 shows the Peer Motions screen with ongoing, completed, and aborted migrations on the left side and details on the right side, which improves use of screen space. This screen layout is similar to the layout for other objects such as volumes and hosts. Users can select multiple Peer Motion entries and filter for views of Peer Motion migrations by virtual volume, volume set, and activity. Users can also customize the columns by clicking the dark triangle at the top of the Peer Motion migration list.

Figure 6. Peer Motions screen in SSMC 3.3

To remove a record listed on the Peer Motions screen, click Delete from the Actions menu. With SSMC 3.2 and earlier, this removal fails if the federation in which the migration was executed no longer exists in the SSMC. With SSMC 3.3, you can delete a migration even if its federation was removed.

Federation auto-detection

SSMC 3.0 and later supports auto-detection of an existing federation. For example, suppose you created a federation in one SSMC instance and then set up a secondary SSMC instance. After you add the same 3PAR systems to the new instance, the federation and its migration sources are auto-detected and displayed in the SSMC based on state information residing on the federation member systems. The federation state for a migration source is stored on the federation member it is connected to. The secondary SSMC instance loads the previous and ongoing Peer Motion activities from each of the federation members into the Activity log.

Federation filtering

SSMC 3.3 offers a new filter for federations. To access the filter, click All systems on the top right of the Federation Configurations screen and select By Federation. This limits the scope of the SSMC to systems that belong to a particular federation. You can select multiple federations. This reduces the size of the data tables shown in the SSMC when displaying object types with many items, such as volumes, hosts, disks, ports, or systems. Figure 7 shows an example of a filter selecting the systems of federation Brussels.

Figure 7. Filter to select systems belonging to federation Brussels

Alternatively, you can use the search field near the top of the screen to filter for federations. Enter part or all of the federation name and press Enter to start the search (see Figure 8). Note that filtering is case-insensitive.

If member systems in a federation are excluded from the view, the federation is still displayed on-screen but the excluded systems are no longer underlined. In a two-system federation, Peer Motion to a missing system in the filter produces an error stating that the system cannot be selected as a destination. In a three- or four-system federation, you cannot select the excluded system as a destination.

Figure 8. Filter menu for federations

Improved orchestration

The Peer Motions screen allows you to toggle from basic to advanced mode by selecting the Advanced options checkbox (see the top-right section of Figure 9). Most users can operate in basic mode, which is the default. The Advanced options menu offers three extra choices if the destination HPE 3PAR OS is earlier than 3.3.1 and four extra choices with 3.3.1 and later. These options are shown in the red rectangle in Figure 9. Table 2 explains each of the four options. Note that only HPE 3PAR OS 3.3.1 and later display the Start at setting.

Table 2. Advanced options

Field | Description

Delete source virtual volumes | This option is set to Yes by default. When this field is set to No, the migrated volumes are kept on the source system after a successful migration. Note that a migrated source volume becomes stale immediately after the migration completes.

MDM | This option is set to No by default. If Peer Motion requires a host migration with the MDM method, this option becomes available and can be changed to Yes. This allows the user to shut down the host, make SAN zoning changes and configure multipath input/output (MPIO), reboot, and then click Continue/Resume on the Peer Motions page.

Pause Peer Motion before starting data movement | This option is set to No by default. When this field is changed to Yes, online Peer Motion is suspended before importing virtual volumes, even if a host rescan or MPIO reconfiguration is not needed. This can be used to forcefully pause the migration after a successful admit phase.

Start at | Clicking the empty field opens a calendar application to schedule the import of volumes at a predefined time. The scheduling granularity is days, hours, and minutes. See the Scheduled migrations section in this white paper for more details.

Figure 9. The Advanced options menu with four extra fields

SSMC 3.3 introduces a checkbox on the Create Host page that allows users to add a host to the selected system and to all other members of its federation. Figure 10 shows this option.

Figure 10. The checkbox used to add a host to all systems in the federation simultaneously

When users build a federation, they can select only Smart SAN enabled systems. Figure 11 shows this checkbox.

Figure 11. The checkbox to select only Smart SAN enabled systems to build a federation

With Smart SAN, the SAN zoning required for the federation is created automatically; otherwise zoning must be created manually. When you are editing an existing federation, you cannot enable the Smart SAN checkbox if any systems are not reachable or are filtered out based on the selected view. You also cannot select the checkbox if one of the systems does not have a Smart SAN license or if the host ports are not enabled for Smart SAN.

Managing federated systems

Federated 3PAR systems are installed and managed in the same way as nonfederated ones. For example, the following operational activities are performed in the same way as on nonfederated systems:

• Creating hosts, CPGs, volumes, and snapshots

• Tuning CPGs with Dynamic Optimization

• Tiering with Adaptive Optimization

• I/O balancing with Priority Optimization and Adaptive Flash Cache

• Reclaiming space for Thin Provisioning

• Replicating with Remote Copy

The same holds for hardware maintenance activities such as adding, removing, and replacing disks and disk enclosures, controller nodes, and interface cards. All best practice guidance for managing a 3PAR system and the applications it serves applies to federated systems.

Unlimited data mobility

SSMC is the orchestration tool for 1:1, 1:M, N:1, and N:M bidirectional data mobility between storage federation members. SSMC imposes no limit on the number of migrations executed concurrently between federation members. Volumes can be moved back and forth between the same or different federation members an unlimited number of times. There is also no limit to the number of unidirectional import operations from migration sources into a federation member.

Very large volumes

With HPE 3PAR OS 3.3.1, the supported maximum size for fully and thinly provisioned volumes increased from 16 TiB to 64 TiB. SSMC 3.3 Peer Motion supports migrating volumes of this size. IBM XIV supports volumes of approximately 160 TiB; when an XIV system is configured as a migration source to a federated system, volumes up to 64 TiB can be migrated to a 3PAR running HPE 3PAR OS 3.3.1 using SSMC 3.3. With HPE 3PAR Online Import Utility, volumes on Hitachi Vantara, Dell EMC, and IBM XIV source arrays have a maximum volume size of 16 TiB.

Migration resilience

The Actions menu of the Peer Motions screen features a Retry button to restart the migration for volumes that did not completely transfer. A retry moves the entire volume again from the beginning of its migration. Retries succeed only if the original error was transient or has been fixed.

Configuration synchronization

SSMC can import and synchronize storage system configuration data between federation member systems and between migration sources and their federated systems. This synchronization includes user accounts, hosts, domains, and the parameters for LDAP, SNMP, and NTP. Keeping this data synchronized is vital for unrestricted data migration across federation members.

SSMC 3.3 offers the synchronization of multiple remote syslog servers and security log configurations between federation members.

Automatic host rescan

For an online migration between 3PAR storage systems, SSMC performs the typical admit phase, making the source’s migrating volumes visible to the destination 3PAR. In HPE 3PAR OS 3.2.1 and earlier, the Peer Motion migration workflow requires a manual rescan on the host, or on all hosts in the cluster, after the admit phase. This manual rescan discovers source volumes on the new paths from the host to the 3PAR destination and necessitates coordination between host and storage administrators. HPE 3PAR OS 3.2.2 and later offers an automatic rescan feature to facilitate one-click online migration to the 3PAR destination. At the end of the admit phase, the automatic rescan process issues a SCSI Loop Initialization Protocol (LIP) command to the host’s Fibre Channel ports. This causes the host to update its SCSI layer to reflect the devices currently on the bus. The LIP command completes in seconds and is successful if subsequent SCSI READ, WRITE, or PROUT commands are seen over each of the new destination paths to the Peer volumes. These SCSI commands indicate ongoing host I/O via the new paths to the source volumes. If these commands are observed, SSMC infers a successful rescan on the host and continues by executing the volume import. This means Peer Motion completes the entire migration operation without user intervention.

If these commands are not observed, SSMC prompts for a manual rescan. After this rescan, you can select the Resume option from the Actions menu to proceed. Then SSMC performs a sanity check to ensure the rescan happened by looking for SCSI TUR, INQ, MAINT_IN, READ, WRITE, READ_CAPACITY, and other commands over each new path to the Peer volumes. If successful, the volume import task starts. If none of these checks returns success, the SSMC does not start the data transfer but again prompts for a manual disk rescan specifying a 3PAR port number. After the rescan, click Retry in the Actions menu of the Peer Motions page to proceed to the import phase.
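The rescan verification described above can be summarized as a small decision flow. The following Python sketch is illustrative only; the helper names and data structures are hypothetical, and the real checks run inside SSMC and HPE 3PAR OS, not in user code.

```python
# Minimal sketch of the rescan-verification flow described above.
# Helper names and data structures are hypothetical.

IO_COMMANDS = {"READ", "WRITE", "PROUT"}
SANITY_COMMANDS = {"TUR", "INQ", "MAINT_IN", "READ", "WRITE", "READ_CAPACITY"}

def paths_verified(observed_per_path, required):
    """True when every new destination path has seen at least one required command."""
    return all(observed & required for observed in observed_per_path.values())

def after_admit(observed_per_path):
    # Automatic rescan: a LIP was issued; host I/O is expected on the new paths.
    if paths_verified(observed_per_path, IO_COMMANDS):
        return "start volume import"
    return "prompt for manual rescan, then Resume"

def after_resume(observed_per_path):
    # Sanity check performed after the user clicks Resume.
    if paths_verified(observed_per_path, SANITY_COMMANDS):
        return "start volume import"
    return "prompt for manual rescan again, then Retry"

# Example: both new paths carried host reads or writes, so import starts.
print(after_admit({"path-0": {"READ", "WRITE"}, "path-1": {"WRITE"}}))
```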

All operating systems listed in the Peer Motion Bidirectional Data Mobility Host Support Matrix except AIX are certified for automatic rescan. This matrix is available from HPE Single Point of Connectivity Knowledge (SPOCK). The Path Verify Enabled MPIO setting on Windows stand-alone and clustered host systems must be enabled for automatic rescan to be successful.

Minimally disruptive migration always requires a manual rescan after the SAN zoning changes.

Scheduled migrations

Although the impact of Peer Motion migrations on host applications is marginal, some customers prefer to execute the data transfer during a quiet period. For storage administrators, this usually requires working evenings or weekends. SSMC 3.1 or later combined with a destination 3PAR array on HPE 3PAR OS 3.3.1 offers the option to define a future start time for data transfers. To set a start time, select the Advanced options checkbox in the Peer Motion workflow screen and click the Start at field to provide the details (see Figure 9). This delayed migration start is available for online and offline migrations and for federation members and 3PAR migration sources, including legacy 3PAR systems. Scheduling import tasks from third-party systems is not possible.

In a typical scenario, the storage administrator selects the host or individual volumes and defines the Peer Motion by selecting the destination system, destination CPG, priority, and so on. If nothing is entered in the Start at field, the Peer Motion migration initiates immediately upon clicking the Start button. When you click the Start at field, a calendar pops up (see Figure 12).

Figure 12. Calendar interface for defining the start time for a Peer Motion migration

Select the year, month, and day. Then slide the Hour and Minute triangles to change the migration’s start time and click Done. The date and time must be within 365 days of the current day. Clicking the Start button initiates the admit phase. For an online migration, the user is prompted for a manual rescan of the SCSI bus on the migrating host or hosts if I/O activity is low or absent. After the manual rescan, the import tasks are created and scheduled with their task ID on the destination 3PAR; the delayed start can be hours or days later. When the volume import is scheduled but not yet started, host access to the volumes occurs over the destination 3PAR and the Peer links. The scheduled import task is listed in the SSMC Schedules screen. If you specify a past date and time and then click Start for the migration, the error in Figure 13 pops up, requesting that you select a future date and time.

Figure 13. Error message requesting that a future date and time be specified for a scheduled Peer Motion migration

The time specified in Figure 12 is adjusted for the time zone installed on the destination 3PAR system; a migration time of one hour in the future is executed in one hour on the destination system regardless of its local time zone.
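A minimal sketch of the scheduling rules stated in this section: the start time must be in the future and within 365 days, and it is evaluated against the destination array's clock. The function name is hypothetical, not part of SSMC.

```python
# Illustrative validation of a "Start at" value under the rules above.
from datetime import datetime, timedelta

def validate_start_at(start_at: datetime, destination_now: datetime) -> None:
    if start_at <= destination_now:
        raise ValueError("Select a future date and time (see Figure 13).")
    if start_at > destination_now + timedelta(days=365):
        raise ValueError("The start time must be within 365 days of the current day.")

# Example: schedule an import for one hour from now on the destination system.
now = datetime.now()
validate_start_at(now + timedelta(hours=1), now)
```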

Concurrent Peer Motion tasks

SSMC enforces no hard limit on the number of Peer Motion operations that can be started between federation members and between a federation member and a migration source. Volume import is controlled by an import task that executes within the destination 3PAR environment. Up to nine import tasks can execute concurrently; the rest are queued and become active when an ongoing task completes. There is no maximum number of import tasks that a destination system can queue. Queued import tasks are ordered by priority (high, medium, and low). Within each priority level, tasks are ordered by time of creation, with the oldest one executing first. Ongoing import tasks are not preempted for a higher-priority task.
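The queueing policy can be modeled as a priority queue. The sketch below is illustrative, assuming only the rules stated above (nine concurrent tasks, priority-then-age ordering, no preemption); the class and method names are invented for illustration.

```python
# Sketch of the import-task queueing policy described above.
import heapq
import itertools

MAX_CONCURRENT = 9
PRIORITY_RANK = {"high": 0, "medium": 1, "low": 2}

class ImportQueue:
    def __init__(self):
        self._heap = []                 # (priority rank, creation order, task id)
        self._seq = itertools.count()   # tiebreaker: oldest task first
        self.running = set()

    def submit(self, task_id, priority):
        heapq.heappush(self._heap, (PRIORITY_RANK[priority], next(self._seq), task_id))
        self._dispatch()

    def complete(self, task_id):
        self.running.discard(task_id)
        self._dispatch()

    def _dispatch(self):
        # Running tasks are never preempted; we only fill free slots.
        while self._heap and len(self.running) < MAX_CONCURRENT:
            _, _, task_id = heapq.heappop(self._heap)
            self.running.add(task_id)
```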

Migration throughput control

HPE recommends executing volume migrations between systems during a quiet period for I/O on the migrating volumes, on the migrating host, and on any hosts sharing the host ports of the Peer links. This reduces the impact of the data transfer on the involved applications. SSMC 2.2 and later allows users to control the impact directly; Peer Motion offers an option to specify a priority for the import task of a volume. Three levels of prioritization are available that directly affect the data transfer rate over the Peer links. A lower priority effectively throttles the Peer Motion migration traffic, returning I/O bandwidth to applications sharing the host ports. The lower priority lengthens the total transfer time, but this is acceptable given that data transfers happen online. When the highest priority is selected for volume import, the benefits of the destination 3PAR to the application become available sooner. The prioritization option permits the storage administrator to schedule migrations during working hours. The priority of a Peer Motion import task is specified when the migration is created, but it can be changed in the SSMC for an ongoing migration to optimize throughput or respond to a change in resource consumption. Refer to the Import task priority section in this white paper for more information.

Peer links over 16 Gb/s Fibre Channel

Although the impact of data migration upon hosts is negligible, most customers prefer to perform data migrations in the least amount of time possible. With Storage Federation, 16 Gb/s Fibre Channel ports are supported for the Peer links in Peer Motion and Online Import. These ports are integrated in the controller nodes of the HPE 3PAR StoreServ 8000, 9000, and 20000 series, and are available on PCIe plug-in cards for the controllers of these systems as well as the HPE 3PAR StoreServ 7000 series. End-to-end 16 Gb/s SAN switches double the Peer link speed compared to the previous industry standard of 8 Gb/s, completing a workload migration in half the time. The 16 Gb/s ports are compatible with 8 Gb/s Fibre Channel ports when creating Peer links for a storage federation. The faster Peer links are also compatible with 8 Gb/s and 4 Gb/s Fibre Channel ports on migration sources.
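For rough planning, the effect of link speed on transfer time can be estimated as below. This is a back-of-the-envelope sketch that assumes two Peer links used in parallel and roughly 100 MB/s of usable throughput per Gb/s of line rate; actual throughput depends on workload, import priority, and SAN conditions.

```python
# Back-of-the-envelope migration time versus Peer link speed.

def migration_hours(volume_tib, link_gbps, links=2):
    # Assumed usable throughput: ~100 MB/s per Gb/s of line rate, per link.
    usable_mb_per_s = link_gbps * 100 * links
    return volume_tib * 1024 * 1024 / usable_mb_per_s / 3600

for speed_gbps in (8, 16):
    print(f"{speed_gbps} Gb/s Peer links: "
          f"{migration_hours(50, speed_gbps):.1f} h for a 50 TiB volume")
```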

Changing the transport protocol for a Remote Copy target

In earlier SSMC versions, the transport protocol for a Remote Copy target that is migrated to a new destination had to be the same before and after the migration. With SSMC 3.3, you can change from Remote Copy over IP (RCIP) to Remote Copy over Fibre Channel (RCFC) or vice versa as part of the migration operation. For example, when Remote Copy is set up with the RCIP transport protocol between the source and target, Peer Motion can migrate the target to a new one and enable RCFC as the Remote Copy transport protocol between the original source and the new target.

Simplified zoning

At larger customer sites, the storage team and the SAN zoning team are different entities that operate according to incoming work orders. Legacy Peer Motion requires SAN zones for the Peer links to be created during setup, with modifications needed when changing the source, the destination, or the direction of data flow. With Storage Federation, you must create and enable SAN zones between the intended federation members when setting up the federation or when adding a system to the federation. Multiple initiators to multiple targets per SAN zone are supported for federation members. This reduces the number of zones required in a federation topology to just two, dramatically simplifying the initial zoning setup and the changes needed when adding arrays to or removing arrays from the federation. The SSMC wizard for creating a federation conveniently produces the list of WWNs that must be entered into each of the two zones.

SSMC 3.1 and later simplifies this process further with Smart SAN, which eliminates the need to create zones manually on the fabric switches. With switches enabled with Smart SAN, the 3PAR system interacts with the fabric to create the required zones. Refer to the Zoning section in this white paper for more details.

Data transfer optimization

Bidirectional data mobility between volumes of any provisioning type on any disk tier type is supported. The conversion between provisioning types during the migration is executed on the destination system. When a deduplicated volume is migrated using Peer Motion in the SSMC, the volume is rehydrated inline before traveling over the Peer links to the destination system. The deduplicated volume is not expanded to its entire nondeduplicated size before the migration, meaning the transfer does not require space on the source system. If the destination volume is deduplicated as well, its final size might differ from the source volume because the deduplication dictionary on the destination can be different from the source. HPE 3PAR OS 3.3.1 brings compression to volumes in CPGs with SSDs as storage. Peer Motion in SSMC 3.3 can migrate compressed and noncompressed volumes on the source 3PAR system to compressed and noncompressed volumes on the destination system. Volume data always travels uncompressed over the Peer links, even if the source and destination volumes are compressed.

Peer Motion is Thin Provisioning aware, so only the used space in a thin volume is migrated from the source to the destination. Blocks of zeros in fully provisioned source volumes are transferred but are not written to the destination disks if the destination volumes are thin provisioned. 3PAR Efficient Import offers another transport optimization: blocks on disk that were never touched by the host operating system are not transferred over the Peer links. The benefit is faster volume migration thanks to fewer SCSI read requests. This applies to thin provisioned source volumes that are up to 75% full.
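The following sketch gives a feel for how much data crosses the Peer links under these rules. The volume size and used fraction are hypothetical examples, and the model deliberately ignores the Efficient Import optimization for untouched blocks.

```python
# Rough model of Peer link traffic for thin versus fully provisioned sources.

def peer_link_tib(volume_tib, used_fraction, thin_source):
    """Thin source volumes transfer only their used space; fully provisioned
    sources transfer the whole volume, zero blocks included (the zeros are
    simply not written to thin destination volumes)."""
    return volume_tib * used_fraction if thin_source else volume_tib

print(peer_link_tib(16, 0.40, thin_source=True))   # 6.4 TiB over the Peer links
print(peer_link_tib(16, 0.40, thin_source=False))  # 16 TiB over the Peer links
```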

Resilience during migration

During the Peer Motion import phase, all host application writes travel over the 3PAR destination system and over the Peer links to the source volumes. This is a crucial step in maintaining source volume integrity during the import process, and it facilitates rollback in case of an incident. If the region where the I/O landed was already migrated to the destination, the data is also written to the destination volume. This boosts resilience but increases the number of double writes as the migration progresses.

The handling of host application read requests during import depends on the HPE 3PAR OS version. In HPE 3PAR OS 3.2.1 and earlier, all host read requests are forwarded to the source array. Starting with 3.2.2, Peer Motion uses an optimized read algorithm that serves read requests from the destination array for already migrated regions. This avoids the extra hop on the SAN from the 3PAR destination array to the source array over the Peer links without compromising data integrity. The feature produces lower read latencies, resulting in progressively improved response times to host I/O as the volume migration progresses. This optimization can compensate for the potential rise in application service time caused by the increasing double writes noted earlier in this section.

Integration with HPE 3PAR Remote Copy

Peer Motion in the SSMC can migrate a Remote Copy group between two 3PAR systems to a third system. The Remote Copy group can be located on the primary or secondary system. All volumes in the Remote Copy group must be migrated simultaneously, and only one Remote Copy group can be selected in a migration definition. Synchronous and asynchronous periodic Remote Copy are supported, but asynchronous streaming is not. Coordinated snapshots of the migrating volumes substantially shrink the time required for syncing the primary and new secondary systems. At the end of the admit phase, the existing Remote Copy setup is replaced with a new one pointing to the third system. This new Remote Copy target between the Peer Motion destination and the remaining Remote Copy member must be set up by the administrator at the beginning of the process.

SSMC handles the Peer Motion migration, tears down the existing Remote Copy group, admits its volumes to the new one, and starts the Remote Copy synchronization. The destination system involved in the Peer Motion migration must be a federation member. The other systems involved can be a federation member or a migration source to the destination system in the federation. SSMC does not support Peer Motion to a third system when the Remote Copy group is set up with Peer Persistence. However, a Remote Copy group under Peer Persistence can be migrated from the HPE 3PAR CLI. The steps for this are documented in the Reference section of the HPE 3PAR Peer Motion and HPE 3PAR Online Import User Guide. Peer Motion of Remote Copy groups without or with Peer Persistence requires all three 3PAR systems to run HPE 3PAR OS 3.2.2 or later.

Notes

Remote Copy between systems that are in different federations is supported.

Migrating a Remote Copy group between members of different federations must be executed using the HPE 3PAR CLI.

LUN ID conflict resolution

Volumes migrated by Peer Motion from legacy and current 3PAR source system models retain their LUN ID on the destination array. This can cause a conflict if the LUN ID of the migrating volume on the source is already in use on the destination array for the host. SSMC discovers this conflict and silently enforces the next available ID for the volume, starting from zero. The same restriction applies when migrating a volume from an IBM XIV migration source. If an alternative LUN ID was applied, the federation.log file (see the Logging section in this paper) contains the string Conflicting LUN: followed by the volume and host name. This entry does not show the new LUN ID; you can find it in the SSMC or by using the 3PAR CLI command showvlun.
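The conflict-resolution rule lends itself to a one-function sketch. This is illustrative only, assuming exactly the behavior described above: retain the source LUN ID unless the host already uses it on the destination, otherwise take the lowest free ID starting from zero. The function name is invented for illustration.

```python
# Sketch of the LUN ID conflict-resolution rule described above.

def resolve_lun_id(source_id, ids_in_use_on_destination):
    if source_id not in ids_in_use_on_destination:
        return source_id            # no conflict: the volume keeps its LUN ID
    candidate = 0
    while candidate in ids_in_use_on_destination:
        candidate += 1              # next available ID, starting from zero
    return candidate

print(resolve_lun_id(3, {0, 1, 3}))  # 2: ID 3 conflicts, so the next free ID is used
```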

Support for migrating a VVset subset

Volumes forming a Logical Volume Manager (LVM) structure or a database layout can be placed in a VVset. A typical use case for VVsets is taking a consistent point-in-time snapshot of the member volumes. A VVset with the same name as the one on the source system is created on the destination system if it does not already exist. The migrated volumes are removed from the source VVset; the option to keep them on the source system is not available. When the last volume in the VVset is migrated, the VVset is removed from the source system. With SSMC 3.0 and later, a subset of the volumes in a VVset can be migrated to a destination 3PAR system that is a member of a federation. Migrating only a subset of the member volumes of a VVset is considered an advanced feature that should be used with care. The subset of volumes can optionally be migrated inside a consistency group.

Support for migrating a subset of volumes on legacy 3PAR

With HPE 3PAR OS 3.1.3 or later, SSMC 3.3 enables users to migrate a subset of volumes on a legacy 3PAR migration source. To do so, from the SSMC, select one or a subset of volumes on the legacy 3PAR. Then, from the Peer Motion workflow page, change the Migrate selected volumes only button to Yes. The host to which the volumes are exported should have an ALUA persona. This makes single-volume migration from legacy 3PAR systems possible.

Autonomic object creation for third-party source systems

SSMC 3.3 introduces the option to add a VVset and host set on the destination system when creating a migration from an IBM XIV storage system configured as a migration source to a federated system. Figure 14 illustrates these options in the red rectangle. With these settings, the migrated hosts and volumes land in the specified host and volume set on the destination 3PAR. These options reduce the postmigration work required on the destination 3PAR.

Figure 14. Optional specification of a volume set and a host set when migrating from an IBM XIV

Support for FIPS 140-2 mode of compliance for HPE 3PAR Online Import

Federal Information Processing Standard (FIPS) 140 defines a list of approved algorithms and procedures used to encrypt data in products. In FIPS mode, it is not possible to programmatically fetch the remote host or array certificate. The user must obtain the certificate from another channel and paste it into the product. SSMC 3.3 supports FIPS 140-2 compliance mode for Online Import. The Online Import workflow offers a panel for adding the array certificate if FIPS mode is enabled in the SSMC.

Zoning

HPE recommends that Storage Federation members be interconnected by the Peer links over dual, redundant SAN fabrics. Peer Motion in SSMC supports two pairs of unidirectional Peer links operating in opposing directions between any two 3PAR systems in the federation. This means a federation member in a Peer Motion setup acts as a source and as a destination simultaneously.

When interconnecting federation members, you must create eight virtual Peer ports per physical Peer port on the destination system and zone them with the physical Peer port and a host port on the source system. The virtual Peer ports are required in the event that the volumes under migration are exported to a Windows cluster. Even if the migrating volumes are not exported to a Windows system or cluster, all eight virtual Peer ports must be present. The SSMC converts host ports to Peer ports and creates the virtual ports per Peer port when the system is added to the federation. Alternatively, this can be performed manually using the HPE 3PAR CLI.

When adhering to the single initiator, single target (SIST) concept, a substantial number of SAN zones are needed. In this case, a two-member federation requires 36 zones, a three-member federation requires 108 zones, and a four-member federation requires 216 zones. Federation and the SSMC allow the presence of multiple initiators and multiple targets in a single SAN zone. Thanks to this, only two zones are needed to interconnect two, three, or four federation members. This greatly simplifies the zoning topology when a new federation is created or when a member enters or leaves an existing federation. Figures 15 and 16 show orange and blue zones that contain the WWNs of the Peer, virtual Peer, and host ports that compose each Peer interconnect for use in a federation. In the figures, H denotes a host port and P denotes a Peer port.

Figure 15. Peer link zoning layout for a two-member federation—only two SAN zones are created for bidirectional data mobility

Figure 16. Peer link zoning layout for a four-member federation—only two SAN zones are created for bidirectional data mobility

The Peer and host ports for the Peer links of an even-numbered controller node on the source and destination systems enter one zone; the other zone is composed of the Peer and host ports located on the odd-numbered controller node. Conveniently, SSMC lists the WWNs that should go into each of the two zones after the federation members are selected. This list includes WWNs of the eight virtual N-Port ID Virtualization (NPIV) Peer ports. The number of WWNs per federation zone scales with the number of federation members. In a two-member federation, every zone has 20 WWNs (9+1+9+1), as shown in Figure 15. For a three-member federation, the WWN count per zone increases to 30. The four-member environment shown in Figure 16 has 40 WWNs in each of the two zones.
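One plausible breakdown consistent with the zone and WWN counts quoted above is shown in the following sketch. The per-pair factors are assumptions inferred from the figures in this section (each member contributes one physical Peer port, its eight NPIV virtual Peer ports, and one host port to each of the two federation zones), not from HPE documentation.

```python
# Arithmetic consistent with the zone and WWN counts in this section.
from math import comb

def sist_zones(members):
    # Inferred factors: 9 peer-side WWNs x 2 directions x 2 fabrics per system pair.
    return comb(members, 2) * 9 * 2 * 2

def wwns_per_federation_zone(members):
    # 1 physical Peer port + 8 virtual Peer ports + 1 host port = 10 WWNs per member.
    return members * 10

for n in (2, 3, 4):
    print(f"{n} members: {sist_zones(n)} SIST zones, or 2 zones "
          f"with {wwns_per_federation_zone(n)} WWNs each")
```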

Migration sources require private SAN zones between the source and the federated destination array, so you cannot add the host ports of the migration source to the federation zones. The Peer ports used for federation on the destination system are reused to build the Peer links to the migration source; no extra pair of Peer ports should be created on the federation member for the Peer links to a migration source. For legacy 3PAR systems configured as a migration source, the physical Peer port and its eight virtual Peer ports on the federated system need to be zoned to a host port on the source 3PAR. Multiple initiators to multiple targets per zone are supported. For third-party migration sources, zoning the host port on the source with the physical Peer port on the destination on both fabrics is sufficient. HPE prefers 1:1 SIST zoning for migration sources to improve fault isolation and reduce bandwidth interference. Also for migration sources, SSMC shows the WWNs that must be included in each Peer link SAN zone. The same host ports on the source system can connect to the Peer ports on different federated destination systems.

The SSMC Federations page includes a Health section, which indicates the zoning status for federation members and migration sources. If part of the zoning is not correct, the SSMC indicates the missing zones. After the issue is resolved, the health state turns to Normal in seconds. You do not need to reinitiate the zoning discovery or remove and re-add the storage system.

Smart SAN 2.0, integrated with 3PAR SSMC 3.1 and later, provides automated management control for SAN zoning. The Target Driven Peer Zoning (TDPZ) technique allows the 3PAR storage system to perform Fibre Channel zoning tasks. The array communicates with Smart SAN enabled Fibre Channel fabric switches and creates the zones without SAN administrator intervention. TDPZ reduces the number of configuration steps required to provision new storage by up to 90%. This dramatically simplifies Fibre Channel zoning for federation, eliminating all manual configuration effort on SAN switches.

In SSMC 3.3, Smart SAN automatically zones a host when it is added to the destination 3PAR during Import configuration and Sync federation operations from legacy 3PAR and IBM XIV storage systems. The host is created with its WWNs and the correct host persona value. This requires a Smart SAN license on the destination 3PAR and Smart SAN enablement of the host ports intended for use. The SAN switches require Brocade Fabric OS 8.0.1 or later; the HPE 3PAR CLI command showportdev uns is used to identify initiators on the fabrics. The user can select the target ports for automatic zoning when importing the host definition. Smart SAN saves time compared to manual zoning and eliminates typing mistakes.

Technical prerequisites

Prerequisites to creating an HPE 3PAR Storage Federation for bidirectional data mobility are:

• An instance of SSMC 2.2 or later to create and manage the federation and initiate mobility

• One or more 3PAR systems managed by the same SSMC instance

• An account with the Super role on each system intended to join the federation

• A Fibre Channel SAN fabric (dual redundant fabrics are preferred) for Peer link and host interconnects

• NPIV enabled on the SAN ports connecting Peer ports

Prerequisites for a 3PAR system to enter an HPE 3PAR Storage Federation as a federation member are:

• HPE 3PAR StoreServ 7000, 8000, 9000, 10000, or 20000

• HPE 3PAR OS 3.2.2 or later

• An available Fibre Channel port on one controller node and another port on its partner node, both configured in Peer connection mode

• A Fibre Channel port on one controller node and one on its partner node, configured in Host Connection mode as a target; can be shared with other hosts

• A valid Peer Motion license, which is part of the HPE 3PAR All-inclusive Multi-System Software licensing scheme (see the Licensing section in this white paper for more information)

The following prerequisites apply to migration sources that attach to a federated system:

• Source is:

– 3PAR F-class or T-class running HPE 3PAR OS 3.1.2 MU2 or later

– 3PAR StoreServ 7000 or 10000 running HPE 3PAR OS 3.1.3 or later

– 3PAR StoreServ 8000 or 20000 running HPE 3PAR OS 3.2.2 or later

– 3PAR StoreServ 9000 running HPE 3PAR OS 3.3.1

– Supported IBM XIV system satisfying minimum firmware version (see SPOCK for more details)

• A Fibre Channel port on one controller node and another port on the partner node of the migration source, configured in Host Connection mode; can be shared with other hosts

• A Fibre Channel SAN fabric (dual redundant fabrics are preferred) for Peer port and host port interconnects

• Two SAN zones for Fibre Channel interconnect to the federation member

A 3PAR array running HPE 3PAR OS 3.2.2 or later without a Peer Motion license cannot enter a storage federation but can send its data to federation members when attached as a migration source.

The following prerequisites apply to SAN switches attached to federated systems, their migration sources, and hosts:

• A supported Fibre Channel switch model running supported firmware; Storage Federation enforces no particular model or firmware beyond what is supported for 3PAR systems (see the SAN Design Reference Guide for more information)

• NPIV enabled on the ports of the SAN switch connected to the Fibre Channel Peer ports

• For Smart SAN:

– HPE 3PAR B-series type switches with firmware Fabric OS 8.0.1 or later

– 16 Gb/s or faster small form-factor pluggable (SFP) interfaces

Exactly two Peer ports are supported per system in a federation, regardless of the number of controller nodes or Fibre Channel interfaces in a system. Online Import Utility (OIU), Peer Motion Utility (PMU), and Storage Federation use the same Peer ports on the federated system. It is neither supported nor necessary to create a second set of Peer ports on the federation member for use by OIU or PMU. The speed of the Fibre Channel ports for the Peer links does not need to be equal on the source and destination, and the controller numbers selected for the Peer ports do not need to match across federation members. OIU and PMU volume transfers from migration sources to federation members can execute concurrently with bidirectional Storage Federation migrations between federation members. It is also possible to migrate volumes with PMU when the source and destination systems are in a federation.

Although conceived for local data center distances, federation does not preclude involving remote systems. No maximum distance or latency is imposed for Storage Federation, Peer Motion, or Online Import operations between local and remote systems. Migrations are possible as long as Fibre Channel SAN zones between the two sites can be created with the NPIV option and the source and destination systems can log in to the migration tool.

It is best practice to use two simplified SAN zones for a federation of 3PAR systems. SIST zoning is supported. Smart SAN requires 16 Gb/s Fibre Channel adapters in the participating federated 3PAR systems. Automated federation zoning using Smart SAN requires that all federation members run HPE 3PAR OS 3.3.1 and have a valid Smart SAN license installed. All host ports in the federation must have Smart SAN enabled and be cabled to Smart SAN enabled switches (see SPOCK for a list).

The Path Verify Enabled MPIO setting must be enabled on Windows stand-alone hosts and clusters with volumes planned for migration. This setting facilitates automatic rescan and single-volume migration. With T10 Data Integrity Field (DIF) enabled on the host bus adapter (HBA) of a host under migration, Peer Motion is mandatorily minimally disruptive or offline only. Users must disable T10 DIF on the host’s HBAs in order to use online migration.

Supported federation configurations

All intended federation members must be managed by the same SSMC instance. Federation members require HPE 3PAR OS 3.2.2 or later. Bidirectional migration is possible between any pair of federated systems, including from an HPE 3PAR OS 3.3.1 system to an HPE 3PAR OS 3.2.2 system. 3PAR migration sources must run the same HPE 3PAR OS version as the destination or an earlier one; this rule does not consider the MU version. This limitation does not affect legacy 3PAR systems; they always run an HPE 3PAR OS version that is earlier than a federated system’s version.

A Storage Federation can include between one and four 3PAR member systems. Each system can be a member of only one federation at a time. With SSMC 3.3, a federation member can be added as a migration source to one or more federations. This provides unidirectional migration capabilities between federation members and helps to overcome the limit of four systems per federation.

SSMC can be used to create and manage multiple federations. With a maximum of 32 systems supported in SSMC 3.0 and later, you can administer up to 16 two-system federations, up to 10 three-system federations, and up to 8 four-system federations. Any mix of these groupings within the limits is allowed.

A migration source can connect to multiple federated systems, which do not need to be part of the same federation. A migration source on HPE 3PAR OS 3.3.1 (any MU) can be attached to a federated 3PAR system on HPE 3PAR OS 3.2.2 MU1 or later.

In SSMC 3.2 and later, for a single-system federation, up to eight legacy 3PAR and third-party systems can be added as migration sources for unidirectional migration. In a two-system federation, up to seven migration sources can be added to one system. The link with the second federated system counts as one. By extrapolation, a three-system federation can have up to six migration sources per member system and a four-system federation can have up to five per member system. Table 3 lists the current storage federation limits.

Table 3. Current maximums for HPE 3PAR Storage Federation

Description | Maximum value

Number of member systems in a storage federation | 1–4

Number of migration sources attached to a storage federation member | 0–8

Number of migration sources attached to a storage federation | 0–20

Number of member systems and migration sources in a storage federation | 1–24

Having eight attached migration sources (row 2) requires a single-member storage federation; in this setup, the single system inside the federation has no migration destination. Having 20 attached migration sources (row 3) is valid when the federation has four members. The maximum of 24 member systems and migration sources (row 4) follows from the limits listed in rows 1 and 3.
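The arithmetic behind these limits can be expressed compactly. The sketch below assumes, as described above, that each Peer link to another federation member occupies one of a member's eight attachment slots and that the federation-wide cap from Table 3 is 20 migration sources; the function names are illustrative.

```python
# Sketch of the limit arithmetic from Table 3 and the paragraph above.

def max_sources_per_member(members):
    # Each link to another federation member uses one of the eight slots.
    return 8 - (members - 1)

def max_sources_per_federation(members):
    # Per-member totals are additionally capped at 20 for the whole federation.
    return min(members * max_sources_per_member(members), 20)

for n in (1, 2, 3, 4):
    print(f"{n} member(s): up to {max_sources_per_member(n)} sources per member, "
          f"{max_sources_per_federation(n)} per federation")
```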

HPE 3PAR File Persona volumes cannot be migrated. This is a limitation of Peer Motion, not of SSMC or Storage Federation. Storage containers and virtual volumes (VVols) available in VMware vSphere 6.0 and later cannot be migrated between federated systems or from a legacy 3PAR or third-party migration source to a federated system.

Orchestrating data mobility

Table 4 summarizes the orchestration tools that provide data mobility from source to destination systems.

Table 4. Migration orchestration tools available for different types of supported systems

Tool | Source system type | Source system HPE 3PAR OS version | Destination system HPE 3PAR OS version | Capability¹

SSMC, source and destination in federation | 7000, 8000, 9000, 10000, 20000 | 3.2.2 and later | 3.2.2 and later | Bidirectional N:M

SSMC, destination in federation | 8000, 9000, 20000 | 3.2.2 and later | 3.2.2 and later | Unidirectional 1:1, N:1, 1:M

SSMC, destination in federation | 7000, 10000 | 3.1.3 and later | 3.2.2 and later | Unidirectional 1:1, N:1, 1:M

SSMC, destination in federation | F- and T-class | 3.1.2 MU2 and 3.1.3 | 3.2.2 and later | Unidirectional 1:1, N:1, 1:M

SSMC, destination in federation | IBM XIV | See SPOCK | 3.2.2 and later | Unidirectional 1:1, N:1, 1:M

PMU, destination in federation or not | F- and T-class, 7000, 8000, 9000, 10000, 20000 | 3.1.2 MU2 and later | 3.1.3 and later | Unidirectional 1:1, N:1, 1:M

OIU, destination in federation or not | Selected EMC, Hitachi Vantara, and IBM models (see SPOCK) | See SPOCK | See SPOCK | Unidirectional 1:1, N:1, 1:M on EMC and IBM; unidirectional 1:1 on Hitachi Vantara

¹ See Table 9 in the HPE 3PAR Peer Motion and HPE 3PAR Online Import User Guide.

SSMC is the mandatory tool for bidirectional data mobility between federated systems. It is the preferred tool for configuring legacy and current 3PAR systems as migration sources to a federation. PMU is a universal command-line tool for unidirectional migrations. See the HPE 3PAR Peer Motion and HPE 3PAR Online Import User Guide for more information. Legacy F- and T-class systems can have their data relocated using SSMC and PMU. PMU and OIU are not federation-aware and therefore cannot create or manage a federation.

Consult SPOCK for additional prerequisites regarding supported HPE and third-party storage systems.

Best practices

This section contains methods and techniques that result in superior business outcomes when using Storage Federation.

Migration preparation

SSMC software should not be installed on a host with volumes to be migrated. As part of a minimally disruptive migration, the host must be shut down, which would make the SSMC unavailable for starting the actual data transfer.

Migrating application volumes between 3PAR systems is extremely simple with Storage Federation. However, this simplicity should not keep storage administrators from studying the migration's impact in depth. Every component in the destination 3PAR influences the performance of the application migrated to it. Compare the following parameters between the source and the destination system to estimate the approximate performance of an application:

• System type (F-class, T-class, and HPE 3PAR StoreServ 7000, 8000, 9000, 10000, or 20000)

• Number of controller nodes and associated onboard cache

• Disk technology (SSD, 10,000 or 15,000 RPM Fast Class, SAS, or NL)

• Number of disk drives in the source and destination CPG

• RAID level of the source and destination CPG

• Use of deduplication and compression

• Use of volume tiering by Adaptive Optimization

• Use of Adaptive Flash Cache, its size, and the volumes using it

• Use of Priority Optimization and the rules for Virtual Volume Set, Domain, and System target types

• The I/O throughput to hosts (4, 8, or 16 Gb/s for Fibre Channel and 1 or 10 Gb/s for iSCSI or Fibre Channel over Ethernet) and I/O size

• Number of Fibre Channel paths for the host connection to the source and the destination storage system

Although the destination system might be fit for substantially improving the performance of the migrating application, its existing workload might nonetheless preclude the migration of a demanding application. For at least a week before a migration, use HPE 3PAR System Reporter to collect the footprint of the application on a 3PAR source system regarding CPU and cache load, number and size of IOPS to disk, I/O to the SAN and network, service time, wait time, and queue length.

For IBM XIV migration hosts, use the IBM XIV Top tool, integrated in the IBM XIV GUI, to obtain real-time performance graphs per volume or per host for IOPS, latency, and bandwidth. Historical statistics over an arbitrary period (up to one year) can be obtained using XCLI, the command-line tool for managing the IBM XIV. EMC and Hitachi Vantara storage arrays have software subsystems to record performance over a longer period.

The same monitoring exercise should be completed during the same time frame on the destination system. HPE Pointnext can estimate front-end and back-end IOPS and throughput capacity so that spare cycles on the destination 3PAR system can be determined. With the collected information, the storage administrator can make an informed decision regarding whether migrating an application to another federation member is beneficial to the application and viable for the receiving system.
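
As a sketch, the following HPE 3PAR CLI stat commands can sample some of these counters on the source and destination arrays; the iteration count and interval shown are illustrative and should be sized to the chosen observation window:

cli% statcpu -iter 12 -d 300             # controller node CPU load, one sample every 5 minutes
cli% statcmp -iter 12 -d 300             # cache memory page statistics
cli% statvlun -ni -rw -iter 12 -d 300    # per-VLUN IOPS, throughput, and service time
cli% statport -rw -iter 12 -d 300        # read/write I/O on the SAN-facing ports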

SSMC can synchronize objects such as LDAP, SNMP, NTP, remote syslog servers, users, domains, and domain sets. It can also synchronize hosts and host sets between all federation members and from a migration source to a federation member. All items per object are passed on to all federation systems; no selective synchronization is possible. Hosts that are not zoned to the destination 3PAR are not created unless a Smart SAN ecosystem is present.

Although the host under migration can be connected to the source and destination storage systems by using a different number of Fibre Channel ports, this configuration is not recommended. Application performance might degrade after migrating volumes to a federation member with fewer Fibre Channel ports connected to the host.

HPE highly recommends synchronizing time between controller nodes for all 3PAR systems in a federation, HPE and third-party systems configured as migration sources, and the SSMC instance. HPE advises using a company’s internal or external time server for this synchronization.
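
As a minimal sketch, assuming a hypothetical internal time server at 10.1.1.10 and an HPE 3PAR OS version that carries the setnet command:

cli% setnet ntp 10.1.1.10   # point the controller nodes at the company time server
cli% shownet                # verify the network configuration, including the NTP entry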

MDM in SSMC supports Windows clusters connected to an IBM XIV storage system that is configured as a migration source to a federation. The number of cluster members is limited only by Microsoft. The same rule applies to Windows clusters migrating from a supported EMC, Hitachi, or IBM XIV storage system using Online Import Utility.

Do not make changes to array credentials when volumes are under migration. The next authentication will fail and the migration will halt. You cannot edit a storage federation setup while a Peer Motion migration is in progress, even when it is paused for a host rescan. You should add or remove a 3PAR system to or from a federation when no migrations are taking place. A system can be removed from a federation if no more migration sources are connected to it.

The region mover subsystem in HPE 3PAR OS transfers regions (blocks of data) from one place to another on the same physical disk or between different disks. This subsystem is shared by Dynamic Optimization, Adaptive Optimization, snapshot promotions, Peer Motion, and Online Import. Up to nine region mover tasks execute in parallel for these software titles. This means that every executing region mover task reduces the maximum of nine simultaneous volume import tasks by one on the destination array, lowering the migration throughput. Tunesys results are compromised when running Peer Motion to a CPG subject to tunesys: regions written "late" by Peer Motion in the CPG under tunesys stay in the "wrong" CPG, forcing a second tunesys after Peer Motion ends. HPE does not recommend starting tune tasks for a volume, CPG, or system that can overlap with Peer Motion operations. Also, disable any scheduled Adaptive Optimization tasks on the migrating volumes that might execute during the migration.
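
Before defining a migration, a quick CLI check can reveal competing region mover activity, for example:

cli% showtask -active   # running tunesys, Adaptive Optimization, or other region-moving tasks
cli% showsched          # scheduled tasks that might start while the migration executes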

If the migrating object is a Remote Copy group, the Remote Copy target between the Peer Motion destination and the Remote Copy member that stays in service must be set up by the administrator before starting the migration from SSMC.

Federation zoning

Two options exist for zoning systems for federation connectivity. As a first option, the WWNs of the physical and virtual Peer ports and the host ports comprising the Peer links are assembled manually in two SAN zones. This significantly simplifies zoning; HPE recommends taking advantage of this streamlined approach to mitigate the complexity of interconnects. As a second option, Smart SAN, integrated in HPE 3PAR OS 3.3.1 and later, can generate and issue commands to SAN switches to create, modify, and delete SAN zones for a federation setup. Smart SAN reduces the work of creating or modifying the zones on the SAN switches between federated systems to just selecting the host and Peer ports in the SSMC. Consult SPOCK for the list of SAN switches supported by HPE 3PAR OS and Smart SAN. SSMC 3.3 allows Smart SAN to be used for automatic zoning for HPE 3PAR and third-party systems attached as a migration source.

It is easy to add or remove systems from a federation. When a system enters the federation, the SAN zoning topology between systems must be updated on the switches. Without Smart SAN, the appropriate WWNs must be added manually to the two federation zones. Thanks to the simplified zoning concept, no new zones need to be created. To eliminate zoning errors, use the SSMC to retrieve the WWNs per zone when adding the system to the federation (or after the system is added). When a system leaves the federation, it is a best practice to remove the WWNs that are no longer needed in the federation zones. Look up the list of WWNs per zone in the SSMC after the system has left the federation to update the SAN zones. With Smart SAN, the federation zones are updated automatically when a system enters or leaves the federation. Zoning recommendations are not displayed if Smart SAN is enabled. You can convert existing federations to use Smart SAN by removing the manually created SAN zones first.

The zones for hosts connected to federation systems can be of the multiple-initiator, multiple-target type. This reduces the number of zones to match the number of fabrics used; single-initiator, single-target (SIST) zoning is also supported. Hosts should be zoned to every federation member for seamless, one-click data migration without any further SAN zoning manipulations.

There is no maximum supported geographical distance or latency between federation members and their migration sources. Although Peer links are dedicated to migration data transfer, routing this traffic over shared WAN links between sites means it must compete for bandwidth, which slows the migration. This makes available bandwidth a greater concern than latency in WAN setups. HPE recommends studying bandwidth usage of an intersite WAN Fibre Channel link over a week or longer to determine the throughput and latency over time and to find periods of lower traffic (if any) for starting a Peer Motion or Online Import migration. The host with migrated LUNs will access the destination 3PAR array over the WAN link, which might cause performance issues for the applications running on it; this is specifically the case for applications that resided on SSDs on the source system. Physically moving the application host to the same data center as the destination 3PAR alleviates this problem.

Federation creation and management

SSMC is a server-based management console accessed through HTML5 in a web browser. Even large or international workgroups do not need more than one SSMC instance if the number of 3PAR systems does not exceed 32. A secondary SSMC instance can be implemented for disaster recovery purposes. Thanks to the auto-discovery feature in SSMC 3.0 and later, a federation created on a primary SSMC instance is visible on a secondary instance.

HPE does not support managing a federation and a migration from an SSMC instance different from the one on which the object was created. When a Peer Motion workflow is initiated, it is internally tagged with an ownership connected to its SSMC instance, and any action required for this workflow (such as Resume or Retry) can only be performed by the owner. Only in the case of disaster recovery, when the primary SSMC instance is no longer available, should a secondary instance be commissioned. In this case, users can navigate in SSMC 3.3 to the Peer Motions page, select Actions, and then click Takeover to change the ownership for a workflow.

Peer Motion tasks tabulated in the SSMC Activity screen receive a time stamp local to the destination federation system that is executing the import task. If the real-time clocks on the federation members are not synchronized, the order of executing and executed Peer Motion tasks from different source systems might not reflect reality, causing confusion for administrators when they analyze when a volume was initiated for migration. Using a common NTP server for all federation members, migration sources, hosts under migration, and the server executing SSMC alleviates this problem.

Multiple federations can coexist and be managed within a single SSMC instance. Merging federations is not possible; you need to remove the federation members from one federation and add them to a second one.

Host and volumes

The SSMC cannot migrate the same volume to more than one destination system, because this would present the volume to the same host from two arrays. A clone of that volume exported to a different host can be migrated to another federation member. Use cases for this include presenting a database for mining or a code repository for testing from a second array. An individual snapshot or a snapshot tree hanging off a base volume cannot be migrated between federation members. A base volume and its snapshots are deleted on the source system after migration, unless otherwise specified. A snapshot can be migrated as a base volume by promoting it, which can inflate its size substantially.

SSMC requires the host under migration to exist on the destination 3PAR system; the host is not created automatically at migration time. You can create the host manually or through object synchronization with the source system. To use object synchronization, from the SSMC Federation Configurations page, select Actions and click either the Sync federation or Import configuration option.

For consistent imports, you can place the volumes under migration in an optional consistency group. Only place volumes in a group that must be consistent with each other. HPE recommends limiting the number of volumes in a group to 20 and the total size of volumes in a group to 40 TiB.

One VVset on the source 3PAR system can be migrated at a time using SSMC. The volumes of a VVset are automatically placed in a consistency group for migration; you are advised to keep this selection. To migrate more than one VVset, individually select all volumes in all VVsets to be migrated. The source system VVsets are recreated on the destination system and the volumes land in the right VVset. The migration priority is set to medium by default. If the priority is changed, HPE recommends setting the same priority for all volumes in the VVset.

In SSMC, volumes inside a 3PAR domain on the source system are migrated to a domain with the same name on the destination system. The domain on the destination must exist before you create the migration definition; you cannot migrate a volume from a domain on the source system to the default domain on the destination, or the reverse. IBM XIV volumes added as a migration source to a federation can only be migrated to the default domain using SSMC. Peer Motion Utility 2.2 and Online Import Utility 2.2 introduce the -domain option to migrate volumes directly to an intended domain on the destination 3PAR.

Peer links and Peer volumes

The SSMC creates the physical and virtual Peer ports from unused host ports on the participating systems when users construct a new federation or add members to a federation. Alternatively, the physical Peer ports can be created at the beginning of the process by using the SSMC or the 3PAR CLI; virtual Peer ports can only be created by using the 3PAR CLI. If the SAN zoning for the Peer links already exists, the SSMC discovers the Peer and host ports in use. If the SAN zones do not yet exist, verify that the proposed Fibre Channel ports for the Peer links are correct. Changes to the proposed Fibre Channel ports can be made in the SSMC: from the Actions menu, click the pencil icon to the right of the system name and ports list. Each change immediately adapts the list of WWNs needed in the zones for both fabrics in the Fibre Channel Zoning section on the Edit screen. Clicking the OK button adds the new Peer ports to the list and removes the old ones. After zones are created for the Peer links, confirm the Normal state in the Health section of the Federation Configurations screen.
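
For reference, converting an unused host port into a physical Peer port from the 3PAR CLI follows the pattern below; the port location 1:2:1 is hypothetical, and the port must carry no host I/O:

cli% controlport offline 1:2:1                  # take the free host port offline
cli% controlport config peer -ct point 1:2:1    # change the connection type to peer
cli% controlport rst 1:2:1                      # reset the port to activate the change
cli% showport -peer                             # confirm the new Peer port is listed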

When users are deleting a federation or removing members from it, the SSMC does not convert Peer ports back into host ports. This is because the federation members might be engaged in setups for Online Import Utility or Peer Motion Utility that require the Peer ports to be present. All migration sources connected to a federation need to be removed first before the federation can be deleted.

The host ports that are part of the Peer links on the source system can be shared with other hosts for I/O. This can adversely impact the throughput and I/O latency for other hosts during a volume import. Starting concurrent migrations to different destinations using the same host ports will aggravate the situation. Careful planning to perform migrations during periods of low I/O minimizes the impact to hosts, maximizes the efficiency of Peer links, and shortens migration time. It is generally recommended to select dedicated host ports on the source system for the Peer links, and definitely so if concurrent migrations over the same source host ports to multiple destination systems are planned.

Peer links for the migration sources share the Peer ports in use for bidirectional migration on a federated system. These Peer ports must also be used when adding third-party migration sources.

Volumes cannot be manipulated when they are in Peer provisioning mode. You cannot create snapshots of them, create a physical copy, tune them, bring them into a Remote Copy group, and so on until the migration has completed.

Interrupt coalescence is a widespread optimization technique in network and Fibre Channel HBA cards used to moderate the number of processor interrupts when I/O takes place. HPE recommends disabling interrupt coalescence because the modern, fast controller node CPUs in current 3PAR models handle thousands of interrupts easily, with room to spare for other operations. (Disabling it is not recommended for high-volume, single-stream I/O.) Converting a host port to a Peer port enables the interrupt coalescence feature automatically, but you can disable it. For 16 Gb/s Fibre Channel interfaces, this parameter is ignored with HPE 3PAR OS 3.2.1 and later.
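
A sketch of inspecting and changing the setting from the 3PAR CLI, with a hypothetical port location:

cli% showport -par                       # the IntCoal column lists the per-port setting
cli% controlport intcoal disable 1:2:1   # turn interrupt coalescence off for this Peer port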

Import task queueing

The 3PAR destination system is in control of scheduling the Peer Motion import tasks submitted to it. You can define multiple migrations without starting the actual data transfer. The actual data transfer can be started during evening hours or over the weekend. There is no limit to the number of import tasks that can be created on the destination 3PAR. Up to nine import tasks execute in parallel. Additional tasks are queued for automatic execution when an executing import task completes. These import tasks can come from a mix of federation members and migration sources. Every 3PAR destination manages its own import task queue. With up to four systems configured in a federation, a maximum of 36 import tasks can execute concurrently.

Application host I/O for executing and queued import tasks passes over the Peer links that carry the import traffic to the source storage system. With dozens of queued tasks, the Peer links’ bandwidth for import data transfer can be substantially reduced by this host I/O, lengthening the migration time for the volumes. As a best practice, consider the implications of submitting a large number of import tasks and try to limit the number.

Import task priority

For destination systems on HPE 3PAR OS 3.2.2 or later, every volume import task created receives a priority tag. The default priority is medium; low and high can be selected as well. Every import task has a queue attached listing all 256 MiB regions of a volume that must be migrated. The HPE 3PAR OS region mover manager subsystem defines the order in which the active import tasks are executed. Active high-priority import tasks copy up to six regions listed in the queue before the region mover turns to the next import task. Medium-priority tasks copy up to four regions, and low-priority tasks copy up to two regions before relegating control. The region mover connects to the active tasks in a round-robin fashion. Import tasks to be executed are ordered based on priority. The order of the waiting tasks can get rearranged upon arrival of new tasks: higher priority tasks become active before lower priority ones. Active tasks are never pre-empted.

A task's priority can be changed after it becomes active. Import task prioritization is also available for ingesting data from volumes on a legacy 3PAR migration source or a third-party system into a storage federation member.

Managing Peer link throughput

Peer Motion supports "single-volume" online migration, which allows a subset of virtual volumes exported to a host or a host set to be migrated to the destination system, allowing the source system to service the remaining virtual volumes. This offers a native way to throttle the throughput over the Peer links and hence control the additional read/write load to the migrating volumes on the source and controller nodes servicing them. It is, however, recommended to migrate related volumes (for example, for a database or LVM structure) together and in a consistency group.

Reporting

System Reporter in SSMC 3.3 contains no template for historical or real-time IOPS, throughput, and latency for the Peer ports on an array.

For historical data on a past migration, navigate to Reports → Create report, select the Host Port—Performance Statistics template, and select the Peer Motion destination system name. Next click Select objects, click the Add objects button, and highlight both Peer ports to add them to the report. Review the Time Settings and Chart Options and finish by clicking the Create button.

For real-time statistics on the Peer ports, select the Port (Data)—Real Time Performance Statistics template, select the Peer Motion destination system name, remove all nine existing objects, and click Add objects. Then click Select objects, make your selection for the Y Axis variable and Type. Select both Peer ports for plotting an individual or combined series for each object and click Add. Finish by clicking the Create button.

A new report is started automatically. Unless modified during report creation, data points are added to the graph every five seconds.

Postmigration

The volumes migrated off the source 3PAR system in online or MDM mode are unpresented from the host when the migration starts. During the actual migration, they receive write updates from the application over the destination 3PAR and the Peer links. After the migration, the volume is removed from the source array if the default behavior in the SSMC was left untouched. If the default was changed to keep the volume on the source system, no more updates occur, resulting in stale volumes. No SCSI reservation remains on these migrated volumes. In the case of single-volume migrations, the Standby ALUA state of the target port for the virtual LUN (VLUN) prevents access by any host after the migration. To re-enable presentation of the volume to a host, for example for tape backup, you must restore the ALUA state to Active by using the setvv -clralua command of the HPE 3PAR CLI. Do this carefully, because the volume information is no longer current after the migration. Do not export the migrated volume to its original host, because a volume with the same name and more recent content resides on the destination 3PAR.
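
For example, assuming a migrated source volume named oldvol that must be presented again for tape backup:

cli% setvv -clralua oldvol   # restore the Active ALUA state on the stale source volume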

No warranty is provided for application data consistency on the source volume when data mirroring stops after the migration. Having the migrating volumes in a consistency group does not resolve this. Most applications can recover all or nearly all data upon restart from the migrated volumes on the source array, if this is desired for rollback.

Volumes that were migrated by Peer Motion from a 3PAR or third-party source system are 100% standard on the destination system and can be the subject of a snapshot, clone, Remote Copy, Dynamic and Adaptive Optimization, Peer Motion, and more.

Important
Migrated volumes retain the WWN they had on the source system. This is also the case for third-party migration sources. To use HPE 3PAR Recovery Manager products or the Microsoft Volume Shadow Copy Service (VSS) framework for snapshots, the WWN of the migrated volumes needs to be changed to a native one for the destination HPE 3PAR. Changing this with the setvv -wwn command is disruptive for I/O to the volume. It can be part of the list of postmigration tasks or scheduled for the next planned maintenance downtime.
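
A sketch of the change for a hypothetical migrated volume named dbvol; stop I/O to the volume first, and note that the auto keyword (assuming the installed HPE 3PAR OS version supports it) requests a new destination-native WWN:

cli% setvv -wwn auto dbvol   # disruptive: assigns a WWN native to the destination array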

The following metadata are not migrated with Peer Motion to the destination:

• Priority Optimization rules for Virtual Volume Set, Domain, and System target types

• Adaptive Optimization profiles

• Adaptive Flash Cache membership

These rules, profiles, and memberships must be re-created by hand on the destination after the migration.

Additional considerations

A migration can be discontinued at the end of the admit phase for online and minimally disruptive migrations. At this time, the SSMC displays the message to rescan the host. Highlight the migration name on the Peer Motions page and, from the Actions menu, click Delete to remove it. Some clean-up work might be needed, such as removing the Peer volumes on the destination 3PAR system and unpresenting the volumes from the Peer host on the source 3PAR.

After an import task starts, it cannot be stopped. If the wrong volume is migrated, let the transfer complete. Rolling back to the original federation configuration requires no more than starting a reverse migration; no SAN changes need to be executed. Follow the rollback instructions in the HPE 3PAR Peer Motion and HPE 3PAR Online Import User Guide listed in the Resources section.

The Peer Motion subsystem in the SSMC is implemented as a stateless engine controlling the migration by interfacing to the source and the destination storage systems. It can be closed during the admit or the import phase without repercussions. During the import, SSMC merely acts as a reporting tool indicating the progress of the different import tasks.

Logging

Migrations orchestrated by SSMC are executed inside the embedded version of Peer Motion. The default log directory location for Peer Motion operations is the standard one for SSMC: <Install drive/folder>\Hewlett Packard Enterprise\SSMC\ssmcbase\data\logs on Windows and /var/opt/hpe/ssmc/data/logs on Linux®. This directory maintains several separate log files for different types of activities. SSMC 3.3 introduces the capability to forward 3PAR system log data to up to three remote syslog servers. This feature is configured in the System Parameters section in the Settings pane for a storage system.

The SSMC Activity log shows one line of information for each ongoing and past activity. Each entry can be expanded into two levels of detail. Filtering the Activity log for Peer Motion migration entries is possible using the options outlined in red in Figure 17. Filter to more granular levels by adding a storage system name outlined in green or by reducing the scope of SSMC in the blue rectangle.

Figure 17. Filtering the Activity log for Peer Motion entries

Detailed information on past SSMC migrations can be filtered out of the 3PAR event log by executing the HPE 3PAR CLI command showeventlog -min <minutes> -msg import on the destination system. The -min parameter, expressed in minutes, indicates how far back in time to look for matching events. The -startt and -endt options define a time slot for the event log analysis. The showeventlog -min <minutes> -msg <VV name> command shows detailed information about the migration steps of a volume. Further details can be obtained by issuing the showtask -d <task number> command, using the task number obtained from the showeventlog output.
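
For example, to review an import that completed within the past two hours and drill into one of its tasks (the task number 1234 is hypothetical and is taken from the showeventlog output):

cli% showeventlog -min 120 -msg import   # all import-related events of the past 2 hours
cli% showtask -d 1234                    # step-by-step detail for one import task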

Licensing

HPE updated its licensing policy for HPE 3PAR StoreServ 8000, 9000, and 20000 systems. Capabilities such as snapshots, Thin Provisioning, File Persona, Priority Optimization, Adaptive Flash Cache, Smart SAN, Online Import, and others are included as part of the All-inclusive Single System Software bundle. This collection of software ships with each array and is ready to be deployed. Capabilities involving multiple arrays, such as Remote Copy, Peer Persistence, HPE 3PAR Cluster Extension, Peer Motion, and Storage Federation, are grouped into the All-inclusive Multi-System Software bundle. Customers of HPE 3PAR StoreServ 8000 and 20000 systems with spindle-based licensing can convert to the All-inclusive licensing model for a small fee. The Peer Motion license for HPE 3PAR 7000 and 10000 systems is spindle based and can be acquired as an individual software title or as part of the Data Optimization Software Suite v2.

The All-inclusive and spindle-based Peer Motion licenses are array-based, perpetual, and without a cap. They place no restrictions on the amount of data moved, the number of times a migration can be executed between federation members, or the number of imports started from a migration source toward a federation member.

HPE 3PAR Storage Federation mandates a valid Peer Motion license on all federation members. The free one-year license for Online Import on 3PAR with HPE 3PAR OS 3.2.2 cannot be used for Peer Motion migration and federation. Migration sources do not require a Peer Motion license to transfer data to a federation member.
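
The installed licenses can be verified on each member from the 3PAR CLI:

cli% showlicense   # the list of enabled features must include Peer Motion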

Delivery model

HPE designed Storage Federation and Peer Motion to be easy to use. As a result, customers can execute the preparation and SAN zoning, the actual migration, and the cleanup themselves. HPE Pointnext can assist with the setup and management of storage federations, migration sources, and migrations. Setting up an HPE 3PAR Storage Federation can be part of a prepackaged or customized data migration service that offers expertise, best practices, and automation to deliver a successful end-to-end migration solution. For more information about HPE data migration services, consult your HPE representative or HPE partner. Refer to the HPE 3PAR StoreServ Data Migration Service data sheet for additional details.

Troubleshooting

Storage federation migrations can fail because of internal or external issues. The SSMC displays the reason for the admit or import failure in the left pane of the Peer Motions screen. External causes include federation members degraded at the hardware level, the presence of a volume on the destination system with the same name as the migrating one, a missing host on the destination 3PAR, or a problem with the SAN or Peer link connectivity. If the reason is external, the SSMC offers to retry the admit or import task once the root cause is fixed. If the SSMC does not offer the retry option, recovery is outside its scope.

No specific tool for diagnosing federation issues exists; the log files generated for standard SSMC debugging contain the necessary information. See the Logging section in this white paper for the location of the log files. The 3PAR event log complements the SSMC logs with information from the destination array’s perspective.

The source 3PAR handles most of the admit phase work; the destination 3PAR orchestrates the import phase. Visit the appropriate system depending on the phase in which the problem occurs. During the actual migration, the SSMC merely monitors the transfer progress.
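
As a first sketch of a health check from the CLI on the system in question:

cli% checkhealth      # summary of degraded components, including port and volume state
cli% showport -peer   # confirm the Peer link ports are connected and ready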

The HPE 3PAR Peer Motion and HPE 3PAR Online Import User Guide contains an extensive chapter on how to troubleshoot the various problems that can arise during the preparation and execution of a migration. Consult this guide for more information.

Collecting information before contacting support

When contacting HPE Support for a Storage Federation issue, you should proactively gather information and attach it to your request for help. This section outlines the steps for collecting relevant information.

The following steps assume the SSMC 3.3 installation directory is located at C:\Program Files\Hewlett Packard Enterprise\SSMC. If SSMC was installed in a different directory, change the path in the steps accordingly.

To collect Storage Federation information to submit with a report request:

1. On each 3PAR system in the federation, gather the output of the following 3PAR CLI commands:

a. showsys -d

b. shownode -d

c. showversion -a -b

d. showport -peer

e. showpeer

f. showportdev all n:s:p (with n:s:p as the location of each Peer port)

g. showtarget -rescan followed by showtarget

h. showtarget -lun all

i. showeventlog -debug -oneline -min X (with X indicating the number of minutes to look back for information in the event log). Make sure the value of X covers the entire migration duration.

Merge all output generated from these commands into one file and compress it.
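
A minimal sketch of automating step 1 over SSH from an administration host; the array address, user, Peer port locations, and look-back window are hypothetical and must be adapted per system:

#!/bin/sh
# Collect federation diagnostics from one 3PAR system over SSH.
ARRAY=3paradm@array1.example.com
for CMD in "showsys -d" "shownode -d" "showversion -a -b" \
           "showport -peer" "showpeer" \
           "showportdev all 1:2:1" "showportdev all 0:2:1" \
           "showtarget -rescan" "showtarget" "showtarget -lun all" \
           "showeventlog -debug -oneline -min 600"
do
  echo "### $CMD"                 # header line so the outputs are easy to tell apart
  ssh "$ARRAY" "$CMD"
done > federation_info_array1.txt
gzip federation_info_array1.txt   # compress the merged output for the support case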

2. On the server executing the SSMC instance:

a. On Windows, execute the batch file support.bat located in <Install drive/folder>\Hewlett Packard Enterprise\SSMC as Administrator. The output of the batch file is written to the compressed file support.<date_time>.zip residing in C:\ProgramData\Hewlett Packard Enterprise\SSMC.

b. On Linux, execute support.sh located at /opt/hpe/ssmc. This generates a compressed file in the same directory.

3. Forward the files obtained in steps 1 and 2 to HPE Support for analysis upon request.

© Copyright 2016-2019 Hewlett Packard Enterprise Development LP. The information contained herein is subject to change without notice. The only warranties for Hewlett Packard Enterprise products and services are set forth in the express warranty statements accompanying such products and services. Nothing herein should be construed as constituting an additional warranty. Hewlett Packard Enterprise shall not be liable for technical or editorial errors or omissions contained herein.

Linux is the registered trademark of Linus Torvalds in the U.S. and other countries. Microsoft and Windows are registered trademarks of Microsoft Corporation in the United States and other countries. VMware is a registered trademark of VMware, Inc. and its subsidiaries in the United States and other jurisdictions. All other third-party trademarks are the property of their respective owners.

4AA6-4335ENW, July 2019, Rev. 6

Resources

HPE 3PAR Peer Motion and HPE 3PAR Online Import User Guide
h20564.www2.hpe.com/hpsc/doc/public/display?docId=c04783896

SAN Design Reference Guide
support.hpe.com/hpsc/doc/public/display?docId=c00403562

HPE SPOCK
h20272.www2.hpe.com/spock/

Learn more at HPE 3PAR StoreServ Storage: hpe.com/storage/3PAR