Granite™ Core and Edge Deployment Guide

Version 2.5

August 2013

© 2013 Riverbed Technology. All rights reserved.

Riverbed®, Cloud Steelhead®, Granite™, Interceptor®, RiOS®, Steelhead®, Think Fast®, Virtual Steelhead®, Whitewater®, Mazu®, Cascade®, Shark®, AirPcap®, BlockStream™, SkipWare®, TurboCap®, WinPcap®, Wireshark®, TrafficScript®, FlyScript™, WWOS™, and Stingray™ are trademarks or registered trademarks of Riverbed Technology, Inc. in the United States and other countries. Riverbed and any Riverbed product or service name or logo used herein are trademarks of Riverbed Technology. All other trademarks used herein belong to their respective owners. The trademarks and logos displayed herein cannot be used without the prior written consent of Riverbed Technology or their respective owners.

Akamai® and the Akamai wave logo are registered trademarks of Akamai Technologies, Inc. SureRoute is a service mark of Akamai. Apple and Mac are registered trademarks of Apple, Incorporated in the United States and in other countries. Cisco is a registered trademark of Cisco Systems, Inc. and its affiliates in the United States and in other countries. EMC, Symmetrix, and SRDF are registered trademarks of EMC Corporation and its affiliates in the United States and in other countries. IBM, iSeries, and AS/400 are registered trademarks of IBM Corporation and its affiliates in the United States and in other countries. Linux is a trademark of Linus Torvalds in the United States and in other countries. Microsoft, Windows, Vista, Outlook, and Internet Explorer are trademarks or registered trademarks of Microsoft Corporation in the United States and in other countries. Oracle and JInitiator are trademarks or registered trademarks of Oracle Corporation in the United States and in other countries. UNIX is a registered trademark in the United States and in other countries, exclusively licensed through X/Open Company, Ltd. VMware, ESX, ESXi are trademarks or registered trademarks of VMware, Incorporated in the United States and in other countries.

This product includes software developed by the University of California, Berkeley (and its contributors), EMC, and Comtech AHA Corporation. This product is derived from the RSA Data Security, Inc. MD5 Message-Digest Algorithm.

This product includes the NetApp Manageability Software Development Kit (NM SDK), including any third-party software available for review with such SDK, which can be found at http://communities.netapp.com/docs/DOC-1152 and is included in a NOTICES file within the downloaded files.

For a list of open source software (including libraries) used in the development of this software along with associated copyright and license agreements, see the Riverbed Support site at https://support.riverbed.com.

This documentation is furnished “AS IS” and is subject to change without notice and should not be construed as a commitment by Riverbed Technology. This documentation may not be copied, modified or distributed without the express authorization of Riverbed Technology and may be used only in connection with Riverbed products and services. Use, duplication, reproduction, release, modification, disclosure or transfer of this documentation is restricted in accordance with the Federal Acquisition Regulations as applied to civilian agencies and the Defense Federal Acquisition Regulation Supplement as applied to military agencies. This documentation qualifies as “commercial computer software documentation” and any use by the government shall be governed solely by these terms. All other use is prohibited. Riverbed Technology assumes no responsibility or liability for any errors or inaccuracies that may appear in this documentation.

Riverbed Technology

199 Fremont Street

San Francisco, CA 94105

Phone: 415.247.8800

Fax: 415.247.8801

Web: http://www.riverbed.com

Part Number

712-00079-03

Contents

Preface.........................................................................................................................................................1

About This Guide ..........................................................................................................................................1
Audience ..................................................................................................................................................2
Document Conventions .........................................................................................................................2

Additional Resources ....................................................................................................................................3
Release Notes ..........................................................................................................................................3
Riverbed Documentation and Support Knowledge Base.................................................................3

Contacting Riverbed......................................................................................................................................3
Internet .....................................................................................................................................................3
Technical Support ...................................................................................................................................3
Professional Services ..............................................................................................................................4
Documentation........................................................................................................................................4

Chapter 1 - Overview of Granite Core and Granite Edge as a System ..................................................5

Introducing Virtual Branch Storage ............................................................................................................5

How the Granite Product Family Works....................................................................................................6

Deployment Recommendations and Best Practices .................................................................................7
Granite Edge Best Practices...................................................................................................................7
Granite Core Best Practices ...................................................................................................................8

Requirements..................................................................................................................................................8
Deployment Sizing .................................................................................................................................8
Granite Core Hardware Requirements..............................................................................................10
Granite Core-VE Hardware Requirements .......................................................................................10

iSCSI-Compliant Storage Array Interoperability ....................................................................................10

Chapter 2 - Deploying Granite Core and Granite Edge as a System ...................................................13

System Architecture and Components .....................................................................................................13

The Granite Family Deployment Process.................................................................................................14
Provisioning LUNs on the Storage Array .........................................................................................15
Installing and Configuring the Granite Core Appliance ................................................................15
LUN Pinning and Prepopulation in the Granite Core Appliance .................................................16

Configuring Snapshot and Data Protection Functionality .............................................................16
Installing and Configuring Granite Edge .........................................................................................17
Configuring Granite Edge HA and MPIO Functionality ...............................................................17
Managing LUN VMs as ESX Datastores ...........................................................................................17

Network Scenarios for Granite Core.........................................................................................................18
Single-Appliance Deployment ...........................................................................................................18
High-Availability Deployment ...........................................................................................................18

Connecting Granite Core with Granite Edge...........................................................................................19
Prerequisites ..........................................................................................................................................19
Process Overview: Connecting the Granite Product Family Components ..................................20
Adding Granite Edges to the Granite Core Configuration ............................................................21
Configuring Granite Edge...................................................................................................................21
Mapping LUNs to Granite Edges.......................................................................................................22

Chapter 3 - Deploying the Granite Core Appliance ...............................................................................25

Prerequisites .................................................................................................................................................25

Interface and Port Configuration ..............................................................................................................26
Granite Core Ports ................................................................................................................................26
Configuring Interface Routing ...........................................................................................................27
Configuring Granite Core for Jumbo Frames...................................................................................31

Configuring HA in Granite Core...............................................................................................................31
Cabling for Clustered Granite Cores .................................................................................................32
Accessing Failover Peers from Either Granite Core ........................................................................32
Configuring Failover Peers .................................................................................................................33
Removing Granite Core Appliances from an HA Configuration..................................................33

Configuring the iSCSI Initiator ..................................................................................................................34

Configuring LUNs.......................................................................................................................................34
Exposing LUNs .....................................................................................................................................35
Configuring Fibre Channel LUNs......................................................................................................35
Removing a LUN from a Granite Core Configuration....................................................................36

Configuring Redundant Connectivity with MPIO .................................................................................37
MPIO in Granite Core ..........................................................................................................................37
Configuring Granite Core MPIO Interfaces......................................................................................38

Chapter 4 - Configuring the Granite Edge..............................................................................................39

Granite Edge Storage Specifications .........................................................................................................39

Configuring Disk Management .................................................................................................................40

Configuring Granite Storage......................................................................................................................40

Configuring HA for Granite Edge.............................................................................................................40
HA for Granite Edge ............................................................................................................................41
Viewing HA Status ...............................................................................................................................41
MPIO Recommendations ....................................................................................................................41

MPIO in Granite Edge.................................................................................................................................42

Chapter 5 - Snapshots and Data Protection...........................................................................................43

Qualified Storage Vendors..........................................................................................................................43

Setting Up Application-Consistent Snapshots ........................................................................................44

Configuring Snapshots for LUNs..............................................................................................................45

Volume Snapshot Service (VSS) Support .................................................................................................45

Implementing Riverbed Host Tools for Snapshot Support ..................................................................46
RHSP and VSS: An Overview.............................................................................................................46
Riverbed Host Tools Operation and Configuration ........................................................................47

Configuring the Proxy Host for Backup...................................................................................................47

Configuring the Storage Array for Proxy Backup ..................................................................................48

Data Protection.............................................................................................................................................49

Data Recovery ..............................................................................................................................................50

Appendix A - Granite Edge Network Reference Architecture ..............................................................51

Multiple VLAN Branch With 4-Port Data NIC .......................................................................................52

Single VLAN Branch With 4-Port Data NIC............................................................................................54

Multiple VLAN Branch Without 4-Port Data NIC..................................................................................56

Index ..........................................................................................................................................................59

Preface

Welcome to the Granite Core and Edge Deployment Guide. Read this preface for an overview of the information provided in this guide, the documentation conventions used throughout, hardware and software dependencies, additional reading, and contact information. This preface includes the following sections:

“About This Guide” on page 1

“Additional Resources” on page 3

“Contacting Riverbed” on page 3

About This Guide

The Granite Core and Edge Deployment Guide provides an overview of the Riverbed® Granite™ Core and Riverbed® Granite™ Edge appliances and shows you how to install and configure them as a system.

This guide includes information relevant to the following products:

Riverbed Granite Core appliance (Granite Core)

Riverbed Granite Core Virtual Edition (Granite Core-VE)

Riverbed Granite Edge appliance (Granite Edge)

Riverbed Optimization System (RiOS)

Riverbed Steelhead EX appliance (Steelhead EX)

Riverbed Steelhead appliance (Steelhead appliance)

This guide is intended to be used together with the following documentation:

Granite Core Management Console User’s Guide

Granite Core Installation and Configuration Guide

Granite Core Getting Started Guide

Granite Data Protection and Recovery Guide

Steelhead Appliance Management Console User’s Guide (for Granite Edge configuration)

Riverbed Command-Line Interface Reference Manual

Fibre Channel on Granite Core Virtual Edition Solution Guide

Branch Office Windows File Server with Steelhead EX + Granite Solution Guide

Network Interface Card Installation Guide

Audience

This guide is written for storage and network administrators familiar with administering and managing storage arrays, snapshots, backups, and VMs, as well as Fibre Channel and iSCSI.

This guide requires you to be familiar with virtualization technology, the Granite Core Management Console User’s Guide, the Riverbed Command-Line Interface Reference Manual, and the Steelhead Appliance Management Console User’s Guide.

Document Conventions

This guide uses the following standard set of typographical conventions.

Convention Meaning

italics Within text, new terms and emphasized words appear in italic typeface.

boldface Within text, CLI commands and GUI controls appear in bold typeface.

Courier Code examples appear in Courier font:

amnesiac > enable
amnesiac # configure terminal

< > Values that you specify appear in angle brackets:

interface <ipaddress>

[ ] Optional keywords or variables appear in brackets:

ntp peer <addr> [version <number>]

{ } Required keywords or variables appear in braces:

{delete <filename> | upload <filename>}

| The pipe symbol represents a choice between the keyword or variable to the left or right of the symbol (the keyword or variable can be either optional or required):

{delete <filename> | upload <filename>}

Additional Resources

This section describes resources that supplement the information in this guide. It includes the following topics:

“Release Notes” on page 3

“Riverbed Documentation and Support Knowledge Base” on page 3

Release Notes

The following online file supplements the information in this guide. It is available on the Riverbed Support site at https://support.riverbed.com.

Online File: <product>_<version_number>_<build_number>.pdf

Purpose: Describes the product release and identifies fixed problems, known problems, and work-arounds. This file also provides documentation information not covered in the guides or that has been modified since publication.

Examine the online release notes before you begin the installation and configuration process. They contain important information about this release of the Granite Core appliance.

Riverbed Documentation and Support Knowledge Base

For a complete list and the most current version of Riverbed documentation, visit the Riverbed Support Web site located at https://support.riverbed.com.

The Riverbed Knowledge Base is a database of known issues, how-to documents, system requirements, and common error messages. You can browse titles or search for keywords and strings.

To access the Riverbed Knowledge Base, log in to the Riverbed Support site located at https://support.riverbed.com.

Contacting Riverbed

This section describes how to contact departments within Riverbed.

Internet

You can learn about Riverbed products through the company Web site: http://www.riverbed.com.

Technical Support

If you have problems installing, using, or replacing Riverbed products, contact Riverbed Support or your channel partner who provides support. To contact Riverbed Support, open a trouble ticket by calling 1-888-RVBD-TAC (1-888-782-3822) in the United States and Canada or +1 415 247 7381 outside the United States. You can also go to https://support.riverbed.com.

Professional Services

Riverbed has a staff of professionals who can help you with installation, provisioning, network redesign, project management, custom designs, consolidation project design, and custom coded solutions. To contact Riverbed Professional Services, email [email protected] or go to http://www.riverbed.com/us/products/professional_services/.

Documentation

Riverbed continually strives to improve the quality and usability of its documentation and appreciates any suggestions customers might have about its online documentation or printed materials. Send documentation comments to [email protected].

CHAPTER 1 Overview of Granite Core and Granite Edge as a System

This chapter describes the Granite Core and Granite Edge appliances as a virtual storage system. It includes the following sections:

“Introducing Virtual Branch Storage” on page 5

“How the Granite Product Family Works” on page 6

“Requirements” on page 8

“iSCSI-Compliant Storage Array Interoperability” on page 10

Introducing Virtual Branch Storage

Granite Core and Granite Edge enable 100% consolidation of data and applications, delivering LAN performance at the branch office over the WAN. By functioning as a virtual branch storage system, the Granite product family eliminates the need for dedicated storage, including management and related backup resources, at the branch office.

With the Granite product family, storage administrators can extend a data center storage array to a remote location, even over a low-bandwidth link. Granite delivers business agility, enabling you to effectively deliver global storage infrastructure anywhere you need it.

Note: Granite Edge functionality in Steelhead EX requires RiOS v7.0.2 or later.

The Granite product family provides the following functionality:

Innovative block storage optimization ensures that you can centrally manage data storage while keeping that data available to business operations in the branch, even in the event of a WAN outage.

A local authoritative cache ensures LAN-speed reads and fast cold writes at the branch.

Integration with Microsoft Volume Shadow Copy Service enables consistent point-in-time data snapshots and seamless integration with backup applications.

Integration with the snapshot capabilities of the storage array enables you to configure application-consistent snapshots through the Granite Core Management Console.

Integration with industry-standard Challenge-Handshake Authentication Protocol (CHAP) authenticates users and hosts.

A secure vault protects sensitive information using AES 256-bit encryption.

Solid-state disks (SSDs) guarantee data durability and performance.

An active-active high-availability (HA) deployment option for Granite ensures the availability of storage array Logical Unit Numbers (LUNs) for remote sites.

Customizable reports provide visibility to key utilization, performance, and diagnostic information.

By consolidating all storage at the data center and creating diskless branches, Granite eliminates data sprawl, costly data replication, and the risk of data loss at the branch office.
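Among the features listed above, CHAP is a standard challenge-response scheme (RFC 1994): the shared secret is never sent on the wire; instead, the responder returns an MD5 digest computed over the challenge. The sketch below illustrates the underlying exchange in Python and is not Riverbed-specific code:

```python
import hashlib
import os

def chap_response(identifier: int, secret: bytes, challenge: bytes) -> bytes:
    """Compute a CHAP response per RFC 1994: MD5(identifier || secret || challenge)."""
    return hashlib.md5(bytes([identifier]) + secret + challenge).digest()

# Authenticator side: issue a one-octet identifier and a random challenge.
secret = b"shared-chap-secret"   # provisioned on both ends, never transmitted
identifier = 7
challenge = os.urandom(16)

# Initiator side: prove knowledge of the secret without sending it.
response = chap_response(identifier, secret, challenge)

# Authenticator side: recompute the digest and compare.
assert response == chap_response(identifier, secret, challenge)
```

In Mutual CHAP, as recommended later in this guide, the same exchange also runs in the reverse direction so that each side authenticates the other.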

How the Granite Product Family Works

The Granite product family is designed to enable branch office server systems to efficiently access storage arrays over the WAN. The Granite product family is deployed in conjunction with Steelhead appliances and comprises the following components:

Granite Core - Granite Core is a physical or virtual appliance deployed in the data center alongside Steelhead appliances and the centralized storage array. Granite Core mounts iSCSI LUNs provisioned for the branch offices.

Granite Edge - Granite Edge runs as a module on Steelhead EX or as a standalone appliance. Granite Edge presents itself as one or more iSCSI targets in the branch.

The branch office server connects to Granite Edge, which implements handlers for the iSCSI protocol. The Granite Edge also connects to the block store, a persistent local cache of storage blocks.

When the branch office server requests blocks, those blocks are served locally from the block store; if they are not present, Granite Edge retrieves them from the data center LUN. Similarly, newly written blocks are spooled to the local cache, acknowledged to the branch office server, and then asynchronously propagated to the data center. Because each Granite Edge implementation is linked to one or more dedicated LUNs at the data center, the block store is authoritative for both reads and writes, and can tolerate WAN outages without affecting cache coherency.
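The read/write path just described behaves like a write-back cache with an authoritative local copy. The following Python model is purely illustrative (class and method names are invented for explanation, not Riverbed internals): reads are served from the cache and fall back to the data center LUN on a miss; writes are acknowledged locally and drained later.

```python
from collections import deque

class BlockStoreModel:
    """Toy model of the Granite Edge block store (illustrative only)."""

    def __init__(self, datacenter_lun: dict):
        self.lun = datacenter_lun      # stands in for the data center LUN
        self.cache = {}                # persistent local cache of blocks
        self.write_queue = deque()     # writes awaiting propagation

    def read(self, block_id):
        if block_id in self.cache:     # warm read: served at LAN speed
            return self.cache[block_id]
        data = self.lun[block_id]      # cold read: fetched over the WAN
        self.cache[block_id] = data
        return data

    def write(self, block_id, data):
        self.cache[block_id] = data    # acknowledged locally at once
        self.write_queue.append(block_id)

    def propagate(self):
        """Drain queued writes back to the data center (asynchronous in the real system)."""
        while self.write_queue:
            block_id = self.write_queue.popleft()
            self.lun[block_id] = self.cache[block_id]
```

Because the local cache is authoritative, `read` and `write` keep succeeding even while `propagate` (the WAN path) is unavailable; queued writes simply drain once connectivity returns.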

Blocks are communicated between Granite Edges and Granite Core through an internal protocol. The Granite Core then writes the updates to the data center LUN through the iSCSI or Fibre Channel protocol. (Optionally, you can further optimize traffic between the branch offices and the data center by implementing Steelhead appliances.)

Granite initially populates the block store using the following methods:

Reactive prefetch - The system observes block requests, applies heuristics based on these observations to intelligently predict the blocks most likely to be requested in the near future, and then requests those blocks from the data center LUN in advance.

Policy-based prefetch - Configured policies identify the blocks that are likely to be requested at a given branch office site; the Granite Edge then requests those blocks from the data center LUN in advance.

First request - Blocks are added to the block store when first requested. Because the first request is cold, it is subject to standard WAN latency.
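Riverbed does not publish its prefetch heuristics, but the reactive method above can be illustrated with the simplest heuristic of this kind, sequential read-ahead (an assumption used purely for illustration): after observing a run of consecutive block requests, fetch the next few blocks before they are asked for.

```python
def prefetch_candidates(history, depth=4, run=3):
    """Sequential read-ahead: if the last `run` requests were consecutive
    block IDs, predict the next `depth` IDs (illustrative heuristic only)."""
    if len(history) < run:
        return []
    tail = history[-run:]
    if all(tail[i] + 1 == tail[i + 1] for i in range(run - 1)):
        return [tail[-1] + 1 + i for i in range(depth)]
    return []
```

For example, `prefetch_candidates([10, 11, 12])` returns `[13, 14, 15, 16]`, while a random access pattern yields no candidates and the blocks fall through to the first-request path.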

For details on system architecture, see “System Architecture and Components” on page 13.

Deployment Recommendations and Best Practices

Every deployment of the Granite product family differs due to variations in specific customer needs and types and sizes of IT infrastructure.

The following recommendations and best practices are intended to guide you to achieving optimal performance while reducing configuration and maintenance requirements.

Granite Edge Best Practices

The following table summarizes the recommended best practices for deploying Granite Edge.

Best Practice Description

Segregate Traffic

To increase overall security, minimize congestion and latency, and simplify overall configuration, Riverbed recommends that you segregate iSCSI storage traffic from regular LAN traffic.

Pin the LUN and Prepopulate the Block Store

In specific circumstances, Riverbed recommends that you pin the LUN and prepopulate the block store. Additionally, you should resize the write reserve space accordingly; by default, the Granite Edge has a write reserve space that is 10% of the block store.

Riverbed recommends that you pin the LUN in the following circumstances:

• Unoptimized file systems - Granite natively supports NTFS and VMFS file systems. However, for unoptimized file systems such as FAT, FAT32, ext3, and others, Riverbed recommends that you pin the LUN and prepopulate the block store.

• Database applications - If the LUN contains database applications that use raw disk file formats or proprietary file systems, Riverbed recommends that you pin the LUN and prepopulate the block store.

• WAN outages are likely or common - Ordinary operation of Granite depends on WAN connectivity between the branch office and the data center. If WAN outages are likely or common, Riverbed recommends that you pin the LUN and prepopulate the block store.

Segregate Data onto Multiple LUNs

Riverbed recommends that you separate storage into three LUNs, as follows:

• Operating system - In case of recovery, the operating system LUN can be quickly restored from the Windows installation disk or ESX data store, depending on the type of server used in the deployment.

• Production data - The production data LUN is hosted on the Granite Edge, and therefore safely backed up at the data center.

• Swap space - Data on the swap space LUN is transient and therefore not required in disaster recovery.

Granite Core Best Practices

The following table summarizes the recommended best practices for deploying Granite Core.

Requirements

This section describes the hardware and software requirements to deploy Granite Core and Granite Edge. It includes the following topics:

“Deployment Sizing” on page 8

“Granite Core Hardware Requirements” on page 10

“Granite Core-VE Hardware Requirements” on page 10

Deployment Sizing

Accurately sizing a deployment typically requires a close discussion between Riverbed representatives and your server, storage, and application administrators.

General considerations include but are not limited to:

Storage capacity used by branch offices - How much capacity is currently used, or expected to be used, by the branch office? The total capacity might include both used and free space.

IOPS - What are the number and types of drives being used? This value should be determined early so that the Granite-enabled Steelhead appliance can provide the same or higher level of performance.

Daily rate of change - How much data is Granite Edge expected to write back to the storage array through the Granite Core appliance? This value can be determined by studying backup logs.

Outage applications - Which and how many applications are required to continue running during a WAN outage? This might impact disk capacity calculations.

Best Practice Description

Deploy on GigE Networks

The iSCSI protocol enables block-level traffic over IP networks. However, iSCSI is both latency and bandwidth sensitive. To optimize performance reliability, Riverbed recommends that you deploy Granite Core and the storage array on GigE networks.

Configure LUN Masking/Storage Access Control

To ensure data integrity, Riverbed recommends that you block access by other hosts to the LUNs mapped to Granite Core. Use LUN masking, also known as storage access control.

Use Mutual CHAP Riverbed recommends that you use the Mutual CHAP (Challenge Handshake Authentication Protocol) between Granite Core and the storage array for additional security. One-way CHAP is also supported.

Segregate Traffic To increase overall security, minimize congestion and latency, and simplify overall configuration, Riverbed recommends that you segregate storage traffic from regular LAN traffic. Assign storage traffic to its own physically separate network (VLAN) that is routed separately from the main network.

Configure Jumbo Frames

If jumbo frames are supported by your network infrastructure, Riverbed recommends that you use jumbo frames between Granite Core and the storage array. For details, see “Configuring Granite Core for Jumbo Frames” on page 31.


Granite Core Sizing Guidelines

The main considerations for sizing your Granite Core deployment are as follows:

Total data set size - The total space used across LUNs (not the size of LUNs)

Total number of LUNs - Each LUN adds five connections to the Steelhead appliance. Also, each branch office represents at least one LUN in the storage array.

RAM requirements - Riverbed recommends that you have at least 700 MB of RAM per TB of used space in the data set.

Other potentially decisive factors:

Number of files and directories

Type of file system, such as NTFS or VMFS

File fragmentation

Active working set of LUNs

Number of misses seen from Granite Edge

Response time of the storage array

Granite Edge Sizing Guidelines

The main considerations for sizing your Granite Edge deployment are as follows:

Disk size - What is the expected capacity of the Granite Edge block store?

Input/output operations per second (IOPS) - You can calculate this value from the number and types of drives.

The following table summarizes sizing recommendations for Granite Core appliances based on the number of branches and data set sizes.

Branches | Model | Size | Data Set Size
5 | VGC1000VL | n/a | 2 TB
10 | GC2000L | 2U | 5 TB
10 | VGC1000L | n/a | 5 TB
20 | GC2000M | 2U | 10 TB
20 | VGC1000M | n/a | 10 TB
40 | GC2000H | 2U | 20 TB
80 | GC2000VH | 2U | 35 TB
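The tiers in the table above, combined with the RAM guideline of at least 700 MB per TB of used data, can be expressed as a rough sizing calculator. This is an illustrative sketch, not a Riverbed tool; the function name and tier encoding are assumptions:

```python
# Illustrative sizing helper based on the Granite Core sizing table and the
# guideline of at least 700 MB of RAM per TB of used data. Not Riverbed software.
SIZING_TIERS = [
    # (max branches, max data set size in TB, candidate models)
    (5, 2, "VGC1000VL"),
    (10, 5, "GC2000L or VGC1000L"),
    (20, 10, "GC2000M or VGC1000M"),
    (40, 20, "GC2000H"),
    (80, 35, "GC2000VH"),
]

def recommend(branches, data_set_tb):
    """Return (models, minimum RAM in GB) for the smallest tier that fits both
    the branch count and the data set size, or None if no listed tier fits."""
    for max_branches, max_tb, models in SIZING_TIERS:
        if branches <= max_branches and data_set_tb <= max_tb:
            min_ram_gb = round(0.7 * data_set_tb, 1)  # 700 MB per TB of used data
            return models, min_ram_gb
    return None  # exceeds the largest listed tier; consult Riverbed
```

For example, a deployment with 8 branches and a 4 TB used data set falls into the 10-branch, 5 TB tier and needs roughly 2.8 GB of RAM for the data set.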


Granite Core Hardware Requirements

The following table summarizes memory and processing sizing requirements, based on data set size and the number of branches served by the data center in which you deploy Granite Core.

Size | Model | Maximum RAM | Cores | Data Set Size | Branches
2U | GC2000L | 4 GB | 8 | 5 TB | 10
2U | GC2000M | 8 GB | 8 | 10 TB | 20
2U | GC2000H | 16 GB | 8 | 20 TB | 40
2U | GC2000VH | 24 GB | 8 | 35 TB | 80

Granite Core-VE Hardware Requirements

The following table summarizes the capabilities of the different Granite Core-VE models, including the number of branch offices and the size of the data set that each can support.

Model | Memory | Disk Space | Recommended CPU | Data Set Size | Branches
VGC1000U | 2 GB | 25 GB | 2 @ 2.2 GHz | 2 TB | 5
VGC1000L | 4 GB | 25 GB | 4 @ 2.2 GHz | 5 TB | 10
VGC1000M | 8 GB | 25 GB | 8 @ 2.2 GHz | 10 TB | 20

Figure 1-1. Hardware Requirements

iSCSI-Compliant Storage Array Interoperability

Granite Core can interoperate with any iSCSI-compliant storage array. However, support for snapshots and other features can be limited. For information about qualified storage vendors and storage snapshot interoperability, see “Qualified Storage Vendors” on page 43.

The following table lists storage systems qualified by the original storage vendor and by Riverbed internal engineers, and field-tested by Riverbed customers and partners.

Vendor | Series/Model | Version/Firmware | Snapshot Integration | Internally Qualified | Vendor Qualified | Field Interoperability
EMC | VMAX-10K | 5876 | No | Yes | No | n/a
EMC | CLARiiON CX4-120 | 04.28.000.5.704 | Yes | Yes | No | n/a
EMC | CLARiiON CX4-120 | 04.30.000.5.523 | Yes | Yes | No | n/a
EMC | VNX 5300 | 5.32.000.5.008 | Yes | Yes | Yes | n/a
EMC | VNX 7500 | 5.32.000.6.004 | Yes | Yes | Yes | n/a
EMC | VPLEX | n/a | No | Yes | No | Yes
EMC | Isilon | n/a | No | No | No | Yes


Vendor | Series/Model | Version/Firmware | Snapshot Integration | Internally Qualified | Vendor Qualified | Field Interoperability
NetApp | FAS2020 | 7.2.4L1 | Yes (Ontap SDK 5.0) | Yes | No | n/a
NetApp | FAS270 | 7.3.3 | Yes (Ontap SDK 5.0) | Yes | No | n/a
NetApp | FAS270 | 7.2.4 | Yes (Ontap SDK 5.0) | Yes | No | n/a
NetApp | FAS2050 | 7.3.7 | Yes (Ontap SDK 5.0) | Yes | No | n/a
Dell | EqualLogic PS4000 | v70-0120 4.3.0 (R106033) | Yes (PSAPI 2.0.0) | Yes | No | n/a
Dell | Compellent | n/a | n/a | No | No | Yes
Hewlett-Packard | LeftHand | n/a | n/a | No | No | Yes
Hewlett-Packard | 3PAR | n/a | n/a | No | No | Yes
Qnap | n/a | n/a | n/a | No | No | Yes
Open Filer | n/a | n/a | n/a | No | No | Yes
IBM | V7000 | n/a | No | No | Yes | Yes
IBM | XIV Gen3 | v 11.2.0 | No | No | Yes | Yes
IBM | SVC | n/a | No | No | Yes | Yes


CHAPTER 2 Deploying Granite Core and Granite Edge as a System

This chapter describes the process and procedures for deploying the Granite product family at both branch offices and the data center. It includes the following sections:

“System Architecture and Components” on page 13

“The Granite Family Deployment Process” on page 14

“Network Scenarios for Granite Core” on page 18

“Connecting Granite Core with Granite Edge” on page 19

System Architecture and Components

This section describes the system components and their roles.

At the data center, Granite Core integrates with existing storage systems and Steelhead deployments. Granite Core connects dedicated LUNs with each Granite Edge at the branch office.

Each Granite Edge contains a block store that mirrors the data center LUN. As a result, the block store eliminates the need for separate block storage facilities at the branch office, and all the associated maintenance, tools, backup services, hardware, service resources, and so on.

The following diagram shows a generic Granite product family deployment.

Figure 2-1. Generic Granite product family Deployment


The basic components of a Granite product family deployment are:

Branch Network Server - The branch-side server that accesses data from the Granite Edge instead of a local storage device.

Block Store - A persistent local cache of storage blocks. Because each Granite Edge is linked to a dedicated LUN at the data center, the block store is generally authoritative for both reads and writes.

In Figure 2-1, the block store on the branch side synchronizes with LUN1 at the data center.

iSCSI Initiator - The branch-side client that sends SCSI commands to the iSCSI target at the data center.

Granite-Enabled Steelhead EX Appliance (Granite Edge) - The Granite Edge is the branch-side component that links the branch office server to the block store and links the block store to the iSCSI target and LUN at the data center. The Steelhead appliance provides general optimization services. Granite Edge can be either a standalone appliance or a Steelhead EX appliance with a Granite Edge license.

Data Center Steelhead Appliance - The data center-side Steelhead peer for general optimization.

Granite Core - The data center component of the Granite product family. Granite Core manages block transfers between the LUN and the Granite Edge.

iSCSI Target - The data center-side storage array that communicates with the branch-side iSCSI initiator. For details on storage arrays, see “iSCSI-Compliant Storage Array Interoperability” on page 10.

LUNs - Each Granite Edge requires one or more dedicated LUNs in the data center storage configuration.

The Granite Family Deployment Process

This section provides a broad outline of the process for deploying the Granite product family. The steps are listed in approximate order; dependencies are listed when required.

The steps are as follows:

“Provisioning LUNs on the Storage Array” on page 15

“Installing and Configuring the Granite Core Appliance” on page 15

“LUN Pinning and Prepopulation in the Granite Core Appliance” on page 16

“Configuring Snapshot and Data Protection Functionality” on page 16

“Installing and Configuring Granite Edge” on page 17

“Configuring Granite Edge HA and MPIO Functionality” on page 17

“Managing LUN VMs as ESX Datastores” on page 17


Provisioning LUNs on the Storage Array

This section describes how to provision LUNs on the storage array.

To provision LUNs on the storage array

1. Enable the connections for the type of LUNs you intend to expose to the branch, for example, iSCSI and Fibre Channel.

2. Create empty LUNs that you want to dedicate to specific branches.

3. By connecting to a temporary ESX server, deploy virtual machines (VMs) for branch services (including the branch Windows server) to the LUNs.

Riverbed recommends that you implement the optional Windows Server plugins at this point. For details, see “Implementing Riverbed Host Tools for Snapshot Support” on page 46.

4. After you deploy the VMs, disconnect from the temporary ESX server.

5. Create the necessary initiator or target groups.

For more information, see the documentation for your storage array.

Installing and Configuring the Granite Core Appliance

This section describes how to install and configure Granite Core.

To install and configure Granite Core

1. Install Granite Core or Granite Core-VE in the data center network.

For details, see the Granite Core Installation and Configuration Guide and the Granite Core Getting Started Guide.

2. Connect the Granite Core appliance to the storage array.

For details, see the Granite Core Management Console User’s Guide or the Riverbed Command-Line Interface Reference Manual.

3. Through Granite Core, discover and configure the desired LUNs on the storage array.

For details, see the Granite Core Management Console User’s Guide or the Riverbed Command-Line Interface Reference Manual.

4. (Recommended) Enable and configure HA.

Granite Core supports a live failover configuration in which both peers serve different sets of branch offices, but either can take over for the other in the event of a hardware failure. In a Granite Core-VE deployment, each appliance must run on a separate ESX server.

HA also requires preferred interface configuration. For details, see the Granite Core Management Console User’s Guide or the Riverbed Command-Line Interface Reference Manual.


5. (Recommended) Enable and configure Multi-Path I/O (MPIO).

If you connect the Granite Core appliance to the storage array with redundant cables, the connections are automatically detected but you must still configure the MPIO interfaces to be used.

The MPIO feature enables you to connect Granite Core to the network and to the storage system through multiple physical I/O paths. Redundant connections help prevent loss of connectivity in the event of an interface, switch, cable, or other physical failure.

You can configure MPIO at two separate and independent points:

iSCSI Initiator - You can enable and configure multiple I/O paths between Granite Core and the storage system. Optionally, you can enable standard routing if the iSCSI portal is not in the same subnet as the MPIO interfaces.

iSCSI Target - You can configure multiple portals on the Granite Edge. With this configuration, an initiator can establish multiple I/O paths to the Granite Edge.

In both cases, the redundant connections help prevent loss of connectivity. For details, see the Granite Core Management Console User’s Guide or the Riverbed Command-Line Interface Reference Manual.
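Conceptually, MPIO failover tries each configured path in turn and moves to the next on failure. The following sketch models that behavior in the abstract; it is not Riverbed code, and the class, method, and path names are invented for illustration:

```python
class MultiPathSession:
    """Conceptual sketch of MPIO failover: try each configured I/O path in
    order and fail over to the next path on a connection error."""

    def __init__(self, paths):
        self.paths = list(paths)  # e.g. ["10.1.0.10", "10.2.0.10"]

    def submit(self, io_request, send):
        # 'send' is a callable: send(path, io_request) -> response.
        last_error = None
        for path in self.paths:
            try:
                return send(path, io_request)
            except ConnectionError as exc:
                last_error = exc  # this path failed; fail over to the next
        raise ConnectionError("all I/O paths failed") from last_error
```

The real value of the redundant cabling described above is exactly this property: a single interface, switch, or cable failure costs one path, not connectivity.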

LUN Pinning and Prepopulation in the Granite Core Appliance

LUN pinning and prepopulation are two separate features, configured through Granite Core, that together determine how block data is kept in the block store on the Granite Edge.

When you pin a LUN in the Granite Core configuration, you reserve space in the Granite Edge block store equal in size to the LUN at the storage array. Furthermore, when blocks are fetched by the Granite Edge, they remain in the block store in their entirety; by contrast, blocks in unpinned LUNs might be cleared on a first-in, first-out basis.

Pinning only reserves block store space; it does not populate that space with blocks.

The prepopulation functionality enables you to prefetch blocks to the block store. You can prepopulate a pinned LUN on the block store in one step; however, if the number of blocks is very large, you can configure a prepopulation schedule that prepopulates the block store only during the scheduled intervals, stopping after the prepopulation process is completed.

For more information about pinning and prepopulation, see the Granite Core Management Console User’s Guide.
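The pinned-versus-unpinned behavior described above can be modeled with a small cache sketch. This is a conceptual illustration only; the names are invented, and the real block store is more sophisticated than this simple FIFO model:

```python
from collections import OrderedDict

class BlockStoreSketch:
    """Conceptual model of a block store: blocks from pinned LUNs are kept,
    while blocks from unpinned LUNs may be evicted first-in, first-out."""

    def __init__(self, capacity_blocks):
        self.capacity = capacity_blocks
        self.pinned_luns = set()
        self.cache = OrderedDict()  # (lun, block) -> data, oldest first

    def pin(self, lun):
        # Pinning reserves space; it does not populate it with blocks.
        self.pinned_luns.add(lun)

    def fetch(self, lun, block, data):
        self.cache[(lun, block)] = data
        # When over capacity, evict the oldest block of any unpinned LUN.
        while len(self.cache) > self.capacity:
            victim = next(
                (k for k in self.cache if k[0] not in self.pinned_luns), None)
            if victim is None:
                break  # every remaining block belongs to a pinned LUN
            del self.cache[victim]
```

Prepopulation, in these terms, is simply calling `fetch` for every block of a pinned LUN ahead of demand, in one step or during scheduled intervals.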

Configuring Snapshot and Data Protection Functionality

Granite Core integrates with the snapshot capabilities of the storage array and enables you to configure application-consistent snapshots through the Granite Core Management Console.

For details, see “Snapshots and Data Protection” on page 43.


Installing and Configuring Granite Edge

This section describes how to install and configure Granite Edge.

To install and configure Granite Edge

1. Install the Steelhead EX appliance in the branch office network.

For details, see the Steelhead EX Installation and Configuration Guide and the Granite Core Getting Started Guide.

2. On the Steelhead EX appliance, configure Disk Management to enable Granite storage mode.

This step can require additional branch services configuration, including for the Virtual Services Platform (VSP). For details, see the Steelhead Appliance Management Console User’s Guide and the Riverbed Command-Line Interface Reference Manual.

3. Preconfigure Granite Edge for connection to Granite Core.

This procedure includes setting the Granite Edge identifier used by Granite Core to recognize and connect to the Granite Edge appliance. For details, see the Steelhead Appliance Management Console User’s Guide and the Riverbed Command-Line Interface Reference Manual.

4. Connect Granite Edge and Granite Core.

In Granite Core, map the LUN (or LUNs) that are dedicated to that specific branch office to the corresponding Granite Edge.

Note: If the Granite Edge identifier and iSCSI initiators and targets have been previously configured, you can perform all Granite Edge and Granite Core connection configuration through the setup wizard accessible through the Granite Core Management Console. For details, see the Steelhead Appliance Management Console User’s Guide.

Configuring Granite Edge HA and MPIO Functionality

In Granite Edge, you enable multiple local interfaces through which the iSCSI initiator can connect to the Granite Edge. Redundant connections help prevent loss of connectivity in the event of an interface, switch, cable, or other physical failure.

For details, see the Steelhead Appliance Management Console User’s Guide.

Managing LUN VMs as ESX Datastores

Through the vSphere client, you can view inside the LUN to see the VMs previously loaded in the data center storage array. These VMs can then be added to the VSP inventory and Granite Edge and run as services through the Steelhead EX appliance.

Similarly, you can use vSphere to provision LUNs with VMs from VSP on the Steelhead EX appliance. For more information, see the Steelhead Appliance Management Console User’s Guide.


Network Scenarios for Granite Core

This section describes both single-appliance and HA network scenarios.

Note: In all scenarios, Granite Core is deployed out-of-path.

Single-Appliance Deployment

In a single appliance deployment, Granite Core connects to the storage array through the eth0_1 interface. The primary (PRI) interface is dedicated to the traffic VLAN, and the Auxiliary (AUX) interface is dedicated to the management VLAN.

For details on ports and interfaces, see “The Granite Family Deployment Process” on page 14.

Figure 2-2. Single Appliance Deployment

High-Availability Deployment

In an HA deployment, two Granite Cores operate as failover peers. Each appliance operates independently with its own distinct set of Granite Edges until one fails; the remaining operational Granite Core then handles the traffic for both appliances.

Both Granite Cores connect to the storage array through their respective eth0_1 interfaces, and communicate with each other through the eth0_2 and eth0_3 interfaces. The PRI interfaces are dedicated to the traffic VLAN and the AUX interfaces are dedicated to the management VLAN.

For details on ports and interfaces, see “The Granite Family Deployment Process” on page 14.


For details on configuring Granite Core for HA, see “Configuring Redundant Connectivity with MPIO” on page 37.

Figure 2-3. HA Deployment

Connecting Granite Core with Granite Edge

This section describes the prerequisites for configuring the data center and branch office components of the Granite product family, and provides an overview of the procedures required. It includes the following topics:

“Prerequisites” on page 19

“Process Overview: Connecting the Granite Product Family Components” on page 20

“Adding Granite Edges to the Granite Core Configuration” on page 21


“Configuring Granite Edge” on page 21

“Mapping LUNs to Granite Edges” on page 22

Prerequisites

Before you configure Granite Core with Granite Edge, ensure that the following steps have been completed:

Assign an IP address or hostname to the Granite Core.

Determine the iSCSI Qualified Name (IQN) to be used for Granite Core.

When you configure Granite Core, you set this value in the initiator configuration.

Set up your storage array:

– Register the Granite Core IQN.

– Configure iSCSI portal, targets, and LUNs, with the LUNs assigned to the Granite Core IQN.

Assign an IP address or hostname to the Granite Edge.
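The IQN mentioned in the prerequisites above follows the format defined in RFC 3720: `iqn.`, the year and month the naming authority acquired its domain, the reversed domain name, and an optional colon-delimited unique name. A rough format check follows; the regex deliberately simplifies the full RFC grammar and the helper name is invented:

```python
import re

# Simplified IQN check per RFC 3720: iqn.YYYY-MM.<reversed-domain>[:<name>]
IQN_PATTERN = re.compile(r"^iqn\.\d{4}-\d{2}\.[a-z0-9]([a-z0-9.-]*[a-z0-9])?(:.+)?$")

def looks_like_iqn(name: str) -> bool:
    """Return True if the string resembles an iSCSI Qualified Name."""
    return IQN_PATTERN.match(name) is not None
```

For example, `iqn.2013-08.com.example:granite-core-1` fits the format, while a bare hostname does not.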


Process Overview: Connecting the Granite Product Family Components

The following table summarizes the process for connecting and configuring Granite Core and Granite Edge as a system.

Component Procedure Description

Granite Core Determine the network settings for Granite Core.

Prior to deployment:

• Assign an IP address or hostname to Granite Core.

• Determine the IQN to be used for Granite Core.

When you configure Granite Core, you set this value in the initiator configuration.

iSCSI-compliant storage array

Register the Granite Core IQN. Granite uses the IQN name format for iSCSI initiators. For details on IQN, see http://tools.ietf.org/html/rfc3720.

Enable Fibre Channel connections.

For details, see “Configuring Fibre Channel LUNs” on page 35.

Prepare the iSCSI portals, targets, and LUNs, with the LUNs assigned to the Granite Core IQN.

Prior to deploying Granite Core, you must prepare these components.

Granite Core Install Granite Core. For details, see the Granite Core Installation and Configuration Guide.

Granite Edge Install the Granite-enabled Steelhead appliance (separate Granite license required).

For details, see the Steelhead EX Installation and Configuration Guide.

Granite Edge Configure disk management. You can configure the disk layout mode to allow space for the Granite block store in the Disk Management page.

Free disk space is divided between the Virtual Services Platform (VSP) and the Granite block store.

For details, see “Configuring Disk Management” on page 40.

Configure Granite storage settings.

The Granite storage settings are used by Granite Core to recognize and connect to Granite Edge.

For details, see “Configuring Granite Storage” on page 40.

Granite Core Run the Setup Wizard to perform initial configuration.

The Setup Wizard performs the initial, minimal configuration of Granite Core, including:

• Network settings

• iSCSI initiator configuration

• Mapping LUNs to Granite Edges

For details, see the Granite Core Installation and Configuration Guide.


Granite Core Configure iSCSI initiators and LUNs.

Configure the iSCSI initiator and specify an iSCSI portal. Granite Core then discovers all the targets within that portal.

Add and configure the discovered targets to the iSCSI initiator configuration.

Configure targets. After a target is added, all the LUNs on that target can be discovered, and you can add them to the running configuration.

Map LUNs to Granite Edges. Using the previously defined Granite Edge self-identifier, connect LUNs to the appropriate Granite Edges.

For details on these procedures, see the Granite Core Management Console User’s Guide.

Granite Core Configure CHAP users and storage array snapshots.

Optionally, you can configure CHAP users and storage array snapshots. For details, see the Granite Core Management Console User’s Guide.

Granite Edge Confirm the connection with Granite Core.

After completing the Granite Core configuration, confirm that Granite Edge is connected to and communicating with Granite Core. For details, see “Mapping LUNs to Granite Edges” on page 22.

Adding Granite Edges to the Granite Core Configuration

You can add and modify connectivity with Granite Edge appliances in the Configure > Storage > Granite Edges page in the Granite Core Management Console.

This procedure requires you to provide the Granite Edge Identifier for the Granite Edge appliance. This value is defined in the Configure > Granite > Granite Storage page in the Granite Edge Management Console, or specified through the CLI. For more information, see the Granite Core Management Console User’s Guide or the Riverbed Command-Line Interface Reference Manual.

Configuring Granite Edge

For more information about Granite Edge configuration for deployment, see “Configuring the Granite Edge” on page 39.


Mapping LUNs to Granite Edges

This section describes how to configure LUNs and map them to Granite Edge appliances.

It includes the following topics:

“Configuring iSCSI Settings” on page 22

“Configuring LUNs” on page 22

“Configuring Granite Edges for Specific LUNs” on page 22

Configuring iSCSI Settings

You can view and configure the iSCSI initiator, portals, and targets in the iSCSI Configuration page.

The iSCSI initiator settings configure how Granite Core communicates with one or more storage arrays through the specified portal configuration.

After configuring the iSCSI portal, you can open the portal configuration to configure targets.

For more information and procedures, see the Granite Core Management Console User’s Guide or the Riverbed Command-Line Interface Reference Manual.

Configuring LUNs

You configure Block Disk (Fibre Channel), Edge Local, and iSCSI LUNs in the LUNs page.

Typically, Block Disk and iSCSI LUNs are used to store production data. They share the space in the block store cache of the associated Granite Edges, and the data is continuously replicated and kept synchronized with the associated LUN in the data center. The Granite Edge block store caches only the working set of data blocks for these LUNs; additional data is retrieved from the data center when needed.

Block-disk LUN configuration pertains to Fibre Channel support. Fibre Channel is supported only in Granite Core-VE deployments. For more information, see “Configuring Fibre Channel LUNs” on page 35.

Edge Local LUNs are used to store transient and temporary data. Local LUNs also use dedicated space in the block store cache of the associated Granite Edges, but the data is not replicated back to the data center LUNs.

Configuring Granite Edges for Specific LUNs

After you configure the LUNs and Granite Edges for the Granite Core appliance, you can map the LUNs to the Granite Edge appliances.

You complete this mapping through the Granite Edge configuration in the Granite Core Management Console on the Configure > Storage > Granite Edges page.


When you select a specific Granite Edge, the following controls for additional configuration are displayed.

Control Description

Status This panel displays the following information about the selected Granite Edge:

• IP Address - The IP address of the selected Granite Edge.

• Connection Status - Connection status to the selected Granite Edge.

• Connection Duration - Duration of the current connection.

• Total LUN Capacity - The total storage capacity of the LUN dedicated to the selected Granite Edge.

• Blockstore Encryption - Type of encryption selected, if any.

The panel also displays a small-scale version of the Granite Edge Data I/O report.

Target Settings This panel displays the following controls for configuring the target settings:

• Target Name - Displays the system name of the selected Granite Edge.

• Require Secured Initiator Authentication - Requires CHAP authorization when the selected Granite Edge is connecting to initiators.

If the Require Secured Initiator Authentication setting is selected, you must set authentication to CHAP in the adjacent Initiator tab.

• Enable Header Digest - Includes the header digest data from the iSCSI protocol data unit (PDU).

• Enable Data Digest - Includes the data digest data from the iSCSI PDU.

• Update Target - Applies any changes you make to the settings in this panel.

Initiators This panel displays controls for adding and managing initiator configurations:

• Initiator Name - Specify the name of the initiator you are configuring.

• Add to Initiator Group - Select an initiator group from the drop-down list.

• Authentication - Select the authentication method from the drop-down list:

• None - No authentication required.

• CHAP - Only the target authenticates the initiator. The secret is set just for the target; all initiators that want to access that target must use the same secret to begin a session with the target.

• Mutual CHAP - The target and the initiator authenticate each other. A separate secret is set for each target and for each initiator in the storage array.

If Require Secured Initiator Authentication is selected for the Granite Edge in the Target Settings tab, authentication must be configured for a CHAP option.

• Add Initiator - Adds the new initiator to the running configuration.

Initiator Groups This panel displays controls for adding and managing initiator group configurations:

• Group Name - Specifies a name for the group.

• Add Group - Adds the new group. The group name displays in the Initiator Group list.

After this initial configuration, click the new group name in the list to display additional controls:

• Click Add or Remove to control the initiators included in the group.

LUNs This panel displays controls for mapping available LUNs to the selected Granite Edge.

After mapping, the LUN displays in the list in this panel. To manage group and initiator access, click the name of the LUN to access additional controls.


Prepopulation This panel displays controls for configuring prepopulation tasks:

Note: This prepopulation schedule is applied to all virtual LUNs mapped to this appliance if you do not configure any LUN-specific schedules.

• Schedule Name - Specify a task name.

• Start Time - Select the start day and time from the respective drop-down lists.

• Stop Time - Select the stop day and time from the respective drop-down list.

• Add Prepopulation Schedule - Adds the task to the Task list.

Note: To delete an existing task, click the trash icon in the Task list.

The LUN must be pinned to enable prepopulation. For more information, see “LUN Pinning and Prepopulation in the Granite Core Appliance” on page 16.
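A prepopulation task of this kind runs only inside the weekly window between its start day/time and stop day/time. The window test can be sketched as follows (day 0 = Monday; the helper names are invented for illustration):

```python
def week_minutes(day, hour, minute):
    # Minutes since the start of the week (day 0 = Monday, 00:00).
    return day * 1440 + hour * 60 + minute

def in_prepop_window(now, start, stop):
    """Each argument is a (day, hour, minute) tuple. Returns True if 'now'
    falls inside the weekly window from 'start' (inclusive) to 'stop'."""
    n = week_minutes(*now)
    s = week_minutes(*start)
    e = week_minutes(*stop)
    if s <= e:
        return s <= n < e
    return n >= s or n < e  # window wraps around the end of the week
```

For example, a window from Saturday 22:00 to Sunday 06:00 admits Saturday 23:00 but not Monday noon; a Sunday-to-Monday window must account for the wrap past the end of the week.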

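The CHAP options listed in the Initiators panel above follow RFC 1994: the initiator proves knowledge of a shared secret by hashing it with a one-time challenge from the target, and Mutual CHAP repeats the exchange in the reverse direction with a separate secret. A minimal sketch of the response computation:

```python
import hashlib
import os

def chap_response(identifier: int, secret: bytes, challenge: bytes) -> bytes:
    # RFC 1994: response = MD5(identifier || secret || challenge)
    return hashlib.md5(bytes([identifier]) + secret + challenge).digest()

# One-way CHAP: the target challenges the initiator.
challenge = os.urandom(16)
good = chap_response(1, b"initiator-secret", challenge)

# The target recomputes the hash with its copy of the secret and compares.
assert good == chap_response(1, b"initiator-secret", challenge)
assert good != chap_response(1, b"wrong-secret", challenge)
```

With Mutual CHAP, the initiator would additionally issue its own challenge to the target, verified against a second, distinct secret, as the panel description notes.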


CHAPTER 3 Deploying the Granite Core Appliance

This chapter describes the deployment processes specific to Granite Core. It includes the following:

“Prerequisites” on page 25

“Interface and Port Configuration” on page 26

“Configuring HA in Granite Core” on page 31

“Configuring the iSCSI Initiator” on page 34

“Configuring LUNs” on page 34

“Configuring Redundant Connectivity with MPIO” on page 37

Prerequisites

Complete the following before you can configure the Granite Core:

Configure the iSCSI initiators in the storage array using the iSCSI Qualified Name (IQN) format.

Fibre Channel connections to the Granite Core-VE are also supported. For more information, see “Configuring Fibre Channel LUNs” on page 35.

Enable and provision LUNs on the storage array.

For details, see “Provisioning LUNs on the Storage Array” on page 15.

Determine the Granite Edge identifiers so that you can establish connections between the Granite Core and the corresponding Granite Edge appliances.

For details, see “Installing and Configuring Granite Edge” on page 17.

Install and connect the Granite Core in the data center network.

This includes both appliances if you are using an HA deployment scenario. For more information, see the Granite Core Installation and Configuration Guide and the Granite Core Getting Started Guide.


Interface and Port Configuration

This section describes a typical port configuration. You might require additional routing configuration depending on your deployment scenario.

This section includes the following topics:

“Granite Core Ports” on page 26

“Configuring Interface Routing” on page 27

“Configuring Granite Core for Jumbo Frames” on page 31

Granite Core Ports

The following table summarizes the ports that connect Granite Core to your network.

Figure 3-1. Granite Core Ports

Port Description

Console Connects the serial cable to a terminal device. You establish a serial connection to a terminal emulation program for console access to the Setup Wizard and the Granite Core CLI.

Primary (PRI)

Connects Granite Core to a VLAN switch through which you can connect to the Management Console and the Granite Core CLI. You typically use this port for communication with Granite Edges.

Auxiliary (AUX) Connects Granite Core to the management VLAN.

You can connect a computer directly to the appliance with a crossover cable, enabling you to access the CLI or Management Console.

ETH0_0 through ETH0_3

Connect the ETH0_0, ETH0_1, ETH0_2, and ETH0_3 ports of Granite Core to the LAN switch using straight-through cables.

Important: If you deploy Granite Core between two switches, all ports must be connected with straight-through cables.


Configuring Interface Routing

You configure interface routing in the Configure > Networking > Management Interfaces page of the Granite Core Management Console.

Note: If all the interfaces have IP addresses on separate subnets, you do not need additional routes.

This section describes the following scenarios:

“All Interfaces Have Separate Subnet IP Addresses” on page 27

“All Interfaces Are on the Same Subnets” on page 27

“Some Interfaces, Except Primary, Share the Same Subnets” on page 29

“Some Interfaces, Including Primary, Share the Same Subnets” on page 30

All Interfaces Have Separate Subnet IP Addresses

In this scenario, you do not need additional routes.

The following table shows a sample configuration in which each interface has an IP address on a separate subnet.

Interface  Sample Configuration  Description
Aux        192.168.10.1/24       Management (and default) interface
Primary    192.168.20.1/24       Interface to WAN traffic
eth0_0     10.12.5.12/16         Interface for storage array traffic
eth0_1     192.168.30.1/24       HA heartbeat interface, number 1
eth0_2     192.168.40.1/24       HA heartbeat interface, number 2
eth0_3

All Interfaces Are on the Same Subnets

If all interfaces are in the same subnet, only the primary interface has a route added by default. You must configure routing for the additional interfaces.

The following table shows a sample configuration.

Interface  Sample Configuration  Description
Aux        192.168.10.1/24       Management (and default) interface
Primary    192.168.10.2/24       Interface to WAN traffic
eth0_0     192.168.10.3/24       Interface for storage array traffic
eth0_1     192.168.10.4/24       HA heartbeat interface, number 1
eth0_2     192.168.10.5/24       HA heartbeat interface, number 2


To configure additional routes

1. In the Granite Core Management Console, choose Configure > Networking > Management Interfaces.

Figure 3-2. Routing Table on the Management Interfaces Page

2. Under Main IPv4 Routing Table, use the following controls to configure routing as necessary.

3. Repeat for each interface that requires routing.

4. Click Save to save your changes permanently.

You can also perform this configuration using the ip route CLI command. For details, see the Riverbed Command-Line Interface Reference Manual.

Control Description

Add a New Route Displays the controls for adding a new route.

Destination IPv4 Address Specify the destination IP address for the out-of-path appliance or network management device.

IPv4 Subnet Mask Specify the subnet mask. For example, 255.255.255.0.

Gateway IPv4 Address Optionally, specify the IP address for the gateway.

Interface From the drop-down list, select the interface.

Add Adds the route to the table list.
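The ip route CLI command mentioned above can be sketched as follows; the addresses here are hypothetical, and the exact argument order is documented in the Riverbed Command-Line Interface Reference Manual:

```
amnesiac (config) # ip route 10.12.0.0 255.255.0.0 192.168.10.254
amnesiac (config) # show ip route
```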


Some Interfaces, Except Primary, Share the Same Subnets

If a subset of interfaces, excluding primary, is in the same subnet, you must configure additional routes for those interfaces.

The following table shows a sample configuration.

To configure additional routes

1. In the Granite Core Management Console, choose Configure > Networking > Management Interfaces.

2. Under Main IPv4 Routing Table, use the following controls to configure routing as necessary.

3. Repeat for each interface that requires routing.

4. Click Save to save your changes permanently.

You can also perform this configuration using the ip route CLI command. For details, see the Riverbed Command-Line Interface Reference Manual.

Interface Sample Configuration Description

Aux 10.10.10.1/24 Management (and default) interface

Primary 10.10.20.2/24 Interface to WAN traffic

eth0_0 192.168.10.3/24 Interface for storage array traffic

eth0_1 192.168.10.4/24 HA heartbeat interface, number 1

eth0_2 192.168.10.5/24 HA heartbeat interface, number 2

Control Description

Add a New Route Displays the controls for adding a new route.

Destination IPv4 Address Specify the destination IP address for the out-of-path appliance or network management device.

IPv4 Subnet Mask Specify the subnet mask. For example, 255.255.255.0.

Gateway IPv4 Address Optionally, specify the IP address for the gateway.

Interface From the drop-down list, select the interface.

Add Adds the route to the table list.


Some Interfaces, Including Primary, Share the Same Subnets

If some but not all interfaces, including primary, are in the same subnet, you must configure additional routes for those interfaces.

The following table shows a sample configuration.

To configure additional routes

1. In the Granite Core Management Console, choose Configure > Networking > Management Interfaces.

2. Under Main IPv4 Routing Table, use the following controls to configure routing as necessary.

3. Repeat for each interface that requires routing.

4. Click Save to save your changes permanently.

You can also perform this configuration using the ip route CLI command. For details, see the Riverbed Command-Line Interface Reference Manual.

Interface Sample Configuration Description

Aux 10.10.10.1/24 Management (and default) interface

Primary 10.10.20.2/24 Interface to WAN traffic

eth0_0 192.168.10.3/24 Interface for storage array traffic

eth0_1 192.168.10.4/24 HA heartbeat interface, number 1

eth0_2 192.168.10.5/24 HA heartbeat interface, number 2

Control Description

Add a New Route Displays the controls for adding a new route.

Destination IPv4 Address Specify the destination IP address for the out-of-path appliance or network management device.

IPv4 Subnet Mask Specify the subnet mask. For example, 255.255.255.0.

Gateway IPv4 Address Optionally, specify the IP address for the gateway.

Interface From the drop-down list, select the interface.

Add Adds the route to the table list.


Configuring Granite Core for Jumbo Frames

If your network infrastructure supports jumbo frames, Riverbed recommends that you configure the connection between the Granite Core and the storage system as described in this section.

In addition to configuring Granite Core for jumbo frames, you must configure the storage system and any switches, routers, or other network devices between Granite Core and the storage system.

To configure Granite Core for jumbo frames

1. From the Granite Core Management Console, choose Configure > Networking > Management Interfaces to open the Management Interfaces page.

2. Under Primary Interface:

– Select the Enable Primary Interface check box.

– Select the Specify IPv4 Address Manually option and enter the correct value for your implementation.

– For the MTU setting, specify 9000 bytes.

3. Click Apply to apply the settings to the current configuration.

4. Click Save to save your changes permanently.
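A CLI sketch of the same procedure, with a placeholder address; the interface command forms shown here are assumptions, and the authoritative syntax is in the Riverbed Command-Line Interface Reference Manual:

```
amnesiac (config) # interface primary ip address 192.168.20.1 /24
amnesiac (config) # interface primary mtu 9000
amnesiac (config) # show interfaces primary
```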

Configuring HA in Granite Core

This section describes how to configure HA between two Granite Cores using the Management Console of either appliance.

When in a failover relationship, both appliances operate independently and actively; neither peer waits in standby mode. When either appliance fails, the failover peer manages the traffic of both appliances.

Note: Granite Core HA configuration is independent of Granite Edge HA configuration.

This section describes the following:

“Cabling for Clustered Granite Cores” on page 32

“Accessing Failover Peers from Either Granite Core” on page 32

“Configuring Failover Peers” on page 33

“Removing Granite Core Appliances from an HA Configuration” on page 33


Cabling for Clustered Granite Cores

Figure 3-3 shows the recommended network cabling for two Granite Core appliances configured for HA. The red lines represent crossover cables; blue lines represent straight-through cables.

Figure 3-3. Recommended Cabling for Granite Core HA

Accessing Failover Peers from Either Granite Core

If you configure the current Granite Core for failover with another appliance, all storage configuration and storage report pages include an additional feature that enables you to access and modify settings for both the current appliance and the failover peer.

This feature appears below the page title and includes the text “HA is enabled. You are currently viewing configuration and reports for....” You can then select either Self (the current appliance) or Peer from the drop-down list.

For details on setting up failover, see “Configuring Failover Peers” on page 33.

Figure 3-4 shows a sample Storage configuration page with and without the feature enabled.

Figure 3-4. Failover-Enabled Feature on Sample Storage Pages


Configuring Failover Peers

For failover, Riverbed recommends that you connect both failover peers directly with cables through two interfaces. If direct connection is not an option, Riverbed recommends that you configure each failover connection to use a different local interface and to reach its peer IP address through a completely separate route.

To configure failover peers for Granite Core, you need to provide the following information:

The IP address of the peer appliance.

The local interface over which the peers monitor the heartbeat.

A secondary IP address of the peer appliance.

A secondary local interface over which the peers monitor the heartbeat.

In the Granite Core Management Console, you can access controls for configuring HA on the Configure > Failover Configuration page. For more information, see the Granite Core Management Console User’s Guide.

In the Granite Core CLI, you can configure failover using the device-failover peer commands. For more information, see the Riverbed Command-Line Interface Reference Manual.
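As a purely illustrative sketch of supplying both heartbeat connections (the subcommand and parameter names here are assumptions, not documented syntax; see the Riverbed Command-Line Interface Reference Manual):

```
# Primary heartbeat connection (addresses and interfaces are hypothetical)
amnesiac (config) # device-failover peer ip 192.168.30.2 local-if eth0_1
# Secondary heartbeat connection
amnesiac (config) # device-failover peer ip2 192.168.40.2 local-if2 eth0_2
```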

Removing Granite Core Appliances from an HA Configuration

This section describes the process of removing two Granite Core appliances from an HA relationship using CLI commands.

The two Granite Core appliances are GC01 and GC02.

To remove Granite Core appliances from HA

1. Stop the Granite Core service on the GC02 appliance.

2. On GC01, run the device-failover peer clear command to clear the local failover configuration.

3. Restart the Granite Core service on GC02 and run device-failover peer clear to clear the local failover configuration.

4. On GC02, run device-failover self-config activate to return the appliance to non-failover mode.
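Run end to end, the sequence looks like the following. The prompts identify which appliance each command runs on; the service stop/start commands are assumptions, while the device-failover commands are the ones named in the steps above:

```
GC02 (config) # no service enable                      # assumption: stops the Granite Core service on GC02
GC01 (config) # device-failover peer clear             # clear the local failover configuration on GC01
GC02 (config) # service enable                         # restart the service on GC02
GC02 (config) # device-failover peer clear             # clear the local failover configuration on GC02
GC02 (config) # device-failover self-config activate   # return GC02 to non-failover mode
```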


Configuring the iSCSI Initiator

The iSCSI initiator settings dictate how Granite Core communicates with one or more storage arrays through the specified portal configuration.

iSCSI configuration includes:

Initiator name

Enabling header or data digests (optional)

Enabling CHAP authorization (optional)

Enabling MPIO and standard routing for MPIO (optional)

MPIO functionality is described separately in this document. For more information, see “Configuring Redundant Connectivity with MPIO” on page 37.

In the Granite Core Management Console, you can view and configure the iSCSI initiator, local interfaces for MPIO, portals, and targets in the Configure > Storage > iSCSI Configuration page. For more information, see the Granite Core Management Console User’s Guide.

In the Granite Core CLI, use the following commands to access and manage iSCSI initiator settings:

storage lun modify auth-initiator to add or remove an authorized iSCSI initiator to or from the LUN

storage iscsi data-digest to include or exclude the data digest in the iSCSI protocol data unit (PDU)

storage iscsi header-digest to include or exclude the header digest in the iSCSI PDU

storage iscsi initiator to access numerous iSCSI configuration settings
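For example, enabling both digests might look like the following, assuming enable subcommands for the commands named above (the exact subcommand forms are in the Riverbed Command-Line Interface Reference Manual):

```
amnesiac (config) # storage iscsi data-digest enable
amnesiac (config) # storage iscsi header-digest enable
```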

Configuring LUNs

This section includes the following topics:

“Exposing LUNs” on page 35

“Configuring Fibre Channel LUNs” on page 35

“Removing a LUN from a Granite Core Configuration” on page 36

Before you can configure LUNs in Granite Core, you must provision the LUNs on the storage array and configure the iSCSI initiator. For more information, see “Provisioning LUNs on the Storage Array” on page 15 and “Configuring the iSCSI Initiator” on page 34.


Exposing LUNs

You expose LUNs by scanning for LUNs on the storage array, then mapping them to Granite Edge appliances. After exposing LUNs, you can further configure them for failover, MPIO, snapshots, and pinning and prepopulation.

In the Granite Core Management Console, you can expose and configure LUNs in the Configure > Storage > LUNs page. For more information, see the Granite Core Management Console User’s Guide.

In the Granite Core CLI, you can expose and configure LUNs with the following commands:

storage iscsi portal host rescan-luns to discover available LUNs on the storage array

storage lun add to add a specific LUN

storage lun modify to modify an existing LUN configuration

For more information, see the Riverbed Command-Line Interface Reference Manual.
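A sketch of the discovery-and-add sequence using the commands above, with a hypothetical portal address; the trailing arguments to storage lun add depend on your array and are omitted here (see the Riverbed Command-Line Interface Reference Manual):

```
# Rescan the storage array portal for newly provisioned LUNs
amnesiac (config) # storage iscsi portal host 10.12.5.50 rescan-luns
# Add a discovered LUN to the Granite Core configuration
amnesiac (config) # storage lun add ...
```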

Configuring Fibre Channel LUNs

The process of configuring Fibre Channel LUNs for Granite Core requires configuration in both the ESXi server and the Granite Core. This section summarizes the general process. For details on LUN configuration, see the Fibre Channel on Granite Core Virtual Edition Solution Guide.

Granite Core-VE appliances use raw device mapping (RDM) in ESXi to mount Fibre Channel LUNs and export them to the Granite Edge. An RDM is a special file in a Virtual Machine File System (VMFS) volume that, in this case, acts as a proxy for a Fibre Channel LUN. With the RDM, you can directly allocate an entire Fibre Channel LUN to a VM.

This section describes the following:

“Configuring Fibre Channel LUN Configuration in ESXi” on page 35

“Configuring HA for the LUN” on page 36

“Block Disk LUN Considerations” on page 36

Configuring Fibre Channel LUN Configuration in ESXi

This section describes the general process for configuring Fibre Channel LUNs.

To configure Fibre Channel LUNs

1. Expose the Fibre Channel LUNs to the ESXi server where the Granite Core-VE is deployed.

2. Assign the discovered Fibre Channel LUNs as raw device mappings (RDM).

3. Assign the LUN RDMs to Granite Core.

After you complete these steps, return to the Granite Core Management Console to discover the LUN and configure it as a Fibre Channel LUN. For details, see the Granite Core Management Console User’s Guide.


Configuring HA for the LUN

This section describes the general process for configuring HA for Fibre Channel LUNs.

To configure HA for the LUN

1. Return to the vSphere Client to configure HA for the LUN.

2. Via the storage array, modify the storage group to which the LUN belongs by exposing it to both ESXi servers running the Granite Core failover peers.

3. Ensure that the LUN is mapped as a raw device mapping (RDM) to both Granite Core-VE appliances.

4. Ensure that the SCSI bus number of the RDM LUN is the same on both ESXi servers.

For details on all configuration steps for Fibre Channel LUNs, see the Fibre Channel on Granite Core Virtual Edition Solution Guide and the Granite Core Management Console User’s Guide.

Block Disk LUN Considerations

Fibre Channel LUNs are distinct from iSCSI LUNs in several important ways:

No MPIO configuration - There is no MPIO configuration in the Granite Core Management Console for Fibre Channel LUNs. Multipathing is instead performed by the ESXi system.

SCSI reservations - SCSI reservations are not taken on Fibre Channel LUNs.

Additional HA configuration required - Configuring HA for Granite Core-VE failover peers requires that each appliance be deployed on a separate ESXi system. For details, see “Configuring Fibre Channel LUN Configuration in ESXi” on page 35.

Maximum of sixty Fibre Channel LUNs per ESXi system - ESXi allows a maximum of four SCSI controllers, each of which can support up to fifteen SCSI devices.

Removing a LUN from a Granite Core Configuration

This section describes the process to remove a LUN from a Granite Core configuration. This requires actions on both the Granite Core and the Windows server running at the branch.

To remove a LUN

1. At the branch where the LUN is exposed:

Power down the local Windows server.

If the Windows server runs on ESXi, you must also unmount the LUN from ESXi.

2. At the data center, take the LUN offline in the Granite Core configuration.

When you take a LUN offline, outstanding data is flushed to the storage array LUN and the block store cache is cleared.


To take a LUN offline:

CLI - Use the storage lun modify offline command. For details, see the Riverbed Command-Line Interface Reference Manual.

Management Console - Choose Configure > Storage > LUNs to open the LUNs page, select the LUN configuration in the list, and select the Details tab. For details, see the Granite Core Management Console User’s Guide.

3. Remove the LUN configuration using one of the following methods:

CLI - Use the storage lun remove command. For details, see the Riverbed Command-Line Interface Reference Manual.

Management Console - Choose Configure > Storage > LUNs to open the LUNs page, locate the LUN configuration in the list, and click the trash icon. For details, see the Granite Core Management Console User’s Guide.
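A CLI sketch of steps 2 and 3, assuming the LUN is identified by a hypothetical alias (the identifier syntax is an assumption; see the Riverbed Command-Line Interface Reference Manual):

```
amnesiac (config) # storage lun modify alias branch-lun1 offline   # take the LUN offline; outstanding data is flushed
amnesiac (config) # storage lun remove alias branch-lun1           # remove the LUN configuration
```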

Configuring Redundant Connectivity with MPIO

The MPIO feature enables you to configure multiple physical I/O paths (interfaces) for redundant connectivity with the local network, storage system, and iSCSI initiator.

Both Granite Core and Granite Edge offer MPIO functionality. However, these features are independent of each other and do not affect each other.

MPIO in Granite Core

The MPIO feature enables you to connect Granite Core to the network and to the storage system through multiple physical I/O paths. Redundant connections help prevent loss of connectivity in the event of an interface, switch, cable, or other physical failure.

You can configure MPIO at the following separate and independent points:

iSCSI Initiator - This configuration allows you to enable and configure multiple I/O paths between Granite Core and the storage system. Optionally, you can enable standard routing if the iSCSI portal is not in the same subnet as the MPIO interfaces.

iSCSI Target - This configuration allows you to configure multiple portals on the Granite Edge. Using these portals, an initiator can establish multiple I/O paths to the Granite Edge.


Configuring Granite Core MPIO Interfaces

You can configure MPIO interfaces through the Granite Core Management Console or the Granite Core CLI.

In the Granite Core Management Console, choose Configure > Storage > iSCSI Configuration page. Configure MPIO using the following controls:

Enable MPIO.

Enable standard routing for MPIO. This option is required if the backend iSCSI portal is not in the same subnet as at least two of the MPIO interfaces.

Add (or remove) local interfaces for the MPIO connections.

For details on configuring MPIO interfaces in the Granite Core Management Console, see the Granite Core Management Console User’s Guide.

In the Granite Core CLI, open the configuration terminal mode and run the following commands:

storage iscsi session mpio enable to enable the MPIO feature.

storage iscsi session mpio standard-routes enable to enable standard routing for MPIO. This option is required if the backend iSCSI portal is not in the same subnet as at least two of the MPIO interfaces.

storage lun modify mpio path to specify a path.

These commands require additional parameters to identify the LUN. For details on configuring MPIO interfaces in the Granite Core CLI, see the Riverbed Command-Line Interface Reference Manual.
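Put together, an MPIO configuration session might look like this; the first two commands are named above, while the LUN identifier and path arguments in the third are placeholders:

```
amnesiac (config) # storage iscsi session mpio enable
amnesiac (config) # storage iscsi session mpio standard-routes enable
amnesiac (config) # storage lun modify alias dc-lun1 mpio path ...   # 'alias dc-lun1' and the path arguments are hypothetical
```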


CHAPTER 4 Configuring the Granite Edge

This chapter describes the process of configuring Granite Edge at the branch office. It includes the following:

“Granite Edge Storage Specifications” on page 39

“Configuring Disk Management” on page 40

“Configuring Granite Storage” on page 40

“Configuring HA for Granite Edge” on page 40

“MPIO in Granite Edge” on page 42

Granite Edge Storage Specifications

The Granite Edge branch storage features are available only on the Steelhead EX appliance, xx60 model series. You can configure how the free disk space usage on the Steelhead EX appliance is divided between the block store and VSP.

The following table summarizes the possible disk space allocations between VSP and Granite Edge storage on xx60 models. Values are in GB unless noted otherwise. Numbers in parentheses represent the maximum amount of space available for the component when the appliance’s disk layout is set to an extended mode.

For details on installing and configuring xx60 model series appliances, see the Steelhead EX Installation and Configuration Guide.

Series       Granite-only Allocation (Extended)  VSP-only Allocation (Extended)  Mixed Allocation (Extended)  Granite IOPS
EX560/EX760  380 (380)                           380 (600)                       190 (300)                    85
EX1160       760 (760)                           415 (830)                       275 (415)                    170
EX1260-2     1580 (1580)                         860 (1700)                      575 (860)                    350
EX1260-4     3.6 TB (3.6 TB)                     1.86 TB (3.7 TB)                1230 (1860)                  700
EX1360       10 TB                               10 TB                           4 TB (5 TB)                  1700


Configuring Disk Management

You can configure the disk layout mode to allow space for the Granite Edge block store in the Configure > System Settings > Disk Management page of the Steelhead EX Management Console. Free disk space is divided between the VSP and the Granite Edge block store.

This page does not allow you to allot disk space; you can only select the desired mode.

Note: You cannot change the disk layout mode unless all VSP slots are currently uninstalled. For details, see the Steelhead Appliance Management Console User’s Guide.

Configuring Granite Storage

On the Granite Edge, you complete the connection to the Granite Core in the Configure > Granite > Granite Storage page by specifying the Granite Core IP address and defining the Granite Edge Identifier (among other settings).

You need the following information to configure Granite Edge storage:

Hostname/IP address of the Granite Core.

Granite Edge Identifier, the value of which is used in the Granite Core-side configuration for mapping LUNs to specific Granite Edge appliances. The identifier is case-sensitive.

If you configure failover, both appliances must use the same self identifier. In this case, you can use a value that represents the group of appliances.

Port number of Granite Core. The default port is 7970.

The interface for the current Granite Edge to use when connecting with Granite Core.

For more information about this procedure, see the Steelhead Appliance Management Console User’s Guide.

Configuring HA for Granite Edge

This section describes how to configure HA for Granite Edge. Granite Edge HA enables you to configure two Granite Edge appliances so that either one can fail without disrupting the service of any of the LUNs being provided by the Granite Core.

Granite Edge HA configuration is independent of Granite Core HA configuration.

This section includes the following topics:

“HA for Granite Edge” on page 41

“Viewing HA Status” on page 41

“MPIO Recommendations” on page 41


HA for Granite Edge

To enable HA at the branch office, you configure a pair of Steelhead EX appliances: one as an active peer and the other as a standby peer. The active Steelhead EX in the pair connects to the Core and serves the storage data. The active peer contains the authoritative copy of the block store and configuration data. The standby Steelhead EX is passive and does not service client requests but is ready to take over from the active peer immediately.

As the branch server writes new data to the active peer, the active peer reflects the data to the standby peer, which stores a copy of the data in its local block store. The two appliances maintain a heartbeat protocol between them, so that if the active peer goes down, the standby peer can take over servicing the LUNs. If the standby peer fails, the active peer continues servicing the LUNs after raising an HA alarm indicating that the system is now in a degraded state.

After a failed peer resumes, it resynchronizes with the other peer in the HA pair to receive any data that was written since the time of the failure. When the peer catches up by receiving all the written data, the system resumes Granite Edge HA, reflects future writes to both peers, and clears the alarm.

For procedures to configure HA for Granite Edge, see the Steelhead Appliance Management Console User’s Guide.

Viewing HA Status

The Configure > Granite > Granite Storage page contains information about the Granite Edge configuration, including the status and details of the HA configuration.

The Granite Storage page displays the status for active peers serving LUNs and standby peers accepting updates from the active peer. Each status is color coded: green indicates a working state such as synchronized and current, red indicates a degraded or critical state such as a peer down, and orange indicates an intermediate or transitory state such as rebuilding the block store.

The status levels are as follows:

Active Sync - The Granite Edge is serving client requests; the standby peer synchronizes with the current state of the active peer.

Active Degraded - The Granite Edge is serving client requests, but the peer appliance is down.

Active Rebuild - The Granite Edge is updating the standby peer with updates that were missed during an outage.

Standby Rebuild - The Granite Edge is passively accepting updates from the active peer, but its block store is not yet current with the state of the active peer.

Standby Sync - The Granite Edge is passively accepting updates from the active peer and is synchronized with the current state of the system.

MPIO Recommendations

When you configure HA for Granite Edge, Riverbed recommends that you configure Granite Edge MPIO. This ensures that a failure of any single component (such as a network interface card, switch, or cable) does not result in a communication problem between the HA pair.

For details on MPIO, see the Steelhead Appliance Management Console User’s Guide for the Steelhead EX appliance.


MPIO in Granite Edge

In Granite Edge, you enable multiple local interfaces through which the iSCSI initiator can connect to the Granite Core. Redundant connections help prevent loss of connectivity in the event of an interface, switch, cable, or other physical failure.

In the Steelhead EX Management Console, navigate to the Configure > Granite > Granite Storage page to access controls to add or remove MPIO interfaces. Once specified, the interfaces are available for the iSCSI initiator to use when connecting with the Granite Edge.

For details, see the Steelhead Appliance Management Console User’s Guide for the Steelhead EX appliance.


CHAPTER 5 Snapshots and Data Protection

This chapter describes how Granite Core integrates with the snapshot capabilities of the storage array, enabling you to configure application-consistent snapshots through the Granite Core Management Console. It includes the following sections:

“Qualified Storage Vendors” on page 43

“Setting Up Application-Consistent Snapshots” on page 44

“Configuring Snapshots for LUNs” on page 45

“Volume Snapshot Service (VSS) Support” on page 45

“Implementing Riverbed Host Tools for Snapshot Support” on page 46

“Configuring the Proxy Host for Backup” on page 47

“Configuring the Storage Array for Proxy Backup” on page 48

“Data Protection” on page 49

“Data Recovery” on page 50

Qualified Storage Vendors

Granite Core is qualified for storage snapshot interoperability with the following storage vendors:

Vendor           Series/Model      Version/Firmware
EMC              CLARiiON CX4-120  v04.28.00.5.704, v04.30.00.5.523
EMC              VNX 5300          v5.32.000.5.008
EMC              VNX 7500          v5.32.000.6.004
NetApp           FAS2020           v7.2.4L1
NetApp           FAS270            v7.3.3, v7.2.4
Dell EqualLogic  PS4000            v70-0120 v4.3.0 (R106033)


Setting Up Application-Consistent Snapshots

This section describes the general process for setting up snapshots.

Granite Core integrates with the snapshot capabilities of the storage array. You can configure snapshot settings, schedules, and hosts directly through the Granite Core Management Console or CLI.

To set up snapshots

1. Define the storage array details for the snapshot configuration.

Before you can configure snapshot schedules, application-consistent snapshots, or proxy backup servers for specific LUNs, you must specify for Granite Core the details of the storage array, such as IP address, type, protocol, and so on. The storage driver does not remap any blocks; the remapping takes place within the array.

To access storage array configuration settings:

In the Granite Core Management Console, choose Configure > Storage Snapshots to open the Storage Snapshots page. For details, see the Granite Core Management Console User’s Guide.

In the Granite Core CLI, use the storage snapshot commands. For details, see the Riverbed Command-Line Interface Reference Manual.

2. Define snapshot schedule policies.

You define snapshot schedules as policies that you can apply later to snapshot configurations for specific LUNs. Once applied, snapshots are automatically taken based on the parameters set by the snapshot schedule policy.

Snapshot schedule policies can specify weekly, daily, or day-specific schedules. Additionally, you can specify the total number of snapshots to retain.

To access snapshot schedule policy configuration settings:

In the Granite Core Management Console, choose Configure > Storage Snapshots to open the Storage Snapshots page. For details, see the Granite Core Management Console User’s Guide.

In the Granite Core CLI, use the storage snapshot policy modify commands. For details, see the Riverbed Command-Line Interface Reference Manual.

3. Define snapshot host credentials.

You define snapshot host settings as storage host credentials that you can apply later to snapshot configurations for specific LUNs.

To access snapshot host credential configuration settings:

In the Granite Core Management Console, choose Configure > Storage Snapshots to open the Storage Snapshots page. For details, see the Granite Core Management Console User’s Guide.

In the Granite Core CLI, use the storage host-info commands. For details, see the Riverbed Command-Line Interface Reference Manual.


Configuring Snapshots for LUNs

This section describes the general process for applying snapshot configurations to specific LUNs through Granite Core. For information about configuring LUNs, see “Configuring LUNs” on page 22.

To apply snapshot configurations to a LUN

1. Select the LUN for the snapshot and access the snapshot settings.

You can access the snapshot settings for a specific LUN in the Configure > Storage > LUNs page. Select the desired LUN to display controls that include the Snapshots tab. The Snapshots tab itself has three tabs: Configuration, Scheduler, and History.

2. Apply a snapshot schedule policy to the current LUN.

The controls in the Scheduler tab enable you to apply a previously configured policy to the current LUN. You can also create a new schedule directly in this panel.

3. Specify the storage array where the LUN resides.

The controls in the Configuration tab enable you to specify the storage array where the current LUN resides and to apply a static name that is prepended to the names of snapshots.

4. Specify the LUN protocol type.

The controls in the Configuration tab enable you to specify the LUN protocol type. To configure application-consistent snapshots and a proxy backup, you must set this value to Windows or VMware.

5. Enable and configure application-consistent snapshots.

The controls in the Configuration tab enable you to enable and configure application-consistent snapshots. The settings vary depending on which LUN protocol type is selected.

Volume Snapshot Service (VSS) Support

Riverbed supports VSS through the Riverbed Hardware Snapshot Provider (RHSP) and Riverbed Snapshot Agent. For details, see “Implementing Riverbed Host Tools for Snapshot Support” on page 46.


Implementing Riverbed Host Tools for Snapshot Support

Riverbed Host Tools are installed and implemented separately on the branch office Windows Server. The toolkit provides the following:

RHSP - Functions as a snapshot provider for the Volume Shadow Copy Service (VSS) by exposing Granite Core snapshot capabilities to the Windows Server.

RHSP ensures that users get an application-consistent snapshot.

Riverbed Snapshot Agent - A service that enables the Granite Edge appliance to drive snapshots on a schedule. This schedule is set through the Granite Core snapshot configuration. For details, see the Granite Core Management Console User’s Guide.

Riverbed Host Tools support Windows Server 2008 64-bit and later.

This section includes the following topics:

“RHSP and VSS: An Overview” on page 46

“Riverbed Host Tools Operation and Configuration” on page 47

RHSP and VSS: An Overview

RHSP exposes the snapshot capabilities of Granite Edge to the Windows Server through iSCSI, acting as a snapshot provider. Use RHSP only when an iSCSI LUN is mounted on Windows through the Windows iSCSI initiator.

The process begins when you (or a backup script) requests a snapshot through the VSS on the Windows Server:

VSS directs all backup-aware applications to flush their I/O operations and to freeze.

VSS directs RHSP to take a snapshot.

RHSP forwards the command to the Granite Edge.

Granite Edge exposes a snapshot to the Windows Server.

Granite Edge and Granite Core commit all pending write operations to the storage array.

The Granite Edge takes the snapshot against the storage array.

Note: The default port through which the Windows Server communicates with Granite Edge is 4000.
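As an illustration, a backup script can issue this VSS request through the Windows DiskShadow utility. The following is a minimal sketch; the script name and the drive letter E: (assumed to be the iSCSI data drive backed by the Granite LUN) are hypothetical:

```
REM snap.dsh -- run with: diskshadow /s snap.dsh
REM Keep the shadow copy after DiskShadow exits
SET CONTEXT PERSISTENT
SET VERBOSE ON
REM E: is assumed to be the iSCSI data drive backed by the Granite LUN
ADD VOLUME E:
REM CREATE invokes VSS, which quiesces backup-aware applications and
REM calls the selected provider (RHSP) to take the snapshot
CREATE
```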


Riverbed Host Tools Operation and Configuration

Installing and configuring Riverbed Host Tools requires configuration on both the Windows Server and Granite Core.

To configure Granite Core

1. In the Granite Core Management Console, choose Configure > Storage > Snapshots to configure snapshots.

2. In the Granite Core Management Console, choose Configure > Storage > iSCSI Configuration to configure iSCSI with the necessary storage array credentials.

The credentials must reflect a user account on the storage array appliance that has permissions to take and expose snapshots.

For details on both steps, see the Granite Core Management Console User’s Guide.

To install and configure Riverbed Host Tools

1. Obtain the installer package (rvbd_host_tools_x64.exe) from Riverbed.

This package is available from the same location as the Granite Core image.

2. Run the installer on your Windows Server.

3. Confirm the installation as follows:

From the Start menu, choose Run...

At the command prompt, enter diskshadow to access the Windows DiskShadow interactive shell.

In the DiskShadow shell, enter list providers.

Confirm that RHSP is among the providers returned.
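For example, a successful installation check looks similar to the following DiskShadow session; the provider details shown here are illustrative and may differ by release:

```
C:\> diskshadow
DISKSHADOW> list providers
...
* ProviderID: {...}
    Name: Riverbed Hardware Snapshot Provider
    Type: [3] VSS_PROV_HARDWARE
...
```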

Configuring the Proxy Host for Backup

This section describes the general procedures for configuring the proxy host for backup in both ESXi and Windows environments.

To configure an ESXi proxy host

Configure the ESXi proxy host to connect to the storage array using iSCSI or Fibre Channel.


To configure a Windows proxy host

1. Configure the Windows proxy host to connect to the storage array using iSCSI or Fibre Channel.

2. Configure a local administrator user that has administrator privileges on the Windows proxy host.

To create a user with administrator privileges, create the following registry setting:

– HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System

– For Key Type, specify “DWORD”.

– For Key Name, specify “LocalAccountTokenFilterPolicy”.

– Set Key Value to “1”.

3. Disable the Automount feature through the DiskPart command interpreter:

automount disable

4. Add the storage array target to the favorite list on the proxy host to ensure that the iSCSI connection is reestablished when the proxy host is rebooted.
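Steps 2 and 3 can be performed from an elevated command prompt. The following sketch assumes the default registry path and uses a temporary DiskPart script file; verify the commands in your environment before use:

```
REM Step 2: create the LocalAccountTokenFilterPolicy DWORD value, set to 1
reg add HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System ^
    /v LocalAccountTokenFilterPolicy /t REG_DWORD /d 1 /f

REM Step 3: disable the Automount feature through DiskPart
echo automount disable > dp.txt
diskpart /s dp.txt
del dp.txt
```

Step 4 (adding the storage array target to the favorite list) is typically performed in the Favorite Targets tab of the iSCSI Initiator control panel so that the connection is restored automatically after a reboot.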

Configuring the Storage Array for Proxy Backup

This section describes the general processes for configuring Dell EqualLogic, EMC CLARiiON, EMC VNX, and NetApp storage arrays for backup.

To configure Dell EqualLogic

1. Go to the LUN and select the Access Tab.

2. Add permissions to Granite Core initiator/IP address and assign “volume-only” level access.

3. Add permissions to proxy host for the LUN and assign “snapshots only” access.

To configure EMC CLARiiON and EMC VNX

1. Create a storage group.

2. Assign the proxy servers to the group.

Riverbed recommends that you provide the storage group information to the proxy host.

To configure NetApp

1. Create an initiator group.

2. Assign the proxy servers to the group.

Riverbed recommends that you provide the initiator group information to the proxy host.


Data Protection

The Granite system provides tools to preserve or enhance your existing data protection strategies. If you are presently using host-based backup agents or host-based consolidated backups at the branch, you can continue to do so within the Granite context.

However, Granite Core also enables a wider range of strategies, including:

Backing up from a crash-consistent LUN snapshot at the data center

The Granite system continuously synchronizes the data created at the branch with the data center LUN. As a result, you can use the storage array at the data center to take snapshots of the LUN and thereby avoid unnecessary data transfers across the WAN. These snapshots can be protected either through the storage array replication software or by mounting the snapshot into a backup server.

Such backups are only crash-consistent because the storage array at the data center does not instruct the applications running on the branch server to quiesce their IOs and flush their buffers before taking the snapshot. As a result, such a snapshot might not contain all the data written by the branch server up to the time of the snapshot.

Backing up from an application-consistent LUN snapshot at the data center

This option uses the Granite Microsoft VSS integration in conjunction with Granite Core storage array snapshot support. When you trigger VSS snapshots on the iSCSI data drives of your branch Windows servers, Granite Edge ensures that all data is flushed to the data center LUN and triggers application-consistent snapshots on the data center storage array.

As a result, backups are application-consistent because the Microsoft VSS infrastructure has informed the applications to quiesce their IOs before taking the snapshot.

This option requires the installation of the Riverbed Host Tools on the branch Windows server. For details on Riverbed Host Tools, see “Implementing Riverbed Host Tools for Snapshot Support” on page 46.

Backing up from Granite-assisted consolidated snapshots at the data center

This option relieves backup load on virtual servers, prevents the unnecessary transfer of backup data across the WAN, produces application-consistent backups, and backs up multiple virtual servers simultaneously over VMFS or NTFS.

In this option, the ESX server, and subsequently Granite Core, takes the snapshot, which is stored on a separately configured proxy server. The ESX server flushes the virtual machine buffers to the data stores and the Granite Edge appliance flushes the data to the data center LUN, resulting in application-consistent snapshots on the data center storage array.

Note: You must separately configure the proxy server and storage array for backup. For details, see “Configuring the Proxy Host for Backup” on page 47.

This option does not require the installation of the Riverbed Host Tools on the branch Windows server. However, you must create a script that first triggers ESX-based snapshots and then triggers Granite Edge snapshots.
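A minimal sketch of such a script, run from the ESXi shell, is shown below. The quiesced ESX snapshot step uses vim-cmd; the final Granite Edge trigger is deployment-specific and shown only as a placeholder comment:

```
# Take a quiesced (application-consistent) snapshot of each registered VM.
# snapshot.create arguments: vmid, name, description, includeMemory, quiesced
for VMID in $(vim-cmd vmsvc/getallvms | awk 'NR > 1 {print $1}'); do
  vim-cmd vmsvc/snapshot.create "$VMID" granite-backup "pre-backup" 0 1
done

# Next, trigger the Granite Edge snapshot. The mechanism for this step is
# deployment-specific; see the Granite Data Protection and Recovery Guide.
```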

For details on data protection and backup strategies, as well as a detailed discussion of crash-consistent and application-consistent snapshots, see the Granite Data Protection and Recovery Guide.


Data Recovery

In the event your data protection strategy fails, the Granite system enables several strategies for file-level recovery. The recovery approach depends on the protection strategy you used.

This section describes the following strategies:

File recovery from Granite Edge snapshots at the branch

When snapshots are taken at the branch using Windows VSS in conjunction with RHSP, each snapshot is available to the Windows host as a separate drive. To recover a file, browse to the drive associated with the desired snapshot, locate the file, and restore it. For more information about RHSP, see “Implementing Riverbed Host Tools for Snapshot Support” on page 46.

File recovery from the backup application catalog file at the branch

When backups are taken at the branch using a backup application such as Symantec® NetBackup™ or IBM Tivoli® Storage Manager, you access and restore files directly from the backup server. Riverbed recommends that you restore the files to a different location in case you need to refer back to the current files.

Recover individual files from a data center snapshot (VMDK files)

To recover individual files from a storage array snapshot of a LUN containing virtual disk (VMDK) files, present the snapshot to a VMware ESX server and attach the VMDK to an existing VM running the same operating system (or an operating system that can read the file system used inside the VMDK). You can then browse the file system to retrieve the files stored inside the VM.

Recover individual files from a data center snapshot (individual files)

To recover individual files from a storage array snapshot of a LUN containing individual files, present the snapshot to a server running the same operating system (or an operating system that reads the file system used on the LUN). You can then browse the file system to retrieve the files.

File recovery from a backup application at the data center

You can back up snapshots taken at the data center with a backup application or through Network Data Management Protocol (NDMP) dumps. In this case, file recovery is unchanged from your existing workflow: use the backup application to restore the file, and then send the file to the branch office location.

Alternatively, you can take the LUN offline from the branch office and inject the file directly into the LUN at the data center. However, Riverbed does not recommend this procedure because it requires taking the entire LUN down for the duration of the procedure.

File recovery from Windows VSS at the branch

You can enable Windows VSS and its Previous Versions feature at the branch on a Granite LUN regardless of which primary backup option you implement. With VSS enabled, you can access the drive directly, open the Previous Versions tab, and recover deleted, damaged, or overwritten files.

Windows uses its default VSS software provider to retain up to sixty-four previous versions of each file. In addition to restoring individual files to a previous version, VSS can also restore an entire volume.

Setting up this recovery strategy requires considerations too numerous to detail here. For more details on recovery strategies, see the Granite Data Protection and Recovery Guide.


APPENDIX A Granite Edge Network Reference

Architecture

This appendix provides detailed diagrams for Steelhead EX appliances that run VSP and Granite Edge. It includes the following topics:

“Multiple VLAN Branch With 4-Port Data NIC” on page 52

– “Steelhead EX Setup for Multiple VLAN Branch With 4-Port Data NIC” on page 52

– “Granite Edge Setup Multiple VLAN Branch With 4-Port Data NIC” on page 52

– “Virtual Services Platform Setup Multiple VLAN Branch With 4-Port Data NIC” on page 53

“Single VLAN Branch With 4-Port Data NIC” on page 54

– “Steelhead EX Setup for Single VLAN Branch With 4-Port Data NIC” on page 54

– “Granite Edge Setup Single VLAN Branch With 4-Port Data NIC” on page 54

– “Virtual Services Platform Setup Single VLAN Branch With 4-Port Data NIC” on page 55

“Multiple VLAN Branch Without 4-Port Data NIC” on page 56

– “Steelhead EX Setup for Multiple VLAN Branch Without 4-Port Data NIC” on page 56

– “Granite Edge Setup Multiple VLAN Branch Without 4-Port Data NIC” on page 56

– “Virtual Services Platform Setup Multiple VLAN Branch Without 4-Port Data NIC” on page 57


Multiple VLAN Branch With 4-Port Data NIC

The following diagrams apply only to Steelhead EX models 1160/1260.

Figure A-1. Steelhead EX Setup for Multiple VLAN Branch With 4-Port Data NIC

Figure A-2. Granite Edge Setup Multiple VLAN Branch With 4-Port Data NIC


Figure A-3. Virtual Services Platform Setup Multiple VLAN Branch With 4-Port Data NIC


Single VLAN Branch With 4-Port Data NIC

The following network diagrams apply only to Steelhead EX models 1160/1260.

Figure A-4. Steelhead EX Setup for Single VLAN Branch With 4-Port Data NIC

Figure A-5. Granite Edge Setup Single VLAN Branch With 4-Port Data NIC


Figure A-6. Virtual Services Platform Setup Single VLAN Branch With 4-Port Data NIC


Multiple VLAN Branch Without 4-Port Data NIC

Figure A-7. Steelhead EX Setup for Multiple VLAN Branch Without 4-Port Data NIC

Figure A-8. Granite Edge Setup Multiple VLAN Branch Without 4-Port Data NIC


Figure A-9. Virtual Services Platform Setup Multiple VLAN Branch Without 4-Port Data NIC


Index

A
Appliance ports, definitions 26
Application-consistent snapshots 44
Auxiliary port, definition 26

B
Backup proxy host configuration 47
Block Disk LUNs
  configuring 35
  considerations 36
Branch services
  configuring branch storage 40
  configuring disk management 40

C
Cables 26, 32
Console port, definition 26

D
Data protection 49
Data recovery 50
Deployment process
  configuring Granite Edge 22
  network scenarios 18
  overview 13, 20
Deployment scenarios
  high availability 18
  single appliance 18
Disk management, configuring 40
Document conventions, overview of 2

F
Failover
  configuring peers 33
  Granite Edge 40
Fibre Channel LUNs
  configuring 22, 35
  considerations 36
  ESXi configuration 35
  support 22

G
Granite Core
  cabling 32
  configuring high availability 31
  configuring jumbo frames 31
  configuring MPIO interfaces 38
  deploying 25
  port definitions 26
Granite Edge
  adding to configuration 37
  configuring 21
  confirming connection to 22
  initiator groups 23
  initiators, configuring 23
  LUN mapping 23
  prepopulating 24
  target settings 23

H
Hardware requirements
  Granite Core appliance 10
  Granite Core-VE 10
Heartbeat protocol 41
High availability
  accessing peers 32
  cabling for clustered appliances 32
  configuring 31
  configuring failover peers 33
  configuring peers 33
  overview 18, 37
  removing 33

I
Interface configuration 26
Interface routing 27
iSCSI
  initiator, configuring 22
  portal, configuring 22
iSCSI initiator 34
iSCSI LUNs, configuring 22

J
Jumbo frames
  configuring 31
  support 31

L
LUNs
  configuring 22, 34
  configuring high availability 36
  configuring snapshots 45
  discovering 22
  exposing 35
  iSCSI 22
  mapping to Granite Edge 22
  offlining 36
  removing 36
  removing from configuration 36

M
MPIO 38
  configuring for Granite Core 37, 38
  configuring in Granite Edge 42
  overview 37
Multi-path I/O, configuring 38

N
Network deployment scenarios 18

O
Online documentation 3

P
Port configuration 26
Ports
  auxiliary 26
  definitions of 26
  primary, definition of 26
Product dependencies 3
Product overview
  high availability deployment 18
  virtual branch storage 5
Professional services, contacting 4
Proxy host
  configuration 47

R
Release notes 3
Requirements 8
Riverbed Hardware Snapshot Provider 46
Riverbed Host Tools
  operation and configuration 47
Riverbed Snapshot Agent 46

S
Snapshots
  application consistent 44
  configuring for LUNs 45
  Riverbed Hardware Snapshot Provider 46
  Riverbed Host Tools 46
  Volume Snapshot Service (VSS) support 45
Standby Rebuild status 41
Standby Sync status 41
Storage array
  data protection 49
  data recovery 50
  proxy backup 48
  qualified storage vendors 43
Storage array interoperability 10
System overview
  architecture 13
  components 13
  virtual storage 5
System overview, functionality 6

T
Target, configuring 22
Technical support
  contacting 3
  contacting Riverbed 3
  documentation 4
  professional services 4
  via Internet 3

V
Virtual storage, overview 5
Volume Snapshot Service (VSS)
  support 45