SAN Design Cisco


Transcript of SAN Design Cisco

Page 1: SAN Design Cisco

© 2011 Cisco and/or its affiliates. All rights reserved. Cisco Public BRKSAN-1121

BRKSAN-1121

Storage Area Networking Core Edge Design Best Practices

Page 2: SAN Design Cisco

© 2011 Cisco and/or its affiliates. All rights reserved. Cisco Public BRKSAN-1121

Session BRKSAN-1121 Abstract

SAN Core-Edge Design Best Practices

This session gives non-storage-networking professionals the fundamentals needed to understand and implement storage area networks (SANs). The curriculum prepares attendees for involvement in SAN projects and in I/O consolidation of Ethernet and Fibre Channel networking, and introduces storage networking terminology and designs. Specific topics covered include Fibre Channel (FC), FCoE, FC services, FC addressing, fabric routing, zoning, and virtual SANs (VSANs). The session also covers designing core-edge Fibre Channel networks and the best-practice recommendations around them. This is an introductory session; attendees are encouraged to follow up with other SAN breakout sessions and labs to learn more about specific advanced topics.

Page 3: SAN Design Cisco

© 2011 Cisco and/or its affiliates. All rights reserved. Cisco Public BRKSAN-1121

Chad Hintz

Technical Solutions Architect-Data Center/Virtualization

CCIE #15729

Routing & Switching, Security, Storage

Who Am I?

Page 4: SAN Design Cisco

© 2011 Cisco and/or its affiliates. All rights reserved. Cisco Public BRKSAN-1121

What Are Storage Area Networks

Page 5: SAN Design Cisco

© 2011 Cisco and/or its affiliates. All rights reserved. Cisco Public BRKSAN-1121

 History of Storage Area Networks

 Fibre Channel Basics  SAN Design Requirements

 Introduction to Typical SAN Designs  Introduction to NPIV/NPV

 Core-Edge Design Review

 Recommended Core-Edge Designs for Scale and Availability  Next Generation Core-Edge Designs

Session Agenda

Page 6: SAN Design Cisco

© 2011 Cisco and/or its affiliates. All rights reserved. Cisco Public BRKSAN-1121 6

History of Storage Area Networks

Page 7: SAN Design Cisco

© 2011 Cisco and/or its affiliates. All rights reserved. Cisco Public BRKSAN-1121

Direct Attached Storage: DAS

DAS – Direct-Attach Storage

Dedicated high-speed access
  Can’t share capacity

  Difficult to share data

  Difficult to manage

Page 8: SAN Design Cisco

© 2011 Cisco and/or its affiliates. All rights reserved. Cisco Public BRKSAN-1121

Network Attached Storage: NAS

NAS

NAS – Network-Attached Storage

Centralized Storage Attached over LAN

  File sharing

  More efficient capacity usage

  Performance limits usefulness of NAS – mainly used for file storage and low-end databases

LAN

Page 9: SAN Design Cisco

© 2011 Cisco and/or its affiliates. All rights reserved. Cisco Public BRKSAN-1121

Storage Area Network: SAN

NAS

SAN – Storage Area Network: Dedicated ‘Back End’ Network
  Match DAS performance
  Capacity deployed and redeployed
  Centralized management
  Diskless servers – simplified management, reduced power and cooling

SAN

LAN

Page 10: SAN Design Cisco

© 2011 Cisco and/or its affiliates. All rights reserved. Cisco Public BRKSAN-1121

SAN Protocols

  Fibre Channel
Fibre Channel is a gigabit-speed network technology primarily used for storage networking
  FCoE
Fibre Channel over Ethernet (FCoE) is an encapsulation of Fibre Channel frames over Ethernet networks. This allows Fibre Channel to use 10 Gigabit Ethernet networks while preserving the Fibre Channel protocol
  iSCSI
iSCSI is a TCP/IP-based protocol for establishing and managing connections between IP-based storage devices, hosts and clients

Page 11: SAN Design Cisco

© 2011 Cisco and/or its affiliates. All rights reserved. Cisco Public BRKSAN-1121

 History of Storage Area Networks

 Fibre Channel Basics  SAN Design Requirements

 Introduction to Typical SAN Designs  Introduction to NPIV/NPV

 Core-Edge Design Review

 Recommended Core-Edge Designs for Scale and Availability  Next Generation Core-Edge Designs

Session Agenda

Page 12: SAN Design Cisco

© 2011 Cisco and/or its affiliates. All rights reserved. Cisco Public BRKSAN-1121 12

Fibre Channel Basics

Page 13: SAN Design Cisco

© 2011 Cisco and/or its affiliates. All rights reserved. Cisco Public BRKSAN-1121

SAN Components
  Servers with host bus adapters
  Storage systems: RAID, JBOD, tape

  Switches

  SAN management software

Page 14: SAN Design Cisco

© 2011 Cisco and/or its affiliates. All rights reserved. Cisco Public BRKSAN-1121

Types of Fibre Channel Switches

Redundancy/Services Cisco MDS 9000

MDS 9506, 9509, 9513 MDS 9200

MDS 91XX

Small/Medium Business Enterprise and Service Provider

FC Bladeswitch

Edge Modular Director

Page 15: SAN Design Cisco

© 2011 Cisco and/or its affiliates. All rights reserved. Cisco Public BRKSAN-1121

 ‘N’ port: Node ports used to connect devices to switched fabric or point to point configurations.

 ‘F’ port: Fabric ports residing on switches connecting ‘N’ port devices

 ‘L’ port: Loop ports are used in arbitrated loop configurations to build networks without FC switches. These ports often also have ‘N’ port capabilities and are called ‘NL’ ports.

 ‘E’ port: Expansion ports are essentially trunk ports used to connect two Fibre Channel switches

 ‘GL’ port: A generic port capable of operating as either an ‘E’ or ‘F’ port. It’s also capable of acting in an ‘FL’ port capacity; the role is determined by auto discovery.

Fibre Channel Port Types


Page 16: SAN Design Cisco

© 2011 Cisco and/or its affiliates. All rights reserved. Cisco Public BRKSAN-1121

Fibre Channel Port Types

[Diagram: end-node N_Ports connect to switch F_Ports; two fabric switches interconnect E_Port to E_Port (or TE_Port to TE_Port when trunking); an NPV switch connects its NP_Port to an F_Port on the upstream fabric switch; G_Ports auto-negotiate their role]

Page 17: SAN Design Cisco

© 2011 Cisco and/or its affiliates. All rights reserved. Cisco Public BRKSAN-1121

Start from the Beginning…
  Start with a host and a target that need to communicate
  Host has 2 HBAs (one per fabric), each with a WWN
  Target has multiple ports to connect to the fabric
  Connect to an FC switch
  Port type negotiation
  Speed negotiation
  FC switch is part of the SAN “fabric”
  Most commonly, dual fabrics are deployed for redundancy

FC

HBA

Core

Initiator

Target

FABRIC A

Edge

Page 18: SAN Design Cisco

© 2011 Cisco and/or its affiliates. All rights reserved. Cisco Public BRKSAN-1121

My Port Is Up…Can I Talk Now? FLOGIs/PLOGIs

N_Port

F_Port

FC

HBA

Core

Initiator

Target

E_Port

  Step 1: Fabric Login (FLOGI)
  determines the presence or absence of a fabric
  exchanges service parameters with the fabric
  the switch identifies the WWN in the service parameters of the accept frame and assigns a Fibre Channel ID (FCID)
  initializes the buffer-to-buffer credits
  Step 2: Port Login (PLOGI)
  required between nodes that want to communicate
  similar to FLOGI – transports a PLOGI frame to the destination node port
  in a point-to-point topology (no fabric present), initializes buffer-to-buffer credits

Edge
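As a hedged illustration of checking these logins on a Cisco MDS/Nexus switch (the VSAN number is an assumption), the FLOGI and name-server databases can be inspected from the CLI:

show flogi database vsan 10
show fcns database vsan 10

The first command lists the N_Ports that performed FLOGI on the local switch with their assigned FCIDs; the second shows the fabric-wide name-server registrations.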

Page 19: SAN Design Cisco

© 2011 Cisco and/or its affiliates. All rights reserved. Cisco Public BRKSAN-1121

Buffer to Buffer Credits Fibre Channel Flow Control

  B2B Credits used to ensure that FC transport is lossless

  # of credits negotiated between ports when link is brought up

  # of credits decremented with each packet placed on the wire – independent of packet size
  If # of credits == 0, no more packets are transmitted
  # of credits incremented with each R_RDY (receiver ready) received
  B2B credits need to be taken into consideration as distance and/or bandwidth increases

*See reference slides for B2B-Distance calculations based on Line cards.
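As a sketch only (interface and credit value are assumptions, and larger extended-credit values depend on the line card and licensing), the per-port receive B2B credits can be tuned on an MDS interface when links span longer distances:

interface fc1/1
  switchport fcrxbbcredit 64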


Page 20: SAN Design Cisco

© 2011 Cisco and/or its affiliates. All rights reserved. Cisco Public BRKSAN-1121

Over-Subscription (FAN-OUT) Ratio
 Over-subscription (or fan-out) ratio is used for sizing ports and links
 Factors used:
 Speed of host HBA interfaces

 Speed of Array interfaces

 Type of server and application

 Storage vendors provide guidance in the process

 Ratios range between 4:1 - 20:1

FC

6 x 4G Array ports 3 x 8G ISL ports

Example: 10:1 O/S ratio

60 Servers with 4 Gb HBAs

240 G 24 G 24 G
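Making the example arithmetic explicit: 60 servers × 4 Gb = 240 Gb of potential host bandwidth, while 6 array ports × 4 Gb = 24 Gb and 3 ISLs × 8 Gb = 24 Gb, so both the ISL stage and the array stage work out to 240:24 = 10:1 oversubscription.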

Page 21: SAN Design Cisco

© 2011 Cisco and/or its affiliates. All rights reserved. Cisco Public BRKSAN-1121

Fabric Name and Addressing: WWN

 Every Fibre Channel port and node has a hard-coded address called World Wide Name (WWN)

 During FLOGI the switch identifies the WWN in the service parameters of the accept frame and assigns a Fibre Channel ID (FCID)

 Switch name server maps WWNs to FCIDs
 WWNN uniquely identifies a device

 WWPN uniquely identify each port in a device

Page 22: SAN Design Cisco

© 2011 Cisco and/or its affiliates. All rights reserved. Cisco Public BRKSAN-1121

Fibre Channel ID (FCID) – Fibre Channel Addressing Scheme

[Diagram: FCID = Switch Domain | Area | Device]

  FCID assigned to every WWN corresponding to an N_Port
  FCID made up of switch domain, area and device
  domain ID is native to a single FC switch
  there is a limit on the number of domain IDs in a single fabric
  Forwarding decisions made on the domain ID found in the first 8 bits of the FCID
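As an illustration (the value itself is an assumption), an FCID of 0x0a01e8 decomposes into domain 0x0a, area 0x01 and device/port 0xe8; fabric routing only considers the leading domain byte.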

Page 23: SAN Design Cisco

© 2011 Cisco and/or its affiliates. All rights reserved. Cisco Public BRKSAN-1121

 FSPF “routes” traffic based on destination domain ID found in the destination FCID

 For FSPF a domain ID identifies a single switch
 The number of domain IDs is limited to 239 (theoretical limit) / 75 (tested and qualified) within the same fabric (VSAN)

 FSPF performs hop-by-hop routing

 FSPF uses total cost as the metric to determine most efficient path

 FSPF supports equal cost load balancing across links

Fabric Shortest Path First (like OSPF) Fibre Channel Forwarding
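A minimal way to inspect FSPF state and the domain list on an MDS/Nexus fabric switch (the VSAN number is an assumption):

show fspf database vsan 10
show fcdomain domain-list vsan 10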

Page 24: SAN Design Cisco

© 2011 Cisco and/or its affiliates. All rights reserved. Cisco Public BRKSAN-1121

 Repository of information regarding the components that make up the Fibre Channel network

 Located at address FF FF FC (some readings call this the name server)

 Components can register their characteristics with the directory server

 An N_Port can query the directory server for specific information  Query can be the address identifier, WWN and volume names for all SCSI targets

Directory Server/Name Server (Like DNS)

Page 25: SAN Design Cisco

© 2011 Cisco and/or its affiliates. All rights reserved. Cisco Public BRKSAN-1121

Login Complete…Almost There Fabric Zoning

FC

FC Fabric

Initiator

Target

  Zones are the basic form of data path security
  zone members can only “see” and talk to other members of the zone
  devices can be members of more than one zone
  Default zoning is “deny”
  Zones belong to a zoneset
  Zoneset must be “active” to enforce zoning
  Only one active zoneset per fabric or per VSAN

zone name EXAMPLE_Zone
* fcid 0x10.00.01 [pwwn 10:00:00:00:c9:76:fd:31] [initiator]
* fcid 0x11.00.01 [pwwn 50:06:01:61:3c:e0:1a:f6] [target]

pwwn 10:00:00:00:c9:76:fd:31

pwwn 50:06:01:61:3c:e0:1a:f6
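A minimal NX-OS zoning sketch matching this slide (the VSAN number and zoneset name are assumptions; the pWWNs are the ones shown above):

zone name EXAMPLE_Zone vsan 10
  member pwwn 10:00:00:00:c9:76:fd:31
  member pwwn 50:06:01:61:3c:e0:1a:f6
zoneset name FABRIC_A_ZS vsan 10
  member EXAMPLE_Zone
zoneset activate name FABRIC_A_ZS vsan 10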

Page 26: SAN Design Cisco

© 2011 Cisco and/or its affiliates. All rights reserved. Cisco Public BRKSAN-1121

Enhanced vs. Basic Zoning

Basic: Administrators can make simultaneous configuration changes.
Enhanced: All configuration changes are made within a single session; the switch locks the entire fabric to implement the change.
Advantage: One configuration session for the entire fabric ensures consistency within the fabric.

Basic: If a zone is a member of multiple zonesets, an instance is created per zoneset.
Enhanced: References to the zone are used by the zonesets as required once you define the zone.
Advantage: Reduced payload size as the zone is referenced; the savings are more pronounced with bigger databases.

Basic: Default zone policy is defined per switch.
Enhanced: Enforces and exchanges the default zone setting throughout the fabric.
Advantage: Fabric-wide policy enforcement reduces troubleshooting time.

Page 27: SAN Design Cisco

© 2011 Cisco and/or its affiliates. All rights reserved. Cisco Public BRKSAN-1121

Enhanced vs. Basic Zoning

Basic: The managing switch provides combined status about activation; it will not identify a failing switch.
Enhanced: Retrieves the activation results and the nature of the problem from each remote switch.
Advantage: Enhanced error reporting reduces troubleshooting time.

Basic: To distribute the zoneset you must re-activate the same zoneset.
Enhanced: Implements changes to the zoning database and distributes them without activation.
Advantage: This avoids hardware changes for hard zoning in the switches.

Basic: During a merge, MDS-specific types can be misunderstood by non-Cisco switches.
Enhanced: Provides a vendor ID along with a vendor-specific type value to uniquely identify a member type.
Advantage: Unique vendor type.
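A sketch of switching a VSAN to enhanced zoning on NX-OS (the VSAN number is an assumption; once enhanced, changes are made within a session and then committed):

zone mode enhanced vsan 10
zone commit vsan 10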

Page 28: SAN Design Cisco

© 2011 Cisco and/or its affiliates. All rights reserved. Cisco Public BRKSAN-1121

 Zoning is used to control access in a SAN
 Soft zoning
 Enforced by name server query responses – the name server sends the membership list to the N_Port, and the N_Port accesses members only
 Hard zoning
 Enforced by hardware (forwarding ASIC) at wire speed, based on pWWN, fWWN, FC_ID or FC_Alias

Zoning—Enforcement


Page 29: SAN Design Cisco

© 2011 Cisco and/or its affiliates. All rights reserved. Cisco Public BRKSAN-1121

Hard zone

Page 30: SAN Design Cisco

© 2011 Cisco and/or its affiliates. All rights reserved. Cisco Public BRKSAN-1121

Soft zone

Page 31: SAN Design Cisco

© 2011 Cisco and/or its affiliates. All rights reserved. Cisco Public BRKSAN-1121

 Analogous to VLANs in Ethernet
 Virtual fabrics created from a larger, cost-effective, redundant physical fabric
 Reduces wasted ports of a SAN island approach
 Fabric events are isolated per VSAN, which gives further isolation for high availability
 Statistics can be gathered per VSAN
 Each VSAN provides separate fabric services: FSPF, zones/zonesets, DNS, RSCN

Virtual SANs (VSANs) – Virtual Fabric Separation

Physical SAN islands are virtualized onto a common SAN infrastructure. VSANs are supported on the MDS and Nexus 5000 product lines. A Virtual SAN (VSAN) provides a method to allocate ports within a physical fabric and create virtual fabrics.
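A minimal sketch of creating a VSAN and placing a port into it on NX-OS (VSAN number, name and interface are assumptions):

vsan database
  vsan 10 name PRODUCTION
  vsan 10 interface fc1/1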

Page 32: SAN Design Cisco

© 2011 Cisco and/or its affiliates. All rights reserved. Cisco Public BRKSAN-1121

SAN Islands – Before VSANs

Production SAN Tape SAN Test SAN

DomainID=8 DomainID=7

SAN B DomainID=2

SAN D DomainID=4

SAN F Domain ID=6

SAN E DomainID=5

SAN C DomainID=3 SAN A

DomainID=1

Page 33: SAN Design Cisco

© 2011 Cisco and/or its affiliates. All rights reserved. Cisco Public BRKSAN-1121

SAN Islands – with Virtual SANs

Production SAN Tape SAN Test SAN

SAN B DomainID=2

SAN D DomainID=4

SAN F Domain ID=6

SAN E DomainID=5

SAN C DomainID=3

SAN A DomainID=1

Page 34: SAN Design Cisco

© 2011 Cisco and/or its affiliates. All rights reserved. Cisco Public BRKSAN-1121

 Hierarchical relationship:
 First assign physical ports to VSANs
 Then configure independent zones per VSAN
 VSANs divide the physical infrastructure
 Zones provide added security and allow sharing of device ports
 VSANs provide traffic statistics
 VSANs are only changed when ports are needed per virtual fabric
 Zones can change frequently (e.g., backup)
 Ports are added/removed non-disruptively to VSANs

VSANs and Zones—Complementary

VSAN 3

Physical Topology VSAN 2

Disk1

Host2 Disk4

Host1

Disk2 Disk3

Disk6 Disk5

Host4

Host3

ZoneA

ZoneB

ZoneC

ZoneA

ZoneD

Relationship of VSANs to Zones

Virtual SANs and Fabric Zoning Are Very Complementary

Page 35: SAN Design Cisco

© 2011 Cisco and/or its affiliates. All rights reserved. Cisco Public BRKSAN-1121

 Criteria for forming a PortChannel
 Same speed links
 Same modes (auto, E, etc.) and states
 Between the same two switches
 Same VSAN membership

  Treated as one logical ISL by upper layer protocols (FSPF)

 Can use up to 16 links in a PortChannel (32 Gbps max)

 Can be formed from any ports on any modules—HA enabled

 Exchange-based in-order load balancing
 Mode one: based on src/dst FC_IDs
 Mode two: based on src/dst FC_ID/OX_ID
 Much faster recovery than FSPF-based balancing
 Given a logical interface name with aggregated bandwidth and a derived routing metric

Inter-Switch Link PortChanneling

E.g., 8-Gbps PortChannel (Four x 2 Gbps)

E.g., 4-Gbps PortChannel (Two x 2 Gbps)

A PortChannel Is a Logical Bundling of Identical Links
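A hedged configuration sketch of an ISL PortChannel between two MDS switches (channel number, interfaces and VSAN range are assumptions):

interface port-channel 1
  switchport mode E
  switchport trunk allowed vsan 10-20
interface fc1/1
  channel-group 1 force
  no shutdown
interface fc1/2
  channel-group 1 force
  no shutdown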

Page 36: SAN Design Cisco

© 2011 Cisco and/or its affiliates. All rights reserved. Cisco Public BRKSAN-1121

 ISL = inter-switch link

 PortChannel = E_Ports and ISLs  Trunk = ISLs that support VSANs

 Trunking = TE_Ports and EISLs

PortChannel vs. Trunking

Page 37: SAN Design Cisco

© 2011 Cisco and/or its affiliates. All rights reserved. Cisco Public BRKSAN-1121

 History of Storage Area Networks

 Fibre Channel Basics  SAN Design Requirements

 Introduction to Typical SAN Designs  Introduction to NPIV/NPV

 Core-Edge Design Review

 Recommended Core-Edge Designs for Scale and Availability  Next Generation Core-Edge Designs

Session Agenda

Page 38: SAN Design Cisco

© 2011 Cisco and/or its affiliates. All rights reserved. Cisco Public BRKSAN-1121 38

SAN Design Requirements

Page 39: SAN Design Cisco

© 2011 Cisco and/or its affiliates. All rights reserved. Cisco Public BRKSAN-1121

 High Availability - Providing a Dual Fabric (current best practice)

 Meeting oversubscription ratios established by disk vendors

 Effective zoning
 Providing business function/operating system fabric segmentation and security
 Fabric scalability (FLOGI and domain-ID scaling)
 Providing connectivity for virtualized servers

 Providing connectivity for diverse server placement and form factors

SAN Design Key Requirements

Page 40: SAN Design Cisco

© 2011 Cisco and/or its affiliates. All rights reserved. Cisco Public BRKSAN-1121

 Fibre Channel SAN
 Transport and services are on the same layer in the same devices

 Well defined end device relationships (initiators and targets)

 Does not tolerate packet drop – requires lossless transport

 Only north-south traffic, east-west traffic mostly irrelevant

 Network designs optimized for Scale and Availability

 High availability of network services provided through dual fabric architecture

 Edge/Core vs Edge/Core/Edge

 Service deployment

Client/server relationships are pre-defined

[Diagram: initiators (I) and targets (T) attached to switches; each switch runs the fabric services – DNS, FSPF, zoning, RSCN]

Fabric topology, services and traffic flows are structured

The Design Requirements Classical Fibre Channel

Page 41: SAN Design Cisco

© 2011 Cisco and/or its affiliates. All rights reserved. Cisco Public BRKSAN-1121

 History of Storage Area Networks

 Fibre Channel Basics  SAN Design Requirements

 Introduction to Typical SAN Designs  Introduction to NPIV/NPV

 Core-Edge Design Review

 Recommended Core-Edge Designs for Scale and Availability  Next Generation Core-Edge Designs

Session Agenda

Page 42: SAN Design Cisco

© 2011 Cisco and/or its affiliates. All rights reserved. Cisco Public BRKSAN-1121 42

Typical SAN Designs

Page 43: SAN Design Cisco

© 2011 Cisco and/or its affiliates. All rights reserved. Cisco Public BRKSAN-1121

 Collapsed Core Design

 Servers connect to the core switches
 Storage devices connect to one or more core switches
 Core switches provide storage services
 Large number of blades needed to support initiator (host) and target (storage) ports

 Single Management per Fabric

 Normal for Small SAN environments

 HA achieved in two physically separate, but identical, redundant SAN fabrics

FC

Core Fabric A Fabric B

Core

SAN Design – Single Tier Topology

Page 44: SAN Design Cisco

© 2011 Cisco and/or its affiliates. All rights reserved. Cisco Public BRKSAN-1121

How Do We Avoid This?

Page 45: SAN Design Cisco

© 2011 Cisco and/or its affiliates. All rights reserved. Cisco Public BRKSAN-1121

 “Core-Edge” Topology- Most Common

 Servers connect to the edge switches
 Storage devices connect to one or more core switches
 Core switches provide storage services to one or more edge switches, thus servicing more servers in the fabric

 ISLs have to be designed so that overall fan-in ratio of servers to storage and overall end-to-end oversubscription are maintained

 HA achieved in two physically separate, but identical, redundant SAN fabrics

FC

Core

Edge

Fabric A Fabric B

Core

Edge

SAN Design – Two Tier Topology

Page 46: SAN Design Cisco

© 2011 Cisco and/or its affiliates. All rights reserved. Cisco Public BRKSAN-1121

 “Edge-Core-Edge” Topology

 Servers connect to the edge switches
 Storage devices connect to one or more edge switches
 Core switches provide storage services to one or more edge switches, thus servicing more servers and storage in the fabric

 ISLs have to be designed so that overall fan-in ratio of servers to storage and overall end-to-end oversubscription are maintained

 HA achieved in two physically separate, but identical, redundant SAN fabrics

FC

Core

Edge

Fabric A Fabric B

Core

Edge

SAN Design – Three Tier Topology

Edge Edge

Page 47: SAN Design Cisco

© 2011 Cisco and/or its affiliates. All rights reserved. Cisco Public BRKSAN-1121

 History of Storage Area Networks

 Fibre Channel Basics  SAN Design Requirements

 Introduction to Typical SAN Designs  Introduction to NPIV/NPV

 Core-Edge Design Review

 Recommended Core-Edge Designs for Scale and Availability  Next Generation Core-Edge Designs

Session Agenda

Page 48: SAN Design Cisco

© 2011 Cisco and/or its affiliates. All rights reserved. Cisco Public BRKSAN-1121 48

Introduction to NPIV/NPV

Page 49: SAN Design Cisco

© 2011 Cisco and/or its affiliates. All rights reserved. Cisco Public BRKSAN-1121

What Is NPIV, and Why?
 N_Port ID Virtualization (NPIV) provides a means to assign multiple FCIDs to a single N_Port
 Without NPIV, only a single FCID is handed out per F_Port, so an F_Port can only accept a single FLOGI
 Allows multiple applications to share the same Fibre Channel adapter port
 Usage applies to applications such as virtualization

[Diagram: application server N_Port connected to an F_Port on the FC NPIV core switch – Email I/O uses N_Port_ID 1, Web I/O uses N_Port_ID 2, File Services I/O uses N_Port_ID 3]

Page 50: SAN Design Cisco

© 2011 Cisco and/or its affiliates. All rights reserved. Cisco Public BRKSAN-1121

What Is NPV, and Why?
 N_Port Virtualizer (NPV) utilizes NPIV functionality to allow a “switch” to act like a server, performing multiple logins through a single physical link

 Physical servers connected to the NPV switch login to the upstream NPIV core switch

 No local switching is done on an FC switch in NPV mode

 FC edge switch in NPV mode does not take up a domain ID

 Helps to alleviate domain ID exhaustion in large fabrics

[Diagram: application servers connect to F_Ports on the NPV switch; the NPV switch’s NP_Port connects to an F_Port on the FC NPIV core switch – Server1 gets N_Port_ID 1, Server2 gets N_Port_ID 2, Server3 gets N_Port_ID 3]
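As a hedged sketch of how these roles are enabled (feature names are the standard NX-OS ones; note that switching an MDS/Nexus edge switch into NPV mode erases its configuration and reloads it, so this is illustrative only):

! On the NPIV core switch
feature npiv
! On the edge switch (disruptive – the switch reloads into NPV mode)
feature npv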

Page 51: SAN Design Cisco

© 2011 Cisco and/or its affiliates. All rights reserved. Cisco Public BRKSAN-1121

NPV Auto Load Balancing Automatic Balancing of Load on NP Links

 Uniform balancing of server loads on NP links  Server loads are not tied to any uplink

 Benefit  Optimal uplink bandwidth utilization

Blade 1

Blade 4

Blade Server Chassis

Blade 2

SAN

Balanced load on NP links

Blade 3

1 3

2 4

Page 52: SAN Design Cisco

© 2011 Cisco and/or its affiliates. All rights reserved. Cisco Public BRKSAN-1121

NPV Auto Load Balancing – Automatic Distribution of Loads onto NP Links

 Re-distribution of the loads among available links when a failed link comes back up
  Potentially disruptive
  Not a default behavior
  One time, or explicitly configurable using a CLI command/Device Manager
  Original distribution not maintained
 Benefits
  No “left-out” NP links

  Optimal use of uplink bandwidth

  Scheduled traffic disruption

Blade 1

Blade 4

Blade Server Chassis

Blade 2

SAN

Blade 3

Disrupted servers re-login on other uplink

1 2

3 4

npv auto-load-balance disruptive

Page 53: SAN Design Cisco

© 2011 Cisco and/or its affiliates. All rights reserved. Cisco Public BRKSAN-1121

F-Port PortChannel – Enhance NPV Uplink Resiliency

  F-Port PortChannels
 Bundle multiple ports into one logical link
 Similar to ISL PortChannels in FC and EtherChannels in Ethernet
  Benefits
 High availability – no disruption if a cable, port, or line card fails
 Optimal bandwidth utilization and higher aggregate bandwidth with load balancing

Storage

Bla

de S

yste

m

Blade 1

Blade 2

Blade N

F-Port Port Channel

F-Port N-Port

Core Director

SAN

interface port-channel 1
  no shutdown
interface fc1/1
  channel-group 1
interface fc1/2
  channel-group 1

Page 54: SAN Design Cisco

© 2011 Cisco and/or its affiliates. All rights reserved. Cisco Public BRKSAN-1121

F-Port Trunking Extend VSAN Benefits to Blades

  F-Port Trunking
 Uplinks carry multiple VSANs
  Benefits
 Extend VSAN benefits to blade servers
 Separate management domains
 Traffic isolation and the ability to host differentiated services on blades

Storage

Bla

de S

yste

m Blade N

Core Director

VSAN 1

VSAN 2

VSAN 3

F-Port Trunking on

F-Port Channel

F-Port N-Port

SAN

NPV

interface fc1/1
  switchport trunk mode on
  switchport trunk allowed vsan 1-3
interface port-channel 1
  switchport trunk mode on
  switchport trunk allowed vsan 1-3

Blade 2

Blade 1

Page 55: SAN Design Cisco

© 2011 Cisco and/or its affiliates. All rights reserved. Cisco Public BRKSAN-1121

Application Protection and Consolidation Using VSANs and F-Port Trunking

Bla

de S

yste

m Blade 1

Blade 2

Blade N

F-Port Portchannel with Trunking

F-Port N-Port

NPV

 Security and Services Differentiation for blades

 Better application protection and isolation

 Ability to host different application on blades in different VSANs

ERP

Email

WEB

ERP

Web

E-Mail

Page 56: SAN Design Cisco

© 2011 Cisco and/or its affiliates. All rights reserved. Cisco Public BRKSAN-1121

 History of Storage Area Networks

 Fibre Channel Basics  SAN Design Requirements

 Introduction to Typical SAN Designs  Introduction to NPIV/NPV

 Core-Edge Design Review

 Recommended Core-Edge Designs for Scale and Availability  Next Generation Core-Edge Designs

Session Agenda

Page 57: SAN Design Cisco

© 2011 Cisco and/or its affiliates. All rights reserved. Cisco Public BRKSAN-1121 57

Core-Edge Design Review

Page 58: SAN Design Cisco

© 2011 Cisco and/or its affiliates. All rights reserved. Cisco Public BRKSAN-1121

 “Edge-Core” Topology- Most Common

 Servers connect to the edge switches
 Storage devices connect to one or more core switches
 Core switches provide storage services to one or more edge switches, thus servicing more servers in the fabric

 ISLs have to be designed so that overall fan-in ratio of servers to storage and overall end-to-end oversubscription are maintained

 HA achieved in two physically separate, but identical, redundant SAN fabrics

FC

Core

Edge

Fabric A Fabric B

Core

Edge

SAN Design – Two Tier Topology

Page 59: SAN Design Cisco

© 2011 Cisco and/or its affiliates. All rights reserved. Cisco Public BRKSAN-1121

Core-Edge Design Options

 Consolidation options  High density directors in core

 Fabric switches at edge

 Directors or blade switches on edge

 Scalable growth up to core and ISL capacity

A

B A A

A

B

B

B

Page 60: SAN Design Cisco

© 2011 Cisco and/or its affiliates. All rights reserved. Cisco Public BRKSAN-1121

A B

Server Consolidation with Top-of-Rack Fabric Switches

2X ISL to Core at 10 G

32 Host Ports at 4 G

96 Storage Ports at 2 G

28 ISL to Edge at 10 G

14 Racks 32 Dual Attached Servers per Rack

AB

Top of Rack

Top of Rack Design

MDS 91XXs
Ports Deployed: 1200
Storage Ports (4 G Dedicated): 192
Host Ports (4 G Shared): 896
Disk Oversubscription (Ports): 9.3 : 1
Number of FC switches in the fabric: 30

Page 61: SAN Design Cisco

© 2011 Cisco and/or its affiliates. All rights reserved. Cisco Public BRKSAN-1121

Server Consolidation with Blade servers

2X ISL to Core at 4G

16 Host Ports at 4G

120 Storage Ports at 2 G

60 ISL to Edge at 4 G

Five Racks 96 Dual Attached Blade Servers per Rack

A B

A B

Blade Servers

Blade Server Design Using 2 x 4 G ISL per Blade Switch;

Less cables/power

Ports Deployed: 1608
Storage Ports (4 G Dedicated): 240
Host Ports (4 G Shared): 480
Disk Oversubscription (Ports): 8 : 1
Number of FC switches in the fabric: 62

Page 62: SAN Design Cisco

© 2011 Cisco and/or its affiliates. All rights reserved. Cisco Public BRKSAN-1121

Explosion of Fabric/Blade Switch in the Edge

 Scalability
 Each fabric/blade switch uses a single domain ID (75 tested per fabric)
 Theoretical maximum number of domain IDs is 239 per VSAN
 Supported number of domains is considerably smaller (depends on storage vendors)

 Manageability  More switches to manage

 Shared management of blade switches between storage and server administrators

Page 63: SAN Design Cisco

© 2011 Cisco and/or its affiliates. All rights reserved. Cisco Public BRKSAN-1121

 History of Storage Area Networks

 Fibre Channel Basics  SAN Design Requirements

 Introduction to Typical SAN Designs  Introduction to NPIV/NPV

 Core-Edge Design Review

 Recommended Core-Edge Designs for Scale and Availability  Next Generation Core-Edge Designs

Session Agenda

Page 64: SAN Design Cisco

© 2011 Cisco and/or its affiliates. All rights reserved. Cisco Public BRKSAN-1121 64

Recommended Core-Edge Designs for Scale and Availability

Page 65: SAN Design Cisco

© 2011 Cisco and/or its affiliates. All rights reserved. Cisco Public BRKSAN-1121

 High Availability - Providing a Dual Fabric (current best practice)

 Fabric scalability (FLOGI and domain-id scaling)

 Providing connectivity for diverse server placement and form factors

 Meeting oversubscription ratios established by disk vendors

 Effective zoning
 Providing business function/operating system fabric segmentation and security
 Providing connectivity for virtualized servers

SAN Design Key Requirements

Page 66: SAN Design Cisco

© 2011 Cisco and/or its affiliates. All rights reserved. Cisco Public BRKSAN-1121

A B

N-Port Virtualizer (NPV) Reduces Number of FC Domain IDs

2 ISL to Core at 10 G

32 Host Ports at 4 G

96 Storage Ports at 2 G

28 ISL to Edge at 10 G

14 Racks 32 Dual Attached Servers per Rack

AB

Top of Rack Top of Rack Design

Fabric Switches in NPV mode

NPV wizard for the deployment

MDS 91xx in NPV mode

Ports Deployed: 1200
Storage Ports (4 G Dedicated): 192
Host Ports (4 G Shared): 896
Disk Oversubscription (Ports): 9.3 : 1
Number of FC switches in the fabric: 2

NPIV Core

Page 67: SAN Design Cisco

© 2011 Cisco and/or its affiliates. All rights reserved. Cisco Public BRKSAN-1121

NPV Reduces Number of FC Blade Switches for Server Consolidation

2 ISL to Core at 4G

16 Host Ports at 4G

120 Storage Ports at 2 G

60 ISL to Edge at 4 G

Five Racks 96 Dual Attached Blade Servers per Rack

A B

A B

Blade Servers

Blade Server Design Using 2 x 4 G ISL per Blade Switch;

Less cables/power

Ports Deployed: 1608
Storage Ports (4 G Dedicated): 240
Host Ports (4 G Shared): 480
Disk Oversubscription (Ports): 8 : 1
Number of fabric switches to manage: 2

NPIV Core

MDS 91xx in NPV mode

Page 68: SAN Design Cisco

© 2011 Cisco and/or its affiliates. All rights reserved. Cisco Public BRKSAN-1121

 High Availability - Providing a Dual Fabric (current best practice)

 Fabric scalability (FLOGI and domain-id scaling)

 Providing connectivity for diverse server placement and form factors

 Meeting oversubscription ratios established by disk vendors

 Effective zoning
 Providing business function/operating system fabric segmentation and security
 Providing connectivity for virtualized servers

SAN Design Key Requirements

Page 69: SAN Design Cisco

© 2011 Cisco and/or its affiliates. All rights reserved. Cisco Public BRKSAN-1121

SAN Design FAN-OUT Ratio
 Use port-channeling/trunking to enhance the bandwidth available between devices
 Factors used:

 Speed of Host HBA interfaces

 Speed of Array interfaces

 Type of server and application

 Keep ISL Oversubscription ratio lower than Array oversubscription ratio

 Ratios range between 4:1 - 20:1

FC

6 x 4G Array ports 3 x 8G ISL ports

Example: 10:1 O/S ratio

60 Servers with 4 Gb HBAs

240 G 24 G 24 G

Page 70: SAN Design Cisco

© 2011 Cisco and/or its affiliates. All rights reserved. Cisco Public BRKSAN-1121

F-Port Port Channel Enhance NPV Uplink Resiliency

  F-Port PortChannels
 Bundle multiple ports into one logical link
 Similar to ISL PortChannels in FC and EtherChannels in Ethernet
  Benefits
 High availability – no disruption if a cable, port, or line card fails
 Optimal bandwidth utilization and higher aggregate bandwidth with load balancing

Storage

Bla

de S

yste

m

Blade 1

Blade 2

Blade N

F-Port Port Channel

F-Port N-Port

Core Director

SAN

interface port-channel 1
  no shutdown
interface fc1/1
  channel-group 1
interface fc1/2
  channel-group 1

Page 71: SAN Design Cisco

© 2011 Cisco and/or its affiliates. All rights reserved. Cisco Public BRKSAN-1121

 High Availability - Providing a Dual Fabric (current best practice)

 Fabric scalability (FLOGI and domain-id scaling)

 Providing connectivity for diverse server placement and form factors

 Meeting oversubscription ratios established by disk vendors

 Effective zoning
 Providing business function/operating system fabric segmentation and security
 Providing connectivity for virtualized servers

SAN Design Key Requirements

Page 72: SAN Design Cisco

© 2011 Cisco and/or its affiliates. All rights reserved. Cisco Public BRKSAN-1121

Zones/ZoneSet

 Create device-aliases for end devices
 Create a readable name for each end device, tied to its pWWN
 As a device moves between VSANs, its device alias stays the same
 Create 2-member zones
 Hardware zoning on MDS
 Recommended to have more zones with 2 members each in larger SANs
 Single management of zones/zonesets per fabric
 Use the “zoneset distribute full” command per VSAN to avoid isolation in basic zoning, or use enhanced zoning
 If switches in a fabric have different active zonesets, the ISL between them will become isolated
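A hedged sketch of the device-alias and zoneset-distribution pieces mentioned above (alias name, pWWN and VSAN number are assumptions):

device-alias database
  device-alias name ESX01_HBA1 pwwn 10:00:00:00:c9:76:fd:31
! commit applies when enhanced device-alias mode is in use
device-alias commit
! distribute the full zone database, not just the active zoneset
zoneset distribute full vsan 10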

Page 73: SAN Design Cisco

© 2011 Cisco and/or its affiliates. All rights reserved. Cisco Public BRKSAN-1121

 High Availability - Providing a Dual Fabric (current best practice)

 Fabric scalability (FLOGI and domain-id scaling)

 Providing connectivity for diverse server placement and form factors

 Meeting oversubscription ratios established by disk vendors

 Effective zoning
 Providing business function/operating system fabric segmentation and security
 Providing connectivity for virtualized servers

SAN Design Key Requirements

Page 74: SAN Design Cisco

© 2011 Cisco and/or its affiliates. All rights reserved. Cisco Public BRKSAN-1121

Virtual SANs

 Consolidate SAN Islands into OS or Department VSANs

 Reduction of SAN islands into a single Fabric while keeping Isolation

 Example is to have Test, Development and Production in their own VSANs

 Separate Tape or SAN extension VSANs

 Security

 Create Separate Administrative Roles per VSANs

 Use TACACS+ for authorization and auditing of Switches
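A minimal sketch of a per-VSAN administrative role on an MDS switch (role name and VSAN number are assumptions):

role name PROD-SAN-ADMIN
  vsan policy deny
    permit vsan 10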

Page 75: SAN Design Cisco

© 2011 Cisco and/or its affiliates. All rights reserved. Cisco Public BRKSAN-1121

VM-Aware SANs

Support for Port WWNs for Virtual Machine Hosted on Blade Servers

Page 76: SAN Design Cisco

© 2011 Cisco and/or its affiliates. All rights reserved. Cisco Public BRKSAN-1121

 It is important to follow the guidelines from the virtual machine vendors for assigning port WWNs to virtual machines. For example, VMware requires the use of Raw Device Mapping (RDM) instead of the Virtual Machine File System (VMFS) to get access to raw LUNs.

 Using NPIV/nested NPV with RDM we can provide QoS, incident isolation (VSANs) and visibility into a virtualized environment

 Individual LUNs vs. shared LUNs with the use of RDM with NPIV/nested NPV

VM-Aware SANS

Page 77: SAN Design Cisco

© 2011 Cisco and/or its affiliates. All rights reserved. Cisco Public BRKSAN-1121

Summary of Recommendations

 High availability – provide a dual fabric
 Use port-channels and F-port channels with NPIV to provide the bandwidth to meet oversubscription ratios
 Hardware zoning with single zoneset management per fabric is key (2-member zones are recommended – hard zoning)
 Use VSANs to provide OS or business function segmentation
 Use NPIV/NPV to provide domain-ID scaling and ease of management
 Use host-level NPIV and nested NPV to provide visibility into virtualized servers

Page 78: SAN Design Cisco

© 2011 Cisco and/or its affiliates. All rights reserved. Cisco Public BRKSAN-1121

A B

Recommended TOR Core-Edge Design

NPIV Core

14 Racks 32 Dual Attached Servers per Rack

AB

Top of Rack Top of Rack Design:

NPIV Core

Fabric Edge Switches in NPV mode

MDS Edge in NPV mode

Page 79: SAN Design Cisco

© 2011 Cisco and/or its affiliates. All rights reserved. Cisco Public BRKSAN-1121

Recommended Blade Switch Core-Edge Design

NPIV Core

Five Racks NPV Edge 96 Dual Attached Blade Servers per Rack

A B

A BBlade Server Design Using Core-NPIV

Edge-NPV

Top of Rack Blade Switch Design:

NPIV Core

Edge Blade Switches in NPV mode

Page 80: SAN Design Cisco

© 2011 Cisco and/or its affiliates. All rights reserved. Cisco Public BRKSAN-1121

 History of Storage Area Networks

 Fibre Channel Basics  SAN Design Requirements

 Introduction to Typical SAN Designs  Introduction to NPIV/NPV

 Core-Edge Design Review

 Recommended Core-Edge Designs for Scale and Availability  Next Generation Core-Edge Designs

Session Agenda

Page 81: SAN Design Cisco

© 2011 Cisco and/or its affiliates. All rights reserved. Cisco Public BRKSAN-1121 81

Next Generation SAN Design Best Practices

Page 82: SAN Design Cisco

© 2011 Cisco and/or its affiliates. All rights reserved. Cisco Public BRKSAN-1121

 10Gbps Ethernet

 Lossless Ethernet – matches the lossless behavior guaranteed in FC by B2B credits
 Ethernet jumbo frames – max FC frame payload = 2112 bytes

Fibre Channel over Ethernet – What Enables It?

[Frame diagram: Ethernet Header | FCoE Header | FC Header | FC Payload | CRC | EOF | FCS]
FC header and payload are the same as a physical FC frame
FCoE header carries control information: version, ordered sets (SOF, EOF)
Outer frame is a normal Ethernet frame, Ethertype = FCoE

Page 83: SAN Design Cisco

© 2011 Cisco and/or its affiliates. All rights reserved. Cisco Public BRKSAN-1121

FC Traffic FC HBA

Unified Fabric Why?

 Fewer CNAs (Converged Network adapters) instead of NICs, HBAs and HCAs

 Limited number of interfaces for Blade Servers

All traffic goes over 10GE

CNA

CNA

FC Traffic FC HBA

NIC LAN Traffic

NIC LAN Traffic

NIC Mgmt Traffic

NIC Backup Traffic

IPC Traffic HCA

Page 84: SAN Design Cisco

© 2011 Cisco and/or its affiliates. All rights reserved. Cisco Public BRKSAN-1121

Today’s Unified I/O Architecture

Ethernet FC

LAN SAN B SAN A

Today I/O Consolidation with FCoE

SAN B LAN SAN A

FCoE

Nexus 5000/UCS

Page 85: SAN Design Cisco

© 2011 Cisco and/or its affiliates. All rights reserved. Cisco Public BRKSAN-1121

Nexus 5K FCoE Switches Also Use NPV to Achieve Both Server and I/O Consolidation

A B

Attached Servers per Rack

AB

Nexus 5K/2k in NPV mode

LAN Core

32 Host CNA Ports at 10G

Core connectivity using FC modules on N5K

Page 86: SAN Design Cisco

© 2011 Cisco and/or its affiliates. All rights reserved. Cisco Public BRKSAN-1121

Cisco UCS Fabric Interconnects Also Use NPV

A B

UCS 6100 fabric Interconnect in NPV mode inside UCS 5100 blade chassis

LAN Core

Page 87: SAN Design Cisco

© 2011 Cisco and/or its affiliates. All rights reserved. Cisco Public BRKSAN-1121

UCS Core-Edge Design N-Port Virtualization Forwarding with MDS

  F_Port Channeling and Trunking from MDS to UCS

  FC Port Channel behaves as one logical uplink

  FC Port Channel can carry all VSANs (Trunk)

  UCS Fabric Interconnects remains in NPV end host mode

  Server vHBA pinned to an FC Port Channel

  Server vHBA has access to bandwidth on any link member of the FC Port Channel

  Load balancing based on FC Exchange_ID

 Per Flow

  Loss of Port Channel member link has no effect on Server vHBA (hides the failure)

 Affected flows move to remaining member links

 No FLOGI required

SAN B SAN A

Server 1 VSAN 1

vFC 1 vFC 1

N_Proxy

F_Proxy

N_Port

6100-A 6100-B

F_Port

vFC 2 vFC 2

Server 2 VSAN 2

vHBA 1

vHBA 0

vHBA 1

vHBA 0

VSAN 1,2

VSAN 1,2

F_ Port Channel & Trunk

NPIV NPIV

Page 88: SAN Design Cisco

© 2011 Cisco and/or its affiliates. All rights reserved. Cisco Public BRKSAN-1121

Nexus 5x00 Core-Edge F_Port Trunking and Channeling
 Nexus 5000 access switches operating in NPV mode

 With NX-OS release 4.2(1) Nexus 5000 supports F-Port Trunking and Channeling on the links between an NPV device and upstream FC switch (NP port -> F port)

 F_Port Trunking: Better multiplexing of traffic using shared links (multiple VSANs on a common link)

 F_Port Channeling: Better resiliency between NPV edge and Director Core

 No host re-login needed per link failure

 No FSPF recalculation due to link failure

 Simplifies FC topology (single uplink from NPV device to FC director)

Fabric ‘A’ Supporting

VSAN 20 & 40

F Port Trunking & Channeling

VLAN 10,50

VLAN 10,30

VSAN 30,50

Fabric ‘B’ Supporting

VSAN 30 & 50

VF

VN

TF

TNP

Server ‘1’ VSAN 20 & 30

Server ‘2’ VSAN 40 & 50

Nexus 5000 NPV

Page 89: SAN Design Cisco

© 2011 Cisco and/or its affiliates. All rights reserved. Cisco Public BRKSAN-1121

Fibre Channel Aware Device FCoE NPV

FCF

  What does an FCoE-NPV device do?
  An “FCoE NPV bridge” improves over a “FIP snooping bridge” by intelligently proxying FIP functions between a CNA and an FCF
  Active Fibre Channel forwarding and security element
  FCoE-NPV load balances logins from the CNAs evenly across the available FCF uplink ports
  FCoE NPV will take the VSAN into account when mapping or ‘pinning’ logins from a CNA to an FCF uplink
  Emulates existing Fibre Channel topology (same management, security, HA)
  Avoids flooded discovery and configuration (FIP & RIP)

Fibre Channel Configuration and Control Applied at the

Edge Port

Proxy FCoE VLAN Discovery

Proxy FCoE FCF Discovery

FCoE NPV

VF

VNP
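As a hedged sketch, on Nexus platforms that support it this mode is enabled with a single feature command (it replaces full FCF switching on that device):

feature fcoe-npv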

Page 90: SAN Design Cisco

© 2011 Cisco and/or its affiliates. All rights reserved. Cisco Public BRKSAN-1121

Servers, FCoE attached Storage

FCoE Multi-Tier Larger Fabric Multi-Hop Topologies

  Multi-hop edge/core/edge topology
  Core SAN switches supporting FCoE
  N7K with DCB/FCoE line cards
  MDS with FCoE line cards (Sup2A)
  Edge FC switches supporting either:
  N5K – E-NPV with FCoE uplinks to the FCoE-enabled core (VNP to VF)
  N5K or N7K – FC switch with FCoE ISL uplinks (VE to VE)

  Scaling of the fabric (FLOGI, …) will most likely drive the selection of which mode to deploy

N7K or MDS FCoE enabled Fabric

Switches

FC Attached Storage

Servers

VE

Edge FCF Switch Mode

VE

Edge Switch in E-NPV

Mode

VF

VNP VE

VE

Page 91: SAN Design Cisco

© 2011 Cisco and/or its affiliates. All rights reserved. Cisco Public BRKSAN-1121

• BRKCOM-2002 – UCS Supported Storage Architectures and Best Practices with Storage
• BRKDCT-1044 – FCoE for the IP Network Engineer
• BRKSAN-2704 – Storage Area Network Extension Design and Operation

Reference Sessions

Page 92: SAN Design Cisco

© 2011 Cisco and/or its affiliates. All rights reserved. Cisco Public BRKSAN-1121

Recommendations

•  NX-OS and Cisco Nexus Switching (ISBN: 1587058928), by David Jansen, Ron Fuller, Kevin Corbin. Cisco Press 2010.

•  Storage Networking Fundamentals(ISBN-10:1-58705-162-1; ISBN-13: 978-11-58705-162-3), by Marc Farley. Cisco Press. 2007.

•  Storage Networking Protocol Fundamentals (ISBN: 1-58705-160-5), by James Long, Cisco Press. 2006.

Recommended Reading

Available Onsite at the Cisco Company Store

Page 93: SAN Design Cisco

© 2011 Cisco and/or its affiliates. All rights reserved. Cisco Public BRKSAN-1121

• Receive 25 Cisco Preferred Access points for each session evaluation you complete.

• Give us your feedback and you could win fabulous prizes. Points are calculated on a daily basis. Winners will be notified by email after July 22nd.

• Complete your session evaluation online now (open a browser through our wireless network to access our portal) or visit one of the Internet stations throughout the Convention Center.

• Don’t forget to activate your Cisco Live and Networkers Virtual account for access to all session materials, communities, and on-demand and live activities throughout the year. Activate your account at any internet station or visit www.ciscolivevirtual.com.

Complete Your Online Session Evaluation

Page 94: SAN Design Cisco

© 2011 Cisco and/or its affiliates. All rights reserved. Cisco Public BRKSAN-1121 94

Visit the Cisco Store for Related Titles

http://theciscostores.com

Page 95: SAN Design Cisco

© 2011 Cisco and/or its affiliates. All rights reserved. Cisco Public BRKSAN-1121

Page 96: SAN Design Cisco

© 2011 Cisco and/or its affiliates. All rights reserved. Cisco Public BRKSAN-1121

Thank you.

Page 97: SAN Design Cisco

© 2011 Cisco and/or its affiliates. All rights reserved. Cisco Public BRKSAN-1121 97

Reference Slides

Page 98: SAN Design Cisco

© 2011 Cisco and/or its affiliates. All rights reserved. Cisco Public BRKSAN-1121

NPV Traffic Engineering

• Allows user to select external interface per server interface

• Benefits Allows customized Bandwidth Management

Allows use of shortest path

Enables use of Persistent FCIDs

Blade 1

Blade N

Blade Server Chassis

Storage

Blade 2

….

Traffic-map

SAN

Customized Traffic Pattern

1 2

N

npv traffic-map server-interface fc1/2 external-interface fc1/1
npv traffic-map server-interface fc1/3 external-interface fc1/1
…
npv traffic-map server-interface fc1/N external-interface fc1/5

Fc1/3 Fc1/4 Fc1/24

Fc1/1 Fc1/5

Page 99: SAN Design Cisco

© 2011 Cisco and/or its affiliates. All rights reserved. Cisco Public BRKSAN-1121

Number of NPIV Logins: MDS 9200/9500

Type of logins / verified logins:
Logins per port: 126 (1) / 256 (2)
Logins per line card: 400
Logins per switch: 2,000
Logins per physical fabric: 10,000

These are the numbers of logins allowed on all Gen1, Gen2 and Gen3 line cards. The per-switch limits also apply to all MDS 9200 and MDS 9500 switches. MDS 9124/9134 and blade switches have different limits, shown on the next slide.

(1) SAN-OS 3.x, NX-OS 4.1(1) (2) NX-OS 4.1(2)

Page 100: SAN Design Cisco

© 2011 Cisco and/or its affiliates. All rights reserved. Cisco Public BRKSAN-1121

Number of NPIV Logins: MDS 9124/9134 and Blade Switches

Logins per port: 42 (1) / 89 (2) in switching mode; 114 in NPV mode
Logins per port-group: 168 in switching mode; 114 in NPV mode
Logins per MDS 9124: 1008 in switching mode; 684 in NPV mode
Logins per MDS 9134: 1680 in switching mode; 1140 in NPV mode
Logins per MDS 9124e: 1008 in switching mode; 684 in NPV mode
Logins per IBM Blade Switch: 840 in switching mode; 570 in NPV mode

The stated numbers are verified / supported number of logins.

(1) Using 2 member zoning (2) Using default zone-permit instead of zoning

Page 101: SAN Design Cisco

© 2011 Cisco and/or its affiliates. All rights reserved. Cisco Public BRKSAN-1121

MDS 9000 Family Buffer to Buffer Credits Shared Dedicated

FX-Port (Fixed) FX-Port

(Default)

E-Port (Default)

FX-Port / E-Port

(Min-Max)

Extended Credits

(Min-Max)

Speed (Gbps)

Max Distance (*)

(km) Disable Ports for Max Credits?

18/4 Port 1/2/4-Gbps 16 16 250 2-250 256-4,095

1 8,190

No 2 4,095

9222i Fabric Switch 4 2,047

4 Port 10-Gbps N/A 16 750 2-750 751-4,095 10 683 No

4/44 Port 1/2/4/8-Gbps

32 32

125 2-250 251-4095 1 8190

No 2 4095

48 Port 1/2/4/8-Gbps 250 2-500 501-4095

4 2047

24 Port 1/2/4/8-Gbps 500 8 1023

9148 8G Fabric Switch N/A 32 32 1-125 N/A

1 250

No 2 125

4 62

8 31

9124/9134 Fabric Switch

N/A 16 16 1-61 N/A

1 122

No 2 61

4 30 (*) Assuming max frame size

Page 102: SAN Design Cisco

© 2011 Cisco and/or its affiliates. All rights reserved. Cisco Public BRKSAN-1121 102

Introduction to FCOE

Page 103: SAN Design Cisco

© 2011 Cisco and/or its affiliates. All rights reserved. Cisco Public BRKSAN-1121

•  10Gbps Ethernet

•  Lossless Ethernet – matches the lossless behavior guaranteed in FC by B2B credits
•  Ethernet jumbo frames – max FC frame payload = 2112 bytes

Fibre Channel over Ethernet – What Enables It?

[Frame diagram: Ethernet Header | FCoE Header | FC Header | FC Payload | CRC | EOF | FCS]
• FC header and payload are the same as a physical FC frame
• FCoE header carries control information: version, ordered sets (SOF, EOF)
• Outer frame is a normal Ethernet frame, Ethertype = FCoE

Page 104: SAN Design Cisco

© 2011 Cisco and/or its affiliates. All rights reserved. Cisco Public BRKSAN-1121

Unified Fabric IEEE DCB

Standard / Feature Status of the Standard IEEE 802.1Qbb Priority-based Flow Control (PFC)

In Sponsor Ballot

IEEE 802.3bd Frame Format for PFC

In Sponsor Ballot

IEEE 802.1Qaz Enhanced Transmission Selection (ETS) and Data Center Bridging eXchange (DCBX)

Just completed WG recirculation ballot. New recirculation expected next week in order to go to Sponsor Ballot after the May interim

IEEE 802.1Qau Congestion Notification Done! IEEE 802.1Qbh Port Extender In its first task group ballot

  Developed by IEEE 802.1 Data Center Bridging Task Group (DCB)

  All technically stable

  Final standards expected by mid 2010

CEE (Converged Enhanced Ethernet) is an informal group of companies that submitted initial inputs to the DCB WGs.

Page 105: SAN Design Cisco

© 2011 Cisco and/or its affiliates. All rights reserved. Cisco Public BRKSAN-1121

FC Traffic FC HBA

Unified Fabric Why?

•  Fewer CNAs (Converged Network adapters) instead of NICs, HBAs and HCAs

•  Limited number of interfaces for Blade Servers

All traffic goes over 10GE

CNA

CNA

FC Traffic FC HBA

NIC LAN Traffic

NIC LAN Traffic

NIC Mgmt Traffic

NIC Backup Traffic

IPC Traffic HCA

Page 106: SAN Design Cisco

© 2011 Cisco and/or its affiliates. All rights reserved. Cisco Public BRKSAN-1121

What’s the difference between DCE, CEE and DCB ?

  All three acronyms describe the same thing, meaning the architectural collection of Ethernet extensions (based on open standards)

  Cisco has co-authored many of the standards associated and is focused on providing a standards-based solution for a Unified Fabric in the data center

  The IEEE has decided to use the term “DCB” (Data Center Bridging) to describe these extensions to the industry. http://www.ieee802.org/1/pages/dcbridges.html

Page 107: SAN Design Cisco

© 2011 Cisco and/or its affiliates. All rights reserved. Cisco Public BRKSAN-1121

Priority Flow Control Fibre Channel over Ethernet Flow Control

Packet

R_R

DY

Fibre Channel Transmit Queues Ethernet Link Receive Buffers

Eight Virtual Lanes

One One

Two Two

Three Three

Four Four

Five Five

Seven Seven

Eight Eight

Six Six

STOP PAUSE

B2B Credits

  Enables lossless Ethernet using PAUSE based on a COS as defined in 802.1p

  When link is congested, CoS assigned to FCoE will be PAUSEd so traffic will not be dropped

  Other traffic assigned to other CoS will continue to transmit and rely on upper layer protocols for retransmission

Page 108: SAN Design Cisco

© 2011 Cisco and/or its affiliates. All rights reserved. Cisco Public BRKSAN-1121

DCB “Virtual Links” An Example

VL1 VL2 VL3

LAN/IP Gateway

VL1 – LAN Service – LAN/IP VL2 - No Drop Service - Storage

Ability to support QoS queues within the “lanes”

Campus Core/ Internet

Storage Area Network

Page 109: SAN Design Cisco

© 2011 Cisco and/or its affiliates. All rights reserved. Cisco Public BRKSAN-1121

Fiber Channel over Ethernet Protocol Host Side – FIP and DCBX Configuration

1st portion of the MAC is the FC-MAP of the Nexus 5000

FC-MAP (0E-FC-xx)

FC-ID 7.8.9

FC-MAC Address

FC-MAP (0E-FC-xx)

FC-ID 10.00.01

2nd portion of the MAC is the FC-ID

Page 110: SAN Design Cisco

© 2011 Cisco and/or its affiliates. All rights reserved. Cisco Public BRKSAN-1121

  FCF (FCoE Forwarder): A Fibre Channel switching element that is able to forward FCoE frames (Nexus 5000, Nexus 7000, MDS 9000)

  FPMA : ”Fabric Provided MAC Address” -- A unique MAC address that is assigned by an FCF to a single Enode

  Enode : “End Node” – a Fibre Channel end node that is able to transmit FCoE frames using one or more ENode MACs.

  FCoE Pass-Through : any DCB device capable of passing FCoE frames to an FCF   FIP Snooping Bridge   FCoE N-Port Virtualizer

  Single hop FCoE : running FCoE between the host and the first hop access level switch

  Multi-hop FCoE : the extension of FCoE beyond a single hop into the Aggregation and Core layers of the Data Centre Network

FCoE Building Blocks The Acronyms Defined

Page 111: SAN Design Cisco

© 2011 Cisco and/or its affiliates. All rights reserved. Cisco Public BRKSAN-1121

• FCF (Fibre Channel Forwarder) is the Fibre Channel switching element inside an FCoE switch

Fibre Channel logins (FLOGIs) happens at the FCF

Consumes a Domain ID

• FCoE encap/decap happens within the FCF Forwarding based on FC information

Eth port

Eth port

Eth port

Eth port

Eth port

Eth port

Eth port

Eth port

Ethernet Bridge

FC port

FC port

FC port

FC port

FCF

FCoE Switch FC Domain ID : 15

FCoE Building Blocks Fibre Channel Forwarder

Page 112: SAN Design Cisco

© 2011 Cisco and/or its affiliates. All rights reserved. Cisco Public BRKSAN-1121

Fibre Channel Drivers

Ethernet Drivers

Operating System

PCIe

Ethernet

Fibre Channel

10GbE

10GbE

Link

Ethernet Driver bound to Ethernet NIC PCI address

FC Driver bound to FC

HBA PCI address

  Replaces multiple adapters per server, consolidating both Ethernet and FC on a single interface

  Appears to the operating system as individual interfaces (NICs and HBAs)

  First-generation CNAs support PFC and CIN-DCBX

  Second-generation CNAs support PFC, CEE-DCBX as well as FIP
  Single-chip implementation
  Half height/length
  Half power consumption

FCoE Building Blocks Converged Network Adapter

Page 113: SAN Design Cisco

© 2011 Cisco and/or its affiliates. All rights reserved. Cisco Public BRKSAN-1121

Translate to FCoE…
  Same host-to-target communication
  Host has 2 CNAs (one per fabric)
  Target has multiple ports to connect to the fabric
  Connect to a capable switch
  Port type negotiation (FC port type will be handled by FIP)
  Speed negotiation
  DCBX negotiation

  Access switch is a Fibre Channel Forwarder (FCF)

  Dual fabrics are still deployed for redundancy

FC

CNA

FC Fabric

ENode

Target

Ethernet Fabric

DCB capable Switch acting as an FCF

Unified Wire

Page 114: SAN Design Cisco

© 2011 Cisco and/or its affiliates. All rights reserved. Cisco Public BRKSAN-1121

VE_Port

VF_Port

VF_Port

VE_Port

VN_Port

VN_Port

Fibre Channel over Ethernet Port Types

Fibre Channel over Ethernet Switch

FCoE_NPV

Switch VF_Port VNP_Port FCF

Switch

End Node

End Node

FCoE Switch : FCF

**Available NOW

**E Rocks Timeframe **EagleHawk + Timeframe

Page 115: SAN Design Cisco

© 2011 Cisco and/or its affiliates. All rights reserved. Cisco Public BRKSAN-1121

My Port Is Up…Can I Talk Now? FIP and FCoE Login Process

VN_Port

VF_Port

FIP Discovery

E_ports or VE_Port

  Step 1: FIP Discovery Process
  enables FCoE adapters to discover which VLAN to transmit and receive FCoE frames on
  enables FCoE adapters and FCoE switches to discover other FCoE-capable devices
  verifies the lossless Ethernet segment is capable of carrying FCoE
  Step 2: FIP Login Process
  similar to the existing Fibre Channel login process – sends FLOGI to the upstream FCF
  adds the negotiation of the MAC address to use: the Fabric Provided MAC Address (FPMA)
  FCF assigns the host an Enode MAC address to be used for FCoE forwarding

FC

CNA

FC or FCoE Fabric

Target

ENode **Multi-hop FCoE with VE_Ports not supported until Eaglehawk Release

Page 116: SAN Design Cisco

© 2011 Cisco and/or its affiliates. All rights reserved. Cisco Public BRKSAN-1121 116

Enode MAC Address Fibre Channel over Ethernet Addressing Scheme

  Enode MAC assigned for each FCID   Enode MAC composed of a FC-MAP and FCID

  FC-MAP is the upper 24 bits of the Enode’s MAC

  FCID is the lower 24 bits of the Enode’s MAC

  FCoE forwarding decisions still made based on FSPF and the FCID within the Enode MAC

FC Fabric

Domain ID

FC-MAP (0E-FC-xx)

FC-ID 7.8.9

FC-MAC Address

FC-MAP (0E-FC-xx)

FC-ID 10.00.01

Fibre Channel FCID Addressing

Page 117: SAN Design Cisco

© 2011 Cisco and/or its affiliates. All rights reserved. Cisco Public BRKSAN-1121

Login Complete…Almost There Fabric Zoning

[Figure: initiator logged in to an FCF (Domain ID 10), reaching a target across the FC/FCoE fabric]

  FCoE fabric zoning is done the same way as FC fabric zoning

  Zoning is enforced at the FCF

  Zoning can be configured on the Nexus 5000 using the CLI or Fabric Manager

  If the Nexus 5000 is an "FCoE pass-through" device, zoning is configured on the upstream core switch and pushed to the Nexus 5000

zone name EXAMPLE_Zone
* fcid 0x10.00.01 [pwwn 10:00:00:00:c9:76:fd:31] [initiator]
* fcid 0x11.00.01 [pwwn 50:06:01:61:3c:e0:1a:f6] [target]

Initiator pwwn 10:00:00:00:c9:76:fd:31, target pwwn 50:06:01:61:3c:e0:1a:f6

**Multi-hop FCoE with VE_Ports not supported until the EagleHawk release


Page 118: SAN Design Cisco

© 2011 Cisco and/or its affiliates. All rights reserved. Cisco Public BRKSAN-1121

  Host connected over a unified wire to the first-hop access switch

  Access switch (Nexus 5000) is the FCF

  Fibre Channel ports on the access switch can be in NPV or switch mode for native FC traffic

  DCBX is used to negotiate the enhanced Ethernet capabilities

  FIP is used to negotiate the FCoE capabilities as well as the host login process

  FCoE runs from host to access switch FCF – native Ethernet and native FC break off at the access layer

[Figure: ENode host with CNA connected over a unified wire to a DCB-capable switch acting as the FCF, which splits traffic toward the Ethernet fabric and the FC fabric/target]

Single Hop Design Today’s Solution

Page 119: SAN Design Cisco

© 2011 Cisco and/or its affiliates. All rights reserved. Cisco Public BRKSAN-1121

[Figure: Generation 1 and Generation 2 CNAs direct-attached (VN_Port to VF_Port) to Fabric A, Fabric B and the LAN fabric; Generation 1 uses CIN-DCBX, Generation 2 uses CEE-DCBX]

  Generation 1 CNA

  limited to direct attached CNAs at the access

  Utilized Cisco, Intel, Nuova Data Center Bridging Exchange protocol (CIN-DCBX)

  Generation 2 CNA

  Utilizes Converged Enhanced Ethernet Data Center Bridging Exchange protocol (CEE-DCBX)

  Utilizes the FCoE Initialization Protocol (FIP) as defined by the T11 FC-BB-5 specification

  Supports both direct and multi-hop attachment (through a Nexus 4000 FIP Snooping Bridge)

Single Hop Design The CNA Point of View

Nexus 5000 FCF-A

Nexus 5000 FCF-B

Page 120: SAN Design Cisco

© 2011 Cisco and/or its affiliates. All rights reserved. Cisco Public BRKSAN-1121

Unified Fabric with FCoE CNA: Converged Network Adapter

• Standard drivers

• Same management

• Operating system sees:
  Dual-port 10 Gigabit Ethernet adapter
  Dual-port 4 Gbps Fibre Channel HBAs

Page 121: SAN Design Cisco

© 2011 Cisco and/or its affiliates. All rights reserved. Cisco Public BRKSAN-1121

[Figure: vfc interfaces (vfc1, vfc2, vfc3) bound respectively to a physical interface (Eth1/2), a port-channel (PC1), and a MAC address]

The Virtual Fibre Channel Interface Binding the vfc

  Virtual Fibre Channel interface (vfc): where the logical Fibre Channel wire is terminated (at the FCF)

  Today this corresponds to an "F_Port"

  Three options for binding a vfc interface (see the sketch below):
  Physical interface: direct-attach CNAs and FCoE_NPV devices (future)
  Single-link port-channel: direct-attach CNAs connected via a two-port vPC
  MAC address over an Ethernet cloud: through a FIP snooping device

Nexus 5000 FCF

Nexus 4000 FIP-Snooping

Nexus 5000 FCF
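A minimal sketch of the three binding options on a Nexus 5000 FCF (the interface names, port-channel number and MAC address are illustrative, not taken from the slides):

Physical interface (direct-attach CNA):
interface vfc1
  bind interface ethernet 1/2

Single-link port-channel (direct-attach CNA connected via a vPC):
interface vfc2
  bind interface port-channel 1

MAC address across an Ethernet cloud (CNA behind a FIP snooping bridge such as a Nexus 4000):
interface vfc3
  bind mac-address 00:25:b5:00:00:0a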

Page 122: SAN Design Cisco

© 2011 Cisco and/or its affiliates. All rights reserved. Cisco Public BRKSAN-1121

VLAN 10,30

VLAN 10,20

  FCoE VLANs are treated differently than native Ethernet VLANs

  No flooding, MAC learning, broadcasts, etc.

  The FCoE VLAN must not be configured as a native VLAN

  FIP VLAN discovery uses the native VLAN

  FCoE VLANs should not be configured on Ethernet links that are not carrying FCoE traffic

  Unified Wires must be configured as trunk ports and STP edge ports

! VLAN 20 is dedicated for VSAN 2 FCoE traffic
(config)# vlan 20
(config-vlan)# fcoe vsan 2

VSAN 2

STP Edge Trunk

Fabric A Fabric B LAN Fabric

Nexus 5000 FCF

Nexus 5000 FCF

VSAN 3

Single Hop Design The FCoE VLAN
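Putting the rules above together, a minimal sketch of a unified-wire edge port (the VLAN numbers and interface name are illustrative; VLAN 1 stands in for the native VLAN, which must not be the FCoE VLAN):

(config)# vlan 20
(config-vlan)# fcoe vsan 2
(config)# interface ethernet 1/3
(config-if)# switchport mode trunk
(config-if)# switchport trunk allowed vlan 1,20
(config-if)# spanning-tree port type edge trunk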

Page 123: SAN Design Cisco

© 2011 Cisco and/or its affiliates. All rights reserved. Cisco Public BRKSAN-1121

VLAN 10,30

VLAN 10,20

  FCoE Fabric 'A' will have a different VLAN topology than FCoE Fabric 'B', and both differ from the LAN fabric topology

  PVST+ allows unique topology per VLAN

  MST requires that all switches in the same Region have the same mapping of VLANs to instances

  MST does not require that all VLANs be defined in all switches

  A separate instance must be used for FCoE VLANs

  Recommended: three separate instances – native Ethernet VLANs, SAN ‘A’ VLANs and SAN ‘B’ VLANs

spanning-tree mst configuration
  name FCoE-Fabric
  revision 5
  instance 5 vlan 1-19,40-3967,4048-4093
  instance 10 vlan 20-29
  instance 15 vlan 30-39

Fabric A Fabric B LAN Fabric

VSAN 3 VSAN 2

VLAN 10

Nexus 5000 FCF-A

Nexus 5000 FCF-B

Single Hop Design The FCoE VLAN and STP

Page 124: SAN Design Cisco

© 2011 Cisco and/or its affiliates. All rights reserved. Cisco Public BRKSAN-1121

• VSANs use VLAN hardware table resources

• FCoE requires a VLAN and a VSAN that the VLAN is bound to; hence for each FCoE VSAN, count on using two VLAN entries

• Enabling FCoE itself burns two internal VSAN/VLAN resources

•  A vfc can bind to a port-channel as long as there is only a single port in the port-channel attached to the switch

VLANs and VSANs FCoE Considerations

Page 125: SAN Design Cisco

© 2011 Cisco and/or its affiliates. All rights reserved. Cisco Public BRKSAN-1121

•  Optimal layer 2 LAN design often leverages Multi-Chassis Etherchannel (MCEC)

•  Nexus utilizes Virtual Port-Channel (vPC) to enable MCEC either between switches or to direct attached servers (using LACP or static port-channels)

•  MCEC provides network based load sharing and redundancy without introducing layer 2 loops in the topology

•  MCEC maintains the separation of LAN and SAN high availability topologies

FC maintains separate SAN ‘A’ and SAN ‘B’ topologies

LAN utilizes a single logical topology

Direct Attach vPC Topology

MCEC

vPC Peers

vPC Peer Link

Fabric A Fabric B LAN Fabric

Nexus 5000 FCF-A

Nexus 5000 FCF-B

Single Hop Design What is MCEC??
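For reference, a minimal vPC sketch on one of the two Nexus 5000 peers (the domain number, keepalive address and port-channel numbers are illustrative; the mirror configuration is applied on the other peer):

feature vpc
vpc domain 10
  peer-keepalive destination 10.1.1.2
interface port-channel 10
  switchport mode trunk
  vpc peer-link
interface port-channel 20
  switchport mode trunk
  vpc 20

Port-channel 20 is the MCEC toward the downstream device; the same vpc number must be configured on both peers.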

Page 126: SAN Design Cisco

© 2011 Cisco and/or its affiliates. All rights reserved. Cisco Public BRKSAN-1121

•  vPC enabled topologies with FCoE must follow specific design and forwarding rules…

•  A ‘vfc’ interface can only be associated with a single-port port-channel

•  While the port-channel configurations are the same on N5K-1 and N5K-2, the FCoE VLANs are different

•  vPC configuration works with Gen-2 FIP enabled CNAs ONLY

•  FCoE VLANs are ‘not’ carried on the vPC peer-link

•  FCoE and FIP ethertypes are ‘not’ forwarded over the vPC peer link

Direct Attach vPC Topology

VLAN 10,30

VLAN 10,20 STP Edge Trunk

VLAN 10 ONLY HERE!

Fabric A Fabric B LAN Fabric

Nexus 5000 FCF-A

Nexus 5000 FCF-B

Single Hop Design Unified Wires and MCEC

vPC contains only 2 X 10GE links – one to each Nexus 5000
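A minimal sketch of the host-facing configuration on one of the two peers (the interface, port-channel and vPC numbers are illustrative; the mirror configuration on the other peer uses the same vpc number but its own fabric's FCoE VLAN, e.g. 20 on one FCF and 30 on the other):

interface ethernet 1/5
  switchport mode trunk
  switchport trunk allowed vlan 10,20
  channel-group 5 mode active
interface port-channel 5
  vpc 5
interface vfc5
  bind interface port-channel 5

The vfc follows the single local member of the port-channel, so FCoE keeps its SAN 'A'/'B' separation even though the LAN sees one logical MCEC.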

Page 127: SAN Design Cisco

© 2011 Cisco and/or its affiliates. All rights reserved. Cisco Public BRKSAN-1121

• Multi-hop FCoE networks allow for FCoE traffic to extend past the access layer (first hop)

•  In Multi-hop FCoE the role of a transit Ethernet bridge needs to be evaluated

Avoid Domain ID exhaustion

Ease management

• FIP Snooping is a minimum requirement suggested in FC-BB-5

• Fibre Channel over Ethernet NPV (FCoE-NPV) is a new capability intended to solve a number of design and management challenges

[Figure: multi-hop design – hosts (VN_Ports) attach to DCB-capable Ethernet switches acting as pass-through devices, which connect up to FCFs (VF_Ports) in SAN A and SAN B]

Multi-Hop Design FCoE Pass-Through Options
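On Nexus 5000 platforms and NX-OS releases that support it, the FCoE-NPV pass-through role is enabled with a feature toggle (a minimal sketch; availability depends on the software release, so treat this as illustrative):

feature fcoe-npv

The edge switch then forwards FCoE traffic toward the upstream FCF through a VNP_Port without consuming a Domain ID, which addresses the Domain ID exhaustion and management concerns listed above.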

Page 128: SAN Design Cisco

© 2011 Cisco and/or its affiliates. All rights reserved. Cisco Public BRKSAN-1121 128

CLI Configuration Sample

Page 129: SAN Design Cisco

© 2011 Cisco and/or its affiliates. All rights reserved. Cisco Public BRKSAN-1121

Sample Working Topology

NPV Core

NPV Edge

Page 130: SAN Design Cisco

© 2011 Cisco and/or its affiliates. All rights reserved. Cisco Public BRKSAN-1121

NPV Edge Switch:

pod7-5020-51(config)# feature fcoe

FC license checked out successfully

fc_plugin extracted successfully

FC plugin loaded successfully

FCoE manager enabled successfully

FC enabled on all modules successfully

pod7-5020-51(config)# feature npv

Verify that boot variables are set and the changes are saved. Changing to npv mode erases the current configuration and reboots the switch in npv mode.

Do you want to continue? (y/n): y

NPV Core Switch:

pod3-9216i-70(config)# feature npiv

pod3-9216i-70(config)# feature fport-channel-trunk

Admin trunk mode has been set to off for

1- Interfaces with admin switchport mode F,FL,FX,SD,ST in admin down state

2- Interfaces with operational switchport mode F,FL,SD,ST.

Enabling the Required Features

Page 131: SAN Design Cisco

© 2011 Cisco and/or its affiliates. All rights reserved. Cisco Public BRKSAN-1121

• NPV Edge Switch:

pod7-5020-51(config)# vsan database

pod7-5020-51(config-vsan-db)# vsan 10

• NPV Core Switch:

pod3-9216i-70(config)# vsan database

pod3-9216i-70(config-vsan-db)# vsan 10

pod3-9216i-70(config-vsan-db)# vsan 10 interface fc1/12

Configure the VSANs

Page 132: SAN Design Cisco

© 2011 Cisco and/or its affiliates. All rights reserved. Cisco Public BRKSAN-1121

• NPV Core Switch:

pod3-9216i-70(config)# interface port-channel 1

pod3-9216i-70(config-if)# switchport mode f

pod3-9216i-70(config-if)# switchport trunk mode on

pod3-9216i-70(config-if)# channel mode active

pod3-9216i-70(config-if)# interface fc2/13, fc2/19

pod3-9216i-70(config-if)# switchport mode f

pod3-9216i-70(config-if)# switchport rate-mode dedicated

pod3-9216i-70(config-if)# switchport trunk mode on

pod3-9216i-70(config-if)# channel-group 100 force

fc2/13 fc2/19 added to port-channel 100 and disabled

please do the same operation on the switch at the other end of the port-channel, then do "no shutdown" at both end to bring them up

pod3-9216i-70(config-if)# no shut

Configure Trunking F_Port Port Channel

Page 133: SAN Design Cisco

© 2011 Cisco and/or its affiliates. All rights reserved. Cisco Public BRKSAN-1121

• NPV Edge Switch:

pod7-5020-51(config)# interface san-port-channel 1

pod7-5020-51(config-if)# switchport mode np

pod7-5020-51(config-if)# switchport trunk mode on

pod7-5020-51(config-if)# interface fc2/1-2

pod7-5020-51(config-if)# switchport mode np

pod7-5020-51(config-if)# switchport trunk mode on

pod7-5020-51(config-if)# channel-group 1

fc2/1 fc2/2 added to port-channel 1 and disabled

please do the same operation on the switch at the other end of the port-channel, then do "no shutdown" at

both ends to bring it up

pod7-5020-51(config-if)# no shut

Configure Trunking F_Port Port Channel

Page 134: SAN Design Cisco

© 2011 Cisco and/or its affiliates. All rights reserved. Cisco Public BRKSAN-1121

pod7-5020-51(config)# vlan 10
pod7-5020-51(config-vlan)# fcoe vsan 10

pod7-5020-51(config-vlan)# interface ethernet 1/3
pod7-5020-51(config-if)# switchport mode trunk
pod7-5020-51(config-if)# switchport trunk allowed vlan 1,10
pod7-5020-51(config-if)# spanning-tree port type edge trunk

Warning: Edge port type (portfast) should only be enabled on ports connected to a single host. Connecting hubs, concentrators, switches, bridges, etc... to this interface when edge port type (portfast) is enabled, can cause temporary bridging loops. Use with CAUTION

pod7-5020-51(config-if)# interface vfc3
pod7-5020-51(config-if)# bind interface ethernet 1/3
pod7-5020-51(config-if)# vsan database
pod7-5020-51(config-vsan-db)# vsan 10 interface vfc3
pod7-5020-51(config-vsan-db)# interface vfc3
pod7-5020-51(config-if)# no shut

Configure FCoE on NPV Edge Switch

Page 135: SAN Design Cisco

© 2011 Cisco and/or its affiliates. All rights reserved. Cisco Public BRKSAN-1121

pod7-5020-51# sh npv flogi-table
--------------------------------------------------------------------------------
SERVER                                                                  EXTERNAL
INTERFACE VSAN FCID     PORT NAME               NODE NAME               INTERFACE
--------------------------------------------------------------------------------
vfc3      10   0x0f0100 21:00:00:c0:dd:12:04:f3 20:00:00:c0:dd:12:04:f3 Spo1

Total number of flogi = 1.

Verify that the VFC interface is pinned to the SAN Port Channel

pod7-5020-51# sh npv traffic-usage

NPV Traffic Usage Information:
----------------------------------------
Server-If      External-If
----------------------------------------
vfc3           san-port-channel 1
----------------------------------------

Verify NPV Fabric Connectivity (Edge)

Page 136: SAN Design Cisco

© 2011 Cisco and/or its affiliates. All rights reserved. Cisco Public BRKSAN-1121

pod3-9216i-70(config)# show flogi database

--------------------------------------------------------------------------------

INTERFACE VSAN FCID PORT NAME NODE NAME

--------------------------------------------------------------------------------

fc1/12 10 0x0f00dc 21:00:00:20:37:a9:cd:6e 20:00:00:20:37:a9:cd:6e

fc1/12 10 0x0f00e0 21:00:00:20:37:a9:89:7e 20:00:00:20:37:a9:89:7e

fc1/12 10 0x0f00e2 21:00:00:20:37:af:de:85 20:00:00:20:37:af:de:85

fc1/12 10 0x0f00e4 21:00:00:20:37:a9:d6:49 20:00:00:20:37:a9:d6:49

fc1/12 10 0x0f00e8 21:00:00:20:37:a9:d7:bf 20:00:00:20:37:a9:d7:bf

fc1/12 10 0x0f00ef 21:00:00:20:37:a9:94:89 20:00:00:20:37:a9:94:89

port-channel 1 1 0x670102 24:01:00:0d:ec:a3:da:40 20:01:00:0d:ec:a3:da:41

port-channel 1 10 0x0f0100 21:00:00:c0:dd:12:04:f3 20:00:00:c0:dd:12:04:f3

Total number of flogi = 8.

Verify on Core that N5K and FCoE Workstation Are Logged into the Fabric.

Page 137: SAN Design Cisco

© 2011 Cisco and/or its affiliates. All rights reserved. Cisco Public BRKSAN-1121

•  NPV Core Switch:

pod3-9216i-70(config)# zone name npv_vsan10 vsan 10
pod3-9216i-70(config-zone)# member pwwn 21:00:00:20:37:a9:cd:6e
pod3-9216i-70(config-zone)# member pwwn 21:00:00:20:37:a9:89:7e
pod3-9216i-70(config-zone)# member pwwn 21:00:00:20:37:af:de:85
pod3-9216i-70(config-zone)# member pwwn 21:00:00:c0:dd:12:04:f3
pod3-9216i-70(config-zone)# exit
pod3-9216i-70(config)# zoneset name npv_v10_zs vsan 10
pod3-9216i-70(config-zoneset)# member npv_vsan10
pod3-9216i-70(config-zoneset)# zoneset activate name npv_v10_zs vsan 10
Zoneset activation initiated. check zone status

pod3-9216i-70(config)# show zoneset active
zoneset name npv_v10_zs vsan 10
  zone name npv_vsan10 vsan 10
  * fcid 0x0f00dc [pwwn 21:00:00:20:37:a9:cd:6e]
  * fcid 0x0f00e0 [pwwn 21:00:00:20:37:a9:89:7e]
  * fcid 0x0f00e2 [pwwn 21:00:00:20:37:af:de:85]
  * fcid 0x0f0100 [pwwn 21:00:00:c0:dd:12:04:f3]

Configure Zoning