Power PCIe Adapters 20141211


Transcript of Power PCIe Adapters 20141211

Page 1: Power PCIe Adapters 20141211

Change Log of this document

Page 2: Power PCIe Adapters 20141211

Power PCIe Spreadsheet. Last updated 12 December 2014. Added support for October 6th announcement

(suggestions for change/additions which include the material for the change/addition will be appreciated)

Background colors in the PCIe Adapters Worksheet are to help separate the types of information

Orange = Slots in which the adapter is supported
Blue = PCIe adapter/slot sizing information … also see Performance Sizer worksheet / tab
Yellow = Minimum OS level supported
Rose = PowerHA support
Green = Adapter specifications (column D visible; column E, F, G, H and I content is mostly included in column D but provided for additional sorting)

POWER8 and POWER7 servers (rack/tower) are considered in this documentation. Blade Center and PureFlex adapters are not included.

PCIe Gen1/Gen2 Power System slot insights

- Gen 1 slots are provided in the #5802/5803/5873/5877 12X I/O drawers
- Gen 1 slots are provided in POWER6 servers

The information in this spreadsheet is provided "as is". Though the authors have made a best effort to provide an accurate and complete listing of PCIe adapters available on IBM Power Systems, the user should also use other tools such as configurators, other documentation or IBM web sites to confirm specific points of importance to you.

This document will be stored as a tech doc and it is the authors' intent to refresh it over time. Therefore if you are using a version stored on your PC, please occasionally check to see if a newer version is available.

IBMers: http://w3.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/TD105846
Partners: http://partners.boulder.ibm.com/src/atsmastr.nsf/WebIndex/TD105846

If you find an error or have a suggestion for change/addition, please send Mark Olson ([email protected]) and Sue Baker ([email protected]) an email.


OS levels shown in this spreadsheet are minimum versions and revisions required by the adapter. Technology Level, Technology Refresh, Service Pack PTF, machine code level, etc. requirements can be found at the following URL: https://www-912.ibm.com/e_dir/eServerPrereq.nsf

- Gen3 slots are provided in the POWER8 system units. They are either x8 or x16 slots. There are two x16 slots per socket and the remaining slots are x8. x16 slots have additional physical connections compared to x8 slots which allow twice the bandwidth (assuming there are adapters with corresponding x16 capabilities). x8, x4 and x1 cards can physically fit in x16 or x8 slots. 7-11-6-9 is a good thing to memorize for POWER8 Scale-out servers. A 4U 1-socket server has 7 PCIe slots. A 4U 2-socket server has 11 PCIe slots. A 2U 1-socket server has 6 PCIe slots. A 2U 2-socket server has 9 PCIe slots.

- Gen2 slots are provided in the Power 710/720/730/740/770/780 "C" model system units and in the Power 710/720/730/740/750/760/770/780 "D" model system units. Their machine type model numbers are 8231-E1C (710), 8202-E4C (720), 8231-E2C (730), 8205-E6C (740), 9117-MMC (770), 9179-MHC (780) and 8231-E1D (710), 8202-E4D (720), 8231-E2D (730), 8205-E6D (740), 8408-E8D (750), 9109-RMD (760), 9117-MMD (770), 9179-MHD (780)
- Optional Gen2 slots are provided via the PCIe Riser Card (Gen2), FC = #5685, on the Power 720 and Power 740.

- Gen 1 slots are provided in the initially introduced Power 710/720/730/740/750/755/770/780 system units. Their machine type model numbers are 8231-E2B (710), 8202-E4B (720), 8231-E2B (730), 8205-E6B (740), 8233-E8B (750), 8236-E8C (755), 9117-MMB (770), 9179-MHB (780)
- Optional Gen1 slots are provided via the PCIe Riser Card (Gen1), FC = #5610, on the Power 720 and Power 740.

Page 3: Power PCIe Adapters 20141211

Form Factor: FH = full height, LP = low profile

FH/LP Power Systems slot insights

LP slots are found in the Power 710/730, in the optional 720/740 PCIe Riser Card (#5610/5685), and in the PowerLinux 7R1/7R2
FH slots are found in the Power 720/740/750/755/760/770/780, in POWER6 servers, and in the #5802/5803/5873/5877 I/O drawers

All Gen1 adapters are supported in any Gen2 slot (subject to any specific server rules, OS/firmware support, and the physical size of card & slot)

Blind Swap Cassettes (BSC) for PCIe adapters
The Power 770/780 system units and the POWER7+ 750/760 have Gen4 BSC
Other POWER7/POWER7+ system units do not have BSC

The 12X-attached I/O drawers #5802/5877/5803/5873 have Gen3 BSC

AIX users can use the lscfg command to determine an existing adapter's CCIN (Customer Card ID Number). IBM i users can use the STRSST command to determine an existing adapter's CCIN

Order the right form factor (size) adapter for the PCI slot in which it will be placed. No ordering/shipping structure has been announced by IBM Power Systems to order a conversion. So even though many of the PCIe adapters could be converted by changing the tail stock at the end of adapter to be taller (full high) or shorter (low profile), no way to do so has been announced. If demand for such a capability emerges, a method to satisfy it will be considered. Note that tailstocks are unique to each adapter.

Most Gen2 cards are supported only in Gen2 or Gen3 slots. The #5913 and ESA3 large cache SAS adapters and the #5899/5260 4-port 1Gb Ethernet are exceptions to this generalization. Also #EC27/EC28/EC29/EC30 have some limited Gen1 slot support.

"Down generation": If you place a Gen2 card in a Gen1 slot it will physically fit and may even work. But these adapters weren't fully tested in Gen1 slots and thus there is no support statement for this usage. Also be VERY careful of performance expectations of a high bandwidth Gen2 card in a Gen1 slot. For example, a 4-port 8Gb Fibre Channel adapter has about 2X the potential bandwidth than a Gen1 slot can provide.

Most Gen3 cards are supported only in Gen2 or Gen3 slots. The #EJ0L large cache SAS adapters and #EJ0J/EJ0M SAS adapters are exceptions to this generalization. The same caveat about putting unsupported adapters in a "down generation" PCIe slot (Gen1 or Gen2) applies here as well.

Feb 2013 note: The POWER7+ 710/720/730/740 are at a higher firmware level than the previous POWER7 710/720/730/740. Though the "C" model 710/720/730/740 have Gen2 slots, the new Gen2 adapters EN0A/B/H/J are not tested/supported back on the "C" models. All Gen2 adapters supported on the "C" models are also supported on the "D" models. There is currently no plan to introduce higher firmware levels for the Power 710/720/730/740 "C" models.

CCIN can be used to find an ordering feature code of an installed adapter. The ordering feature code is used to find the feature code name, price, description, etc.

This document was developed for products and/or services offered in the United States. IBM may not offer the products, features or services discussed in this document in other countries.

The information may be subject to change without notice. Consult your local IBM business contact for information on the products, features and services available in your area.

IBM, the IBM logo, AIX, BladeCenter, Power, POWER, POWER6, POWER6+, POWER7, POWER7+, PowerLinux, Power Systems and Power Systems Software are trademarks or registered trademarks of International Business Machines Corporation in the United States or other countries or both. A full list of U.S. trademarks owned by IBM may be found at ibm.com/legal/copytrade.shtml

Page 4: Power PCIe Adapters 20141211

Linux is a registered trademark of Linus Torvalds in the United States, other countries or both.

Change Log of this document
Originally distributed Oct 2011
April 2012 …. Added April announce content
18 May 2012 …. Added PCIe performance sizing factors and sizing tool
29 May 2012 …. Added PowerLinux Worksheet/Tab, updated/augmented some text in Performance Sizer Tab, added more sorting columns (E, F, G, H, I) for adapter descriptions
11 June 2012 …. Added PowerHA support columns X & Y. Filled in more of the sorting column E, F, G, H, I values
13 July 2012 …. Expanded PowerLinux tab to include two new adapter features and several full-high adapters for #5802/5877 drawers
22 July 2012 …. Added Linux RHEL 6.1 support of RoCE adapter
1 Oct 2012 …. Added additional RoCE adapters and support
5 Feb 2013 …. Added EN0H/EN0J, EN0A/EN0B, updated misc points, updated PowerLinux content. Added a new worksheet/tab which provides detailed technical information about the Ethernet port attributes
28 May 2013 …. Added #EL39/EL3A adapters from SolarFlare to PowerLinux tab, updated #EN0H/EN0J/EN0A/EN0B adapters to indicate support on 770/780 servers, updated #EN0H/EN0J to show IBM i support of FCoE starting July 2013, updated #EC27/EC28/EC29/EC30 to indicate support in #5802/5877 drawers. Updated several adapters to show PowerHA support. Also updated the web address where the tech doc is stored
5 Aug 2013 …. Added information for PowerLinux 7R4 which announced 30 July
1 Oct 2013 …. Added IBM Flash Adapter 90, PCIe2 SAS adapter refresh, Solarflare adapters
14 Jan 2014 …. Added PCIe Gen3 SAS adapters, expanded support of ESA3 SAS adapter, added IBM i Bisync adapter, expanded IBM Flash Adapter 90 support for 7R2 and RHEL. Added new source manufacturer tab/worksheet
30 May 2014 …. Added expanded support of existing SAS adapters. Added 4-port Ethernet card with copper twinax
6 June 2014 …. Added POWER8 PCIe adapters
8 July 2014 …. Added support for July 15th announcement
12 December 2014 …. Added support for October 6th announcement

Pages 5-10: Power PCIe Adapters 20141211

PCIe Adapters worksheet column layout (Adapter General Information, Supported Slots for Adapters, Performance Info, OS Support and PowerHA Support), shown with the example row for feature #5735:

Feature Code: 5735
Feature Name: 8 Gigabit PCI Express Dual Port Fibre Channel Adapter
Adapter Description: Fibre Channel 8Gb 2-port
Cable / Medium: SR optical
# ports: 2
Port bandwidth: 8 Gbit/s
SRIOV: No
CCIN: 577D
Form Factor: FH
Gen1 / Gen2: Gen1
Functionally Equiv Features: 5273
POWER8 System Node (CEC) Card Slot Support: S814, S824
I/O drawer Support: Y
Gen1 / Gen2 / Both: Both
POWER 7/7+ CEC Card Slot Support (see note S0 below): 720, 740, 750, 755, 760+, 770, 780, 520, 550, 560
POWER 7/7+ I/O Drawer Slot Support: 5802, 5877, 5803, 5873
Sizing Factor: 16
Min AIX Level: 5.3
Min IBM i Level: 6.1
Min RHEL Level: 5.5
Min SLES Level: 10
Comments: Note P9; NPIV requires VIOS
PowerHA Support for AIX: Yes
PowerHA Support for IBM i: Yes

Page 11: Power PCIe Adapters 20141211

This worksheet / Tab is for use with the IBM PowerLinux servers

Adapter support is also documented at: http://www-01.ibm.com/support/knowledgecenter/8247-21L/p8eab/p8eab_83x_8rx_supported_pci.htm?lang=en

The marketing model name for 2U servers is 7R1 or 7R2 … ordering system is machine type 8246. 7R1 ordering models (one socket) include 8246-L1C/L1S/L1D/L1T and 7R2 ordering models are 8246-L2C/L2S/L2D/L2T. The "C" and "D" models don't support attaching I/O drawers (12X or disk/SSD-only). The "S" and "T" models support I/O drawers. The 7R2 (model L2S/L2T only) can attach one or two 12X PCIe I/O drawers. Each drawer has full high PCIe slots.

The marketing model name for the 4U server is 7R4 …. Ordering system is machine type 8248. The 7R4 is a four-socket server with an ordering model of L4T. It is a 4U server with full high PCIe slots. It can attach up to four 12X PCIe I/O drawers, also with full high PCIe slots. The 7R4 does not support low profile (LP) adapters.

Adapter General Information

Feature Name Adapter Description

2053 PCIe LP RAID & SSD SAS Adapter 3Gb SAS-SSD Adapter - Double-wide- 4 SSD mod

2055 PCIe LP RAID & SSD SAS Adapter 3Gb w/ Blin SAS-SSD Adapter - Double-wide- 4 SSD mod

2728 4 port USB PCIe Adapter USB 4-port

5260 PCIe2 LP 4-port 1GbE Adapter Ethernet 1Gb 4-port - TX / UTP / copper

5269 PCIe LP POWER GXT145 Graphics Accelerator Graphics - POWER GXT145

5270 PCIe LP 10Gb FCoE 2-port Adapter FCoE (CNA) 2-port 10Gb - optical

5271 PCIe LP 4-Port 10/100/1000 Base-TX Ethernet Ethernet 1Gb 4-port - TX / UTP / copper

5272 PCIe LP 10GbE CX4 1-port Adapter Ethernet 10Gb 1-port - copper CX4
5273 PCIe LP 8Gb 2-Port Fibre Channel Adapter Fibre Channel 8Gb 2-port

5274 PCIe LP 2-Port 1GbE SX Adapter Ethernet 1Gb 2-port - optical SX


Feature Code

Page 12: Power PCIe Adapters 20141211

5275 PCIe LP 10GbE SR 1-port Adapter Ethernet 10Gb 1-port - optical SR
5276 4 Gbps PCIe Dual-port fibre channel adapter Fibre Channel 4Gb 2-port

5277 PCIe LP 4-Port Async EIA-232 Adapter Async 4-port EIA-232

5279 PCIe2 LP 4-Port 10GbE&1GbE SFP+ Copper& Ethernet 10Gb+1Gb 4Ports: 2x10Gb copp

5280 PCIe2 LP 4-Port 10GbE&1GbE SR&RJ45 Adapter Ethernet 10Gb+1Gb 4Ports 2x10Gb optical
5281 PCIe LP 2-Port 1GbE TX Adapter Ethernet 1Gb 2-port - TX / UTP / copper

5283 PCIe2 LP 2-Port 4X IB QDR Adapter 40Gb QDR 2-port 4X IB - 40Gb
5284 PCIe2 LP 2-port 10GbE SR Adapter Ethernet 10Gb 2-port - optical SR

5286 PCIe2 LP 2-Port 10GbE SFP+ Copper Adapter Ethernet 10Gb 2-port SFP+ - copper twinax

5289 2 Port Async EIA-232 PCIe Adapter Async 2-port EIA-232

5290 PCIe LP 2-Port Async EIA-232 Adapter Async 2-port EIA-232

5708 10Gb FCoE PCIe Dual Port Adapter FCoE (CNA) 2-port 10Gb - optical

5717 4-Port 10/100/1000 Base-TX PCI Express Adapter Ethernet 1Gb 4-port - TX / UTP / copper

5732 10 Gigabit Ethernet-CX4 PCI Express Adapter Ethernet 10Gb 1-port - copper CX4

5735 8 Gigabit PCI Express Dual Port Fibre Channel Adapter Fibre Channel 8Gb 2-port

5748 POWER GXT145 PCI Express Graphics Accelerator Graphics - POWER GXT145

5767 2-Port 10/100/1000 Base-TX Ethernet PCI Express Adapter Ethernet 1Gb 2-port - TX / UTP / copper

5768 2-Port Gigabit Ethernet-SX PCI Express Adapter Ethernet 1Gb 2-port - optical SX

5769 10 Gigabit Ethernet-SR PCI Express Adapter Ethernet 10Gb 1-port - optical SR

5772 10 Gigabit Ethernet-LR PCI Express Adapter Ethernet 10Gb 1-port - optical LR

5774 4 Gigabit PCI Express Dual Port Fibre Channel Adapter Fibre Channel 4Gb 2-port

5785 4 Port Async EIA-232 PCIe Adapter Async 4-port EIA-232

Page 13: Power PCIe Adapters 20141211

5805 PCIe 380MB Cache Dual - x4 3Gb SAS RAID Adapter SAS 380MB cache, RAID5/6, 2-port 3Gb

5899 PCIe2 4-port 1GbE Adapter Ethernet 1Gb 4-port - TX / UTP / copper

5901 PCIe Dual-x4 SAS Adapter SAS 0GB Cache, no RAID5/6 2-port 3Gb

5913 PCIe2 1.8GB Cache RAID SAS Adapter Tri-port SAS 1.8GB Cache, RAID5/6 - 3-port, 6Gb
EC27 PCIe2 LP 2-Port 10GbE RoCE SFP+ Adapter Ethernet 10Gb 2-port RoCE SFP+ copper twinax

EC3A PCIe3 LP 2-Port 40 GbE NIC RoCE QSFP+ Adapter Ethernet 40Gb 2-port RoCE SFP+ copper twinax

EC41 PCIe2 LP 3D Graphics Adapter x1 Graphics - 3D
EC45 PCIe2 LP 4-Port USB 3.0 Adapter USB 4-port
EJ16 PCIe3 LP CAPI Accelerator Adapter CAPI

EL09 PCIe LP 4Gb 2-Port Fibre Channel Adapter Fibre Channel 4Gb 2-port

EL10 PCIe LP 2-x4-port SAS Adapter 3Gb SAS 0GB Cache, no RAID5/6 2-port 3Gb

EL11 PCIe2 LP 4-port 1GbE Adapter Ethernet 1Gb 4-port - TX / UTP / copper

EL27 PCIe2 LP 2-Port 10GbE RoCE SFP+ Adapter Ethernet 10Gb 2-port RoCE - copper twinax

EL2K PCIe2 LP RAID SAS Adapter Dual-port 6Gb SAS 0GB Cache, RAID5/6 - 2-port, 6Gb
EL2M PCIe LP 2-Port 1GbE TX Adapter Ethernet 1Gb 2-port - TX / UTP / copper

EL2N PCIe LP 8Gb 2-Port Fibre Channel Adapter Fibre Channel 8Gb 2-port

EL2P PCIe2 LP 2-port 10GbE SR Adapter Ethernet 10Gb 2-port - optical SR

EL2Z PCIe2 LP 2-Port 10GbE RoCE SR Adapter Ethernet 10Gb 2-port RoCE - optical SR

EL38 PCIe2 LP 4-port (10Gb FCoE & 1GbE) SR&RJ45 CNA (FCoE) 10Gb+1Gb 4Ports 2x10Gb optical SR & 2x1Gb UTP copper RJ45

EL39 PCIe2 LP 2-port 10GbE SFN6122F Adapter Ethernet 10Gb 2-port OpenOnLoad SolarFla

EL3A PCIe2 LP 2-port 10GbE SFN5162F Adapter Ethernet 10Gb 2-port SolarFlare SFP+ copper
EL3B PCIe3 RAID SAS Adapter Quad-port 6Gb SAS 0GB Cache, RAID5/6 - 4-port, 6Gb
EL3C PCIe2 LP 4-port (10Gb FCoE and 1GbE) Copper Ethernet 10Gb+1Gb 4Ports: 2x10Gb copper
EL3Z PCIe2 LP 2-port 10 GbE BaseT RJ45 Adapter Ethernet 10Gb 4Ports: RJ45


Page 14: Power PCIe Adapters 20141211

EL60 PCIe3 LP 4 x8 SAS Port Adapter SAS 0GB Cache, RAID5/6 - 4-port, 6Gb

EN0B PCIe2 LP 16Gb 2-port Fibre Channel Adapter Fibre Channel 16Gb 2-port
EN0L PCIe2 LP 4-port (10Gb FCoE & 1GbE) SFP+Copper Ethernet 10Gb+1Gb CNA 4Ports 2x10Gb Copper
EN0N PCIe2 4-port (10Gb FCoE & 1GbE) LR&RJ45 Adapter Ethernet 10Gb+1Gb CNA 4Ports 2x10Gb optical
EN0T PCIe2 4-Port (10Gb+1GbE) SR+RJ45 Adapter Ethernet 10Gb+1Gb CNA 4Ports 2x10Gb optical
EN0V PCIe2 4-port (10Gb+1GbE) Copper SFP+RJ45 Ethernet 10Gb+1Gb CNA 4Ports 2x10Gb copper

EN0Y PCIe2 LP 8Gb 4-port Fibre Channel Adapter Fibre Channel 8Gb 4-port
EN28 2 Port Async EIA-232 PCIe Adapter Async 2-port EIA-232

ES09 IBM Flash Adapter 90 (PCIe2 0.9TB) Flash memory adapter 900GB

ESA1 PCIe2 RAID SAS Adapter Dual-port 6Gb SAS 0GB Cache, RAID5/6 - 2-port, 6Gb

Page 15: Power PCIe Adapters 20141211


Adapter General Information Slots

POWER8 POWER7

CCIN

57CD LP Gen1 NA NA

57CD FH Gen1 NA Yes

57D1 FH Gen1 NA Yes

576F LP Gen2 S812L, S822L NA NA

5269 LP Gen1 S812L, S822L NA NA

2B3B LP Gen1 S812L, S822L NA NA

5271 LP Gen1 S812L, S822L NA NA

5272 LP Gen1 NA NA
577D LP Gen1 L2C, L2S NA NA

5768 LP Gen1 S821L, S822L NA NA


Form Factor

Gen1 / Gen2

CEC Card Slot Support

7R1 & 7R2 CEC Card Slot Support

7R4 CEC Card Slot support

Drawer Slot Support

L1C, L1S, L2C, L2S, L1D, L1T, L2D, L2T
5802, 5877, EL36, EL37
5802, 5877, EL36, EL37
L1C, L1S, L2C, L2S, L1D, L1T, L2D, L2T
L1C, L1S, L2C, L2S, L1D, L1T, L2D, L2T
L1C, L1S, L2C, L2S, L1D, L1T, L2D, L2T
L1C, L1S, L2C, L2S, L1D, L1T, L2D, L2T
L1C, L1S, L2C, L2S, L1D, L1T, L2D, L2T
L1C, L1S, L2C, L2S, L1D, L1T, L2D, L2T

Page 16: Power PCIe Adapters 20141211

5275 LP Gen1 S821L, S822L NA NA
5774 LP Gen1 S812L, S822L

57D2 LP Gen1 S821L, S822L NA NA

2B43 LP Gen2 NA NA

2B44 LP Gen2 S821L, S822L NA NA
5767 LP Gen1 S821L, S822L L2C, L2S NA NA

58E2 LP Gen2 S821L, S822L NA NA
5287 LP Gen2 S821L, S822L L2C, L2S NA NA

5288 LP Gen2 NA NA

57D4 FH Gen1 NA Yes

57D4 LP Gen1 NA NA

2B3B FH Gen1 NA Yes

5271 FH Gen1 NA Yes

5732 FH Gen1 NA Yes

577D FH Gen1 NA Yes

5748 FH Gen1 NA Yes

5767 FH Gen1 NA Yes

5768 FH Gen1 NA Yes

5769 FH Gen1 NA Yes

576E FH Gen1 NA Yes

5774 FH Gen1 NA Yes

57D2 FH Gen1 NA NA Yes

L1C, L1S, L2C, L2S, L1D, L1T, L2D, L2T
L1C, L1S, L2C, L2S, L1D, L1T, L2D, L2T
L1C, L1S, L2C, L2S, L1D, L1T, L2D, L2T
L1C, L1S, L2C, L2S, L1D, L1T, L2D, L2T
L1C, L1S, L2C, L2S, L1D, L1T, L2D, L2T
L1C, L1S, L2C, L2S, L1D, L1T, L2D, L2T
5802, 5877, EL36, EL37
L1C, L1S, L2C, L2S, L1D, L1T, L2D, L2T
5802, 5877, EL36, EL37
5802, 5877, EL36, EL37
5802, 5877, EL36, EL37
5802, 5877, EL36, EL37
5802, 5877, EL36, EL37
5802, 5877, EL36, EL37
5802, 5877, EL36, EL37
5802, 5877, EL36, EL37
5802, 5877, EL36, EL37
5802, 5877, EL36, EL37
5802, 5877, EL36, EL37

Page 17: Power PCIe Adapters 20141211

574E FH Gen1 NA Yes

576F FH Gen2 NA Yes

57B3 FH Gen1 NA Yes

57B5 FH Gen2 NA Yes
EC27 LP Gen2 NA NA NA

57BD LP Gen3 S812L, S822L

LP Gen2 S812L, S822L
58F9 LP Gen2 S812L, S822L
EJ16 LP Gen3 S812L, S822L

5774 LP Gen1 S812L, S822L NA NA

57B3 LP Gen1 S812L, S822L NA NA

576F LP Gen2 NA NA

EC27 LP Gen2 S812L, S822L NA NA

57B4 LP Gen2 NA NA
5767 LP Gen1 L2S NA NA

577D LP Gen1 S812L, S822L NA NA

5287 LP Gen2 S812L, S822L NA NA

EC29 LP Gen2 S812L, S822L NA NA

2B93 LP Gen2 S812L, S822L NA NA

na LP Gen2 S812L, S822L NA NA

na LP Gen2 NA NA
57B4 LP Gen3 S812L, S822L L1D, L2D Yes
2CC1 LP Gen2 S812L, S822L
2CC4 LP Gen2 S812L, S822L

5802, 5877, EL36, EL37

5802, 5877, EL36, EL37

5802, 5877, EL36, EL37

5802, 5877, EL36, EL37

L1D, L1T, L2D, L2T

L1C, L1S, L2C, L2S, L1D, L1T, L2D, L2T
L1C, L1S, L2C, L2S, L1D, L1T, L2D, L2T
L1C, L1S, L2C, L2S, L1D, L1T, L2D, L2T
L1C, L1S, L2C, L2S, L1D, L1T, L2D, L2T
L1S, L2S, L1T, L2T
L1C, L1S, L2C, L2S, L1D, L1T, L2D, L2T
L1C, L1S, L2C, L2S, L1D, L1T, L2D, L2T
L1C, L1S, L2C, L2S, L1D, L1T, L2D, L2T

L1D, L1T, L2D, L2T

L1D, L2D, L1T, L2T

L1D, L2D, L1T, L2T

Page 18: Power PCIe Adapters 20141211

57B4 LP Gen3 S812L, S822L

577F LP Gen2 S812L, S822L NA NA
2CC1 LP Gen2 S812L, S822L
2CC0 LP Gen2 S812L, S822L
2CC3 LP Gen2 S812L, S822L
2CC3 LP Gen2 S812L, S822L

EN0Y LP Gen2 S812L, S822L NA NA
LP Gen1 S812L, S822L

578A FH Gen2 NA NA

57B4 FH Gen2 NA Yes

L1D, L1T, L2D, L2T

L1C, L1S, L2C, L2S, L1D, L1T, L2D, L2T

5802, 5877, EL36, EL37

5802, 5877, EL36, EL37

Page 19: Power PCIe Adapters 20141211


Slots Sizing Software Support POWER7

Sizing Factor Comments

3 to 6 5.5 10

3 to 6 5.5 10 NPIV requires VIOS

0.1 to 1 5.5 10 No network install capability.

0.4 to 4 5.8 11

0.1 5.5 10

20 5.5 10

0.4 to 4 5.5 10

5 to 10 5.5 10
16 5.5 10

0.2 to 2 5.5 10

Drawer comments

Min RHEL Level

Min SLES Level

L2S, L2T, L4T only

L2S, L2T, L4T only

Page 20: Power PCIe Adapters 20141211

5 to 10 5.5 10

0.04 5.5 10

10 to 20 5.7 10

10 to 20 5.7 10
0.2 to 2 5.5 10

30 5.6 10
10 to 20 5.7 10

10 to 20 5.7 10

0.02 5.7 10 Linux Network Install not supported

0.02 5.7 10 Linux Network Install not supported

20 5.5 10

0.4 to 4 5.5 10

5 to 10 5.5 10

16 5.5 10

0.1 5.5 10

0.2 to 2 5.5 10

0.2 to 2 5.5 10

5 to 10 5.5 10

5 to 10 5.5 10

8 5.5 10

0.04 5.5 10

withdrawn from marketing - use EL2P instead

L2S, L2T, L4T only

L2S, L2T, L4T only

L2S, L2T, L4T only

L2S, L2T, L4T only

L2S, L2T, L4T only

L2S, L2T, L4T only

L2S, L2T, L4T only

L2S, L2T, L4T only

L2S, L2T, L4T only

L2S, L2T, L4T only

L2S, L2T, L4T only

L2S, L2T, L4T only

Page 21: Power PCIe Adapters 20141211

2 to 6 5.5 10

0.4 to 4 5.8 11

1 to 6 5.5 10

15 to 30 5.7 10 NPIV requires VIOS
20 6.3 11

7 no NPIV

8 5.5 10 no NPIV

1 to 6 5.5 10 NPIV requires VIOS

0.4 to 4 5.8 11 NPIV requires VIOS, withdrawn from marketing, use EL2N instead

20 6.3 11 NPIV requires VIOS

15 to 30 5.8 10 NPIV requires VIOS
0.2 to 2 5.5 10

16 5.5 10

10 to 20 5.7 10

20 6.3 11

20 NA as of Feb 11

20 6.4 n/a

20 6.4 n/a

L2S, L2T, L4T only

L2S, L2T, L4T only

L2S, L2T, L4T only

L2S, L2T, L4T only

Page 22: Power PCIe Adapters 20141211

30 NA as of Feb 11

30 5.8 10 SSD modules withdrawn from marketing in Aug 2013

L2T only 6 to 11 6.5 NA SSD modules withdrawn from marketing in Aug 2013,

15 to 30 5.8 10

L2S, L2T, L4T only

Page 23: Power PCIe Adapters 20141211
Page 24: Power PCIe Adapters 20141211

NPIV requires VIOS, withdrawn from marketing, use EL2N instead

Page 25: Power PCIe Adapters 20141211

SSD modules withdrawn from marketing in Aug 2013

SSD modules withdrawn from marketing in Aug 2013,

Page 28: Power PCIe Adapters 20141211

Some selected cable comments … this does not cover all types of adapters or cables.

Twinax Ethernet cable comments

Optical SFP+ cables for Ethernet

SFP+ (Small Form Factor Pluggable Plus)

10Gb cables. See TechDoc TD106020 for some good insights:

QDR 4X Infiniband cable comments


ASYNC Adapter cable comments

Graphic adapter cable comments
PCIe Graphics cards: low profile #5269 has one port on the card and comes with a short fan-out cable which yields two DVI ports
PCIe Graphics cards: full high #5748 has two DVI ports on the card (note the older PCI (not PCIe) graphics card has one DVI port and one VGA port)
You can connect a VGA cable to a DVI port via a dongle converter cable … feat #4276, 28-pin D-shell DVI plug to/from 15-pin D-shell VGA

Ethernet cables (twinax) for Power copper SFP+ ports are active cables. Passive cables are not tested/supported. Feature codes #EN01, #EN02 or #EN03 (1m, 3m, 5m) are tested/supported. Note these twinax cables are NOT CX4 or 10GBASE-T or AS/400 5250 twinax. Transceivers are located at each end of the #EN01/02/03 cables and are shipped with each cable.

Active cables will work in all cases. Passive cables carry some risk of not working in some cases, but are lower cost. To avoid confusion/error/support/debug issues, Power Systems specifically designates active cabling as the only supported SFP+ copper twinax cable option. Use passive at your own risk and realize it is an unsupported and untested configuration from a Power Systems perspective.

These cables are the same as for 8Gb Fibre Channel cabling, but the distance considerations are not as tight. Power Systems optical SFP+ ports include a transceiver in the PCIe adapter or Integrated Multifunction Card.

IBMers: http://w3-03.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/TD106020
BPs: http://www-03.ibm.com/partnerworld/partnerinfo/src/atsmastr.nsf/WebIndex/TD106020

QDR cables for the #5283 and #5285 adapters which have been tested and are supported are the copper 1m, 3m, 5m (#3287, #3288, and #3289) and optical 10m, 30m (#3290, #3293). Both the copper and optical cables come with the QSFP+ transceivers permanently attached at each end. The adapters just have an empty QSFP+ cage.

The 4-port Async adapter (#5277 and #5785) has one physical port on the card, but comes with a short fan-out cable which has 4 EIA-232 ports. The fan-out cable does not have its own feature code.

The 2-port Async adapter (#5289 and #5290) has two physical ports on the card which are RJ45 connectors. Feature code #3930 cables convert from an RJ45 connector to a 9-pin (DB9) D-shell connector. These converter cables are fairly generic and can be found at many electrical stores. This same #3930 is used on the Power 710/720/730/740 system unit to convert the serial/system ports to a DB9 connector. Order one converter cable for each of the two ports you plan on using.

Page 29: Power PCIe Adapters 20141211

SAS adapter cable comments
The InfoCenter generally has very good information about cabling SAS adapters

AE or YE cables attach to tape drives
AA cables connect between two PCIe2/PCIe3 SAS adapters with write cache

PCIe Gen1 adapters have a wider connector called "Mini-SAS". PCIe Gen2 adapters need either an HD (High Density) Mini-SAS connector or an HD Narrow Mini-SAS connector; PCIe Gen3 adapters need an HD Narrow Mini-SAS connector.

YO cables - the "tail" or "bottom of the y" connects to the adapter SAS port. The two connectors on the "top of the y" connect to an I/O drawer to provide redundant paths into the SAS expanders.

X cables -- the two "bottoms of the x" connect to two different adapters' ports. The two "tops of the x" connect to an I/O drawer. This provides redundant paths into the SAS expanders and into redundant SAS adapters.

The new Gen3 SAS adapters have four SAS ports which are VERY close together and require a narrower Mini-SAS HD connector. "Narrow" cable feature codes equivalent to the PCIe Gen2 SAS adapter cables in terms of length and electrical characteristics are announced with the PCIe Gen3 SAS adapters. The Narrow cables can be used on the PCIe Gen2 SAS adapters. The earlier "fatter" Mini-SAS HD cables are NOT supported on the new PCIe Gen3 SAS adapters since the connectors would jam together and potentially damage the ports on the back of the PCIe Gen3 card.


Page 31: Power PCIe Adapters 20141211

I/O Performance Limits Estimator Tool

Strongly suggest reading all the intro material before trying to use the tables

Table of contents
- Overview / purpose of the tool
- Using / interpreting / adjusting sizing factors
- Sizing factors for Integrated I/O, 12X drawers, Ultra Drawer, 4X IB GX adapter
- The tables
- Optional reading: sharing of bandwidth by PCIe slots

OVERVIEW / PURPOSE OF THE TOOL

This tool is intended as a rough rule of thumb to determine when you are approaching the IO limits of a system or IO drawer.

These sizing factors are actually rough Gb/s values. So a sizing value of 8 equates to 8Gb/s.

HOWEVER there are many variables and even more combinations of these variables which can impact the actual resulting performance in a real environment. Thus always apply common sense and remember this is designed to be a quick and simple approach ... a rough sizer. This is not a heavy-weight tool full of precise measurements and interactions. Keep in mind the key planning performance words, "it depends" and "your mileage may vary".

You will be using the "Sizing Factor" for each PCIe adapter. These are found in the "PCIe Adapter" worksheet/tab of this spreadsheet. Look for the light blue columns.

For example, feature code #5769 (10Gb 1-port Ethernet adapter) has a sizing factor of 5 to 10. If high utilization, use "10" for the value. If low usage, use a "5". Or pick "6", "7", "8" or "9" if you think it is somewhere in between.

Or another example, feature code #5735 (8Gb 2-port Fibre Channel Adapter) has a sizing factor of 16 assuming both ports are being used. But if only one port is used, then use an "8".

Totaling all the sizing factors for all the adapters will give you a good idea if you are coming close to a performance limit for your I/O. You will be looking at an overall system performance limit as well as "subsystem limits" or "subtotal". Subsystem limits are maximums per an individual GX bus, system unit, I/O drawer, PCIe Riser Card, etc. The maximum Gb/s for each subtotal/subsystem and the overall POWER7 system are found further down on this worksheet.
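As a minimal sketch of that totaling step (the two sizing factors are the #5769 and #5735 examples above; the variable names are just for illustration):

# Assign one sizing factor per adapter (values come from the light blue columns
# of the PCIe Adapter worksheet), then total them for the slots being used.
adapters = {
    "#5769 10Gb 1-port Ethernet": 10,        # range is 5 to 10; assume high utilization
    "#5735 8Gb 2-port Fibre Channel": 16,    # 16 with both ports busy, 8 if only one port
}
total_sizing_factor = sum(adapters.values())
print(total_sizing_factor)                   # 26 -> compare against the subtotal and system maximums below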

NOTE1: these values do NOT translate directly into MB/s numbers. This tool builds on the observation that most adapters are rarely run at 100% utilization in real client application environments. Thus to help avoid over configuring for most shops, the tool assigns max performance factors that are 70-75% of the maximum throughput technically/theoretically possible.

NOTE2: If you are correlating these bandwidth maximums to the bandwidth values provided in marketing presentations and in the facts/features document, observe that there is a difference. The marketing materials are correct, but use peak bandwidth and assume full duplex connections. This sizing tool takes a more conservative approach which probably represents more typical usage by real clients. It assumes simplex connections and sustained workloads in its maximums.

For example. For simplicity, marketing materials provide a single performance value of 20Gb/s for GX adapters/slots. The details behind this are that a GX adapter or 12X Hub specifications are: BURST = 10 GB/s simplex & 20 GB/s duplex SUSTAINED = 5 GB/s DMA simplex & 8 GB/s duplex.

This tool uses Gb (bit) vs GB (byte) because there is a lot of variability between different protocols/adapters and how their GB ratings translate into Gb and vice versa.

Page 32: Power PCIe Adapters 20141211

Comments/insights on how to use, interpret and/or adjust sizing factors

SAS adapter rough guides:

~~ #ESA1 / ESA2 run only SSD so use 28 per adapter unless lightly loaded or unless there are not that many SSD. Pairs are optional.

~~ #2053 / 2054 / 2055 runs only SSD so use a 6 if all four SSD are present. These adapters are not paired (but can be mirrored).

Remember to ask those common sense questions. "Are these adapters being run at the same time?" For example if you have an 8G FC port that only goes to a tape that only runs one night a week when other adapters are not busy, you probably do not need to count it at all in the performance analysis. Or another good question, "Do you have redundant adapters or ports which are there only in case the primary adapter/port fails?" If you do, then you can probably ignore the redundant adapter/ports.

For Fibre Channel and for Ethernet ports, simplex line rates are assumed, not duplex. Based on observations over the years this is a good simplification. Many client workloads aren't hitting the point where these adapters' duplex capability is a factor in calculating throughput. If your environment is a real exception and you know you have a really heavy workload using duplex, then add another 20-25% to the sizing factor.

The actual real data rates in most applications are typically less than the line rate. This is especially true for 10Gb Ethernet where many applications run the adapter at less than 50% of line rates. If you know this is true for you, cut the sizing factor in half (think of it as a 5Gb Ethernet adapter)

Similarly, if you have an 8Gb or 16Gb Fibre Channel adapter and it is attached to a 4Gb switch, then treat it like a 4Gb adapter (cut the 8Gb sizing factor in half). Likewise if you have a 16Gb FC adapter on an 8Gb switch, treat it like an 8Gb adapter.

For Fibre Channel over Ethernet adapters (FCoE or CNAs): Treat these adapters based on the mix of work done. If using one port for Fibre Channel only, treat it just like the 8Gb Fibre Channel adapter port above. If using a port solely for NIC traffic, treat it like a 10Gb Ethernet port above. If a port has mixed FC and Ethernet workloads, assign a sizing factor based on the mix.

SAS adapters: Don't be surprised by SAS adapters' max throughput numbers. A 2-port, 6Gb adapter is not 2 x 6 = 12Gb/s multiplication. You are overlooking the insight that each port/connector is a "x4" connection. Four paths per connection yield a 2 x 6 x 4 = 48Gb/s outcome. It's bigger than a 2-port 8Gb Fibre Channel card (2 x 8 = 16), but then you could go into duplex mode for FC and 2 x 16 = 32 ... so much closer to the 48 of the SAS 2-port. Also remember most SAS adapters run in pairs so that makes the comparison more challenging. "It all depends".
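Spelling out the arithmetic in the paragraph above (a sketch; only the numbers already quoted there):

sas_2port_6gb = 2 * 4 * 6        # 2 connectors x 4 lanes per connector x 6 Gb/s = 48 Gb/s
fc_2port_8gb_simplex = 2 * 8     # 16 Gb/s
fc_2port_8gb_duplex = 2 * 16     # 32 Gb/s, closer to the SAS figure
print(sas_2port_6gb, fc_2port_8gb_simplex, fc_2port_8gb_duplex)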

Usually there are a lot of SAS ports for connectivity and the per port workload is lighter. And remember there can be a huge, huge difference between SSD and HDD.

Also note there is a big difference in typical workload whether there is a lot of data / database workload and something easy like AIX/Linux boot drives. Boot drives are a very light load.

Transaction workloads focus on IOPS (I/O Operations Per Second) and have lower data rates. Reduce the SAS sizing factor if HDD is running a transaction workload. You would probably reduce the sizing factor even if SSD are running transaction workloads, but SSD are so fast you might leave the value higher even for a transaction workload. Alternatively, if you are doing data mining or engineering/scientific work, or handling large video files where there are large block transfers, then Gb/s are high and you should use the larger sizing factor for the SAS controller.

~~ #5901 / 5278 is a 6 if running lots of drives with large data blocks or is a 1 if just supporting boot drives. If configured as a pair, or as dual adapters, each adapter has that same sizing factor. So a sizing factor for a pair could be up to 12. Pairs are optional.

~~ #5805 / 5903 is an 8 if running lots of drives with large data blocks (especially SSDs) or a 1 if just doing boot drives. These adapters always work as pairs, and each of the pair has this sizing factor. The total sizing factor for a pair could thus be up to 16 with large block workloads.

~~ #5913 or #ESA3 is a 30 if running lots of SSDs or a 16 if running lots of HDDs. These adapters always work as pairs, and each of the pair has this sizing factor. The total sizing factor for a pair could thus be up to 60 with lots of SSD.

Page 33: Power PCIe Adapters 20141211

note that "lots" of SSD can vary whether you are running the newer 387GB SSD which are faster and have higher throughput than the 177GB SSD.

Sizing factors for Integrated I/O, 12-X attached I/O drawers, EXP30 Ultra Drawer and 4X IB adapter

Integrated IO Controllers

12X- attached I/O Drawers with PCIe slots #5802/5877 and #5803/5873

#5685 PCIe Gen2 Riser Card in Power 720/740 ("B" or "C" models): This 4-slot expansion option plugs into the GX++ slot. Its max throughput is 60 Gb/s
#5610 PCIe Gen1 Riser Card in Power 720/740 ("B" model only): This 4-slot expansion option plugs into the GX++ slot. Its max throughput is 40 Gb/s

NOTE: the table below assumes #5685 is used …. If #5610 is used, subtract 20 Gb/s from the system max and the riser card

Gen2/Gen1 note: The tool below ignores differences in Gen2 PCIe adapters in Gen1 PCIe slots. You will have to keep this in mind if using a #5913 or #ESA1 (SAS adapters) and assume a max of 15 vs 30. (Remember IOPS is not usually related to bandwidth limitations.) Most of the time Gen2 adapters are only supported in Gen2 slots, so it is not commonly a consideration.

The 710 through 780 have integrated controllers for running HDD/SSD, DVD, tape, USB ports. The "B" models have IVE/HEA which are 10Gb or 1Gb Ethernet. You need to include these in the calculations. You might ignore the DVD and USB as they are generally very small numbers whose peaks (if any) happen at off hours. Likewise the 1Gb Ethernet is small. But if you have integrated 10Gb Ethernet or the Dual RAID SAS options you need to include them in the calculations. Treat the integrated 10Gb Ethernet adapter like a PCIe 10Gb Ethernet adapter with the same number of 10Gb ports.

For a rough guide for the integrated Dual RAID controllers: You may be up to a 10 for the integrated pair (not 10 each). The 10 would be if you have some SSD or a lot of HDDs doing large block I/O. Remember that you can attach a #5887 EXP24S to the SAS port on the back of the server and then this same pair of integrated SAS controllers is handling another 24 SAS bays. Increase the sizing factor closer to 10 if running lots of drives -- especially if using large block I/O.

If you are NOT using the "dual" integrated SAS adapter option and are using a "single" integrated SAS adapter, use the same performance factor as a #5901 / 5278 SAS adapter with a performance sizing factor of 1-6. For the Power 770/780 you can have two of these single adapters, each with their own sizing factor. On a Power 710/720/730/740/750/755 you would have just one adapter with one sizing factor.

This tool focuses on the total bandwidth available to you. These I/O drawers attach to GX++ Adapters and it is that GX adapter which sets the maximum bandwidth available to all the adapters in that drawer. If you have two I/O drawers on that same GX++ adapter, the two drawers share that same total bandwidth of the GX++ adapter. Having two vs one #5802/5877 provides some redundancy and obviously provides more slots, but does NOT increase max bandwidth. Note that each #5803/5873 is logically two drawers and for higher bandwidth you would cable the drawer to two GX++ adapters.

For this estimator tool, one GX+ bus will support 40 Gb/s and a GX++ bus will support 60 Gb/s (see Note 2 above). GX+/++ buses can be "internal" or they can be "external". External GX buses are accessed through a GX slot. All of the external GX slots are GX++ on POWER7 servers except for the first GX slot on the Power 750.

Note that some servers share a bus for internal as well as for GX slots. Thus you'll see below times when the total bandwidth is less than the sum of the parts.

Note the Power 750 has two different GX adapters. The #5609 is GX+ (40Gb) and #5616 is GX++ (60Gb). If a #5609 is placed in the second GX++ slot, the bandwidth is restricted to 40Gb/s.

Page 34: Power PCIe Adapters 20141211

#EDR1 or #5888 EXP30 Ultra SSD I/O Drawer

The #EDR1 EXP30 Ultra Drawer attaches to a GX++ PCIe adapter placed in the GX slot of a "D" model 710/720/730/740/750/760/770/780 or a "C" model 770/780. The #5888 EXP30 Ultra Drawer attaches to a GX++ PCIe adapter placed in the GX slot of a "C" model 710/720/730/740. Inside the Ultra Drawer are two very high performance SAS controllers, more powerful than the #5913/#ESA3. So assume a sizing factor value of 60 for this drawer if a lot of busy SSD. Note each SAS controller has a PCIe cable connecting it to the GX++ PCIe adapter, but the cables can be plugged into different GX++ PCIe adapters. Thus assume a sizing factor of 30 for each cable plugged into the GX slot. The GX++ PCIe adapters are the #EJ03, #EJ0H and #1914. #EJ03 plugs into a 720/740. #EJ0H plugs into a 710/730. #1914 plugs into a 750/760/770/780.

#5266 GX++ Dual-port 4x Channel Attach

This specialized adapter is used only on the Power 710/730 "B" model 8231-E2B to connect to a 4x Infiniband switch in server clustering environments. (On 710/730 "C" models with PCIe Gen2 slots, the QDR PCIe adapter is used instead of this DDR option.) The adapter supports up to a max of 30Gb/s. Actual usage is HIGHLY dependent upon the clustering application. Use a sizing factor of 2 - 30 depending on the applications. A modest percentage of servers use this GX++ adapter.

The Tables

How to use the tables below (see also the example just below the Power 710/730 "B" model table and the example just below the Power 720/740 "C" model table):

1- Look up the sizing factor for each of the adapters. Make sure you know in which PCI slot the adapter(s) will be used.

2- Add the maximum sizing factor of each adapter in that subgroup of PCIe slots represented by a separate column in the tables below. Do this for all subgroups which your server will be using.

3- For the subgroups which have internal I/O, also add the sizing factor(s) for the internal I/O.

4- Compare each subgroup's sizing factor total to the subtotal maximum. 4a- If the sizing factor total is less than the subtotal maximum, you easily have adequate bandwidth. 4b- If the total sizing factor is larger than the subtotal of the column, take action. Review the sizing factors and adjust downward as appropriate. Compare the revised sizing factor total to the subgroup maximum. If still too high, take action such as moving adapters out of that subgroup.

5- Then add the subgroups together and compare to the total bandwidth available for the server. If the adapters within a subgroup have a larger total sizing factor than the subgroup maximum, use the smaller value. 5a- If your total of subgroups is less than or equal to the system total, you should have adequate bandwidth. 5b- If your total of subgroups exceeds the system total, see if additional sizing factor adjustments would be helpful and compare again. If the system total is still exceeded, take appropriate action or make sure expectations have been set with the client.

Important insight: This tool focuses on aggregate bandwidth with subtotals. The "subtotals" or "subsystem limits" are important. Though individual Gen1 or individual Gen2 PCIe slots are the same, groups of Gen1 PCIe slots or groups of Gen2 PCIe slots sometimes share bandwidth. Bandwidth can also be shared with other internal I/O. The tables below lay out the subtotals and the total. If you exceed the bandwidth of a subtotal, for example a GX slot, that means you have a bottleneck which will limit max throughput of the adapters located in that group of PCI slots.

Page 35: Power PCIe Adapters 20141211

For Power 710/730 … "B" models
8231 E2B POWER7 710 1-socket, 730 2-socket
(columns: Internals - 4 slots plus integrated IO | 1st GX slot, only for #5266 | 2nd GX slot, only for #5266 | total)

710 using 0 GX slots: 40 Gb/s | n/a | n/a | 40 Gb/s
710 using 1 GX slot: 40 Gb/s | 30 Gb/s | n/a | 70 Gb/s
730 using 0 GX slots: 40 Gb/s | n/a | n/a | 40 Gb/s
730 using 1 GX slot: 40 Gb/s | 30 Gb/s | n/a | 70 Gb/s
730 using 2 GX slots: 40 Gb/s | 30 Gb/s | 30 Gb/s | 100 Gb/s

For Power 710/730 and for PowerLinux 7R1/7R2 … "C" or "D" models with PCIe Gen2 slots
8231 E1C / E1D POWER7 710 1-socket
8231 E2C / E2D POWER7 730 2-socket
8246 L1C/L1S/L1D/L1T POWER7 7R1 1-socket PowerLinux (use same rules as for 710)
8246 L2C/L2S/L2D/L2T POWER7 7R2 2-socket PowerLinux (use same rules as for 730)

total

60 Gb/s n/a n/a 60 Gb/s

Example: Power 730 B model. (See the Power 710/730 "B" model table above.) In the system unit (internal four slots) you have two 2-port 8Gb Fibre Channel adapters (sizing factor 16) and one 4-port 1Gb Ethernet adapter (sizing factor 0.4-4). (One of the PCIe slots is empty). You have three 177GB SSD in the six SAS bays run by the integrated SAS adapter (sizing factor up to 10) and you have a 2-port 10Gb Ethernet IVE/HEA (sizing factor 10-20). In this example the GX slots are not being used. You get the sizing factors from the "PCIe Adapter" worksheet/tab for the PCIe adapters and from the text above for the integrated I/O.

Example: Power 730 B model continued. Totaling the max sizing factors for these components yields 16+16+4+0+10+20 = 66. (16+16+4+0 = PCIe slots and 10+20 = internal.) 66 is larger than the 40 Gb/s max bandwidth available for the internals subgroup, as shown in the 710/730 B model table above. So you need to review these numbers to see if lower, more realistic sizing factor values should be used instead of the 16+16+4+0+10+20 maximums. If after the review and assignment of lower sizing factors you are still above 40Gb/s, you need to move some I/O to the 2nd GX slot in an I/O drawer. Or you could switch to a 730 C model and gain a lot of bandwidth.
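The same 730 B example, worked as a small script (a sketch only; the helper name and structure are illustrative and not part of the spreadsheet, and the 40 Gb/s maximum comes from the 710/730 "B" model table above):

def check_subgroup(name, sizing_factors, subgroup_max_gbps):
    # Total the sizing factors for one subgroup of slots and compare to its maximum
    total = sum(sizing_factors)
    verdict = "OK" if total <= subgroup_max_gbps else "over - adjust sizing factors or move I/O"
    print(f"{name}: {total} vs {subgroup_max_gbps} Gb/s -> {verdict}")
    return total

pcie_slots = [16, 16, 4, 0]    # two 2-port 8Gb FC adapters, one 4-port 1GbE, one empty slot
integrated = [10, 20]          # integrated SAS with SSD, 2-port 10Gb IVE/HEA
check_subgroup("Power 730 B internals", pcie_slots + integrated, 40)   # 66 vs 40 -> over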

Internals 6 slots plus integrated IO

1st GX slot only for #EJ0H or SPCN card

2nd GX slot for GX++ adapter for 12X I/O loop or for #EJ0H

710 using 0 GX slots

Page 36: Power PCIe Adapters 20141211

60 Gb/s 30 Gb/s n/a 90 Gb/s

60 Gb/s n/a n/a 60 Gb/s

60 Gb/s 30 Gb/s n/a 90 Gb/s

60 Gb/s 30 Gb/s 30 Gb/s 120 Gb/s

60 Gb/s 60 Gb/s 120 Gb/s

For Power 720/740 … "B" models8202 E4B POWER7 720 1-socket8205 E6B

total

40 Gb/s n/a n/a 40 Gb/s

40 Gb/s 60 Gb/s n/a 100 Gb/s

40 Gb/s n/a n/a 40 Gb/s

40 Gb/s 60 Gb/s n/a 100 Gb/s

40 Gb/s 60 Gb/s 60 Gb/s 160 Gb/s

For Power 720/740 … "C" or "D" models with PCIe Gen2 slots8202 E4C or E4D POWER7 720 1-socket8205 E6C or E6D

total

60 Gb/s n/a n/a 60 Gb/s

710 using 1 GX slot

730 using 0 GX slots

730 using 1 GX slot

730 using 2 GX slots

730 using 2 GX slots

n/a (slot filled by SPCN card)

POWER7 740 2-socket (if only one socket populated, treat as 720 in this tool)

Internal 4 slots plus integrated IO

1st GX slot for GX++ adapter for 12X I/O loop or for PCIe Riser Card

2nd GX slot for GX++ adapter for 12X I/O loop (740 only)

720 using 0 GX slots

720 using 1 GX slot

740 using 0 GX slots

740 using 1 GX slot

740 using 2 GX slots

POWER7 740 2-socket (if only one socket populated, treat as 720 in this tool)

Internals 6 slots plus integrated IO

1st GX slot for GX++ adapter for 12X I/O loop or for PCIe Riser Card or for #EJ03 & EXP30

2nd GX slot for GX++ adapter for 12X I/O loop (740 only) or for #EJ03 & EXP30.

720 using 0 GX slots

Page 37: Power PCIe Adapters 20141211

60 Gb/s 60 Gb/s n/a 100 Gb/s

60 Gb/s n/a n/a 60 Gb/s

60 Gb/s 60 Gb/s n/a 100 Gb/s

60 Gb/s 60 Gb/s 60 Gb/s 160 Gb/s

For Power 750/755 "B" model Note this has both PCI-X DDR and PCIe Gen1 slots8233 E8B POWER7 750 4 socket (if only one socket populated, 2nd GX slot not enabled8236 E8C POWER7 755 4-socket (note 12X-I/O drawers not supported)

No

total

40 Gb/s n/a n/a 40 Gb/s

40 Gb/s 40 Gb/s n/a 40 Gb/s

40 Gb/s n/a n/a 40 Gb/s

40 Gb/s 40 Gb/s n/a 40 Gb/s

720 using 1 GX slot

740 using 0 GX slots

740 using 1 GX slot

740 using 2 GX slots

Example: Power 720 C model. In the system unit (internal six slots) you have one 2-port 4Gb Fibre Channel adapter (sizing factor 8) and two 2-port 10Gb FCoE adapters (sizing factor 20 each) and an Async/comm card (sizing factor 0.1-0.2) and one required 4-port 1Gb Ethernet adapter in the C7 slot (sizing factor 0.4-4). (One of the PCIe slots is empty). You have four HDD in the six SAS bays run by the integrated SAS adapters (sizing factor up to 10 - probably less here with HDD). You have two PCIe RAID SAS adapters in a PCIe Riser Card each with four SSD (sizing factor 6 each). Totaling the max sizing factors for these internal components yields 8+20+20+0.2+0+4+10 = 62.2 (PCIe slots = 8+20+20+0.2+4 and internal = 10). 62.2 is larger than the 60 Gb/s max bandwidth available for these internal slots, so you need to analyze the sizing factors to determine if they actually use less bandwidth. Let's assume you do so and the revised total is less than 60 ... which we'll call "Adjusted 62.2". Example continued below.

Next look at the GX slot sizing factors of 6+6 = 12. 12 is less than 60 so you are OK on this subtotal. Finally, since on the Power 720 there is an overlap of bandwidth between the 1st GX slot and the internal slots, add all these sizing factors together. "Adjusted 62.2" + 12 is less than 100Gb/s so you fit within the system maximum as well as both subtotal maximums. Your configuration's bandwidth requirements should be OK.
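And the 720 C example as plain numbers (a sketch; the 60 Gb/s subtotals and 100 Gb/s system maximum come from the 720/740 "C"/"D" table above, and the adjusted value is just an assumed outcome of the review described in the example):

internals = 8 + 20 + 20 + 0.2 + 4 + 10   # six internal slots plus integrated SAS = 62.2
riser = 6 + 6                            # two RAID SAS adapters in the PCIe Riser Card = 12
print(internals <= 60)                   # False -> review and adjust the internal sizing factors
print(riser <= 60)                       # True  -> the riser subgroup subtotal is fine
adjusted_internals = 58                  # assumed revised total after the review ("Adjusted 62.2")
print(adjusted_internals + riser <= 100) # True  -> fits within the 720 "C" system maximum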

Internals 5 PCI slots plus integrated IO

1st GX slot for GX+ adapter for 12X I/O loop

2nd GX slot for GX++ adapter for 12X I/O loop (2-4 socket only)

One socket with 0 12X loops

One socket with 1 12X loop

Two-to-four sockets with 0 12X loops

Two-to-four sockets with 1 12X loops

Page 38: Power PCIe Adapters 20141211

40 Gb/s n/a 60 Gb/s 100 Gb/s

40 Gb/s 40 Gb/s 60 Gb/s 100 Gb/s

For Power 750/760 "D" models or the PowerLinux 7R4 with PCIe Gen2 slots8408 E8D POWER7 7509109 RMD POWER7 7608248 L4T PolwerLinux 7R4

With 1 proc DCM 60 Gb/s 60 Gb/s n/a n/a
60 Gb/s 60 Gb/s 60 Gb/s n/a

60 Gb/s 60 Gb/s 60 Gb/s 60 Gb/s

For Power 770/780 … "B" models9117 MMB POWER7 7709179 MHB POWER7 780

40 Gb/s 40 Gb/s n/a n/a

40 Gb/s 40 Gb/s 60 Gb/s n/a

40 Gb/s 40 Gb/s 60 Gb/s 60 Gb/s

Two-to-four sockets with 1 12X loops

Two-to-four sockets with 2 12X loops

Internal PCIe slots 1-4

Internal slots 5,6 plus integrated IO

1st GX slot for GX++ adapter for 12X I/O loop or for #1914 & EXP30

2nd GX slot for GX++ adapter for 12X I/O loop or for #1914 & EXP30

With 2 or more proc DCM and one 12X I/O loop

With 2 or more proc DCM and two 12X I/O loop

Per NODE / Processor enclosure

Internal PCIe slots 1-4

Internal PCIe slots 5,6 plus integrated IO

1st GX slot for GX++ adapter for 12X I/O loop

2nd GX slot for GX++ adapter for 12X I/O loop

With 0 12X I/O loops

With 1 12X I/O loop

With 2 12X I/O loops

Page 39: Power PCIe Adapters 20141211

For Power 770/780 … "C" and "D" models with PCIe Gen2 slots9117 MMC / MMD POWER7 7709179 MHC / MHD POWER7 7808412 EAD Power ESE this model has a max of one processor enclosure/node

60 Gb/s 60 Gb/s

60 Gb/s 60 Gb/s 60 Gb/s

60 Gb/s 60 Gb/s 60 Gb/s 60 Gb/s

For Power 795: each processor book has 4 GX++ slots
9119 FHB POWER7 795

GX slot one GX slot two GX slot three GX slot four

60 Gb/s

60 Gb/s 60 Gb/s

60 Gb/s 60 Gb/s 60 Gb/s

60 Gb/s 60 Gb/s 60 Gb/s 60 Gb/s

Optional reading: more in-depth discussion on sharing of bandwidth by PCIe slots

SYSTEM units
Power 710/720/730/740 "B" and "C" and "D" model system units --- All PCIe slots within a system unit are equal
Power 750/755 system unit --- All PCIe slots are equal -- All PCI-X DDR slots are equal

Per NODE / Processor enclosure

Internal PCIe slots 1-4

Internal slots 5,6 plus integrated IO

1st GX slot for GX++ adapter for 12X I/O loop or for #1914 & EXP30

2nd GX slot for GX++ adapter for 12X I/O loop or for #1914 & EXP30

With 0 12X I/O loops

With 1 12X I/O loop

With 2 12X I/O loops

Per Processor Book

With 1 12X I/O loop

With 2 12X I/O loops

With 3 12X I/O loops

With 4 12X I/O loops

As shown in the tables above, not all PCIe slots are equal. Individually they are equal, but depending on the server or depending on the I/O drawer, some slots share bandwidth with other slots or I/O. The following sentences summarize the tables above for the servers' PCIe slots.

Page 40: Power PCIe Adapters 20141211

Power 795 has no PCI slots in the system unit -- everything attached through GX++ slots

PCIe Riser Card
#5610, which can be placed in the 720/740 "B" system unit …. All PCIe slots (Gen1) are equal
#5685, which can be placed in the 720/740 "B or C" or "D" system unit …. All PCIe slots (Gen2) are equal

12X-attached PCIe I/O drawers: #5802/5877 (19-inch rack) and #5803/5873 (for 795/595)

Power 750/760 "D" model system units --- All PCIe slots are NOT equal. Individual slots are equal, but slots 1-4 are on one bus and slots 5-6 are on a separate bus which shares bandwidth with internal controllers

Power 770/780 "B" and "C" and "D" model system units --- All PCIe slots are NOT equal. Individual slots are equal, but slots 1-4 are on one bus and slots 5-6 are on a separate bus which shares bandwidth with internal controllers

The bandwidth available to one or more I/O drawers on a 12X loop is obviously the same as that of the GX slot to which they are attached. Thus from this rough sizing tool's perspective, it usually doesn't make sense to worry about individual PCIe slots within the I/O drawer. Most of the time trying to optimize PCIe card placement within an I/O drawer for most client workloads won't make that much difference. However, for those people who are focused on the details, the following information is shared. See Info Center for more detail.

#5802/5877 when only one drawer per GX adapter (one per 12X loop). A drawer has 10 PCIe Gen1 slots and two connections to the GX adapter in the server. The ten PCIe slots are in 3 subgroups; Slots 1,2,3 and Slots 4,5,6 and Slots 7,8,9,10. Slots 1,2,3 and Slots 4,5,6 each have a separate connection to the GX adapter. Slots 7,8,9,10 first connect to the Slot 1,2,3 subgroup and share the bandwidth of the Slot 1,2,3 connection to the GX adapter. Thus there may be a slight advantage in putting the highest bandwidth adapters in Slot 4,5,6 and also in putting the adapters with the least latency requirements in Slots 1,2,3 or 4,5,6. HOWEVER, most of the time this will not make much (if any) noticeable difference to the client assuming your total bandwidth need is reasonable.

#5802/5877 when two drawers per GX adapter (two per 12X loop). Each drawer will have a connection to the GX adapter. All three PCIe subgroups in a drawer share that one connection. Depending on the cabling used either Slots 1,2,3 or Slots 4,5,6 will be the "closest" to that connection upstream to the GX adapter. The other two PCIe subgroups will connect through the "closer" PCIe subgroup. HOWEVER, most of the time where the PCIe adapter is placed in the drawer will not make much (if any) noticeable difference to the client. The fact that two drawers share the total GX adapter bandwidth can make a difference. So where I/O bandwidth is a consideration, try to use just one I/O drawer per loop. Generally try to balance the bandwidth needs across the two I/O drawers.

#5803/5873 have 20 PCIe slots. Think of this drawer as logically two 10-slot PCIe drawers inside one drawer. You can attach each half of a drawer to a GX adapter or you can attach both halves of a drawer to a single GX adapter. Each of the two logical drawers is identical to a #5802/5877 described above.

For the extremely analytical reader … Each #5802/5877/5803/5873 PCIe slot subgroup is on its own PCIe 8x PHB. Specifications for this PHB are: BURST = 2 GB/s simplex & 4 GB/s duplex SUSTAINED = 1.6 GB/s simplex & 2.2 GB/s duplex. Most of the time, this level of granular detail is not useful in client configuration sizings.

Page 41: Power PCIe Adapters 20141211

Sizing factors for Integrated I/O, 12-X drawers, Ultra Drawer, 4X IB GX adapter

This tool is intended as a rough rule of thumb to determine when you are approaching the IO limits of a system or IO drawer.

HOWEVER there are many variables and even more combinations of these variables which can impact the actual resulting performance in a real environment. Thus always apply common sense and remember this is designed to be a quick and simple approach ... a rough sizer. This is not a heavy-weight tool full of precise measurements and interactions. Keep in mind the key planning performance words, "it depends" and "your

You will be using the "Sizing Factor" for each PCIe adapter. These are found in the "Sizing Factor" column of the "PCIe Adapter" worksheet/tab of this spreadsheet.

For example, feature code #5769 (10Gb 1-port Ethernet adapter) has a sizing factor of 5 to 10. If utilization is high, use 10 for the value. If usage is low, use 5. Or pick 6, 7, 8, or 9 if you think it is somewhere in between.

Or another example, feature code #5735 (8Gb 2-port Fibre Channel Adapter) has a sizing factor of 16 assuming both ports are being used. But if only one port is in use, use half that value (8).
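For readers who prefer to script this instead of eyeballing it, here is a minimal sketch (not part of the original spreadsheet) of picking a value inside the published range; the function name pick_factor and the utilization argument are illustrative assumptions only.

# Choose a sizing factor within the published range based on expected utilization
# (0.0 = very light, 1.0 = very busy). Names are illustrative placeholders.
def pick_factor(low, high, utilization):
    return low + (high - low) * utilization

print(pick_factor(5, 10, 1.0))   # #5769 at high utilization -> 10
print(pick_factor(5, 10, 0.0))   # #5769 at low utilization -> 5
print(16 / 2)                    # #5735 with only one of its two 8Gb ports in use -> 8.0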

Totaling all the sizing factors for all the adapters will give you a good idea whether you are coming close to a performance limit for your I/O. You will be looking at an overall system performance limit as well as "subsystem limits" or "subtotals". Subsystem limits are maximums per individual GX bus, system unit, I/O drawer, PCIe Riser Card, etc. The maximum Gb/s for each subtotal/subsystem and for the overall POWER7 system are found in the tables below.

Note: these values do NOT translate directly into MB/s numbers. This tool builds on the observation that most adapters are rarely run at 100% utilization in real client application environments. Thus, to help avoid over-configuring for most shops, the tool assigns maximum performance factors that sit below the adapters' theoretical peaks.

Note: if you are correlating these bandwidth maximums to the bandwidth values provided in marketing presentations and in the facts/features document, observe that there is a difference. The marketing materials are correct, but use peak bandwidth and assume full duplex connections. This sizing tool takes a more conservative approach which probably represents more typical usage by real clients. It assumes simplex connections.

For example, for simplicity marketing materials provide a single performance value of 20Gb/s for GX adapters/slots. The details behind this are that the GX adapter or 12X Hub specifications are: BURST = 10 GB/s simplex & 20 GB/s duplex; SUSTAINED = 5 GB/s DMA simplex & 8 GB/s DMA duplex.

This tool uses Gb (bits) vs GB (bytes) because there is a lot of variability between different protocols/adapters in how their GB ratings translate into Gb on the wire.
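Because the tool works in bits while the hardware specifications above are quoted in bytes, the basic conversion is a factor of 8 (protocol encoding overhead ignored). A minimal sketch, using the sustained GX figure quoted above; the function name is an assumption for illustration.

# Bytes-per-second to bits-per-second, as used by this rough sizer.
def gbytes_to_gbits(gbytes_per_sec):
    return gbytes_per_sec * 8

# 5 GB/s sustained simplex works out to 40 Gb/s, which lines up with the
# 40 Gb/s this tool assigns to a GX+ bus later in this section.
print(gbytes_to_gbits(5))   # 40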

Page 42: Power PCIe Adapters 20141211

#ESA1 / ESA2 run only SSDs, so use 28 per adapter unless lightly loaded or unless only a few SSDs are attached. Pairs are optional.

~~ #2053 / 2054 / 2055 run only SSDs, so use a 6 if all four SSDs are present. These adapters are not paired (but can be mirrored).

Remember to ask those common sense questions. "Are these adapters being run at the same time?" For example, if you have an 8Gb FC port that only goes to a tape drive that runs one night a week when other adapters are not busy, you probably do not need to count it at all in the performance analysis. Another good question: "Do you have redundant adapters or ports which are there only in case the primary adapter/port fails?" If so, they may not need to be counted either.

For adapter ports, simplex line rates are assumed, not duplex. Based on observations over the years this is a good simplification. Many client workloads aren't hitting the point where these adapters' duplex capability is a factor in calculating throughput. If your environment is a real exception and you know you have a really heavy workload using duplex, then add another 20-25% to the sizing factor.

The actual real data rates in most applications are typically less than the line rate. This is especially true for 10Gb Ethernet, where many applications run the adapter at less than 50% of line rate. If you know this is true for you, cut the sizing factor in half (think of it as a 5Gb Ethernet adapter).

Similarly, if you have an 8Gb or 16Gb Fibre Channel adapter and it is attached to a 4Gb switch, then treat it like a 4Gb adapter (cut the 8Gb sizing factor in half). Likewise, if you have a 16Gb FC adapter on an 8Gb switch, treat it like an 8Gb adapter.
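The same "size to the slower link" rule can be written as a tiny helper. This is a sketch only; the function and its arguments are assumptions, not part of the spreadsheet.

# Derate a published sizing factor when the adapter is attached to a slower switch.
def derated_factor(base_factor, adapter_gb, switch_gb):
    return base_factor * min(adapter_gb, switch_gb) / adapter_gb

print(derated_factor(16, 8, 4))    # 8Gb FC adapter (factor 16) on a 4Gb switch -> 8.0
print(derated_factor(16, 16, 8))   # 16Gb FC adapter on an 8Gb switch -> 8.0
print(derated_factor(10, 10, 10))  # 10Gb Ethernet at matching speeds -> 10.0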

Treat CNA (FCoE) adapters based on the mix of work done. If using a port for Fibre Channel only, treat it just like the 8Gb Fibre Channel adapter port above. If using a port solely for NIC traffic, treat it like a 10Gb Ethernet port above. If a port has mixed FC and Ethernet workloads, assign a sizing factor based on the mix.

Don't be surprised by SAS adapters' max throughput numbers. A 2-port, 6Gb adapter is not simply 2 x 6 = 12 Gb/s. That overlooks the fact that each port/connector is a "x4" connection: four lanes per connection yields 2 x 6 x 4 = 48 Gb/s. That is bigger than a 2-port 8Gb Fibre Channel card (2 x 8 = 16), though FC in duplex mode gives 2 x 16 = 32, which is much closer to the 48 of the 2-port SAS adapter. Also remember most SAS adapters run in pairs, which makes the comparison more challenging. "It all depends".
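The arithmetic behind that comparison, written out as a sketch (variable names are just labels, not spreadsheet values):

# Each 6Gb SAS connector is a x4 connection, so lanes multiply the per-port rate.
sas_2port_6gb        = 2 * 6 * 4   # 48 Gb/s
fc_2port_8gb_simplex = 2 * 8       # 16 Gb/s
fc_2port_8gb_duplex  = 2 * 8 * 2   # 32 Gb/s if full duplex is credited
print(sas_2port_6gb, fc_2port_8gb_simplex, fc_2port_8gb_duplex)   # 48 16 32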

Usually there are a lot of SAS ports for connectivity and the per-port workload is lighter. And remember there can be a huge difference between HDD and SSD throughput.

Also note there is a big difference in typical workload between heavy data / database work and something light like AIX/Linux boot drives.

Transaction workloads focus on IOPS (I/O Operations Per Second) and have lower data rates. Reduce the SAS sizing factor if HDDs are running a transaction workload. You would probably reduce the sizing factor even for SSDs running transaction workloads, but SSDs are so fast you might leave the value higher. Alternatively, if you are doing data mining, engineering/scientific work, or handling large video files where there are large block transfers, then Gb/s are high; use the larger sizing factor for the SAS controller.

~~ #5901 / 5278 is a 6 if running lots of drives with large data blocks or a 1 if just supporting boot drives. If configured as a pair, or as dual adapters, each adapter has that same sizing factor, so a sizing factor for a pair could be up to 12. Pairs are optional.

~~ #5805 / 5903 is an 8 if running lots of drives with large data blocks (especially SSDs) or a 1 if just doing boot drives. These adapters always work as pairs, and each of the pair has this sizing factor. The total sizing factor for a pair could thus be up to 16 with large block workloads.

~~ #5913 or #ESA3 is a 30 if running lots of SSDs or a 16 if running lots of HDDs. These adapters always work as pairs, and each of the pair has this sizing factor. The total sizing factor for a pair could thus be up to 60 with lots of SSD.

Page 43: Power PCIe Adapters 20141211

note that "lots" of SSD can vary whether you are running the newer 387GB SSD which are faster and have higher throughput than the 177GB SSD.

Sizing factors for Integrated I/O, 12-X attached I/O drawers, EXP30 Ultra Drawer and 4X IB adapter

NOTE: the table below assumes #5685 is used ... If #5610 is used, subtract 20 Gb/s from the system max and from the riser card subtotal.

The tool below ignores the performance difference when Gen2 PCIe adapters are placed in Gen1 PCIe slots. You will have to keep this in mind if using a #5913 or #ESA1 (SAS adapters) and assume a max of 15 vs 30. (Remember IOPS is not usually related to bandwidth limitations.) Most of the time a Gen2 adapter in a Gen1 slot is not a practical concern for this rough sizing.

The 710 through 780 have integrated controllers for running HDD/SSD, DVD, tape, and USB ports. The "B" models have IVE/HEA, which is 10Gb or 1Gb Ethernet. You need to include these in the calculations. You might ignore the DVD and USB as they are generally very small numbers whose peaks (if any) happen at off hours. Likewise the 1Gb Ethernet is small. But if you have integrated 10Gb Ethernet or the Dual RAID SAS options, you need to include them in the calculations. Treat the integrated 10Gb Ethernet adapter like a PCIe 10Gb Ethernet adapter with the same number of 10Gb ports.

For a rough guide for the integrated Dual RAID controllers: you may be up to a 10 for the integrated pair (not 10 each). The 10 would be if you have some SSDs or a lot of HDDs doing large block I/O. Remember that you can attach a #5887 EXP24S to the SAS port on the back of the server, and then this same pair of integrated SAS controllers is handling another 24 SAS bays. Increase the sizing factor closer to 10 if running lots of drives that way.

If you are NOT using the "dual" integrated SAS adapter option and are using a "single" integrated SAS adapter, use the same performance factor as a #5901 / 5278 SAS adapter, with a performance sizing factor of 1-6. For the Power 770/780 you can have two of these single adapters, each with its own sizing factor. For a Power 710/720/730/740/750/755 you would have just one adapter with one sizing factor.

This tool focuses on the total bandwidth available to you. These I/O drawers attach to GX++ adapters, and it is that GX adapter which sets the maximum bandwidth available to all the adapters in that drawer. If you have two I/O drawers on that same GX++ adapter, the two drawers share the total bandwidth of the GX++ adapter. Having two vs one #5802/5877 provides some redundancy and obviously provides more slots, but does NOT increase max bandwidth. Note that each #5803/5873 is logically two drawers, and for higher bandwidth you would cable the drawer to two different GX++ adapters.

For this estimator tool, one GX+ bus will support 40 Gb/s and a GX++ bus will support 60 Gb/s (see Note2 above). GX+/++ buses can be "internal" or they can be "external". External GX buses are accessed through a GX slot. All of the external GX slots are GX++ on POWER7 servers except for the Power 750's GX+ option (see the note below).

Note that some servers share a bus for internal I/O as well as for GX slots. Thus you'll see below times when the total bandwidth is less than the sum of the subtotals.

Note the Power 750 has two different GX adapters. The #5609 is GX+ (40Gb) and the #5616 is GX++ (60Gb). If a #5609 is placed in the second GX slot, use 40 Gb/s for that slot rather than 60 Gb/s.

Page 44: Power PCIe Adapters 20141211

1- Look up the sizing factor for each of the adapters. Make sure you know in which PCIe slot the adapter(s) will be used.

2- Add the maximum sizing factor of each adapter in that subgroup of PCIe slots represented by a separate column in the tables.

3- For the subgroups which have internal I/O, also add the sizing factor(s) for the internal I/O.

4- Compare each subgroup's sizing factor total to the subtotal maximum. 4a- If the sizing factor total is less than the subtotal maximum, you easily have adequate bandwidth. 4b- If the sizing factor total is larger than the subtotal of the column, take action. Review the sizing factors and adjust downward as appropriate, then compare the revised sizing factor total to the subgroup maximum.

5- Then add the subgroups together and compare to the total bandwidth available for the server. If the adapters within a subgroup have a larger total sizing factor than the subgroup maximum, use the smaller value. 5a- If your total of subgroups is less than or equal to the system total, you should have adequate bandwidth. 5b- If your total of subgroups exceeds the system total, see if additional sizing factor adjustments would be helpful and compare again. If the system total is still exceeded, take appropriate action. (A minimal sketch of this check appears below.)

(See also the example just below the Power 710/730 B models table and the example just below the Power 720/740 C models table.)

The #EDR1 EXP30 Ultra Drawer attaches to a GX++ PCIe adapter placed in the GX slot of a "D" model 710/720/730/740/750/760/770/780 or a "C" model 770/780. The #5888 EXP30 Ultra Drawer attaches to a GX++ PCIe adapter placed in the GX slot of a "C" model 710/720/730/740. Inside the Ultra Drawer are two very high performance SAS controllers, more powerful than the #5913/#ESA3, so assume a sizing factor value of 60 for this drawer if it holds a lot of busy SSDs. Note each SAS controller has a PCIe cable connecting it to the GX++ PCIe adapter, but the cables can be plugged into different GX++ PCIe adapters. Thus assume a sizing factor of 30 for each cable plugged into a GX slot. The GX++ PCIe adapters are the #EJ03, #EJ0H, and #1914. #EJ03 plugs into a 720/740; #EJ0H plugs into a 710/730.

This specialized adapter (4X DDR InfiniBand) is used only on the Power 710/730 "B" model 8231-E2B to connect to a 4x Infiniband switch in server clustering environments. (On 710/730 "C" models with PCIe Gen2 slots, the QDR PCIe adapter is used instead of this DDR option.) The adapter supports up to a max of 30Gb/s. Actual usage is HIGHLY dependent upon the clustering application. Use a sizing factor of 2 - 30 depending on the application's traffic.
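Below is a minimal sketch (not part of the original spreadsheet) of steps 1-5. The group names, subtotal maximums, and sizing factors are illustrative placeholders you would replace with the values from the tables and the "PCIe Adapter" worksheet.

def check_config(groups, system_max_gb):
    """groups maps a slot-group name to (subtotal_max_gb, [sizing factors])."""
    system_total = 0
    for name, (subtotal_max, factors) in groups.items():
        subtotal = sum(factors)                      # steps 2 and 3: add the factors in this subgroup
        status = "ok" if subtotal <= subtotal_max else "OVER - adjust factors or move adapters"
        print(f"{name}: {subtotal} of {subtotal_max} Gb/s -> {status}")   # step 4
        system_total += min(subtotal, subtotal_max)  # step 5: cap a subgroup at its own maximum
    status = "ok" if system_total <= system_max_gb else "OVER - adjust further or reconfigure"
    print(f"system: {system_total} of {system_max_gb} Gb/s -> {status}")

# Example with made-up numbers:
check_config({"system unit slots + internal I/O": (60, [16, 16, 4, 10]),
              "GX slot 1 (12X I/O drawer)": (60, [10, 6])},
             system_max_gb=100)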

This tool focuses on aggregate bandwidth with subtotals. The "subtotals" or "subsystem limits" are important. Though individual Gen1 or individual Gen2 PCIe slots are the same, groups of Gen1 PCIe slots or groups of Gen2 PCIe slots sometimes share bandwidth. Bandwidth can also be shared with other internal I/O. The table below lays out the subtotals and the total. If you exceed the bandwidth of a subtotal, for example a GX slot, that means you have a bottleneck which will limit max throughput of the adapters located in that group of PCI slots.

Page 45: Power PCIe Adapters 20141211

#5266 is for Infiniband switch connection

POWER7 R71 1-socket PowerLinux: use same rules as for 710
POWER7 R72 2-socket PowerLinux: use same rules as for 730

#EJ0H is for #5888 EXP30 Ultra SSD I/O Drawer attach

(See the table just above this row.) In the system unit (internal four slots) you have two 2-port 8Gb Fibre Channel adapters (sizing factor 16 each) and one 4-port 1Gb Ethernet adapter (sizing factor 0.4-4). (One of the PCIe slots is empty.) You have three 177GB SSDs in the six SAS bays run by the integrated SAS adapter (sizing factor up to 10) and you have a 2-port 10Gb Ethernet IVE/HEA (sizing factor 10-20). In this example the GX slots are not being used. You get the sizing factors from the "PCIe Adapter" worksheet/tab for the PCIe adapters and from the text above for the integrated I/O.

Example: Power 730 B model continued. Totaling the max sizing factors for these components yields 16+16+4+0+10+20 = 66. (16+16+4+0 = PCIe slots and 10+20 = internal.) 66 is larger than the 40 Gb/s max bandwidth available as shown in column C of the 710/730 B Model table above. So you need to review these numbers to see if lower, more realistic sizing factor values should be used instead of the 16+16+4+0+10+20 maximums. If after the review and assignment of lower sizing factors you are still above 40Gb/s, you need to move some I/O to the 2nd GX slot in an I/O drawer. Or you could switch to a 730 C model and gain a lot of bandwidth.
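The same Power 730 B example as plain arithmetic (values copied straight from the paragraph above; adjust downward after reviewing real utilization):

pcie_slots = [16, 16, 4, 0]   # two 8Gb FC adapters, one 4-port 1GbE adapter, one empty slot
internal   = [10, 20]         # integrated SAS running three SSDs, 2-port 10Gb IVE/HEA
total = sum(pcie_slots) + sum(internal)
print(total)                  # 66 -> exceeds the 40 Gb/s maximum, so review factors or move I/O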

Page 46: Power PCIe Adapters 20141211

two #EJ0H EXP30 Ultra Drawer connections in this example

#5610 PCIe Riser card is Gen1 and has a max of 40Gb/s

#5685 PCIe Riser card is Gen2 and has a max of 60Gb/s

#EJ0G GX++ adapter for 12X I/O loop on 730 uses BOTH GX slots and covers one x4 PCIe slot and one x8 PCIe slot

(if only one socket populated, treat as 720 in this tool)

Notes: A 4-core 720 does not support a GX++ adapter for an I/O loop. Can attach a 12X to 4X converter cable to the GX++ adapter for DDR switch connection.

(if only one socket populated, treat as 720 in this tool)

Notes: A 4-core 720 does not support a GX++ adapter for an I/O loop. #5610 not supported, use #5685. Use the QDR PCIe adapter, not the 12X to 4X converter cable.

Page 47: Power PCIe Adapters 20141211

(yes, total is 100 Gb/s, not 120Gb/s)

(yes, total is 100 Gb/s, not 120Gb/s)

(yes, total is 160 Gb/s, not 180Gb/s)

POWER7 750 4-socket (if only one socket populated, 2nd GX slot not enabled)
POWER7 755 4-socket (note 12X I/O drawers not supported)

5 internal slots = 3 PCI-X and 2 PCIe slots

(yes, total is 40 Gb/s, not 80Gb/s)

(yes, total is 40 Gb/s, not 80Gb/s)

In the system unit (internal six slots) you have one 2-port 4Gb Fibre Channel adapter (sizing factor 8), two 2-port 10Gb FCoE adapters (sizing factor 20 each), an Async/comm card (sizing factor 0.1-0.2), and one required 4-port 1Gb Ethernet adapter in the C7 slot (sizing factor 0.4-4). (One of the PCIe slots is empty.) You have four HDDs in the six SAS bays run by the integrated SAS adapters (sizing factor up to 10 - probably less here with HDDs). You have two PCIe RAID SAS adapters in a PCIe Riser Card, each with four SSDs (sizing factor 6 each). Totaling the max sizing factors for these internal components yields 8+20+20+0.2+0+4+10 = 62.2 (PCIe slots = 8+20+20+0.2+4 and internal = 10). 62.2 is larger than the 60 Gb/s max bandwidth available for these internal slots, so you need to analyze the sizing factors to determine if they actually use less bandwidth. Let's assume you do so and the adjusted total now fits within the 60 Gb/s subtotal maximum.

example continued below

Next look at the GX slot sizing factors of 6+6 = 12. 12 is less than 60, so you are ok on this subtotal. Finally, since on the Power 720 there is an overlap of bandwidth between the 1st GX slot and the internal slots, add all these sizing factors together. The "adjusted 62.2" + 12 is less than 100Gb/s, so you fit within the system maximum as well as both subtotal maximums. Your configuration's bandwidth requirements should be ok.
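And the Power 720 example as arithmetic (a sketch; the "adjusted" value simply assumes the review brought the internal total within its 60 Gb/s subtotal):

internal_total    = 8 + 20 + 20 + 0.2 + 4 + 10   # system unit PCIe slots + integrated SAS = 62.2
riser_total       = 6 + 6                        # two RAID & SSD SAS adapters in the PCIe riser = 12
adjusted_internal = 60                           # assume the review trims 62.2 to fit the subtotal
print(riser_total <= 60)                         # riser/GX subtotal check -> True
print(adjusted_internal + riser_total <= 100)    # system maximum check -> True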

Page 48: Power PCIe Adapters 20141211

(yes, total is 100 Gb/s, not 140Gb/s)

total

100 Gb/s (yes, total is 100 Gb/s, not 120 Gb/s)
160 Gb/s (yes, total is 160 Gb/s, not 120 Gb/s)

200 Gb/s (yes, total is 200 Gb/s, not 240 Gb/s)

total

80 Gb/s

140 Gb/s

180 Gb/s

Page 49: Power PCIe Adapters 20141211

this model has a max of one processor enclosure/node

total

100 Gb/s (yes, total is 100 Gb/s, not 120 Gb/s)

160 Gb/s (yes, total is 160 Gb/s, not 120 Gb/s)

200 Gb/s (yes, total is 200 Gb/s, not 240 Gb/s)

total

60 Gb/s

120 Gb/s

180 Gb/s

240 Gb/s

Optional reading: More in-depth discussion on sharing of bandwidth by PCIe slots:

Power 710/720/730/740 "B" and "C" and "D" model system units --- All PCIe slots with a system unit are equal

As shown in the tables above, not all PCIe slots are equal. Individually they are equal, but depending on the server or depending on the I/O drawer, some slots share bandwidth with other slots or I/O. The following sentences summarize the tables above for the servers' PCIe slots.

Page 50: Power PCIe Adapters 20141211

#5685 which can be placed in the 720/740 "B or C" or "D" system unit …. All PCIe slots (Gen2) are equal

Power 750/760 "D" model system units --- All PCIe slots are NOT equal. Individual slots are equal, but slots 1-4 are on one bus and slots 5-6 are

Power 770/780 "B" and "C" and "D" model system units --- All PCIe slots are NOT equal. Individual slots are equal, but slots 1-4 are on one bus

The bandwidth available to one or more I/O drawers on a 12X loop is obviously the same as the GX slot to which they are attached. Thus from this rough sizing tool's perspective, it usually doesn't make sense to worry about individual PCIe slots within the I/O drawer. Most of the time, trying to optimize PCIe card placement within an I/O drawer for most client workloads won't make that much difference. However, for those people who are focused on the details, the following information is shared. See Info Center for more detail.

#5802/5877 when only one drawer per GX adapter (one per 12X loop). A drawer has 10 PCIe Gen1 slots and two connections to the GX adapter in the server. The ten PCIe slots are in 3 subgroups: Slots 1,2,3 and Slots 4,5,6 and Slots 7,8,9,10. Slots 1,2,3 and Slots 4,5,6 each have a separate connection to the GX adapter. Slots 7,8,9,10 first connect to the Slot 1,2,3 subgroup and share the bandwidth of the Slot 1,2,3 connection to the GX adapter. Thus there may be a slight advantage in putting the highest bandwidth adapters in Slots 4,5,6 and also in putting the adapters with the least latency requirements in Slots 1,2,3 or 4,5,6. HOWEVER, most of the time this will not make much (if any) noticeable difference to the client, assuming your total bandwidth need is reasonable.

#5802/5877 when two drawers per GX adapter (two per 12X loop). Each drawer will have a connection to the GX adapter. All three PCIe subgroups in a drawer share that one connection. Depending on the cabling used, either Slots 1,2,3 or Slots 4,5,6 will be the "closest" to that connection upstream to the GX adapter. The other two PCIe subgroups will connect through the "closer" PCIe subgroup. HOWEVER, most of the time where the PCIe adapter is placed in the drawer will not make much (if any) noticeable difference to the client. The fact that two drawers share the total GX adapter bandwidth can make a difference. So where I/O bandwidth is a consideration, try to use just one I/O drawer per loop. Generally try to balance the bandwidth needs across the two I/O drawers.

#5803/5873 have 20 PCIe slots. Think of this drawer as logically two 10-slot PCIe drawers inside one drawer. You can attach each half of a drawer to a GX adapter or you can attach both halves of a drawer to a single GX adapter. Each of the two logical drawers is identical to a #5802/5877 described above.

For the extremely analytical reader ... Each #5802/5877/5803/5873 PCIe slot subgroup is on its own PCIe 8x PHB. Specifications for this PHB are: BURST = 2 GB/s simplex & 4 GB/s duplex; SUSTAINED = 1.6 GB/s simplex & 2.2 GB/s duplex. Most of the time, this level of granular detail is not useful in client configuration sizings.

Page 51: Power PCIe Adapters 20141211

This wonderfully complete table thanks to Rakesh Sharma of the IBM Austin Development team

Ethernet port attributes
Integrated Multifunction Cards

Adapter feature code # EN10

Attribute Description
Ether II and IEEE 802.3 Ether II and IEEE 802.3 encapsulated frames x
802.3i 10Mb Ethernet over twisted pair, 10Base-T

802.3u 100Mb Ethernet over twisted pair, 100Base-TX x
802.3x Full duplex and flow control x
802.3z 1Gb Ethernet over fiber, 1000Base-X
802.3ab 1Gb Ethernet over twisted pair, 1000Base-T x
802.3ad/802.1AX Link Aggregation/LACP/Load Balancing x
802.3ae 10Gb Ethernet over fiber

SFP+ Direct Attach x
802.3ak 10Gb Ethernet over CX4, 10GBase-CX4
802.3an 10Gb Ethernet over twisted pair, 10GBase-T x
802.3aq 10Gb Ethernet over multimode fiber, 10GBase-LR
802.1Q VLAN IEEE VLAN tagging x
EtherChannel Cisco EtherChannel (Manual Link Aggregation) x
NIC Failover Network Interface Backup (NIB) Support x
Jumbo Frames Ethernet jumbo frames x
802.1Qaz Enhanced Transmission Selection x
802.1Qbb Priority based flow control x
802.1Q/p Ethernet frame priorities x
Multicast Ethernet multicast x
Checksum Offload TCP/UDP/IP hardware checksum offload x
VIOS Virtual IO Server Support x
RoCE RDMA over Converged Ethernet
iWARP RDMA over Ethernet using iWARP Protocol
FCoE Fibre Channel over Ethernet x
iSCSI Software iSCSI provided by OS x
NIM Network Boot Support x

TSO x
LRO Large Receive Offload x

RFS (recv flow steering)

Attribute or IEEE Standard

2 10Gb CNA copper + 2 10/1Gb

10Gb Ethernet over Direct Attach (DA), 10GSFP+Cu, Twin-ax

TCP Segmentation Offload/Large Segment Offload

OpenOnload (user space TCP/IP)

Page 52: Power PCIe Adapters 20141211

SRIOV Single Root I/O Virtualization x

(SRIOV only on POWER7+ 770/780 with latest levels of firmware as of April 2014)

Page 53: Power PCIe Adapters 20141211


Integrated Multifunction Cards 10Gb Specialty Cards

EN11 1768 1769

(Attribute support matrix: an 'x' = capable, marking which attributes each of the adapter columns listed below supports.)

Houston Opt

Houston CU

EN0H / EN0J

EN0K / EN0L

EC27 / EC28

EC29 / EC30

5279 / 5745

5280 / 5744

EC2G / EC2J / EL39

2 10Gb CNA SR + 2 10/1Gb

2 10Gb copper + 2 1Gb

2 10Gb SR + 2 1Gb

2 10Gb CNA + 2 1GB

2 10Gb CNA + 2 1GB

2 10Gb copper RoCE

2 10Gb SR RoCE

2 10Gb copper + 1Gb

2 10Gb SR + 1Gb

2 10Gb copper + Open-OnLoad

Page 54: Power PCIe Adapters 20141211


Page 55: Power PCIe Adapters 20141211

10Gb Specialty Cards 10Gb Ethernet Cards 1Gb Ethernet Cards

5772

4 1Gb
2 1Gb
(Attribute support matrix continues: 'x' = capable for the adapter columns listed below.)

EC2H / EC2K / EL3A

5270 / 5708

5286 / 5288

5284 / 5287

5275 / 5769

5272 / 5732

5260 / 5899

5274 / 5768

5281 / 5767

2 10Gb copper +

2 10Gb CNA SR

2 10Gb copper

2 10Gb SR

1 10Gb LR opt

1 10Gb SR

1 10Gb CX4

2 1Gb opt SX

Page 56: Power PCIe Adapters 20141211

1Gb Ethernet Cards

4 1Gb
(Attribute support matrix continues: 'x' = capable for this adapter column.)

5271 / 5717

Page 57: Power PCIe Adapters 20141211

Important to read this introduction

Feature Name
2053 PCIe LP RAID & SSD SAS Adapter 3Gb
2054 PCIe LP RAID & SSD SAS Adapter 3Gb
2055 PCIe LP RAID & SSD SAS Adapter 3Gb w/ Blind Swap Cassette
2728 4 port USB PCIe Adapter
2893 PCIe 2-Line WAN w/Modem
2894 PCIe 2-Line WAN w/Modem CIM
4367 Package 5X #2055 & SSD
4377 Package 5X #2055 & SSD
4807 PCIe Crypto Coprocessor No BSC 4765-001
4808 PCIe Crypto Coprocessor Gen3 BSC 4765-001
4809 PCIe Crypto Coprocessor Gen4 BSC 4765-001
5260 PCIe2 LP 4-port 1GbE Adapter
5269 PCIe LP POWER GXT145 Graphics Accelerator
5270 PCIe LP 10Gb FCoE 2-port Adapter
5271 PCIe LP 4-Port 10/100/1000 Base-TX Ethernet Adapter
5272 PCIe LP 10GbE CX4 1-port Adapter
5273 PCIe LP 8Gb 2-Port Fibre Channel Adapter
5274 PCIe LP 2-Port 1GbE SX Adapter
5275 PCIe LP 10GbE SR 1-port Adapter
5276 PCIe LP 4Gb 2-Port Fibre Channel Adapter
5277 PCIe LP 4-Port Async EIA-232 Adapter
5278 PCIe LP 2-x4-port SAS Adapter 3Gb
5279 PCIe2 LP 4-Port 10GbE&1GbE SFP+ Copper&RJ45
5280 PCIe2 LP 4-Port 10GbE&1GbE SR&RJ45 Adapter
5281 PCIe LP 2-Port 1GbE TX Adapter
5283 PCIe2 LP 2-Port 4X IB QDR Adapter 40Gb
5284 PCIe2 LP 2-port 10GbE SR Adapter

Power Systems obtains many PCIe adapters from non-IBM suppliers, leveraging technology already created by the PCIe supplier. Sometimes IBM works extensively with the PCIe supplier on special functions/capabilities to interface into Power Systems and to comply with standards/structures/interfaces established by Power Systems. Sometimes a more modest amount of change is made to fit into the Power Systems infrastructure. Very rarely is an adapter brought out by IBM Power Systems with no changes from what the supplier offers generically.

For AIX, IBM i, and VIOS ... For adapters ordered as a Power System feature code, it is IBM who ships the underlying drivers and it is IBM who provides support of the hardware/software combination. IBM development may work with the PCIe supplier and leverage base supplier driver logic, but IBM is the single point of contact for all support issues from a client perspective. If the problem appears to be caused by the PCIe adapter hardware design or manufacturing or by underlying supplier-provided driver logic, IBM support and/or development engineers will work with the adapter supplier as needed.

For Linux... For adapters ordered as a Power System feature code, the drivers used are the generally available, open source drivers for Linux. IBM works with Linux distributors and adapter vendors to help ensure testing of the hardware/software has been done. Support of the hardware/software combination is provided by the organization selected/contracted by the client. That organization could be groups like SUSE, Red Hat, IBM or even client self-supported.

Given the above, usually knowing the specific adapter supplier is of no interest to a client. However, occasionally it is useful to know who IBM worked with to provide PCIe adapters for Power Systems, especially in a Linux environment where a user is doing something unusual.

Feature Code

Page 58: Power PCIe Adapters 20141211

5285 PCIe2 2-Port 4X IB QDR Adapter 40Gb
5286 PCIe2 LP 2-Port 10GbE SFP+ Copper Adapter
5287 PCIe2 2-port 10GbE SR Adapter
5288 PCIe2 2-Port 10GbE SFP+ Copper Adapter
5289 2 Port Async EIA-232 PCIe Adapter
5290 PCIe LP 2-Port Async EIA-232 Adapter
5708 10Gb FCoE PCIe Dual Port Adapter
5717 4-Port 10/100/1000 Base-TX PCI Express Adapter
5729 PCIe2 8Gb 4-port Fibre Channel Adapter
5732 10 Gigabit Ethernet-CX4 PCI Express Adapter
5735 8 Gigabit PCI Express Dual Port Fibre Channel Adapter
5744 PCIe2 4-Port 10GbE&1GbE SR&RJ45 Adapter
5745 PCIe2 4-Port 10GbE&1GbE SFP+Copper&RJ45 Adapter
5748 POWER GXT145 PCI Express Graphics Accelerator
5767 2-Port 10/100/1000 Base-TX Ethernet PCI Express Adapter
5768 2-Port Gigabit Ethernet-SX PCI Express Adapter
5769 10 Gigabit Ethernet-SR PCI Express Adapter
5772 10 Gigabit Ethernet-LR PCI Express Adapter
5773 4 Gigabit PCI Express Single Port Fibre Channel Adapter
5774 4 Gigabit PCI Express Dual Port Fibre Channel Adapter
5785 4 Port Async EIA-232 PCIe Adapter
5805 PCIe 380MB Cache Dual - x4 3Gb SAS RAID Adapter
5899 PCIe2 4-port 1GbE Adapter
5901 PCIe Dual-x4 SAS Adapter
5903 PCIe 380MB Cache Dual - x4 3Gb SAS RAID Adapter
5913 PCIe2 1.8GB Cache RAID SAS Adapter Tri-port 6Gb
9055 2-Port 10/100/1000 Base-TX Ethernet PCI Express Adapter
9056 PCIe LP 2-Port 1GbE TX Adapter
EC27 PCIe2 LP 2-Port 10GbE RoCE SFP+ Adapter
EC28 PCIe2 2-Port 10GbE RoCE SFP+ Adapter
EC29 PCIe2 LP 2-Port 10GbE RoCE SR Adapter
EC2G PCIe2 LP 2-port 10GbE SFN6122F Adapter
EC2H PCIe2 LP 2-port 10GbE SFN5162F Adapter
EC2J PCIe2 2-port 10GbE SFN6122F Adapter
EC2K PCIe2 2-port 10GbE SFN5162F Adapter
EC30 PCIe2 2-Port 10GbE RoCE SR Adapter
EJ0J PCIe3 RAID SAS Adapter Quad-port 6Gb
EJ0L PCIe3 12GB Cache RAID SAS Adapter Quad-port 6Gb
EJ0X PCIe3 SAS Tape Adapter Quad-port 6Gb
EL09 PCIe LP 4Gb 2-Port Fibre Channel Adapter
EL10 PCIe LP 2-x4-port SAS Adapter 3Gb
EL11 PCIe2 LP 4-port 1GbE Adapter
EL2K PCIe2 LP RAID SAS Adapter Dual-port 6Gb
EL2M PCIe LP 2-Port 1GbE TX Adapter
EL2N PCIe LP 8Gb 2-Port Fibre Channel Adapter
EL2P PCIe2 LP 2-port 10GbE SR Adapter
EL2Z PCIe2 LP 2-Port 10GbE RoCE SR Adapter

EL38 PCIe2 LP 4-port (10Gb FCoE & 1GbE) SR&RJ45
EL39 PCIe2 LP 2-port 10GbE SFN6122F Adapter

Page 59: Power PCIe Adapters 20141211

EL3A PCIe2 LP 2-port 10GbE SFN5162F Adapter
EN0A PCIe2 16Gb 2-port Fibre Channel Adapter
EN0B PCIe2 LP 16Gb 2-port Fibre Channel Adapter
EN0H PCIe2 4-port (10Gb FCoE & 1GbE) SR&RJ45
EN0J PCIe2 LP 4-port (10Gb FCoE & 1GbE) SR&RJ45
EN0Y PCIe2 LP 8Gb 4-port Fibre Channel Adapter
EN13 PCIe1 1-port Bisync Adapter
EN14 PCIe1 1-port Bisync Adapter CIM
ES09 IBM Flash Adapter 90 (PCIe2 0.9TB)
ESA1 PCIe2 RAID SAS Adapter Dual-port 6Gb
ESA2 PCIe2 LP RAID SAS Adapter Dual-port 6Gb
ESA3 PCIe2 1.8GB Cache RAID SAS Adapter Tri-port 6Gb CR

Page 60: Power PCIe Adapters 20141211


Adapter Description / Manufacturer of the PCIe card
SAS-SSD Adapter - Double-wide - 4 SSD module bays IBM designed, manufactured specifically for IBM
SAS-SSD Adapter - Double-wide - 4 SSD module bays IBM designed, manufactured specifically for IBM
SAS-SSD Adapter - Double-wide - 4 SSD module bays IBM designed, manufactured specifically for IBM
USB 4-port IBM designed, manufactured specifically for IBM
WAN 2 comm ports, 1 port with modem IBM designed, manufactured specifically for IBM
WAN CIM 2 comm ports, 1 port with modem IBM designed, manufactured specifically for IBM
SAS-SSD Adapter - quantity 5 2055 + 20 SSD modules IBM designed, manufactured specifically for IBM
SAS-SSD Adapter - quantity 5 2055 + 20 SSD modules IBM designed, manufactured specifically for IBM
Cryptographic Coproc No BSC IBM designed, manufactured specifically for IBM
Cryptographic Coproc w/ BSC3 IBM designed, manufactured specifically for IBM
Cryptographic Coproc w/ BSC4 IBM designed, manufactured specifically for IBM
Ethernet 1Gb 4-port - TX / UTP / copper Broadcom
Graphics - POWER GXT145 Matrox
CNA (FCoE) 2-port 10Gb - optical SR Qlogic
Ethernet 1Gb 4-port - TX / UTP / copper Intel
Ethernet 10Gb 1-port - copper CX4 Chelsio
Fibre Channel 8Gb 2-port Emulex
Ethernet 1Gb 2-port - optical SX Intel
Ethernet 10Gb 1-port - optical SR Chelsio
Fibre Channel 4Gb 2-port Emulex
Async 4-port EIA-232 Digi
SAS 0GB Cache, no RAID5/6 2-port 3Gb IBM designed, manufactured specifically for IBM
Ethernet 10Gb+1Gb 4Ports: 2x10Gb - copper twinax & 2x1Gb UTP Chelsio
Ethernet 10Gb+1Gb 4Ports 2x10Gb - optical SR & 2x1Gb UTP copper Chelsio
Ethernet 1Gb 2-port - TX / UTP / copper Intel
QDR 2-port 4X IB - 40Gb Mellanox
Ethernet 10Gb 2-port - optical SR Emulex


Page 61: Power PCIe Adapters 20141211

QDR 2-port 4X IB - 40Gb Mellanox
Ethernet 10Gb 2-port - copper twinax Emulex
Ethernet 10Gb 2-port - optical SR Emulex
Ethernet 10Gb 2-port - copper twinax Emulex
Async 2-port EIA-232 Digi
Async 2-port EIA-232 Digi
CNA (FCoE) 2-port 10Gb - optical SR Qlogic
Ethernet 1Gb 4-port - TX / UTP / copper Intel
Fibre Channel 8Gb 4-port Qlogic
Ethernet 10Gb 1-port - copper CX4 Chelsio
Fibre Channel 8Gb 2-port Emulex
Ethernet 10Gb+1Gb 4Ports 2x10Gb - optical SR & 2x1Gb UTP copper Chelsio
Ethernet 10Gb+1Gb 4Ports: 2x10Gb - copper twinax & 2x1Gb UTP Chelsio
Graphics - POWER GXT145 Matrox
Ethernet 1Gb 2-port - TX / UTP / copper Intel
Ethernet 1Gb 2-port - optical SX Intel
Ethernet 10Gb 1-port - optical SR Chelsio
Ethernet 10Gb 1-port - optical LR Intel
Fibre Channel 4Gb 1-port Emulex
Fibre Channel 4Gb 2-port Emulex
Async 4-port EIA-232 Digi
SAS 380MB cache, RAID5/6, 2-port 3Gb IBM designed, manufactured specifically for IBM
Ethernet 1Gb 4-port - TX / UTP / copper Broadcom
SAS 0GB Cache, no RAID5/6 2-port 3Gb IBM designed, manufactured specifically for IBM
SAS 380MB cache, RAID5/6, 2-port 3Gb -- WITHDRAWN IBM designed, manufactured specifically for IBM
SAS 1.8GB Cache, RAID5/6 - 3-port, 6Gb IBM designed, manufactured specifically for IBM
Ethernet 1Gb low price, max qty1 for 720/740 2-port 1Gb - TX UTP Intel
Ethernet 1Gb low price, max qty1 for 710/730 2-port - TX UTP copper Intel
Ethernet 10Gb 2-port RoCE - copper twinax Mellanox
Ethernet 10Gb 2-port RoCE - copper twinax Mellanox
Ethernet 10Gb 2-port RoCE - optical SR Mellanox
Ethernet 10Gb 2-port OpenOnLoad SolarFlare SFP+ copper twinax Solarflare
Ethernet 10Gb 2-port SolarFlare SFP+ copper twinax Solarflare
Ethernet 10Gb 2-port OpenOnLoad SolarFlare SFP+ copper twinax Solarflare
Ethernet 10Gb 2-port SolarFlare SFP+ copper twinax Solarflare
Ethernet 10Gb 2-port RoCE - optical SR Mellanox
SAS 0GB Cache, RAID5/6 - 4-port, 6Gb IBM designed, manufactured specifically for IBM
SAS 12GB Cache, RAID5/6 - 4-port, 6Gb IBM designed, manufactured specifically for IBM
SAS 0GB Cache, LTO-5/6 tape - 4-port, 6Gb IBM designed, manufactured specifically for IBM
Fibre Channel 4Gb 2-port Emulex
SAS 0GB Cache, no RAID5/6 2-port 3Gb IBM designed, manufactured specifically for IBM
Ethernet 1Gb 4-port - TX / UTP / copper Broadcom
SAS 0GB Cache, RAID5/6 - 2-port, 6Gb IBM designed, manufactured specifically for IBM
Ethernet 1Gb 2-port - TX / UTP / copper Intel
Fibre Channel 8Gb 2-port Emulex
Ethernet 10Gb 2-port - optical SR Emulex
Ethernet 10Gb 2-port RoCE - optical SR Mellanox

CNA (FCoE) 10Gb+1Gb 4Ports 2x10Gb optical SR & 2x1Gb UTP copper RJ45 Emulex
Ethernet 10Gb 2-port OpenOnLoad SolarFlare SFP+ copper twinax Solarflare

Page 62: Power PCIe Adapters 20141211

Ethernet 10Gb 2-port SolarFlare SFP+ copper twinax Solarflare
Fibre Channel 16Gb 2-port Emulex
Fibre Channel 16Gb 2-port Emulex
CNA (FCoE) 10Gb+1Gb 4Ports 2x10Gb optical SR & 2x1Gb UTP copper Emulex
CNA (FCoE) 10Gb+1Gb 4Ports 2x10Gb optical SR & 2x1Gb UTP copper Emulex
Fibre Channel 8Gb 4-port Qlogic
WAN 1 comm port Bisync IBM designed, manufactured specifically for IBM
WAN CIM 1 comm port Bisync IBM designed, manufactured specifically for IBM
Flash memory adapter 900GB IBM designed, manufactured specifically for IBM
SAS 0GB Cache, RAID5/6 - 2-port, 6Gb SSD only IBM designed, manufactured specifically for IBM
SAS 0GB Cache, RAID5/6 - 2-port, 6Gb SSD only IBM designed, manufactured specifically for IBM
SAS 1.8GB Cache, RAID5/6 - 3-port, 6Gb IBM designed, manufactured specifically for IBM

Page 63: Power PCIe Adapters 20141211

Manufacturer of the PCIe card
IBM designed, manufactured specifically for IBM


Page 64: Power PCIe Adapters 20141211


Page 65: Power PCIe Adapters 20141211
