
Page 1

REFERENCE GUIDE

ConnectX FDR10 InfiniBand and 10/40GbE Adapter Cards

Why Mellanox?
Mellanox delivers the industry’s most robust end-to-end InfiniBand and Ethernet portfolios. Our mature, field-proven product offerings include solutions for I/O, switching, and advanced management software, making us the only partner you’ll need for high-performance computing and data center connectivity. Mellanox’s scale-out FDR 56Gb/s InfiniBand and 10/40GbE products enable users to benefit from a far more scalable, lower-latency, virtualized fabric with lower overall fabric costs and power consumption, greater efficiencies, and simplified management, providing the best return on investment.

Why FDR 56Gb/s InfiniBand?
Enables the highest performance and lowest latency
– Proven scalability for tens-of-thousands of nodes
– Maximum return on investment
Highest efficiency / maintains a balanced system, ensuring highest productivity
– Provides full bandwidth for PCIe 3.0 servers
– Proven in multi-process networking requirements
– Low CPU overhead and high server utilization
Performance-driven architecture
– MPI latency of 0.7us; >12GB/s with FDR 56Gb/s InfiniBand (bidirectional)
– MPI message rate of >90 million/sec
Superior application performance
– From 30% to over 100% HPC application performance increase
– Doubles storage throughput, cutting backup time in half
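The latency and bandwidth figures above can be combined into a first-order estimate of message transfer time. The sketch below uses the guide's 0.7us latency; the per-direction bandwidth of ~6GB/s is an assumption derived from the >12GB/s bidirectional figure.

```python
def transfer_time_us(msg_bytes, latency_us=0.7, gbytes_per_s=6.0):
    """First-order model: transfer time = latency + size / bandwidth.

    Defaults are illustrative: the guide's 0.7us MPI latency, plus an
    assumed ~6GB/s per direction (half the >12GB/s bidirectional figure).
    """
    # bytes / (GB/s * 1e3) gives microseconds
    return latency_us + msg_bytes / (gbytes_per_s * 1e3)

small = transfer_time_us(8)         # latency-bound: ~0.70us
large = transfer_time_us(4 << 20)   # bandwidth-bound: ~700us for 4MB
```

The model shows why small messages are dominated by the 0.7us latency while large transfers are dominated by link bandwidth.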

What is FDR10 InfiniBand?
FDR10 InfiniBand is a Mellanox proprietary protocol similar in format to FDR but running at a signaling rate identical to 40Gb/s Ethernet. FDR10 supports InfiniBand at true 40Gb/s line speeds with FEC while taking advantage of midplanes, connectors, PCB materials, and cables designed for 40Gb/s Ethernet.
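The "true 40Gb/s" claim can be made concrete with the standard link encodings (8b/10b for QDR, 64b/66b for FDR10 and FDR); these encoding figures are well-known InfiniBand parameters rather than something stated in this guide.

```python
# Effective (post-encoding) data rates behind the speed names: QDR
# signals at 40Gb/s using 8b/10b encoding, while FDR10 keeps 40Gb/s
# signaling but moves to 64b/66b, and FDR signals at 56Gb/s with 64b/66b.
def data_rate_gbps(signaling_gbps, payload_bits, coded_bits):
    return signaling_gbps * payload_bits / coded_bits

qdr = data_rate_gbps(40, 8, 10)      # 32.0 Gb/s of payload
fdr10 = data_rate_gbps(40, 64, 66)   # ~38.8 Gb/s: the "true 40Gb/s" claim
fdr = data_rate_gbps(56, 64, 66)     # ~54.3 Gb/s
```

So at the same 40Gb/s signaling rate, FDR10 delivers roughly 20% more payload bandwidth than QDR.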

InfiniBand Market Applications
InfiniBand is increasingly becoming the interconnect of choice not just in high-performance computing environments, but also in mainstream enterprise grids, data center virtualization solutions, storage, and embedded environments. The low latency and high performance of InfiniBand, coupled with the economic benefits of its consolidation and virtualization capabilities, provide end-customers the ideal combination as they build out their applications.

Why Mellanox 10/40GbE?
Mellanox’s scale-out 10/40GbE products enable users to benefit from a far more scalable, lower-latency, virtualized fabric with lower overall fabric costs and power consumption, greater efficiencies, and simplified management than traditional Ethernet fabrics. Utilizing 10 and 40GbE NICs, core and top-of-rack switches, and fabric optimization software, a broader array of end-users can benefit from a more scalable, high-performance Ethernet fabric.

Mellanox adapter cards are designed to drive the full performance of PCIe 2.0 and 3.0 I/O over high-speed 56Gb/s FDR, 40Gb/s FDR10 InfiniBand, and 10/40GbE fabrics. ConnectX InfiniBand and Ethernet adapters lead the market in performance, throughput, power efficiency, and low latency. ConnectX adapter cards provide the highest-performing and most flexible interconnect solution for data centers, high-performance computing, Web 2.0, cloud computing, financial services, and embedded environments.
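As a rough cross-check of the PCIe pairing, usable PCIe 3.0 bandwidth can be compared against the FDR data rate. The x8 lane width and the encoding figures (128b/130b for PCIe 3.0, 64b/66b for FDR) are standard values assumed here, not taken from this guide.

```python
# Compare usable PCIe 3.0 x8 bandwidth with the FDR InfiniBand data rate.
# PCIe 3.0 runs 8GT/s per lane with 128b/130b encoding; FDR runs 56Gb/s
# signaling with 64b/66b encoding. An x8 slot is assumed for the adapter.
pcie3_x8_gbps = 8.0 * 8 * 128 / 130   # ~63.0 Gb/s usable per direction
fdr_gbps = 56 * 64 / 66               # ~54.3 Gb/s of data

# PCIe 3.0 x8 leaves headroom above FDR; PCIe 2.0 x8 (5GT/s, 8b/10b,
# ~32Gb/s usable) would not, which is why full FDR bandwidth needs a
# PCIe 3.0 host slot.
```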

Key Features
– 0.7us application-to-application latency
– 40 or 56Gb/s InfiniBand ports
– 10 or 40Gb/s Ethernet ports
– PCI Express 3.0 (up to 8GT/s)
– CPU offload of transport operations
– End-to-end QoS & congestion control
– Hardware-based I/O virtualization
– TCP/UDP/IP stateless offload

Key Advantages
– World-class cluster performance
– High-performance networking and storage access
– Guaranteed bandwidth & low-latency services
– Reliable transport
– End-to-end storage integrity
– I/O consolidation
– Virtualization acceleration
– Scales to tens-of-thousands of nodes
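On a Linux host with the standard kernel RDMA stack, the negotiated rate of an installed ConnectX port can be read from sysfs. The path layout below is the common kernel convention and is an assumption on our part, not something this guide specifies.

```python
# Read negotiated InfiniBand port rates from sysfs on Linux (standard
# kernel layout: /sys/class/infiniband/<device>/ports/<n>/rate).
import glob
import os

def ib_port_rates():
    rates = {}
    for port_dir in glob.glob("/sys/class/infiniband/*/ports/*"):
        rate_file = os.path.join(port_dir, "rate")
        try:
            with open(rate_file) as f:
                # Typical contents: "56 Gb/sec (4X FDR)"
                rates[port_dir] = f.read().strip()
        except OSError:
            pass  # port present but rate not readable
    return rates

print(ib_port_rates() or "no RDMA devices found")
```

On a host without RDMA hardware the function simply returns an empty dict.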

Mellanox 40 and 56Gb/s InfiniBand switches deliver the highest performance and density, with a complete fabric management solution that enables compute clusters and converged data centers to operate at any scale while reducing operational costs and infrastructure complexity. Scalable switch building blocks from 36 to 648 ports in a single enclosure give IT managers the flexibility to build networks of up to tens-of-thousands of nodes.
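The 36-to-648-port scaling follows from folded-Clos (fat-tree) arithmetic on a 36-port switching element; a minimal sketch, assuming a non-blocking topology:

```python
def fat_tree_hosts(radix, tiers=2):
    """Hosts supported by a non-blocking folded-Clos (fat-tree) fabric
    built from switches of the given radix: radix**tiers / 2**(tiers-1)."""
    return radix ** tiers // 2 ** (tiers - 1)

two_tier = fat_tree_hosts(36)        # 648: the 648-port director class
three_tier = fat_tree_hosts(36, 3)   # 11664: "tens-of-thousands" scale
```

Two tiers of 36-port elements yield exactly the 648-port director chassis; adding a third external tier reaches the node counts the guide describes.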

Key Features
– 72.5Tb/s switching capacity
– 100ns to 510ns switching latency
– Hardware-based routing
– Congestion control
– Quality of Service enforcement
– Up to 6 separate subnets
– Temperature sensors and voltage monitors

Key Advantages
– High-performance fabric for parallel computation or I/O convergence
– Wirespeed InfiniBand switch platform up to 56Gb/s per port
– High-bandwidth, low-latency fabric for compute-intensive applications

InfiniBand and Ethernet Switches


3828RG Rev 1.0

Mellanox’s scale-out 10 and 40 Gigabit Ethernet switches offer the industry’s highest-density Ethernet switching, with a full portfolio of top-of-rack 1U switches providing 10 or 40Gb/s Ethernet ports to the server or to the next level of switching. These switches enable users to benefit from a far more scalable, lower-latency, virtualized fabric with lower overall fabric costs and power consumption, greater efficiencies, and simplified management than traditional Ethernet fabrics.

Key Features
– Up to 36 ports of 40Gb/s non-blocking Ethernet switching in 1U
– Up to 64 ports of 10Gb/s non-blocking Ethernet switching in 1U
– 230ns-250ns port-to-port low-latency switching
– Low power

Key Advantages
– Optimal for data center east-west traffic, computation, or I/O convergence
– Highest switching bandwidth in 1U
– Low OpEx and CapEx and highest ROI

Dell Sales Contact: [email protected]
OEM BDM: Ronnie Payne, 512-201-3030, [email protected]
Technical Sales Rep: Will Stepanov, 512-966-4993, [email protected]

Page 2


Mellanox Product Details - S & P

Dell SKU | OPN | Component | Description (3yr Silver Support SKU noted per group or at end of row)

Adapters

ConnectX-2 VPI Adapter Cards (3yr Silver Support: A5287970)
A3993613 | MHQH19B-XTR | QDR | ConnectX-2 VPI single-port QSFP, IB 40Gb/s and 10GbE, PCIe2.0
A4580449 | MHQH29C-XTR | QDR | ConnectX-2 VPI dual-port QSFP, IB 40Gb/s and 10GbE, PCIe2.0
A5362682 | MHZH29B-XTR | QDR/10GbE | ConnectX-2 VPI 40Gb/s IB QSFP and 10GbE SFP+, PCIe2.0

ConnectX-3 VPI Adapter Cards (3yr Silver Support: A5287970)
A5634521 | MCX353A-QCBT | QDR | ConnectX-3 VPI single-port QSFP, QDR IB (40Gb/s) and 10GbE, PCIe3.0
A5625940 | MCX354A-QCBT | QDR | ConnectX-3 VPI dual-port QSFP, QDR IB (40Gb/s) and 10GbE, PCIe3.0
A5634520 | MCX353A-TCBT | FDR10 | ConnectX-3 VPI single-port QSFP, FDR10 IB (40Gb/s) and 10GbE, PCIe3.0
A5634523 | MCX354A-TCBT | FDR10 | ConnectX-3 VPI dual-port QSFP, FDR10 IB (40Gb/s) and 10GbE, PCIe3.0
A5626001 | MCX353A-FCBT | FDR | ConnectX-3 VPI single-port QSFP, FDR IB (56Gb/s) and 40GbE, PCIe3.0
A5634524 | MCX354A-FCBT | FDR | ConnectX-3 VPI dual-port QSFP, FDR IB (56Gb/s) and 40GbE, PCIe3.0

ConnectX-3 Ethernet Adapter Cards (3yr Silver Support: A5287970)
A6665465 | MCX311A-XCAT | 10GbE | ConnectX-3 EN 10GbE, single-port SFP+, PCIe3.0
A5556990 | MCX312A-XCBT | 10GbE | ConnectX-3 EN 10GbE, dual-port SFP+, PCIe3.0
A5634484 | MCX313A-BCBT | 40GbE | ConnectX-3 EN 40GbE, single-port QSFP, PCIe3.0
A5588831 | MCX314A-BCBT | 40GbE | ConnectX-3 EN 40GbE, dual-port QSFP, PCIe3.0

Edge Switches

FDR (3yr Silver Support: A5379584)
A5468072 | MSX6036F-1SFR | SwitchX FDR, 36 ports QSFP, Managed, 648 node subnet manager, 1U
A6775350 | MSX6018F-1SFS | SwitchX FDR, 18 ports QSFP, Managed, 648 node subnet manager, 1U
A6865654 | MSX6015F-1SFS | SwitchX FDR, 18 ports QSFP, Un-Managed, 1U
A5380136 | MSX6025F-1SFR | SwitchX FDR, 36 ports QSFP, Un-Managed, 648 node subnet manager, 1U

FDR10
A5626034 | MSX6036T-1SFR | SwitchX FDR10, 36 ports QSFP, Managed, 648 node subnet manager, 1U
A5512026 | MSX6025T-1SFR | SwitchX FDR10, 36 ports QSFP, Un-Managed, 648 node subnet manager, 1U

QDR (3yr Silver Support: A5284724)
A3693991 | MIS5035Q-1SFC | InfiniScale IV QDR, 36 ports QSFP, Managed, 648 node subnet manager, 1U
A3947573 | MIS5030Q-1SFC | InfiniScale IV QDR, 36 ports QSFP, Managed, 108 node subnet manager, 1U
A3957097 | MIS5025Q-1SFC | InfiniScale IV QDR, 36 ports QSFP, Un-Managed, 1U
A4580447 | MIS5023Q-1BFR | InfiniScale IV QDR, 18 ports QSFP, Un-Managed, 1U | A5379571

Ethernet
A5634507 | MSX1036B-1SFR | SwitchX 36-port QSFP 40GbE, 1U | A5379638
A6740068 | MSX1024B-1BFS | SwitchX-2 48-port SFP+ 10GbE, 12-port QSFP 40GbE, 1U | A6740069
A5760896 | MSX1016X-2BFR | SwitchX 64-port SFP+ 10GbE, 1U | A5634506
A5764149 | LIC-1036-L3 | L3 Ethernet software license for 1036 Series Ethernet Switch | A5764196
A5764185 | LIC-1016-L3 | L3 Ethernet software license for 1016 Series Ethernet Switch | A5764152

Chassis Switches

FDR/40GbE Chassis Switch (FDR/FDR10)
A6111509 | MSX6512-NR | 216 ports FDR capable chassis, non-blocking configuration needs all spines | A5379621
A6300815 | MSX6518-NR | 324 ports FDR capable chassis, non-blocking configuration needs all spines | A5379651
A6007695 | MSX6536-NR | 648 ports FDR capable chassis, non-blocking configuration needs all spines | A5379647
A5063198 | MSX6001FR | FDR | 18 ports FDR Leaf for SX65xx Chassis Switch
A5063199 | MSX6002FLR | FDR | 36 ports FDR VPI Spine for SX65xx Chassis Switch
A5380100 | MSX6001TR | FDR10 | 18 ports FDR10 Leaf for SX65xx Chassis Switch
A5380101 | MSX6002TBR | FDR10 | 36 ports 40GbE VPI Spine for SX65xx Chassis Switch
A5063200 | MSX6000MAR | Management module for SX65xx Chassis Switch
(Support for leaf, spine and management modules is included in the base chassis switch above.)

QDR Chassis Switch
A3993552 | MIS5100Q-3DNC | QDR | 108 ports QDR capable chassis, includes 3 Spine Blades | A5293344
A3993551 | MIS5200Q-4DNC | QDR | 216 ports QDR capable chassis, includes 6 Spine Blades | A5554819
A3993550 | MIS5300Q-6DNC | QDR | 324 ports QDR capable chassis, includes 9 Spine Blades | A5354481
A3993682 | MIS5600Q-10DNC | QDR | 648 ports QDR capable chassis, includes 18 Spine Blades | A5379588
A3993556 | MIS5001QC | QDR | 18 port QDR Leaf Blade for MIS5X00 Chassis Switch
A3993683 | MIS5600MDC | QDR | Management module for MIS5X00 Chassis Switch
(Support for leaf blades and management modules is included in the base chassis switch above.)

InfiniBand to Ethernet Gateway Systems
A4058896 | MBX5020-1SFR | QDR/10GbE | BridgeX IB to EN Gateway, 4 QDR ports and 12 SFP+ 1/10GbE ports, 1U | A5379673
A4785818 | VLT-30034 | QDR/10GbE | Grid Director 4036E IB to EN Gateway, 34 QDR ports with 2 1/10GbE ports, 1U | A5379670
A6747366 | LIC-6036-GW | FDR/40GbE and/or 10GbE | L2 + L3 Ethernet + Gateway software license for Mellanox 6036 Series Switch | A6747363

Software

Unified Fabric Manager (UFM) Packages
A5362972 | S_W-00137 | UFM Standard license for 1 managed node (up to 16 cores) | A5307677
A5216210 | S_W-00133 | UFM Advanced license for 1 managed node (up to 16 cores) | A5379544

Software Host Accelerators
A4995874 | SWL-00400 | VMA license per server (2 CPU sockets) | A5379722

Page 3


Mellanox Product Details - S & P

Dell SKU | OPN | Component | Description (3yr Silver Support SKU noted per group)

Cables

Copper Cables, Passive with QSFP Connectors (3yr Silver Support: A5296015)
A5264855 | MC2206130-001 | QDR/FDR10 | Mellanox copper cable, up to IB QDR/FDR10 (40Gb/s), 4X QSFP, 1m
A5058556 | MC2206130-002 | QDR/FDR10 | Mellanox copper cable, up to IB QDR/FDR10 (40Gb/s), 4X QSFP, 2m
A5058557 | MC2206130-003 | QDR/FDR10 | Mellanox copper cable, up to IB QDR/FDR10 (40Gb/s), 4X QSFP, 3m
A5145715 | MC2206128-004 | QDR/FDR10 | Mellanox copper cable, up to IB QDR/FDR10 (40Gb/s), 4X QSFP, 4m
A5319885 | MC2206128-005 | QDR/FDR10 | Mellanox copper cable, up to IB QDR/FDR10 (40Gb/s), 4X QSFP, 5m
A5715909 | MC2206126-006 | QDR | Mellanox copper cable, up to IB QDR (40Gb/s), 4X QSFP, 6m
A5601742 | MC2206125-007 | QDR | Mellanox copper cable, up to IB QDR (40Gb/s), 4X QSFP, 7m
A5063201 | MC2207130-001 | FDR | FDR InfiniBand QSFP passive copper cable, 1m
A5063202 | MC2207130-002 | FDR | FDR InfiniBand QSFP passive copper cable, 2m
A5063203 | MC2207128-003 | FDR | FDR InfiniBand QSFP passive copper cable, 3m
A5512016 | MC2210130-001 | 40GbE | 40GbE QSFP passive copper cable, 1m
A5512035 | MC2210130-002 | 40GbE | 40GbE QSFP passive copper cable, 2m
A5512036 | MC2210128-003 | 40GbE | 40GbE QSFP passive copper cable, 3m
A5512037 | MC2210126-004 | 40GbE | 40GbE QSFP passive copper cable, 4m
A5512038 | MC2210126-005 | 40GbE | 40GbE QSFP passive copper cable, 5m

Fiber Cables, Active with QSFP Connectors (3yr Silver Support: A5296015)
A6046821 | MC2206310-003 | QDR/FDR10 | Mellanox active fiber cable, 4X QSFP, IB QDR/FDR10 (40Gb/s), 3m
A6140061 | MC2206310-005 | QDR/FDR10 | Mellanox active fiber cable, 4X QSFP, IB QDR/FDR10 (40Gb/s), 5m
A6267425 | MC2206310-010 | QDR/FDR10 | Mellanox active fiber cable, 4X QSFP, IB QDR/FDR10 (40Gb/s), 10m
A5379667 | MC2206310-015 | QDR/FDR10 | Mellanox active fiber cable, 4X QSFP, IB QDR/FDR10 (40Gb/s), 15m
A6451117 | MC2206310-020 | QDR/FDR10 | Mellanox active fiber cable, 4X QSFP, IB QDR/FDR10 (40Gb/s), 20m
A6326548 | MC2206310-030 | QDR/FDR10 | Mellanox active fiber cable, 4X QSFP, IB QDR/FDR10 (40Gb/s), 30m
A6326549 | MC2206310-050 | QDR/FDR10 | Mellanox active fiber cable, 4X QSFP, IB QDR/FDR10 (40Gb/s), 50m
A6267424 | MC2206310-100 | QDR/FDR10 | Mellanox active fiber cable, 4X QSFP, IB QDR/FDR10 (40Gb/s), 100m
A5512043 | MC2207310-005 | FDR | FDR InfiniBand QSFP assembled optical cable, 5m
A5063206 | MC2207310-010 | FDR | FDR InfiniBand QSFP assembled optical cable, 10m
A5512044 | MC2207310-015 | FDR | FDR InfiniBand QSFP assembled optical cable, 15m
A5512040 | MC2207310-020 | FDR | FDR InfiniBand QSFP assembled optical cable, 20m
A5512041 | MC2207310-030 | FDR | FDR InfiniBand QSFP assembled optical cable, 30m
A5512042 | MC2207310-050 | FDR | FDR InfiniBand QSFP assembled optical cable, 50m
A5634509 | MC2207310-100 | FDR | FDR InfiniBand QSFP assembled optical cable, 100m
A5512039 | MC2210310-005 | 40GbE | 40GbE QSFP assembled optical cable, 5m
A5512045 | MC2210310-010 | 40GbE | 40GbE QSFP assembled optical cable, 10m
A5512046 | MC2210310-015 | 40GbE | 40GbE QSFP assembled optical cable, 15m
A5512047 | MC2210310-020 | 40GbE | 40GbE QSFP assembled optical cable, 20m
A5512048 | MC2210310-030 | 40GbE | 40GbE QSFP assembled optical cable, 30m
A5512049 | MC2210310-050 | 40GbE | 40GbE QSFP assembled optical cable, 50m

Copper Cables, Passive with SFP+ Connectors (3yr Silver Support: A5296015)
A5379549 | MC3309130-001 | 10GbE | Mellanox passive copper cable, 1X SFP+, 10Gb/s, 1m
A5764163 | MC3309130-002 | 10GbE | Mellanox passive copper cable, 1X SFP+, 10Gb/s, 2m
A5321519 | MC3309130-003 | 10GbE | Mellanox passive copper cable, 1X SFP+, 10Gb/s, 3m
A5380148 | MC3309124-005 | 10GbE | Mellanox passive copper cable, 1X SFP+, 10Gb/s, 5m
A5380149 | MC3309124-007 | 10GbE | Mellanox passive copper cable, 1X SFP+, 10Gb/s, 7m

Copper Cables, Passive Hybrid, QSFP to CX4 Connectors (3yr Silver Support: A5296015)
A5204047 | MC1204128-001 | DDR to QDR Hybrid QSFP to CX4 | Mellanox copper cable, up to 20Gb/s, 4X microGiGaCN to QSFP, 1m
A5254783 | MC1204128-002 | DDR to QDR Hybrid QSFP to CX4 | Mellanox copper cable, up to 20Gb/s, 4X microGiGaCN to QSFP, 2m
A5196931 | MC1204128-003 | DDR to QDR Hybrid QSFP to CX4 | Mellanox copper cable, up to 20Gb/s, 4X microGiGaCN to QSFP, 3m
A5199959 | MC1204128-005 | DDR to QDR Hybrid QSFP to CX4 | Mellanox copper cable, up to 20Gb/s, 4X microGiGaCN to QSFP, 5m

Fiber Cables, Active with QSFP to CX4 (Hybrid) (3yr Silver Support: A5296015)
A4995862 | MC1204310-025 | DDR | CX4 4X QSFP to microGiGaCN latch (CX4), 25m fiber optic cable

Modules and Adapters

Optical Modules for 40Gig Ethernet or FDR InfiniBand, CX4 to QSFP (3yr Silver Support: A5296015)
A5380146 | MC2207411-SR4 | FDR | Mellanox FDR IB, 56Gb/s QSFP optical module, MPO connector, SR4, 850nm
A5380147 | MC2210411-SR4 | 40GbE | Mellanox 40GbE Ethernet, QSFP optical module, MPO connector, SR4, 850nm

Optical Modules for 10Gig Ethernet or InfiniBand (3yr Silver Support: A5296015)
A2566073 | MFM1T02A-LR | 10GbE | Mellanox SFP+ optical module for 10GBASE-LR
A3203095 | MFM1T02A-SR | 10GbE | Mellanox SFP+ optical module for 10GBASE-SR
A6326554 | MFM4R12C-QDR | 40GbE | 40Gb/s InfiniBand QSFP optical module

Cable Adapters
A3993671 | MAM1Q00A-QSA | QSFP to SFP+ | QSFP to SFP+ cable adapter | A5296015


Page 4

350 Oakmead Parkway, Suite 100, Sunnyvale, CA 94085
Tel: 408-970-3400 • Fax: 408-970-3403
www.mellanox.com

© Copyright 2013. Mellanox Technologies. All rights reserved. Mellanox, BridgeX, ConnectX, CORE-Direct, InfiniBridge, InfiniHost, InfiniScale, PhyX, SwitchX, Virtual Protocol Interconnect and Voltaire are registered trademarks of Mellanox Technologies, Ltd. Connect-IB, CoolBox, FabricIT, Mellanox Federal Systems, Mellanox Software Defined Storage, MetroX, MLNX-OS, ScalableHPC, Unbreakable-Link, UFM and Unified Fabric Manager are trademarks of Mellanox Technologies, Ltd. All other trademarks are property of their respective owners.

Mellanox Product Details - OEM

Dell SKU | Mellanox OPN | Component | Cus Kit/FI | Description

PowerEdge-C / DCS

Adapters
342-4571 | MCX383A-FCNA | FDR | Factory Install | ConnectX-3 single port FDR 56Gb/s InfiniBand Mezzanine Adapter - C6220 / C8220
317-3413 | MCQH29-XDR | QDR | Factory Install | ConnectX-2 dual port QDR 40Gb/s InfiniBand Mezzanine Adapter
342-2346 | MCQH29-XER | QDR | Factory Install | ConnectX-2 dual port QDR 40Gb/s InfiniBand Mezzanine Adapter
430-4609 | MCQH29-XFR | QDR | Factory Install | ConnectX-2 dual port QDR 40Gb/s InfiniBand Mezzanine Adapter

PowerEdge

Adapters
430-4671 | MCX380A-FCAA | FDR | Customer Kit | ConnectX-3 dual port FDR 56Gb/s Blade Mezzanine Adapter
430-4669 | MCX380A-FCAA | FDR | Factory Install | ConnectX-3 dual port FDR 56Gb/s Blade Mezzanine Adapter
430-4833 | MCX380A-TCAA | FDR10 | Customer Kit | ConnectX-3 dual port FDR10 Blade Mezzanine Adapter
430-4834 | MCX380A-TCAA | FDR10 | Factory Install | ConnectX-3 dual port FDR10 Blade Mezzanine Adapter
430-4672 | MCX380A-QCAA | QDR | Customer Kit | ConnectX-3 dual port QDR 40Gb/s Blade Mezzanine Adapter
430-4670 | MCX380A-QCAA | QDR | Factory Install | ConnectX-3 dual port QDR 40Gb/s Blade Mezzanine Adapter
430-3799 | MCX380A-QCAA | QDR | Customer Kit | ConnectX-2 dual port QDR 40Gb/s Blade Mezzanine Adapter
430-3804 | MCX380A-QCAA | QDR | Factory Install | ConnectX-2 dual port QDR 40Gb/s Blade Mezzanine Adapter

Switches
225-2438 | M4001F | FDR | Customer Kit | SwitchX single width FDR InfiniBand 56Gb/s Blade Switch
225-2439 | M4001F | FDR | Factory Install | SwitchX single width FDR InfiniBand 56Gb/s Blade Switch
225-3702 | M4001T | FDR10 | Customer Kit | SwitchX single width FDR10 InfiniBand 40Gb/s Blade Switch
225-3703 | M4001T | FDR10 | Factory Install | SwitchX single width FDR10 InfiniBand 40Gb/s Blade Switch
225-2441 | M4001Q | QDR | Customer Kit | SwitchX single width QDR InfiniBand 40Gb/s Blade Switch
225-2442 | M4001Q | QDR | Factory Install | SwitchX single width QDR InfiniBand 40Gb/s Blade Switch
224-4640 | M3601Q | QDR | Customer Kit | InfiniScale IV double width QDR InfiniBand 40Gb/s Blade Switch
224-4642 | M3601Q | QDR | Factory Install | InfiniScale IV double width QDR InfiniBand 40Gb/s Blade Switch

PTM (Pass Through Module) for Dell M1000E Blade System
331-0439 | M1601P | 10GbE | Customer Kit | 10GbE (XAUI) 16-port pass through module
Configure in Dellstar | M1601P | 10GbE | Factory Install | 10GbE (XAUI) 16-port pass through module
331-2498 | — | 10GbE | Customer Kit | 10GbE (KR) 16-port pass through module
Configure in Dellstar | — | 10GbE | Factory Install | 10GbE (KR) 16-port pass through module

Professional Services
A5456157 | GPS-00010 | Project-based on-site support, per person per day
A5254787 | GPS-03003 | 3 days (1 person) SOW services for on-site network bring-up: HW and SW install and config, fabric health check, best practices & knowledge transfer, travel & expense included, cabling (2 people min.)
A5254788 | GPS-03005 | 5 days (1 person) SOW services for on-site network bring-up: HW and SW install and config, fabric health check, best practices & knowledge transfer, travel & expense included, cabling (2 people min.)

OEM BDM: Ronnie Payne, 512-201-3030, [email protected]
Dell Sales Contact: [email protected]

3828RG Rev 1.1

1) HPC Configurator: http://calc.mellanox.com/clusterconfig/
2) Dell/Mellanox page: http://mellanox.com/content/pages.php?pg=dell&menu_section=54
3) Deal registration: http://mellanox.com/content/pages.php?pg=opportunity_registration

Technical Sales Rep: Will Stepanov, 512-966-4993, [email protected]