
  • TOP500 Supercomputers, July 2015

    InfiniBand Strengthens Leadership as the Interconnect

    Of Choice By Providing Best Return on Investment

  • © 2015 Mellanox Technologies 2

    TOP500 Performance Trends

    Explosive high-performance computing market growth

    Clusters continue to dominate with 87% of the TOP500 list

    Mellanox InfiniBand solutions provide the highest systems utilization in the TOP500

    for both high-performance computing and clouds

    76% CAGR
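
    For readers unfamiliar with the metric, CAGR is the standard compound annual growth rate of the plotted series; a generic form (not tied to specific list values) is:

    \[
    \text{CAGR} = \left(\frac{P_{\text{end}}}{P_{\text{start}}}\right)^{1/n} - 1
    \]

    where \(P_{\text{start}}\) and \(P_{\text{end}}\) are the values at the start and end of an n-year window.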

  • © 2015 Mellanox Technologies 3

    InfiniBand is the de-facto Interconnect solution for High-Performance Computing

    • Positioned to continue and expand in Cloud and Web 2.0

    InfiniBand connects the majority of the systems on the TOP500 list, with 257 systems

    • Increasing 15.8% year-over-year, from 222 systems in June’14 to 257 in June’15

    FDR InfiniBand is the most used technology on the TOP500 list, connecting 156 systems

    EDR InfiniBand has entered the list with 3 systems

    InfiniBand enables the most efficient system on the list with 99.8% efficiency – record!

    InfiniBand enables the top 17 most efficient systems on the list

    InfiniBand is the most used interconnect for Petascale systems with 33 systems

    TOP500 Major Highlights

  • © 2015 Mellanox Technologies 4

    InfiniBand is the most used interconnect technology for high-performance computing

    • InfiniBand accelerates 257 systems, 51.4% of the list

    FDR InfiniBand connects the fastest InfiniBand systems

    • TACC (#8), NASA (#11), Government (#13, #14), Tulip Trading (#15), EDRC (#16), Eni (#17), AFRL (#19), LRZ (#20)

    InfiniBand connects the most powerful clusters

    • 33 of the Petascale-performance systems

    InfiniBand is the most used interconnect technology in each category - TOP100, TOP200, TOP300, TOP400

    • Connects 50% (50 systems) of the TOP100, while Ethernet connects no systems

    • Connects 55% (110 systems) of the TOP200, while Ethernet connects only 10% (20 systems)

    • Connects 55% (165 systems) of the TOP300, while Ethernet connects only 18.7% (56 systems)

    • Connects 55% (220 systems) of the TOP400, while Ethernet connects only 23.8% (95 systems)

    • Connects 51.4% (257 systems) of the TOP500, while Ethernet connects 29.4% (147 systems)

    InfiniBand is the interconnect of choice for accelerator-based systems

    • 77% of the accelerator-based systems are connected with InfiniBand

    Diverse set of applications

    • High-end HPC, commercial HPC, Cloud and enterprise data center

    InfiniBand in the TOP500

  • © 2015 Mellanox Technologies 5

    Mellanox EDR InfiniBand is the fastest interconnect solution on the TOP500

    • 100Gb/s throughput, 150 million messages per second, less than 0.7 microsecond latency

    • Introduced to the list with 3 systems on the TOP500 list
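
    Putting those headline numbers into more familiar units (simple conversions, not additional measured figures):

    \[
    100~\text{Gb/s} = 12.5~\text{GB/s}, \qquad \frac{1}{150 \times 10^{6}~\text{msg/s}} \approx 6.7~\text{ns per message}
    \]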

    Mellanox InfiniBand is the most efficient interconnect technology on the list

    • Enables the highest system utilization on the TOP500 – 99.8% system efficiency

    • Enables the top 17 highest utilized systems on the TOP500 list

    Mellanox InfiniBand is the only Petascale-proven, standard interconnect solution

    • Connects 33 out of the 66 Petaflop-performance systems on the list (50%)

    • Connects 2X the number of Cray-based systems in the TOP100, 5.3X in the TOP500

    Mellanox’s end-to-end scalable solutions accelerate GPU-based systems

    • GPUDirect RDMA technology enables faster communications and higher performance
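
    As a rough illustration of what GPUDirect RDMA means at the programming level, the sketch below registers a GPU buffer directly with an InfiniBand verbs memory region so the adapter can DMA to and from device memory without staging through host RAM. It is a minimal sketch assuming CUDA, libibverbs and the nvidia-peermem (nv_peer_mem) kernel module; it is illustrative only and not taken from the presentation.

    /* Minimal GPUDirect RDMA registration sketch (illustrative only).
     * Assumes CUDA + libibverbs and the nvidia-peermem kernel module,
     * which lets ibv_reg_mr() accept a device pointer.
     * Error handling is omitted for brevity. */
    #include <cuda_runtime.h>
    #include <infiniband/verbs.h>
    #include <stdio.h>

    int main(void)
    {
        const size_t len = 1 << 20;               /* 1 MiB buffer (arbitrary size) */

        /* Open the first InfiniBand device and allocate a protection domain */
        int num;
        struct ibv_device **devs = ibv_get_device_list(&num);
        struct ibv_context *ctx  = ibv_open_device(devs[0]);
        struct ibv_pd      *pd   = ibv_alloc_pd(ctx);

        /* Allocate the buffer directly in GPU memory */
        void *gpu_buf = NULL;
        cudaMalloc(&gpu_buf, len);

        /* With GPUDirect RDMA the device pointer can be registered like host
         * memory; the HCA will then DMA to/from GPU memory directly. */
        struct ibv_mr *mr = ibv_reg_mr(pd, gpu_buf, len,
                                       IBV_ACCESS_LOCAL_WRITE |
                                       IBV_ACCESS_REMOTE_READ |
                                       IBV_ACCESS_REMOTE_WRITE);
        if (!mr)
            perror("ibv_reg_mr on GPU memory failed");
        else
            printf("GPU buffer registered, rkey=0x%x\n", mr->rkey);

        /* mr->lkey / mr->rkey can now be used in RDMA work requests
         * exactly as with host memory regions. */
        if (mr) ibv_dereg_mr(mr);
        cudaFree(gpu_buf);
        ibv_dealloc_pd(pd);
        ibv_close_device(ctx);
        ibv_free_device_list(devs);
        return 0;
    }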

    Mellanox in the TOP500

  • © 2015 Mellanox Technologies 6

    InfiniBand is the de-facto interconnect solution for performance-demanding applications

    TOP500 Interconnect Trends

  • © 2015 Mellanox Technologies 7

    TOP500 Petascale-Performance Systems

    Mellanox InfiniBand is the interconnect of choice for Petascale computing

    • Overall 33 systems, of which 26 use FDR InfiniBand

  • © 2015 Mellanox Technologies 8

    TOP100: Interconnect Trends

    InfiniBand is the most used interconnect solution in the TOP100

    The natural choice for world-leading supercomputers: performance, efficiency, scalability

  • © 2015 Mellanox Technologies 9

    TOP500 InfiniBand Accelerated Systems

    Number of Mellanox FDR InfiniBand systems grew 23% from June’14 to June’15

    EDR InfiniBand entered the list with 3 systems

  • © 2015 Mellanox Technologies 10

    InfiniBand is the most used interconnect of the TOP100, 200, 300, 400 supercomputers

    Due to superior performance, scalability, efficiency and return-on-investment

    InfiniBand versus Ethernet – TOP100, 200, 300, 400, 500

  • © 2015 Mellanox Technologies 11

    TOP500 Interconnect Placement

    InfiniBand is the high-performance interconnect of choice

    • Connects the most powerful clusters and provides the highest system utilization

    InfiniBand is the best price/performance interconnect for HPC systems

  • © 2015 Mellanox Technologies 12

    InfiniBand’s Unsurpassed System Efficiency

    TOP500 systems listed according to their efficiency

    InfiniBand is the key element responsible for the highest system efficiency

    Mellanox delivers efficiencies of up to 99.8% with InfiniBand

    Average Efficiency

    • InfiniBand: 85%

    • Cray: 74%

    • 10GbE: 66%

    • GigE: 43%
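
    For context, TOP500 efficiency is measured Linpack performance divided by theoretical peak, so the 99.8% record cited earlier means the system delivers nearly all of its theoretical floating-point capability:

    \[
    \text{Efficiency} = \frac{R_{\max}}{R_{\text{peak}}}, \qquad \frac{R_{\max}}{R_{\text{peak}}} = 0.998 \;\Rightarrow\; 99.8\%
    \]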

  • © 2015 Mellanox Technologies 13

    InfiniBand-connected systems’ performance demonstrates the highest growth rate

    InfiniBand-connected systems deliver 2.6X the performance of Ethernet-connected systems on the TOP500 list

    TOP500 Performance Trend

  • © 2015 Mellanox Technologies 14

    InfiniBand Performance Trends

    InfiniBand-connected CPUs grew 57% from June’14 to June‘15

    InfiniBand-based system performance grew 64% from June’14 to June‘15

    Mellanox InfiniBand is the most efficient and scalable interconnect solution

    Driving factors: performance, efficiency, scalability, rapidly growing core counts

  • © 2015 Mellanox Technologies 15

    Proud to Accelerate Future DOE Leadership Systems (“CORAL”)

    “Summit” and “Sierra” systems

    Paving the Road to Exascale

  • © 2015 Mellanox Technologies 16

    Entering the Era of 100Gb/s (InfiniBand, Ethernet)

    Copper (passive, active), optical cables (VCSEL), silicon photonics

    36 EDR (100Gb/s) ports
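
    A quick back-of-the-envelope check on what the port count implies for aggregate switch capacity (simple arithmetic, not a quoted specification):

    \[
    36 \times 100~\text{Gb/s} = 3.6~\text{Tb/s per direction} \;\approx\; 7.2~\text{Tb/s bidirectional}
    \]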

  • © 2015 Mellanox Technologies 17

    Interconnect Solutions Leadership – Software and Hardware

    Comprehensive End-to-End Interconnect Software Products

    Software and Services, ICs, Switches/Gateways, Adapter Cards, Cables/Modules, Metro/WAN

    At the Speeds of 10, 25, 40, 50, 56 and 100 Gigabit per Second

    Comprehensive End-to-End InfiniBand and Ethernet Portfolio

  • © 2015 Mellanox Technologies 18

    End-to-End Interconnect Solutions for All Platforms

    Highest Performance and Scalability for

    X86, Power, GPU, ARM and FPGA-based Compute and Storage Platforms

    Smart Interconnect to Unleash The Power of All Compute Architectures

    x86, OpenPOWER, GPU, ARM, FPGA

  • © 2015 Mellanox Technologies 19

    HPC Clouds – Performance Demands Mellanox Solutions

    San Diego Supercomputer Center “Comet” System (2015) to Leverage Mellanox Solutions and Technology to Build HPC Cloud

  • © 2015 Mellanox Technologies 20

    Technology Roadmap – One-Generation Lead over the Competition

    [Roadmap graphic, 2000-2020: interconnect speeds advancing from 20Gb/s to 40Gb/s, 56Gb/s, 100Gb/s (2015) and 200Gb/s, spanning the Terascale, Petascale and Exascale eras; Mellanox-connected milestones include Virginia Tech (Apple), ranked 3rd on the TOP500 in 2003, and “Roadrunner”, ranked 1st]

  • © 2015 Mellanox Technologies 21

    “LENOX“, EDR InfiniBand connected system at

    the Lenovo HPC innovation center, Germany

    EDR InfiniBand provides ~25% higher system performance versus FDR InfiniBand on Graph500

    • At 128 nodes

    Lenovo Innovation Center - EDR InfiniBand System (TOP500)

    EDR Mellanox Connected

  • © 2015 Mellanox Technologies 22

    Magic Cube II supercomputer

    #237 on the TOP500 list

    Sugon Cluster TC4600E system

    Mellanox end-to-end EDR InfiniBand

    Shanghai Supercomputer Center – EDR InfiniBand (TOP500)

    EDR Mellanox Connected

  • © 2015 Mellanox Technologies 23

    Mellanox Interconnect Advantages

    Mellanox solutions provide proven, scalable and high-performance end-to-end connectivity

    Standards-based (InfiniBand, Ethernet), supported by a large eco-system

    Flexible, supports all compute architectures: x86, Power, ARM, GPU, FPGA, etc.

    Backward and future compatible

    Proven, most used solution for Petascale systems and overall TOP500

    Speed-Up Your Present, Protect Your Future

    Paving The Road to Exascale Computing Together

  • © 2015 Mellanox Technologies 24

    “Stampede” system

    6,000+ nodes (Dell), 462,462 cores, Intel Xeon Phi co-processors

    5.2 Petaflops

    Mellanox end-to-end FDR InfiniBand

    Texas Advanced Computing Center/Univ. of Texas - #8

    Petaflop Mellanox Connected

  • © 2015 Mellanox Technologies 25

    Pleiades system

    SGI Altix ICE

    20K InfiniBand nodes

    3.4 sustained Petaflop performance

    Mellanox end-to-end FDR and QDR InfiniBand

    Supports a variety of scientific and engineering projects

    • Coupled atmosphere-ocean models

    • Future space vehicle design

    • Large-scale dark matter halos and galaxy evolution

    NASA Ames Research Center - #11

    Petaflop Mellanox Connected

  • © 2015 Mellanox Technologies 26

    “HPC2” system

    IBM iDataPlex DX360M4

    NVIDIA K20x GPUs

    3.2 Petaflops sustained performance

    Mellanox end-to-end FDR InfiniBand

    Exploration & Production ENI S.p.A. - #17

    Petaflop Mellanox Connected


  • © 2015 Mellanox Technologies 27

    IBM iDataPlex and Intel Sandy Bridge

    147,456 cores

    Mellanox end-to-end FDR InfiniBand solutions

    2.9 sustained Petaflop performance

    One of the fastest supercomputers in Europe

    91% efficiency

    Leibniz Rechenzentrum SuperMUC - #20

    Petaflop Mellanox Connected


  • © 2015 Mellanox Technologies 28

    Tokyo Institute of Technology - #22

    TSUBAME 2.0, first Petaflop system in Japan

    2.8 PF performance

    HP ProLiant SL390s G7, 1,400 servers

    Mellanox 40Gb/s InfiniBand

    Petaflop Mellanox Connected

  • © 2015 Mellanox Technologies 29

    “Cascade” system

    Atipa Visione IF442 Blade Server

    2.5 sustained Petaflop performance

    Mellanox end-to-end InfiniBand FDR

    Intel Xeon Phi 5110P accelerator

    DOE/SC/Pacific Northwest National Laboratory - #25

    Petaflop Mellanox Connected


  • © 2015 Mellanox Technologies 30

    “Pangea” system

    SGI Altix X system, 110,400 cores

    Mellanox FDR InfiniBand

    2 sustained Petaflop performance

    91% efficiency

    Total Exploration Production - #29

    Petaflop Mellanox Connected


  • © 2015 Mellanox Technologies 31

    Occigen system

    1.6 sustained Petaflop performance

    Bull bullx DLC

    Mellanox end-to-end FDR InfiniBand

    Grand Équipement National de Calcul Intensif - Centre Informatique

    National de l'Enseignement Supérieur (GENCI-CINES) - #36

    Petaflop Mellanox Connected

  • © 2015 Mellanox Technologies 32

    “Spirit” system

    SGI Altix X system, 74,584 cores

    Mellanox FDR InfiniBand

    1.4 sustained Petaflop performance

    92.5% efficiency

    Air Force Research Laboratory - #42

    Petaflop Mellanox Connected


  • © 2015 Mellanox Technologies 33

    Bull Bullx B510, Intel Sandy Bridge

    77,184 cores

    Mellanox end-to-end FDR InfiniBand solutions

    1.36 sustained Petaflop performance

    CEA/TGCC-GENCI - #44

    Petaflop Mellanox Connected

  • © 2015 Mellanox Technologies 34

    IBM iDataPlex DX360M4

    Mellanox end-to-end FDR InfiniBand solutions

    1.3 sustained Petaflop performance

    Max-Planck-Gesellschaft MPI/IPP - # 47

    Petaflop Mellanox Connected

  • © 2015 Mellanox Technologies 35

    Dawning TC3600 Blade Supercomputer

    5,200 nodes, 120,640 cores, NVIDIA GPUs

    Mellanox end-to-end 40Gb/s InfiniBand solutions

    • ConnectX-2 and IS5000 switches

    1.27 sustained Petaflop performance

    The first Petaflop system in China

    National Supercomputing Centre in Shenzhen (NSCS) - #48

    Petaflop Mellanox Connected

  • © 2015 Mellanox Technologies 36

    “Yellowstone” system

    1.3 sustained Petaflop performance

    72,288 processor cores, 4,518 nodes (IBM)

    Mellanox end-to-end FDR InfiniBand, full fat tree, single plane
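
    As a side note on the topology, a non-blocking fat tree built from p-port switch elements supports up to 2(p/2)^L end nodes with L switch levels; this is a generic sizing formula, not the documented Yellowstone configuration:

    \[
    N_{\max} = 2\left(\frac{p}{2}\right)^{L}, \qquad p = 36:\; L = 2 \Rightarrow 648, \;\; L = 3 \Rightarrow 11{,}664
    \]

    With 36-port building blocks, two levels reach 648 nodes and three reach 11,664, which is why a cluster of 4,518 nodes needs a deeper (or director-class) fat tree.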

    NCAR (National Center for Atmospheric Research) - #49

    Petaflop Mellanox Connected

  • © 2015 Mellanox Technologies 37

    Bull Bullx B510, Intel Sandy Bridge

    70,560 cores

    1.24 sustained Petaflop performance

    Mellanox end-to-end InfiniBand solutions

    International Fusion Energy Research Centre (IFERC), EU(F4E) -

    Japan Broader Approach collaboration - #51

    Petaflop Mellanox Connected

  • © 2015 Mellanox Technologies 38

    The “Cartesius” system - the Dutch supercomputer

    Bull Bullx DLC B710/B720 system

    Mellanox end-to-end InfiniBand solutions

    1.05 sustained Petaflop performance

    SURFsara - #59

    Petaflop Mellanox Connected

  • © 2015 Mellanox Technologies 39

    Commissariat à l'Énergie Atomique (CEA) - #64

    Tera 100, first Petaflop system in Europe - 1.05 PF performance

    4,300 Bull S Series servers

    140,000 Intel® Xeon® 7500 processing cores

    300TB of central memory, 20PB of storage

    Mellanox 40Gb/s InfiniBand

    Petaflop Mellanox Connected

  • © 2015 Mellanox Technologies 40

    Fujitsu PRIMERGY CX250 S1

    Mellanox FDR 56Gb/s InfiniBand end-to-end

    980 Tflops performance

    Australian National University - #72


  • © 2015 Mellanox Technologies 41

    “Conte” system

    HP Cluster Platform SL250s Gen8

    Intel Xeon E5-2670 8C 2.6GHz

    Intel Xeon Phi 5110P accelerator

    Total of 77,520 cores

    Mellanox FDR 56Gb/s InfiniBand end-to-end

    Mellanox Connect-IB InfiniBand adapters

    Mellanox MetroX long-haul InfiniBand solution

    980 Tflops performance

    Purdue University - #73

  • © 2015 Mellanox Technologies 42

    “MareNostrum 3” system

    1.1 Petaflops peak performance

    ~50K cores, 91% efficiency

    Mellanox FDR InfiniBand

    Barcelona Supercomputing Center - #77

  • Thank You