
Transcript of The MVAPICH2 Project: Latest Developments and Plans ...

Page 1: The MVAPICH2 Project: Latest Developments and Plans ...

The MVAPICH2 Project: Latest Developments and Plans Towards Exascale Computing

Dhabaleswar K. (DK) Panda

The Ohio State University

E-mail: [email protected]

http://www.cse.ohio-state.edu/~panda

Presentation at Mellanox Theatre (SC '16)

Page 2: The MVAPICH2 Project: Latest Developments and Plans ...


Drivers of Modern HPC Cluster Architectures

(Systems pictured: Tianhe-2, Titan, Stampede, Tianhe-1A)

• Multi-core/many-core technologies

• Remote Direct Memory Access (RDMA)-enabled networking (InfiniBand and RoCE)

• Solid State Drives (SSDs), Non-Volatile Random-Access Memory (NVRAM), NVMe-SSD

• Accelerators (NVIDIA GPGPUs and Intel Xeon Phi)

(Figure: building blocks of a modern HPC node: multi-core processors; high-performance interconnects such as InfiniBand with <1 usec latency and 100 Gbps bandwidth; accelerators/coprocessors with high compute density and high performance/watt, >1 TFlop DP on a chip; SSD, NVMe-SSD, and NVRAM.)

Page 3: The MVAPICH2 Project: Latest Developments and Plans ...


Parallel Programming Models Overview

(Figure: three abstract machine models, each with processes P1-P3.

Shared Memory Model (SHMEM, DSM): all processes operate on a single shared memory.

Distributed Memory Model (MPI, the Message Passing Interface): each process has its own private memory.

Partitioned Global Address Space (PGAS) model (Global Arrays, UPC, Chapel, X10, CAF, …): private memories are combined into a logical shared memory.)

• Programming models provide abstract machine models

• Models can be mapped on different types of systems

– e.g. Distributed Shared Memory (DSM), MPI within a node, etc.

• PGAS models and Hybrid MPI+PGAS models are gradually gaining importance
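To make the distributed memory model concrete, a minimal message-passing exchange in MPI might look like the following (a generic sketch, not tied to any particular MPI library):

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, value = 0;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        value = 42;                               /* data lives in rank 0's private memory */
        MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        printf("rank 1 received %d\n", value);    /* an explicit message moved it across memories */
    }

    MPI_Finalize();
    return 0;
}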

Page 4: The MVAPICH2 Project: Latest Developments and Plans ...


Designing Communication Libraries for Multi-Petaflop and Exaflop Systems: Challenges

(Figure: co-design stack. Application Kernels/Applications sit on top of Programming Models: MPI, PGAS (UPC, Global Arrays, OpenSHMEM), CUDA, OpenMP, OpenACC, Cilk, Hadoop (MapReduce), Spark (RDD, DAG), etc. The Middleware layer is a Communication Library or Runtime for Programming Models, providing Point-to-point Communication, Collective Communication, Energy-Awareness, Synchronization and Locks, I/O and File Systems, and Fault Tolerance. Underneath are Networking Technologies (InfiniBand, 40/100 GigE, Aries, and Omni-Path), Multi-/Many-core Architectures, and Accelerators (NVIDIA and MIC). Co-design opportunities and challenges across these layers: Performance, Scalability, Fault-Resilience.)

Page 5: The MVAPICH2 Project: Latest Developments and Plans ...

Designing (MPI+X) at Exascale

• Scalability for million to billion processors

– Support for highly-efficient inter-node and intra-node communication (both two-sided and one-sided)

• Scalable Collective communication (a non-blocking collective sketch follows this list)

– Offload

– Non-blocking

– Topology-aware

• Balancing intra-node and inter-node communication for next generation multi-core (128-1024 cores/node)

– Multiple end-points per node

• Support for efficient multi-threading

• Integrated Support for GPGPUs and Accelerators

• Fault-tolerance/resiliency

• QoS support for communication and I/O

• Support for Hybrid MPI+PGAS programming (MPI + OpenMP, MPI + UPC, MPI + OpenSHMEM, CAF, …)

• Virtualization

• Energy-Awareness
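As noted in the collective-communication item above, non-blocking collectives let the runtime overlap communication with computation. A minimal sketch using standard MPI-3 (do_independent_work is a hypothetical placeholder for work that does not depend on the result):

#include <mpi.h>

void do_independent_work(void);          /* hypothetical computation unrelated to 'global' */

/* Start an allreduce, overlap it with computation, and complete it later. */
void overlapped_allreduce(const double *local, double *global, int n, MPI_Comm comm)
{
    MPI_Request req;

    MPI_Iallreduce(local, global, n, MPI_DOUBLE, MPI_SUM, comm, &req);

    do_independent_work();               /* the collective can progress while we compute */

    MPI_Wait(&req, MPI_STATUS_IGNORE);   /* 'global' is valid only after completion */
}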


Page 6: The MVAPICH2 Project: Latest Developments and Plans ...

Overview of the MVAPICH2 Project

• High Performance open-source MPI Library for InfiniBand, Omni-Path, Ethernet/iWARP, and RDMA over Converged Ethernet (RoCE)

– MVAPICH (MPI-1), MVAPICH2 (MPI-2.2 and MPI-3.0), Started in 2001, First version available in 2002

– MVAPICH2-X (MPI + PGAS), Available since 2011

– Support for GPGPUs (MVAPICH2-GDR) and MIC (MVAPICH2-MIC), Available since 2014

– Support for Virtualization (MVAPICH2-Virt), Available since 2015

– Support for Energy-Awareness (MVAPICH2-EA), Available since 2015

– Support for InfiniBand Network Analysis and Monitoring (OSU INAM) since 2015

– Used by more than 2,690 organizations in 83 countries

– More than 402,000 (> 0.4 million) downloads from the OSU site directly

– Empowering many TOP500 clusters (Nov ‘16 ranking)

• 1st ranked 10,649,640-core cluster (Sunway TaihuLight) at NSC, Wuxi, China

• 13th ranked 241,108-core cluster (Pleiades) at NASA

• 17th ranked 519,640-core cluster (Stampede) at TACC

• 40th ranked 76,032-core cluster (Tsubame 2.5) at Tokyo Institute of Technology and many others

– Available with software stacks of many vendors and Linux Distros (RedHat and SuSE)

– http://mvapich.cse.ohio-state.edu

• Empowering Top500 systems for over a decade

– System-X from Virginia Tech (3rd in Nov 2003, 2,200 processors, 12.25 TFlops) -> Sunway TaihuLight at NSC, Wuxi, China (1st in Nov '16, 10,649,640 cores, 93 PFlops)

Page 7: The MVAPICH2 Project: Latest Developments and Plans ...

(Chart: cumulative number of downloads from the OSU site, Sep-04 through May-16, growing from near zero to more than 400,000; release markers along the timeline include MV 0.9.4, MV2 0.9.0, MV2 0.9.8, MV 1.0, MV2 1.0, MV2 1.0.3, MV 1.1, MV2 1.4, MV2 1.5, MV2 1.6, MV2 1.7, MV2 1.8, MV2 1.9, MV2 2.1, MV2-GDR 2.0b, MV2-MIC 2.0, MV2-Virt 2.1rc2, MV2-GDR 2.2rc1, MV2-X 2.2, and MV2 2.2)

MVAPICH/MVAPICH2 Release Timeline and Downloads

Page 8: The MVAPICH2 Project: Latest Developments and Plans ...


MVAPICH2 Architecture

(Figure: MVAPICH2 architecture.

High Performance Parallel Programming Models: Message Passing Interface (MPI); PGAS (UPC, OpenSHMEM, CAF, UPC++); Hybrid MPI + X (MPI + PGAS + OpenMP/Cilk).

High Performance and Scalable Communication Runtime with diverse APIs and mechanisms: Point-to-point Primitives, Collectives Algorithms, Energy-Awareness, Remote Memory Access, I/O and File Systems, Fault Tolerance, Virtualization, Active Messages, Job Startup, Introspection & Analysis.

Support for Modern Networking Technology (InfiniBand, iWARP, RoCE, Omni-Path) and Modern Multi-/Many-core Architectures (Intel Xeon, OpenPower, Xeon Phi (MIC, KNL*), NVIDIA GPGPU).

Transport protocols: RC, XRC, UD, DC. Modern features: UMR, ODP*, SR-IOV, Multi-Rail. Transport mechanisms: Shared Memory, CMA, IVSHMEM. Modern features: MCDRAM*, NVLink*, CAPI*.

* Upcoming)

Page 9: The MVAPICH2 Project: Latest Developments and Plans ...

MVAPICH2 Software Family

High-Performance Parallel Programming Libraries

MVAPICH2 Support for InfiniBand, Omni-Path, Ethernet/iWARP, and RoCE

MVAPICH2-X Advanced MPI features, OSU INAM, PGAS (OpenSHMEM, UPC, UPC++, and CAF), and MPI+PGAS programming models with unified communication runtime

MVAPICH2-GDR Optimized MPI for clusters with NVIDIA GPUs

MVAPICH2-Virt High-performance and scalable MPI for hypervisor and container based HPC cloud

MVAPICH2-EA Energy aware and High-performance MPI

MVAPICH2-MIC Optimized MPI for clusters with Intel KNC

Microbenchmarks

OMB Microbenchmarks suite to evaluate MPI and PGAS (OpenSHMEM, UPC, and UPC++) libraries for CPUs and GPUs

Tools

OSU INAM Network monitoring, profiling, and analysis for clusters with MPI and scheduler integration

OEMT Utility to measure the energy consumption of MPI applications

Page 10: The MVAPICH2 Project: Latest Developments and Plans ...

MVAPICH2 Distributions

• MVAPICH2
– Basic MPI support for IB, iWARP and RoCE
– Basic GPU (CUDA-aware MPI) support

• MVAPICH2-X
– MPI, PGAS and Hybrid MPI+PGAS support for IB

• MVAPICH2-Virt
– Optimized for HPC Clouds with IB and SR-IOV virtualization
– With and without OpenStack

• MVAPICH2-EA
– Energy-efficient support for point-to-point and collective operations
– Compatible with OSU Energy Monitoring Tool (OEMT-0.8)

• OSU Micro-Benchmarks (OMB)
– MPI (including CUDA-aware MPI), OpenSHMEM and UPC

• OSU INAM
– InfiniBand Network Analysis and Monitoring Tool

• MVAPICH2-MIC (KNL)
– Optimized for IB clusters with Intel Xeon Phis

• MVAPICH2-GDR and Deep Learning (will be presented on Thursday at 11:00am)

Page 11: The MVAPICH2 Project: Latest Developments and Plans ...

MVAPICH2 2.2 GA

• Released on 09/09/2016

• Major Features and Enhancements

– Support and optimization for OpenPower, KNL, Omni-Path, PSM2, Mellanox EDR

– Enhanced performance for MPI_Comm_split through a new bitonic algorithm

– Enable graceful fallback to Shared Memory if LiMIC2 or CMA transfer fails

– Enable support for multiple MPI initializations

– Remove verbs dependency when building the PSM and PSM2 channels

– Allow processes to request MPI_THREAD_MULTIPLE when socket or NUMA node level affinity is specified (see the sketch after this list)

– Collective tuning for Opal@LLNL, Bridges@PSC, and Stampede-1.5@TACC

– Tuning and architecture detection for Intel Broadwell processors

– Warn the user to reconfigure the library if the rank type is not large enough to represent all ranks in the job

– Unify process affinity support in Gen2, PSM and PSM2 channels
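For the MPI_THREAD_MULTIPLE item above, the standard way an application requests that thread level is sketched below (generic MPI; whether the full level is granted still depends on how the library was built and configured):

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int provided;

    /* Ask for full multi-threaded support; the library reports what it can actually provide. */
    MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);

    if (provided < MPI_THREAD_MULTIPLE)
        printf("warning: runtime provided thread level %d only\n", provided);

    /* ... application code in which multiple threads may call MPI ... */

    MPI_Finalize();
    return 0;
}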

Page 12: The MVAPICH2 Project: Latest Developments and Plans ...


One-way Latency: MPI over IB with MVAPICH2

(Charts: Small Message Latency and Large Message Latency, plotted as Latency (us) vs. Message Size (bytes) for TrueScale-QDR, ConnectX-3-FDR, ConnectIB-Dual FDR, ConnectX-4-EDR, and Omni-Path; reported small-message latencies are 1.01, 1.04, 1.11, 1.15, and 1.19 us.

Platforms: TrueScale-QDR - 3.1 GHz Deca-core (Haswell) Intel PCI Gen3 with IB switch; ConnectX-3-FDR - 2.8 GHz Deca-core (IvyBridge) Intel PCI Gen3 with IB switch; ConnectIB-Dual FDR - 3.1 GHz Deca-core (Haswell) Intel PCI Gen3 with IB switch; ConnectX-4-EDR - 3.1 GHz Deca-core (Haswell) Intel PCI Gen3 with IB switch; Omni-Path - 3.1 GHz Deca-core (Haswell) Intel PCI Gen3 with Omni-Path switch)

Page 13: The MVAPICH2 Project: Latest Developments and Plans ...

Bandwidth: MPI over IB with MVAPICH2

(Charts: Unidirectional Bandwidth and Bidirectional Bandwidth, plotted as Bandwidth (MBytes/sec) vs. Message Size (bytes) for TrueScale-QDR, ConnectX-3-FDR, ConnectIB-DualFDR, ConnectX-4-EDR, and Omni-Path; reported unidirectional peaks are 3,373; 6,356; 12,083; 12,366; and 12,590 MBytes/sec, and bidirectional peaks are 6,228; 12,161; 21,227; 21,983; and 24,136 MBytes/sec.

Platforms: TrueScale-QDR - 3.1 GHz Deca-core (Haswell) Intel PCI Gen3 with IB switch; ConnectX-3-FDR - 2.8 GHz Deca-core (IvyBridge) Intel PCI Gen3 with IB switch; ConnectIB-Dual FDR - 3.1 GHz Deca-core (Haswell) Intel PCI Gen3 with IB switch; ConnectX-4-EDR - 3.1 GHz Deca-core (Haswell) Intel PCI Gen3 with IB switch; Omni-Path - 3.1 GHz Deca-core (Haswell) Intel PCI Gen3 with Omni-Path switch)

Page 14: The MVAPICH2 Project: Latest Developments and Plans ...

Towards High Performance and Scalable Startup at Exascale

• Near-constant MPI and OpenSHMEM initialization time at any process count

• 10x and 30x improvement in startup time of MPI and OpenSHMEM respectively at 16,384 processes

• Memory consumption for remote endpoint information reduced by O(processes per node)

• 1GB memory saved per node with 1M processes and 16 processes per node

(Figure: job startup performance vs. memory required to store endpoint information, moving from state-of-the-art PGAS (P) and MPI (M) toward the optimized PGAS/MPI design (O) via (a) On-demand Connection, (b) PMIX_Ring, (c) PMIX_Ibarrier, (d) PMIX_Iallgather, and (e) Shmem-based PMI)

On-demand Connection Management for OpenSHMEM and OpenSHMEM+MPI. S. Chakraborty, H. Subramoni, J. Perkins, A. A. Awan, and D. K. Panda, 20th International Workshop on High-level Parallel Programming Models and Supportive Environments (HIPS '15)

PMI Extensions for Scalable MPI Startup. S. Chakraborty, H. Subramoni, A. Moody, J. Perkins, M. Arnold, and D. K. Panda, Proceedings of the 21st European MPI Users' Group Meeting (EuroMPI/Asia '14)

Non-blocking PMI Extensions for Fast MPI Startup. S. Chakraborty, H. Subramoni, A. Moody, A. Venkatesh, J. Perkins, and D. K. Panda, 15th IEEE/ACM International Symposium on Cluster, Cloud and Grid Computing (CCGrid '15)

SHMEMPMI – Shared Memory based PMI for Improved Performance and Scalability. S. Chakraborty, H. Subramoni, J. Perkins, and D. K. Panda, 16th IEEE/ACM International Symposium on Cluster, Cloud and Grid Computing (CCGrid '16)

Page 15: The MVAPICH2 Project: Latest Developments and Plans ...

(Chart: Time Taken for MPI_Init (seconds) vs. Number of Processes (16 to 16K), comparing Hello World and MPI_Init with MV2-2.0 + default SLURM)

• Address exchange over PMI is the major bottleneck in job startup

• Non-blocking PMI exchange hides this cost by overlapping it with application initialization and computation

• New PMI operation PMIX_Allgather for improved symmetric data transfer

• Near-constant MPI_Init at any scale

• MPI_Init is 59 times faster at 8,192 processes (512 nodes)

• Hello World (MPI_Init + MPI_Finalize) takes 5.7 times less time at 8,192 processes

Non-blocking Process Management Interface (PMI) Primitives for Scalable MPI Startup

Available since MVAPICH2-2.1 and as patch for SLURM-15.08.8 and SLURM-16.05.1

More Details in Student Research Poster Presentation (5:15-7:00pm today)
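The MPI_Init times above are simple wall-clock measurements around initialization; a minimal sketch of such a measurement (generic C/MPI, not the exact benchmark used for these numbers) is:

#include <mpi.h>
#include <stdio.h>
#include <time.h>

/* MPI_Wtime is unavailable before MPI_Init, so use a monotonic OS clock instead. */
static double now_sec(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec + ts.tv_nsec / 1e9;
}

int main(int argc, char **argv)
{
    double t0 = now_sec();
    MPI_Init(&argc, &argv);
    double t1 = now_sec();

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    if (rank == 0)
        printf("MPI_Init took %.3f seconds\n", t1 - t0);

    MPI_Finalize();
    return 0;
}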

Page 16: The MVAPICH2 Project: Latest Developments and Plans ...

Performance Engineering Applications using MVAPICH2 and TAU

● Enhance existing support for MPI_T in MVAPICH2 to expose a richer set of performance and control variables

● Get and display MPI Performance Variables (PVARs) made available by the runtime in TAU

● Control the runtime's behavior via MPI Control Variables (CVARs)

● Introduced support for new MPI_T based CVARs to MVAPICH2: MPIR_CVAR_MAX_INLINE_MSG_SZ, MPIR_CVAR_VBUF_POOL_SIZE, MPIR_CVAR_VBUF_SECONDARY_POOL_SIZE

● TAU enhanced with support for setting MPI_T CVARs in a non-interactive mode for uninstrumented applications

(Screenshots: VBUF usage without and with CVAR-based tuning, as displayed by ParaProf)
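A hedged sketch of how an application or tool can set one of the CVARs listed above through the standard MPI-3 tools information (MPI_T) interface; the CVAR name comes from this slide, the value written is purely illustrative, and error handling is omitted:

#include <mpi.h>
#include <stdio.h>
#include <string.h>

int main(int argc, char **argv)
{
    int i, num_cvars, provided, count;
    int value = 1024;                     /* illustrative value, not a recommended setting */
    MPI_T_cvar_handle handle;

    /* The MPI_T interface may be initialized and used before MPI_Init. */
    MPI_T_init_thread(MPI_THREAD_SINGLE, &provided);
    MPI_T_cvar_get_num(&num_cvars);

    for (i = 0; i < num_cvars; i++) {
        char name[256], desc[256];
        int name_len = sizeof(name), desc_len = sizeof(desc);
        int verbosity, bind, scope;
        MPI_Datatype dtype;
        MPI_T_enum enumtype;

        MPI_T_cvar_get_info(i, name, &name_len, &verbosity, &dtype,
                            &enumtype, desc, &desc_len, &bind, &scope);
        if (strcmp(name, "MPIR_CVAR_VBUF_POOL_SIZE") == 0) {  /* CVAR named on this slide */
            MPI_T_cvar_handle_alloc(i, NULL, &handle, &count);
            MPI_T_cvar_write(handle, &value);                 /* assumes an integer-typed CVAR */
            MPI_T_cvar_handle_free(&handle);
            break;
        }
    }

    MPI_Init(&argc, &argv);
    /* ... instrumented or uninstrumented application ... */
    MPI_Finalize();
    MPI_T_finalize();
    return 0;
}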

Page 17: The MVAPICH2 Project: Latest Developments and Plans ...


Dynamic and Adaptive Tag Matching

(Charts: Normalized Total Tag Matching Time at 512 Processes, normalized to Default (lower is better); Normalized Memory Overhead per Process at 512 Processes, compared to Default (lower is better))

Adaptive and Dynamic Design for MPI Tag Matching; M. Bayatpour, H. Subramoni, S. Chakraborty, and D. K. Panda; IEEE Cluster 2016. [Best Paper Nominee]

Challenge: Tag matching is a significant overhead for receivers. Existing solutions are static, do not adapt dynamically to the communication pattern, and do not consider memory overhead.

Solution: A new tag matching design that dynamically adapts to communication patterns, uses different strategies for different ranks, and bases its decisions on the number of request objects that must be traversed before hitting the required one.

Results: Better performance than other state-of-the-art tag-matching schemes, with minimum memory consumption. Will be available in future MVAPICH2 releases.

Page 18: The MVAPICH2 Project: Latest Developments and Plans ...

MVAPICH2 Distributions

• MVAPICH2
– Basic MPI support for IB, iWARP and RoCE
– Basic GPU (CUDA-aware MPI) support

• MVAPICH2-X
– MPI, PGAS and Hybrid MPI+PGAS support for IB

• MVAPICH2-Virt
– Optimized for HPC Clouds with IB and SR-IOV virtualization
– With and without OpenStack

• MVAPICH2-EA
– Energy-efficient support for point-to-point and collective operations
– Compatible with OSU Energy Monitoring Tool (OEMT-0.8)

• OSU Micro-Benchmarks (OMB)
– MPI (including CUDA-aware MPI), OpenSHMEM and UPC

• OSU INAM
– InfiniBand Network Analysis and Monitoring Tool

• MVAPICH2-MIC (KNL)
– Optimized for IB clusters with Intel Xeon Phis

• MVAPICH2-GDR and Deep Learning (will be presented on Thursday at 11:00am)

Page 19: The MVAPICH2 Project: Latest Developments and Plans ...

MVAPICH2-X 2.2 GA

• Released on 09/09/2016

• Major Features and Enhancements

– MPI Features
• Based on MVAPICH2 2.2 GA (OFA-IB-CH3 interface)
• Efficient support for Unified Memory Registration (UMR) and the On Demand Paging (ODP) feature of Mellanox for point-to-point and RMA operations
• Support for Intel Knights Landing and OpenPower architectures

– UPC Features
• Support for Intel Knights Landing and OpenPower architectures

– UPC++ Features
• Support for Intel Knights Landing and OpenPower architectures

– OpenSHMEM Features
• Support for Intel Knights Landing and OpenPower architectures

– CAF Features
• Support for Intel Knights Landing and OpenPower architectures

– Hybrid Program Features
• Support for Intel Knights Landing and OpenPower architectures for hybrid MPI+PGAS applications

– Unified Runtime Features
• Based on MVAPICH2 2.2 GA (OFA-IB-CH3 interface). All the runtime features enabled by default in the OFA-IB-CH3 and OFA-IB-RoCE interfaces of MVAPICH2 2.2 GA are available in MVAPICH2-X 2.2 GA

Page 20: The MVAPICH2 Project: Latest Developments and Plans ...


MVAPICH2-X for Hybrid MPI + PGAS Applications

• Current Model – Separate runtimes for OpenSHMEM/UPC/UPC++/CAF and MPI

– Possible deadlock if both runtimes are not progressed

– Consumes more network resources

• Unified communication runtime for MPI, UPC, UPC++, OpenSHMEM, CAF

– Available since 2012 (starting with MVAPICH2-X 1.9)

– http://mvapich.cse.ohio-state.edu

Page 21: The MVAPICH2 Project: Latest Developments and Plans ...


Minimizing Memory Footprint by Direct Connect (DC) Transport

(Figure: four nodes connected through the IB network; Node 0: P0, P1; Node 1: P2, P3; Node 2: P4, P5; Node 3: P6, P7)

• Constant connection cost (One QP for any peer)

• Full Feature Set (RDMA, Atomics etc)

• Separate objects for send (DC Initiator) and receive (DC Target)

– DC Target identified by “DCT Number”

– Messages routed with (DCT Number, LID)

– Requires same “DC Key” to enable communication

• Available since MVAPICH2-X 2.2a

(Charts: Normalized Execution Time for NAMD - Apoa1 (large data set) at 160, 320, and 620 processes, and Connection Memory (KB) for Alltoall at 80, 160, 320, and 640 processes, comparing RC, DC-Pool, UD, and XRC; reported memory values range from about 1 KB up to 1022 and 4797 KB)

H. Subramoni, K. Hamidouche, A. Venkatesh, S. Chakraborty and D. K. Panda, Designing MPI Library with Dynamic Connected Transport (DCT)

of InfiniBand : Early Experiences. IEEE International Supercomputing Conference (ISC ’14)

Page 22: The MVAPICH2 Project: Latest Developments and Plans ...

User-mode Memory Registration (UMR)

• Introduced by Mellanox to support direct local and remote noncontiguous memory access

• Avoids packing at the sender and unpacking at the receiver

• Available in MVAPICH2-X 2.2b

(Charts: Small & Medium Message Latency (4K to 1M bytes) and Large Message Latency (2M to 16M bytes), plotted as Latency (us) vs. Message Size (Bytes), comparing UMR with Default)

Connect-IB (54 Gbps): 2.8 GHz Dual Ten-core (IvyBridge) Intel PCI Gen3 with Mellanox IB FDR switch

M. Li, H. Subramoni, K. Hamidouche, X. Lu and D. K. Panda, “High Performance MPI Datatype Support with User-mode

Memory Registration: Challenges, Designs and Benefits”, CLUSTER, 2015


Page 23: The MVAPICH2 Project: Latest Developments and Plans ...

On-Demand Paging (ODP)

• Applications no longer need to pin down the underlying physical pages

• Memory Regions (MRs) are never pinned by the OS

• Paged in by the HCA when needed

• Paged out by the OS when reclaimed

• ODP can be divided into two classes

– Explicit ODP: applications still register memory buffers for communication, but this operation is used to define access control for IO rather than to pin down the pages

– Implicit ODP: applications are provided with a special memory key that represents their complete address space, so they do not need to register any virtual address range

• Advantages: simplifies programming, unlimited MR sizes, physical memory optimization

(Chart: Execution Time (s, log scale) for several applications at 64 processes, comparing Pin-down with ODP)

M. Li, K. Hamidouche, X. Lu, H. Subramoni, J. Zhang, and D. K. Panda,

“Designing MPI Library with On-Demand Paging (ODP) of InfiniBand:

Challenges and Benefits”, SC 2016.

Wednesday 11/16/2016 @ 11:00 – 11:30 AM in Room 355-D
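Below the MPI layer, explicit ODP corresponds to verbs-level registration with an on-demand access flag. A hedged sketch (assuming an ODP-capable HCA, a libibverbs that defines IBV_ACCESS_ON_DEMAND, and an already-created protection domain) might look like:

#include <infiniband/verbs.h>
#include <stddef.h>
#include <stdio.h>

/* Register a buffer with explicit ODP: access rights are defined up front,
 * but the pages are not pinned; the HCA faults them in on demand. */
struct ibv_mr *register_odp_buffer(struct ibv_pd *pd, void *buf, size_t len)
{
    int access = IBV_ACCESS_LOCAL_WRITE |
                 IBV_ACCESS_REMOTE_READ |
                 IBV_ACCESS_REMOTE_WRITE |
                 IBV_ACCESS_ON_DEMAND;        /* request ODP instead of pinning */

    struct ibv_mr *mr = ibv_reg_mr(pd, buf, len, access);
    if (!mr)
        fprintf(stderr, "ODP registration failed (HCA or driver may lack ODP support)\n");
    return mr;
}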

Page 24: The MVAPICH2 Project: Latest Developments and Plans ...

UPC++ Support in MVAPICH2-X

(Figure: an MPI + UPC++ application running over the UPC++ runtime and GASNet interfaces with a network conduit (MPI) and MPI runtime, versus running directly over the MVAPICH2-X Unified Communication Runtime (UCR) through MPI and UPC++ interfaces)

• Full and native support for hybrid MPI + UPC++ applications

• Better performance compared to IBV and MPI conduits

• OSU Micro-benchmarks (OMB) support for UPC++

• Available since MVAPICH2-X (2.2rc1)

(Chart: Inter-node Broadcast time (us) vs. message size on 64 nodes (1 process per node), comparing GASNet_MPI, GASNET_IBV, and MV2-X; MV2-X is up to 14x faster)

Page 25: The MVAPICH2 Project: Latest Developments and Plans ...


Hybrid (MPI+PGAS) Programming

• Application sub-kernels can be re-written in MPI/PGAS based on communication characteristics

• Benefits:

– Best of Distributed Computing Model

– Best of Shared Memory Computing Model

(Figure: an HPC application composed of kernels 1..N, with most kernels written in MPI and selected kernels, e.g. Kernel 2 and Kernel N, re-written in PGAS)
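A hedged sketch of what such a hybrid kernel can look like when MPI and OpenSHMEM share a unified runtime such as MVAPICH2-X (OpenSHMEM 1.x calls; the initialization order and the neighbor-PE arithmetic are illustrative assumptions, not prescribed by the slide):

#include <mpi.h>
#include <shmem.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);      /* one common ordering; a unified runtime backs both models */
    shmem_init();

    int me   = shmem_my_pe();
    int npes = shmem_n_pes();
    static int counter = 0;      /* symmetric (static) variable, remotely accessible */

    /* PGAS-style one-sided update of the next PE's counter */
    shmem_int_add(&counter, 1, (me + 1) % npes);
    shmem_barrier_all();

    /* MPI-style collective over the same set of processes */
    int total = 0;
    MPI_Allreduce(&counter, &total, 1, MPI_INT, MPI_SUM, MPI_COMM_WORLD);
    if (me == 0)
        printf("sum of counters across %d PEs = %d\n", npes, total);

    shmem_finalize();
    MPI_Finalize();
    return 0;
}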

Page 26: The MVAPICH2 Project: Latest Developments and Plans ...

Application Level Performance with Graph500 and Sort

Graph500 Execution Time

J. Jose, S. Potluri, K. Tomko and D. K. Panda, Designing Scalable Graph500 Benchmark with Hybrid MPI+OpenSHMEM Programming Models,

International Supercomputing Conference (ISC’13), June 2013

J. Jose, K. Kandalla, M. Luo and D. K. Panda, Supporting Hybrid MPI and OpenSHMEM over InfiniBand: Design and Performance Evaluation,

Int'l Conference on Parallel Processing (ICPP '12), September 2012

(Chart: Graph500 execution time (s) vs. number of processes (4K, 8K, 16K) for MPI-Simple, MPI-CSC, MPI-CSR, and Hybrid (MPI+OpenSHMEM); the hybrid design is up to 13X faster)

• Performance of Hybrid (MPI+OpenSHMEM) Graph500 Design

• 8,192 processes
- 2.4X improvement over MPI-CSR
- 7.6X improvement over MPI-Simple

• 16,384 processes
- 1.5X improvement over MPI-CSR
- 13X improvement over MPI-Simple

J. Jose, K. Kandalla, S. Potluri, J. Zhang and D. K. Panda, Optimizing Collective Communication in OpenSHMEM, Int'l Conference on Partitioned

Global Address Space Programming Models (PGAS '13), October 2013.

Sort Execution Time

(Chart: Sort execution time (seconds) vs. input data size and number of processes (500GB-512, 1TB-1K, 2TB-2K, 4TB-4K) for MPI and Hybrid; the hybrid design is up to 51% faster)

• Performance of Hybrid (MPI+OpenSHMEM) Sort Application

• 4,096 processes, 4 TB input size
- MPI – 2408 sec; 0.16 TB/min
- Hybrid – 1172 sec; 0.36 TB/min
- 51% improvement over the MPI design

Page 27: The MVAPICH2 Project: Latest Developments and Plans ...

MVAPICH2 Distributions

• MVAPICH2
– Basic MPI support for IB, iWARP and RoCE
– Basic GPU (CUDA-aware MPI) support

• MVAPICH2-X
– MPI, PGAS and Hybrid MPI+PGAS support for IB

• MVAPICH2-Virt
– Optimized for HPC Clouds with IB and SR-IOV virtualization
– With and without OpenStack

• MVAPICH2-EA
– Energy-efficient support for point-to-point and collective operations
– Compatible with OSU Energy Monitoring Tool (OEMT-0.8)

• OSU Micro-Benchmarks (OMB)
– MPI (including CUDA-aware MPI), OpenSHMEM and UPC

• OSU INAM
– InfiniBand Network Analysis and Monitoring Tool

• MVAPICH2-MIC (KNL)
– Optimized for IB clusters with Intel Xeon Phis

• MVAPICH2-GDR and Deep Learning (will be presented on Thursday at 11:00am)

Page 28: The MVAPICH2 Project: Latest Developments and Plans ...

Can HPC and Virtualization be Combined?

• Virtualization has many benefits

– Fault-tolerance

– Job migration

– Compaction

• Virtualization has not been very popular in HPC due to the overhead associated with it

• New SR-IOV (Single Root – IO Virtualization) support available with Mellanox InfiniBand adapters changes the field

• Enhanced MVAPICH2 support for SR-IOV

• MVAPICH2-Virt 2.1 (with and without OpenStack) is publicly available

J. Zhang, X. Lu, J. Jose, R. Shi and D. K. Panda, Can Inter-VM Shmem Benefit MPI Applications on SR-IOV based Virtualized InfiniBand

Clusters? EuroPar'14

J. Zhang, X. Lu, J. Jose, M. Li, R. Shi and D. K. Panda, High Performance MPI Library over SR-IOV enabled InfiniBand Clusters, HiPC '14

J. Zhang, X. Lu, M. Arnold and D. K. Panda, MVAPICH2 Over OpenStack with SR-IOV: an Efficient Approach to build HPC Clouds, CCGrid '15

Page 29: The MVAPICH2 Project: Latest Developments and Plans ...


NSF Chameleon Cloud: A Powerful and Flexible Experimental Instrument

• Large-scale instrument
– Targeting Big Data, Big Compute, Big Instrument research
– ~650 nodes (~14,500 cores), 5 PB disk over two sites, 2 sites connected with a 100G network

• Reconfigurable instrument
– Bare metal reconfiguration, operated as a single instrument, graduated approach for ease-of-use

• Connected instrument
– Workload and Trace Archive
– Partnerships with production clouds: CERN, OSDC, Rackspace, Google, and others
– Partnerships with users

• Complementary instrument
– Complementing GENI, Grid'5000, and other testbeds

• Sustainable instrument
– Industry connections

http://www.chameleoncloud.org/

Page 30: The MVAPICH2 Project: Latest Developments and Plans ...

Overview of MVAPICH2-Virt with SR-IOV and IVSHMEM

• Redesign MVAPICH2 to make it virtual machine aware

– SR-IOV shows near-to-native performance for inter-node point-to-point communication

– IVSHMEM offers shared memory based data access across co-resident VMs

– Locality Detector: maintains the locality information of co-resident virtual machines

– Communication Coordinator: selects the communication channel (SR-IOV, IVSHMEM) adaptively

(Figure: host environment with two guest VMs; each guest runs an MPI process with a VF driver over an SR-IOV Virtual Function of the InfiniBand adapter, while the hypervisor holds the PF driver for the Physical Function; co-resident VMs additionally share an IVSHMEM region (/dev/shm/IV-SHM), giving each MPI process both an SR-IOV channel and an IV-Shmem channel)

J. Zhang, X. Lu, J. Jose, R. Shi, D. K. Panda. Can Inter-VM

Shmem Benefit MPI Applications on SR-IOV based

Virtualized InfiniBand Clusters? Euro-Par, 2014

J. Zhang, X. Lu, J. Jose, R. Shi, M. Li, D. K. Panda. High

Performance MPI Library over SR-IOV Enabled InfiniBand

Clusters. HiPC, 2014

Page 31: The MVAPICH2 Project: Latest Developments and Plans ...

MVAPICH2-Virt with SR-IOV and IVSHMEM over OpenStack

• OpenStack is one of the most popular open-source solutions to build clouds and manage virtual machines

• Deployment with OpenStack

– Supporting SR-IOV configuration

– Supporting IVSHMEM configuration

– Virtual Machine aware design of MVAPICH2 with SR-IOV

• An efficient approach to build HPC Clouds with MVAPICH2-Virt and OpenStack

J. Zhang, X. Lu, M. Arnold, D. K. Panda. MVAPICH2 over OpenStack with SR-IOV: An Efficient Approach to Build

HPC Clouds. CCGrid, 2015

Page 32: The MVAPICH2 Project: Latest Developments and Plans ...

Application-Level Performance on Chameleon

(Charts: SPEC MPI2007 execution time (s) for milc, leslie3d, pop2, GAPgeofem, zeusmp2, and lu, and Graph500 execution time (ms) for problem sizes (Scale, Edgefactor) from 22,20 to 26,16, comparing MV2-SR-IOV-Def, MV2-SR-IOV-Opt, and MV2-Native)

• 32 VMs, 6 cores/VM

• Compared to Native, 2-5% overhead for Graph500 with 128 Procs

• Compared to Native, 1-9.5% overhead for SPEC MPI2007 with 128 Procs

Page 33: The MVAPICH2 Project: Latest Developments and Plans ...

Containers Support: Application-Level Performance on Chameleon

(Charts: Graph 500 execution time (ms) for problem sizes (Scale, Edgefactor) from 22,16 to 26,20, and NAS execution time (s) for MG.D, FT.D, EP.D, LU.D, and CG.D, comparing Container-Def, Container-Opt, and Native)

• 64 Containers across 16 nodes, pinning 4 cores per Container

• Compared to Container-Def, up to 11% and 16% execution time reduction for NAS and Graph 500

• Compared to Native, less than 9% and 4% overhead for NAS and Graph 500

• Optimized Container support will be available with the upcoming release of MVAPICH2-Virt

Page 34: The MVAPICH2 Project: Latest Developments and Plans ...


MPI Complex Appliances based on MVAPICH2 on Chameleon

• Through these available appliances, users and researchers can easily deploy HPC clouds to perform experiments and run jobs with

– High-Performance SR-IOV + InfiniBand

– High-Performance MVAPICH2 Library with Virtualization Support

– High-Performance Hadoop with RDMA-based Enhancements Support

Page 35: The MVAPICH2 Project: Latest Developments and Plans ...

MVAPICH2 Distributions

• MVAPICH2
– Basic MPI support for IB, iWARP and RoCE
– Basic GPU (CUDA-aware MPI) support

• MVAPICH2-X
– MPI, PGAS and Hybrid MPI+PGAS support for IB

• MVAPICH2-Virt
– Optimized for HPC Clouds with IB and SR-IOV virtualization
– With and without OpenStack

• MVAPICH2-EA
– Energy-efficient support for point-to-point and collective operations
– Compatible with OSU Energy Monitoring Tool (OEMT-0.8)

• OSU Micro-Benchmarks (OMB)
– MPI (including CUDA-aware MPI), OpenSHMEM and UPC

• OSU INAM
– InfiniBand Network Analysis and Monitoring Tool

• MVAPICH2-MIC (KNL)
– Optimized for IB clusters with Intel Xeon Phis

• MVAPICH2-GDR and Deep Learning (will be presented on Thursday at 11:00am)

Page 36: The MVAPICH2 Project: Latest Developments and Plans ...


Designing Energy-Aware (EA) MPI Runtime

(Figure: overall application energy expenditure splits into energy spent in communication routines (point-to-point routines, collective routines, RMA routines) and energy spent in computation routines; MVAPICH2-EA designs target MPI two-sided and collectives (e.g. MVAPICH2) and impact MPI-3 RMA implementations (e.g. MVAPICH2), one-sided runtimes (e.g. ComEx), and other PGAS implementations (e.g. OSHMPI))

Page 37: The MVAPICH2 Project: Latest Developments and Plans ...

MVAPICH2-EA: Application Oblivious Energy-Aware-MPI (EAM)

• An energy efficient runtime that provides energy savings without application knowledge

• Uses the best energy lever automatically and transparently

• Provides guarantees on maximum degradation, with 5-41% savings at <= 5% degradation

• Pessimistic MPI applies an energy reduction lever to each MPI call

• Released (together with OEMT) since Aug '15

A Case for Application-Oblivious Energy-Efficient MPI Runtime. A. Venkatesh, A. Vishnu, K. Hamidouche, N. Tallent, D. K. Panda, D. Kerbyson, and A. Hoise, Supercomputing '15, Nov 2015 [Best Student Paper Finalist]

Page 38: The MVAPICH2 Project: Latest Developments and Plans ...


MPI-3 RMA Energy Savings with Proxy-Applications

(Charts: Graph500 execution time (seconds) and energy usage (Joules) at 128, 256, and 512 processes, comparing optimistic, pessimistic, and EAM-RMA; EAM-RMA saves up to 46% energy)

• MPI_Win_fence dominates application execution time in Graph500

• Between 128 and 512 processes, EAM-RMA yields between 31% and 46% savings with no degradation in execution time in comparison with the default optimistic MPI runtime

• Solutions to be available in future releases of MVAPICH2-EA

Page 39: The MVAPICH2 Project: Latest Developments and Plans ...

MVAPICH2 Distributions

• MVAPICH2
– Basic MPI support for IB, iWARP and RoCE
– Basic GPU (CUDA-aware MPI) support

• MVAPICH2-X
– MPI, PGAS and Hybrid MPI+PGAS support for IB

• MVAPICH2-Virt
– Optimized for HPC Clouds with IB and SR-IOV virtualization
– With and without OpenStack

• MVAPICH2-EA
– Energy-efficient support for point-to-point and collective operations
– Compatible with OSU Energy Monitoring Tool (OEMT-0.8)

• OSU Micro-Benchmarks (OMB)
– MPI (including CUDA-aware MPI), OpenSHMEM and UPC

• OSU INAM
– InfiniBand Network Analysis and Monitoring Tool

• MVAPICH2-MIC (KNL)
– Optimized for IB clusters with Intel Xeon Phis

• MVAPICH2-GDR and Deep Learning (will be presented on Thursday at 11:00am)

Page 40: The MVAPICH2 Project: Latest Developments and Plans ...

OSU Microbenchmarks

• Available since 2004

• Suite of microbenchmarks to study communication performance of various programming models

• Benchmarks available for the following programming models
– Message Passing Interface (MPI)
– Partitioned Global Address Space (PGAS)
• Unified Parallel C (UPC)
• Unified Parallel C++ (UPC++)
• OpenSHMEM

• Benchmarks available for multiple accelerator based architectures
– Compute Unified Device Architecture (CUDA)
– OpenACC Application Program Interface

• Part of various national resource procurement suites like NERSC-8 / Trinity Benchmarks

• Please visit the following link for more information
– http://mvapich.cse.ohio-state.edu/benchmarks/
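The point-to-point latency results shown earlier come from benchmarks of this kind; a stripped-down ping-pong in the spirit of osu_latency (a sketch only, not the actual OMB source) is:

#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

#define ITERS 1000

int main(int argc, char **argv)
{
    int rank, size = 8;                   /* message size in bytes (illustrative) */
    char *buf;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    buf = calloc(size, 1);

    MPI_Barrier(MPI_COMM_WORLD);
    double t0 = MPI_Wtime();
    for (int i = 0; i < ITERS; i++) {
        if (rank == 0) {                  /* ping-pong between ranks 0 and 1 */
            MPI_Send(buf, size, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(buf, size, MPI_CHAR, 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        } else if (rank == 1) {
            MPI_Recv(buf, size, MPI_CHAR, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            MPI_Send(buf, size, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
        }
    }
    double t1 = MPI_Wtime();

    if (rank == 0)                        /* one-way latency is half the average round trip */
        printf("%d bytes: %.2f us\n", size, (t1 - t0) * 1e6 / (2.0 * ITERS));

    free(buf);
    MPI_Finalize();
    return 0;
}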

Page 41: The MVAPICH2 Project: Latest Developments and Plans ...

MVAPICH2 Distributions

• MVAPICH2
– Basic MPI support for IB, iWARP and RoCE
– Basic GPU (CUDA-aware MPI) support

• MVAPICH2-X
– MPI, PGAS and Hybrid MPI+PGAS support for IB

• MVAPICH2-Virt
– Optimized for HPC Clouds with IB and SR-IOV virtualization
– With and without OpenStack

• MVAPICH2-EA
– Energy-efficient support for point-to-point and collective operations
– Compatible with OSU Energy Monitoring Tool (OEMT-0.8)

• OSU Micro-Benchmarks (OMB)
– MPI (including CUDA-aware MPI), OpenSHMEM and UPC

• OSU INAM
– InfiniBand Network Analysis and Monitoring Tool

• MVAPICH2-MIC (KNL)
– Optimized for IB clusters with Intel Xeon Phis

• MVAPICH2-GDR and Deep Learning (will be presented on Thursday at 11:00am)

Page 42: The MVAPICH2 Project: Latest Developments and Plans ...

Overview of OSU INAM

• A network monitoring and analysis tool that is capable of analyzing traffic on the InfiniBand network with inputs from the MPI runtime
– http://mvapich.cse.ohio-state.edu/tools/osu-inam/

• Monitors IB clusters in real time by querying various subnet management entities and gathering input from the MPI runtimes

• OSU INAM v0.9.1 released on 05/13/2016

• Significant enhancements to the user interface to enable scaling to clusters with thousands of nodes

• Improved database insert times by using 'bulk inserts'

• Capability to look up the list of nodes communicating through a network link

• Capability to classify data flowing over a network link at job level and process level granularity in conjunction with MVAPICH2-X 2.2rc1

• "Best practices" guidelines for deploying OSU INAM on different clusters

• Capability to analyze and profile node-level, job-level and process-level activities for MPI communication
– Point-to-Point, Collectives and RMA

• Ability to filter data based on type of counters using a "drop down" list

• Remotely monitor various metrics of MPI processes at user-specified granularity

• "Job Page" to display jobs in ascending/descending order of various performance metrics in conjunction with MVAPICH2-X

• Visualize the data transfer happening in a "live" or "historical" fashion for the entire network, a job, or a set of nodes

Page 43: The MVAPICH2 Project: Latest Developments and Plans ...


OSU INAM Features

• Show network topology of large clusters

• Visualize traffic pattern on different links

• Quickly identify congested links/links in error state

• See the history unfold – play back historical state of the network

(Screenshots: Comet@SDSC clustered view (1,879 nodes, 212 switches, 4,377 network links); Finding Routes Between Nodes)

Page 44: The MVAPICH2 Project: Latest Developments and Plans ...


OSU INAM Features (Cont.)

(Screenshot: Visualizing a Job (5 Nodes))

• Job level view
– Show different network metrics (load, error, etc.) for any live job
– Play back historical data for completed jobs to identify bottlenecks

• Node level view - details per process or per node
– CPU utilization for each rank/node
– Bytes sent/received for MPI operations (pt-to-pt, collective, RMA)
– Network metrics (e.g. XmitDiscard, RcvError) per rank/node

(Screenshot: Estimated Process Level Link Utilization)

• Estimated Link Utilization view
– Classify data flowing over a network link at different granularity (job level and process level) in conjunction with MVAPICH2-X 2.2rc1

Page 45: The MVAPICH2 Project: Latest Developments and Plans ...

MVAPICH2 Distributions

• MVAPICH2
– Basic MPI support for IB, iWARP and RoCE
– Basic GPU (CUDA-aware MPI) support

• MVAPICH2-X
– MPI, PGAS and Hybrid MPI+PGAS support for IB

• MVAPICH2-Virt
– Optimized for HPC Clouds with IB and SR-IOV virtualization
– With and without OpenStack

• MVAPICH2-EA
– Energy-efficient support for point-to-point and collective operations
– Compatible with OSU Energy Monitoring Tool (OEMT-0.8)

• OSU Micro-Benchmarks (OMB)
– MPI (including CUDA-aware MPI), OpenSHMEM and UPC

• OSU INAM
– InfiniBand Network Analysis and Monitoring Tool

• MVAPICH2-MIC (KNL)
– Optimized for IB clusters with Intel Xeon Phis

• MVAPICH2-GDR and Deep Learning (will be presented on Thursday at 11:00am)

Page 46: The MVAPICH2 Project: Latest Developments and Plans ...


MVAPICH2-MIC Design for Clusters with IB and MIC

• Offload Mode

• Intranode Communication

• Coprocessor-only Mode

• Symmetric Mode

• Internode Communication

• Coprocessors-only

• Symmetric Mode

• Multi-MIC Node Configurations

Page 47: The MVAPICH2 Project: Latest Developments and Plans ...


MVAPICH2 for Omni-Path and KNL

• MVAPICH2 has been supporting QLogic/PSM for many years with all different compute platforms

• Latest version supports

• Omni-Path (derivative of QLogic IB)

• Xeon family with Omni-Path

• KNL with Omni-Path

Page 48: The MVAPICH2 Project: Latest Developments and Plans ...


• On-load approach

– Takes advantage of the idle cores

– Dynamically configurable

– Takes advantage of highly multithreaded cores

– Takes advantage of MCDRAM of KNL processors

• Applicable to other programming models such as PGAS, Task-based, etc.

• Provides portability, performance, and applicability to runtime as well as

applications in a transparent manner

Enhanced Designs for KNL: MVAPICH2 Approach

Page 49: The MVAPICH2 Project: Latest Developments and Plans ...


Performance Benefits of the Enhanced Designs

• New designs to exploit high concurrency and MCDRAM of KNL

• Significant improvements for large message sizes

• Benefits seen in varying message size as well as varying MPI processes

(Charts: Intra-node Broadcast with 64MB message, latency (us) at 4, 8, and 16 processes; 16-process Intra-node All-to-All, latency (us) for 1M to 32M bytes; Very Large Message Bi-directional Bandwidth (MB/s) for 2M to 64M bytes; each comparing MVAPICH2 with MVAPICH2-Optimized, with reported improvements including 27%, 17.2%, and 52%)

Page 50: The MVAPICH2 Project: Latest Developments and Plans ...


Performance Benefits of the Enhanced Designs

(Charts: Multi-Bandwidth using 32 MPI processes, Bandwidth (MB/s) vs. Message Size (1M to 64M bytes) for MV2_Def_DRAM, MV2_Def_MCDRAM, MV2_Opt_DRAM, and MV2_Opt_MCDRAM, up to 30% improvement; CNTK MLP Training Time using MNIST (BS:64), Time (s) for MPI Processes : OMP Threads configurations 4:268, 4:204, and 4:64, MV2_Def_DRAM vs. MV2_Opt_DRAM, up to 15% improvement)

• Benefits observed on training time of Multi-level Perceptron (MLP) model on MNIST dataset using CNTK Deep Learning Framework

Enhanced Designs will be available in upcoming MVAPICH2 releases

Page 51: The MVAPICH2 Project: Latest Developments and Plans ...

Applications-Level Tuning: Compilation of Best Practices

• MPI runtime has many parameters

• Tuning a set of parameters can help you to extract higher performance

• Compiled a list of such contributions through the MVAPICH Website
– http://mvapich.cse.ohio-state.edu/best_practices/

• Initial list of applications
– Amber
– HoomDBlue
– HPCG
– Lulesh
– MILC
– Neuron
– SMG2000

• Soliciting additional contributions; send your results to mvapich-help at cse.ohio-state.edu

• We will link these results with credits to you

Page 52: The MVAPICH2 Project: Latest Developments and Plans ...

MVAPICH2 – Plans for Exascale

• Performance and Memory scalability toward 1M cores

• Hybrid programming (MPI + OpenSHMEM, MPI + UPC, MPI + CAF …)
– MPI + Task*

• Enhanced Optimization for GPU Support and Accelerators

• Taking advantage of advanced features of Mellanox InfiniBand
– Switch-IB2 SHArP*
– GID-based support*

• Enhanced communication schemes for upcoming architectures
– Knights Landing with MCDRAM*
– NVLINK*
– CAPI*

• Extended topology-aware collectives

• Extended Energy-aware designs and Virtualization Support

• Extended Support for MPI Tools Interface (as in MPI 3.1)

• Extended Checkpoint-Restart and migration support with SCR

• Support for * features will be available in future MVAPICH2 Releases

Page 53: The MVAPICH2 Project: Latest Developments and Plans ...


Two More Presentations

• Wednesday (11/16/16) at 2:30pm

High Performance Big Data (HiBD): Accelerating Hadoop, Spark and Memcached on Modern Clusters

• Thursday (11/17/16) at 11:00am

MVAPICH2-GDR: Pushing the Frontier of HPC and Deep Learning

Page 54: The MVAPICH2 Project: Latest Developments and Plans ...


[email protected]

Thank You!

The High-Performance Big Data Project: http://hibd.cse.ohio-state.edu/

Network-Based Computing Laboratory: http://nowlab.cse.ohio-state.edu/

The MVAPICH2 Project: http://mvapich.cse.ohio-state.edu/