Transcript of: Programming Models for Exascale Systems

Page 1

Programming Models for Exascale Systems

Dhabaleswar K. (DK) Panda

The Ohio State University

E-mail: [email protected]

http://www.cse.ohio-state.edu/~panda

Talk at HPC Advisory Council Stanford Conference and Exascale Workshop (2014)

Page 2

• Growth of High Performance Computing (HPC)

– Growth in processor performance

• Chip density doubles every 18 months

– Growth in commodity networking

• Increase in speed/features + reducing cost

HPC Advisory Council Stanford Conference, Feb '14

Current and Next Generation HPC Systems and Applications

2

Page 3

• Scientific Computing

– Message Passing Interface (MPI), including MPI + OpenMP, is the dominant programming model

– Many discussions towards Partitioned Global Address Space (PGAS)

• UPC, OpenSHMEM, CAF, etc.

– Hybrid Programming: MPI + PGAS (OpenSHMEM, UPC)

• Big Data/Enterprise/Commercial Computing

– Focuses on large data and data analysis

– Hadoop (HDFS, HBase, MapReduce) environment is gaining a lot of momentum

– Memcached is also used for Web 2.0

3

Two Major Categories of Applications

HPC Advisory Council Stanford Conference, Feb '14

Page 4

High-End Computing (HEC): PetaFlop to ExaFlop

HPC Advisory Council Stanford Conference, Feb '14 4

Expected to have an ExaFlop system in 2019-2022!

[Chart: projected performance growth, with 100 PFlops in 2015 and 1 EFlops in 2018]

Page 5

HPC Advisory Council Stanford Conference, Feb '14 5

Trends for Commodity Computing Clusters in the Top 500 List (http://www.top500.org)

[Chart: number of clusters and percentage of clusters in the Top500 list over time (axes: Timeline; Number of Clusters; Percentage of Clusters)]

Page 6

Drivers of Modern HPC Cluster Architectures

• Multi-core processors are ubiquitous

• InfiniBand very popular in HPC clusters

• Accelerators/Coprocessors becoming common in high-end systems

• Pushing the envelope for Exascale computing

Accelerators / Coprocessors: high compute density, high performance/watt; >1 TFlop DP on a chip

High Performance Interconnects - InfiniBand: <1 usec latency, >100 Gbps bandwidth

Example systems: Tianhe-2 (#1), Titan (#2), Stampede (#6), Tianhe-1A (#10)

6

Multi-core Processors

HPC Advisory Council Stanford Conference, Feb '14

Page 7

• 207 IB Clusters (41%) in the November 2013 Top500 list (http://www.top500.org)

• Installations in the Top 40 (19 systems):

HPC Advisory Council Stanford Conference, Feb '14

Large-scale InfiniBand Installations

462,462 cores (Stampede) at TACC (7th)
147,456 cores (SuperMUC) in Germany (10th)
74,358 cores (Tsubame 2.5) at Japan/GSIC (11th)
194,616 cores (Cascade) at PNNL (13th)
110,400 cores (Pangea) at France/Total (14th)
96,192 cores (Pleiades) at NASA/Ames (16th)
73,584 cores (Spirit) at USA/Air Force (18th)
77,184 cores (Curie thin nodes) at France/CEA (20th)
120,640 cores (Nebulae) at China/NSCS (21st)
72,288 cores (Yellowstone) at NCAR (22nd)
70,560 cores (Helios) at Japan/IFERC (24th)
138,368 cores (Tera-100) at France/CEA (29th)
60,000 cores (iDataPlex DX360M4) at Germany/Max-Planck (31st)
53,504 cores (PRIMERGY) at Australia/NCI (32nd)
77,520 cores (Conte) at Purdue University (33rd)
48,896 cores (MareNostrum) at Spain/BSC (34th)
222,072 cores (PRIMERGY) at Japan/Kyushu (36th)
78,660 cores (Lomonosov) in Russia (37th)
137,200 cores (Sunway Blue Light) in China (40th)
and many more!

7

Page 8

HPC Advisory Council Stanford Conference, Feb '14 8

Parallel Programming Models Overview

[Diagram: three abstract machine models]
Shared Memory Model (SHMEM, DSM): processes P1, P2, P3 share one memory
Distributed Memory Model (MPI, Message Passing Interface): each process has its own memory
Partitioned Global Address Space (PGAS) (Global Arrays, UPC, Chapel, X10, CAF, …): per-process memories form a logical shared memory

• Programming models provide abstract machine models

• Models can be mapped on different types of systems

– e.g. Distributed Shared Memory (DSM), MPI within a node, etc.

• In this presentation, we concentrate on MPI first, then on PGAS and Hybrid MPI+PGAS

Page 9

• Message Passing Library standardized by MPI Forum

– C and Fortran

• Goal: portable, efficient and flexible standard for writing parallel applications

• Not an IEEE or ISO standard, but widely considered the "industry standard" for HPC applications

• Evolution of MPI

– MPI-1: 1994

– MPI-2: 1996

– MPI-3.0: 2008 – 2012, standardized before SC ‘12

– Next plans for MPI 3.1, 3.2, ….

9

MPI Overview and History

HPC Advisory Council Stanford Conference, Feb '14

Page 10

• Point-to-point Two-sided Communication

• Collective Communication

• One-sided Communication

• Job Startup

• Parallel I/O

Major MPI Features

10 HPC Advisory Council Stanford Conference, Feb '14

Page 11

Exascale System Targets

Systems                      2010        2018                                               Difference 2010 & 2018
System peak                  2 PFlop/s   1 EFlop/s                                          O(1,000)
Power                        6 MW        ~20 MW (goal)
System memory                0.3 PB      32 – 64 PB                                         O(100)
Node performance             125 GF      1.2 or 15 TF                                       O(10) – O(100)
Node memory BW               25 GB/s     2 – 4 TB/s                                         O(100)
Node concurrency             12          O(1k) or O(10k)                                    O(100) – O(1,000)
Total node interconnect BW   3.5 GB/s    200 – 400 GB/s (1:4 or 1:8 from memory BW)         O(100)
System size (nodes)          18,700      O(100,000) or O(1M)                                O(10) – O(100)
Total concurrency            225,000     O(billion) + [O(10) to O(100) for latency hiding]  O(10,000)
Storage capacity             15 PB       500 – 1000 PB (>10x system memory is min)          O(10) – O(100)
IO Rates                     0.2 TB/s    60 TB/s                                            O(100)
MTTI                         Days        O(1 day)                                           -O(10)

Courtesy: DOE Exascale Study and Prof. Jack Dongarra

11 HPC Advisory Council Stanford Conference, Feb '14

Page 12

• DARPA Exascale Report – Peter Kogge, Editor and Lead

• Energy and Power Challenge

– Hard to solve power requirements for data movement

• Memory and Storage Challenge

– Hard to achieve high capacity and high data rate

• Concurrency and Locality Challenge

– Management of a very large amount of concurrency (billions of threads)

• Resiliency Challenge

– Low voltage devices (for low power) introduce more faults

HPC Advisory Council Stanford Conference, Feb '14 12

Basic Design Challenges for Exascale Systems

Page 13

• Power required for data movement operations is one of the main challenges

• Non-blocking collectives

– Overlap computation and communication

• Much improved One-sided interface

– Reduce synchronization of sender/receiver

• Manage concurrency

– Improved interoperability with PGAS (e.g. UPC, Global Arrays, OpenSHMEM)

• Resiliency

– New interface for detecting failures

HPC Advisory Council Stanford Conference, Feb '14 13

How does MPI Plan to Meet Exascale Challenges?

Page 14

• Major features

– Non-blocking Collectives

– Improved One-Sided (RMA) Model

– MPI Tools Interface

• Specification is available from: http://www.mpi-forum.org/docs/mpi-3.0/mpi30-report.pdf

HPC Advisory Council Stanford Conference, Feb '14 14

Major New Features in MPI-3

Page 15

• Enables overlap of computation with communication

• Removes synchronization effects of collective operations (with the exception of barrier)

• Non-blocking calls do not match blocking collective calls

– MPI implementation may use different algorithms for blocking and non-blocking collectives

– Blocking collectives: optimized for latency

– Non-blocking collectives: optimized for overlap

• User must call collectives in same order on all ranks

• Progress rules are same as those for point-to-point

• Example new calls: MPI_Ibarrier, MPI_Iallreduce, …
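As a minimal illustration of the calls listed above, the sketch below overlaps an MPI_Iallreduce with independent computation. It is a generic MPI-3 example, not taken from the slides, and the compute phase is left as a placeholder comment.

    #include <mpi.h>

    /* Minimal sketch: overlap an MPI-3 non-blocking allreduce with
     * independent local computation.                                */
    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);

        double local_sum = 1.0, global_sum = 0.0;
        MPI_Request req;

        /* Start the collective; it can progress while we keep computing. */
        MPI_Iallreduce(&local_sum, &global_sum, 1, MPI_DOUBLE, MPI_SUM,
                       MPI_COMM_WORLD, &req);

        /* ... independent computation that does not need global_sum ... */

        /* Complete the collective before using the result. */
        MPI_Wait(&req, MPI_STATUS_IGNORE);

        MPI_Finalize();
        return 0;
    }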

HPC Advisory Council Stanford Conference, Feb '14 15

Non-blocking Collective Operations

Page 16

HPC Advisory Council Stanford Conference, Feb '14 16

Improved One-sided (RMA) Model

[Diagram: four processes (0-3), each exposing part of its memory through an RMA window; a public and a private copy of the window are kept consistent via synchronization, with incoming RMA operations targeting the public window and local memory operations targeting the private window]

• New RMA proposal has major improvements

• Easy to express irregular communication pattern

• Better overlap of communication & computation

• MPI-2: public and private windows

– Synchronization of windows explicit

• MPI-2: works for non-cache coherent systems

• MPI-3: two types of windows

– Unified and Separate

– Unified window leverages hardware cache coherence
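To make the window types concrete, here is a hedged sketch (not from the slides) that allocates an MPI-3 window and performs a one-sided put with passive-target synchronization; on cache-coherent systems an implementation may provide the unified window model for such windows.

    #include <mpi.h>

    /* Minimal MPI-3 RMA sketch: allocate a window and put one value
     * into the right neighbor's window using passive-target sync.    */
    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);

        int rank, size;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        int *buf;
        MPI_Win win;
        /* MPI_Win_allocate lets the library place the memory optimally. */
        MPI_Win_allocate(sizeof(int), sizeof(int), MPI_INFO_NULL,
                         MPI_COMM_WORLD, &buf, &win);
        *buf = -1;

        int target = (rank + 1) % size;
        MPI_Win_lock_all(0, win);             /* passive-target epoch      */
        MPI_Put(&rank, 1, MPI_INT, target, 0, 1, MPI_INT, win);
        MPI_Win_flush(target, win);           /* remotely complete the put */
        MPI_Win_unlock_all(win);

        MPI_Barrier(MPI_COMM_WORLD);          /* everyone done before free */
        MPI_Win_free(&win);
        MPI_Finalize();
        return 0;
    }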

Page 17

HPC Advisory Council Stanford Conference, Feb '14 17

MPI Tools Interface

• Extended tools support in MPI-3, beyond the PMPI interface

• Provide standardized interface (MPI_T) to access MPI internal information

• Configuration and control information

• Eager limit, buffer sizes, . . .

• Performance information

• Time spent in blocking, memory usage, . . .

• Debugging information

• Packet counters, thresholds, . . .

• External tools can build on top of this standard interface
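A hedged sketch of how a tool might query this interface is shown below; it simply enumerates the control variables (such as eager limits and buffer sizes) that an MPI library exposes through MPI_T, without assuming which variables a particular implementation provides.

    #include <mpi.h>
    #include <stdio.h>

    /* Minimal MPI_T sketch: list the names of all control variables
     * exposed by the MPI implementation.                              */
    int main(int argc, char **argv)
    {
        int provided, num_cvars;
        MPI_T_init_thread(MPI_THREAD_SINGLE, &provided);
        MPI_Init(&argc, &argv);

        MPI_T_cvar_get_num(&num_cvars);
        for (int i = 0; i < num_cvars; i++) {
            char name[256], desc[256];
            int name_len = sizeof(name), desc_len = sizeof(desc);
            int verbosity, bind, scope;
            MPI_Datatype dtype;
            MPI_T_enum enumtype;

            MPI_T_cvar_get_info(i, name, &name_len, &verbosity, &dtype,
                                &enumtype, desc, &desc_len, &bind, &scope);
            printf("cvar %d: %s\n", i, name);
        }

        MPI_Finalize();
        MPI_T_finalize();
        return 0;
    }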

Page 18

18

Partitioned Global Address Space (PGAS) Models

HPC Advisory Council Stanford Conference, Feb '14

• Key features

- Simple shared memory abstractions

- Light weight one-sided communication

- Easier to express irregular communication

• Different approaches to PGAS

- Languages

• Unified Parallel C (UPC)

• Co-Array Fortran (CAF)

• X10

- Libraries

• OpenSHMEM

• Global Arrays

• Chapel

Page 19

• UPC: a parallel extension to the C standard

• UPC Specifications and Standards:

– Introduction to UPC and Language Specification, 1999

– UPC Language Specifications, v1.0, Feb 2001

– UPC Language Specifications, v1.1.1, Sep 2004

– UPC Language Specifications, v1.2, 2005

– UPC Language Specifications, v1.3, In Progress - Draft Available

• UPC Consortium

– Academic Institutions: GWU, MTU, UCB, U. Florida, U. Houston, U. Maryland…

– Government Institutions: ARSC, IDA, LBNL, SNL, US DOE…

– Commercial Institutions: HP, Cray, Intrepid Technology, IBM, …

• Supported by several UPC compilers

– Vendor-based commercial UPC compilers: HP UPC, Cray UPC, SGI UPC

– Open-source UPC compilers: Berkeley UPC, GCC UPC, Michigan Tech MuPC

• Aims for: high performance, coding efficiency, irregular applications, …

HPC Advisory Council Stanford Conference, Feb '14 19

Compiler-based: Unified Parallel C

Page 20

OpenSHMEM

• SHMEM implementations – Cray SHMEM, SGI SHMEM, Quadrics SHMEM, HP SHMEM, GSHMEM

• Subtle differences in API across versions – example:

                 SGI SHMEM      Quadrics SHMEM   Cray SHMEM
Initialization   start_pes(0)   shmem_init       start_pes
Process ID       _my_pe         my_pe            shmem_my_pe

• Made applications codes non-portable

• OpenSHMEM is an effort to address this:

"A new, open specification to consolidate the various extant SHMEM versions into a widely accepted standard." – OpenSHMEM Specification v1.0

by University of Houston and Oak Ridge National Lab

SGI SHMEM is the baseline
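The hedged sketch below shows the OpenSHMEM flavor of these calls (a one-sided put into a symmetric variable); it assumes an OpenSHMEM 1.0-style library where start_pes(), _my_pe(), and _num_pes() are available.

    #include <shmem.h>
    #include <stdio.h>

    /* Symmetric variable: allocated at the same address on every PE. */
    static int dest = -1;

    int main(void)
    {
        start_pes(0);                      /* OpenSHMEM 1.0-style init     */
        int me   = _my_pe();
        int npes = _num_pes();

        int src = me;
        /* One-sided put: write my PE number into 'dest' on the next PE. */
        shmem_int_put(&dest, &src, 1, (me + 1) % npes);

        shmem_barrier_all();               /* make all puts visible        */
        printf("PE %d received %d\n", me, dest);
        return 0;
    }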

HPC Advisory Council Stanford Conference, Feb '14 20

Page 21

• Hierarchical architectures with multiple address spaces

• (MPI + PGAS) Model

– MPI across address spaces

– PGAS within an address space

• MPI is good at moving data between address spaces

• Within an address space, MPI can interoperate with other shared memory programming models

• Applications can have kernels with different communication patterns

• Can benefit from different models

• Re-writing complete applications can be a huge effort

• Port critical kernels to the desired model instead

HPC Advisory Council Stanford Conference, Feb '14 21

MPI+PGAS for Exascale Architectures and Applications

Page 22

Hybrid (MPI+PGAS) Programming

• Application sub-kernels can be re-written in MPI/PGAS based on communication characteristics

• Benefits:

– Best of Distributed Computing Model

– Best of Shared Memory Computing Model

• Exascale Roadmap*:

– "Hybrid Programming is a practical way to program exascale systems"

* The International Exascale Software Roadmap, Dongarra, J., Beckman, P. et al., Volume 25, Number 1, 2011, International Journal of High Performance Computer Applications, ISSN 1094-3420

[Diagram: an HPC application composed of Kernels 1…N, originally all MPI, with selected kernels (e.g. Kernel 2, Kernel N) re-written in PGAS]

HPC Advisory Council Stanford Conference, Feb '14 22

Page 23

Designing Software Libraries for Multi-Petaflop and Exaflop Systems: Challenges

Programming Models: MPI, PGAS (UPC, Global Arrays, OpenSHMEM), CUDA, OpenACC, Cilk, Hadoop, MapReduce, etc.

Application Kernels/Applications

Networking Technologies (InfiniBand, 40/100GigE, Aries, BlueGene)

Multi/Many-core Architectures

Point-to-point Communication (two-sided & one-sided)

Collective Communication

Synchronization & Locks

I/O & File Systems

Fault Tolerance

Communication Library or Runtime for Programming Models

23

Accelerators (NVIDIA and MIC)

Middleware

HPC Advisory Council Stanford Conference, Feb '14

Co-Design Opportunities and Challenges across Various Layers: Performance, Scalability, Fault-Resilience

Page 24

• Scalability for million to billion processors

– Support for highly-efficient inter-node and intra-node communication (both two-sided and one-sided)

– Extremely low memory footprint

• Balancing intra-node and inter-node communication for next generation multi-core (128-1024 cores/node)

– Multiple end-points per node

• Support for efficient multi-threading

• Support for GPGPUs and Accelerators

• Scalable Collective communication

– Offload

– Non-blocking

– Topology-aware

– Power-aware

• Fault-tolerance/resiliency

• QoS support for communication and I/O

• Support for Hybrid MPI+PGAS programming (MPI + OpenMP, MPI + UPC, MPI + OpenSHMEM, …)

Challenges in Designing (MPI+X) at Exascale

24 HPC Advisory Council Stanford Conference, Feb '14

Page 25

• High Performance open-source MPI Library for InfiniBand, 10GigE/iWARP, and RDMA over Converged Enhanced Ethernet (RoCE)

– MVAPICH (MPI-1), MVAPICH2 (MPI-2.2 and MPI-3.0), Available since 2002

– MVAPICH2-X (MPI + PGAS), Available since 2012

– Support for GPGPUs and MIC

– Used by more than 2,100 organizations (HPC Centers, Industry and Universities) in 71 countries

– More than 200,000 downloads from OSU site directly

– Empowering many TOP500 clusters

• 7th ranked 462,462-core cluster (Stampede) at TACC

• 11th ranked 74,358-core cluster (Tsubame 2.5) at Tokyo Institute of Technology

• 16th ranked 96,192-core cluster (Pleiades) at NASA and many others

– Available with software stacks of many IB, HSE, and server vendors including Linux Distros (RedHat and SuSE)

– http://mvapich.cse.ohio-state.edu

• Partner in the U.S. NSF-TACC Stampede System

MVAPICH2/MVAPICH2-X Software

25 HPC Advisory Council Stanford Conference, Feb '14

Page 26

• Scalability for million to billion processors

– Support for highly-efficient inter-node and intra-node communication (both two-sided and one-sided)

– Extremely low memory footprint

• Collective communication

– Multicore-aware and Hardware-multicast-based

– Offload and Non-blocking

– Topology-aware

– Power-aware

• Support for GPGPUs

• Support for Intel MICs

• Fault-tolerance

• Hybrid MPI+PGAS programming (MPI + OpenSHMEM, MPI + UPC, …) with Unified Runtime

Overview of A Few Challenges being Addressed by MVAPICH2/MVAPICH2-X for Exascale

26 HPC Advisory Council Stanford Conference, Feb '14

Page 27

HPC Advisory Council Stanford Conference, Feb '14 27

One-way Latency: MPI over IB with MVAPICH2

[Charts: Small Message Latency and Large Message Latency (us) vs. message size (bytes) for Qlogic-DDR, Qlogic-QDR, ConnectX-DDR, ConnectX2-PCIe2-QDR, ConnectX3-PCIe3-FDR, Sandy-ConnectIB-DualFDR, and Ivy-ConnectIB-DualFDR; small-message latencies range from about 0.99 to 1.82 us]

DDR, QDR - 2.4 GHz Quad-core (Westmere) Intel PCI Gen2 with IB switch; FDR - 2.6 GHz Octa-core (SandyBridge) Intel PCI Gen3 with IB switch

ConnectIB-Dual FDR - 2.6 GHz Octa-core (SandyBridge) Intel PCI Gen3 with IB switch; ConnectIB-Dual FDR - 2.8 GHz Deca-core (IvyBridge) Intel PCI Gen3 with IB switch

Page 28

HPC Advisory Council Stanford Conference, Feb '14 28

Bandwidth: MPI over IB with MVAPICH2

[Charts: Unidirectional Bandwidth (up to about 12,810 MBytes/sec) and Bidirectional Bandwidth (up to about 24,727 MBytes/sec) vs. message size (bytes) for Qlogic-DDR, Qlogic-QDR, ConnectX-DDR, ConnectX2-PCIe2-QDR, ConnectX3-PCIe3-FDR, Sandy-ConnectIB-DualFDR, and Ivy-ConnectIB-DualFDR]

DDR, QDR - 2.4 GHz Quad-core (Westmere) Intel PCI Gen2 with IB switch; FDR - 2.6 GHz Octa-core (SandyBridge) Intel PCI Gen3 with IB switch

ConnectIB-Dual FDR - 2.6 GHz Octa-core (SandyBridge) Intel PCI Gen3 with IB switch; ConnectIB-Dual FDR - 2.8 GHz Deca-core (IvyBridge) Intel PCI Gen3 with IB switch

Page 29

MVAPICH2 Two-Sided Intra-Node Performance (Shared memory and Kernel-based Zero-copy Support (LiMIC and CMA))

Latest MVAPICH2 2.0b, Intel Ivy-bridge

[Charts: latency (us) vs. message size for intra-socket (0.18 us) and inter-socket (0.45 us) communication; bandwidth (MB/s) vs. message size for inter-socket and intra-socket transfers using CMA, Shmem, and LiMIC, peaking at about 14,250 MB/s and 13,749 MB/s]

HPC Advisory Council Stanford Conference, Feb '14

Page 30

• Memory usage for 32K processes with 8-cores per node can be 54 MB/process (for connections)

• NAMD performance improves when there is frequent communication to many peers

HPC Advisory Council Stanford Conference, Feb '14

eXtended Reliable Connection (XRC) and Hybrid Mode: Memory Usage and Performance on NAMD (1024 cores)

30

• Both UD and RC/XRC have benefits

• Hybrid for the best of both

• Available since MVAPICH2 1.7 as integrated interface

• Runtime Parameters: RC - default;

• UD - MV2_USE_ONLY_UD=1

• Hybrid - MV2_HYBRID_ENABLE_THRESHOLD=1

M. Koop, J. Sridhar and D. K. Panda, "Scalable MPI Design over InfiniBand using eXtended Reliable Connection," Cluster '08

[Charts: memory per process (MB) vs. number of connections for MVAPICH2-RC and MVAPICH2-XRC; normalized NAMD time for the apoa1, er-gre, f1atpase, and jac datasets with MVAPICH2-RC and MVAPICH2-XRC; and time (us) vs. number of processes (128-1024) for UD, Hybrid, and RC, with annotated improvements of 26%, 40%, 30%, and 38%]

Page 31

• Scalability for million to billion processors

– Support for highly-efficient inter-node and intra-node communication (both two-sided and one-sided)

– Extremely low memory footprint

• Collective communication

– Multicore-aware and Hardware-multicast-based

– Offload and Non-blocking

– Topology-aware

– Power-aware

• Support for GPGPUs

• Support for Intel MICs

• Fault-tolerance

• Hybrid MPI+PGAS programming (MPI + OpenSHMEM, MPI + UPC, …) with Unified Runtime

Overview of A Few Challenges being Addressed by MVAPICH2/MVAPICH2-X for Exascale

31 HPC Advisory Council Stanford Conference, Feb '14

Page 32

HPC Advisory Council Stanford Conference, Feb '14 32

Hardware Multicast-aware MPI_Bcast on Stampede

[Charts: MPI_Bcast latency (us) vs. message size with Default vs. Multicast designs, for small messages (2-512 bytes) and large messages (2K-128K bytes) at 102,400 cores, plus 16-byte and 32-KByte message latency vs. number of nodes]

ConnectX-3-FDR (54 Gbps): 2.7 GHz Dual Octa-core (SandyBridge) Intel PCI Gen3 with Mellanox IB FDR switch

Page 33

Shared-memory Aware Collectives

[Charts: MPI_Reduce and MPI_Allreduce latency (us) vs. message size at 4096 cores, Original vs. Shared-memory designs; Barrier latency (us) vs. number of processes (128-1024), Pt2Pt Barrier vs. Shared-Mem Barrier]

Runtime parameters: MV2_USE_SHMEM_REDUCE=0/1, MV2_USE_SHMEM_ALLREDUCE=0/1

• MVAPICH2 Reduce/Allreduce with 4K cores on TACC Ranger (AMD Barcelona, SDR IB)

• MVAPICH2 Barrier with 1K Intel Westmere cores, QDR IB

MV2_USE_SHMEM_BARRIER=0/1

Page 34

[Charts: application run-time (s) vs. data size, and PCG run-time (s) vs. number of processes (64-512), for PCG-Default vs. Modified-PCG-Offload]

Application benefits with Non-Blocking Collectives based on CX-3 Collective Offload

34

Modified P3DFFT with Offload-Alltoall does up to 17% better than default version (128 Processes)

K. Kandalla, et al., High-Performance and Scalable Non-Blocking All-to-All with Collective Offload on InfiniBand Clusters: A Study with Parallel 3D FFT, ISC 2011

HPC Advisory Council Stanford Conference, Feb '14

[Chart: normalized HPL performance vs. HPL problem size (N) as % of total memory for HPL-Offload, HPL-1ring, and HPL-Host]

Modified HPL with Offload-Bcast does up to 4.5% better than default version (512 Processes)

Modified Pre-Conjugate Gradient Solver with Offload-Allreduce does up to 21.8% better than default version

K. Kandalla, et al., Designing Non-blocking Broadcast with Collective Offload on InfiniBand Clusters: A Case Study with HPL, HotI 2011

K. Kandalla, et. al., Designing Non-blocking Allreduce with Collective Offload on InfiniBand Clusters: A Case Study with Conjugate Gradient Solvers, IPDPS ’12


Can Network-Offload based Non-Blocking Neighborhood MPI Collectives Improve Communication Overheads of Irregular Graph Algorithms? K. Kandalla, A. Buluc, H. Subramoni, K. Tomko, J. Vienne, L. Oliker, and D. K. Panda, IWPAPS’ 12

Page 35

HPC Advisory Council Stanford Conference, Feb '14 35

Network-Topology-Aware Placement of Processes

• Can we design a highly scalable network topology detection service for IB?
• How do we design the MPI communication library in a network-topology-aware manner to efficiently leverage the topology information generated by our service?
• What are the potential benefits of using a network-topology-aware MPI library on the performance of parallel scientific applications?

[Charts: overall performance and split-up of physical communication for MILC on Ranger; performance for varying system sizes, Default vs. Topo-Aware for a 2048-core run (15% improvement)]

H. Subramoni, S. Potluri, K. Kandalla, B. Barth, J. Vienne, J. Keasler, K. Tomko, K. Schulz, A. Moody, and D. K. Panda, Design of a Scalable InfiniBand Topology Service to Enable Network-Topology-Aware Placement of Processes, SC'12. Best Paper and Best Student Paper Finalist

• Reduce network topology discovery time from O(N²hosts) to O(Nhosts)

• 15% improvement in MILC execution time @ 2048 cores

• 15% improvement in Hypre execution time @ 1024 cores

Page 36

36

Power and Energy Savings with Power-Aware Collectives

[Charts: performance and power comparison for MPI_Alltoall with 64 processes on 8 nodes; CPMD application performance and power savings (annotated 5% and 32%)]

K. Kandalla, E. P. Mancini, S. Sur and D. K. Panda, "Designing Power Aware Collective Communication Algorithms for InfiniBand Clusters", ICPP '10

HPC Advisory Council Stanford Conference, Feb '14

Page 37

• Scalability for million to billion processors

– Support for highly-efficient inter-node and intra-node communication (both two-sided and one-sided)

– Extremely low memory footprint

• Collective communication

– Multicore-aware and Hardware-multicast-based

– Offload and Non-blocking

– Topology-aware

– Power-aware

• Support for GPGPUs

• Support for Intel MICs

• Fault-tolerance

• Hybrid MPI+PGAS programming (MPI + OpenSHMEM, MPI + UPC, …) with Unified Runtime

Overview of A Few Challenges being Addressed by MVAPICH2/MVAPICH2-X for Exascale

37 HPC Advisory Council Stanford Conference, Feb '14

Page 38

• Collaboration between Mellanox and NVIDIA to converge on one memory registration technique

• Both devices register a common host buffer

– GPU copies data to this buffer, and the network adapter can directly read from this buffer (or vice-versa)

• Note that GPU-Direct does not allow you to bypass host memory

• Latest GPUDirect RDMA achieves this

HPC Advisory Council Stanford Conference, Feb '14 38

GPU-Direct

Page 39

[Diagram: GPU and NIC attached to the CPU over PCIe, connected through a switch]

Sample Code - Without MPI integration

At Sender:
    cudaMemcpy(sbuf, sdev, . . .);   /* stage GPU data into a host buffer */
    MPI_Send(sbuf, size, . . .);     /* send the host buffer              */

At Receiver:
    MPI_Recv(rbuf, size, . . .);     /* receive into a host buffer        */
    cudaMemcpy(rdev, rbuf, . . .);   /* copy into GPU memory              */

• Naïve implementation with standard MPI and CUDA

• High Productivity and Poor Performance

39 HPC Advisory Council Stanford Conference, Feb '14

Page 40

[Diagram: GPU and NIC attached to the CPU over PCIe, connected through a switch]

At Sender:
    /* Stage the GPU buffer into host memory in pipeline_len chunks of blksz bytes */
    for (j = 0; j < pipeline_len; j++)
        cudaMemcpyAsync(sbuf + j * blksz, sdev + j * blksz, . . .);

    for (j = 0; j < pipeline_len; j++) {
        /* Wait for chunk j's copy to complete, progressing MPI in the meantime */
        do {
            result = cudaStreamQuery(. . .);
            if (j > 0) MPI_Test(. . .);
        } while (result != cudaSuccess);

        MPI_Isend(sbuf + j * blksz, blksz, . . .);
    }
    MPI_Waitall(. . .);

Sample Code – User Optimized Code

• Pipelining at user level with non-blocking MPI and CUDA interfaces

• Code at Sender side (and repeated at Receiver side)

• User-level copying may not match with internal MPI design

• High Performance and Poor Productivity

HPC Advisory Council Stanford Conference, Feb '14 40

Page 41

Sample Code – MVAPICH2-GPU

At Sender:
    MPI_Send(s_device, size, …);    /* s_device is a GPU device pointer */

At Receiver:
    MPI_Recv(r_device, size, …);    /* r_device is a GPU device pointer */

(Pipelining of GPU copies and network transfers happens inside MVAPICH2)

• MVAPICH2-GPU: standard MPI interfaces used

• Takes advantage of Unified Virtual Addressing (>= CUDA 4.0)

• Overlaps data movement from GPU with RDMA transfers

• High Performance and High Productivity

41 HPC Advisory Council Stanford Conference, Feb '14
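Putting the fragment above into a complete, hedged program: the sketch below assumes a CUDA-aware MPI library (such as MVAPICH2-GPU) in which device pointers can be passed directly to MPI calls.

    #include <mpi.h>
    #include <cuda_runtime.h>

    /* Hedged sketch: send a GPU-resident buffer between two ranks using a
     * CUDA-aware MPI library (device pointers passed directly to MPI).    */
    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);

        int rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        const int n = 1 << 20;
        float *d_buf;
        cudaMalloc((void **)&d_buf, n * sizeof(float));

        if (rank == 0) {
            cudaMemset(d_buf, 0, n * sizeof(float));     /* fill on the GPU */
            MPI_Send(d_buf, n, MPI_FLOAT, 1, 0, MPI_COMM_WORLD);
        } else if (rank == 1) {
            MPI_Recv(d_buf, n, MPI_FLOAT, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
        }

        cudaFree(d_buf);
        MPI_Finalize();
        return 0;
    }

With MVAPICH2 this path is typically enabled at run time (for example via MV2_USE_CUDA=1); the exact set of parameters depends on the release.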

Page 42

MPI-Level Two-sided Communication

• 45% improvement compared with a naïve user-level implementation (Memcpy+Send), for 4MB messages

• 24% improvement compared with an advanced user-level implementation (MemcpyAsync+Isend), for 4MB messages

[Chart: time (us) vs. message size (32K-4M bytes) for Memcpy+Send, MemcpyAsync+Isend, and MVAPICH2-GPU (lower is better)]

H. Wang, S. Potluri, M. Luo, A. Singh, S. Sur and D. K. Panda, MVAPICH2-GPU: Optimized GPU to GPU Communication for InfiniBand Clusters, ISC ‘11

42 HPC Advisory Council Stanford Conference, Feb '14


Page 43

MVAPICH2 1.8, 1.9, 2.0a and 2.0b Releases

• Support for MPI communication from NVIDIA GPU device memory

• High performance RDMA-based inter-node point-to-point communication (GPU-GPU, GPU-Host and Host-GPU)

• High performance intra-node point-to-point communication for multi-GPU adapters/node (GPU-GPU, GPU-Host and Host-GPU)

• Taking advantage of CUDA IPC (available since CUDA 4.1) in intra-node communication for multiple GPU adapters/node

• Optimized and tuned collectives for GPU device buffers

• MPI datatype support for point-to-point and collective communication from GPU device buffers

43 HPC Advisory Council Stanford Conference, Feb '14

Page 44

HPC Advisory Council Stanford Conference, Feb '14 44

Applications-Level Evaluation

• LBM-CUDA (Courtesy: Carlos Rosale, TACC)
  • Lattice Boltzmann Method for multiphase flows with large density ratios
  • One process/GPU per node, 16 nodes

• AWP-ODC (Courtesy: Yifeng Cui, SDSC)
  • A seismic modeling code, Gordon Bell Prize finalist at SC 2010
  • 128x256x512 data grid per process, 8 nodes

• Oakley cluster at OSC: two hex-core Intel Westmere processors, two NVIDIA Tesla M2070, one Mellanox IB QDR MT26428 adapter and 48 GB of main memory

[Charts: LBM-CUDA step time (s) vs. domain size (256x256x256 to 512x512x512) for MPI vs. MPI-GPU, with 9.4-13.7% improvement; AWP-ODC total execution time (s) for 1 GPU/proc and 2 GPUs/procs per node, with 11.1% and 7.9% improvement]

Page 45

• Fastest possible communication between GPU and other PCI-E devices

• Network adapter can directly read data from GPU device memory

• Avoids copies through the host

• Allows for better asynchronous communication

HPC Advisory Council Stanford Conference, Feb '14 45

GPU-Direct RDMA with CUDA 5/6

[Diagram: InfiniBand adapter reading directly from GPU memory, bypassing the CPU's system memory]

MVAPICH2 with GDR support under development

MVAPICH2-GDR 2.0b is available for users

Page 46

• Peer2Peer (P2P) bottlenecks on Sandy Bridge

• Design of MVAPICH2

– Hybrid design

– Takes advantage of GPU-Direct-RDMA for writes to GPU

– Uses host-based buffered design in current MVAPICH2 for reads

– Works around the bottlenecks transparently

46

Initial Design of OSU-MVAPICH2 with GPU-Direct-RDMA

HPC Advisory Council Stanford Conference, Feb '14

[Diagram: IB adapter, system memory, and GPU memory attached through the chipset of a SNB E5-2670 node; P2P write reaches 5.2 GB/s while P2P read stays below 1.0 GB/s]

Page 47

47

Performance of MVAPICH2 with GPU-Direct-RDMA: Latency

GPU-GPU Internode MPI Latency

HPC Advisory Council Stanford Conference, Feb '14

[Charts: GPU-GPU internode MPI latency (us) vs. message size for 1-Rail, 2-Rail, 1-Rail-GDR, and 2-Rail-GDR configurations; annotations show a 67% improvement and 5.49 usec for small messages, and a 10% improvement for large messages]

Based on MVAPICH2-2.0b; Intel Ivy Bridge (E5-2680 v2) node with 20 cores; NVIDIA Tesla K40c GPU, Mellanox Connect-IB Dual-FDR HCA; CUDA 5.5, Mellanox OFED 2.0 with GPU-Direct-RDMA Patch

Page 48

48

Performance of MVAPICH2 with GPU-Direct-RDMA: Bandwidth

[Charts: GPU-GPU internode MPI uni-directional bandwidth (MB/s) vs. message size for 1-Rail, 2-Rail, 1-Rail-GDR, and 2-Rail-GDR; annotations show a 5x improvement for small messages and 9.8 GB/s for large messages]

Based on MVAPICH2-2.0b; Intel Ivy Bridge (E5-2680 v2) node with 20 cores; NVIDIA Tesla K40c GPU, Mellanox Connect-IB Dual-FDR HCA; CUDA 5.5, Mellanox OFED 2.0 with GPU-Direct-RDMA Patch

HPC Advisory Council Stanford Conference, Feb '14

Page 49

49

Performance of MVAPICH2 with GPU-Direct-RDMA: Bi-Bandwidth

[Charts: GPU-GPU internode MPI bi-directional bandwidth (MB/s) vs. message size for 1-Rail, 2-Rail, 1-Rail-GDR, and 2-Rail-GDR; annotations show a 4.3x improvement for small messages, and a 19% improvement reaching 19 GB/s for large messages]

Based on MVAPICH2-2.0b; Intel Ivy Bridge (E5-2680 v2) node with 20 cores; NVIDIA Tesla K40c GPU, Mellanox Connect-IB Dual-FDR HCA; CUDA 5.5, Mellanox OFED 2.0 with GPU-Direct-RDMA Patch

HPC Advisory Council Stanford Conference, Feb '14

Page 50

50

AWP-ODC with MVAPICH2-GPU

Based on MVAPICH2-2.0b, CUDA 5.5, Mellanox OFED 2.0 with GPU-Direct-RDMA Patch; two nodes, one GPU/node, one process/GPU

[Chart: AWP-ODC run time on Platform A and Platform B, MV2 vs. MV2-GDR, with annotated improvements of 11.6% and 15.4%]

Platform A: Intel Sandy Bridge + NVIDIA Tesla K20 + Mellanox ConnectX-3

Platform B: Intel Ivy Bridge + NVIDIA Tesla K40 + Mellanox Connect-IB

• A widely-used seismic modeling application, Gordon Bell Finalist at SC 2010

• An initial version using MPI + CUDA for GPU clusters

• Takes advantage of CUDA-aware MPI, two nodes, 1 GPU/Node and 64x32x32 problem

• GPUDirect-RDMA delivers better performance with newer architecture

[Chart: computation and communication time breakdown for Platform A and Platform B, with and without GDR, with annotated improvements of 21% and 30%]

HPC Advisory Council Stanford Conference, Feb '14

Page 51

51

Performance Benefits using MPI-3 RMA

Based on MVAPICH2-2.0b + Extensions; Intel Sandy Bridge (E5-2670) node with 16 cores; NVIDIA Tesla K40c GPU, Mellanox Connect-IB Dual-FDR HCA; CUDA 5.5, Mellanox OFED 2.1 with GPU-Direct-RDMA Plugin

MPI-3 RMA provides flexible synchronization – reduces overheads for GPU-GPU Communication

HPC Advisory Council Stanford Conference, Feb '14

[Charts: small-message latency (usec) and small-message rate (msgs/sec) vs. message size for Send-Recv, Put+ActiveSync, and Put+Flush]

Page 52

How can I get Started with GDR Experimentation?

• MVAPICH2-2.0b with GDR support can be downloaded from https://mvapich.cse.ohio-state.edu/download/mvapich2gdr/

• System software requirements

– Mellanox OFED 2.1

– NVIDIA Driver 331.20 or later

– NVIDIA CUDA Toolkit 5.5

– Plugin for GPUDirect RDMA (http://www.mellanox.com/page/products_dyn?product_family=116)

• Has optimized designs for point-to-point communication using GDR

• Work under progress for optimizing collective and one-sided communication

• Contact the MVAPICH help list with any questions related to the package: mvapich-[email protected]

52 HPC Advisory Council Stanford Conference, Feb '14

Page 53

• Scalability for million to billion processors

– Support for highly-efficient inter-node and intra-node communication (both two-sided and one-sided)

– Extremely low memory footprint

• Collective communication

– Multicore-aware and Hardware-multicast-based

– Offload and Non-blocking

– Topology-aware

– Power-aware

• Support for GPGPUs

• Support for Intel MICs

• Fault-tolerance

• Hybrid MPI+PGAS programming (MPI + OpenSHMEM, MPI + UPC, …) with Unified Runtime

Overview of A Few Challenges being Addressed by MVAPICH2/MVAPICH2-X for Exascale

53 HPC Advisory Council Stanford Conference, Feb '14

Page 54

MPI Applications on MIC Clusters

[Diagram: spectrum of execution modes from multi-core centric (Xeon) to many-core centric (Xeon Phi): Host-only, Offload (/reverse Offload) with offloaded computation, Symmetric (MPI programs on both host and coprocessor), and Coprocessor-only]

• Flexibility in launching MPI jobs on clusters with Xeon Phi

54 HPC Advisory Council Stanford Conference, Feb '14

Page 55

Data Movement on Intel Xeon Phi Clusters

[Diagram: two nodes (Node 0, Node 1), each with two CPUs connected by QPI, MICs attached over PCIe, and an IB adapter; an MPI process can run on any CPU or MIC]

Communication paths:
1. Intra-Socket
2. Inter-Socket
3. Inter-Node
4. Intra-MIC
5. Intra-Socket MIC-MIC
6. Inter-Socket MIC-MIC
7. Inter-Node MIC-MIC
8. Intra-Socket MIC-Host
9. Inter-Socket MIC-Host
10. Inter-Node MIC-Host
11. Inter-Node MIC-MIC with IB adapter on remote socket
and more . . .

• Critical for runtimes to optimize data movement, hiding the complexity

• Connected as PCIe devices – Flexibility but Complexity

HPC Advisory Council Stanford Conference, Feb '14

Page 56

56

MVAPICH2-MIC Design for Clusters with IB and MIC

• Offload Mode

• Intranode Communication

• Coprocessor-only Mode

• Symmetric Mode

• Internode Communication

• Coprocessors-only

• Symmetric Mode

• Multi-MIC Node Configurations

HPC Advisory Council Stanford Conference, Feb '14

Page 57

Direct-IB Communication

57

[Diagram/table: IB Write to Xeon Phi (P2P Write) and IB Read from Xeon Phi (P2P Read) bandwidths over intra-socket and inter-socket paths on SandyBridge (E5-2670) and IvyBridge (E5-2680 v2) nodes with an IB FDR HCA (MT4099); achieved bandwidths range from 247 MB/s (4%) to 6396 MB/s (100%) of peak]

• IB-verbs based communication from Xeon Phi

• No porting effort for MPI libraries with IB support

• Peak IB FDR Bandwidth: 6397 MB/s

• Data movement between IB HCA and Xeon Phi: implemented as peer-to-peer transfers

HPC Advisory Council Stanford Conference, Feb '14

Page 58

MIC-Remote-MIC P2P Communication

58 HPC Advisory Council Stanford Conference, Feb '14

[Charts: intra-socket and inter-socket MIC-to-remote-MIC point-to-point large-message latency (usec) and bandwidth (MB/sec) vs. message size; peak bandwidths of about 5236 MB/sec (intra-socket) and 5594 MB/sec (inter-socket)]

Page 59

InterNode – Symmetric Mode - P3DFFT Performance

59 HPC Advisory Council Stanford Conference, Feb '14

• To explore different threading levels for host and Xeon Phi

3DStencil Comm. Kernel

P3DFFT – 512x512x4K

[Charts: P3DFFT (512x512x4K) time per step (sec) for 4+4 and 8+8 processes on Host+MIC across 32 nodes, and 3DStencil communication time per step (sec) for 2+2 and 2+8 Host+MIC processes with 8 threads/proc on 8 nodes, comparing MV2-INTRA-OPT and MV2-MIC (annotated improvements of 50.2% and 16.2%)]

Page 60

60

Optimized MPI Collectives for MIC Clusters (Broadcast)

HPC Advisory Council Stanford Conference, Feb '14

K. Kandalla , A. Venkatesh, K. Hamidouche, S. Potluri, D. Bureddy and D. K. Panda, "Designing Optimized MPI Broadcast and Allreduce for Many Integrated Core (MIC) InfiniBand Clusters; HotI’13

[Charts: 64-node MPI_Bcast (16H + 60M) small- and large-message latency (usecs) vs. message size for MV2-MIC vs. MV2-MIC-Opt, and Windjammer execution time (secs) for 4-node (16H + 16M) and 8-node (8H + 8M) configurations]

• Up to 50 - 80% improvement in MPI_Bcast latency with as many as 4864 MPI processes (Stampede)

• 12% and 16% improvement in the performance of Windjammer with 128 processes

Page 61

61

Optimized MPI Collectives for MIC Clusters (Allgather & Alltoall)

HPC Advisory Council Stanford Conference, Feb '14

A. Venkatesh, S. Potluri, R. Rajachandrasekar, M. Luo, K. Hamidouche and D. K. Panda - High Performance Alltoall and Allgather designs for InfiniBand MIC Clusters; IPDPS’14 (accepted)

[Charts: 32-node Allgather (16H + 16M) small-message latency, 32-node Allgather (8H + 8M) large-message latency, and 32-node Alltoall (8H + 8M) large-message latency (usecs) vs. message size for MV2-MIC vs. MV2-MIC-Opt, plus P3DFFT computation/communication time on 32 nodes (8H + 8M, 2K*2K*1K problem); annotated improvements of 76%, 58%, and 55%]

Page 62

• Scalability for million to billion processors

– Support for highly-efficient inter-node and intra-node communication (both two-sided and one-sided)

– Extremely low memory footprint

• Collective communication

– Multicore-aware and Hardware-multicast-based

– Offload and Non-blocking

– Topology-aware

– Power-aware

• Support for GPGPUs

• Support for Intel MICs

• Fault-tolerance

• Hybrid MPI+PGAS programming (MPI + OpenSHMEM, MPI + UPC, …) with Unified Runtime

Overview of A Few Challenges being Addressed by MVAPICH2/MVAPICH2-X for Exascale

62 HPC Advisory Council Stanford Conference, Feb '14

Page 63

• Component failures are common in large-scale clusters

• Imposes a need for reliability and fault tolerance

• Multiple challenges:

– Checkpoint-Restart vs. Process Migration

– Benefits of Scalable Checkpoint Restart (SCR) Support

• Application guided

• Application transparent

HPC Advisory Council Stanford Conference, Feb '14 63

Fault Tolerance

Page 64

HPC Advisory Council Stanford Conference, Feb '14 64

Checkpoint-Restart vs. Process Migration: Low Overhead Failure Prediction with IPMI

X. Ouyang, R. Rajachandrasekar, X. Besseron, D. K. Panda, High Performance Pipelined Process Migration with RDMA, CCGrid 2011

X. Ouyang, S. Marcarelli, R. Rajachandrasekar and D. K. Panda, RDMA-Based Job Migration Framework for MPI over InfiniBand, Cluster 2010

[Chart: time to migrate 8 MPI processes, normalized; annotated speedups of 2.3X, 4.3X, and 10.7X relative to the baseline]

• Job-wide Checkpoint/Restart is not scalable

• Job-pause and Process Migration framework can deliver pro-active fault-tolerance

• Also allows for cluster-wide load-balancing by means of job compaction

[Chart: LU Class C benchmark (64 processes) execution time (seconds) broken into Job Stall, Checkpoint (Migration), Resume, and Restart phases for Migration w/o RDMA, CR (ext3), and CR (PVFS); annotated 2.03x and 4.49x differences]

Page 65

Multi-Level Checkpointing with ScalableCR (SCR)

65 HPC Advisory Council Stanford Conference, Feb '14

[Diagram: storage levels ordered by checkpoint cost and resiliency, from low (node-local) to high (parallel file system)]

• LLNL's Scalable Checkpoint/Restart library

• Can be used for application guided and application transparent checkpointing (a minimal usage sketch follows this list)

• Effective utilization of storage hierarchy

– Local: Store checkpoint data on node's local storage, e.g. local disk, ramdisk

– Partner: Write to local storage and on a partner node

– XOR: Write file to local storage and small sets of nodes collectively compute and store parity redundancy data (RAID-5)

– Stable Storage: Write to parallel file system
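A minimal, hedged sketch of application-guided checkpointing with the SCR API is shown below; write_checkpoint_file() is a hypothetical application routine, and the checkpoint naming is illustrative only.

    #include <mpi.h>
    #include <scr.h>
    #include <stdio.h>

    /* Hypothetical application routine: writes one checkpoint file and
     * returns 1 on success, 0 on failure.                               */
    extern int write_checkpoint_file(const char *path);

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);
        SCR_Init();                                /* after MPI_Init */

        int rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        for (int t = 0; t < 100; t++) {
            /* ... application timestep ... */

            int need = 0;
            SCR_Need_checkpoint(&need);            /* let SCR decide when to checkpoint */
            if (!need) continue;

            SCR_Start_checkpoint();                /* begin a checkpoint epoch          */

            char name[256], scr_path[SCR_MAX_FILENAME];
            snprintf(name, sizeof(name), "ckpt.%d.rank%d", t, rank);
            SCR_Route_file(name, scr_path);        /* SCR picks local/partner/XOR/PFS   */

            int valid = write_checkpoint_file(scr_path);
            SCR_Complete_checkpoint(valid);        /* finalize, apply redundancy scheme */
        }

        SCR_Finalize();
        MPI_Finalize();
        return 0;
    }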

Page 66

Application-guided Multi-Level Checkpointing

66 HPC Advisory Council Stanford Conference, Feb '14

[Chart: checkpoint writing time (s) for a representative SCR-enabled application, comparing PFS, MVAPICH2+SCR (Local), MVAPICH2+SCR (Partner), and MVAPICH2+SCR (XOR)]

Representative SCR-Enabled Application

• Checkpoint writing phase times of representative SCR-enabled MPI application

• 512 MPI processes (8 procs/node)

• Approx. 51 GB checkpoints

Page 67

Transparent Multi-Level Checkpointing

67 HPC Advisory Council Stanford Conference, Feb '14

• ENZO Cosmology application – Radiation Transport workload

• Using MVAPICH2’s CR protocol instead of the application’s in-built CR mechanism

• 512 MPI processes running on the Stampede Supercomputer@TACC

• Approx. 12.8 GB checkpoints: ~2.5X reduction in checkpoint I/O costs

Page 68

• Scalability for million to billion processors

– Support for highly-efficient inter-node and intra-node communication (both two-sided and one-sided)

– Extremely low memory footprint

• Collective communication

– Multicore-aware and Hardware-multicast-based

– Offload and Non-blocking

– Topology-aware

– Power-aware

• Support for GPGPUs

• Support for Intel MICs

• Fault-tolerance

• Hybrid MPI+PGAS programming (MPI + OpenSHMEM, MPI + UPC, …) with Unified Runtime

Overview of A Few Challenges being Addressed by MVAPICH2/MVAPICH2-X for Exascale

68 HPC Advisory Council Stanford Conference, Feb '14

Page 69

Current approaches for Hybrid Programming

• Layering one programming model over another
  – Poor performance due to semantics mismatch

• Separate runtime for each programming model
  – Need more network resources
  – Might lead to deadlock!
  – Poor performance

[Diagram: a hybrid (PGAS+MPI) application issuing PGAS calls and MPI calls either through a PGAS runtime layered over the MPI runtime, or through separate PGAS and MPI runtimes, in both cases over the InfiniBand network]

HPC Advisory Council Stanford Conference, Feb '14 69

Page 70: Programming Models for Exascale Systems...– Extremely minimum memory footprint • Balancing intra-node and inter-node communication for next generation multi-core (128-1024 cores/node)

Our Approach: Unified Communication Runtime

• Optimal network resource usage

• No deadlock because of a single runtime

• Better performance

[Diagram: instead of separate PGAS and MPI runtimes, the hybrid (OpenSHMEM+MPI) application issues PGAS calls and MPI calls through PGAS and MPI interfaces into a single Unified Communication Runtime over the InfiniBand network]

HPC Advisory Council Stanford Conference, Feb '14 70

Page 71: Programming Models for Exascale Systems...– Extremely minimum memory footprint • Balancing intra-node and inter-node communication for next generation multi-core (128-1024 cores/node)

MVAPICH2-X for Hybrid MPI + PGAS Applications

HPC Advisory Council Stanford Conference, Feb '14 71

[Diagram: MPI applications, OpenSHMEM applications, UPC applications, and hybrid (MPI + PGAS) applications issue OpenSHMEM, MPI, and UPC calls into the Unified MVAPICH2-X Runtime, which runs over InfiniBand, RoCE, and iWARP]

• Unified communication runtime for MPI, UPC, and OpenSHMEM, available from MVAPICH2-X 1.9 onwards
– http://mvapich.cse.ohio-state.edu

• Feature Highlights
– Supports MPI(+OpenMP), OpenSHMEM, UPC, MPI(+OpenMP) + OpenSHMEM, and MPI(+OpenMP) + UPC
– MPI-3 compliant, OpenSHMEM v1.0 standard compliant, UPC v1.2 standard compliant
– Scalable inter-node and intra-node communication – point-to-point and collectives
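As a concrete illustration of what the unified runtime enables, here is a minimal hybrid sketch (not from the talk) that mixes MPI and OpenSHMEM calls in one program. The data exchanged, the ring-style put, and the initialization order are illustrative assumptions; the point is simply that both models can coexist on the same processes over a single runtime such as MVAPICH2-X.

/* Hedged sketch: MPI and OpenSHMEM calls mixed in one executable,
 * as enabled by a unified runtime. Data layout is illustrative. */
#include <stdio.h>
#include <mpi.h>
#include <shmem.h>

static int from_left;   /* symmetric variable, remotely writable */

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    start_pes(0);                    /* OpenSHMEM 1.0-style initialization */

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* One-sided PGAS-style communication: put my rank into my right neighbor. */
    int right = (rank + 1) % size;
    shmem_int_p(&from_left, rank, right);
    shmem_barrier_all();

    /* MPI collective communication on the very same processes. */
    int sum = 0;
    MPI_Allreduce(&from_left, &sum, 1, MPI_INT, MPI_SUM, MPI_COMM_WORLD);

    if (rank == 0)
        printf("value from left neighbor = %d, global sum = %d\n", from_left, sum);

    MPI_Finalize();
    return 0;
}

With the separate-runtime or layered approaches sketched on the previous slides, such a mixture would risk the deadlock, resource, and semantics-mismatch problems listed there; with one runtime, a single progress engine services both sets of calls.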

Page 72: Programming Models for Exascale Systems...– Extremely minimum memory footprint • Balancing intra-node and inter-node communication for next generation multi-core (128-1024 cores/node)

HPC Advisory Council Stanford Conference, Feb '14 72

Micro-benchmark Level Performance (OpenSHMEM)

[Figures: Reduce, Broadcast, and Collect latency – Time (us, log scale) vs. Message Size (4 to 256K) at 256 processes, and Barrier latency vs. No. of Processes (128, 256); series: OpenSHMEM-GASNet (Linear), OpenSHMEM-GASNet (Tree), OpenSHMEM-OSU]
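For reference, a minimal sketch (not the OSU benchmark code) of the kind of OpenSHMEM broadcast measured above. Buffer sizes and the printed element are illustrative; the constant and call names follow the OpenSHMEM 1.0 conventions assumed here.

/* Hedged sketch: an OpenSHMEM 1.0 broadcast of 32-bit data from PE 0. */
#include <stdio.h>
#include <shmem.h>

#define NELEMS 1024

/* Collectives require symmetric (globally addressable) objects. */
static int  src[NELEMS], dst[NELEMS];
static long pSync[_SHMEM_BCAST_SYNC_SIZE];

int main(void)
{
    start_pes(0);
    int me   = shmem_my_pe();
    int npes = shmem_n_pes();

    for (int i = 0; i < _SHMEM_BCAST_SYNC_SIZE; i++)
        pSync[i] = _SHMEM_SYNC_VALUE;

    if (me == 0)
        for (int i = 0; i < NELEMS; i++) src[i] = i;

    shmem_barrier_all();

    /* Broadcast NELEMS 32-bit words from PE 0 to all other PEs. */
    shmem_broadcast32(dst, src, NELEMS, 0, 0, 0, npes, pSync);

    shmem_barrier_all();
    if (me == 1)
        printf("PE %d received dst[10] = %d\n", me, dst[10]);
    return 0;
}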

Page 73: Programming Models for Exascale Systems...– Extremely minimum memory footprint • Balancing intra-node and inter-node communication for next generation multi-core (128-1024 cores/node)

HPC Advisory Council Stanford Conference, Feb '14 73

Hybrid MPI+OpenSHMEM Graph500 Design

J. Jose, S. Potluri, K. Tomko and D. K. Panda, Designing Scalable Graph500 Benchmark with Hybrid MPI+OpenSHMEM Programming Models, International Supercomputing Conference (ISC '13), June 2013

[Figures: Strong Scalability – Billions of Traversed Edges Per Second (TEPS) vs. Scale (26–29); Weak Scalability – TEPS vs. # of Processes (1,024–8,192); series: MPI-Simple, MPI-CSC, MPI-CSR, Hybrid (MPI+OpenSHMEM)]

J. Jose, K. Kandalla, M. Luo and D. K. Panda, Supporting Hybrid MPI and OpenSHMEM over InfiniBand: Design and Performance Evaluation, Int'l Conference on Parallel Processing (ICPP '12), September 2012

[Figure: Execution Time (s) vs. No. of Processes (4K, 8K, 16K) for MPI-Simple, MPI-CSC, MPI-CSR, and Hybrid (MPI+OpenSHMEM); annotated improvements of 7.6X and 13X for the hybrid design]

• Performance of the Hybrid (MPI+OpenSHMEM) Graph500 design

• 8,192 processes
– 2.4X improvement over MPI-CSR
– 7.6X improvement over MPI-Simple

• 16,384 processes
– 1.5X improvement over MPI-CSR
– 13X improvement over MPI-Simple
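The cited hybrid design replaces two-sided frontier exchanges with one-sided OpenSHMEM operations. The fragment below is only a hedged illustration of that general pattern – reserving a slot in a remote next-frontier queue with an atomic fetch-and-add and depositing the vertex with a put; the queue layout and helper names are hypothetical, not the authors' code.

/* Hedged sketch of a one-sided BFS push: reserve a slot on the owner PE
 * with an atomic fetch-and-add, then deposit the vertex with a put.
 * Queue capacity and helper names are hypothetical. */
#include <shmem.h>

#define QUEUE_CAP 4096

static int next_frontier[QUEUE_CAP];   /* symmetric per-PE queue  */
static int queue_tail;                 /* symmetric tail counter  */

/* Push vertex v onto the next-frontier queue owned by PE owner_pe. */
static void push_remote(int v, int owner_pe)
{
    int slot = shmem_int_fadd(&queue_tail, 1, owner_pe);   /* reserve a slot */
    if (slot < QUEUE_CAP)
        shmem_int_p(&next_frontier[slot], v, owner_pe);    /* write the vertex */
}

int main(void)
{
    start_pes(0);
    int me = shmem_my_pe(), npes = shmem_n_pes();

    push_remote(100 + me, (me + 1) % npes);   /* toy usage: push to the neighbor */
    shmem_barrier_all();                      /* remote puts complete before the next level */
    return 0;
}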

Page 74: Programming Models for Exascale Systems...– Extremely minimum memory footprint • Balancing intra-node and inter-node communication for next generation multi-core (128-1024 cores/node)

74

Hybrid MPI+OpenSHMEM Sort Application


• Performance of Hybrid (MPI+OpenSHMEM) Sort Application

• Execution Time

– 4 TB input size at 4,096 cores: MPI – 2,408 seconds, Hybrid – 1,172 seconds
– 51% improvement over the MPI-based design

• Strong Scalability (constant input size of 500 GB)
– At 4,096 cores: MPI – 0.16 TB/min, Hybrid – 0.36 TB/min
– 55% improvement over the MPI-based design

[Figures: Execution Time (seconds) vs. Input Data – No. of Processes (500GB-512, 1TB-1K, 2TB-2K, 4TB-4K) and Sort Rate (TB/min) vs. No. of Processes (512–4,096) for the MPI and Hybrid designs; annotated improvements of 51% and 55%]

HPC Advisory Council Stanford Conference, Feb '14

Page 75: Programming Models for Exascale Systems...– Extremely minimum memory footprint • Balancing intra-node and inter-node communication for next generation multi-core (128-1024 cores/node)

HPC Advisory Council Stanford Conference, Feb '14 75

UPC Collective Performance

[Figures: Broadcast, Scatter, Gather, and Exchange latency at 2,048 processes – Time vs. Message Size (64 to 256K); series: UPC-GASNet and UPC-OSU; annotated improvements of 2X, 25X, and 35% for UPC-OSU]

Page 76: Programming Models for Exascale Systems...– Extremely minimum memory footprint • Balancing intra-node and inter-node communication for next generation multi-core (128-1024 cores/node)

Hybrid MPI+UPC NAS-FT

• Modified NAS FT UPC all-to-all pattern using MPI_Alltoall

• Truly hybrid program

• For FT (Class C, 128 processes)

• 34% improvement over UPC-GASNet

• 30% improvement over UPC-OSU
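The key transformation named in the first bullet is expressing the FT transpose's exchange as a single MPI_Alltoall. A minimal sketch of that call follows; the block size and buffers are illustrative assumptions, not the benchmark's actual data layout.

/* Hedged sketch: expressing an FT-style transpose exchange with
 * MPI_Alltoall. Block size and buffers are illustrative. */
#include <stdlib.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    const int block = 1024;                       /* elements sent to each rank */
    double *sendbuf = malloc((size_t)size * block * sizeof(double));
    double *recvbuf = malloc((size_t)size * block * sizeof(double));
    for (int i = 0; i < size * block; i++)
        sendbuf[i] = rank + 0.001 * i;

    /* Each rank exchanges one block with every other rank in a single call. */
    MPI_Alltoall(sendbuf, block, MPI_DOUBLE,
                 recvbuf, block, MPI_DOUBLE, MPI_COMM_WORLD);

    free(sendbuf);
    free(recvbuf);
    MPI_Finalize();
    return 0;
}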

76 HPC Advisory Council Stanford Conference, Feb '14

[Figure: Time (s) vs. NAS Problem Size – System Size (B-64, C-64, B-128, C-128) for UPC-GASNet, UPC-OSU, and Hybrid-OSU; annotated 34% improvement]

J. Jose, M. Luo, S. Sur and D. K. Panda, Unifying UPC and MPI Runtimes: Experience with MVAPICH, Fourth Conference on Partitioned Global Address Space Programming Model (PGAS '10), October 2010

Page 77: Programming Models for Exascale Systems...– Extremely minimum memory footprint • Balancing intra-node and inter-node communication for next generation multi-core (128-1024 cores/node)

• Performance and memory scalability toward 500K–1M cores
– Dynamically Connected Transport (DCT) service with Connect-IB

• Enhanced optimization for GPGPU and coprocessor support
– Extending the GPGPU support (GPU-Direct RDMA) with CUDA 6.0 and beyond
– Support for Intel MIC (Knights Landing)

• Taking advantage of the Collective Offload framework
– Including support for non-blocking collectives (MPI 3.0); a usage sketch follows this list

• RMA support (as in MPI 3.0)

• Extended topology-aware collectives

• Power-aware collectives

• Support for the MPI Tools Interface (as in MPI 3.0)

• Checkpoint-Restart and migration support with in-memory checkpointing

• Hybrid MPI+PGAS programming support with GPGPUs and accelerators

MVAPICH2/MVAPICH2-X – Plans for Exascale

77 HPC Advisory Council Stanford Conference, Feb '14
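For the non-blocking collectives item above, a minimal MPI-3 sketch: the reduction is started, independent work proceeds, and the result is awaited later. The local "work" loop is an illustrative stand-in for real computation.

/* Hedged sketch: overlapping computation with an MPI-3 non-blocking
 * collective (MPI_Iallreduce). The local work is a stand-in. */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    double local = rank + 1.0, global = 0.0;
    MPI_Request req;

    /* Start the reduction without blocking. */
    MPI_Iallreduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM,
                   MPI_COMM_WORLD, &req);

    /* ... independent local computation overlaps with the collective ... */
    double work = 0.0;
    for (int i = 0; i < 1000000; i++)
        work += 1e-6 * i;

    MPI_Wait(&req, MPI_STATUS_IGNORE);       /* complete the collective */

    if (rank == 0)
        printf("global sum = %f (local work = %f)\n", global, work);

    MPI_Finalize();
    return 0;
}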

Page 78: Programming Models for Exascale Systems...– Extremely minimum memory footprint • Balancing intra-node and inter-node communication for next generation multi-core (128-1024 cores/node)

• Big Data: Hadoop and Memcached

– (Feb 4th, 13:10–14:00)

78

More Details on Accelerating Big Data will be covered in a separate talk at this conference

HPC Advisory Council Stanford Conference, Feb '14

Page 79: Programming Models for Exascale Systems...– Extremely minimum memory footprint • Balancing intra-node and inter-node communication for next generation multi-core (128-1024 cores/node)

• InfiniBand with its RDMA feature is gaining momentum in HPC systems, delivering the best performance and seeing greater usage

• As the HPC community moves to Exascale, new solutions are needed in the MPI and hybrid MPI+PGAS stacks to support GPUs and accelerators

• Demonstrated how such solutions can be designed with MVAPICH2/MVAPICH2-X, along with their performance benefits

• Such designs will allow application scientists and engineers to take advantage of upcoming exascale systems

79

Concluding Remarks

HPC Advisory Council Stanford Conference, Feb '14

Page 80: Programming Models for Exascale Systems...– Extremely minimum memory footprint • Balancing intra-node and inter-node communication for next generation multi-core (128-1024 cores/node)

HPC Advisory Council Stanford Conference, Feb '14

Funding Acknowledgments

Funding Support by

Equipment Support by

80

Page 81: Programming Models for Exascale Systems...– Extremely minimum memory footprint • Balancing intra-node and inter-node communication for next generation multi-core (128-1024 cores/node)

HPC Advisory Council Stanford Conference, Feb '14

Personnel Acknowledgments

Current Students

– S. Chakraborthy (Ph.D.)

– N. Islam (Ph.D.)

– J. Jose (Ph.D.)

– M. Li (Ph.D.)

– S. Potluri (Ph.D.)

– R. Rajachandrasekhar (Ph.D.)

Past Students

– P. Balaji (Ph.D.)

– D. Buntinas (Ph.D.)

– S. Bhagvat (M.S.)

– L. Chai (Ph.D.)

– B. Chandrasekharan (M.S.)

– N. Dandapanthula (M.S.)

– V. Dhanraj (M.S.)

– T. Gangadharappa (M.S.)

– K. Gopalakrishnan (M.S.)

– G. Santhanaraman (Ph.D.)

– A. Singh (Ph.D.)

– J. Sridhar (M.S.)

– S. Sur (Ph.D.)

– H. Subramoni (Ph.D.)

– K. Vaidyanathan (Ph.D.)

– A. Vishnu (Ph.D.)

– J. Wu (Ph.D.)

– W. Yu (Ph.D.)

81

Past Research Scientist – S. Sur

Current Post-Docs

– X. Lu

– K. Hamidouche

Current Programmers

– M. Arnold

– J. Perkins

Past Post-Docs – H. Wang

– X. Besseron

– H.-W. Jin

– M. Luo

– W. Huang (Ph.D.)

– W. Jiang (M.S.)

– S. Kini (M.S.)

– M. Koop (Ph.D.)

– R. Kumar (M.S.)

– S. Krishnamoorthy (M.S.)

– K. Kandalla (Ph.D.)

– P. Lai (M.S.)

– J. Liu (Ph.D.)

– M. Luo (Ph.D.)

– A. Mamidala (Ph.D.)

– G. Marsh (M.S.)

– V. Meshram (M.S.)

– S. Naravula (Ph.D.)

– R. Noronha (Ph.D.)

– X. Ouyang (Ph.D.)

– S. Pai (M.S.)

– M. Rahman (Ph.D.)

– D. Shankar (Ph.D.)

– R. Shir (Ph.D.)

– A. Venkatesh (Ph.D.)

– J. Zhang (Ph.D.)

– E. Mancini

– S. Marcarelli

– J. Vienne

Current Senior Research Associate

– H. Subramoni

Past Programmers

– D. Bureddy