Introduction to Parallel Programming with C and MPI at MCSR, Part 2: Broadcast/Reduce

Transcript of Introduction to Parallel Programming with C and MPI at MCSR Part 2 Broadcast/Reduce.

Page 1: Introduction to Parallel Programming with C and MPI at MCSR, Part 2: Broadcast/Reduce

Page 2: Collective Message Passing

• Broadcast – Sends a message from one process to all processes in the group
• Scatter – Distributes each element of a data array to a different process for computation
• Gather – The reverse of scatter; retrieves data elements into an array from multiple processes
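
As a concrete illustration of the scatter/gather pattern just described, here is a minimal sketch (not part of the workshop files): rank 0 scatters one element of an array to each process, every process squares its element, and the results are gathered back on rank 0. The array sizes and values here are assumptions chosen only for illustration.

#include <stdio.h>
#include <mpi.h>

#define NPROC_MAX 64                       /* assumed upper bound on process count */

int main(int argc, char *argv[])
{
    int myid, numprocs, i;
    int sendbuf[NPROC_MAX], recvbuf[NPROC_MAX], myval;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &myid);
    MPI_Comm_size(MPI_COMM_WORLD, &numprocs);

    if (myid == 0)                         /* root prepares one value per process */
        for (i = 0; i < numprocs; i++)
            sendbuf[i] = i + 1;

    /* Scatter: each process receives one element of sendbuf from rank 0 */
    MPI_Scatter(sendbuf, 1, MPI_INT, &myval, 1, MPI_INT, 0, MPI_COMM_WORLD);

    myval = myval * myval;                 /* each process computes on its element */

    /* Gather: rank 0 collects the results back into recvbuf */
    MPI_Gather(&myval, 1, MPI_INT, recvbuf, 1, MPI_INT, 0, MPI_COMM_WORLD);

    if (myid == 0)
        for (i = 0; i < numprocs; i++)
            printf("recvbuf[%d] = %d\n", i, recvbuf[i]);

    MPI_Finalize();
    return 0;
}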

Page 3: Collective Message Passing w/MPI

MPI_Bcast() – Broadcasts a message from the root to all other processes
MPI_Gather() – Gathers values from a group of processes
MPI_Scatter() – Scatters a buffer in parts to a group of processes
MPI_Alltoall() – Sends data from all processes to all processes
MPI_Reduce() – Combines values from all processes into a single value
MPI_Reduce_scatter() – Combines values from all processes and scatters the results back to the processes
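
For reference, these are the classic C prototypes of the collectives listed above (MPI-1 style, without the const qualifiers added later in MPI-3); consult the man pages on mimosa for the exact local versions.

int MPI_Bcast(void *buffer, int count, MPI_Datatype datatype, int root, MPI_Comm comm);
int MPI_Gather(void *sendbuf, int sendcount, MPI_Datatype sendtype,
               void *recvbuf, int recvcount, MPI_Datatype recvtype,
               int root, MPI_Comm comm);
int MPI_Scatter(void *sendbuf, int sendcount, MPI_Datatype sendtype,
                void *recvbuf, int recvcount, MPI_Datatype recvtype,
                int root, MPI_Comm comm);
int MPI_Alltoall(void *sendbuf, int sendcount, MPI_Datatype sendtype,
                 void *recvbuf, int recvcount, MPI_Datatype recvtype,
                 MPI_Comm comm);
int MPI_Reduce(void *sendbuf, void *recvbuf, int count, MPI_Datatype datatype,
               MPI_Op op, int root, MPI_Comm comm);
int MPI_Reduce_scatter(void *sendbuf, void *recvbuf, int *recvcounts,
                       MPI_Datatype datatype, MPI_Op op, MPI_Comm comm);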

Page 4: Log in to mimosa & get workshop files

A. Use secure shell to log in to mimosa using your assigned training account:

ssh [email protected]

See lab instructor for password.

B. Copy workshop files into your home directory by running: /usr/local/apps/ppro/prepare_mpi_workshop

Pages 5-10: Examine, compile, and execute add_mpi.c
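
The transcript does not reproduce the source listing from these slides. Below is a minimal sketch of the kind of program add_mpi.c is presented as in this workshop, assuming the usual pattern: rank 0 fills an array of MAXSIZE values, MPI_Bcast() sends the array to every process, each process sums its own block, and MPI_Reduce() combines the partial sums on rank 0. The value of MAXSIZE, the data values, and any timing code are assumptions, not the actual workshop file.

#include <stdio.h>
#include <mpi.h>

#define MAXSIZE 1000000                      /* assumed array size; workshop code may differ */

int main(int argc, char *argv[])
{
    static long data[MAXSIZE];               /* static to avoid a large stack frame */
    long mysum = 0, total = 0;
    int myid, numprocs, chunk, low, high, i;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &myid);
    MPI_Comm_size(MPI_COMM_WORLD, &numprocs);

    if (myid == 0)                           /* root initializes the data */
        for (i = 0; i < MAXSIZE; i++)
            data[i] = 1;

    /* Broadcast the whole array from rank 0 to every other rank */
    MPI_Bcast(data, MAXSIZE, MPI_LONG, 0, MPI_COMM_WORLD);

    /* Each rank adds up its own block (assumes MAXSIZE is divisible by numprocs) */
    chunk = MAXSIZE / numprocs;
    low = myid * chunk;
    high = low + chunk;
    for (i = low; i < high; i++)
        mysum += data[i];

    /* Combine the partial sums into one total on rank 0 */
    MPI_Reduce(&mysum, &total, 1, MPI_LONG, MPI_SUM, 0, MPI_COMM_WORLD);

    if (myid == 0)
        printf("The sum is %ld\n", total);

    MPI_Finalize();
    return 0;
}

With an MPI wrapper compiler this would typically be built and run with something like mpicc add_mpi.c -o add_mpi followed by mpirun -np 4 ./add_mpi; the exact commands shown on mimosa may differ.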

Page 11: Examine add_mpi.pbs
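
The script itself is not reproduced in the transcript. A plausible minimal PBS script for this job might look like the following; the resource line, walltime, and launcher (mpirun versus a local wrapper) are assumptions and would need to match mimosa's actual configuration.

#!/bin/bash
#PBS -N add_mpi
#PBS -l nodes=2:ppn=2
#PBS -l walltime=00:10:00
cd $PBS_O_WORKDIR
mpirun -np 4 ./add_mpi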

Page 12: Submit PBS Script: add_mpi.pbs
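
With standard PBS, qsub add_mpi.pbs submits the job and returns a job id, and qstat shows the job's state while it is queued or running; the slides may show additional options specific to mimosa.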

Page 13: Examine Output and Errors from add_mpi.c

Page 14: Determine Speedup
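
Speedup compares the serial and parallel run times: S(n) = T(1) / T(n), where T(1) is the wall-clock time on one process and T(n) the time on n processes. As a purely hypothetical example, if the one-process run of add_mpi took 8.0 seconds and the four-process run took 2.5 seconds, the speedup would be S(4) = 8.0 / 2.5 = 3.2.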

Page 15: Determine Parallel Efficiency
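
Parallel efficiency normalizes speedup by the number of processes: E(n) = S(n) / n = T(1) / (n * T(n)). Continuing the hypothetical numbers above, E(4) = 3.2 / 4 = 0.8, i.e. the four processes are used at 80% efficiency.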

Page 16: How Could Speedup/Efficiency Improve?

Page 17: What Happens to Results When MAXSIZE Is Not Evenly Divisible by n?
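
Assuming the block decomposition sketched earlier (each rank sums MAXSIZE/numprocs consecutive elements), C's integer division silently drops the remainder, so the trailing elements are never added and the reported total is too small:

chunk = MAXSIZE / numprocs;        /* e.g. 1000 / 3 = 333 (integer division)        */
/* Ranks 0..numprocs-1 together cover indices 0 .. numprocs*chunk - 1 = 998,        */
/* so the last MAXSIZE % numprocs elements (here 1) are skipped and the             */
/* MPI_Reduce() result is smaller than the true sum.                                */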

Page 18: Exercise 1: Change Code to Work When MAXSIZE Is Not Evenly Divisible by n

Page 19: Exercise 2: Change Code to Improve Speedup