Transcript of Lecture on Scientific Computing - TU Berlin, VL Scientific Computing WS 2014/2015, Dr. K. Schmidt

Page 1:

Lecture on

Scientific Computing

Dr. Kersten Schmidt

Lecture 21

Technische Universität Berlin, Institut für Mathematik

Wintersemester 2014/2015

Page 2:

Syllabus

- Linear Regression, Fast Fourier transform
- Modelling by partial differential equations (PDEs)
  - Maxwell, Helmholtz, Poisson, linear elasticity, Navier-Stokes equations
  - boundary value problems, eigenvalue problems
  - boundary conditions (Dirichlet, Neumann, Robin)
  - handling of infinite domains (wave-guide, homogeneous exterior: DtN, PML)
  - boundary integral equations
- Computer-aided design (CAD)
- Mesh generators
- Space discretisation of PDEs
  - Finite difference method
  - Finite element method
  - Discontinuous Galerkin finite element method
- Solvers
  - Linear solvers (direct, iterative), preconditioners
  - Nonlinear solvers (Newton-Raphson iteration)
  - Eigenvalue solvers
- Parallelisation
  - Computer hardware (SIMD, MIMD: shared/distributed memory)
  - Programming in parallel: OpenMP, MPI

Page 3:

Distributed-memory parallel programming

Generic distributed memory computer

Clusters with partly distributed memory and shared memory

A process is an instance of a program that is executing more or less autonomously on a physical processor.

Page 4:

Distributed-memory parallel programming

Message passing

Communication on parallel computers with distributed memory (multicomputers) is most commonly done by message passing.

Processes coordinate their activities by explicitly sending and receiving messages.

Assume that (as in MPI) processes are statically allocated. That is, the number of processes is set at the beginning of the program execution, and no further processes are created during execution.

There is usually one process executing on one processor.

Each process is assigned a unique integer rank in the range 0, 1, . . . , p − 1, where p is the number of processes.

Page 5:

Distributed-memory parallel programming

The Message Passing Interface: MPI

Like OpenMP for shared-memory programming, MPI is an application programmer interface for message passing.

MPI extends programming languages (like C/C++ or Fortran) with a library of functions for point-to-point and collective communication and additional functions for managing the processes participating in the computation and for querying their status.

MPI has become a de facto standard for message passing on multicomputers.

Standardization by the MPI forum (http://www.mpi-forum.org)

Implementations:

I MPICH: http://www.mpich.org

I Open MPI: http://www.open-mpi.org

On clusters with many processors,

Page 6:

Distributed-memory parallel programming

First MPI demo program in C: mpi1st.c

#include <stdio.h>
#include <mpi.h>

int main(int argc, char* argv[]) {

  int rank;   /* rank of process     */
  int p;      /* number of processes */

  MPI_Init(&argc, &argv);                /* Start up MPI                 */
  MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* Find out proc rank           */
  MPI_Comm_size(MPI_COMM_WORLD, &p);     /* Find out number of processes */

  printf("Proc %d from %d is ready.\n", rank, p);

  MPI_Finalize();                        /* Shut down MPI                */
}

Calling

% mpicc mpi1st.c -o mpi1st
% mpirun -np 4 mpi1st

Proc 1 from 4 is ready.
Proc 2 from 4 is ready.
Proc 3 from 4 is ready.
Proc 0 from 4 is ready.

Page 7:

Distributed-memory parallel programming

Simple send and receive commands

A typical usage of sending and receiving is given by the following example, where process 0 sends a single float x to process 1.

Process 0 executes

MPI_Send(&x, 1, MPI_FLOAT, 1, 0, MPI_COMM_WORLD);

while process 1 executes

MPI_Recv(&x, 1, MPI_FLOAT, 0, 0, MPI_COMM_WORLD, &status);

Process 0 and process 1 execute different statements. However, the single-program multiple-data (SPMD) programming model permits individual processes to execute different statements by means of conditional branches.

float x = 0;

if (rank == 0) {
  x = 1;  /* e.g. read from an input file */
  MPI_Send(&x, 1, MPI_FLOAT, 1, 0, MPI_COMM_WORLD);
} else if (rank == 1)
  MPI_Recv(&x, 1, MPI_FLOAT, 0, 0, MPI_COMM_WORLD, &status);

Page 8:

Distributed-memory parallel programming

Simple send and receive commands

int MPI_Send(void*        buffer       /* in */,
             int          count        /* in */,
             MPI_Datatype datatype     /* in */,
             int          destination  /* in */,
             int          tag          /* in */,
             MPI_Comm     communicator /* in */)

int MPI_Recv(void*        buffer       /* out */,
             int          count        /* in */,
             MPI_Datatype datatype     /* in */,
             int          source       /* in */,
             int          tag          /* in */,
             MPI_Comm     communicator /* in */,
             MPI_Status*  status       /* out */)

The communicators of MPI_Send and MPI_Recv have to match. The communicator indicates a collection of processes that can send messages to each other. The predefined communicator MPI_COMM_WORLD denotes the set of all processes that participate in the computation.

Communicators are an important tool when writing library routines. By defining a communicator that is known only to these routines, messages issued by these routines cannot get mixed up with messages sent in other parts of the program (even if tags are identical).
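For instance, a library can duplicate the world communicator once at initialisation and use the duplicate for all of its internal messages (a minimal sketch; lib_comm is a hypothetical name):

MPI_Comm lib_comm;                        /* communicator private to the library    */
MPI_Comm_dup(MPI_COMM_WORLD, &lib_comm);  /* same processes, separate message space */
/* ... all library-internal MPI_Send/MPI_Recv calls then pass lib_comm ... */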

Page 9:

Distributed-memory parallel programming

Simple send and receive commands

The tag in the above example is 0. The tag of the send message must match the expected tag of the receive call. The tag is (can be) used to avoid confusion if several messages are communicated between sender and receiver, e.g., in an iteration.

The receiver can use wildcards.
To receive from any source: use MPI_ANY_SOURCE.
To receive with any tag: use MPI_ANY_TAG.

The status of MPI_Recv returns information on the data that was actually received. status is a (pointer to a) C structure with (at least) three members:

status->MPI_SOURCE

status->MPI_TAG

If, e.g., tag or source have been set to a wildcard, then status->MPI_SOURCE and status->MPI_TAG return the actual values of these parameters.
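A short sketch (hypothetical variable names) of a wildcard receive that inspects the status afterwards:

int        value;
MPI_Status status;

MPI_Recv(&value, 1, MPI_INT, MPI_ANY_SOURCE, MPI_ANY_TAG,
         MPI_COMM_WORLD, &status);
printf("received %d from rank %d with tag %d\n",
       value, status.MPI_SOURCE, status.MPI_TAG);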

Page 10:

Distributed-memory parallel programming

Another demo program in C: greetings.c

#include <stdio.h>
#include <string.h>
#include <mpi.h>

int main(int argc, char* argv[]) {

  int        rank;          /* rank of process            */
  int        p;             /* number of processes        */
  char       message[100];  /* storage for the message    */
  MPI_Status status;        /* return status for receiver */

  MPI_Init(&argc, &argv);
  MPI_Comm_rank(MPI_COMM_WORLD, &rank);
  MPI_Comm_size(MPI_COMM_WORLD, &p);

  if (rank != 0) {          /* every process except 0 sends a greeting to process 0 */
    sprintf(message, "Greetings from process %d!", rank);
    MPI_Send(message, strlen(message) + 1, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
  } else {                  /* process 0 receives and prints the greetings */
    for (int source = 1; source < p; ++source) {
      MPI_Recv(message, 100, MPI_CHAR, source, 0, MPI_COMM_WORLD, &status);
      printf("%s\n", message);
    }
  }

  MPI_Finalize();
} /* main */
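Compiled and run like the first example, e.g. with 4 processes, the output would presumably be (the order is deterministic here, since process 0 receives from the sources in increasing order):

% mpicc greetings.c -o greetings
% mpirun -np 4 greetings

Greetings from process 1!
Greetings from process 2!
Greetings from process 3!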

Page 11:

Distributed-memory parallel programming

Communication modes

What happens if a send or receive command is issued?

Standard communication: MPI_Send, MPI_Recv
- Sending and receiving is asynchronous, blocking communication.
- MPI_Recv can be called before the associated MPI_Send is called.
- MPI_Recv blocks until the message has been received completely; it blocks forever if the message is never sent.
- MPI_Send can also be called before the associated MPI_Recv is called.
- MPI_Send blocks until the message has been copied out of memory (to wherever).
- The message might be copied directly into the matching receive buffer, or it might be copied into a temporary system buffer.

MPI offers several communication modes that allow one to control the choice of the communication protocol.

Page 12:

Distributed-memory parallel programming

Communication modes

What happens if a send or receive command is issued?

Synchronous communication (avoids buffering): MPI_Ssend (the matching receive is a normal MPI_Recv)

- The sender waits until the receiver is ready.

- Then the system copies from memory to memory.

- Both processes block forever if messages are not awaited or sent: deadlock, e.g., if two processes each issue a synchronous send to the other before posting a receive.
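A minimal sketch (hypothetical variables a and b) of such a deadlock, and the usual fix of reordering the calls on one rank:

float a = 1.0, b = 0.0;
MPI_Status status;

/* Deadlock: both ranks block in MPI_Ssend; neither ever reaches MPI_Recv. */
if (rank == 0) {
  MPI_Ssend(&a, 1, MPI_FLOAT, 1, 0, MPI_COMM_WORLD);
  MPI_Recv (&b, 1, MPI_FLOAT, 1, 0, MPI_COMM_WORLD, &status);
} else if (rank == 1) {
  MPI_Ssend(&a, 1, MPI_FLOAT, 0, 0, MPI_COMM_WORLD);
  MPI_Recv (&b, 1, MPI_FLOAT, 0, 0, MPI_COMM_WORLD, &status);
}

/* No deadlock: rank 1 receives first, so both sends find a matching receive. */
if (rank == 0) {
  MPI_Ssend(&a, 1, MPI_FLOAT, 1, 0, MPI_COMM_WORLD);
  MPI_Recv (&b, 1, MPI_FLOAT, 1, 0, MPI_COMM_WORLD, &status);
} else if (rank == 1) {
  MPI_Recv (&b, 1, MPI_FLOAT, 0, 0, MPI_COMM_WORLD, &status);
  MPI_Ssend(&a, 1, MPI_FLOAT, 0, 0, MPI_COMM_WORLD);
}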

Page 13:

Distributed-memory parallel programming

Communication modes

Non-blocking communication: MPI_Isend, MPI_Irecv
- Messages are copied into a system buffer on the sender or receiver side (or both).
- MPI_Irecv returns immediately, even if nothing has been received yet.

  int MPI_Irecv(...,
                MPI_Comm     communicator /* in */,
                MPI_Request* request      /* out */)

- Additional output parameter request.
- To check whether a message actually has been received (i.e. has been copied into memory):

  int MPI_Test(MPI_Request* request /* in */,
               int*         flag    /* out */,
               MPI_Status*  status  /* out */)

- Instead of waiting to receive data from other processes, other calculations can be performed in the meantime.
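A sketch (do_some_other_work() is a hypothetical routine) of polling with MPI_Test while useful work is done:

MPI_Request request;
MPI_Status  status;
int         flag = 0;
float       x;

MPI_Irecv(&x, 1, MPI_FLOAT, 0, 0, MPI_COMM_WORLD, &request);

while (!flag) {
  do_some_other_work();                /* computation that does not need x      */
  MPI_Test(&request, &flag, &status);  /* flag becomes true once x has arrived  */
}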

Page 14:

Distributed-memory parallel programming

Communication modes

Non-blocking communication: MPI_Isend, MPI_Irecv

- Messages are copied into a system buffer on the sender or receiver side (or both).

- MPI_Irecv returns immediately, even if nothing has been received yet.

  int MPI_Irecv(...,
                MPI_Comm     communicator /* in */,
                MPI_Request* request      /* out */)

- Additional output parameter request.

- If one finally wants to wait until the message has been received, i.e., turn the call into a blocking one, use

  int MPI_Wait(MPI_Request* request /* in */,
               MPI_Status*  status  /* out */)
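A minimal sketch (hypothetical do_local_work() routine) of overlapping computation with a non-blocking receive that is finally completed by MPI_Wait:

MPI_Request request;
MPI_Status  status;
float       x;

MPI_Irecv(&x, 1, MPI_FLOAT, 0, 0, MPI_COMM_WORLD, &request);

do_local_work();              /* computation that does not need x yet      */

MPI_Wait(&request, &status);  /* block here until x has actually arrived   */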

Page 15:

Distributed-memory parallel programming

Collective modes

Let's assume that process 0 reads some input data that it needs to make available to all other processes in the group. We know how process 0 could proceed:

for (dest = 1; dest < p; ++dest)
  MPI_Send(data, size, MPI_INT, dest, tag, MPI_COMM_WORLD);

In this approach p − 1 messages are sent, all with the same sender.

We know that there are more elegant (and in general more efficient) algorithms to do the above: a tree-structured algorithm (remember the hypercube).

Page 16:

Distributed-memory parallel programming

Broadcast implements the tree-structured algorithm

int MPI_Bcast(void*        message      /* in/out */,
              int          count        /* in */,
              MPI_Datatype datatype     /* in */,
              int          root         /* in */,
              MPI_Comm     communicator /* in */)

Message data is sent from the source process (root) to all other processes.

So, data is input data in the source process and output data otherwise.

Remark: There is no tag, as broadcasts have been used historically for synchronization.
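For the scenario on the previous slide, a sketch of the single collective call that replaces the loop of sends (using the hypothetical names data and size from that example) is:

MPI_Bcast(data, size, MPI_INT, 0, MPI_COMM_WORLD);  /* called by every process */

Process 0 provides the contents of data; on all other processes data is overwritten with the broadcast values.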

Page 17:

Distributed-memory parallel programming

Reduction: like OpenMP, MPI provides a reduction function (that uses a tree-structured algorithm)

int MPI_Reduce(void*        operand      /* in */,
               void*        result       /* out */,
               int          count        /* in */,
               MPI_Datatype datatype     /* in */,
               MPI_Op       operator     /* in */,
               int          root         /* in */,
               MPI_Comm     communicator /* in */)

- combines the operands and stores the result in *result on process root.

- Both operand and result refer to count memory locations with data type datatype.

- MPI_Reduce must be called by all processes in the communicator, and count, datatype, operator, and root must be the same in each invocation.

Page 18:

Distributed-memory parallel programming

Operations for MPI_Reduce

Operation name   Meaning
MPI_MAX          Maximum
MPI_MIN          Minimum
MPI_SUM          Sum
MPI_PROD         Product
MPI_LAND         Logical and
MPI_BAND         Bitwise and
MPI_LOR          Logical or
MPI_BOR          Bitwise or
MPI_LXOR         Logical exclusive or
MPI_BXOR         Bitwise exclusive or
MPI_MAXLOC       Maximum and location of maximum
MPI_MINLOC       Minimum and location of minimum

Page 19:

Distributed-memory parallel programming

Example: dot product

Serial version

float Serial_dot(
    float x[] /* in */,
    float y[] /* in */,
    int   n   /* in */) {

  float sum = 0.0;
  int   i;

  for (i = 0; i < n; ++i)
    sum = sum + x[i] * y[i];

  return sum;
}

Parallel version

float sum = 0.0;
float local_sum = Serial_dot(local_x, local_y, local_n);

MPI_Reduce(&local_sum, &sum, 1, MPI_FLOAT, MPI_SUM, 0, MPI_COMM_WORLD);

Use MPI_Allreduce instead of MPI_Reduce and all processes get the result (a combination of MPI_Reduce and MPI_Bcast). There is no parameter root for MPI_Allreduce.
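A sketch of the corresponding all-reduce call, with the same arguments minus the root:

MPI_Allreduce(&local_sum, &sum, 1, MPI_FLOAT, MPI_SUM, MPI_COMM_WORLD);  /* every process obtains sum */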

Page 20:

Distributed-memory parallel programming

Matrix-vector multiplication

Let's consider what ingredients we need to do a matrix-vector multiplication, $\vec{y} = A\vec{x}$. For simplicity we assume that A is an n × n square matrix. Then,

$$y_k = \sum_{i=0}^{n-1} a_{ki}\, x_i, \qquad 0 \le k < n.$$

In OpenMP we could parallelize this by

#pragma omp parallel for
for (int k = 0; k < n; ++k) {
  y[k] = 0.0;
  for (int i = 0; i < n; ++i) {
    y[k] = y[k] + a[k][i] * x[i];
  }
}

Page 21:

Distributed-memory parallel programming

As the outer-most loop is parallelized and we access the matrix row-wise, we can visualise the matrix-vector product as follows.

This is not quite correct, as all processes access all of $\vec{x}$.

Page 22:

Distributed-memory parallel programming

How can we do this in a distributed-memory environment?

Let us assume that the matrix A and the vectors $\vec{x}$, $\vec{y}$ are distributed in the block-wise fashion as displayed on the previous page. Let

$$\vec{x}_k \in \mathbb{R}^m, \quad \vec{y}_k \in \mathbb{R}^m, \quad A_k \in \mathbb{R}^{m \times n}, \quad m = \frac{n}{p}, \quad 0 \le k < p,$$

be the portions of $\vec{x}$, $\vec{y}$, and A, respectively, stored in process k (usually on processor k). Then,

$$\vec{y}_k = A_k \vec{x}.$$

Thus, each element of the vector $\vec{y}$ is the result of the inner product of a row of A with the vector $\vec{x}$.

In order to form the inner product of each row of A with $\vec{x}$ we either have to gather all of $\vec{x}$ onto each process or we have to scatter each (block-)row of A across the processes.

In our previous OpenMP code the former has been done. If we had parallelized the inner loop, then the latter approach would have been taken.

Page 23:

Distributed-memory parallel programming

Gather (German: sammeln) a vector: take parts from all processes and send them to one.

Page 24:

Distributed-memory parallel programming

In MPI, gathering the vector $\vec{x}$ on process 0 can be done by the call

/* Space allocated in calling program */
float local_x[];   /* Local storage for x  */
float global_x[];  /* Storage for all of x */

MPI_Gather(local_x,  n/p, MPI_FLOAT,
           global_x, n/p, MPI_FLOAT,
           0, MPI_COMM_WORLD);

The syntax is

int MPI_Gather(void*        send_data  /* in */,
               int          send_count /* in */,
               MPI_Datatype send_type  /* in */,
               void*        recv_data  /* out */,
               int          recv_count /* in */,
               MPI_Datatype recv_type  /* in */,
               int          dest       /* in */,
               MPI_Comm     comm       /* in */)

MPI_Gather: collecting pieces of a distributed vector on a single processor.
MPI_Allgather: collecting pieces of a distributed vector on all processors.
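Putting the pieces together, a sketch (hypothetical array names; row-wise block distribution; n divisible by p) of the complete matrix-vector product using MPI_Allgather could look like this:

/* Each process owns local_n = n/p rows of A (local_A, stored row-major),
   the corresponding block local_x of x, and computes its block local_y of y. */
MPI_Allgather(local_x,  local_n, MPI_FLOAT,
              global_x, local_n, MPI_FLOAT, MPI_COMM_WORLD);

for (int k = 0; k < local_n; ++k) {
  local_y[k] = 0.0;
  for (int i = 0; i < n; ++i)
    local_y[k] += local_A[k * n + i] * global_x[i];
}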

Page 25:

Distributed-memory parallel programming

The alternative to gathering the vector $\vec{x}$ is to scatter the matrix A.

Page 26:

Distributed-memory parallel programming

The syntax of MPI_Scatter is

int MPI_Scatter(void*        send_data  /* in */,
                int          send_count /* in */,
                MPI_Datatype send_type  /* in */,
                void*        recv_data  /* out */,
                int          recv_count /* in */,
                MPI_Datatype recv_type  /* in */,
                int          origin     /* in */,
                MPI_Comm     comm       /* in */)

MPI_Scatter splits the data referenced by send_data on the process with rank origin into p segments, each of which consists of send_count elements of type send_type. The first segment is sent to process 0, the second to process 1, etc.
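As a generic usage example (not the column-wise matrix distribution discussed next, which would require derived datatypes), a sketch of scattering blocks of a vector held on process 0 (hypothetical names; n divisible by p):

MPI_Scatter(global_x, n/p, MPI_FLOAT,
            local_x,  n/p, MPI_FLOAT,
            0, MPI_COMM_WORLD);   /* process k receives elements k*n/p ... (k+1)*n/p - 1 */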

Page 27:

Distributed-memory parallel programming

If A is stored block-wise by columns, we have the following situation. Here,

$$\vec{x}_k \in \mathbb{R}^m, \quad \vec{y}_k \in \mathbb{R}^m, \quad A_k \in \mathbb{R}^{n \times m}.$$

Page 28:

Distributed-memory parallel programming

Formally, we have to do the following:

1. Compute (locally) $\vec{y}^{\,k} = A_k \vec{x}_k$, $0 \le k < p$.

2. Reduce $\vec{y} = \sum_{k=0}^{p-1} \vec{y}^{\,k}$ with root process 0 (for example).

3. Scatter $\vec{y}$ onto the p processes.

The last two steps can be combined by the call MPI_Reduce_scatter.
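A sketch (hypothetical names; each process holds its full-length partial result local_y; n divisible by p) of combining steps 2 and 3 in one call:

int recvcounts[MAX_PROCS];         /* MAX_PROCS: hypothetical upper bound on p   */
for (int k = 0; k < p; ++k)
  recvcounts[k] = n / p;           /* every process gets a block of n/p elements */

MPI_Reduce_scatter(local_y, y_block, recvcounts, MPI_FLOAT,
                   MPI_SUM, MPI_COMM_WORLD);   /* sum the local_y's, then scatter */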
