Page 1:

ITCS 4/5145 Parallel Programming, UNC-Charlotte, B. Wilkinson, 2013, QuizQuestions2a.ppt Jan 21, 2013

Quiz Questions

ITCS 4145/5145 Parallel Programming

MPI

Page 2:

What is the name of the default MPI communicator?

a) DEF_MPI_COMM_WORLD

b) It has no name.

c) DEFAULT_COMMUNICATOR

d) COMM_WORLD

e) MPI_COMM_WORLD
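
For reference, a minimal C sketch (assuming a standard MPI installation) showing where the default communicator is named in almost every MPI program; it simply prints each process's rank:

    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char *argv[]) {
        int rank;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* default communicator of all processes */
        printf("Hello from process %d\n", rank);
        MPI_Finalize();
        return 0;
    }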

Page 3:

What does the MPI routine MPI_Comm_rank() do?

a) It compares the supplied process ID with that of the process and returns TRUE or FALSE.

b) It returns an integer that is the number of processes in the specified communicator. The number is returned as an argument.

c) It converts the Linux process ID to a unique integer from zero onwards.

d) It returns an integer that is the rank of the process in the specified communicator. The integer is returned as an argument.

e) It returns the priority number of the process from highest (0) downwards.
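
For reference, a short sketch (inside an initialized MPI program) contrasting MPI_Comm_rank() with MPI_Comm_size(); in both cases the result is an integer returned through the pointer argument:

    int rank, nprocs;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);    /* rank of this process within the communicator */
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);  /* number of processes in the communicator */
    if (rank == 0)
        printf("%d processes\n", nprocs);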

Page 4:

Name one MPI routine that does not have a named communicator as a parameter (argument).

a) MPI_Send()

b) MPI_Bcast()

c) MPI_Init()

d) MPI_Barrier()

e) None - they all have a named communicator as a parameter.

f) none of the other answers
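
For comparison, the C prototypes of the routines listed above (plus MPI_Finalize()); the environment-management routines take no communicator, whereas the communication routines require one:

    int MPI_Init(int *argc, char ***argv);     /* no communicator parameter */
    int MPI_Finalize(void);                    /* no communicator parameter */
    int MPI_Send(void *buf, int count, MPI_Datatype datatype,
                 int dest, int tag, MPI_Comm comm);
    int MPI_Bcast(void *buffer, int count, MPI_Datatype datatype,
                  int root, MPI_Comm comm);
    int MPI_Barrier(MPI_Comm comm);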

Page 5:

What is the purpose of a message tag in MPI?

a) To provide a mechanism to differentiate between message-passing routines written by different programmers

b) To count the number of characters in a message

c) To indicate the type of message

d) To provide a matching mechanism to differentiate between messages sent from one process to another process
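
For reference, a sketch of tag matching inside an initialized MPI program, assuming two processes and an arbitrarily chosen tag of 10; the receive completes only for a message whose tag matches (or for any tag if MPI_ANY_TAG is given):

    int rank, value = 42;
    MPI_Status status;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    if (rank == 0)
        MPI_Send(&value, 1, MPI_INT, 1, 10, MPI_COMM_WORLD);          /* tag 10 */
    else if (rank == 1)
        MPI_Recv(&value, 1, MPI_INT, 0, 10, MPI_COMM_WORLD, &status); /* matches tag 10 only */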

Page 6:

When does the MPI routine MPI_Recv() return?

a) After the arrival of the message the routine is waiting for but before the data has been collected.

b) Never

c) Immediately

d) After a time specified in the routine.

e) After the arrival of the message the routine is waiting for and after the data has been collected.
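
As a sketch of the receive semantics (buffer size, source rank, and tag chosen arbitrarily, inside an initialized MPI program):

    char buf[100];
    MPI_Status status;
    MPI_Recv(buf, 100, MPI_CHAR, 0, 10, MPI_COMM_WORLD, &status);
    /* execution reaches this point only after the expected message has
       arrived and its data has been copied into buf */
    printf("%s\n", buf);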

Page 7:

What is meant by a blocking message passing routine in MPI?

a) The routine returns when all the local actions are complete but the message transfer may not have completed.

b) The routine returns immediately but the message transfer may not have completed.

c) The routine returns when the message transfer has completed.

d) The routine blocks all actions on other processes until it has completed its actions.

e) None of the other answers.

Page 8:

What is meant by a non-blocking (or asynchronous) message passing routine in MPI?

a) The routine returns when all the local actions are complete but the message transfer may not have completed.

b) The routine returns immediately but the message transfer may not have completed.

c) The routine returns when the message transfer has completed.

d) The routine blocks all actions on other processes until it has completed its actions.

e) None of the other answers.
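
A sketch contrasting the two forms, assuming a 1000-element array being sent to process 1 from within an initialized MPI program:

    double a[1000];
    MPI_Request req;
    MPI_Status status;

    /* blocking (standard) send: returns when the local actions are complete,
       i.e. a[] may be reused, but the message may not yet have been delivered */
    MPI_Send(a, 1000, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD);

    /* non-blocking send: returns immediately; a[] must not be modified
       until the operation is completed with MPI_Wait() (or MPI_Test()) */
    MPI_Isend(a, 1000, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD, &req);
    /* ... computation can be overlapped here ... */
    MPI_Wait(&req, &status);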

Page 9:

In the routine:

MPI_Send(message, 13, MPI_CHAR, x, 10, MPI_COMM_WORLD);

when can x be altered without affecting the message being transferred?

a) Never.

b) After the routine returns, i.e. in subsequent statements

c) Anytime

d) When the message has been received

e) None of the other answers
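
For context, the declarations this call assumes might look like the following (values chosen arbitrarily): message is the send buffer and x the destination rank.

    char message[13] = "hello, world";
    int x = 1;                  /* destination rank, passed by value to MPI_Send() */
    MPI_Send(message, 13, MPI_CHAR, x, 10, MPI_COMM_WORLD);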

Page 10:

What does the routine MPI_Wtime() do?

a) Waits a specific time before returning as given by an argument.

b) Returns the elapsed time from some point in the past, in seconds.

c) Returns the elapsed time from the beginning of the program execution, in seconds.

d) Returns the time of the process execution.

e) Returns the actual time of day.

f) None of the other answers.
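
A typical timing sketch; do_work() is a placeholder for whatever computation is being measured:

    double start, elapsed;
    start = MPI_Wtime();            /* seconds since some arbitrary point in the past */
    do_work();                      /* placeholder for the code being timed */
    elapsed = MPI_Wtime() - start;  /* only differences between calls are meaningful */
    printf("elapsed time = %f s\n", elapsed);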

Page 11:

Under what circumstance might an MPI_Send() operate as an MPI_Ssend()?

a) If the available message buffer space becomes exhausted.

b) If you specify more than a thousand bytes in the message.

c) If the tags do not match.

d) When the "synch" parameter is set in the parameter list of MPI_Send()

e) Never
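
For comparison, a sketch of the two send variants; data, count, dest, and tag are placeholders for an actual buffer, element count, destination rank, and tag:

    MPI_Send(data, count, MPI_INT, dest, tag, MPI_COMM_WORLD);
    /* standard send: may complete after buffering the message locally */

    MPI_Ssend(data, count, MPI_INT, dest, tag, MPI_COMM_WORLD);
    /* synchronous send: does not complete until the matching receive has started */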

Page 12:

What does the MPI routine MPI_Barrier() do?

a) Waits for all messages to be sent and received.

b) Will cause processes to wait for all processes within the specified communicator to call the routine. Then all processes send a message to the master process and continue.

c) Makes a process execute more slowly to allow debugging.

d) Waits for a specified amount of time.

e) Will cause each process, after calling MPI_Barrier(), to wait until all processes within the specified communicator have called the routine. Then all processes are released and allowed to continue.
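
A sketch of typical barrier use inside an initialized MPI program, assuming each process first finishes some independent work:

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    /* ... each process does its own work ... */
    MPI_Barrier(MPI_COMM_WORLD);   /* no process passes this point until every
                                      process in the communicator has called it */
    if (rank == 0)
        printf("all processes reached the barrier\n");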