Collective communication routines are blocking routines that involve all processes in a communicator. Collective communication includes broadcasts and scatters, reductions and gathers, all-gathers and all-to-alls, scans, and a synchronizing barrier call.
Table 2-1 Collective Communication Routines

| Routine | Description |
|---------|-------------|
| MPI_Bcast | Broadcasts from one process to all others in a communicator. |
| MPI_Scatter | Scatters from one process to all others in a communicator. |
| MPI_Reduce | Reduces from all to one in a communicator. |
| MPI_Allreduce | Reduces, then broadcasts the result to all nodes in a communicator. |
| MPI_Reduce_scatter | Scatters a vector that contains results across the nodes in a communicator. |
| MPI_Gather | Gathers from all to one in a communicator. |
| MPI_Allgather | Gathers, then broadcasts the results of the gather in a communicator. |
| MPI_Alltoall | Performs a set of gathers in which each process receives a specific result in a communicator. |
| MPI_Scan | Scans (parallel prefix) across processes in a communicator. |
| MPI_Barrier | Synchronizes processes in a communicator (no data is transmitted). |
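For orientation, here is a minimal sketch using two of the routines in Table 2-1, MPI_Bcast and MPI_Reduce. The particular values broadcast and summed are illustrative only.

```c
/* Minimal sketch: MPI_Bcast and MPI_Reduce.
 * Compile with an MPI C compiler wrapper, e.g. mpicc. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    int rank, value, sum;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* The root (rank 0) chooses a value; MPI_Bcast copies it to all ranks. */
    value = (rank == 0) ? 42 : 0;
    MPI_Bcast(&value, 1, MPI_INT, 0, MPI_COMM_WORLD);

    /* Every rank contributes its rank number; MPI_Reduce sums the
     * contributions and leaves the result on rank 0 only. */
    MPI_Reduce(&rank, &sum, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("broadcast value = %d, sum of ranks = %d\n", value, sum);

    MPI_Finalize();
    return 0;
}
```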
Many of the collective communication calls have alternative vector forms (such as MPI_Scatterv and MPI_Gatherv), which allow different amounts of data to be sent to or received from different processes.
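Below is a sketch of one vector form, MPI_Scatterv, in which the root sends a different number of elements to each process. The counts used here (rank i receives i + 1 integers) are arbitrary choices for illustration.

```c
/* Sketch of a vector-form collective: MPI_Scatterv. */
#include <mpi.h>
#include <stdlib.h>

int main(int argc, char *argv[])
{
    int rank, size;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    int *sendbuf = NULL, *counts = NULL, *displs = NULL;
    if (rank == 0) {
        counts = malloc(size * sizeof(int));
        displs = malloc(size * sizeof(int));
        int total = 0;
        for (int i = 0; i < size; i++) {
            counts[i] = i + 1;   /* rank i gets i + 1 elements */
            displs[i] = total;   /* offset into sendbuf        */
            total += counts[i];
        }
        sendbuf = malloc(total * sizeof(int));
        for (int i = 0; i < total; i++)
            sendbuf[i] = i;
    }

    int recvcount = rank + 1;
    int *recvbuf = malloc(recvcount * sizeof(int));

    /* The counts and displs arrays are significant only at the root. */
    MPI_Scatterv(sendbuf, counts, displs, MPI_INT,
                 recvbuf, recvcount, MPI_INT, 0, MPI_COMM_WORLD);

    free(recvbuf);
    if (rank == 0) { free(sendbuf); free(counts); free(displs); }
    MPI_Finalize();
    return 0;
}
```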
The syntax and semantics of these routines are generally consistent with the point-to-point routines (upon which they are built), but two restrictions keep them simpler:

- The amount of data sent must exactly match the amount of data specified by the receiver (illustrated in the sketch after this list).
- There is only one communication mode, analogous to the standard mode of the point-to-point routines.
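The first restriction can be seen in a call such as MPI_Gather: the recvcount argument at the root specifies the amount received from each process, and it must equal the sendcount that each process uses. A minimal sketch, with an arbitrary count of four integers per process:

```c
/* Sketch of the matching-count rule in MPI_Gather: unlike a
 * point-to-point receive, the root may not simply specify a larger
 * buffer length than what each sender contributes. */
#include <mpi.h>
#include <stdlib.h>

#define COUNT 4

int main(int argc, char *argv[])
{
    int rank, size;
    int sendbuf[COUNT];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    for (int i = 0; i < COUNT; i++)
        sendbuf[i] = rank * COUNT + i;

    int *recvbuf = NULL;
    if (rank == 0)
        recvbuf = malloc(size * COUNT * sizeof(int));

    /* recvcount is the amount received from EACH process, and it
     * must equal the sendcount used by every process. */
    MPI_Gather(sendbuf, COUNT, MPI_INT,
               recvbuf, COUNT, MPI_INT, 0, MPI_COMM_WORLD);

    if (rank == 0)
        free(recvbuf);
    MPI_Finalize();
    return 0;
}
```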