We can adapt our example above to support the I/O programming style that best suits our application. Essentially, there are three dimensions along which to choose an appropriate data-access routine for a particular task: the type of file pointer, collective or noncollective coordination, and blocking or nonblocking behavior.
We need to choose which file pointer type to use: explicit, individual, or shared. In the example above, we used an explicit pointer, passing it directly as the offset parameter to the MPI_File_write_at and MPI_File_read_at routines. Using an explicit pointer is equivalent to calling MPI_File_seek to set the individual file pointer to offset and then calling MPI_File_write or MPI_File_read, which is directly analogous to calling UNIX lseek() followed by write() or read(). If each process accesses the file sequentially, individual file pointers save you the effort of recalculating the offset for each data access. We would use a shared file pointer in situations where all the processes need to access a file cooperatively and sequentially, for example, when writing log files. The sketch below contrasts these approaches.
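To make the comparison concrete, here is a minimal sketch, not taken verbatim from the example above, in which each process writes its own block of integers with an explicit offset and then rereads that block through its individual file pointer. The file name "datafile", the block size COUNT, and the omission of error checking are assumptions made purely for illustration.

#include <mpi.h>

#define COUNT 1024

int main(int argc, char *argv[])
{
    int i, rank, buf[COUNT];
    MPI_File fh;
    MPI_Status status;
    MPI_Offset offset;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    for (i = 0; i < COUNT; i++)
        buf[i] = rank;

    MPI_File_open(MPI_COMM_WORLD, "datafile",
                  MPI_MODE_CREATE | MPI_MODE_RDWR, MPI_INFO_NULL, &fh);

    offset = (MPI_Offset)rank * COUNT * sizeof(int);

    /* Explicit offset: the offset is passed directly and no file pointer
       is consulted or updated. */
    MPI_File_write_at(fh, offset, buf, COUNT, MPI_INT, &status);

    /* Individual file pointer: seek once, then access; a subsequent
       MPI_File_read would continue from where this one left off. */
    MPI_File_seek(fh, offset, MPI_SEEK_SET);
    MPI_File_read(fh, buf, COUNT, MPI_INT, &status);

    /* With a shared file pointer, the processes would instead append in
       sequence, e.g. MPI_File_write_shared(fh, buf, COUNT, MPI_INT,
       &status), as one might when writing a log file. */

    MPI_File_close(&fh);
    MPI_Finalize();
    return 0;
}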
Collective data-access routines allow the user to impose some implicit coordination among the processes in a parallel job when they make data accesses. For example, if a parallel job alternately reads in a matrix and performs computation on it, but cannot progress to the next stage of computation until all processes have completed the previous one, then a coordinated effort among the processes when accessing data might be more efficient. In the example above, we could simply append the suffix _all to MPI_File_write_at and MPI_File_read_at to make the accesses collective, as shown below. By coordinating the processes, the MPI library or the file system can buffer or cache the next matrix more efficiently. In contrast, noncollective accesses are used when it is not evident that any benefit would be gained by coordinating the disparate accesses of each process. UNIX file accesses are noncollective.
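As a sketch of the collective variants, the calls below take the same arguments as their noncollective counterparts; the only change is that every process in the communicator that opened the file must participate in each call. The variables are assumed to be set up as in the sketch above.

    /* Collective counterparts of the earlier calls: same arguments, but
       all processes that opened the file call them together, giving the
       MPI library a chance to coordinate buffering across processes. */
    MPI_File_write_at_all(fh, offset, buf, COUNT, MPI_INT, &status);
    MPI_File_read_at_all(fh, offset, buf, COUNT, MPI_INT, &status);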