Specifies the user's preference regarding use of the spind daemon for coscheduling. Values can be 0 (prefer no use) or 1 (prefer use). This preference may be overridden by the system administrator's policy, which is set in the hpc.conf file and can be 0 (forbid use), 1 (require use), or 2 (no policy). If no policy is set and no user preference is specified, coscheduling is not used.
If no user preference is specified, the value 2 will be shown when environment variables are printed with MPI_PRINTENV.
Limits the number of unexpected messages that can be queued from a particular connection. Once this quantity of unexpected messages has been received, polling the connection for incoming messages stops. The default value, 0, indicates that no limit is set. To limit flow, set the value to some integer greater than zero.
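An "unexpected" message is one that arrives before a matching receive has been posted. As a minimal illustration (standard MPI C, not specific to this implementation), the sketch below pre-posts its receives so that incoming messages match immediately instead of accumulating on the unexpected-message queue that this variable limits:

    /* Sketch: rank 0 pre-posts its receives so arriving messages match a
     * posted receive rather than being queued as "unexpected". */
    #include <mpi.h>
    #include <stdio.h>
    #include <stdlib.h>

    int main(int argc, char **argv)
    {
        int rank, size;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        if (rank == 0) {
            int *vals = malloc((size - 1) * sizeof(int));
            MPI_Request *reqs = malloc((size - 1) * sizeof(MPI_Request));
            /* Post all receives before the senders are released below. */
            for (int i = 1; i < size; i++)
                MPI_Irecv(&vals[i - 1], 1, MPI_INT, i, 0, MPI_COMM_WORLD,
                          &reqs[i - 1]);
            MPI_Barrier(MPI_COMM_WORLD);          /* release the senders */
            MPI_Waitall(size - 1, reqs, MPI_STATUSES_IGNORE);
            printf("rank 0 received %d messages\n", size - 1);
            free(vals);
            free(reqs);
        } else {
            MPI_Barrier(MPI_COMM_WORLD);          /* wait until receives exist */
            MPI_Send(&rank, 1, MPI_INT, 0, 0, MPI_COMM_WORLD);
        }
        MPI_Finalize();
        return 0;
    }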
Ensures that all connections are established during initialization. By default, connections are established lazily. However, you can override this default by setting the environment variable MPI_FULLCONNINIT to 1, forcing full-connection initialization mode. The default value is 0.
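Under the default lazy mode, the first message exchanged with each peer also pays the connection-setup cost. The following sketch (illustrative only; ordinary MPI C) performs a one-time warm-up exchange with every peer so that later, latency-sensitive communication does not include that cost, as an application-level alternative to forcing full-connection initialization:

    /* Sketch: touch every peer once so later timed communication does not
     * include lazy connection-establishment overhead. */
    #include <mpi.h>

    int main(int argc, char **argv)
    {
        int rank, size, token = 0;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        /* Exchange one small message with every other rank. */
        for (int peer = 0; peer < size; peer++) {
            if (peer == rank)
                continue;
            MPI_Sendrecv(&rank, 1, MPI_INT, peer, 99,
                         &token, 1, MPI_INT, peer, 99,
                         MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        }

        /* ... timed or latency-sensitive communication would follow here ... */

        MPI_Finalize();
        return 0;
    }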
The maximum number of Fortran handles for objects other than requests. MPI_MAXFHANDLES specifies the upper limit on the number of concurrently allocated Fortran handles for MPI objects other than requests. This variable is ignored in the default 32-bit library. The default value is 1024. Users should take care to free MPI objects that are no longer in use. There is no limit on handle allocation for C codes.
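The limit applies to Fortran handles only, but the underlying advice, freeing MPI objects that are no longer needed, is the same in both languages. The sketch below (plain MPI C, where no such limit exists) shows the corresponding free calls for a derived datatype and a duplicated communicator; Fortran codes use the same routines:

    /* Sketch: release MPI objects once they are no longer needed so their
     * handles can be reused. */
    #include <mpi.h>

    int main(int argc, char **argv)
    {
        MPI_Datatype vec;
        MPI_Comm dup;

        MPI_Init(&argc, &argv);

        /* Create a derived datatype and a duplicate communicator ... */
        MPI_Type_vector(4, 1, 8, MPI_DOUBLE, &vec);
        MPI_Type_commit(&vec);
        MPI_Comm_dup(MPI_COMM_WORLD, &dup);

        /* ... use them, then free them when done. */
        MPI_Type_free(&vec);
        MPI_Comm_free(&dup);

        MPI_Finalize();
        return 0;
    }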
The maximum number of Fortran request handles. MPI_MAXREQHANDLES specifies the upper limit on the number of concurrently allocated MPI request handles. Users must take care to free up request handles by properly completing requests. The default value is 1024. This variable is ignored in the default 32-bit library.
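Request handles are released when their requests are completed (for example, with MPI_Wait() or MPI_Waitall()) or explicitly freed. A minimal sketch of the usual pattern, in standard MPI C:

    /* Sketch: complete nonblocking requests so their handles are released.
     * Requests left pending continue to consume handles; in Fortran that
     * pool is bounded by MPI_MAXREQHANDLES. */
    #include <mpi.h>

    int main(int argc, char **argv)
    {
        int rank, size, recvbuf = 0;
        MPI_Request reqs[2];

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        int right = (rank + 1) % size;
        int left  = (rank + size - 1) % size;

        /* Post a nonblocking send and receive around a ring ... */
        MPI_Irecv(&recvbuf, 1, MPI_INT, left,  0, MPI_COMM_WORLD, &reqs[0]);
        MPI_Isend(&rank,    1, MPI_INT, right, 0, MPI_COMM_WORLD, &reqs[1]);

        /* ... and complete both, which releases the two request handles. */
        MPI_Waitall(2, reqs, MPI_STATUSES_IGNORE);

        MPI_Finalize();
        return 0;
    }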
The MPI collectives are implemented using a variety of optimizations. Some of these optimizations can inhibit performance of point-to-point messages for "unsafe" programs. By default, this variable is 1, and optimized collectives are used. The optimizations can be turned off by setting the value to 0.
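In standard MPI usage, an "unsafe" program is one whose correctness depends on internal buffering, for example two processes that each issue a blocking send before the matching receive. The sketch below (illustrative only) contrasts such an exchange with the buffering-independent MPI_Sendrecv() form:

    /* Sketch: an "unsafe" exchange (both ranks send first) versus the safe
     * combined send-receive.  Compile with -DUNSAFE to get the unsafe form. */
    #include <mpi.h>

    int main(int argc, char **argv)
    {
        int rank, size, sendval, recvval = -1;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        if (size >= 2 && rank < 2) {       /* only ranks 0 and 1 exchange */
            int peer = 1 - rank;
            sendval = rank;
    #ifdef UNSAFE
            /* Unsafe: both ranks send first, so the exchange completes only
             * if the library buffers at least one of the messages. */
            MPI_Send(&sendval, 1, MPI_INT, peer, 0, MPI_COMM_WORLD);
            MPI_Recv(&recvval, 1, MPI_INT, peer, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
    #else
            /* Safe: the combined send-receive does not depend on buffering. */
            MPI_Sendrecv(&sendval, 1, MPI_INT, peer, 0,
                         &recvval, 1, MPI_INT, peer, 0,
                         MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    #endif
        }

        MPI_Finalize();
        return 0;
    }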
Defines the maximum number of stripes that can be used during communication via remote shared memory. The default value is the number of stripes in the cluster, with a maximum default of 2.
On SMPs, MPI_Bcast() for large messages is implemented with a double-buffering scheme. The size of each buffer, in bytes, can be set with this environment variable. The default value is 32768 bytes.
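For illustration, the sketch below (ordinary MPI C) broadcasts a message far larger than the default 32768-byte buffer, the case in which the double-buffering scheme moves the data in buffer-sized pieces:

    /* Sketch: a broadcast much larger than the default 32768-byte buffer. */
    #include <mpi.h>
    #include <stdlib.h>

    int main(int argc, char **argv)
    {
        int rank;
        const int count = 1 << 22;            /* 4 Mi doubles, about 32 Mbytes */
        double *data;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        data = malloc(count * sizeof(double));
        if (rank == 0)
            for (int i = 0; i < count; i++)
                data[i] = (double)i;

        /* The root (rank 0) broadcasts the whole array to every process. */
        MPI_Bcast(data, count, MPI_DOUBLE, 0, MPI_COMM_WORLD);

        free(data);
        MPI_Finalize();
        return 0;
    }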
The amount of memory available, in bytes, to the general buffer pool for use by collective operations. The default value is 20971520 bytes.
On SMPs, calling MPI_Reduce() causes all processors to participate in the reduce. Each processor works on a piece of data whose size equals the MPI_SHM_REDUCESIZE setting. The default value is 256 bytes. Care must be taken when setting this variable, because the system reserves MPI_SHM_REDUCESIZE * np * np bytes of memory to execute the reduce.
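As a worked example with the default setting of 256 bytes, a 32-process job reserves 256 * 32 * 32 = 262144 bytes; raising the value to 4096 bytes at the same np would reserve 4194304 bytes. A minimal sketch of the operation being tuned, in standard MPI C:

    /* Sketch: the reduce operation whose per-processor work size this
     * variable controls. */
    #include <mpi.h>

    int main(int argc, char **argv)
    {
        double local = 1.0, global = 0.0;

        MPI_Init(&argc, &argv);

        /* Every process contributes; rank 0 receives the sum. */
        MPI_Reduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

        MPI_Finalize();
        return 0;
    }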
When coscheduling is enabled, limits the length of time (in milliseconds) a message will remain in the poll waiting for the spind daemon to return. If the timeout occurs before the daemon finds any messages, the process re-enters the polling loop. The default value is 1000 ms. A default can also be set by a system administrator in the hpc.conf file.
Sets the number of times the MPI_TCP_CONNTIMEOUT timeout can occur before an error is signaled. The default value for this variable is 0, meaning that the program will abort on the first occurrence of the timeout.
Sets the timeout value in seconds that is used for an accept() call. The default value for this variable is 600 seconds (10 minutes). This timeout can be triggered in both full- and lazy-connection initialization. After the timeout is reached, a warning message will be printed. If MPI_TCP_CONNLOOP is set to 0, then the first timeout will cause the program to abort.
Allows use of a congestion-avoidance algorithm for MPI_Gather() and MPI_Gatherv() over TCP. By default, MPI_TCP_SAFEGATHER is set to 1, which means use of this algorithm is on. If you know that your underlying network can handle gathering large amounts of data on a single node, you may want to override this algorithm by setting MPI_TCP_SAFEGATHER to 0.
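For illustration, the sketch below (ordinary MPI C) gathers a sizable contribution from every rank onto a single root, the traffic pattern over TCP that the congestion-avoidance algorithm is designed to pace:

    /* Sketch: every rank sends a large contribution to a single root. */
    #include <mpi.h>
    #include <stdlib.h>

    int main(int argc, char **argv)
    {
        int rank, size;
        const int chunk = 1 << 20;            /* 1 Mi doubles per rank, about 8 Mbytes */
        double *sendbuf, *recvbuf = NULL;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        sendbuf = malloc(chunk * sizeof(double));
        for (int i = 0; i < chunk; i++)
            sendbuf[i] = (double)rank;

        if (rank == 0)
            recvbuf = malloc((size_t)size * chunk * sizeof(double));

        /* All contributions converge on rank 0. */
        MPI_Gather(sendbuf, chunk, MPI_DOUBLE,
                   recvbuf, chunk, MPI_DOUBLE, 0, MPI_COMM_WORLD);

        free(sendbuf);
        free(recvbuf);
        MPI_Finalize();
        return 0;
    }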