This chapter presents an overview of multiprocessor parallelization and describes the capabilities of Fortran 95 on Solaris SPARC and x86 multiprocessor platforms.
See also Techniques for Optimizing Applications: High Performance Computing by Rajat Garg and Ilya Sharapov, a Sun Microsystems BluePrints publication (http://www.sun.com/blueprints/pubs.html).
Parallelizing (or multithreading) an application means compiling the program so that it runs on a multiprocessor system or in a multithreaded environment. Parallelization enables a single task, such as a DO loop, to run over multiple processors (or threads) with a potentially significant execution speedup.
Before an application program can be run efficiently on a multiprocessor system like the Ultra 60, Sun Enterprise Server 6500, or Sun Enterprise Server 10000, it needs to be multithreaded. That is, tasks that can be performed in parallel need to be identified and reprogrammed to distribute their computations across multiple processors or threads.
Multithreading an application can be done manually by making appropriate calls to the libthread primitives. However, a significant amount of analysis and reprogramming might be required. (See the Solaris Multithreaded Programming Guide for more information.)
Sun compilers can automatically generate multithreaded object code to run on multiprocessor systems. The Fortran compilers focus on DO loops as the primary language element supporting parallelism. Parallelization distributes the computational work of a loop over several processors without requiring modifications to the Fortran source program.
The choice of which loops to parallelize and how to distribute them can be left entirely up to the compiler (-autopar), specified explicitly by the programmer with source code directives (-explicitpar), or done in combination (-parallel).
Programs that do their own (explicit) thread management should not be compiled with any of the compiler’s parallelization options. Explicit multithreading (calls to libthread primitives) cannot be combined with routines compiled with these parallelization options.
Not all loops in a program can be profitably parallelized. Loops containing only a small amount of computational work (compared to the overhead spent starting and synchronizing parallel tasks) may actually run more slowly when parallelized. Also, some loops cannot be safely parallelized at all; they would compute different results when run in parallel due to dependencies between statements or iterations.
Implicit loops (IF loops and Fortran 95 array syntax, for example) as well as explicit DO loops are candidates for automatic parallelization by the Fortran compilers.
f95 can detect loops that might be safely and profitably parallelized automatically. However, in most cases, the analysis is necessarily conservative, due to the concern for possible hidden side effects. (A display of which loops were and were not parallelized can be produced by the -loopinfo option.) By inserting source code directives before loops, you can explicitly influence the analysis, controlling how a specific loop is (or is not) to be parallelized. However, it then becomes your responsibility to ensure that such explicit parallelization of a loop does not lead to incorrect results.
The Fortran 95 compiler provides explicit parallelization by implementing the OpenMP 2.0 Fortran API directives. For legacy programs, f95 also accepts the older Sun and Cray style directives, but use of these directives is now deprecated. OpenMP has become an informal standard for explicit parallelization in Fortran 95, C, and C++ and is recommended over the older directive styles.
For information on OpenMP, see the OpenMP API User’s Guide, or the OpenMP web site at http://www.openmp.org.
If you parallelize a program so that it runs over four processors, can you expect it to take (roughly) one fourth the time that it did with a single processor (a fourfold speedup)?
Probably not. It can be shown (by Amdahl’s law) that the overall speedup of a program is strictly limited by the fraction of the execution time spent in code running in parallel. This is true no matter how many processors are applied. In fact, if p is the percentage of the total program execution time that runs in parallel mode, the theoretical speedup limit is 100/(100–p); therefore, if only 60% of a program’s execution runs in parallel, the maximum increase in speed is 2.5, independent of the number of processors. And with just four processors, the theoretical speedup for this program (assuming maximum efficiency) would be just 1.8 and not 4. With overhead, the actual speedup would be less.
As with any optimization, choice of loops is critical. Parallelizing loops that participate only minimally in the total program execution time has only minimal effect. To be effective, the loops that consume the major part of the runtime must be parallelized. The first step, therefore, is to determine which loops are significant and to start from there.
Problem size also plays an important role in determining the fraction of the program running in parallel and consequently the speedup. Increasing the problem size increases the amount of work done in loops. A triply nested loop could see a cubic increase in work. If the outer loop in the nest is parallelized, a small increase in problem size could contribute to a significant performance improvement (compared to the unparallelized performance).
Here is a very general outline of the steps needed to parallelize an application:
Optimize. Use the appropriate set of compiler options to get the best serial performance on a single processor.
Profile. Using typical test data, determine the performance profile of the program. Identify the most significant loops.
Benchmark. Determine that the serial test results are accurate. Use these results and the performance profile as the benchmark.
Parallelize. Use a combination of options and directives to compile and build a parallelized executable.
Verify. Run the parallelized program on a single processor and single thread and check results to find instabilities and programming errors that might have crept in. (Set $PARALLEL or $OMP_NUM_THREADS to 1; see 10.1.5 Number of Threads).
Test. Make various runs on several processors to check results.
Benchmark. Make performance measurements with various numbers of processors on a dedicated system. Measure performance changes with changes in problem size (scalability).
Repeat steps 4 to 7. Make improvements to your parallelization scheme based on performance.
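As a rough command-line sketch of these steps (the file and program names prog.f, prog_prof, prog_par, and profile.out are hypothetical, and the serial profiling commands shown are only one possibility; note that the -pg profiling option must not be combined with the parallelization options):

demo% f95 -O4 -o prog prog.f                                       <- step 1: best serial performance
demo% f95 -O4 -pg -o prog_prof prog.f                              <- step 2: build a profiling version
demo% prog_prof; gprof prog_prof > profile.out                     <- identify the significant loops
demo% f95 -O4 -autopar -reduction -loopinfo -o prog_par prog.f     <- step 4: parallelize
demo% setenv OMP_NUM_THREADS 1; prog_par                           <- step 5: verify on a single thread
demo% setenv OMP_NUM_THREADS 4; prog_par                           <- steps 6, 7: test and benchmark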
Not all loops are parallelizable. Running a loop in parallel over a number of processors usually results in iterations executing out of order. Moreover, the multiple processors executing the loop in parallel may interfere with each other whenever there are data dependencies in the loop.
Situations where data dependence issues arise include recurrence, reduction, indirect addressing, and data dependent loop iterations.
You might be able to rewrite a loop to eliminate data dependencies, making it parallelizable. However, extensive restructuring could be needed.
Some general rules are:
A loop is data independent only if all iterations write to distinct memory locations.
Iterations may read from the same locations as long as no one iteration writes to them.
These are general conditions for parallelization. The compilers’ automatic parallelization analysis considers additional criteria when deciding whether to parallelize a loop. However, you can use directives to explicitly force the parallelization of loops, even loops that contain parallelization inhibitors; such loops may then compute incorrect results.
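For example, the following loop satisfies both conditions: each iteration writes only its own element a(i) and reads only b(i) and c(i), so the iterations are independent. (The arrays a, b, c and the bound n are generic names used only for illustration.)

      do i = 1, n
         a(i) = b(i) + c(i)
      end do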
Variables that are set in one iteration of a loop and used in a subsequent iteration introduce cross-iteration dependencies, or recurrences. Recurrence in a loop requires that the iterations be executed in the proper order. For example:
      DO I=2,N
         A(I) = A(I-1)*B(I)+C(I)
      END DO
requires the value computed for A(I) in the previous iteration to be used (as A(I-1)) in the current iteration. To produce correct results, iteration I must complete before iteration I+1 can execute.
Reduction operations reduce the elements of an array into a single value. For example, summing the elements of an array into a single variable involves updating that variable in each iteration:
      DO K = 1,N
         SUM = SUM + A(K)*B(K)
      END DO
If each processor running this loop in parallel takes some subset of the iterations, the processors will interfere with each other, overwriting the value in SUM. For the result to be correct, each processor must update SUM one at a time, although the order of the updates is not significant.
Certain common reduction operations are recognized and handled as special cases by the compiler.
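If you parallelize such a loop explicitly instead, the compiler does not perform this reduction analysis for you (reduction operations are not analyzed in explicitly parallelized loops); with OpenMP, the reduction must be declared on the directive. A minimal sketch of the summation above:

!$OMP PARALLEL DO REDUCTION(+:SUM)
      DO K = 1, N
         SUM = SUM + A(K)*B(K)
      END DO
!$OMP END PARALLEL DO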
Loop dependencies can result from stores into arrays that are indexed in the loop by subscripts whose values are not known. For example, indirect addressing could be order dependent if there are repeated values in the index array:
      DO L = 1,NW
         A(ID(L)) = A(L) + B(L)
      END DO
In the example, repeated values in ID cause elements in A to be overwritten. In the serial case, the last store is the final value. In the parallel case, the order is not determined. The values of A(L) that are used, old or updated, are order dependent.
The Sun Studio compilers support OpenMP natively as the primary parallelization model. For information on OpenMP parallelization, see the OpenMP API User’s Guide. Sun-style and Cray-style parallelization directives are legacy features and are no longer supported by current Sun Studio compilers.
Table 10–1 Fortran 95 Parallelization Options
| Option | Flag |
|---|---|
| Automatic (only) | -autopar |
| Automatic and Reduction | -autopar -reduction |
| Show which loops are parallelized | -loopinfo |
| Show warnings with explicit parallelization | -vpara |
| Allocate local variables on stack | -stackvar |
| Compile for OpenMP parallelization | -xopenmp |
Notes on these options:
Many of these options have equivalent synonyms, such as -autopar and -xautopar. Either may be used.
The compiler prof/gprof profiling options -p, -xpg, and -pg should not be used along with any of the parallelization options. The runtime support for these profiling options is not thread-safe. Invalid results or a segmentation fault could occur at runtime.
-reduction requires -autopar.
-autopar includes -depend and loop structure optimization.
-noautopar, -noreduction are the negations.
Parallelization options can be in any order, but they must be all lowercase.
Reduction operations are not analyzed in explicitly parallelized loops.
-xopenmp also invokes -stackvar automatically.
The options -loopinfo and -vpara must be used in conjunction with one of the parallelization options.
The PARALLEL (or OMP_NUM_THREADS) environment variable controls the maximum number of threads available to the program. Setting the environment variable tells the runtime system the maximum number of threads the program can use. The default is 1. In general, set the PARALLEL or OMP_NUM_THREADS variable to the number of available virtual processors on the target platform.
The following example shows how to set it:
demo% setenv OMP_NUM_THREADS 4           C shell
-or-
demo$ OMP_NUM_THREADS=4                  Bourne/Korn shell
demo$ export OMP_NUM_THREADS
In this example, setting OMP_NUM_THREADS (or PARALLEL) to 4 enables the execution of the program using at most four threads. If the target machine has four processors available, the threads will map to independent processors. If fewer than four processors are available, some threads will share a processor with others, possibly degrading performance.
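To check from inside the program how many threads are actually available at run time, you can call the OpenMP run-time library; a minimal sketch, assuming the compiler provides the standard omp_lib module and the program is compiled with -xopenmp (the program name nthreads is arbitrary):

      program nthreads
      use omp_lib                          ! OpenMP run-time library module
      print *, 'Maximum number of threads:', omp_get_max_threads()
      end program nthreads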
The SunOS operating system command psrinfo(1M) displays a list of the processors available on a system:
demo% psrinfo
0     on-line   since 03/18/2007 15:51:03
1     on-line   since 03/18/2007 15:51:03
2     on-line   since 03/18/2007 15:51:03
3     on-line   since 03/18/2007 15:51:03
The executing program maintains a main memory stack for the initial thread executing the program, as well as distinct stacks for each helper thread. Stacks are temporary memory address spaces used to hold arguments and AUTOMATIC variables over subprogram invocations.
The default size of the main stack is about 8 megabytes. The Fortran compilers normally allocate local variables and arrays as STATIC (not on the stack). However, the -stackvar option forces the allocation of all local variables and arrays on the stack (as if they were AUTOMATIC variables). Use of -stackvar is recommended with parallelization because it improves the optimizer’s ability to parallelize subprogram calls in loops. -stackvar is required with explicitly parallelized loops containing subprogram calls. (See the discussion of -stackvar in the Fortran User’s Guide.)
Using the C shell (csh), the limit command displays the current main stack size as well as sets it:
demo% limit                            C shell example
cputime         unlimited
filesize        unlimited
datasize        2097148 kbytes
stacksize       8192 kbytes            <- current main stack size
coredumpsize    0 kbytes
descriptors     64
memorysize      unlimited
demo% limit stacksize 65536            <- set main stack to 64Mb
demo% limit stacksize
stacksize       65536 kbytes
With Bourne or Korn shells, the corresponding command is ulimit:
demo$ ulimit -a                        Korn shell example
time(seconds)        unlimited
file(blocks)         unlimited
data(kbytes)         2097148
stack(kbytes)        8192
coredump(blocks)     0
nofiles(descriptors) 64
vmemory(kbytes)      unlimited
demo$ ulimit -s 65536                  <- set main stack to 64Mb
demo$ ulimit -s
65536
Each helper thread of a multithreaded program has its own thread stack. This stack mimics the initial thread stack but is unique to the thread. The thread’s PRIVATE arrays and variables (local to the thread) are allocated on the thread stack. The default size is 8 megabytes on 64-bit SPARC and 64-bit x86 platforms, 4 megabytes otherwise. The size is set with the STACKSIZE environment variable:
demo% setenv STACKSIZE 8192            <- Set thread stack size to 8 Mb (C shell)
   -or-
demo$ STACKSIZE=8192                   Bourne/Korn shell
demo$ export STACKSIZE
Setting the thread stack size to a value larger than the default may be necessary for some parallelized Fortran codes. However, it may not be possible to know just how large it should be, except by trial and error, especially if private/local arrays are involved. If the stack size is too small for a thread to run, the program will abort with a segmentation fault.
With the -autopar option, the f95 compiler automatically finds DO loops that can be parallelized effectively. These loops are then transformed to distribute their iterations evenly over the available processors. The compiler generates the thread calls needed to make this happen.
The compiler’s dependency analysis transforms a DO loop into a parallelizable task. The compiler may restructure the loop to split out unparallelizable sections that will run serially. It then distributes the work evenly over the available processors. Each processor executes a different chunk of iterations.
For example, with four CPUs and a parallelized loop with 1000 iterations, each thread would execute a chunk of 250 iterations:
Processor 1 executes iterations 1 through 250
Processor 2 executes iterations 251 through 500
Processor 3 executes iterations 501 through 750
Processor 4 executes iterations 751 through 1000
Only loops that do not depend on the order in which the computations are performed can be successfully parallelized. The compiler’s dependence analysis rejects from parallelization those loops with inherent data dependencies. If it cannot fully determine the data flow in a loop, the compiler acts conservatively and does not parallelize. Also, it may choose not to parallelize a loop if it determines the performance gain does not justify the overhead.
Note that the compiler always chooses to parallelize loops using a static loop scheduling—simply dividing the work in the loop into equal blocks of iterations. Other scheduling schemes may be specified using explicit parallelization directives described later in this chapter.
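For example, an OpenMP SCHEDULE clause on an explicitly parallelized loop can request a dynamic distribution of iterations instead of the static default; a minimal sketch (the subroutine do_work is hypothetical):

!$OMP PARALLEL DO SCHEDULE(DYNAMIC, 50)
      do i = 1, 1000
         call do_work(i)          ! iterations handed out in chunks of 50
      end do
!$OMP END PARALLEL DO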
A few definitions, from the point of view of automatic parallelization, are needed:
An array is a variable that is declared with at least one dimension.
A pure scalar is a scalar variable that is not aliased—not referenced in an EQUIVALENCE or POINTER statement.
Example: Array/scalar:
      dimension a(10)
      real m(100,10), s, u, x, z
      equivalence ( u, z )
      pointer ( px, x )
      s = 0.0
      ...
Both m and a are array variables; s is a pure scalar. The variables u, x, z, and px are scalar variables, but not pure scalars.
DO loops that have no cross-iteration data dependencies are automatically parallelized by -autopar. The general criteria for automatic parallelization are:
Only explicit DO loops and implicit loops, such as IF loops and Fortran 95 array syntax, are parallelization candidates.
The values of array variables for each iteration of the loop must not depend on the values of array variables for any other iteration of the loop.
Calculations within the loop must not conditionally change any pure scalar variable that is referenced after the loop terminates.
Calculations within the loop must not change a scalar variable across iterations. This is called a loop-carried dependence.
The amount of work within the body of the loop must outweigh the overhead of parallelization.
The compilers may automatically eliminate a reference that appears to create a data dependence in the loop. One of the many such transformations makes use of private versions of some of the arrays. Typically, the compiler does this if it can determine that such arrays are used in the original loops only as temporary storage.
Example: Using -autopar, with dependencies eliminated by private arrays:
      parameter (n=1000)
      real a(n), b(n), c(n,n)
      do i = 1, 1000                 <-- Parallelized
         do k = 1, n
            a(k) = b(k) + 2.0
         end do
         do j = 1, n-1
            c(i,j) = a(j+1) + 2.3
         end do
      end do
      end
In the example, the outer loop is parallelized and run on independent processors. Although the inner loop references to array a appear to result in a data dependence, the compiler generates temporary private copies of the array to make the outer loop iterations independent.
Under automatic parallelization, the compilers do not parallelize a loop if:
The DO loop is nested inside another DO loop that is parallelized
Flow control allows jumping out of the DO loop
A user-level subprogram is invoked inside the loop
An I/O statement is in the loop
Calculations within the loop change an aliased scalar variable
In a multithreaded, multiprocessor environment, it is most effective to parallelize the outermost loop in a loop nest, rather than the innermost. Because parallel processing typically involves relatively large loop overhead, parallelizing the outermost loop minimizes the overhead and maximizes the work done for each thread. Under automatic parallelization, the compilers start their loop analysis from the outermost loop in a nest and work inward until a parallelizable loop is found. Once a loop within the nest is parallelized, loops contained within the parallel loop are passed over.
A computation that transforms an array into a scalar is called a reduction operation. Typical reduction operations are the sum or product of the elements of a vector. Reduction operations violate the criterion that calculations within a loop not change a scalar variable in a cumulative way across iterations.
Example: Reduction summation of the elements of a vector:
      s = 0.0
      do i = 1, 1000
         s = s + v(i)
      end do
      t(k) = s
However, for some operations, if reduction is the only factor that prevents parallelization, it is still possible to parallelize the loop. Common reduction operations occur so frequently that the compilers are capable of recognizing and parallelizing them as special cases.
Recognition of reduction operations is not included in the automatic parallelization analysis unless the -reduction compiler option is specified along with -autopar or -parallel.
If a parallelizable loop contains one of the reduction operations listed in Table 10–2, the compiler will parallelize it if -reduction is specified.
The following table lists the reduction operations that are recognized by the compiler.
Table 10–2 Recognized Reduction Operations
| Mathematical Operations | Fortran Statement Templates |
|---|---|
| Sum | s = s + v(i) |
| Product | s = s * v(i) |
| Dot product | s = s + v(i) * u(i) |
| Minimum | s = amin1( s, v(i) ) |
| Maximum | s = amax1( s, v(i) ) |
| OR | do i = 1, n;  b = b .or. v(i);  end do |
| AND | b = .true.;  do i = 1, n;  b = b .and. v(i);  end do |
| Count of non-zero elements | k = 0;  do i = 1, n;  if (v(i) .ne. 0) k = k + 1;  end do |
All forms of the MIN and MAX function are recognized.
Floating-point sum or product reduction operations may be inaccurate due to the following conditions:
The order in which the calculations are performed in parallel is not the same as when performed serially on a single processor.
The order of calculation affects the sum or product of floating-point numbers. Hardware floating-point addition and multiplication are not associative. Roundoff, overflow, or underflow errors may result depending on how the operands associate. For example, (X*Y)*Z and X*(Y*Z) may not have the same numerical significance.
In some situations, the error may not be acceptable.
Example: Roundoff, get the sum of 100,000 random numbers between -1 and +1:
demo% cat t4.f
      parameter ( n = 100000 )
      double precision d_lcrans, lb / -1.0 /, s, ub / +1.0 /, v(n)
      s = d_lcrans ( v, n, lb, ub ) ! Get n random nos. between -1 and +1
      s = 0.0
      do i = 1, n
         s = s + v(i)
      end do
      write(*, '(" s = ", e21.15)') s
      end
demo% f95 -O4 -autopar -reduction t4.f
Results vary with the number of processors. The following table shows the sum of 100,000 random numbers between -1 and +1.
| Number of Processors | Output |
|---|---|
| 1 | s = 0.568582080884714E+02 |
| 2 | s = 0.568582080884722E+02 |
| 3 | s = 0.568582080884721E+02 |
| 4 | s = 0.568582080884724E+02 |
In this situation, roundoff error on the order of 10^-14 is acceptable for data that is random to begin with. For more information, see the Sun Numerical Computation Guide.
This section describes the source code directives recognized by f95 to explicitly indicate which loops to parallelize and what strategy to use.
The Fortran 95 compiler now fully supports the OpenMP Fortran API as the primary parallelization model. See the OpenMP API User’s Guide for additional information.
Legacy Sun-style and Cray-style parallelization directives are no longer supported by Sun Studio compilers on SPARC platforms, and are not accepted by the compilers on x86 platforms.
Explicit parallelization of a program requires prior analysis and deep understanding of the application code as well as the concepts of shared-memory parallelization.
DO loops are marked for parallelization by directives placed immediately before them. Compile with -xopenmp to enable recognition of OpenMP Fortran 95 directives and generation of parallelized DO loop code. Parallelization directives are comment lines that tell the compiler to parallelize (or not to parallelize) the DO loop that follows the directive. Directives are also called pragmas.
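A minimal sketch of a marked loop and the corresponding compile command (the file name add.f, the subroutine, and the array names are made up for illustration):

demo% cat add.f
      subroutine add ( a, b, c, n )
      real a(n), b(n), c(n)
!$OMP PARALLEL DO SHARED(a, b, c, n) PRIVATE(i)
      do i = 1, n
         a(i) = b(i) + c(i)
      end do
!$OMP END PARALLEL DO
      end
demo% f95 -c -xopenmp -loopinfo add.f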
Take care when choosing which loops to mark for parallelization. The compiler generates threaded, parallel code for all loops marked with parallelization directives, even if there are data dependencies that will cause the loop to compute incorrect results when run in parallel.
If you do your own multithreaded coding using the libthread primitives, do not use any of the compilers’ parallelization options—the compilers cannot parallelize code that has already been parallelized with user calls to the threads library.
A loop is appropriate for explicit parallelization if:
It is a DO loop, but not a DO WHILE loop or a loop implied by Fortran 95 array syntax.
The values of array variables for each iteration of the loop do not depend on the values of array variables for any other iteration of the loop.
If the loop changes a scalar variable, that variable’s value is not used after the loop terminates. Such scalar variables are not guaranteed to have a defined value after the loop terminates, since the compiler does not automatically ensure a proper storeback for them.
For each iteration, any subprogram that is invoked inside the loop does not reference or change values of array variables for any other iteration.
The DO loop index must be an integer.
A private variable or array is private to a single iteration of a loop. The value assigned to a private variable or array in one iteration is not propagated to any other iteration of the loop.
A shared variable or array is shared with all other iterations. The value assigned to a shared variable or array in an iteration is seen by other iterations of the loop.
If an explicitly parallelized loop contains shared references, then you must ensure that sharing does not cause correctness problems. The compiler does not synchronize on updates or accesses to shared variables.
If you specify a variable as private in one loop, and its only initialization is within some other loop, the value of that variable may be left undefined in the loop.
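With OpenMP, the FIRSTPRIVATE clause initializes each thread’s private copy with the value the variable had just before the loop, which avoids this problem; a minimal sketch (the array a, scalar t, and bound n are illustrative):

      t = 2.0
!$OMP PARALLEL DO FIRSTPRIVATE(t)
      do i = 1, n
         a(i) = a(i) * t          ! every thread starts with t = 2.0
      end do
!$OMP END PARALLEL DO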
A subprogram call in a loop (or in any subprograms called from within the called routine) may introduce data dependencies that could go unnoticed without a deep analysis of the data and control flow through the chain of calls. While it is best to parallelize outermost loops that do a significant amount of the work, these tend to be the very loops that involve subprogram calls.
Because such an interprocedural analysis is difficult and could greatly increase compilation time, automatic parallelization modes do not attempt it. With explicit parallelization, the compiler generates parallelized code for a loop marked with a PARALLEL DO or DOALL directive even if it contains calls to subprograms. It is still the programmer’s responsibility to ensure that no data dependencies exist within the loop or within anything the loop encloses, including called subprograms.
Multiple invocations of a routine by different threads can cause problems resulting from references to local static variables that interfere with each other. Making all the local variables in a routine automatic rather than static prevents this. Each invocation of a subprogram then has its own unique store of local variables maintained on the stack, and no two invocations will interfere with each other.
Local subprogram variables can be made automatic variables that reside on the stack either by listing them on an AUTOMATIC statement or by compiling the subprogram with the -stackvar option. However, local variables initialized in DATA statements must be rewritten to be initialized in actual assignments.
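A sketch of such a rewrite (the subroutine scale and variable t are made up; AUTOMATIC is the Sun extension mentioned above, and compiling with -stackvar has the same effect):

      subroutine scale ( a, n )
      real a(n)
      real t
      automatic t                 ! allocate t on the stack, not in static storage
!     was:  data t / 4.0 /        ! DATA implies static initialization
      t = 4.0                     ! rewritten as an ordinary assignment
      do i = 1, n
         a(i) = a(i) * t
      end do
      end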
Allocating local variables to the stack can cause stack overflow. See 10.1.6 Stacks, Stack Sizes, and Parallelization about increasing the size of the stack.
In general, the compiler parallelizes a loop if you explicitly direct it to. There are exceptions—some loops the compiler will not parallelize.
The following are the primary detectable inhibitors that might prevent explicitly parallelizing a DO loop:
The DO loop is nested inside another DO loop that is parallelized.
This exception holds for indirect nesting, too. If you explicitly parallelize a loop that includes a call to a subroutine, then even if you request the compiler to parallelize loops in that subroutine, those loops are not run in parallel at runtime.
A flow control statement allows jumping out of the DO loop.
The index variable of the loop is subject to side effects, such as being equivalenced.
By compiling with -vpara and -loopinfo, you will get diagnostic messages if the compiler detects a problem while explicitly parallelizing a loop.
The following table lists typical parallelization problems detected by the compiler:
Table 10–3 Explicit Parallelization Problems
| Problem | Parallelized | Warning Message |
|---|---|---|
| Loop is nested inside another loop that is parallelized. | No | No |
| Loop is in a subroutine called within the body of a parallelized loop. | No | No |
| Jumping out of loop is allowed by a flow control statement. | No | Yes |
| Index variable of loop is subject to side effects. | Yes | No |
| Some variable in the loop has a loop-carried dependency. | Yes | Yes |
| I/O statement in the loop—usually unwise, because the order of the output is not predictable. | Yes | No |
Example: Nested loops:
      ...
!$OMP PARALLEL DO
      do 900 i = 1, 1000          ! Parallelized (outer loop)
         do 200 j = 1, 1000       ! Not parallelized, no warning
            ...
 200     continue
 900  continue
      ...
Example: A parallelized loop in a subroutine:
      program main
      ...
!$OMP PARALLEL DO
      do 100 i = 1, 200            <- parallelized
         ...
         call calc (a, x)
         ...
 100  continue
      ...

      subroutine calc ( b, y )
      ...
!$OMP PARALLEL DO
      do 1 m = 1, 1000             <- not parallelized
         ...
    1 continue
      return
      end
In the example, the loop within the subroutine is not parallelized because the subroutine itself is run in parallel.
Example: Jumping out of a loop:
!$omp parallel do
      do i = 1, 1000          ! <- Not parallelized, error issued
         ...
         if (a(i) .gt. min_threshold ) go to 20
         ...
      end do
 20   continue
      ...
The compiler issues an error diagnostic if there is a jump outside a loop marked for parallelization.
Example: A variable in a loop has a loop-carried dependency:
demo% cat vpfn.f
      real function fn (n,x,y,z)
      real y(*),x(*),z(*)
      s = 0.0
!$omp parallel do private(i,s) shared(x,y,z)
      do i = 1, n
         x(i) = s
         s = y(i)*z(i)
      enddo
      fn=x(10)
      return
      end
demo% f95 -c -vpara -loopinfo -openmp -O4 vpfn.f
"vpfn.f", line 5: Warning: the loop may have parallelization inhibiting reference
"vpfn.f", line 5: PARALLELIZED, user pragma used
Here the loop is parallelized, but the possible loop-carried dependency is diagnosed in a warning. Be aware, however, that not all loop dependencies can be diagnosed by the compiler.
You can do I/O in a loop that executes in parallel, provided that:
It does not matter that the output from different threads is interleaved (program output is nondeterministic).
You can ensure the safety of executing the loop in parallel.
Example: I/O statement in loop
!$OMP PARALLEL DO PRIVATE(k)
      do i = 1, 10          ! Parallelized
         k = i
         call show ( k )
      end do
      end

      subroutine show( j )
      write(6,1) j
 1    format('Line number ', i3, '.')
      end

demo% f95 -openmp t13.f
demo% setenv PARALLEL 4
demo% a.out
Line number 9.
Line number 4.
Line number 5.
Line number 6.
Line number 1.
Line number 2.
Line number 3.
Line number 7.
Line number 8.
However, I/O that is recursive, where an I/O statement contains a call to a function that itself does I/O, will cause a runtime error.
OpenMP is a parallel programming model for multi-processor platforms that is becoming standard programming practice for Fortran 95, C, and C++ applications. It is the preferred parallel programming model for Sun Studio compilers.
To enable OpenMP directives, compile with the -openmp option flag. Fortran 95 OpenMP directives are identified with the comment-like sentinel !$OMP followed by the directive name and subordinate clauses.
The !$OMP PARALLEL directive identifies the parallel regions in a program. The !$OMP DO directive identifies DO loops within a parallel region that are to be parallelized. These directives can be combined into a single !$OMP PARALLEL DO directive that must be placed immediately before the DO loop.
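For example, the two forms below parallelize the same loop; a minimal sketch (the arrays a, b, c and bound n are illustrative):

!$OMP PARALLEL
!$OMP DO
      do i = 1, n
         a(i) = b(i) + c(i)
      end do
!$OMP END DO
!$OMP END PARALLEL

!     Combined form:
!$OMP PARALLEL DO
      do i = 1, n
         a(i) = b(i) + c(i)
      end do
!$OMP END PARALLEL DO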
The OpenMP specification includes a number of directives for sharing and synchronizing work in a parallel region of a program, and subordinate clauses for data scoping and control.
One major difference between OpenMP and the legacy Sun-style directives is that OpenMP requires each variable to be explicitly scoped as either private or shared, although an automatic scoping feature is also provided.
For more information, including guidelines for converting legacy programs using Sun and Cray parallelization directives, see the OpenMP API User’s Guide.
There are a number of environment variables used with parallelization: OMP_NUM_THREADS, SUNW_MP_WARN, SUNW_MP_THR_IDLE, SUNW_MP_PROCBIND, STACKSIZE, and others. They are described in the OpenMP API User’s Guide.
Debugging parallelized programs requires some extra effort. The following schemes suggest ways to approach this task.
There are some steps you can try immediately to determine the cause of errors.
Turn off parallelization.
You can do one of the following:
Turn off the parallelization options—Verify that the program works correctly by compiling with -O3 or -O4, but without any parallelization.
Set the number of threads to one and compile with parallelization on—run the program with the environment variable PARALLEL set to 1.
If the problem disappears, then you can assume it was due to using multiple threads.
Check also for out of bounds array references by compiling with -C.
Problems using automatic parallelization with -autopar may indicate that the compiler is parallelizing something it should not.
Turn off -reduction.
If you are using the -reduction option, summation reduction may be occurring and yielding slightly different answers. Try running without this option.
Use fsplit.
If you have many subroutines in your program, use fsplit(1) to break them into separate files. Then compile some files with and without -autopar.
Execute the binary and verify results.
Repeat this process until the problem is narrowed down to one subroutine.
Use -loopinfo.
Check which loops are being parallelized and which loops are not.
Use a dummy subroutine.
Create a dummy subroutine or function that does nothing. Put calls to this subroutine in a few of the loops that are being parallelized. Recompile and execute. Use -loopinfo to see which loops are being parallelized.
Continue this process until you start getting the correct results.
Run loops backward serially.
Replace DO I=1,N with DO I=N,1,-1. Different results point to data dependencies.
Avoid using the loop index.
Replace:
      DO I=1,N
         ...
         CALL SNUBBER(I)
         ...
      ENDDO
With:
      DO I1=1,N
         I=I1
         ...
         CALL SNUBBER(I)
         ...
      ENDDO
The following provide more information:
OpenMP API User’s Guide
Techniques for Optimizing Applications: High Performance Computing, by Rajat Garg and Ilya Sharapov, Sun Microsystems Press Blueprint, 2001.
High Performance Computing, by Kevin Dowd and Charles Severance, O’Reilly and Associates, 2nd Edition, 1998.
Parallel Programming in OpenMP, by Rohit Chandra et al., Morgan Kaufmann Publishers, 2001.
Parallel Programming, by Barry Wilkinson, Prentice Hall, 1999.