Once you have a correct, working OpenMP program, it is worth considering its overall performance. There are some general techniques that you can utilize to improve the efficiency and scalability of an OpenMP application, as well as techniques specific to the Sun platforms. These are discussed briefly here.
For additional information, see Techniques for Optimizing Applications: High Performance Computing, by Rajat Garg and Ilya Sharapov, which is available from http://www.sun.com/books/catalog/garg.xml
Also, visit the Sun Developer portal for occasional articles and case studies regarding performance analysis and optimization of OpenMP applications, at http://developers.sun.com/prodtech/cc/.
The following are some general techniques for improving performance of OpenMP applications.
Minimize synchronization.
Avoid or minimize the use of BARRIER, CRITICAL sections, ORDERED regions, and locks.
Use the NOWAIT clause where possible to eliminate redundant or unnecessary barriers. For example, there is always an implied barrier at the end of a parallel region. Adding NOWAIT to a final DO in the region eliminates one redundant barrier.
Use named CRITICAL sections for fine-grained locking.
Use explicit FLUSH with care. Flushes can force data to be stored from cache back to memory, and subsequent accesses may require reloading that data from memory, all of which decreases efficiency.
By default, idle threads are put to sleep after a certain timeout period. The default timeout may not suit your application, causing threads to go to sleep too soon or too late. The SUNW_MP_THR_IDLE environment variable can be used to override the default timeout, up to and including keeping idle threads active (spinning) all the time.
Parallelize at the highest level possible, such as outer DO/FOR loops. Enclose multiple loops in one parallel region. In general, make parallel regions as large as possible to reduce parallelization overhead. For example:
This construct is less efficient:

    !$OMP PARALLEL
       ....
    !$OMP DO
       ....
    !$OMP END DO
       ....
    !$OMP END PARALLEL
    !$OMP PARALLEL
       ....
    !$OMP DO
       ....
    !$OMP END DO
       ....
    !$OMP END PARALLEL

than this one:

    !$OMP PARALLEL
       ....
    !$OMP DO
       ....
    !$OMP END DO
       ....
    !$OMP DO
       ....
    !$OMP END DO
    !$OMP END PARALLEL
Use PARALLEL DO/FOR instead of worksharing DO/FOR directives in parallel regions. The PARALLEL DO/FOR is implemented more efficiently than a general parallel region containing possibly several loops. For example:
This construct is less efficient:

    !$OMP PARALLEL
    !$OMP DO
       ....
    !$OMP END DO
    !$OMP END PARALLEL

than this one:

    !$OMP PARALLEL DO
       ....
    !$OMP END PARALLEL DO
On Solaris systems, use SUNW_MP_PROCBIND to bind threads to processors. Processor binding, when used along with static scheduling, benefits applications that exhibit a certain data reuse pattern where data accessed by a thread in a parallel region will be in the local cache from a previous invocation of a parallel region. See 2.4 Processor Binding on Solaris.
Use MASTER instead of SINGLE wherever possible.
The MASTER directive is implemented as an IF statement with no implicit BARRIER: IF (omp_get_thread_num() == 0) {...}

The SINGLE directive is implemented like other worksharing constructs. Keeping track of which thread reached the SINGLE first adds runtime overhead, and there is an implicit BARRIER unless NOWAIT is specified. This makes SINGLE less efficient than MASTER.
Choose the appropriate loop scheduling.
STATIC causes no synchronization overhead and can maintain data locality when data fits in cache. However, STATIC may lead to load imbalance.
DYNAMIC and GUIDED scheduling incur a synchronization overhead to keep track of which chunks have been assigned. While these schedules can lead to poor data locality, they can improve load balancing. Experiment with different chunk sizes.
Use LASTPRIVATE with care, as it has the potential of high overhead.
Data needs to be copied from private to shared storage upon return from the parallel construct.
The compiled code checks which thread executes the logically last iteration. This imposes extra work at the end of each chunk in a parallel DO/FOR. The overhead adds up if there are many chunks.
Use efficient thread-safe memory management.
Applications could be using malloc() and free() explicitly, or implicitly in the compiler-generated code for dynamic/allocatable arrays, vectorized intrinsics, and so on.
The thread-safe malloc() and free() in libc have a high synchronization overhead caused by internal locking. Faster versions can be found in the libmtmalloc library. Link with -lmtmalloc to use libmtmalloc.
Small data cases may cause OpenMP parallel loops to underperform. Use the IF clause on PARALLEL constructs so that a loop runs in parallel only in those cases where some performance gain can be expected.
When possible, merge loops. For example, merge two loops:

    !$omp parallel do
    do i = ...
       statements_1
    end do
    !$omp parallel do
    do i = ...
       statements_2
    end do

into a single loop:

    !$omp parallel do
    do i = ...
       statements_1
       statements_2
    end do
Try nested parallelism if your application lacks scalability beyond a certain level. See 1.2 Special Conventions Used Here for more information about nested parallelism in OpenMP.
Careless use of shared memory structures with OpenMP applications can result in poor performance and limited scalability. Multiple processors updating adjacent shared data in memory can result in excessive traffic on the multiprocessor interconnect and, in effect, cause serialization of computations.
Most high performance processors, such as UltraSPARC processors, insert a cache buffer between slow memory and the high speed registers of the CPU. Accessing a memory location causes a slice of actual memory (a cache line) containing the memory location requested to be copied into the cache. Subsequent references to the same memory location or those around it can probably be satisfied out of the cache until the system determines it is necessary to maintain the coherency between cache and memory.
However, simultaneous updates of individual elements in the same cache line coming from different processors invalidates entire cache lines, even though these updates are logically independent of each other. Each update of an individual element of a cache line marks the line as invalid. Other processors accessing a different element in the same line see the line marked as invalid. They are forced to fetch a more recent copy of the line from memory or elsewhere, even though the element accessed has not been modified. This is because cache coherency is maintained on a cache-line basis, and not for individual elements. As a result there will be an increase in interconnect traffic and overhead. Also, while the cache-line update is in progress, access to the elements in the line is inhibited.
This situation is called false sharing. If this occurs frequently, performance and scalability of an OpenMP application will suffer significantly.
False sharing degrades performance when all of the following conditions occur.
Shared data is modified by multiple processors.
Multiple processors update data within the same cache line.
This updating occurs very frequently (for example, in a tight loop).
Note that shared data that is read-only in a loop does not lead to false sharing.
Careful analysis of those parallel loops that play a major part in the execution of an application can reveal performance scalability problems caused by false sharing. In general, false sharing can be reduced by
making use of private data as much as possible;
utilizing the compiler’s optimization features to eliminate memory loads and stores.
In specific cases, the impact of false sharing may be less visible when dealing with larger problem sizes, as there might be less sharing.
Techniques for tackling false sharing are very much dependent on the particular application. In some cases, a change in the way the data is allocated can reduce false sharing. In other cases, changing the mapping of iterations to threads, giving each thread more work per chunk (by changing the chunksize value) can also lead to a reduction in false sharing.
Starting with the Solaris 9 release, the operating system provides scalability and high-performance features for Sun Fire systems. Among the features introduced in the Solaris 9 OS that improve the performance of OpenMP programs without hardware upgrades are Memory Placement Optimizations (MPO) and Multiple Page Size Support (MPSS).

MPO allows the OS to allocate pages close to the processors that access those pages. Sun Fire E20K and E25K systems have different memory latencies within the same UniBoard than between different UniBoards. The default MPO policy, called first-touch, allocates memory on the UniBoard containing the processor that first touches that memory. The first-touch policy can greatly improve the performance of applications whose data accesses are made mostly to the memory local to each processor. Compared to a random placement policy, where memory is evenly distributed throughout the system, first-touch placement can lower memory latencies and increase bandwidth, leading to higher performance.
The MPSS feature is supported as of the Solaris 9 OS release, and allows a program to use different page sizes for different regions of virtual memory. The default Solaris page size is relatively small (8KB on UltraSPARC processors and 4KB on AMD64 Opteron processors). Applications that suffer from too many TLB misses may experience a performance boost by using a larger page size.
TLB misses can be measured using the Sun Performance Analyzer.
The default page size on a specific platform can be obtained with the Solaris OS command /usr/bin/pagesize. The -a option lists all the supported page sizes. (See the pagesize(1) man page for details.)
There are three ways to change the default page size for an application:
Use the Solaris OS command ppgsz(1).
Compile the application with the -xpagesize, -xpagesize_heap, and -xpagesize_stack options. (See the compiler man pages for details.)
Use MPSS specific environment variables. See the mpss.so.1(1) man page for details.