Sun Microsystems Documentation

Selecting The Best Compiler Options

By Darryl Gove, Sun Microsystems, June, 2008

This article suggests how to get the best performance from an UltraSPARC or x86/EM64T (x64) processor running on the latest Solaris systems by compiling with the best set of compiler options and the latest compilers. These are suggestions of things you should try, but before you release the final version of your program, you should understand exactly what you have asked the compiler to do.

The fundamental questions

There are two questions that you need to ask when compiling your program:

  1. What do I know about the platforms that this program will run on?

  2. What do I know about the assumptions that are made in the code?

The answers to these two questions determine what compiler options you should use.

The target platform

What platforms do you expect your code to run on? The choice of platform determines:

  1. 32-bit or 64-bit instruction set

  2. Instruction set extensions the compiler can use

  3. Instruction scheduling depending on instruction execution times

  4. Cache configuration

The first three are often the most important ones.

32-bit versus 64-bit code

The UltraSPARC and x64 families of processors can run both 32-bit and 64-bit code. The main advantage of 64-bit code is that the application can handle a larger data set than 32-bit code. However, the cost of this larger address space is a larger memory footprint for the application: long variable types and pointers increase in size from 4 bytes to 8 bytes. This increase in footprint can cause the 64-bit application to run more slowly than the 32-bit version.

However, the x86/x64 platform has some architectural advantages when running 64-bit code compared to running 32-bit code. In particular, the application can use more registers, and can use a better calling convention. These advantages will typically enable a 64-bit version of an application to run faster than a 32-bit version of the same code, unless the memory footprint of the application has significantly increased.

The UltraSPARC line of processors was designed so that 32-bit applications can already use the architectural features of the 64-bit instruction set, so there is no architectural performance gain in going from 32-bit to 64-bit code. Consequently, on UltraSPARC processors the 64-bit version sees only the additional cost of the increased memory footprint.

The compiler flags -m32 and -m64 determine whether a 32-bit or a 64-bit binary is generated.

For additional details about migrating from 32-bit to 64-bit code, refer to Converting 32-bit Applications Into 64-bit Applications: Things to Consider and 64-bit x86 Migration, Debugging, and Tuning, With the Sun Studio 10 Toolset.

Specifying an appropriate target processor

The default for the compiler is to produce a 'generic' binary: one that will work well on all platforms. In many situations this will be the best choice. However, there are some situations where it is appropriate to select a different target.

The -xtarget flag actually sets three flags: -xarch, which determines the instruction set the compiler may use; -xchip, which determines the processor whose instruction timings are assumed when scheduling code; and -xcache, which determines the cache configuration to assume.

Target architectures for SPARC processors

The default setting -xtarget=generic should be appropriate for most situations. It generates a 32-bit binary that uses the SPARC V8 instruction set, or a 64-bit binary that uses the SPARC V9 instruction set. The most common situation where a different setting may be required is when compiling code that contains significant floating-point computation, so that the resulting binary uses the floating-point multiply-accumulate (FMA or FMAC) instructions.

The SPARC64 VI processors support FMA instructions. These instructions combine a floating-point multiply and a floating-point addition (or subtraction) into a single operation. An FMA typically takes the same number of cycles to complete as either a floating-point addition or a floating-point multiplication, so the performance gain from using these instructions can be significant. However, the results from an application compiled to use FMA instructions may differ from those of the same application compiled not to use them.

An FMAC instruction performs the following operation; the use of the word ROUND in the equation indicates that the value is rounded to the nearest representable floating-point number when it is stored into the result:

Result = ROUND( (value1 * value2) + value3)

The single instruction replaces the following two instructions:

tmp = ROUND(value1 * value2)
Result = ROUND(tmp + value3)

Notice that the two-instruction version has two rounding operations, and it is this difference in the number of rounding operations that may cause a difference in the least significant bits of the calculated result. The FMA implemented on the SPARC64 VI processor is referred to as a fused FMA; an unfused FMA also implements the multiply-accumulate operation in a single instruction, but produces a result identical to the one produced by the two separate instructions.

To generate FMA instructions, the binary needs to be compiled with the flags:

-xarch=sparcfmaf -fma=fused

Alternatively the flags -xtarget=sparc64vi -fma=fused will enable the generation of the FMA instruction and will also tell the compiler to assume the characteristics of the SPARC64 VI processor when compiling the code. This will produce optimal code for the SPARC64 VI platform.

Specifying the target processor for the x64 processor family

By default the compiler targets a generic 32-bit x86 processor, so the code will run on any x86 processor from a Pentium Pro up to an AMD Opteron. Whilst this produces code that runs on the widest range of processors, it does not take advantage of the extensions offered by the latest processors. Most currently available x86 processors support the SSE2 instruction set extensions. To take advantage of these instructions, use the flag -xarch=sse2. However, the compiler may not recognize all opportunities to use these instructions unless the vectorization flag -xvector=simd is also given.

Summary of target settings for various address spaces and architectures

The following table lists suggested settings for the various processors and architectures.

  32-bit:
    SPARC:      -xtarget=generic -m32
    SPARC64:    -xtarget=sparc64vi -m32 -fma=fused
    x86:        -xtarget=generic -m32
    x64/sse2:   -xtarget=generic -xarch=sse2 -m32 -xvector=simd

  64-bit:
    SPARC:      -xtarget=generic -m64
    SPARC64:    -xtarget=sparc64vi -m64 -fma=fused
    x86:        -xtarget=generic -m64
    x64/sse2:   -xtarget=generic -xarch=sse2 -m64 -xvector=simd

Optimization and debug

The optimization flags chosen alter three important characteristics: the runtime of the compiled application, the length of time that the compilation takes, and the amount of debugging that is possible with the final binary. In general, the higher the level of optimization, the faster the application runs (and the longer it takes to compile), but the less debug information is available; the particular impact of the optimization levels will vary from application to application.

The easiest way of thinking about this is to consider three degrees of optimization, as outlined in the following table.

Full debug
    Flags:    -g  (no optimization flags)
    Comment:  The application will have full debug capabilities, but almost no optimization will be performed on the application, leading to lower performance.

Optimized
    Flags:    -g -O  (-g0 for C++)
    Comment:  The application will have good debug capabilities, and a reasonable set of optimizations will be performed on the application, typically leading to significantly better performance.

High optimization
    Flags:    -g -fast  (-g0 for C++)
    Comment:  The application will have good debug capabilities, and a large set of optimizations will be performed on the application, typically leading to higher performance.

Note: For C++, the debug flag -g will inhibit some of the inlining of methods; the flag -g0 will provide debug information without inhibiting the inlining of these methods. Consequently, it is recommended that for higher levels of optimization -g0 be used instead of -g.

Suggestion: In general, an optimization level of at least -O is suggested. The two situations where lower levels might be considered are (i) where more detailed debug information is required, and (ii) where the semantics of the program require that variables be treated as volatile, in which case the optimization level should be lowered to -xO2.

More details on debug information

The compiler will generate information for the debugger if the -g flag is present. For lower levels of optimization, the -g flag disables some minor optimizations (to make the generated code easier to debug). At higher levels of optimization, the presence of the flag does not alter the code generated (or its performance) -- but be aware that at high levels of optimization it is not always possible for the debugger to relate the disassembled code to the exact line of source, or for it to determine the value of local variables held in registers rather than stored to memory.

As discussed earlier, the C++ compiler will disable some of the inlining performed by the compiler when the -g compiler flag is used; however, the flag -g0 will tell the compiler to do all the inlining that it would normally do, as well as generating the debug information.

A very strong reason for compiling with the -g flag is that the Sun Studio Performance Analyzer can then attribute time spent in the code directly to lines of source code -- making the process of finding performance bottlenecks considerably easier.

Suggestion

Refer to Comparing the -fast Option Expansion on x86 Platforms and SPARC Platforms for the expansion of -fast by Sun Studio 10 C, C++, and Fortran compilers, cc, CC, and f95, respectively.

The implications for floating-point arithmetic when using the -fast option

One issue to be aware of is the inclusion of floating-point arithmetic simplifications in -fast. In particular, the options -fns and -fsimple=2 allow the compiler to do some optimizations that do not comply with the IEEE-754 floating-point arithmetic standard, and also allow the compiler to relax language standards regarding floating point expression reordering.

With the flag -fns, subnormal numbers (that is, very small numbers that are too small to be represented in normal form) are flushed to zero.

With -fsimple, the compiler can treat floating-point arithmetic as a mathematics textbook might express it. For example, the order in which additions are performed doesn't matter, and it is safe to replace a division by a multiplication by the reciprocal. These kinds of transformations seem perfectly acceptable when performed on paper, but they can result in a loss of precision when algebra becomes real numerical computation with numbers of limited precision.

Also, -fsimple allows the compiler to make optimizations that assume that the data used in floating-point calculations will not be NaNs (Not a Number). Compiling with -fsimple is not recommended if you expect computations involving NaNs.

Notes

Crossfile optimization

The -xipo option performs interprocedural optimizations over the whole program at link time. This means that the object files are examined again at link time to see if there are any further optimization opportunities. The most common opportunity is to inline code from one file into code from another file. Inlining means that the compiler replaces a call to a routine with the actual code of that routine.

Inlining is good for two reasons, the most obvious being that it eliminates the overhead of calling another routine. A second, less obvious reason is that inlining may expose additional optimizations that can then be performed on the object code. For example, imagine that a routine calculates the color of a particular point in an image by taking the x and y position of the point and calculating the location of the point in the block of memory containing the image (image_offset = y * row_length + x). By inlining that code into the routine that works over all the pixels in the image, the compiler is able to generate code that just adds one to the current offset to get to the next point, instead of doing a multiplication and an addition to calculate the address of each point, resulting in a performance gain.

The downside of using -xipo is that it can significantly increase the compile time of the application and may also increase the size of the executable.

Profile feedback

When compiling a program, the compiler takes a best guess at how the flow of the program might go -- which branches are taken and which are not. For floating-point intensive code, this generally gives good performance, but programs with many branching operations might not obtain the best performance.

Profile feedback assists the compiler in optimizing your application by giving it real information about the paths actually taken by your program. Knowing the critical routes through the code allows the compiler to make sure these are the optimized ones.

Profile feedback requires that you compile a version of your application with -xprofile=collect and then run the application with representative input data to collect a runtime performance profile. You then recompile with -xprofile=use, which uses the collected profile data. The downside is that the compile cycle becomes significantly longer (two compiles plus a run of your application), but the compiler can produce a much better-optimized binary, which means a faster runtime.

A representative data set should be one that will exercise the code in ways similar to the actual data that the application will see in production; the program can be run multiple times with different workloads to build up the representative data set. Of course if the representative data manages to exercise the code in ways which are not representative of the real workloads, then performance may not be optimal. However, it is often the case that the code is always executed through similar routes, and so regardless of whether the data is representative or not, the performance will improve. For more information on determining whether a workload is representative read my article Selecting Representative Training Workloads for Profile Feedback Through Coverage and Branch Analysis.

Using large pages for data

If the program manipulates large data sets, it may benefit from using large pages to hold the data. A 'page' is a region of contiguous physical memory. The processor deals in virtual memory, which gives the operating system the freedom to move data around in physical memory, or even to store it on and load it from disk. Because the processor works with virtual addresses, it has to translate each virtual address into the physical location of the data in memory, and it performs this translation at the granularity of pages. Every time the processor needs to access a different page in memory, it has to look up the physical location of that page. Each lookup takes a small amount of time, but if lookups happen often the time can become significant. The default size of these pages is 8KB on SPARC and 4KB on x86; however, the processor can use a range of page sizes. The advantage of a large page size is that the processor has to perform fewer lookups; the disadvantage is that the system may not be able to find a sufficiently large chunk of contiguous physical memory on which to allocate the large page (in which case a set of smaller pages is allocated instead).

The compiler option that controls page size is -xpagesize=size. The available sizes depend on the platform. On UltraSPARC processors, typical sizes are 8K, 64K, 512K, or 4M; for example, changing the page size from 8K (the default) to 64K reduces the number of lookups by a factor of 8. On the x86 platform the default page size is 4K, and the actual sizes available depend on the processor. It is possible to detect performance issues caused by page sizes using either trapstat (if it is available and the processor traps into Solaris to handle TLB misses) or cpustat (when the processor provides hardware performance counters that count TLB miss events).

Advanced compiler options: C/C++ pointer aliasing

There are two flags that you can use to make assertions about the use of pointers in your program. These flags tell the compiler something that it can assume about the use of pointers in your source. The compiler does not check whether the assertion is ever violated, so if your code violates the assertion, your program might not behave in the way you intended. Note that lint can help you do some validity checking of the code at a particular -xalias_level. (See Chapter 4, lint Source Code Checker, in Sun Studio 12: C User’s Guide.)

The two assertion flags are -xrestrict and -xalias_level.

A useful piece of terminology is the expression 'alias'. Two pointers alias if they point to the same location in memory. The flags -xrestrict and -xalias_level tell the compiler what degree of aliasing to assume in the code. For the compiler, aliasing means that stores to the memory addressed by one pointer may change the memory addressed by the other pointer -- this means that the compiler has to be very careful never to reorder stores and loads in expressions containing pointers, and it may also have to reload the values of memory accessed through pointers after new data is stored into memory.

The following table summarizes the options for -xalias_level for C (cc).

any
    Any pointers can alias (default).

basic
    Basic types do not alias each other (for example, int* and float*).

weak
    Structure pointers alias by offset. Structure members of the same type at the same offset (in bytes) from the structure pointer may alias.

layout
    Structure pointers alias by common fields. If the first few fields of two structure pointers have identical types, then they may potentially alias.

strict
    Pointers to structures containing different variable types do not alias.

std
    Pointers to differently named structures do not alias (so even if all the elements in the structures have the same types, if the structures have different names then they do not alias).

strong
    There are no pointers to the interiors of structures, and char* is considered a basic type (at lower levels char* is considered as potentially aliasing any other pointer).

The following table summarizes the options for -xalias_level for C++ (CC). 

any
    Any pointers can alias (default).

simple
    Basic types do not alias (same as basic for C).

compatible
    Corresponds to layout for C.

Notes

A set of flags to try

The final thing to do is to pull all these points together into a suggestion for a good set of flags. Remember that this set of flags may not actually be appropriate for your application, but it should give you a good starting point. (Use of the flags in square brackets depends on special circumstances.)

-g
    Generate debugging information (may use -g0 for C++)

-fast
    Aggressive optimization

-xtarget=generic  [-xtarget=sparc64vi -fma=fused]  [-xarch=sse2 -xvector=simd]
    Specify the target platform

-xipo
    Enable interprocedural optimization

-xprofile=[collect|use]
    Compile with profile feedback

[-fsimple=0 -fns=no]
    No floating-point arithmetic optimizations. Use if IEEE-754 compliance is important.

[-xalias_level=val]
    Set the level of pointer aliasing (for C and C++). Use only if you know the option to be safe for your program.

[-xrestrict]
    Use restricted pointers (for C). Use only if you know the option to be safe for your program.

Final remarks

There are many other options that the compilers recognize. The ones presented here probably give the most noticeable performance gains for most programs and are relatively easy to use. When selecting compiler options for your program, keep in mind the two fundamental questions: what platforms will the program run on, and what assumptions are made in the code?

For details on all these options, see the Sun Studio compiler user guides and man pages.

About the Author

Darryl Gove is a senior staff engineer in Compiler Performance Engineering at Sun Microsystems Inc., analyzing and optimizing the performance of applications on current and future UltraSPARC systems. Darryl has an M.Sc. and Ph.D. in Operational Research from the University of Southampton in the UK. Before joining Sun, Darryl held various software architecture and development roles in the UK. He is the author of Solaris Application Programming and maintains a blog focused on developer issues.
