Compiling very large routines (thousands of lines of code in a single procedure) at optimization level -O3 or higher may require an unreasonable amount of memory and degrade compile-time performance. You can control this by limiting the amount of virtual memory available to a single process.
In a sh shell, use the ulimit command. See sh(1).
Example: Limit virtual memory to 16 Mbytes (ulimit -d takes the limit in Kbytes):
demo$ ulimit -d 16000
In a csh shell, use the limit command. See csh(1).
Example: Limit virtual memory to 16 Mbytes:
demo% limit datasize 16M
Each of these command lines causes the optimizer to attempt to recover when it reaches 16 Mbytes of data space.
This limit cannot be greater than the system’s total available swap space and, in practice, must be small enough to permit normal use of the system while a large compilation is in progress. Be sure that no compilation consumes more than half the space.
Example: With 32 Mbytes of swap space, use the following commands:
In a sh shell:
demo$ ulimit -d 16000
In a csh shell:
demo% limit datasize 16M
The best setting depends on the degree of optimization requested and the amount of real and virtual memory available.
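As a sketch of one way to keep the limit from affecting the rest of your session, you can set it in a subshell so that only the compilation run there inherits it (sh/ksh syntax; the compile command shown in the comment is a hypothetical placeholder):

```shell
# Limit the data segment only for the compile by setting ulimit in a
# subshell (sh/ksh). The parent shell's limit is unchanged.
# Inside the parentheses you would run your compile, e.g.:
#   ( ulimit -d 16000; cc -O3 big.c )
( ulimit -d 16000; echo "limit inside subshell: $(ulimit -d) Kbytes" )
echo "limit in parent shell: $(ulimit -d)"
```

Because a subshell cannot raise a limit on its parent, this approach confines the restriction to the one large compilation without requiring you to reset the limit afterward.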
In 64-bit Solaris environments, the soft limit for the size of an application data segment is 2 Gbytes. If your application needs to allocate more space, use the shell’s limit or ulimit command to remove the limit.
For csh use:
demo% limit datasize unlimited
For sh or ksh, use:
demo$ ulimit -d unlimited
See the Solaris 64-bit Developer’s Guide for more information.
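To confirm which limit is actually in effect before starting a large build, you can print the current soft data-segment limit (sh/ksh syntax shown; in csh the equivalent would be `limit datasize`):

```shell
# Print the current soft data-segment limit (sh/ksh).
# The value is reported in Kbytes, or "unlimited" if no limit is set.
ulimit -d
```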