The diagram in Figure 1-1 shows a typical performance versus problem size curve for an application running on a machine with a large amount of physical memory installed. For very small problem sizes, the entire program can fit in the data cache (D$) or the external cache (E$). Eventually, however, the data area of the program grows large enough to fill the entire 4-gigabyte virtual address space available to a 32-bit application.
Beyond the 4-gigabyte limit of the 32-bit virtual address space, application programmers can still handle large problem sizes, usually by splitting the application data set between primary memory and secondary storage, for example, a disk. Unfortunately, transferring data to and from a disk is orders of magnitude slower than memory-to-memory transfers.
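The splitting technique described above is often called out-of-core processing: rather than holding the whole data set in memory, the program streams it from disk in fixed-size chunks. The following is a minimal sketch, assuming a data set stored as a flat binary file of doubles; the function name `sum_file` and the chunk size are illustrative choices, not part of any particular API.

```c
#include <stdio.h>
#include <stdlib.h>

/* Illustrative chunk size: only this much of the data set is
   resident in primary memory at any one time. */
#define CHUNK_BYTES (64 * 1024)

/* Hypothetical out-of-core pass over a data set too large to hold
   in memory: read one chunk at a time from disk and accumulate a
   running sum. Every chunk costs a disk transfer, which is why this
   path is orders of magnitude slower than an in-memory pass. */
long double sum_file(const char *path) {
    FILE *f = fopen(path, "rb");
    if (!f)
        return 0.0L;

    static double buf[CHUNK_BYTES / sizeof(double)];
    long double total = 0.0L;
    size_t n;

    while ((n = fread(buf, sizeof(double),
                      CHUNK_BYTES / sizeof(double), f)) > 0) {
        for (size_t i = 0; i < n; i++)
            total += buf[i];
    }

    fclose(f);
    return total;
}
```

A 64-bit machine with enough physical memory can skip this machinery entirely and keep the whole array resident, which is the performance benefit the text goes on to describe.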
Today, many servers can handle more than 4 gigabytes of physical memory, and high-end desktop machines are following the same trend, but no single 32-bit program can directly address more than 4 gigabytes at a time. A 64-bit application, however, can use the 64-bit virtual address space to directly address up to 18 exabytes (one exabyte is approximately 10^18 bytes), so larger problems can be handled directly in primary memory. If the application is also multithreaded and scalable, more processors can be added to the system to speed it up even further. Such applications become limited only by the amount of physical memory in the machine.
It might seem obvious, but for a broad class of applications, the ability to handle larger problems directly in primary memory is the major performance benefit of 64-bit machines.
A greater proportion of a database can live in primary memory.
Larger CAD/CAE models and simulations can fit in primary memory.
Larger scientific computing problems can fit in primary memory.
Web caches can hold more in memory, reducing latency.