CHAPTER 3

Profiler

This chapter discusses the Netra DPS profiler used in the Netra Data Plane software. Topics include:

  • Profiler Introduction
  • How the Profiler Works
  • Groups and Events
  • Profiler Output
  • Profiler Examples
  • Profiling Application Performance
  • Profiling Metrics
  • Using the Profiler Script


Profiler Introduction

The Netra DPS profiler is a set of API calls that help you collect various critical data during the execution of an application. You can profile one or more areas of your application such as CPU utilization, I/O wait times, and so on. Information gathered using the profiler helps you decide where to direct performance-tuning efforts. The profiler uses special counters and resources available in the system hardware to collect critical information about the application.

As with any instrumentation-based profiling, collecting data adds a small overhead to the application run. The profiler keeps this overhead as low as possible so that the reported data closely matches an application run without the profiler API in place.


How the Profiler Works

You enable the profiler with the -pg command-line option of tejacc. You insert API calls where you want to start collecting profiling data; the profiler configures the hardware resources to capture the requested data and, at the same time, reserves and sets up the memory buffer where the data is stored. You can then insert calls to update the profiler data at any later point in the application. At each update, the profiler reads the current values of the data and stores the values in memory.

There is an option to store additional user data in memory along with each update capture. Storing this data lets you analyze the application in the context of application-specific state.

You can also obtain the current profiler data within the application and use the data as desired. Using other communication mechanisms, you can send the data to the host or to other parts of the application.

By demarcating the portions being profiled, you can dump the collected data to the console. The data is presented as a comma-delimited table that can be processed further for report generation.

To minimize the amount of memory needed for the profile capture, the profiler stores the data in a circular buffer. In a circular buffer, the earliest and the most recent data are preserved, while intermediate data is overwritten when the buffer becomes full.
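
The following sketch (in C, with a hypothetical record type and capacity) illustrates the kind of circular buffer described above: slot 0 preserves the first record, and the remaining slots are overwritten in a ring, so the earliest and the most recent records survive while intermediate records are lost.


#define NREC 8                          /* hypothetical capacity */

typedef struct {
    unsigned long cycles;               /* e.g., completed-cycles field */
    unsigned long value;                /* e.g., an event value */
} record_t;

static record_t buf[NREC];
static unsigned long nwritten;          /* total records ever stored */

/* Keep the first record in slot 0; wrap the rest through slots
 * 1..NREC-1 so that older intermediate records are overwritten
 * once the buffer fills. */
static void buf_store(record_t r)
{
    if (nwritten == 0)
        buf[0] = r;
    else
        buf[1 + (nwritten - 1) % (NREC - 1)] = r;
    nwritten++;
}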


Groups and Events

The profiling data is captured into different groups based on the significance of the data. For example, with the CPU performance group, events such as completed instruction cycles, data cache misses, and secondary cache misses are captured. In the memory performance group, events such as memory queue and memory cycles are captured. Refer to the Profiler API chapter of the Netra Data Plane Software Suite 2.0 Reference Manual for the different groups and different events that are captured and measured on the target.


Profiler Output

The profiler output consists of one line per profiler record. Each line commonly has a format of nine comma-delimited fields. The fields contain hexadecimal values. If a record is prefixed with -1, the buffer allocated for the profiler records has overrun. When a buffer overrun occurs, increase the value of the profiler_buffer_size property as described in the Configuration API chapter of the Netra Data Plane Software Suite 2.0 Reference Manual, and run the application again.
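
For example, assuming the process object shown in the configuration example later in this chapter, the buffer could be enlarged as follows (the value shown is arbitrary):


teja_process_set_property(main_process, "profiler_buffer_size", "8192");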

TABLE 3-1 describes the fields of the profiler record:


TABLE 3-1 Profiler Record Fields

CPU ID - The number representing the CPU where the current profiler call was made.

Caller ID - The number representing the source location of the teja_profiler call. The records/profiler_call_locations.txt file lists all of the IDs and their corresponding source locations.

Call Type - The type of teja_profiler call. The values are defined in the teja_profiler.h file.

Completed Cycles - The running total of completed clock cycles so far. You can use this value to calculate the time between two entries.

Program Counter - The value of the program counter when the current profiler call was invoked.

Group Type - The group number of the events started or being measured.

Event Values - The values of the events. This can be one or more columns, depending on the target CPU. The target-dependent values are described in the Profiler API chapter of the Netra Data Plane Software Suite 2.0 Reference Manual. The order of the events is the same as the order of the bits set in the event bit mask passed to teja_profiler_start, from left to right. For the entry that represents teja_profiler_start, the values represent the event types. There are two events per record (group) in the dump output:
  • event_hi - represents the higher bit set in the event mask
  • event_lo - represents the lower bit set in the event mask

Overflow - The overflow information of one or more events being measured. The value is target-dependent. Overflow values consist of the following:
  • 0x0 - no overflow
  • 0x1 - overflow of event_lo
  • 0x2 - overflow of event_hi
  • 0x3 - overflow of both event_hi and event_lo

User Data - The values of the user-defined data. Zero or more columns, depending on the number of counters allocated and recorded by the user.


Refer to Profiler Output Example for an example of dump output.


Profiler Examples

For profiler API function descriptions, refer to the Netra Data Plane Software Suite 2.0 Reference Manual.

Profiler API

CODE EXAMPLE 3-1 provides an example of profiler API usage.


CODE EXAMPLE 3-1 Sample Profiler API Usage
main()
{
    /* ...user code... */
    teja_profiler_start(TEJA_PROFILER_CMT_CPU, TEJA_PROFILER_CMT_CPU_IC_MISS);
    /* ...user code... */
    while (packet) {
        /* ...user code... */
        teja_profiler_update(TEJA_PROFILER_CMT_CPU, num_pkt);
        if (num_pkt == 100) {
            teja_profiler_dump(generator_thread);
            teja_profiler_stop(TEJA_PROFILER_CMT_CPU);
        }
    }
}

Profiler Configuration

You can change the profiler configuration in the software architecture. The following example shows the profiler properties that you can change per process.


teja_process_set_property(main_process, "profiler_log_table_size", "4096");

main_process is the process object that was created using the teja_process_create call. The property values are applied to all threads mapped to the process specified using main_process.
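
A minimal sketch combining the two steps might look as follows (the property values are illustrative, not recommendations):


/* main_process was returned earlier by teja_process_create()
 * (arguments omitted; see the Configuration API chapter). */
teja_process_set_property(main_process, "profiler_log_table_size", "4096");
teja_process_set_property(main_process, "profiler_buffer_size", "8192");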

Profiler Output Example

The following is an example of the profiler output.


TEJA_PROFILE_DUMP_START,ver2.0 
CPUID,ID,Type,Cycles,PC,Grp,Evt_Hi,Evt_Lo,Overflow,User Data 
0,2be4,1,29371aa3d0,51171c,1,100,4 
0,2bf6,1,294bbbd464,51189c,2,2,1 
0,2c0c,1,29629416a0,511a08,4,2,1 
0,2c22,1,29761be17c,511b7c,8,2,1 
0,2c38,1,2988fbbf60,511ce8,10,2,1 
0,2c4e,1,299c3ca170,511e5c,20,2,1 
0,30e6,2,2d20448f60,512904,1,36c2ba96,ce,0,0,114ee88 
0,30fe,2,2d37b98aec,512acc,2,9,9,0,0 
TEJA_PROFILE_DUMP_END 

The string ver2.0 is the profiler dump format version and serves as an identifier of the output format. Scripts written to process the output can use it to validate the format before processing further.

In the first record, call type 1 represents teja_profiler_start. The values 100 and 4 in the event_hi and event_lo columns are the types of events in group 1 being measured. In the record with ID 30e6, call type 2 represents teja_profiler_update, so the values 36c2ba96 and ce are the values of event types 100 and 4, respectively.

Cycle counts increase monotonically, so the difference between two of them gives the exact number of cycles elapsed between two profiler API calls. Dividing the difference by the processor frequency yields the actual time between the calls.
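
For example, a small helper can convert the difference between the Cycles fields of two records into seconds (the 1.0-GHz clock rate is an assumption; substitute your processor's actual frequency):


#include <stdint.h>

#define CPU_HZ 1000000000.0             /* assumed 1.0-GHz clock */

/* Seconds elapsed between two profiler records. */
static double cycles_to_seconds(uint64_t start_cycles, uint64_t end_cycles)
{
    return (double)(end_cycles - start_cycles) / CPU_HZ;
}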

IDs 2be4 and 2bf6 represent the source locations of the profiler API calls. The records/profiler_call_locations.txt file contains a table that maps IDs to actual source locations.


Profiling Application Performance

Profiling consists of instrumenting your application to extract performance information that can be used to analyze, diagnose, and tune the application.
Netra DPS provides an interface to help you obtain this information from your application. In general, profiling information consists of hardware performance counters and a few user-defined counters. This section defines the profiling information and describes how to obtain it.

Profiling is disruptive and can have a significant performance effect. Take care to minimize profiling code, and measure its effects by comparing performance with and without the profiling code. One of the most disruptive parts of profiling is printing the profiling data to the console. To reduce the effect of prints, aggregate profiling statistics over many periods before printing, and print only from a designated strand.
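
The following sketch (all names hypothetical) shows one way to follow that advice: each strand accumulates into its own slot of a shared table, and a single designated strand prints the aggregate once per many measurement periods.


#include <stdio.h>

#define NSTRANDS 32

/* One padded slot per strand to avoid false sharing (the 64-byte
 * line size is an assumption). */
struct stats {
    unsigned long pkts;
    unsigned long idle;
    char pad[64 - 2 * sizeof(unsigned long)];
};
static volatile struct stats stats_tab[NSTRANDS];

/* Fast path: each strand updates only its own slot. */
static void count_packet(int strand_id)
{
    stats_tab[strand_id].pkts++;
}

/* Called only from the designated printing strand, and only after
 * many periods, to keep console I/O out of the fast path. */
static void print_aggregate(void)
{
    unsigned long pkts = 0, idle = 0;
    int i;

    for (i = 0; i < NSTRANDS; i++) {
        pkts += stats_tab[i].pkts;
        idle += stats_tab[i].idle;
    }
    printf("packets=%lu idle_polls=%lu\n", pkts, idle);
}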

The hardware counters for the CPU, DRAM controllers, and JBus are described in TABLE 3-2, TABLE 3-3, and TABLE 3-4 respectively.


TABLE 3-2 Sun UltraSPARC T1 CPU Counters

instr_cnt - Number of completed instructions. Annulled, mispredicted, or trapped instructions are not counted.[1]

SB_full - Number of store buffer full cycles.[2]

FP_instr_cnt - Number of completed floating-point instructions.[3] Annulled or trapped instructions are not counted.

IC_miss - Number of instruction cache (L1) misses.

DC_miss - Number of data cache (L1) misses for loads (store misses are not included because the cache is write-through nonallocating).

ITLB_miss - Number of instruction TLB miss traps taken (includes real_translation misses).

DTLB_miss - Number of data TLB miss traps taken (includes real_translation misses).

L2_imiss - Number of secondary cache (L2) misses due to instruction cache requests.

L2_dmiss_ld - Number of secondary cache (L2) misses due to data cache load requests.[4]



TABLE 3-3 DRAM Performance Counters

mem_reads - Number of read transactions.

mem_writes - Number of write transactions.

bank_busy_stalls - Number of bank busy stalls (when transactions are pending).

rd_queue_latency - Read queue latency (incremented by the number of read transactions in the queue each cycle).

wr_queue_latency - Write queue latency (incremented by the number of write transactions in the queue each cycle).

rw_queue_latency - Read plus write queue latency (incremented by the number of read and write transactions in the queue each cycle).

wr_buf_hits - Writeback buffer hits (incremented by 1 each time a read is deferred due to conflicts with pending writes).



TABLE 3-4 JBus Performance Counters

jbus_cycles - JBus cycles (time).

dma_reads - DMA read transactions (inbound).

dma_read_latency - Total DMA read latency.

dma_writes - DMA write transactions.

dma_write8 - DMA WR8 subtransactions.

ordering_waits - Ordering waits (JBI-to-L2 queues blocked each cycle).

pio_reads - PIO read transactions (outbound).

pio_read_latency - Total PIO read latency.

pio_writes - PIO write transactions.

aok_dok_off_cycles - AOK or DOK off cycles seen.

aok_off_cycles - AOK_OFF cycles seen.

dok_off_cycles - DOK_OFF cycles seen.


Each strand has its own set of CPU counters that track only that strand's events and can be accessed only by that strand. There are only two CPU counters, each 32 bits wide. To prevent overflows, the measurement period should not exceed 6 seconds; in general, keep the measurement period between 1 and 5 seconds.

When taking measurements, ensure that the application behavior is in a steady state. To check this, measure the event a few times and verify that it does not vary by more than a few percent between measurements. Because only two counters are available at a time, measuring all nine CPU counters requires eight measurements, and the application's behavior should be consistent over the entire collection period.

To profile each strand of a 32-thread application, each thread must have code to read and set the counters. Sample code is provided in CODE EXAMPLE 3-1. You must compile your own aggregate statistics across multiple strands or a core.
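
Because only two counters can be collected at a time, one possible pattern is to rotate the second event across eight consecutive measurement periods, as sketched below. The event masks are illustrative assumptions patterned after the dump output shown earlier (where 100 appears to select the instruction count and the low bits the remaining events); see the Reference Manual for the authoritative masks.


int i;

for (i = 0; i < 8; i++) {
    /* Pair the instruction counter with one other event per period. */
    teja_profiler_start(TEJA_PROFILER_CMT_CPU, 0x100 | (1 << i));
    run_one_measurement_period();       /* hypothetical helper */
    teja_profiler_update(TEJA_PROFILER_CMT_CPU, num_pkt);
    teja_profiler_stop(TEJA_PROFILER_CMT_CPU);
}
teja_profiler_dump(generator_thread);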

The JBus and DRAM controller counters are less useful for per-strand analysis. Because these resources are shared across all strands, only one thread should gather these counters.

The key user-defined statistic is the count of packets processed by the thread. Another statistic that can be important is a measure of idle time, which is the number of times the thread polled for a packet and did not find any packets to process.

The following example shows how to measure idle time. Assume that the workload looks like the following:


while (1) {
    if (work_to_do) {
        /* do work */
        work_count++;
    } else {
        idle_loop_count++;
    }
}

The user-defined counter idle_loop_count counts the number of passes through the loop in which no work was done. Measure the time of one pass through the idle loop by running the idle loop alone (idle_loop_time). Then run the real workload, counting the number of idle loops (idle_loop_count):


Idle_time = idle_loop_count * idle_loop_time
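
As a sketch, the idle-time bookkeeping reduces to a few lines (all variables hypothetical; CPU_HZ is the processor frequency in hertz):


/* idle_loop_time: seconds per empty loop pass, measured beforehand
 * by running the idle loop alone. */
double idle_time    = (double)idle_loop_count * idle_loop_time;
double total_time   = (double)elapsed_cycles / CPU_HZ;
double idle_percent = 100.0 * idle_time / total_time;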


Profiling Metrics

You can calculate the following metrics after collecting the appropriate hardware counter data using the Netra DPS profiling infrastructure. Use the metrics to quantify performance effects and to help optimize application performance.

Instructions per cycle (IPC):

Calculate this metric by dividing the instruction count by the total number of ticks during a time period when the thread is in a steady state. You can also calculate the IPC for a specific section of code. The highest number possible is 1 IPC, which is the maximum throughput of one core of the UltraSPARC T1 processor.

Cycles per instruction (CPI):

This metric is the inverse of IPC. This metric is useful for estimating the effect of various stalls in the CPU.

Instruction cache misses per instruction:

Multiplying this number by the L1 instruction cache miss latency helps estimate the cost, in cycles, of instruction cache misses. Compare this number to the overall CPI to see whether instruction cache misses are the cause of a performance bottleneck.

L2 instruction misses per instruction:

This metric indicates the number of instructions that miss in the L2 cache, and enables you to calculate the contribution of instruction misses to overall CPI.

Data cache misses per instruction:

The data cache miss rate, in combination with the L2 cache miss rate, quantifies the effect of memory accesses. Multiplying this metric by the data cache miss latency indicates its contribution to CPI.

L2 data cache misses per instruction:

Similar to the data cache miss rate, but with a higher per-miss cost in cycles contributed to overall CPI. This metric also enables you to estimate the memory bandwidth requirements.
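
A sketch of these calculations from raw counter values follows. The miss latencies are illustrative assumptions, not UltraSPARC T1 specifications; substitute measured values for your target.


#include <stdio.h>

#define IC_MISS_COST   25.0             /* assumed L1 I$ miss latency */
#define DC_MISS_COST   25.0             /* assumed L1 D$ miss latency */
#define L2_MISS_COST  150.0             /* assumed L2 miss latency */

static void report_metrics(double instr, double cycles,
                           double ic_miss, double dc_miss,
                           double l2_imiss, double l2_dmiss)
{
    double ipc = instr / cycles;
    double cpi = cycles / instr;

    /* Estimated CPI contribution of each miss class:
     * misses per instruction times cost per miss. */
    double ic_cpi = (ic_miss / instr) * IC_MISS_COST;
    double dc_cpi = (dc_miss / instr) * DC_MISS_COST;
    double l2_cpi = ((l2_imiss + l2_dmiss) / instr) * L2_MISS_COST;

    printf("IPC=%.3f CPI=%.3f (I$=%.3f D$=%.3f L2=%.3f)\n",
           ipc, cpi, ic_cpi, dc_cpi, l2_cpi);
}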


Using the Profiler Script

The profiler script summarizes the profiling output generated by the profiler. The script (written in Perl) converts the raw profiler output into a summarized format that is easy to read and interpret.


Profiler Scripts

Two scripts are available: profiler.pl and profiler_n2.pl. profiler.pl parses output generated on a Sun UltraSPARC T1 (CMT1) processor. profiler_n2.pl parses output generated on a Sun UltraSPARC T2 (CMT2) processor.

Usage

For Sun UltraSPARC T1 platforms (such as a Sun Fire T2000 system):


profiler.pl input_file > output_file

For Sun UltraSPARC T2 platforms (such as a Sun SPARC Enterprise T5220 system):


profiler_n2.pl input_file > output_file

input_file

This file contains raw profile data generated by the Netra DPS profiler. Typically, this data is captured on the console and saved into a file with a .csv suffix, indicating a CSV (comma-separated values) file. For example, input_file.csv.

output_file

This file is generated by redirecting the output of the profiler.pl (or profiler_n2.pl) script to a file. This file is also in CSV format. For example, output_file.csv.



Note - If there is no redirection (that is, output_file is not specified), the output of the script is displayed on the console.


Raw Profile Data

Raw profile data is the direct output from the profiler.

The following shows an example of the raw profile data output from a Sun UltraSPARC T1 processor:


TEJA_PROFILE_DUMP_START,ver1.1
CPUID,ID,Type,Cycles,PC,Grp,Evt_Hi,Evt_Lo,Overflow,User Data
4,18236,1,4cf2eb9ce4,521f08,1,100,1
4,3a2f,2,4d048acb40,5128f0,1,31cffa4,c2a,0,1b7740,3da594c
4,18236,1,4d048ad5c4,521f08,1,100,2
4,3a2f,2,4d162a0db0,5128f0,1,31d274e,0,0,1e8480,3da594c
4,18236,1,4d162a1888,521f08,1,100,4
4,3a2f,2,4d27c951cc,5128f0,1,31d2e36,50e,0,2191c0,3da594c
4,18236,1,4d27c95c28,521f08,1,100,8
4,3a2f,2,4d396893a0,5128f0,1,31d238f,25b863,0,249f00,3da594c
4,18236,1,4d39689dd8,521f08,1,100,10
4,3a2f,2,4d4b07cca0,5128f0,1,31cf8de,0,0,27ac40,3da594c
4,18236,1,4d4b07d708,521f08,1,100,20
4,3a2f,2,4d5ca70e88,5128f0,1,31d183c,0,0,2ab980,3da594c
4,18236,1,4d5ca7194c,521f08,1,100,40
4,3a2f,2,4d6e4654ac,5128f0,1,31d2bd3,1b2,0,2dc6c0,3da594c
4,18236,1,4d6e465ef4,521f08,1,100,80
TEJA_PROFILE_DUMP_END

The following shows an example of the raw profile data output from the Sun UltraSPARC T2 processor:


TEJA_PROFILE_DUMP_START,ver1.1
CPUID,ID,Type,Cycles,PC,Grp,Evt_Hi,Evt_Lo,Overflow,User Data
2,315,1,d8a403c78c,52cf10,1,12,12
2,21c9,2,d8a403e3b1,514fe8,1,e,e,0,927c0,1d905b
2,4cd,1,d8a403eca2,52cf10,1,22,22
2,21c9,2,d8b8cd3be2,514fe8,1,5e89cc,5e89cc,0,30d40,0
2,4cd,1,d8b8cd3fee,52cf10,1,42,42
2,21c9,2,d8cd9812d0,514fe8,1,0,0,0,30d40,0
2,4cd,1,d8cd98178a,52cf10,1,82,82
2,21c9,2,d8e2636b16,514fe8,1,db21ac,db21ac,0,30d40,0
2,4cd,1,d8e2636f18,52cf10,1,102,102
2,21c9,2,d8f72f1c5c,514fe8,1,46042d,46042d,0,30d40,0
2,4cd,1,d8f72f2058,52cf10,1,202,202
2,21c9,2,d90bfa2d22,514fe8,1,0,0,0,30d40,0
2,4cd,1,d90bfa3181,52cf10,1,402,402
2,21c9,2,d920c5ce6c,514fe8,1,24ea141,24ea141,0,30d40,0
2,4cd,1,d920c5d301,52cf10,1,802,802
2,21c9,2,d93590ffc6,514fe8,1,8fb2c,8fb2c,0,30d40,0
2,4cd,1,d9359103dc,52cf10,1,fd2,fd2
2,21c9,2,d94a5cf7e3,514fe8,1,3f5f51c,3f5f51c,0,30d40,0
2,4cd,1,d94a5cfc19,52cf10,1,13,13
2,21c9,2,d95f283398,514fe8,1,0,0,0,30d40,0
2,4cd,1,d95f28379f,52cf10,1,23,23
2,21c9,2,d973f413a1,514fe8,1,2734a8,2734a8,0,30d40,0
2,4cd,1,d973f417ba,52cf10,1,103,103
2,21c9,2,d988bfbbca,514fe8,1,0,0,0,30d40,0
2,4cd,1,d988bfbfe1,52cf10,1,203,203
2,21c9,2,d99d8be47f,514fe8,1,61aa,61aa,0,30d40,0
2,4cd,1,d99d8be94f,52cf10,1,44,44
2,21c9,2,d9b257ba5a,514fe8,1,0,0,0,30d40,0
2,4cd,1,d9b257be48,52cf10,1,84,84
2,21c9,2,d9c7237ebc,514fe8,1,0,0,0,30d40,0
2,4cd,1,d9c72382f0,52cf10,1,104,104
2,21c9,2,d9dbee7725,514fe8,1,0,0,0,30d40,0
2,4cd,1,d9dbee7b2f,52cf10,1,204,204
2,21c9,2,d9f0b99d84,514fe8,1,0,0,0,30d40,0
2,4cd,1,d9f0b9a1c5,52cf10,1,15,15
2,21c9,2,da05853c14,514fe8,1,0,0,0,30d40,0
2,4cd,1,da05854024,52cf10,1,25,25
2,21c9,2,da1a5067bf,514fe8,1,0,0,0,30d40,0
2,4cd,1,da1a506bdd,52cf10,1,45,45
2,21c9,2,da2f1c54fd,514fe8,1,300388,300388,0,30d40,0
2,4cd,1,da2f1c5948,52cf10,1,85,85
2,21c9,2,da43e87245,514fe8,1,0,0,0,30d40,0
2,4cd,1,da43e876d0,52cf10,1,105,105
2,21c9,2,da58b3416a,514fe8,1,3d0910,3d0910,0,30d40,0
2,4cd,1,da58b3457e,52cf10,1,205,205
2,21c9,2,da6d7e5a3b,514fe8,1,0,0,0,30d40,0
2,4cd,1,da6d7e5e5d,52cf10,1,16,16
2,21c9,2,da824aa191,514fe8,1,0,0,0,30d40,0
2,4cd,1,da824aa5e5,52cf10,1,26,26
2,21c9,2,da9715c92e,514fe8,1,0,0,0,30d40,0
2,4cd,1,da9715cd85,52cf10,1,46,46
2,21c9,2,daabe167f2,514fe8,1,0,0,0,30d40,0
2,4cd,1,daabe16c18,52cf10,1,86,86
2,21c9,2,dac0ad6c8d,514fe8,1,0,0,0,30d40,0
2,4cd,1,dac0ad7142,52cf10,1,106,106
2,21c9,2,dad5792613,514fe8,1,0,0,0,30d40,0
2,4cd,1,dad5792a2b,52cf10,1,206,206
2,21c9,2,daea449364,514fe8,1,0,0,0,30d40,0
2,4cd,1,daea44979f,52cf10,1,17,17
2,21c9,2,daff0f72f4,514fe8,1,0,0,0,30d40,0
2,4cd,1,daff0f76fd,52cf10,1,27,27
2,21c9,2,db13db2e84,514fe8,1,0,0,0,30d40,0
2,4cd,1,db13db32cc,52cf10,1,47,47
2,21c9,2,db28a68860,514fe8,1,0,0,0,30d40,0
2,4cd,1,db28a68c8d,52cf10,1,87,87
2,21c9,2,db3d7120a0,514fe8,1,0,0,0,30d40,0
2,4cd,1,db3d7125a6,52cf10,1,107,107
2,21c9,2,db523c58b1,514fe8,1,0,0,0,30d40,0
2,4cd,1,db523c5cdf,52cf10,1,207,207
2,21c9,2,db6707bf3f,514fe8,1,0,0,0,30d40,0
2,4cd,1,db6707c3ea,52cf10,1,4b,4b
2,21c9,2,db7bd4202d,514fe8,1,0,0,0,30d40,0
2,4cd,1,db7bd42494,52cf10,1,8b,8b
2,21c9,2,db909fb827,514fe8,1,0,0,0,30d40,0
2,4cd,1,db909fbc6c,52cf10,1,cb,cb
2,21c9,2,dba56a6332,514fe8,1,0,0,0,30d40,0
2,4cd,1,dba56a67dd,52cf10,1,12,12
TEJA_PROFILE_DUMP_END

Summarized Profile Data

Summarized profile data is the processed data generated by profiler.pl and profiler_n2.pl for the Sun UltraSPARC T1 (CMT1) and Sun UltraSPARC T2 (CMT2) processors, respectively.

Sun UltraSPARC T1 Processor Profiler Output

For the Sun UltraSPARC T1 processor, the summary displays as in the following example:


cpuid , cycle ,  SB_full ,ITLB_miss ,Instr_cnt ,FP_instr_cnt ,DTLB_miss ,IC_miss ,L2_Imiss ,DC_miss ,L2_Dmiss_LD ,userdata1 ,userdata2 ,
4 , 289219777 ,3121, 0, 51104522, 0, 0, 1080, 433, 2471858, 236191, 2600000  ,64641356 ,
CPU,StartPC,UpdatePC,Cycles,Instr_cnt,CntrName,Value,UserData.1,UserData.2,
4,0x521f08,0x5128f0,295649212,52240523,FP_instr_cnt,0,400000,64641356,
4,0x521f08,0x5128f0,147824128,26122620,IC_miss,689,600000,64641356,
4,0x521f08,0x5128f0,295647284,52238312,DC_miss,2472263,800000,64641356,
4,0x521f08,0x5128f0,295646420,52234078,ITLB_miss,0,1000000,64641356,
4,0x521f08,0x5128f0,295644896,52241803,DTLB_miss,0,1200000,64641356,
4,0x521f08,0x5128f0,295649084,52246157,L2_Imiss,434,1400000,64641356,
4,0x521f08,0x5128f0,295646316,52250156,L2_Dmiss_LD,236270,1600000,64641356,
4,0x521f08,0x5128f0,295644764,52232100,SB_full,3114,1800000,64641356,

TABLE 3-5 describes each field in the top section of the summarized Sun UltraSPARC T1 profile data output:


TABLE 3-5 Sun UltraSPARC T1 Profile Data Output Field Descriptions

cpuid - CPU ID found in the first column of the raw profile data. Note: If profiling is done for multiple strands, multiple rows of summarized data (with different CPU IDs) are shown in the top section.

cycle - Average number of clock cycles elapsed per profiling interval.

SB_full - Average number of SB_full occurrences per profiling interval.

ITLB_miss - Average number of ITLB_miss occurrences per profiling interval.

Instr_cnt - Average number of instructions executed per profiling interval.

FP_instr_cnt - Average number of floating-point instructions executed per profiling interval.

DTLB_miss - Average number of DTLB_miss occurrences per profiling interval.

IC_miss - Average number of IC_miss occurrences per profiling interval.

L2_Imiss - Average number of L2_Imiss occurrences per profiling interval.

DC_miss - Average number of DC_miss occurrences per profiling interval.

L2_Dmiss_LD - Average number of L2_Dmiss_LD occurrences per profiling interval.

UserData.1 - Average number taken from the User Defined Data1 column.

UserData.2 - Average number taken from the User Defined Data2 column.


Sun UltraSPARC T2 Processor Profiler Output

For the Sun UltraSPARC T2 processor, the summary displays as in the following example:


CPUid, cycles, Store_instr,  L2_instr_misses,  
ITLB_miss_L2,  CPU_ST_to_PCX,  MA_OP,  MA_Busy,  
Completed_branches,  Icache_misses,  Stream_LD_to_PCX,  DES_3DES_OP,  
DES_3DES_Busy_cycle,  Sethi_instr,  L2_load_misses,  DTLB_miss_L2,  
MMU_LD_to_PCX,  CRC_TCPIP_Cksum_OP,  CRC_MPA_Cksum,  Taken_branches,  
Dcache_misses,  Stream_ST_to_PCX,  AES_OP,  AES_Busy_cycle,  
Other_instr,  FGU_arithmatic_instr,  ITLB_ref_L2,  CPU_LD_to_PCX,  
RC4_OP,  RC4_Busy_cycle,  ITLB_miss,  Atomics,  
Load_instr,  DTLB_ref_L2,  CPU_Ifetch_to_PCX,  MD5_SHA1_SHA256_OP,  
MD5_SHA1_SHA256_Busy_cycle,  DTLB_miss,  TLB_miss,  All_instr,  
Userdata.1,Userdata.2,
 
17, 347989526,  3185726,  78,  
0,  3000015,  0,  0,  
5023983,  113,  0,  0,  
0,  0,  216952,  0,  
0,  0,  0,  6393524,  
2050737,  0,  0,  0,  
48603479,  0,  0,  2636500,  
0,  0,  0,  184283,  
13328505,  0,  150,  0,  
0,  0,  0,  74964356,  
210256,  1032899,  
 
17
347989526
3185726
78
0
3000015
0
0
5023983
113
0
0
0
0
216952
0
0
0
0
6393524
2050737
0
0
0
48603479
0
0
2636500
0
0
0
184283
13328505
0
150
0
0
0
0
74964356
210256
1032899



Note - The data in the second and third sections of the Sun UltraSPARC T2 summary are identical. The first section is the field header. The second section matches the layout of the field header. The third section presents the same values in a single column, which enables you to easily transfer the data into a spreadsheet column.


TABLE 3-6 describes each field in the top section of the summarized Sun UltraSPARC T2 profile data output:


TABLE 3-6 Sun UltraSPARC T2 Profile Data Output Field Descriptions

CPUid - CPU ID found in the first column of the raw profile data. Note: If profiling is done for multiple strands, multiple rows of summarized data (with different CPU IDs) are shown in the top section.

cycles - Average number of clock cycles elapsed per profiling interval.

Store_instr - Number of store instructions executed per profiling interval.

L2_instr_misses - Number of L2 cache instruction misses per profiling interval.

ITLB_miss_L2 - Average number of ITLB_miss occurrences per profiling interval.

CPU_ST_to_PCX - Number of CPU stores to the PCX (Processor to Cache interface) per profiling interval.

MA_OP - Number of MA operations executed per profiling interval.

MA_Busy - Number of busy cycles encountered per profiling interval when attempting to execute an MA operation.

Completed_branches - Number of completed branches per profiling interval.

Icache_misses - Number of instruction cache misses per profiling interval.

Stream_LD_to_PCX - Number of stream loads to the PCX per profiling interval.

DES_3DES_OP - Number of DES/3DES operations per profiling interval.

DES_3DES_Busy_cycle - Number of busy cycles encountered per profiling interval when attempting to execute DES/3DES operations.

Sethi_instr - Number of SETHI instructions executed per profiling interval.

L2_load_misses - Number of L2 cache load misses per profiling interval.

DTLB_miss_L2 - Number of data TLB misses for the L2 cache per profiling interval.

MMU_LD_to_PCX - Number of MMU loads to the PCX per profiling interval.

CRC_TCPIP_Cksum_OP - Number of CRC and TCP/IP checksum operations per profiling interval.

CRC_MPA_Cksum - Number of CRC and MPA checksum operations per profiling interval.

Taken_branches - Number of branches taken per profiling interval.

Dcache_misses - Number of L1 data cache misses per profiling interval.

Stream_ST_to_PCX - Number of stream stores to the PCX per profiling interval.

AES_OP - Number of AES operations per profiling interval.

AES_Busy_cycle - Number of busy cycles encountered per profiling interval when attempting to execute an AES operation.

Other_instr - Number of all other instructions executed per profiling interval.

FGU_arithmatic_instr - Number of floating-point arithmetic instructions executed per profiling interval.

ITLB_ref_L2 - Number of instruction TLB references to the L2 cache per profiling interval.

CPU_LD_to_PCX - Number of CPU loads to the PCX per profiling interval.

RC4_OP - Number of RC4 operations executed per profiling interval.

RC4_Busy_cycle - Number of busy cycles encountered per profiling interval when attempting to execute an RC4 operation.

ITLB_miss - Number of instruction TLB misses (for the L1 cache) per profiling interval.

Atomics - Number of atomic operations executed per profiling interval.

Load_instr - Number of load instructions executed per profiling interval.

DTLB_ref_L2 - Number of data TLB references to the L2 cache per profiling interval.

CPU_Ifetch_to_PCX - Number of CPU instruction fetches to the PCX per profiling interval.

MD5_SHA1_SHA256_OP - Number of MD5/SHA-1/SHA-256 operations executed per profiling interval.

MD5_SHA1_SHA256_Busy_cycle - Number of busy cycles encountered per profiling interval when attempting to execute an MD5/SHA-1/SHA-256 operation.

DTLB_miss - Number of data TLB misses (for the L1 cache) per profiling interval.

TLB_miss - Number of TLB misses (for the L1 cache) per profiling interval.

All_instr - Total number of instructions executed per profiling interval.

Userdata.1 - Average number taken from the User Defined Data1 column.

Userdata.2 - Average number taken from the User Defined Data2 column.


Performance Parameters Calculations

You can use the output values of the summarized data to derive various important performance parameters. This section lists performance parameters and the method by which each is derived. The formulas below use pkts_per_interval, the number of packets processed per profiling interval, which can be obtained from the Userdata.1 field.

Sun UltraSPARC T1 Processor

Instructions per Packet:

Average number of instructions executed in a packet.

Formula: value = (Instr_cnt / pkts_per_interval)

Instructions per Cycle (IPC):

Average number of instructions executed per cycle.

Formula: value = (Instr_cnt / cycle)

Packet Rate:

Average number of packets processed per second (in kilopackets per second).

Formula: value = ((pkts_per_interval / (cycle / cpu_frequency)) / 1000)

SB_full per thousand instructions:

Average number of SB_full occurrences per 1000 instructions executed.

Formula: value = ((SB_full / Instr_cnt) * 1000)

FP_instr_cnt per thousand instructions:

Average number of FP_instr_cnt occurrences per 1000 instructions executed.

Formula: value = ((FP_instr_cnt / Instr_cnt) * 1000)

IC_miss per thousand instructions:

Average number of IC_miss occurrences per 1000 instructions executed.

Formula: value = ((IC_miss / Instr_cnt) * 1000)

DC_miss per thousand instructions:

Average number of DC_miss occurrences per 1000 instructions executed.

Formula: value = ((DC_miss / Instr_cnt) * 1000)

ITLB_miss per thousand instructions:

Average number of ITLB_miss occurrences per 1000 instructions executed.

Formula: value = ((ITLB_miss / Instr_cnt) * 1000)

DTLB_miss per thousand instructions:

Average number of DTLB_miss occurrences per 1000 instructions executed.

Formula: value = ((DTLB_miss / Instr_cnt) * 1000)

L2_imiss per thousand instructions:

Average number of L2_imiss occurrences per 1000 instructions executed.

Formula: value = ((L2_imiss / Instr_cnt) * 1000)

L2_dmiss_LD per thousand instructions:

Average number of L2_Dmiss_LD occurrences per 1000 instructions executed.

Formula: value = ((L2_Dmiss_LD / Instr_cnt) * 1000)
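
The Sun UltraSPARC T1 formulas above translate directly into C, for example (field names follow the summary output; cpu_hz is the processor frequency in hertz, which you supply):


/* Events per 1000 instructions (SB_full, IC_miss, DC_miss, ...). */
static double per_1000_instr(double event_count, double instr_cnt)
{
    return (event_count / instr_cnt) * 1000.0;
}

/* Packet rate in kilopackets per second. */
static double packet_rate_kpps(double pkts_per_interval,
                               double cycle, double cpu_hz)
{
    return (pkts_per_interval / (cycle / cpu_hz)) / 1000.0;
}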

Sun UltraSPARC T2 Processor

Instruction per Packet:

Average number of instructions executed in a packet.

Formula: value = (All_instr / pkts_per_interval)

Instructions per Cycle (IPC):

Average number of instructions executed per cycle.

Formula: value = (All_instr / cycle)



Note - The Sun UltraSPARC T2 processor has two pipelines in each core. The maximum IPC of each pipeline is 1, so the maximum IPC of each core is 2. Pipeline utilization is the IPC of a pipeline multiplied by 100%. For example, if the IPC is 0.8, then the utilization of that pipeline is 80%.


Store Instructions per Packet:

Average number of Store instructions executed per packet.

Formula: value = (Store_instr / pkts_per_interval)

Load Instructions per Packet:

Average number of Load instructions executed per packet.

Formula: value = (Load_instr / pkts_per_interval)

L2 Load misses per Packet:

Average number of L2 cache Load misses per packet.

Formula: value = (L2_load_misses / pkts_per_interval)

Icache misses per 1000 Packets:

Average number of L1 Icache misses per 1000 packets.

Formula: value = ((Icache_misses * 1000) / pkts_per_interval)

Dcache misses per Packet:

Average number of L1 Dcache misses per packet.

Formula: value = (Dcache_misses / pkts_per_interval)

Packet Rate:

Average number of packets processed per second (in kilopackets per second).

Formula: value = ((pkts_per_interval / (cycle / cpu_frequency)) / 1000)
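
A corresponding sketch for the Sun UltraSPARC T2 formulas (field names follow the summary output; per the note above, pipeline utilization is IPC multiplied by 100):


/* Events per packet (Store_instr, Load_instr, L2_load_misses, ...). */
static double per_packet(double event_count, double pkts_per_interval)
{
    return event_count / pkts_per_interval;
}

/* Percent utilization of the pipeline a strand runs on. */
static double pipeline_utilization(double all_instr, double cycles)
{
    return (all_instr / cycles) * 100.0;
}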



Note - Not all possible parameters are shown here. You can derive additional parameters using the data outputs from the summary.




Note - These formulas can easily be inserted into a spreadsheet program.



To Use a Spreadsheet For Performance Analysis

1. Open the summary file.

For example, an output_file.csv generated by profiler.pl.

2. Insert formulas into the spreadsheet.

3. Save the spreadsheet for future reference.


[1] Tcc instructions that are cancelled due to encountering a higher-priority trap are still counted.
[2] SB_full increments every cycle a strand (virtual processor) is stalled due to a full store buffer, regardless of whether other strands are able to keep the processor busy. The overflow trap for SB_full is not precise: the trap might occur on the instruction following the event that set ovfl, or on either of the two instructions after that.
[3] Only floating-point instructions that execute in the shared FPU are counted. The following instructions execute in the shared FPU: FADDS, FADDD, FSUBS, FSUBD, FMULS, FMULD, FDIVS, FDIVD, FSMULD, FSTOX, FDTOX, FXTOS, FXTOD, FITOS, FDTOS, FITOD, FSTOD, FSTOI, FDTOI, FCMPS, FCMPD, FCMPES, FCMPED.
[4] L2 misses caused by stores cannot be counted by the performance instrumentation logic.