man pages section 3: Extended Library Functions, Volume 1

Updated: July 2017

cpc_bind_curlwp (3CPC)

Name

cpc_bind_curlwp, cpc_bind_pctx, cpc_bind_cpu, cpc_unbind, cpc_request_preset, cpc_set_restart - bind request sets to hardware counters

Synopsis

cc [ flag… ] file –lcpc [ library… ]
#include <libcpc.h>

int cpc_bind_curlwp(cpc_t *cpc, cpc_set_t *set, uint_t flags);

int cpc_bind_pctx(cpc_t *cpc, pctx_t *pctx, id_t id, cpc_set_t *set,
     uint_t flags);

int cpc_bind_cpu(cpc_t *cpc, processorid_t id, cpc_set_t *set,
     uint_t flags);

int cpc_unbind(cpc_t *cpc, cpc_set_t *set);

int cpc_request_preset(cpc_t *cpc, int index, uint64_t preset);

int cpc_set_restart(cpc_t *cpc, cpc_set_t *set);

Description

These functions program the processor's hardware counters according to the requests contained in the set argument. If these functions are successful, then upon return the physical counters will have been assigned to count events on behalf of each request in the set, and each counter will be enabled as configured.

The cpc_bind_curlwp() function binds the set to the calling LWP. If successful, a performance counter context is associated with the LWP that allows the system to virtualize the hardware counters and the hardware sampling to that specific LWP.

By default, the system binds the set to the current LWP only. If the CPC_BIND_LWP_INHERIT flag is present in the flags argument, however, any subsequent LWPs created by the current LWP will inherit a copy of the request set. The newly created LWP will have its virtualized 64-bit counters initialized to the preset values specified in set, and the counters will be enabled and begin counting and sampling events on behalf of the new LWP. This automatic inheritance behavior can be useful when dealing with multithreaded programs to determine aggregate statistics for the program as a whole.

If the CPC_BIND_LWP_INHERIT flag is specified and any of the requests in the set have the CPC_OVF_NOTIFY_EMT flag set, the process immediately dispatches a SIGEMT signal to the freshly created LWP so that it can preset its counters appropriately. For a CPC request, this initialization condition can be detected by calling cpc_set_sample(3CPC) and examining the counter value of any request that has CPC_OVF_NOTIFY_EMT set; the value of such a counter will be UINT64_MAX. For a SMPL request, cpc_set_sample(3CPC) returns no special value to indicate this initialization condition on the freshly created LWP.
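
The following minimal sketch (not part of this page's original examples) shows one way to combine CPC_BIND_LWP_INHERIT with a SIGEMT handler that detects the UINT64_MAX initialization value. The globals cpc, set, buf, and reqidx, and the names inherit_handler and bind_with_inherit, are illustrative and assumed to be set up as in the Examples below.

#include <inttypes.h>
#include <signal.h>
#include <libcpc.h>

/* Illustrative globals, assumed to be initialized as in the Examples below. */
static cpc_t     *cpc;
static cpc_set_t *set;
static cpc_buf_t *buf;
static int       reqidx;    /* index returned by cpc_set_add_request() */

/*
 * SIGEMT handler, registered with sigaction() as in Example 2.  A counter
 * value of UINT64_MAX marks a freshly created LWP that inherited the set.
 */
static void
inherit_handler(int sig, siginfo_t *sip, void *arg)
{
     uint64_t val;

     if (sig != SIGEMT)
          return;
     if (cpc_set_sample(cpc, set, buf) != 0)
          return;
     if (cpc_buf_get(cpc, buf, reqidx, &val) != 0)
          return;
     if (val == UINT64_MAX) {
          /* New LWP: choose a preset before counting resumes. */
          (void) cpc_request_preset(cpc, reqidx, UINT64_MAX - 999ULL);
     }
     (void) cpc_set_restart(cpc, set);
}

/* Bind the set to the calling LWP; LWPs it creates inherit a copy. */
static int
bind_with_inherit(void)
{
     return (cpc_bind_curlwp(cpc, set, CPC_BIND_LWP_INHERIT));
}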

The cpc_bind_pctx() function binds the set to the LWP specified by the pctx-id pair, where pctx refers to a handle returned from libpctx and id is the ID of the desired LWP in the target process. If successful, a performance counter context is associated with the specified LWP and the system virtualizes the hardware counters to that specific LWP. The flags argument is reserved for future use and must always be 0.
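
As a minimal sketch (not part of this page's original examples), the following function binds an already constructed set to LWP 1 of another process. It assumes the process handle is obtained with pctx_capture() from libpctx; bind_target_lwp is an illustrative name, and error handling is abbreviated.

#include <sys/types.h>
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <libcpc.h>
#include <libpctx.h>

/*
 * Bind set to LWP 1 of the process identified by pid.  The cpc handle and
 * the set are assumed to have been prepared with cpc_open(),
 * cpc_set_create(), and cpc_set_add_request() beforehand.
 */
static int
bind_target_lwp(cpc_t *cpc, cpc_set_t *set, pid_t pid)
{
     pctx_t *pctx;

     /* Grab control of the target process through libpctx. */
     if ((pctx = pctx_capture(pid, NULL, 1, NULL)) == NULL) {
          (void) fprintf(stderr, "cannot capture pid %d: %s\n",
              (int)pid, strerror(errno));
          return (-1);
     }

     /* The flags argument must be 0 for cpc_bind_pctx(). */
     if (cpc_bind_pctx(cpc, pctx, 1, set, 0) != 0) {
          (void) fprintf(stderr, "cannot bind set: %s\n", strerror(errno));
          pctx_release(pctx);
          return (-1);
     }
     return (0);
}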

The cpc_bind_cpu() function binds the set to the specified CPU and measures events occurring on that CPU regardless of which LWP is running. Only one such binding can be active on the specified CPU at a time. As long as any application has bound a set to a CPU, per-LWP counters are unavailable and any attempt to use either cpc_bind_curlwp() or cpc_bind_pctx() returns EAGAIN.

The purpose of the flags argument is to modify the behavior of cpc_bind_cpu() to adapt to different calling strategies.

Values for the flags argument are defined in libcpc.h as follows:

#define CPC_FLAGS_DEFAULT 0
#define CPC_FLAGS_NORELE  0x01
#define CPC_FLAGS_NOPBIND 0x02

When flags is set to CPC_FLAGS_DEFAULT, the library binds the calling LWP to the measured CPU with processor_bind(2). The application must not change its processor binding until after it has unbound the set with cpc_unbind().

The remaining flags may be used individually or bitwise-OR'ed together.

When only CPC_FLAGS_NORELE is asserted, the library binds the calling thread to the measured CPU with processor_bind(2), as with CPC_FLAGS_DEFAULT. When the set is unbound using cpc_unbind(), the library unbinds the set but does not unbind the calling thread from the measured CPU.

When only CPC_FLAGS_NOPBIND is asserted, the library does not bind the calling thread to the measured CPU when binding the counter set, with the expectation that the calling thread is already bound to the measured CPU. If the thread is not bound to the CPU, the function fails. When the set is unbound using cpc_unbind(), the library unbinds both the set and the calling thread from the measured CPU.

If both flags are asserted (CPC_FLAGS_NOPBIND | CPC_FLAGS_NORELE), the set is bound to and unbound from the measured CPU, but the calling thread's CPU binding is never altered.

The intended use of CPC_FLAGS_NOPBIND and CPC_FLAGS_NORELE is to allow a thread to cycle through a collection of counter sets without incurring overhead from altering the calling thread's CPU binding unnecessarily.
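
A minimal sketch of this pattern follows (not part of this page's original examples); cycle_sets is an illustrative name, and each set in sets[] is assumed to have been created and populated beforehand.

#include <sys/types.h>
#include <sys/processor.h>
#include <sys/procset.h>
#include <libcpc.h>

/*
 * Measure one CPU with several sets in turn, binding the calling thread to
 * that CPU once and leaving its binding untouched while the sets cycle.
 */
static int
cycle_sets(cpc_t *cpc, processorid_t cpu, cpc_set_t **sets, int nsets)
{
     int i;

     /* Bind the calling LWP to the measured CPU up front. */
     if (processor_bind(P_LWPID, P_MYID, cpu, NULL) != 0)
          return (-1);

     for (i = 0; i < nsets; i++) {
          /*
           * With both flags, neither cpc_bind_cpu() nor cpc_unbind()
           * alters the calling thread's CPU binding.
           */
          if (cpc_bind_cpu(cpc, cpu, sets[i],
              CPC_FLAGS_NOPBIND | CPC_FLAGS_NORELE) != 0)
               return (-1);

          /* ==> Sample sets[i] with cpc_set_sample() here <== */

          if (cpc_unbind(cpc, sets[i]) != 0)
               return (-1);
     }

     /* Drop the thread's own CPU binding when done. */
     (void) processor_bind(P_LWPID, P_MYID, PBIND_NONE, NULL);
     return (0);
}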

The cpc_request_preset() function updates the preset and current value stored in the indexed request within the currently bound set, changing the starting value of that request for the calling LWP only. The change takes effect at the next call to cpc_set_restart().

When a performance counter counting on behalf of a request with the CPC_OVF_NOTIFY_EMT flag set overflows, the performance counters are frozen and the LWP to which the set is bound receives a SIGEMT signal. The cpc_set_restart() function can be called from a SIGEMT signal handler to quickly restart the hardware counters. Counting begins from each request's original preset (see cpc_set_add_request(3CPC)), or from the preset specified in a prior call to cpc_request_preset(). Applications performing performance counter overflow profiling should use the cpc_set_restart() function to restart counting after receiving a SIGEMT overflow signal and recording any relevant program state.

When the hardware sampling for a SMPL request with the CPC_OVF_NOTIFY_EMT flag set has collected the requested number of SMPL records, the LWP to which the set is bound receives a SIGEMT signal; unlike a CPC request, however, the hardware sampling is not frozen. If the application needs to stop the hardware sampling temporarily, it can call cpc_disable(3CPC) from the SIGEMT signal handler; cpc_enable(3CPC) restarts the hardware sampling.

The cpc_unbind() function unbinds the set from the resource to which it is bound. All hardware resources associated with the bound set are freed. If the set was bound to a CPU, the calling LWP is unbound from the corresponding CPU according to the policy requested when the set was bound using cpc_bind_cpu().

Return Values

Upon successful completion these functions return 0. Otherwise, -1 is returned and errno is set to indicate the error.

Errors

Applications wanting to get detailed error values should register an error handler with cpc_seterrhndlr(3CPC). Otherwise, the library will output a specific error description to stderr.
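
A minimal sketch of registering such a handler follows (not part of this page's original examples); the handler body mirrors the one shown on cpc_seterrhndlr(3CPC), and myapp_errfn and install_errfn are illustrative names.

#include <stdarg.h>
#include <stdio.h>
#include <libcpc.h>

/*
 * Receives the name of the failing libcpc function, a subcode, and a
 * printf-style description of the failure.
 */
static void
myapp_errfn(const char *fn, int subcode, const char *fmt, va_list ap)
{
     (void) fprintf(stderr, "myapp: error in %s(): ", fn);
     (void) vfprintf(stderr, fmt, ap);
}

/*
 * Call once after cpc_open(); later binding failures are reported through
 * the handler instead of being printed directly to stderr.
 */
static void
install_errfn(cpc_t *cpc)
{
     cpc_seterrhndlr(cpc, myapp_errfn);
}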

These functions will fail if:

EACCES

For cpc_bind_curlwp(), the system has Pentium 4 processors with HyperThreading and at least one physical processor has more than one hardware thread online. See NOTES.

For cpc_bind_cpu(), the process does not have the cpc_cpu privilege to access the CPU's counters.

For cpc_bind_curlwp(), cpc_bind_cpu(), and cpc_bind_pctx(), access to the requested hypervisor event was denied.

EAGAIN

For cpc_bind_curlwp() and cpc_bind_pctx(), the performance counters are not available for use by the application.

For cpc_bind_cpu(), another process has already bound to this CPU. Only one process is allowed to bind to a CPU at a time and only one set can be bound to a CPU at a time.

EINVAL

The set does not contain any requests or cpc_set_add_request() was not called.

The value given for an attribute of a request is out of range.

The system could not assign a physical counter to each request in the set. See NOTES.

One or more requests in the set conflict and cannot be programmed simultaneously.

The set was not created with the same cpc handle.

For cpc_bind_cpu(), the specified processor does not exist.

For cpc_unbind(), the set is not bound.

For cpc_request_preset() and cpc_set_restart(), the calling LWP does not have a bound set.

ENOSYS

For cpc_bind_cpu(), the specified processor is not online.

ENOTSUP

The cpc_bind_curlwp() function was called with the CPC_OVF_NOTIFY_EMT flag, but the underlying processor is not capable of detecting counter overflow.

ESRCH

For cpc_bind_pctx(), the specified LWP in the target process does not exist.

Examples

Example 1 Use hardware performance counters to measure events in a process.

The following example demonstrates how a standalone application can be instrumented with the libcpc(3LIB) functions to use hardware performance counters to measure events in a process. The application performs 20 iterations of a computation, measuring the counter values for each iteration. By default, the example makes use of two counters to measure external cache references and external cache hits. These options are only appropriate for UltraSPARC processors. By setting the EVENT0 and EVENT1 environment variables to other strings (a list of which can be obtained from the –h option of the cpustat(1M) or cputrack(1) utilities), other events can be counted. The error() routine is assumed to be a user-provided routine analogous to the familiar printf(3C) function from the C library that also performs an exit(2) after printing the message.

#include <inttypes.h>
#include <stdlib.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <libcpc.h>
#include <errno.h>

int
main(int argc, char *argv[])
{
        int iter;
        char *event0 = NULL, *event1 = NULL;
        cpc_t *cpc;
        cpc_set_t *set;
        cpc_buf_t *diff, *after, *before;
        int ind0, ind1;
        uint64_t val0, val1;

        if ((cpc = cpc_open(CPC_VER_CURRENT)) == NULL)
                error("perf counters unavailable: %s", strerror(errno));

        if ((event0 = getenv("EVENT0")) == NULL)
                event0 = "EC_ref";
        if ((event1 = getenv("EVENT1")) == NULL)
                event1 = "EC_hit";

        if ((set = cpc_set_create(cpc)) == NULL)
                error("could not create set: %s", strerror(errno));

        if ((ind0 = cpc_set_add_request(cpc, set, event0, 0, CPC_COUNT_USER,
            0, NULL)) == -1)
                error("could not add first request: %s", strerror(errno));

        if ((ind1 = cpc_set_add_request(cpc, set, event1, 0, CPC_COUNT_USER,
            0, NULL)) == -1)
                error("could not add second request: %s", strerror(errno));

        if ((diff = cpc_buf_create(cpc, set)) == NULL)
                error("could not create buffer: %s", strerror(errno));
        if ((after = cpc_buf_create(cpc, set)) == NULL)
                error("could not create buffer: %s", strerror(errno));
        if ((before = cpc_buf_create(cpc, set)) == NULL)
                error("could not create buffer: %s", strerror(errno));

        if (cpc_bind_curlwp(cpc, set, 0) == -1)
                error("cannot bind lwp%d: %s", _lwp_self(), strerror(errno));

        for (iter = 1; iter <= 20; iter++) {

                if (cpc_set_sample(cpc, set, before) == -1)
                        break;

                /* ==> Computation to be measured goes here <== */

                if (cpc_set_sample(cpc, set, after) == -1)
                        break;

                cpc_buf_sub(cpc, diff, after, before);
                cpc_buf_get(cpc, diff, ind0, &val0);
                cpc_buf_get(cpc, diff, ind1, &val1);

                (void) printf("%3d: %" PRId64 " %" PRId64 "\n", iter,
                    val0, val1);
        }

        if (iter != 21)
                error("cannot sample set: %s", strerror(errno));

        cpc_close(cpc);

        return (0);
}
Example 2 Write a signal handler to catch overflow signals.

The following example builds on Example 1 and demonstrates how to write the signal handler to catch overflow signals. A counter is preset so that it is 1000 counts short of overflowing. After 1000 counts the signal handler is invoked.

The signal handler:

cpc_t     *cpc;
cpc_set_t *set;
cpc_buf_t *buf;
int       index;

void
emt_handler(int sig, siginfo_t *sip, void *arg)
{
     ucontext_t *uap = arg;
     uint64_t val;

     if (sig != SIGEMT || sip->si_code != EMT_CPCOVF) {
         psignal(sig, "example");
         psiginfo(sip, "example");
         return;
     }   

     (void) printf("lwp%d - si_addr %p ucontext: %%pc %p %%sp %p\n",
         _lwp_self(), (void *)sip->si_addr,
         (void *)uap->uc_mcontext.gregs[PC],
         (void *)uap->uc_mcontext.gregs[SP]);

     if (cpc_set_sample(cpc, set, buf) != 0)
         error("cannot sample: %s", strerror(errno));

     cpc_buf_get(cpc, buf, index, &val);

     (void) printf("0x%" PRIx64"\n", val);
     (void) fflush(stdout);

     /*
     * Update a request's preset and restart the counters. Counters which 
     * have not been preset with cpc_request_preset() will resume counting 
     * from their current value. 
     */
     if (cpc_request_preset(cpc, index, PRESET) != 0)
         error("cannot set preset for request %d: %s", index,
             strerror(errno));
     if (cpc_set_restart(cpc, set) != 0)
         error("cannot restart lwp%d: %s", _lwp_self(), strerror(errno));
}

The setup code, which can be positioned after the code that opens the CPC library and creates a set:

#define PRESET (UINT64_MAX - 999ull)
 
     struct sigaction act;
     ... 
     act.sa_sigaction = emt_handler;
     bzero(&act.sa_mask, sizeof (act.sa_mask));
     act.sa_flags = SA_RESTART|SA_SIGINFO;
     if (sigaction(SIGEMT, &act, NULL) == -1)
         error("sigaction: %s", strerror(errno));
 
     if ((index = cpc_set_add_request(cpc, set, event, PRESET,
         CPC_COUNT_USER | CPC_OVF_NOTIFY_EMT, 0, NULL)) == -1)
         error("cannot add request to set: %s", strerror(errno));
 
     if ((buf = cpc_buf_create(cpc, set)) == NULL)
        error("cannot create buffer: %s", strerror(errno));
 
     if (cpc_bind_curlwp(cpc, set, 0) == -1)
         error("cannot bind lwp%d: %s", _lwp_self(), strerror(errno));
 
     for (iter = 1; iter <= 20; iter++) {
         /* ==> Computation to be measured goes here <== */
     }
 
     cpc_unbind(cpc, set);      /* done */
Example 3 Use Hardware Performance Counters and Hardware Sampling to Measure Events in a Process

The following example demonstrates how a standalone application can be instrumented with the libcpc(3LIB) functions to use hardware performance counters and hardware sampling to measure events in a process on an Intel platform that supports Precise Event Based Sampling (PEBS). The sample code binds two monitoring events for the hardware performance counters and two monitoring events for the hardware sampling to the current thread. If any monitoring request causes an overflow, the signal handler invoked by the resulting SIGEMT signal retrieves the monitoring results. When the task in the section commented as "Do something here" completes, the sample code retrieves the monitoring results and closes the session.

 
 #include <stdio.h>
 #include <signal.h>
 #include <inttypes.h>
 #include <libcpc.h>
 #include <unistd.h>
 #include <stdlib.h>
 #include <errno.h>
 
 #define   NEVENTS   4
 
 #define   EVENT0    "mem_uops_retired.all_loads"
 #define   EVENT1    "mem_uops_retired.all_stores"
 #define   EVENT2    "uops_retired.all"
 #define   EVENT3    "mem_trans_retired.load_latency"
 
 #define   RATIO0    0x100000ULL
 #define   RATIO1    0x100000ULL
 #define   RATIO2    0x100000ULL
 #define   RATIO3    0x100000ULL
 
 #define   PRESET_VALUE0  (UINT64_MAX - RATIO0)
 #define   PRESET_VALUE1  (UINT64_MAX - RATIO1)
 #define   PRESET_VALUE2  (UINT64_MAX - RATIO2)
 #define   PRESET_VALUE3  (UINT64_MAX - RATIO3)
 
 typedef struct _rec_names {
      const char     *name;
      int       index;
      struct _rec_names   *next;
 } rec_names_t;

 typedef struct _rec_items {
      uint_t         max_idx;
      rec_names_t    *rec_names;
 } rec_items_t;

 typedef struct {
      char      *event;
      uint64_t  preset;
      uint_t         flag;
      cpc_attr_t     *attr;
      int       nattr;
      int       *recitems;
      uint_t         rec_count;
      int       idx;
      int       nrecs;
      rec_items_t    *ri;
 } events_t;

 static cpc_attr_t attr2[] = {{ "smpl_nrecs", 50 }};
 static cpc_attr_t attr3[] = {{ "smpl_nrecs", 10 }, { "ld_lat_threshold", 100 }};

 static events_t events[NEVENTS] = {
      {
           EVENT0, PRESET_VALUE0,
           CPC_COUNT_USER | CPC_OVF_NOTIFY_EMT,
           NULL, 0, NULL, 0, 0, 0
      },
      {
           EVENT1, PRESET_VALUE1,
           CPC_COUNT_USER | CPC_OVF_NOTIFY_EMT,
           NULL, 0, NULL, 0, 0, 0
      },
      {
           EVENT2, PRESET_VALUE2,
           CPC_COUNT_USER | CPC_OVF_NOTIFY_EMT | CPC_HW_SMPL,
           attr2, 1, NULL, 0, 0, 0
      },
      {
           EVENT3, PRESET_VALUE3,
           CPC_COUNT_USER | CPC_OVF_NOTIFY_EMT | CPC_HW_SMPL,
           attr3, 2, NULL, 0, 0, 0
      }
 };

 static int          err;
 static cpc_t        *cpc;
 static cpc_set_t    *cpc_set;
 static cpc_buf_t    *cpc_buf_sig;

 /* ARGSUSED */
 static void
 mk_rec_items(void *arg, cpc_set_t *set, int request_index, const char *name,
     int rec_idx)
 {
      events_t  *ev = (events_t *)arg;
      rec_names_t    *p, *q, *nn;

      if ((nn = malloc(sizeof (rec_names_t))) == NULL)
           return;

      nn->name = name;
      nn->index = rec_idx;

      p = NULL;
      q = ev->ri->rec_names;
      while (q != NULL) {
           if (rec_idx < q->index)
                break;
           p = q;
           q = q->next;
      }
      nn->next = q;
      if (p == NULL)
           ev->ri->rec_names = nn;
      else
           p->next = nn;

      if (ev->ri->max_idx < rec_idx)
           ev->ri->max_idx = rec_idx;
 }

 static rec_names_t *
 find_recitem(events_t *ev, int index)
 {
      rec_names_t    *p = ev->ri->rec_names;
      while (p != NULL) {
           if (p->index == index)
                return (p);
           else if (p->index > index)
                return (NULL);
           else
                p = p->next;
      }
      return (NULL);
 }

 static int
 setup_recitems(events_t *ev)
 {
      if ((ev->ri = calloc(1, sizeof (rec_items_t))) == NULL)
           return (-1);
      errno = 0;
      cpc_walk_smpl_recitems_req(cpc, cpc_set, ev->idx, ev, mk_rec_items);
      if (errno != 0)
           return (-1);
      return (0);
 }

 static void
 show_record(uint64_t *rec, events_t *ev)
 {
      rec_names_t    *item;
      int  i;

      (void) printf("----------------------------------\en");
      for (i = 0; i <= ev->ri->max_idx; i++) {
           if ((item = find_recitem(ev, i)) == NULL) {
                continue;
           }
           (void) printf("%02d: \"%s\": 0x%" PRIx64 "\en",
               i, item->name, rec[i]);
      }
      (void) printf("----------------------------------\en");
 }

 static void
 show_buf_header(cpc_buf_t *buf)
 {
      hrtime_t  ht;
      uint64_t  tick;

      (void) printf("***************** results *****************\en");
      ht = cpc_buf_hrtime(cpc, buf);
      (void) printf("hrtime: %" PRId64 \en", ht);
      tick = cpc_buf_tick(cpc, buf);
      (void) printf("tick: %" PRIu64 \en", tick);
 }

 static void
 show_cpc_buf(cpc_buf_t *buf, events_t *ev)
 {
      uint64_t  val;

      (void) printf("Req#%d:"\en", ev->idx);
      if (cpc_buf_get(cpc, buf, ev->idx, &val) != 0) {
           err = 1;
           return;
      }
      (void) printf(" counter val: 0x%" PRIx64, val);
      if (val < ev->preset)
           (void) printf(" : overflowed\en");
      else
           (void) printf("\en");
 }

 static void
 show_smpl_buf(cpc_buf_t *buf, events_t *ev)
 {
      uint64_t  *recb;
      int       i;

      (void) printf("Req#%d:\en", ev->idx);
      (void) printf(" retrieved count: %u", ev->rec_count);
      if (ev->rec_count == ev->nrecs)
           (void) printf(" : overflowed\en");
      else
           (void) printf("\en");

      for (i = 0; i < ev->rec_count; i++) {
           recb = cpc_buf_smpl_get_record(cpc, buf, ev->idx, i);
           if (recb == NULL) {
                err = 1;
                return;
           }
           show_record(recb, ev);
      }
 }

 static int
 retrieve_results(cpc_buf_t *buf)
 {
      int  i;
      int  repeat = 0;

      if (cpc_set_sample(cpc, cpc_set, buf) != 0) {
           return (-1);
      }

      show_buf_header(buf);

      /* Show CPC results */
      for (i = 0; i < NEVENTS; i++) {
           if (!(events[i].flag & CPC_HW_SMPL)) {
                /* CPC request */
                show_cpc_buf(buf, &events[i]);
                continue;
           }
           /* SMPL request */
           if (cpc_buf_smpl_rec_count(cpc, buf,
               events[i].idx, &events[i].rec_count) != 0) {
                return (-1);
           }
           if (events[i].rec_count > 0)
                show_smpl_buf(buf, &events[i]);
           if (events[i].rec_count == events[i].nrecs)
                repeat++;
      }

      /* Show remaining SMPL results */
      while (repeat > 0) {
           if (cpc_set_sample(cpc, cpc_set, buf) != 0)
                return (-1);
           repeat = 0;
           for (i = 0; i < NEVENTS; i++) {
                if (!(events[i].flag & CPC_HW_SMPL)) {
                     /* CPC request */
                     continue;
                }
                if (cpc_buf_smpl_rec_count(cpc, buf,
                    events[i].idx, &events[i].rec_count) != 0) {
                     return (-1);
                }
                if (events[i].rec_count > 0) {
                     (void) printf("For req#%d, more than 1 "
                         "retrieval of the sampling results "
                         "were required. Consider to adjust "
                         "the preset value and smpl_nrecs "
                         "value.\en", i);
                     show_smpl_buf(buf, &events[i]);
                }
                if (events[i].rec_count == events[i].nrecs)
                     repeat++;
           }
      }
      /* flushed all SMPL results */

      return (0);
 }

 /* ARGSUSED */
 static void
 sig_handler(int sig, siginfo_t *sip, void *arg)
 {
      (void) fprintf(stdout, "signal handler called\en");
      if (sig != SIGEMT || sip == NULL || sip->si_code != EMT_CPCOVF) {
           err = 1;
           return;
      }
      /* Disable all requests */
      if (cpc_disable(cpc) != 0) {
           err = 1;
           return;
      }
      if (retrieve_results(cpc_buf_sig) != 0) {
           err = 1;
           return;
      }
      /* Enable all requests */
      if (cpc_enable(cpc) != 0) {
           err = 1;
           return;
      }
      /* Restart and reset requests */
      if (cpc_set_restart(cpc, cpc_set) != 0) {
           err = 1;
           return;
      }
 }

 int
 main(void)
 {
      struct sigaction    sa;
      events_t  *ev;
      cpc_buf_t *cpc_buf;
      int       i;
      int       result = 0;

      if ((cpc = cpc_open(CPC_VER_CURRENT)) == NULL) {
           (void) fprintf(stderr, "cpc_open() failed\en");
           exit(1);
      }

      if ((cpc_caps(cpc) & CPC_CAP_OVERFLOW_SMPL) == 0) {
           (void) fprintf(stderr, "OVERFLOW CAP is missing\en");
           result = -2;
           goto cleanup_close;
      }
      if ((cpc_caps(cpc) & CPC_CAP_SMPL) == 0) {
           (void) fprintf(stderr, "HW SMPL CAP is missing\en");
           result = -2;
           goto cleanup_close;
      }
      if ((cpc_set = cpc_set_create(cpc)) == NULL) {
           (void) fprintf(stderr, "cpc_set_create() failed\en");
           result = -2;
           goto cleanup_close;
      }
      for (i = 0; i < NEVENTS; i++) {
           ev = &events[i];
           if (ev->flag & CPC_HW_SMPL) {
                ev->nrecs = ev->attr[0].ca_val;
           }
           ev->idx = cpc_set_add_request(cpc, cpc_set,
               ev->event, ev->preset, ev->flag, ev->nattr, ev->attr);
           if (ev->idx < 0) {
                (void) fprintf(stderr,
                    "cpc_set_add_request() failed\en");
                result = -2;
                goto cleanup_set;
           }
           if (ev->flag & CPC_HW_SMPL) {
                if (setup_recitems(ev) != 0) {
                     (void) fprintf(stderr,
                         "setup_recitems() failed\en");
                     result = -2;
                     goto cleanup_set;
                }
           }
      }

      if ((cpc_buf = cpc_buf_create(cpc, cpc_set)) == NULL) {
           (void) fprintf(stderr, "cpc_buf_create() failed\en");
           result = -2;
           goto cleanup_set;
      }

      if ((cpc_buf_sig = cpc_buf_create(cpc, cpc_set)) == NULL) {
           (void) fprintf(stderr, "cpc_buf_create() failed\en");
           result = -2;
           goto cleanup_set;
      }

      sa.sa_sigaction = sig_handler;
      sa.sa_flags = SA_RESTART | SA_SIGINFO;
      (void) sigemptyset(&sa.sa_mask);
      if (sigaction(SIGEMT, &sa, NULL) != 0) {
           (void) fprintf(stderr, "sigaction() failed\en");
           result = -2;
           goto cleanup_set;
      }

      if (cpc_bind_curlwp(cpc, cpc_set, 0) != 0) {
           (void) fprintf(stderr, "cpc_bind_curlwp() failed\en");
           result = -2;

           goto cleanup_set;
      }

      /*
       * ==================
       * Do something here.
       * ==================
       */

      if (err) {
           (void) fprintf(stderr, "Error happened\en");
           result = -2;
           goto cleanup_bind;
      }

      (void) cpc_disable(cpc);

      if (retrieve_results(cpc_buf) != 0) {
           (void) fprintf(stderr, "retrieve_results() failed\en");
           result = -2;
           goto cleanup_bind;
      }

 cleanup_bind:
      (void) cpc_unbind(cpc, cpc_set);
 cleanup_set:
      (void) cpc_set_destroy(cpc, cpc_set);
 cleanup_close:
      (void) cpc_close(cpc);

      return (result);
 }

Attributes

See attributes(5) for descriptions of the following attributes:

ATTRIBUTE TYPE          ATTRIBUTE VALUE
Interface Stability     Committed
MT-Level                Safe

See Also

cpustat(1M), cputrack(1), psrinfo(1M), processor_bind(2), cpc_seterrhndlr(3CPC), cpc_set_sample(3CPC), libcpc(3LIB), attributes(5)

Notes

When a set is bound, the system assigns a physical hardware counter to count on behalf of each request in the set. If such an assignment is not possible for all requests in the set, the bind function returns -1 and sets errno to EINVAL. The assignment of requests to counters depends on the capabilities of the available counters. Some processors (such as Pentium 4) have a complicated counter control mechanism that requires the reservation of limited hardware resources beyond the actual counters. Because of these limited resources, two requests for different events can be impossible to count at the same time. See the processor manual referenced by cpc_cpuref(3CPC) for details about the underlying processor's capabilities and limitations.

Some processors can be configured to dispatch an interrupt when a physical counter overflows. The most obvious use for this facility is to ensure that the full 64-bit counter values are maintained without repeated sampling. Certain hardware, such as the UltraSPARC processor, does not record which counter overflowed. A more subtle use for this facility is to preset the counter to a value slightly less than the maximum value, then use the resulting interrupt to catch the counter overflow associated with that event. The overflow can then be used as an indication of the frequency of the occurrence of that event.

The interrupt generated by the processor might not be particularly precise. That is, the particular instruction that caused the counter overflow might be earlier in the instruction stream than is indicated by the program counter value in the ucontext.

When a CPC request is added to a set with the CPC_OVF_NOTIFY_EMT flag set, then as before, the control registers and counter are preset from the 64-bit preset value given. When the flag is set, however, the kernel arranges to send the calling process a SIGEMT signal when the overflow occurs. The si_code member of the corresponding siginfo structure is set to EMT_CPCOVF and the si_addr member takes the program counter value at the time the overflow interrupt was delivered. Counting is disabled until the set is bound again.

When a SMPL request is added to a set with the CPC_OVF_NOTIFY_EMT flag set, then as before, the control registers and counter for the sampling are preset from the 64-bit preset value given. When the flag is set, however, the kernel arranges to send the calling process a SIGEMT signal when the hardware has collected the requested number of SMPL records for the SMPL request. The si_code member of the corresponding siginfo structure is set to EMT_CPCOVF and the si_addr member takes the program counter value at the time the overflow interrupt for the sampling hardware was delivered. Sampling remains enabled.

If the CPC_CAP_OVERFLOW_PRECISE bit is set in the value returned by cpc_caps(3CPC), the processor is able to determine precisely which counter has overflowed after receiving the overflow interrupt. On such processors, the SIGEMT signal is sent only if a counter overflows and the request that the counter is counting has the CPC_OVF_NOTIFY_EMT flag set. If the capability is not present on the processor, the system sends a SIGEMT signal to the process if any of its requests have the CPC_OVF_NOTIFY_EMT flag set and any counter in its set overflows.
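
For example (a brief sketch, not part of this page's original examples), an application can test for this capability to decide how much work its SIGEMT handler must do; overflow_is_precise is an illustrative name.

#include <libcpc.h>

/*
 * Returns nonzero if the processor reports precisely which counter
 * overflowed; otherwise the SIGEMT handler must be prepared to examine
 * every request that has CPC_OVF_NOTIFY_EMT set.
 */
static int
overflow_is_precise(cpc_t *cpc)
{
     return ((cpc_caps(cpc) & CPC_CAP_OVERFLOW_PRECISE) != 0);
}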

Different processors have different counter ranges available, though all processors supported by Solaris allow at least 31 bits to be specified as a counter preset value. Portable preset values lie in the range UINT64_MAX to UINT64_MAX - INT32_MAX.
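
As an arithmetic illustration (a sketch, not part of this page's original examples), a preset intended to cause an overflow after roughly nevents more occurrences is UINT64_MAX - (nevents - 1); keeping nevents - 1 at or below INT32_MAX stays within the portable range. The helper name overflow_preset is illustrative.

#include <stdint.h>

/*
 * Preset so the counter overflows (and SIGEMT is delivered) after
 * approximately nevents more events.  nevents must be at least 1, and
 * nevents - 1 must not exceed INT32_MAX for the value to be portable.
 */
static uint64_t
overflow_preset(uint32_t nevents)
{
     return (UINT64_MAX - (uint64_t)(nevents - 1));
}

With this definition, overflow_preset(1000) yields the PRESET value used in Example 2.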

The appropriate preset value will often need to be determined experimentally. Typically, this value will depend on the event being measured as well as the desire to minimize the impact of the act of measurement on the event being measured. Less frequent interrupts and samples lead to less perturbation of the system.

If the processor cannot detect counter overflow, binding fails and errno is set to ENOTSUP. Only user events can be measured using this technique. See Example 2.

Pentium 4

Most Pentium 4 events require the specification of an event mask for counting. The event mask is specified with the emask attribute.
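
A brief sketch of passing an emask attribute follows (not part of this page's original examples). The event name branch_retired appears later in this section, but the mask value 0x4 is a placeholder to be taken from the processor documentation; add_p4_request is an illustrative name.

#include <libcpc.h>

/*
 * Add a Pentium 4 request whose event needs an event mask; the mask value
 * here is a placeholder.
 */
static int
add_p4_request(cpc_t *cpc, cpc_set_t *set)
{
     cpc_attr_t attrs[] = { { "emask", 0x4 } };

     return (cpc_set_add_request(cpc, set, "branch_retired", 0,
         CPC_COUNT_USER, 1, attrs));
}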

Pentium 4 processors with HyperThreading Technology have only one set of hardware counters per physical processor. To use cpc_bind_curlwp() or cpc_bind_pctx() to measure per-LWP events on a system with Pentium 4 HT processors, a system administrator must first take processors in the system offline until each physical processor has only one hardware thread online (see the –p option to psrinfo(1M)). If a second hardware thread is brought online, all per-LWP bound contexts are invalidated and any attempt to sample or bind a CPC set returns EAGAIN.

Only one CPC set at a time can be bound to a physical processor with cpc_bind_cpu(). Any call to cpc_bind_cpu() that attempts to bind a set to a processor that shares a physical processor with a processor that already has a CPU-bound set returns an error.

To measure the shared state on a Pentium 4 processor with HyperThreading, the count_sibling_usr and count_sibling_sys attributes are provided for use with cpc_bind_cpu(). These attributes behave exactly as the CPC_COUNT_USER and CPC_COUNT_SYSTEM request flags, except that they act on the sibling hardware thread sharing the physical processor with the CPU measured by cpc_bind_cpu().

Some CPC sets will fail to bind due to resource constraints. The most common type of resource constraint is an ESCR conflict among one or more requests in the set. For example, the branch_retired event cannot be measured on counters 12 and 13 simultaneously because both counters require the CRU_ESCR2 ESCR to measure this event. To measure branch_retired events simultaneously on more than one counter, use counters such that one counter uses CRU_ESCR2 and the other counter uses CRU_ESCR3. See the processor documentation for details.
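
As a brief sketch (not part of this page's original examples), a one-request set that also counts on the sibling hardware thread can be built and bound as follows; the event name is again a placeholder, bind_shared_set is an illustrative name, and cleanup on failure is elided.

#include <sys/processor.h>
#include <libcpc.h>

/*
 * Build a one-request set that counts the event in both user and system
 * mode on the sibling hardware thread as well as on the measured CPU,
 * then bind it to cpuid.
 */
static int
bind_shared_set(cpc_t *cpc, processorid_t cpuid)
{
     cpc_set_t  *set;
     cpc_attr_t attrs[] = {
          { "count_sibling_usr", 1 },
          { "count_sibling_sys", 1 }
     };

     if ((set = cpc_set_create(cpc)) == NULL)
          return (-1);
     if (cpc_set_add_request(cpc, set, "branch_retired", 0,
         CPC_COUNT_USER | CPC_COUNT_SYSTEM, 2, attrs) == -1)
          return (-1);
     return (cpc_bind_cpu(cpc, cpuid, set, CPC_FLAGS_DEFAULT));
}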