cmi_ini, atm_cas, cmi_get_error, attr_get, cflush, close_fb, cmi_ctl, cmi_enb, evt_get, evt_ret, fini, flush_fb, ini_th, mb_fn, open_fb, rmb_fn, rseg_del, seg_at, seg_ctl, seg_dt, seg_exp, seg_get, seg_imp, tok_del, tok_new, wmb_fn - Coherent Memory Interface
#include <cmi.h>

cmi_ctxt *cmi_ini(uint16_t verno, cmi_cbs *callback)
cmi_error cmi_get_error(cmi_ctxt *ctxt)
int atm_cas(cmi_ctxt *ctxt, uint64_t *addr, uint64_t cmpval, uint64_t swpval, uint64_t *rval)
int attr_get(cmi_ctxt *ctxt, cmi_seg seg, int32_t cmd, void *optval, size_t *optlen)
int cflush(cmi_ctxt *ctxt, void *vaddr[], int32_t addrcnt)
int close_fb(cmi_ctxt *ctxt, cmi_fb epoch)
int cmi_ctl(cmi_ctxt *ctxt, int32_t cmd, cmi_ctl_cfg *cfg)
int cmi_enb(cmi_ctxt *ctxt, int enable)
cmi_event *evt_get(cmi_ctxt *ctxt)
void evt_ret(cmi_event *event, cmi_event_ret status)
int fini(cmi_ctxt *ctxt)
int flush_fb(cmi_ctxt *ctxt, cmi_fb epoch)
int ini_th(cmi_ctxt *ctxt)
int mb_fn(cmi_ctxt *ctxt)
cmi_fb open_fb(cmi_ctxt *ctxt)
int rmb_fn(cmi_ctxt *ctxt)
int rseg_del(cmi_ctxt *ctxt, cmi_rseg *rseg)
void *seg_at(cmi_ctxt *ctxt, cmi_seg seg, void *addr, int32_t flags)
int seg_ctl(cmi_ctxt *ctxt, cmi_seg seg, int32_t cmd, cmi_ds *ds)
int seg_dt(cmi_ctxt *ctxt, cmi_seg seg, void *addr)
cmi_rseg *seg_exp(cmi_ctxt *ctxt, cmi_seg seg, int32_t attrib)
cmi_seg seg_get(cmi_ctxt *ctxt, size_t size, int32_t flags)
cmi_seg seg_imp(cmi_ctxt *ctxt, cmi_rseg *rseg)
int tok_del(cmi_ctxt *ctxt, cmi_token *tok)
cmi_token *tok_new(cmi_ctxt *ctxt, cmi_seg seg, cmi_naddr *naddr, cmi_acc flags)
int wmb_fn(cmi_ctxt *ctxt)
The Coherent Memory Interface (CMI) exposes a Distributed Shared Memory abstraction. In order to support a diverse range of networking technologies, vendors, and approaches to CMI (hardware vs. emulated 'Soft CMI', for instance), a portable interface is required. CMI clients (and possibly compiler backends) interact with CMI using the Application Programming Interface (API) outlined in this document.
Only cmi_ini() and cmi_get_error() are available externally.
The remaining functions are referenced through the function pointers defined in
cmi_ctxt, which is returned by cmi_ini().
To reference these functions, use the macro below, which is defined
in /usr/include/cmi.h.
CMIFN(ctxt,ver,fn)
Here is an example of how to call seg_get() for API version 10.
ctxt = cmi_ini(10, NULL);
seg = CMIFN(ctxt, 10, seg_get)(ctxt, size, flags);
Platform specific CMI interfaces are defined in cmi_impl.h, a
platform specific header file located at /usr/include/cmi/<vendor name>.
These interfaces are platform specific implementations that are
optimized for the particular platform and/or support platform
specific features.
To utilize such interfaces, the platform specific cmi_impl.h needs to be
included.
All exceptions that occur on CMI segments generate SIGSEGV with
SEGV_CMI in si_code. CMI clients identify the type of CMI exception from
si_errno and locate the faulting segment id in si_id.
si_addr and si_pc are assigned the virtual address and the PC where the
exception occurred.
For more detail, see CMI Exceptions in cmi(5).
cmi_ctxt* cmi_ini (uint16_t verno, cmi_cbs *callback)
Initialize the CMI library
Parameters:
verno Client requested API version number
callback CMI client callbacks.
Returns:
ctxt CMI context for process. NULL on failure.
CMI_ERR_NOTSUPP Requested API version is not supported
CMI_ERR_NOMEM Out of resources allocating context (such as when the
maximum number of contexts is already open)
CMI_ERR_BOUND Calling thread is already associated with context
This routine is called once per process to initialize the CMI library.
This will be the first CMI routine invoked by a process. A successful
call to cmi_ini() implicitly associates the calling thread with the
context as if the thread had called ini_th(ctxt). A matching call to
fini() is required to deallocate any thread specific resources
allocated by the library.
The version number field passed while initializing the library is the
requested API version the client supports. The CMI library should set
the API version supported by the library in the cmi_ctxt handle. The
returned version should be the minimum of the client requested and
library supported version. CMI will maintain binary ABI compatibility
between minor releases. Major version releases may break binary
compatibility. In such cases it is acceptable for the initialization of
the library to fail and return a NULL handle. cmi_get_error() shall
return CMI_ERR_NOTSUPP as the error code.
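For illustration, a minimal initialization sketch (assuming the client requests API version 10, as in the example above, and provides no callbacks):
#include <stdio.h>
#include <cmi.h>

static cmi_ctxt *
client_cmi_init(void)
{
        cmi_ctxt *ctxt;

        ctxt = cmi_ini(10, NULL);       /* request API version 10, no callbacks */
        if (ctxt == NULL) {
                /* ctxt is NULL, so pass NULL to obtain the failure reason */
                if (cmi_get_error(NULL) == CMI_ERR_NOTSUPP)
                        fprintf(stderr, "CMI API version not supported\n");
                return (NULL);
        }
        return (ctxt);
}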
Various CMI objects (context and segment handles, tokens etc.) are
allocated by the library however in certain environments CMI clients
may want to utilize special memory heaps to allocate CMI objects.
Clients may provide memory management callbacks that the library must
use for all memory allocation. The CMI vendor library can assume that
the specified memory allocator is thread safe. If no callbacks are
specified the library may use any suitable mechanism (such as OS
provided allocation routines) for allocation. Memory allocation failures
from client callbacks should be handled in the same way and return
CMI_ERR_NOMEM. The callback vector needs to remain valid only for the
duration of this call; the library caches the callback vector internally
in the CMI context handle once the call completes.
CMI libraries provide basic diagnosability support to aid in
debugging issues in the field. Clients can request that the CMI library
trace one or more facilities (sub-components in the CMI specification)
along with a level of tracing ranging from CMI_TRACE_LVL_ERROR (error
conditions only) to CMI_TRACE_LVL_HIGHEST (verbose tracing). The
facilities to trace are specified in cmi_cbs.trace_facilities. The
following facilities are currently defined:
o CMI_TRACE_FAC_INI: Trace library/platform initialization calls such
as cmi_ini(), ini_th() and fini().
o CMI_TRACE_FAC_CTRL: Trace control operations in the library
(cmi_ctl(), seg_ctl(), attr_get()).
o CMI_TRACE_FAC_SEG: Trace segment operations such as seg_get(),
seg_exp(), seg_imp(), seg_at(), seg_dt(), rseg_del().
o CMI_TRACE_FAC_TOK: Trace all access token operations such as
tok_new(), and tok_del().
o CMI_TRACE_FAC_EVT: Trace all event related operations such as
evt_get() and evt_ret().
o CMI_TRACE_FAC_MEM: Trace all memory related operations such as flush
barriers (cmi_enb(), open_fb(), close_fb(), flush_fb()) and memory
barriers (cmi_rmb(), cmi_wmb() and cmi_mb()) along with cache flush
operations (cflush()) and atm_cas().
Note:
All operations traced by the library utilize the alert and log
callbacks provided by the client. If no such callback is provided
by the client then the library will still trace the requested
facilities at the specified level to a platform specific location.
cmi_error cmi_get_error (cmi_ctxt *ctxt)
Return last error set for calling thread
Parameters:
ctxt CMI Context associated with thread. Can be NULL if the
cmi_ini() or ini_th() calls failed. In this case return value
should be the reason the allocation/association failed.
Returns:
cmi_error Last error set for thread.
Clients should obtain the error code immediately following a failed
operation to avoid losing the error status, as intervening operations
that succeed may clear the error.
int atm_cas(cmi_ctxt *ctxt, uint64_t *addr, uint64_t
cmpval, uint64_t swpval, uint64_t *rval)
Perform atomic Compare and Swap
Parameters:
ctxt CMI context
addr Address to perform Compare and Swap
cmpval Value to compare against
swpval Value to swap if compare succeeds
rval Address of location to store result
Returns:
0 on success. -1 on failure.
CMI_ERR_INIT Calling thread is not associated with
context
CMI_ERR_INVAL Provided context is invalid
Perform an atomic compare and swap on the provided address. If the
current value of *addr is cmpval then swpval is written into
*addr. The content of *addr before the operation is returned
in rval. The address atm_cas() is being performed on must be
within a segment. This segment may either be a remote CMI
segment (accessed after importing the segment via seg_imp()) or a
'local' CMI segment hosted on the same node and directly
attached to via seg_at(). If the atm_cas() operation is performed
on a remote segment (addr resides on a remote segment) the
atm_cas() operation cannot utilize cached values.
The result of the operation must be reflected on the
home node. If the segment is inaccessible or the calling
thread does not have access (access token revoked for
instance) then the operation can fail with
an access error/exception. The location of the result value
(rval parameter) should be in local memory.
Note:
This atomic operation is a store memory barrier i.e. all stores
preceding the atm_cas() are completed (globally visible) before
returning from this call.
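As a sketch only (assuming API version 10 and that counter points into an attached CMI segment), atm_cas() can be used in a retry loop to atomically increment a shared counter:
#include <cmi.h>

static int
cmi_counter_inc(cmi_ctxt *ctxt, uint64_t *counter)
{
        uint64_t old, seen;

        do {
                old = *counter;                 /* read the current value    */
                if (CMIFN(ctxt, 10, atm_cas)(ctxt, counter, old,
                    old + 1, &seen) != 0)
                        return (-1);            /* see cmi_get_error(ctxt)   */
        } while (seen != old);                  /* another writer won; retry */
        return (0);
}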
int attr_get(cmi_ctxt *ctxt, cmi_seg seg, int32_t
cmd, void *optval, size_t *optlen)
Query CMI context/segment attributes
Parameters:
ctxt CMI context
seg CMI segment to get attribute of
cmd CMI context/segment attribute being queried
optval Buffer to receive attribute query result
optlen Size of the optval buffer
Returns:
0 on success. -1 on failure.
CMI_ERR_INIT Calling thread is not associated with
context
CMI_ERR_INVAL Provided context, segment or command is
invalid
CMI_ERR_NOMEM Optval buffer too small to hold result
The optval buffer is allocated by the client and used by the
library to return the attribute query result. The size of
the optval buffer on entry is set by the client in the
optlen parameter. If the size of the optval buffer is not
sufficient the query fails with CMI_ERR_NOMEM and
optlen is updated to the size required to hold the result.
On successful return the size of the result is in the optlen
value-result parameter.
The following attributes are currently defined:
o CMI_ATTR_RSEG_SIZE - Query the size of a remote segment handle.
Optval buffer should be of size_t size.
o CMI_ATTR_TOKEN_SIZE - Query the size of an access token.
Optval buffer should be of size_t size.
o CMI_ATTR_NODEADDR_SIZE - Query the size of a CMI node address.
Optval buffer should be of size_t size.
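For example, a client that ships remote segment handles over the network might first query the handle size. This is a sketch only, assuming API version 10 and a previously allocated segment handle seg:
size_t rseg_sz;
size_t optlen = sizeof (rseg_sz);

if (CMIFN(ctxt, 10, attr_get)(ctxt, seg, CMI_ATTR_RSEG_SIZE,
    &rseg_sz, &optlen) != 0) {
        if (cmi_get_error(ctxt) == CMI_ERR_NOMEM) {
                /* optlen now holds the required buffer size */
        }
}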
int cflush(cmi_ctxt *ctxt, void *vaddr[], int32_t addrcnt)
Flush specified addresses to home node
Parameters:
ctxt CMI context
vaddr Array of virtual addresses that need to be flushed
addrcnt Number of elements in vaddr array
Returns:
0 on success. -1 on failure.
CMI_ERR_INIT Calling thread is not associated with
context
CMI_ERR_INVAL Provided context is invalid
CMI_ERR_HW Hardware error prevented flushing stores to
home node
CMI clients can request the CMI system to flush cache lines
containing specified addresses to the home node using
cflush(). This is a synchronous operation and on return all
dirty cache lines for the addresses specified should be
flushed to their home node. If an access error is
encountered during the flush operation an exception is
thrown. The number of cache lines/addresses to flush is
specified in the addrcnt parameter. The cflush operation can
be initiated even if the calling thread does not have a valid
flush epoch. It is also legal to flush cache lines to home
node while a flush epoch is open. To optimize performance a CMI
system can avoid flushing cache lines on a flush epoch
that have already been flushed using cflush() as long as no
additional stores have occurred that modify the cache line.
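A minimal sketch of flushing two updated locations (API version 10 assumed; p and q are addresses within attached CMI segments and are illustrative only):
void *dirty[2];

dirty[0] = p;           /* p and q point into attached CMI segments */
dirty[1] = q;
if (CMIFN(ctxt, 10, cflush)(ctxt, dirty, 2) != 0) {
        /* CMI_ERR_HW indicates the dirty lines could not be written home */
}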
int close_fb(cmi_ctxt *ctxt, cmi_fb epoch)
Close flush epoch for a thread
Parameters:
ctxt CMI context
epoch CMI epoch to close
Returns:
0 on success. -1 on failure.
CMI_ERR_INIT Calling thread is not associated with context
CMI_ERR_INVAL Provided context or epoch is invalid
CMI_ERR_HW Hardware error prevented flushing stores to home node
Closing the flush epoch for the thread implicitly flushes
all pending changes back to the home node as if the thread
had invoked flush_fb(). The CMI system does not need to
track any further stores for the thread till a new flush
epoch is started.
int cmi_ctl(cmi_ctxt *ctxt, int32_t cmd, cmi_ctl_cfg *cfg)
Query/configure CMI system attributes
Parameters:
ctxt CMI context for calling thread
cmd Command operation being requested
cfg CMI Control structure for operation
Returns:
0 on success. -1 on failure.
CMI_ERR_INIT Calling thread is not associated with context
CMI_ERR_INVAL Provided context or command is invalid
CMI_ERR_PERM Insufficient permissions for command execution
CMI_ERR_NOMEM Insufficient resources for memory reservation request
CMI_ERR_NOTSUPP Requested command/operation is not supported
CMI clients can query/configure various attributes of the
underlying CMI system. The cfg parameter is a value result
parameter. For commands that configure/set certain
attributes of underlying CMI implementation the client will
set the values being modified in the cfg parameter. For
commands that query CMI state/attributes the cfg parameter
will contain the result of the operation.
The following CMI control commands are defined:
o CMI_CTL_INFO: Obtain currently configured parameter values
and limits for the specified CMI context (see the sketch
following this command list). The following information is available:
o max_mem_avail: Maximum size of CMI memory currently
available. This is a snapshot of currently available
memory and can dynamically change as segments are
(de)allocated.
o max_mem_cfg: This is size of CMI memory configured for
the system.
o max_seg_sz: The maximum size of a CMI segment that can
be created. Similar to SHMMAX for shared memory segment.
o max_exp_segs : Maximum number of segments that can be
exported.
o max_imp_segs : Maximum number of segments that can be
imported.
o max_write_through_segs: Maximum number of write through
segments. Can be 0 if the CMI implementation does not
support write through segments.
o max_acc_toks : Maximum number of access tokens that can
be active.
o max_acc_stok : Maximum number of access tokens per
segment.
o cur_exp_segs : Current number of segments that are
exported.
o cur_imp_segs : Current number of segments that are
imported.
o cache_line_sz: Cache line size used by CMI
implementation.
o max_reco_segsz: Maximum segment size that can be passed
to a single call to seg_ctl() when recovering segment
(CMI_SEG_CHECK | CMI_SEG_RECO). Must be a multiple of
cache_line_sz.
o prot_units: This is the granularity of access protection provided
by the platform in bytes. Clients must ensure that segment
allocation sizes are a multiple of this size in seg_get(), otherwise
segment allocation will fail.
o seg_alloc_units: Clients may benefit from allocating
segments in multiples of seg_alloc_units to minimize
internal fragmentation, as the platform internally allocates
segments that are a multiple of this size. The platform can
set this value to 1 if memory fragmentation is not an
issue.
o seg_alignment: Alignment restriction for the attach address of normal
segments, i.e. those segments not created with CMI_SEG_LARGE_PAGES.
This is usually determined by any OS specific restrictions based on
the underlying page size used for the segment. Clients attempting to
attach a normal segment at an address that is not a multiple of this
alignment size will fail with error CMI_ERR_INVAL. On platforms that
have no restrictions this value can be set to 1.
o seg_lrgpg_alignment: Similar to seg_alignment for normal segments,
this is the alignment restriction for segments created using
CMI_SEG_LARGE_PAGES. Clients must ensure that segments backed by
large pages are attached at an address that is a multiple of this
size.
o CMI_CTL_CACHE_RSEG_SET: Some CMI systems may implement a
local cache to back remotely attached segments. As pages
are evicted from the cache they are synchronized with the
home node for the segment. This command allows a client to
set the size of the local cache to backup a given segment
(or potentially all segments). Only the process that
created or imported the segment can update size of local
cache.
o seg_hndl: Segment handle for which the local cache size
is being set. Can specify a valid segment handle
imported via seg_imp() or CMI_SEG_ALL for all remote
segments. CMI_SEG_ALL indicates that the mpool_sz be
applied to ALL segments exported AND imported by the
calling process.
o mpool_sz: Size of the backing cache in bytes for all (or
a specific segment).
o CMI_CTL_CACHE_RSEG_GET: Query the currently configured
size of the local cache backing remote segments.
o seg_hndl: Segment handle for which the local cache size
is being queried. CMI_SEG_ALL cannot be used to query the
cache size.
o mpool_sz: Size of the backing cache in bytes for
segment.
o CMI_CTL_RECONF_TOUT: Specify the maximum network
reconfiguration timeout after which an access exception
must be thrown for an operation.
o rcfg_tout: Reconfiguration timeout in milliseconds
o CMI_CTL_CLIENT_CONSIST: The CMI client ensures data
consistency guarantees across network/node failures. The
client is not required to call seg_ctl() to perform
recovery of data. Client specific information, e.g.
recovery logs, will be used to detect and recover data
consistency across network/node failures.
o CMI_CTL_NODE_CMAP_GET: Retrieve the connectivity map for
all nodes that either have imported/exported segments to
this context. Since determining the connectivity map may
take some finite amount of time, the CMI platform must
deliver asynchronous notifications via cmi_einfo_cmap
event. A single notification per CMI_CTL_NODE_CMAP_GET
command must be delivered. Client must specify a non-zero
ctl_cfg_cmap_reqid for a request. The cmi_einfo_cmap event
contains the client provided request ID in the event.
o CMI_CTL_MEM_RESERVE: Request reservation of CMI memory for
context. If the context is not associated with any memory
reservation then a new memory reservation of specified
size is requested, or else this is a request to modify the
reservation associated with the calling context. The
cmi_ctl_cfg mreq structure contains the amount of memory
reservation requested for the calling context. For new
reservations the key is returned in rsv_key and the
context is implicitly associated with it. Clients can
modify a previously associated reservation by
increasing/decreasing the reservation amount. If the
currently allocated memory associated with reservation is
greater than the reduced reservation request then the
request will fail with CMI_ERR_INVAL. Similarly if the
requested memory reservation exceeds the available CMI
memory then the request will fail with CMI_ERR_NOMEM. In
all cases on return from cmi_ctl() the currently allocated
memory for reservation should be returned in
ctl_cfg_mem_allocd. See Memory Reservation in cmi(5)
for more information.
o CMI_CTL_MEM_RESERVE_DEL: Delete memory reservation
associated with the context. The context must have a
memory reservation associated with it previously either by
calling CMI_CTL_MEM_RESERVE_SET or allocating a context
using CMI_CTL_MEM_RESERVE. If no memory reservation is
associated with context then this call shall fail with
CMI_ERR_INVAL. See Memory Reservation in cmi(5) for more
information.
o CMI_CTL_MEM_RESERVE_SET: Associate a memory reservation
with context. All subsequent segment allocations will
utilize the specified memory reservation. Specifying a
reservation key of 0 results in the context being
disassociated from any reservations. Subsequent segment
allocations may fail if out of resources. If the calling
context is already associated with a reservation then the
association is replaced with the new reservation being
set. An attempt to set a previously deleted or invalid
reservation will fail with CMI_ERR_INVAL and the current
reservation association for the context is not changed.
o CMI_CTL_NODE_ADDR_CMP: Compare two or more node addresses.
Clients provide the node address to compare in
ctl_cfg_ncmp_naddr. Multiple node addresses to compare
against are provided in ctl_cfg_ncmp_caddr. The size of
the ctl_cfg_ncmp_caddr array on input is specified in
ctl_cfg_ncmp_addrs. On return from this routine the
ctl_cfg_ncmp_midx array, which is allocated by the caller
and is of size ctl_cfg_ncmp_addrs is populated with the
indexes of nodes in the ctl_cfg_ncmp_caddr that match the
node being compared. The number of matched node addresses
on return from this routine is specified in
ctl_cfg_ncmp_addrs. If any of the provided node addresses
are invalid then this routine shall fail with
CMI_ERR_INVAL.
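A minimal CMI_CTL_INFO query might look like the following sketch (API version 10 assumed; only the call is shown because the layout of the result fields within cmi_ctl_cfg is implementation specific):
cmi_ctl_cfg cfg = { 0 };        /* zero the value-result structure */

if (CMIFN(ctxt, 10, cmi_ctl)(ctxt, CMI_CTL_INFO, &cfg) != 0) {
        /* e.g. CMI_ERR_NOTSUPP if the query is not supported */
}
/* on success cfg holds the configured limits described above */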
int cmi_enb(cmi_ctxt *ctxt, int enable)
Enable/Disable CMI Segment access for calling thread
Parameters:
ctxt CMI context
enable Enable/Disable access to CMI segments
Returns:
0 on success. -1 on failure
CMI_ERR_INIT Calling thread is not associated with
context
CMI_ERR_NOMEM Out of resources enabling CMI access for
thread
Access to remote segments can be controlled on a per thread
basis using cmi_enb(). Enabling remote segment access for a
thread is different from setting the access token for a
segment on a remote node. The remote access token for a
segment is configured using seg_ctl() by the process that
imports the segment (seg_imp()). The access token for a
segment must be configured as a prerequisite for any
process or thread on the remote node to access the segment.
Once the access token is configured by the importing process,
all CMI processes and threads on the node can utilize the
token to access the segment. cmi_enb(), on the other hand,
enables/disables access to ANY CMI segment for the calling
thread. Each thread must explicitly enable access to CMI
segments before accessing ANY CMI remote segment.
Note:
A thread opens a CMI access epoch by invoking
cmi_enb(TRUE) and closes the access epoch by invoking
cmi_enb(FALSE). Access to CMI remote segments by a thread is
only permitted while the CMI access epoch is open. Any access
to CMI segments (loads or stores) outside the epoch
results in an access violation exception (except
if the CMI implementation does not support per thread
enable/disable).
Explicitly enabling/disabling CMI segment access
provides an additional layer of security to guard
against errant memory accesses by a thread. CMI
applications may have well defined regions of code that
operate on CMI segments. Access to CMI can then be
enabled only for the required duration. If an
application does not wish to protect against errant
accesses to CMI segments then it can enable CMI segment
access on startup and disable it on exit for all threads.
Access to CMI remote segments is controlled on a
per thread basis using cmi_enb(). Even though a segment
may be attached and a valid access token configured for it,
access to the segment is only allowed if the thread
explicitly enables it via call to cmi_enb().
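A sketch of the intended usage, bracketing a well defined region of remote accesses with an access epoch (API version 10 assumed; the enable values 1/0 and the store target remote_slot are illustrative assumptions):
#include <cmi.h>

static int
update_protected(cmi_ctxt *ctxt, volatile uint64_t *remote_slot, uint64_t val)
{
        if (CMIFN(ctxt, 10, cmi_enb)(ctxt, 1) != 0)     /* open access epoch  */
                return (-1);

        *remote_slot = val;     /* store to an attached remote segment */

        return (CMIFN(ctxt, 10, cmi_enb)(ctxt, 0));     /* close access epoch */
}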
cmi_event* evt_get(cmi_ctxt *ctxt)
Retrieve a notification event for context
Parameters:
ctxt CMI context
Returns:
Notification event or NULL if no notification events to
process
Retrieve a CMI notification event for a context. Certain
error conditions require the co-operation of the client and
the CMI system to recover the system and program state.
Event notifications for various error conditions are defined
in section CMI Events in cmi(5). Only CMI processes that create or
import segments on a node are required to handle event
notifications. Once an event has been processed/handled by
the process it is returned back to the library via
evt_ret().
Some event types require processing the event successfully.
Any status other than CMI_EVENT_RET_DONE for these events is
fatal and the calling process can terminate when returning
the event.
Note:
The event token is allocated by the CMI library and
ownership is transferred back to the library (and may be
de-allocated) when it is returned. The client must not
reference a returned event.
void evt_ret(cmi_event *event, cmi_event_ret status)
Return a notification event back to CMI library
Parameters:
event CMI event being returned
status CMI event completion status
CMI asynchronous notification events obtained via evt_get()
are returned back to the library with the specified
completion status. Some event types require processing the
event successfully. Any status other than CMI_EVENT_RET_DONE
for these events is fatal and the calling process can
terminate when returning the event.
Note:
The event token is allocated by the CMI library and
ownership is transferred to the CMI client. The event
token should remain valid until the client returns it by
invoking evt_ret().
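A sketch of a simple event drain loop for a process that creates or imports segments (API version 10 assumed; client specific handling is omitted):
#include <cmi.h>

static void
drain_cmi_events(cmi_ctxt *ctxt)
{
        cmi_event *ev;

        while ((ev = CMIFN(ctxt, 10, evt_get)(ctxt)) != NULL) {
                /* client specific handling of the event goes here */
                CMIFN(ctxt, 10, evt_ret)(ev, CMI_EVENT_RET_DONE);
                /* ev must not be referenced after it has been returned */
        }
}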
int fini(cmi_ctxt *ctxt)
Disassociate the calling thread with a CMI context
Parameters:
ctxt CMI context to dis-associate with calling thread
Returns:
0 on success. -1 on failure.
CMI_ERR_INVAL Provided context is invalid
CMI_ERR_INIT Calling thread is not associated with
context
Disassociate the calling thread from the provided CMI context. The
library can de-allocate any thread specific resources. When
the last thread is disassociated from the context the
context can be deallocated. It is illegal for the calling
thread to invoke any CMI calls after a successful invocation
of fini(). A thread can re-associate itself with a CMI
context by invoking ini_th() and then perform CMI transfers.
Note:
If the de-allocated CMI context had imported segments from remote
contexts then the CMI platform can deliver a CMI_EVENT_RCTXT_DOWN
event to those remote contexts (if supported).
If the de-allocated CMI context had exported segments that were
imported by remote contexts then the CMI platform can deliver
a CMI_EVENT_HCTXT_DOWN event to those remote contexts (if supported).
Thread Safety: This call is multi thread safe. Multiple
threads may dis-associate themselves from a context concurrently.
int flush_fb(cmi_ctxt *ctxt, cmi_fb epoch)
Flush all stores since beginning of epoch to home node for
thread
Parameters:
ctxt CMI context
epoch CMI epoch to flush
Returns:
0 on success. -1 on failure.
CMI_ERR_INIT Calling thread is not associated with
context
CMI_ERR_INVAL Provided context or epoch is invalid
CMI_ERR_STORE An error occurred while flushing stores to
home node
CMI_ERR_HW Hardware error prevented flushing stores to
home node
Forces flushing (writing) of all stores back to the home
node for the calling thread since the last flush_fb() or the
start of the epoch. Flushing stores is a synchronous operation.
On return from this call all stores since the previous open_fb()
or flush_fb() have been reflected on the home node for the
segment.
If the platform encounters an unrecoverable error (such as
a network failure) that results in possible data
corruption/loss then a CMI_ERR_STORE error will be returned
to indicate the failure to the calling thread. An asynchronous
event may also be generated to the process that imported the
affected segment on the node. The event type to indicate
store failure should be CMI_EVENT_STORE_FAILURE.
Store failures due to loss of network connectivity or the remote
node being down are distinguished from local hardware
failures by the CMI_ERR_HW error code. CMI clients may
perform different recovery operations depending on whether the
network or remote node is down vs. the local hardware being
faulty (for instance shutting down the local node instead of
evicting the remote node on store failures). An asynchronous event may also be
generated to the process that imported the affected segment
on the node. The event type to indicate local hardware
failure should be CMI_EVENT_HW with the segment and
addresses that failed the flush as event data.
int ini_th(cmi_ctxt *ctxt)
Associate a CMI context with the calling thread
Parameters:
ctxt CMI context to associate with calling thread
Returns:
0 on success. -1 on failure. Invoke cmi_get_error() to
retrieve error codes listed below.
CMI_ERR_NOMEM Out of resources (such as maximum number
of threads already associated for context)
CMI_ERR_INVAL Provided context is invalid
CMI_ERR_BOUND Calling thread is already associated with
context
This routine is invoked by all threads to
associate themselves with a previously allocated CMI context
(except the thread that initialized the CMI context itself,
whose association is implicit). The vendor library
can allocate and associate any internal state with the
calling thread.
All threads in a process must associate themselves with a
CMI context successfully before any CMI calls can be made.
The only exception is that a thread can call cmi_get_error() to
determine the reason why the association call failed. For
every call to ini_th() a corresponding call to fini() will
be made by the thread to dis-associate itself from the
context.
Note:
Thread Safety: This call is multi thread safe.
Multiple threads may associate themselves with a context
concurrently.
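A sketch of a worker thread associating with an existing context and disassociating before it exits (API version 10 assumed):
#include <cmi.h>

static void *
worker(void *arg)
{
        cmi_ctxt *ctxt = arg;

        if (CMIFN(ctxt, 10, ini_th)(ctxt) != 0)
                return (NULL);                  /* see cmi_get_error(ctxt) */

        /* CMI operations performed by this thread go here */

        (void) CMIFN(ctxt, 10, fini)(ctxt);     /* matching disassociation */
        return (NULL);
}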
int mb_fn(cmi_ctxt *ctxt)
Perform a full (Read/Write) memory barrier
Parameters:
ctxt CMI context
Returns:
0 on success. -1 on failure.
CMI_ERR_INIT Calling thread is not associated with context
CMI_ERR_INVAL Provided context is invalid
Perform a 'Memory Barrier' that orders both loads and stores.
Loads and stores preceding the memory barrier are committed to memory
before any loads and stores following the memory barrier.
On platforms that perform natural ordering of loads and stores,
the implementation of this function can be a no-op or even a NULL
function pointer to indicate to the client that full memory barriers are not
required on the platform.
Note:
A CMI platform also provides an implementation of the
full barrier using the defined cmi_mb() macro. Platform specific
implementations are made available in cmi_impl.h header file.
Clients may invoke memory barriers via function pointers or inline
the memory barrier macros with their code.
cmi_fb open_fb(cmi_ctxt *ctxt)
Open a flush epoch to start tracking stores by thread
Parameters:
ctxt CMI context
Returns:
log Handle to flush epoch or NULL on failure
CMI_ERR_INIT Calling thread is not associated with
context
CMI_ERR_INVAL Provided context is invalid
CMI_ERR_NOMEM Out of resources allocating epoch
CMI_ERR_BOUND Thread already has a flush epoch open
CMI_ERR_HW Underlying hardware error detected on the node
In order to aid in maintaining consistency in the presence of
failures it is necessary to update home node memory at well
defined points. The CMI system needs to track stores on a
per thread basis. Tracking changes is meaningful only on
remote nodes. Starting to log changes on a home node can be
a no-op as hardware provided coherence on the node is
sufficient to guarantee consistency. It is illegal for a
thread to have more than one flush epoch open. Epoch creation in
this case can fail (return NULL) and set the error to
CMI_ERR_BOUND. However multiple threads can concurrently
operate on a segment, with a separate epoch tracking
changes for each thread.
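A sketch of the flush epoch lifecycle for a thread updating a remotely attached segment (API version 10 assumed; slot points into the remote segment and is illustrative; the NULL failure check follows the Returns description for open_fb() above):
#include <cmi.h>

static int
update_remote(cmi_ctxt *ctxt, uint64_t *slot, uint64_t val)
{
        cmi_fb epoch;

        epoch = CMIFN(ctxt, 10, open_fb)(ctxt); /* start tracking stores   */
        if (epoch == NULL)
                return (-1);                    /* see cmi_get_error(ctxt) */

        *slot = val;                            /* store to remote segment */

        if (CMIFN(ctxt, 10, flush_fb)(ctxt, epoch) != 0)
                return (-1);                    /* stores not reflected    */
        return (CMIFN(ctxt, 10, close_fb)(ctxt, epoch));
}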
int rmb_fn(cmi_ctxt *ctxt)
Perform a read memory barrier
Parameters:
ctxt CMI context
Returns:
0 on success. -1 on failure.
CMI_ERR_INIT Calling thread is not associated with context
CMI_ERR_INVAL Provided context is invalid
Perform a 'Read Memory Barrier' that only orders loads i.e. loads
preceding the memory barrier are completed before any loads following
the memory barrier (in program order). Stores can be re-ordered across
the barrier if underlying implementation supports it.
On platforms that perform natural ordering of loads the implementation
of this function can be a no-op or even a NULL function pointer to
indicate to the client that read memory barriers are not required on the
platform.
Note:
A CMI platform also provides an implementation of the
read barrier using the defined cmi_rmb() macro. Platform specific
implementations are made available in cmi_impl.h header file.
Clients may invoke memory barriers via function pointers or inline
the memory barrier macros with their code.
int rseg_del(cmi_ctxt *ctxt, cmi_rseg *rseg)
Delete a previously allocated remote segment handle
Parameters:
ctxt CMI context
rseg CMI remote segment handle to deallocate
Returns:
0 on success. -1 on failure.
CMI_ERR_INIT Calling thread is not associated with
context
CMI_ERR_INVAL Provided context, or remote segment handle
is invalid
Delete a remote segment handle previously obtained via call
to seg_exp(). This routine does not disable access to
the remote segment - to disable access to the segment the
access tokens associated with the segment should be revoked.
Similarly this routine does not deallocate the underlying
CMI segment on the node - for that the caller should use
CMI_SEG_RM command to seg_ctl(). This routine deallocates
any resources used by the calling process when exporting
the segment. If the exported handle was transmitted over the
network to remote nodes it remains valid until the access
token or the segment itself is deallocated.
void* seg_at(cmi_ctxt *ctxt, cmi_seg seg, void *addr, int32_t flags)
Attach specified CMI segment to address space of calling
process
Parameters:
ctxt CMI context
seg CMI segment to attach to
addr Address to attach segment to
flags Attach modifier flags
Returns:
Address of the attached segment
CMI_ERR_INIT Calling thread is not associated with
context
CMI_ERR_INVAL Provided context or segment is invalid or
the address is not properly aligned
CMI_ERR_NOMEM Out of resources to attach segment
CMI_ERR_PERM Insufficient permissions to attach segment
CMI_ERR_RECONFIG If segment is currently being
reconfigured
CMI_ERR_NOTSUPP Attach operation attributes not
supported
Attach the specified CMI segment to the address space of the calling process.
The segment handle is obtained from a previous call to seg_get() or
imported using a call to seg_imp(). The address to attach at in the
calling process is determined by the addr argument. The CMI library can
select a suitable unused address to map the segment to when the address
value is NULL. If the provided address is not NULL then it must be
naturally aligned to the alignment size for the segment as returned by
cmi_ctl() for the CMI_CTL_INFO command. An attempt to attach at an address
that is not a multiple of the alignment size will fail with a
CMI_ERR_INVAL error. Attach flags can be used to modify segment
attributes. The following attributes are currently defined:
o CMI_SEG_READ - Segment is attached for reading. Any stores
will generate an access error/exception.
Multiple segments can be attached within a process.
Additionally a given segment can be mapped into different
addresses within the process possibly using different
attributes (READ, READ|WRITE etc.). Any thread associated
with a context can request a segment attach. The attached
segment is visible to all threads within a process.
Note:
Thread Safety: This call is multi thread safe.
Multiple threads may request a segment to be attached
concurrently. CMI library may serialize segment attach
internally as this operation is expected to be
infrequent.
int seg_ctl(cmi_ctxt *ctxt, cmi_seg seg, int32_t cmd, cmi_ds *ds)
Perform control operations on a CMI segment
Parameters:
ctxt CMI context
seg CMI segment to perform operation on
cmd Operation to perform
ds Segment operand structure
Returns:
0 on success. -1 on failure.
CMI_ERR_INIT Calling thread is not associated with
context
CMI_ERR_INVAL Provided segment is invalid
CMI_ERR_PERM Insufficient permissions for requested
operation
CMI_ERR_NOMEM Out of resources increasing segment break
point for extensible segments
Perform the specified operation on the CMI segment. The
cmi_ds operand is a value result structure allocated by the
client and filled in by the library for some of the
operations. Currently the following operations are defined:
o CMI_SEG_RM: Mark the specified segment for deletion. This
operation can be performed in the context of the process
that created the segment via seg_get() on the home node or
a process that imports a segment via seg_imp() on remote
node. Once the last process detaches from the segment the
backing resources can be reclaimed. See Segment Lifetime
in cmi(5) for more details.
o CMI_SEG_TOKEN: Configure the remote access token for
provided segment. Remote access tokens are configured on
nodes for remote segments imported via calls to seg_imp().
Access tokens for remote segments are configured by the
process that imported the segment. Access tokens for a
segment can be configured multiple times while the segment
is active (for instance during cluster reconfiguration
where new access tokens are generated). Configuring an
access token overwrites the previously configured access
token for the segment. A CMI_ERR_PERM error is generated
if attempting to configure a remote access token on a node
it was not generated for. This is only possible for node
specific tokens.
o CMI_SEG_STATUS: Fill in the segment operand structure for
the segment.
o CMI_SEG_CHECK: Check if specified segment address is in
need of recovery. This is invoked on the home node in the
context of the process that allocated the segment via
seg_get() on notification of death of a remote context
(CMI_EVENT_RCTXT_DOWN). Clients that already provide
guarantees of consistent data (CMI_CTL_CLIENT_CONSIST set
via cmi_ctl()) do not need to check for segments requiring
recovery as data consistency is guaranteed by client
provided protocols. Segment addresses must be checked in
multiples of CMI cache line size obtained via cmi_ctl()
using command CMI_CTL_INFO. If any addresses within the
range being checked require recovery then on return the ds
operand's op.reco struct is filled with the address and
size of the region requiring recovery. The client can then
re-issue the CMI_SEG_RECO command for the entire affected
region or a subset of it.
o CMI_SEG_RECO: Recover the specified segment address range that may
be in an undetermined state across error boundaries (such
as when a remote node accessing the segment dies). This is
invoked on the home node in the context of the process
that allocated the segment via seg_get() on notification
of death of a remote context (CMI_EVENT_RCTXT_DOWN). Clients
that already provide guarantees of consistent data
(CMI_CTL_CLIENT_CONSIST set via cmi_ctl()) do not need to
explicitly recover segment ranges that may be in flux.
Segment addresses must be recovered in multiples of CMI
cache line size obtained via cmi_ctl() using command
CMI_CTL_INFO.
o CMI_SEG_BRK: Set the segment break point to the value
specified in brk.seg_brksz. An attempt to set the segment break
point beyond the maximum size of the segment specified
during creation in seg_get() shall return CMI_ERR_INVAL.
The configured break point size must be a multiple of
protection unit size on the platform, or else the resize
request will fail with CMI_ERR_INVAL.
If sufficient memory is not available to grow the segment
break point then CMI_ERR_NOMEM shall be returned. Memory
allocations for extensible segments should utilize any
configured memory reservation for the context. See Memory
Reservation in cmi(5) for more details.
Segment break point can only be changed for extensible
segments in the context of the process that created the
segment. Any attempt to modify segment break point for non-
extensible segments shall return CMI_ERR_INVAL.
CMI_ERR_PERM is generated if any process other than the
creator attempts to modify the segment break point. Segment
break point can be reduced i.e. memory freed up if the new
segment break point is smaller than the currently set break
point. A segment with 0 break point has no backing memory
allocated to it. See Extensible Segments in cmi(5) for more
details.
o CMI_SEG_INFO: Retrieve segment attributes into cmi_ds.info. This
command can be run on the home node for a locally allocated segment
as well as a remote node for an imported segment i.e. seg is an
imported segment obtained via seg_imp(). The attributes of the
exported and imported segments match. Some platforms may have
restrictions on address a segment can be attached at (due to page
size used for segment). The cmi_ds.info.seg_align field indicates the
alignment requirement for this segment and must match the alignment
restriction returned via cmi_ctl() for CMI_CTL_INFO command for this
segment type i.e. normal or large segment. Clients must specify an
address that is a multiple of this alignment constraint during
seg_at(). The size of the segment is returned in cmi_ds.info.seg_sz
along with the segment creation flags in cmi_ds.info.seg_flags.
int seg_dt(cmi_ctxt *ctxt, cmi_seg seg, void *addr)
Detach specified segment from address space of the calling
process
Parameters:
ctxt CMI context
seg CMI segment to detach
addr Address to detach
Returns:
0 on success. -1 on failure.
CMI_ERR_INIT Calling thread is not associated with
context
CMI_ERR_INVAL Provided context or segment is invalid
Detach specified segment from address space of the calling
process. The specified segment and address should have been
obtained from a previous call to seg_at(). Any thread
associated with a CMI context can request a segment detach.
On a successful return from seg_dt() the segment handle and
address space in the calling process are undefined.
Subsequent access by any thread in the process will result
in a segmentation fault or unpredictable behavior. It is
the CMI client's responsibility to ensure that no threads are
executing or referencing objects within the detached
segment.
Note:
Thread Safety: This call is multi thread safe.
Multiple threads may request a segment to be detached
concurrently.
cmi_rseg* seg_exp(cmi_ctxt *ctxt, cmi_seg seg,
int32_t attrib)
Export a locally allocated segment
Parameters:
ctxt CMI context
seg CMI segment to export
attrib Attributes of exported segments
Returns:
Remote segment handle to CMI segment on success.
NULL on failure.
CMI_ERR_INIT Calling thread is not associated with
context
CMI_ERR_INVAL Provided context or segment is invalid
CMI_ERR_NOMEM Out of resources to export segment
CMI_ERR_PERM Insufficient permissions to export segment
CMI_ERR_NOTSUPP Segment attributes not supported
Export a segment so it can be accessed over the network. A
segment can only be exported by the process that allocates
it. The remote segment handle uniquely identifies the
segment on the cluster. Remote segment handles can be
distributed over the network. The size of the handle is
vendor specific and can be obtained using CMI_ATTR_RSEG_SIZE
attribute to attr_get(). The size of the remote segment handle is
at least sizeof(cmi_rseg) but can include additional opaque
elements that are library specific. The exported segment can
be accessed over the network via an access token. The
context that exports the segment can allocate access tokens
via tok_new(). Exported segments need to be imported on
remote nodes (via seg_imp()) before they can be accessed.
The segment parameter should be a locally allocated segment
obtained by a previous call to seg_get(). It is illegal to
obtain a remote segment handle for a segment obtained via
seg_imp(). Any attempt to do so can fail with error
CMI_ERR_INVAL. A segment can be exported multiple times with
potentially differing attributes however some attributes are
mutually exclusive. A process attempting to import a segment
that conflicts with an ongoing import will fail with
CMI_ERR_INVAL during seg_imp().
Clients can specify certain attributes for the exported
segment. These attributes can modify the behavior of remote
nodes in how they access the segment. Following attributes
are currently defined:
o CMI_SEG_WRITE_THROUGH - Segment is configured for a write
through policy. All stores are uncached, synchronous,
and reflected on the home node. Write through segments are
mutually exclusive with cacheable segments, i.e. a segment
can't be imported both as a cacheable and an uncacheable
segment by the same or different processes. It is still
acceptable to export a segment as both cacheable and
uncacheable, however at any given time it can only be
imported as either a cacheable or an uncacheable segment.
Note:
The size of the remote segment handle can vary depending on the
size (and attributes) of the segment being exported.
Export Attributes: Some CMI systems may not support all
segment attributes - for example if write through
segments are not supported by the underlying CMI
implementation then attempting to export a segment with
this attribute can fail with error CMI_ERR_NOTSUPP.
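A sketch of exporting a locally allocated segment (API version 10 assumed; passing 0 for the attributes, meaning no special export attributes, is an assumption here):
#include <cmi.h>

static cmi_rseg *
export_segment(cmi_ctxt *ctxt, cmi_seg seg)
{
        cmi_rseg *rseg;

        rseg = CMIFN(ctxt, 10, seg_exp)(ctxt, seg, 0);
        if (rseg == NULL)
                return (NULL);          /* see cmi_get_error(ctxt) */

        /* the handle can now be shipped to remote nodes out of band;
         * its size can be queried via the CMI_ATTR_RSEG_SIZE attribute */
        return (rseg);
}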
cmi_seg seg_get(cmi_ctxt *ctxt, size_t size, int32_t
flags)
Allocate a CMI Segment of specified size
Parameters:
ctxt CMI context
size Size of segment to allocate
flags Segment allocation flags
Returns:
segment Handle to CMI segment on success.
CMI_SEG_INVALID on failure.
CMI_ERR_INIT Calling thread is not associated with
context
CMI_ERR_INVAL Provided context/operands are invalid
CMI_ERR_NOTSUPP Requested operation not supported
CMI_ERR_NOMEM Out of resources to allocate segment
CMI_ERR_PERM Insufficient permissions to allocate
segment
Allocate a CMI segment of specified size. The size of the
segment must be a multiple of the access protection unit
size on the platform. The size of the protection unit can
be queried via cmi_ctl() using CMI_CTL_INFO command.
Attempting to allocate a segment that is not a multiple of
the protection unit size will fail with CMI_ERR_INVAL.
Multiple segments can be created by a process.
Following flags are currently supported.
o CMI_SEG_NORESERVE: Do not reserve swap space for segment.
o CMI_SEG_LOCK: Prevent swapping of segment. Caller must
fault in pages that are to be locked explicitly.
o CMI_SEG_LARGE_PAGES: Use large pages to allocate segment. If
sufficient large pages are not available to satisfy the request then
the segment allocate operation should fail with error CMI_ERR_NOMEM.
Clients must ensure this segment is attached to an address that is a
multiple of ctl_cfg_seg_lrgpg_alignment size returned in
CMI_CTL_INFO.
o CMI_SEG_EXTENSIBLE: Create as extensible segment. Support for
extensible segments are optional. Platforms indicate support for
extensible segments by setting the CMI_CAP_EXTENSIBLE_SEGMENTS
capability flag. Any attempt to create an extensible segment on a
platform that does not support it shall fail with CMI_ERR_NOTSUPP.
For extensible segments the segment size is the maximum size of the
segment. On creation extensible segments have a size of 0 and can be
extended only by the creating context using seg_ctl() with command
CMI_SEG_BRK. See Extensible Segments in cmi(5) for more information.
o CMI_SEG_CLIENT_CONSIST: Create a segment in client consistent mode,
whereby the client uses application specific protocols to
guarantee consistency across failure boundaries. By default all
segments are created in client inconsistent mode (unless the client
explicitly enabled client consistent mode via cmi_ctl()). Clients can
request specific segments to operate in client consistent mode if the
default has not been changed. Platforms that do not support mixed mode
operation, i.e. both consistent and inconsistent segments together, can
fail the operation with a CMI_ERR_NOTSUPP error. See the Consistent data
section in cmi(5) for more information.
o CMI_SEG_CLIENT_INCONSIST: Create a segment in client inconsistent
mode. This is the default mode of operation if the client has not
explicitly enabled client consist mode via cmi_ctl(). Data residing
in this segment is not guaranteed to be consistent across failure
boundaries by any application specific protocols. Any attempt to
access memory that was in a modified/cached state on a remote node
that died or is unreachable will result in an access exception.
Clients must explicitly perform recovery of the affected regions
using seg_ctl() with the CMI_SEG_CHECK and CMI_SEG_RECO commands. See
the Consistent data section in cmi(5) for more information.
Note:
If the client does not specify the consistency mode when creating a
segment (CMI_SEG_CLIENT_CONSIST or CMI_SEG_CLIENT_INCONSIST) then
the segment is created with the currently configured consistency
mode. The default consistency mode for CMI is inconsistent mode.
Clients can change the default consistency mode via cmi_ctl() using
the CMI_CTL_CLIENT_CONSIST command. Once a client has changed the
default consistency mode to consistent it cannot be changed back to
inconsistent; however, clients can still create segments in
inconsistent mode by specifying the CMI_SEG_CLIENT_INCONSIST flag
during segment creation. If a platform does not support mixed mode
segments then all segment allocations with conflicting modes can fail
with a CMI_ERR_NOTSUPP error.
CMI error handling requires that processes that create
segments be capable of processing asynchronous event
notifications using evt_get(). CMI library may generate
notification events targeted to the context to help in
recovery of CMI and program state for allocated segment
under some error conditions. See Error Handling in cmi(5) for more
information.
Some CMI systems may lock all pages backing a CMI
segment even if the client does not explicitly specify
CMI_SEG_LOCK. This mode of operation is acceptable
however it should not be a requirement that a client
specify the CMI_SEG_LOCK for correct operation i.e. the
segment locking can be done implicitly if required by
the underlying implementation.
Thread Safety: This call is multi thread safe.
Multiple threads may request a segment to be allocated
concurrently.
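A sketch combining allocation and attach (API version 10 assumed; size must already be a multiple of the protection unit size reported by CMI_CTL_INFO; passing 0 as the attach flags for a default mapping and treating NULL as the attach failure value are assumptions):
#include <cmi.h>

static void *
alloc_and_attach(cmi_ctxt *ctxt, size_t size)
{
        cmi_seg seg;
        void *va;

        seg = CMIFN(ctxt, 10, seg_get)(ctxt, size, CMI_SEG_LOCK);
        if (seg == CMI_SEG_INVALID)
                return (NULL);                  /* see cmi_get_error(ctxt) */

        /* let the library pick a suitably aligned attach address */
        va = CMIFN(ctxt, 10, seg_at)(ctxt, seg, NULL, 0);
        return (va);
}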
cmi_seg seg_imp(cmi_ctxt *ctxt, cmi_rseg *rseg)
Import a remotely allocated segment
Parameters:
ctxt CMI context
rseg CMI segment to import
Returns:
segment Handle to CMI segment on success.
CMI_SEG_INVALID on failure
CMI_ERR_INIT Calling thread is not associated with
context
CMI_ERR_INVAL Provided context or remote segment is
invalid
CMI_ERR_NOMEM Out of resources to import segment
CMI_ERR_PERM Insufficient permissions to allocate
segment
Import a remote segment so it can be manipulated by the
process (such as an attach or control operations). A remote
segment must be imported before it can be attached to. The
library allocates a segment handle to represent the remote
segment. This operation is only required on remote nodes as
it operates on remote segment handles. If the operation is
executed on the home node where the segment identified by
rseg resides, it should return a new 'remote' segment handle for
the import even though the exported segment resides on the same
node. Access via this imported segment requires the standard
access control mechanisms.
Note:
CMI error handling requires that processes that import
segments be capable of processing asynchronous event
notifications using evt_get(). CMI library may generate
notification events targeted to the context to help in
recovery of CMI and program state for imported segment
under some error conditions. See Error Handling in cmi(5)
for more information.
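A sketch of the importing side (API version 10 assumed; the remote segment handle rseg has been received out of band):
#include <cmi.h>

static cmi_seg
import_segment(cmi_ctxt *ctxt, cmi_rseg *rseg)
{
        cmi_seg seg;

        seg = CMIFN(ctxt, 10, seg_imp)(ctxt, rseg);
        if (seg == CMI_SEG_INVALID)
                return (CMI_SEG_INVALID);       /* see cmi_get_error(ctxt) */

        /* the access token for the segment must still be configured via
         * seg_ctl() with CMI_SEG_TOKEN before the segment is accessed */
        return (seg);
}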
int tok_del(cmi_ctxt *ctxt, cmi_token *tok)
Delete/revoke an access token for a segment
Parameters:
ctxt CMI context
tok Access token to delete/revoke
Returns:
0 on success. -1 on failure.
CMI_ERR_INIT Calling thread is not associated with
context
CMI_ERR_INVAL Provided context, or token is invalid
CMI_ERR_PERM Insufficient permissions to delete access
token
Remote access to segments can be revoked by deleting access
tokens for the segment. All subsequent accesses to the
segment with a revoked token should result in an access
error/exception on the requesting node. Access tokens
control access to a segment from remote nodes. Standard OS
specific access control primitives are employed for local
access to the segment. This routine is only called on the
home node for the segment in the context of the process that
allocated the token using a call to tok_new().
For CMI implementations that utilize node specific tokens
this call only revokes the specified token associated with
the node specified during token generation. The remaining
tokens for the segment continue to be valid. A token must be
deleted for a node before a new one can be generated using
tok_new() for a given segment.
cmi_token* tok_new(cmi_ctxt *ctxt, cmi_seg seg,
cmi_naddr *naddr, cmi_acc flags)
Generate an access token for a segment
Parameters:
ctxt CMI context
seg CMI segment to generate access token
naddr CMI node address to generate access token for
flags Permissions to grant with access token
Returns:
Token on success. NULL on failure.
CMI_ERR_INIT Calling thread is not associated with
context
CMI_ERR_INVAL Provided context, segment or naddr is
invalid
CMI_ERR_PERM Insufficient permissions to generate access
token
CMI_ERR_NOMEM Out of resources allocating token
CMI_ERR_BOUND Access token already generated for remote
node
Access to segments from remote nodes is controlled via
access tokens. Remote nodes performing operations on the
segment must provide an access token (set on the remote node
using seg_ctl()). Access (both read and write) is only
allowed if the requesting access token matches the token
associated with the segment. Additionally the type of access
being performed must match the privileges granted for the
token (for example CMI_ACC_ATOMIC is required to issue an
atm_cas() operation on a remote segment). All accesses using
a mismatched token or privileges result in an access
error/exception on the requesting node.
Access tokens are semi-opaque objects generated by the CMI
library. CMI tokens are generated on the home node for the
segment and can only be generated by the process that created
the segment via a call to seg_get(). Multiple access tokens
can be created for a segment to provide fine grained access
control to the segment across a number of nodes. Access
tokens are deleted/revoked via tok_del(). The following
access privileges are defined which must be specified when
creating an access token:
o CMI_ACC_WRITE: Token can be used to perform writes/stores
to the segment
o CMI_ACC_READ: Token can be used to perform loads to the
segment
o CMI_ACC_ATOMIC: Token can be used to perform atomic
operations on the segment.
Access tokens can be generated for a specific CMI node by
providing a CMI node address. The CMI node address for the
remote node is exchanged using some out of band mechanism
and uniquely identifies a CMI node on the fabric. A node
specific access token can only be utilized by the node it is
exported to. Attempting to set a node specific token on a
remote node to which it was not exported will result in a
CMI_ERR_PERM error. Only one access token per CMI segment
can be created for a remote node. Attempting to generate
multiple access tokens for a remote node before deallocating
a previous token for the segment should fail with error
CMI_ERR_BOUND. The CMI node address is available as part of
cmi_ctxt handle and exchanged between nodes using some out
of band mechanism.
Some CMI implementations may not support exporting access
tokens at node specific granularity, i.e.
CMI_CAP_NODE_SPECIFIC_TOKEN is NOT set. On such implementations CMI
clients can only generate node agnostic access tokens using tok_new()
by providing CMI_NADDR_ANY as the node address. Node
agnostic access tokens should never generate CMI_ERR_BOUND
error during token generation. On CMI implementations that
require node specific tokens attempting to generate node
agnostic tokens will fail with error CMI_ERR_INVAL.
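A sketch of generating a node agnostic read/write token on the home node (API version 10 assumed; passing CMI_NADDR_ANY directly as the naddr argument and combining access privileges with a bitwise OR are assumptions):
#include <cmi.h>

static cmi_token *
grant_access(cmi_ctxt *ctxt, cmi_seg seg)
{
        cmi_token *tok;

        tok = CMIFN(ctxt, 10, tok_new)(ctxt, seg, CMI_NADDR_ANY,
            CMI_ACC_READ | CMI_ACC_WRITE);
        if (tok == NULL)
                return (NULL);          /* see cmi_get_error(ctxt) */
        return (tok);
}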
int wmb_fn(cmi_ctxt *ctxt)
Perform a write memory barrier
Parameters:
ctxt CMI context
Returns:
0 on success. -1 on failure.
CMI_ERR_INIT Calling thread is not associated with context
CMI_ERR_INVAL Provided context is invalid
Perform a 'Write Memory Barrier' that only orders stores i.e. stores
preceding the memory barrier are committed to memory before any stores
following the memory barrier (in program order).
On platforms that perform natural ordering of stores (such as TSO
architectures) the implementation of this function can be a no-op or
even a NULL function pointer to indicate to the client that write memory
barriers are not required on the platform.
Note:
A CMI platform also provides an implementation of the
write barrier using the defined cmi_wmb() macro. Platform specific
implementations are made available in cmi_impl.h header file.
Clients may invoke memory barriers via function pointers or inline
the memory barrier macros with their code.
See attributes(5) for descriptions of the applicable attributes.