NAME | SYNOPSIS | API RESTRICTIONS | FEATURES | DESCRIPTION | EXTENDED DESCRIPTION | Physical Communication Layer | Driver Classes | Bus Control Interface | Bus Communication DDI | Properties | Events | ATTRIBUTES | SEE ALSO
#include <ddi/buscom/buscom.h>
The function or functions documented here may not be used safely in all application contexts with all APIs provided in the ChorusOS 5.0 product.
See API(5FEA) for details.
DDI
Provides an API for the development of communication drivers on a multicomputing system interconnected over an I/O bus.
In order to provide a communication protocol over an I/O bus, ChorusOS implements a layered architecture which is composed of three major layers:
Physical (low) Layer (physical communication drivers).
Logical (middle) Layer (logical communication driver).
Protocol (high) Layer (protocols and pseudo devices).
The physical layer abstracts the bus architecture, enabling the portability of all upper layers. The main task of the physical communication driver is to make shared memory resources accessible from any board within the communication domain. Typically, among all physical drivers running on a given CPU board, there is one driver which provides access to the board local memory (exported to the bus). All other physical drivers provide access to the remote memory (imported from the bus). Thus, the total number of physical drivers on a CPU board (visible to the multiplexer) is always equal to the number of CPU boards communicating over the bus (or busses). Another task of the physical driver is to provide interrupt services allowing a cross interrupt to be sent from one CPU board to another.
The logical communication driver (multiplexer) uses services provided by the physical drivers (that is, shared memory and cross interrupts) in order to implement a low level communication protocol over the bus. Note that many different implementations of such a communication layer are possible. ChorusOS implements quite a basic communication protocol providing simplex (unidirectional) communication channels over the bus. Memory resources used by a channel are specified at channel creation time. This means that the channel creator specifies the size of the FIFO used for frame transmission over the channel. A channel has a point-to-point topology allowing only one writer and one reader per channel. The channel also implements flow control, notifying the reader and writer about channel state transitions. The reader receives a notification when the channel state changes from empty to non-empty. The writer is notified when the channel state changes from full to non-full. In order to take advantage of posted writes (usually supported by bridge hardware), the channel buffer is located in the reader's local memory. Thus, the channel transmitter initiates write transactions on the I/O bus which are asynchronously forwarded to the other bus segment by the bridge hardware.
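The flow-control rule described above (notify the reader only on the empty to non-empty transition, the writer only on the full to non-full transition) can be sketched as follows. This is an illustrative model only; the type and function names and the FIFO size are assumptions, not part of the actual multiplexer DDI.

```c
#include <stdbool.h>

/* Illustrative sketch of the channel flow-control rule, not the
 * actual ChorusOS multiplexer code. All names are hypothetical. */

#define CHAN_FIFO_SIZE 8   /* fixed at channel creation time */

typedef struct {
    unsigned int frames;           /* frames currently queued */
    bool         reader_notified;  /* set on empty -> non-empty */
    bool         writer_notified;  /* set on full  -> non-full  */
} Channel;

/* Writer side: returns false when the FIFO is full (flow control). */
bool chan_put(Channel* ch)
{
    if (ch->frames == CHAN_FIFO_SIZE) {
        return false;               /* writer must wait for non-full */
    }
    if (ch->frames == 0) {
        ch->reader_notified = true; /* empty -> non-empty transition */
    }
    ch->frames++;
    return true;
}

/* Reader side: returns false when the FIFO is empty. */
bool chan_get(Channel* ch)
{
    if (ch->frames == 0) {
        return false;
    }
    if (ch->frames == CHAN_FIFO_SIZE) {
        ch->writer_notified = true; /* full -> non-full transition */
    }
    ch->frames--;
    return true;
}
```

Note that notifications fire only on state transitions, not on every put or get, which keeps the number of cross interrupts on the bus to a minimum.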
On top of the logical communication driver (multiplexer), other software layers may be implemented in order to provide a given (standard) communication protocol over the bus. For example, an ATM stack may use the multiplexer driver in order to create channels carrying AAL5 frames. Another example is an Ethernet pseudo-driver which may be implemented using multiplexer channels. Such a driver may then be used by the ChorusOS IP stack in order to provide the standard IP protocol over the bus.
This section describes the architecture and the DDI of the physical communication drivers. Note that the physical communication layer is mainly designed for the PCI/cPCI bus architecture. However, most of the design principles discussed here are also valid on other multi-board busses (like the VME bus, for example). Obviously, the physical communication DDI has to be bus architecture independent in order to enable portability of the upper layers involved in the bus communication protocol.
The architecture described here also addresses issues specific to hot-pluggable (hot-swappable) busses. On the other hand, this may also be considered as a more generic issue related to the dynamic configuration (or re-configuration) of the hardware/software that provides the communication protocol over the bus. The latter issue is still valid on non hot-pluggable busses because there are multiple interconnected CPU boards and each of them may be shut down and restarted independently.
The architecture also addresses issues related to the communication over multiple bus segments (homogeneous or heterogeneous). This enables a single communication domain to be provided for multiple busses (sub-busses) connected together. The inter-bus connection is supported by the physical communication layer and it is transparent to the upper layer multiplexer.
This section introduces the classes of physical communication drivers. Each driver instance falls into one of these classes. The driver instance class defines the driver instance's role within the physical communication layer.
There are four main communication driver classes:
Host (xH).
Master (xM).
Slave (RS).
Nexus (NX).
The host and master classes are further divided into subclasses.
The host class is divided into two subclasses:
Global host (GH).
Local host (LH).
The master class is also divided into two subclasses:
Local master (LM).
Remote master (RM).
In total, there are six driver classes: GH, LH, LM, RM, RS and NX. Note that in the rest of this document the main class names xH and xM are used to designate both subclasses, that is, LH/GH and LM/RM respectively.
The xH class designates a driver that manages the local memory. There is only one instance of the xH class driver on a CPU board. The xH driver provides the Local Bus Communication DDI to the multiplexer to access the local memory. In addition, the xH driver plays a central role in the relationships between different physical communication drivers running on the board. This is a point through which multiple bus segments (residing on the board or accessible from the board) are connected. Basically, the xH driver is a pseudo driver. It does not correspond to a particular device. Typically, such a driver is embedded in the host bus bridge driver.
Among all xH driver instances there is one GH driver playing a central role within the communication domain. Such a GH driver instance is unique in the communication domain. In fact, the GH role may be assigned arbitrarily to any LH driver. On the other hand, because the GH driver role is critical, the GH driver is typically running on the host system processor (HSP).
The NX class designates a nexus bus communication driver. Such a driver is typically used on a local bus bridge (for example, DEC21154 on MCP750). The main role of the NX drivers is to provide a local connection between the xH and xM driver instances running on the same board. An NX driver is connected to the Bus Control DDI provided by its parent communication driver (xH or NX) and in turn it provides the Bus Control DDI for its child communication drivers (xM or NX). So, an NX driver does not provide any DDI to the multiplexer. It is an auxiliary driver used to connect xM drivers running on a board to the local xH driver via the Bus Control DDI.
The xM class designates a driver managing a bus-to-bus bridge which connects two CPU boards. The first letter of the class name specifies whether the driver instance is remote (R) or local (L) with respect to the device (managed by the driver instance) and the board on which the device resides. The driver class is local if the driver is running on the same board on which the device resides, otherwise the driver instance is remote.
For example, in a cPCI system consisting of one HSP (MCP750) board and one NSP (MCPN750) board, there are two driver instances for the NSP Intel/DEC21554 Drawbridge chip. One driver instance is running on the HSP while the other is running on the NSP. The driver instance running on the HSP (and managing the primary bridge interface) is remote, while the driver instance running on the NSP (and managing the secondary bridge interface) is local. Note that the xM driver does not provide any DDIs to the multiplexer. It is an auxiliary driver. Its main role is to provide a Bus Control communication protocol with the peer remote instance. The xM driver is a nexus. It enables access to the remote memory regions (exported by xH drivers) by creating "remote" device nodes representing xH drivers running on remote CPU boards. Such "remote" device nodes are in fact children of the xM device node. Physical communication drivers running on "remote" nodes are designated by the RS class.
The RS class driver provides the Remote Bus Communication DDI to the multiplexer driver, allowing access to the (remote) memory exported (on the bus) by the associated xH driver running on a remote board. It also allows the multiplexer to send cross interrupts to the remote multiplexer.
Summarizing the discussion above, six physical communication driver classes exist within a communication domain. The LH class driver instance represents the local memory. Normally, there is only one LH driver instance per board. There is one LH driver instance which plays a central role in the communication domain. We say that such a driver belongs to the GH class. Basically, the GH class is an extension of the LH class. There is only one GH driver instance per communication domain. The xM class driver represents a bus-to-bus bridge which connects two CPU boards in the communication domain. The LM driver instance is running locally, that is, on the board where the bridge resides. The RM driver instance is running remotely, that is, on the board which arbitrates the external (primary) bus. The NX class driver represents a local bus-to-bus bridge. Finally, the RS class driver represents a remote CPU board. More exactly, it represents an xH driver instance running on a remote CPU board. So, the number of RS driver instances running on a CPU board is equal to the number of CPU boards (in the communication domain) minus one.
An xM driver provides the common bus driver interface to the child RS drivers. Basically, an RS driver is just a normal device driver running on top of a bus driver. Such a driver uses the parent bus operations in order to map both the device (bridge) registers and the (remote) memory exported on the bus. Note that the RS driver never receives interrupts and therefore never uses interrupt services provided by the parent driver. On the other hand, an RS driver sends cross interrupts to the associated remote board using the bus bridge interface registers. It is important to limit the RS driver to the common (bus architecture independent) DDI in order to enable communication between heterogeneous busses. For example, an RS driver for a PCI-to-PCI bridge may run on top of the VME bus. Actually, an RS driver works with a kind of remote (virtual) device. Such a device may have no bus specific attributes on the underlying bus. For example, the configuration header of a PCI-to-PCI bridge may not be visible (accessible) on a remote bus segment. Only the bridge interface registers are typically accessible.
An xM driver is always a child of an NX or xH driver. The NX and xH drivers are bus drivers and therefore they provide the bus DDI to their child xM drivers (for example, PCI bus DDI). In addition, there is an extra interface between a parent NX/xH and child xM drivers that is related to the physical bus communication layer support. This interface is called the Bus Control interface.
The Bus Control interface provides specific support for the communication protocol across the busses. Basically, the Bus Control interface addresses the following two issues:
Communication domain configuration.
Cross interrupt delivery.
The Bus Control interface is used by the xH and xM drivers to start RS driver instances according to the communication domain topology. The configuration interface is also used to update the physical layer configuration when the hardware configuration is changed due to a hot-plug insertion/removal (or a board shutdown/restart).
The Bus Control interface also allows an xH driver to receive cross interrupts sent from remote CPU boards. Basically, when an RS driver instance sends an interrupt to the associated xH driver instance running on a remote board, this interrupt is first received by the xM driver instance which connects the remote board to the communication domain. Using the Bus Control interface, the xM driver sends such an interrupt event upstream (to parent). In this way, interrupt events are propagated up to the xH driver.
Technically, the Bus Control interface is implemented as a separate DDI. An xH driver instance exports the Bus Control DDI via the device registry. An NX driver instance imports the Bus Control DDI provided by its parent (xH or NX) and in turn exports the Bus Control DDI downstream (to children). An xM driver instance imports the Bus Control DDI exported by its parent (NX or xH). Naturally, the device tree database is used by a child driver to detect (in the device registry) the Bus Control interface provided by its parent.
The LM/RM driver instances managing the same bridge device provide a communication path between two CPU boards within the communication domain. This communication path is composed of the following:
Data communication path.
Control communication path.
The LM/RM driver typically manages a non transparent bus bridge. This bridge is connected to the local (secondary) bus and to the remote (primary) bus. It consists of two devices: the remote device is accessible on the primary bus and the local device is accessible on the secondary bus. The LM instance is running locally and it manages the local (secondary) bridge interface. The RM instance is running remotely (on the board arbitrating the primary bus) and it manages the remote (primary) bridge interface.
So, the LM/RM driver deals with two bus segments (primary and secondary). Logically, the bridge device splits the communication domain into two parts: the primary and secondary communication domains. Initially, the LM/RM driver is responsible for configuring the bridge address translation logic in order to enable access from one bus segment to communication resources available on the opposite bus segment. These communication resources are, in fact, memory regions exported by all xH drivers running on CPU boards from the opposite communication domain and also interface registers of bus-to-bus bridges through which these CPU boards are connected to the communication domain. The xM driver is responsible for creating (and managing) RS driver instances in a given (primary or secondary) communication domain for each instance of xH driver running in the opposite communication domain.
The xM driver is also responsible for implementing the control communication path in order to forward Bus Control DDI calls from one side of the communication domain to the other. The mechanism used by the xM driver to implement the control communication path is bridge specific and may vary from one driver to another. For example, an xM driver may use I2O FIFOs if I2O messaging is supported by the underlying bus bridge. Otherwise, mailboxes or scratchpad registers may be used.
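As one possible bridge-specific mechanism, a control path can be built on a scratchpad register carrying the payload and a doorbell register notifying the peer. The sketch below is purely illustrative; the register layout and function names are assumptions, not any actual bridge programming model.

```c
#include <stdint.h>

/* Hypothetical sketch of a scratchpad/doorbell control path between
 * peer LM/RM driver instances. Not actual ChorusOS or bridge code. */

typedef struct {
    volatile uint32_t scratchpad;  /* carries one control word */
    volatile uint32_t doorbell;    /* non-zero rings the peer   */
} BridgeRegs;

/* Local side: post a control word, then ring the remote peer. */
void ctl_post(BridgeRegs* regs, uint32_t msg)
{
    regs->scratchpad = msg;  /* payload must land first ...   */
    regs->doorbell   = 1;    /* ... then the peer is notified */
}

/* Remote side: fetch the control word and acknowledge the doorbell. */
uint32_t ctl_fetch(BridgeRegs* regs)
{
    uint32_t msg = regs->scratchpad;
    regs->doorbell = 0;      /* acknowledge the cross interrupt */
    return msg;
}
```

A real driver would additionally serialize concurrent senders and fragment control messages larger than one scratchpad word.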
An RS driver instance represents a remote CPU board in the communication domain. The RS driver provides the Remote Bus Communication DDI to the multiplexer in order to access the remote board memory exported by the xH driver (running on this remote board) and to send cross interrupts to this xH driver. Interrupts received by the xH driver are then forwarded to the multiplexer.
Note that a cross interrupt cannot be sent directly to an xH driver. An xH driver does not correspond to a concrete (bridge) device but rather to a concrete local memory region. A cross interrupt should be sent to an xM driver first, and then it will be forwarded upstream (that is, up to the xH driver) using the Bus Control DDI. Therefore, the RS driver is bus bridge specific and it is, in fact, the third driver class (together with LM and RM classes) which must be developed for a given bus bridge device.
Note that RS drivers for the same CPU board (that is, for the same xH driver instance) are not necessarily similar. Actually, a given RS driver type depends on the bus-to-bus bridge device through which the CPU board is connected to the communication subdomain in which the RS driver instance is running. For example, a given board may be connected to two (external) bus segments through two different (that is, software incompatible) bridge devices A and B. In this case, the communication domain is logically split into three subdomains:
The CPU board itself.
First communication subdomain connected to the bridge A.
Second communication subdomain connected to the bridge B.
Obviously, the RS drivers (representing this CPU board) running in the first subdomain will be bridge A specific, while the RS drivers (representing this CPU board) running in the second subdomain will be bridge B specific.
This section provides a detailed description of the Bus Control interface. The section is divided into four subsections:
Overview.
Local Bus Control DDI.
Remote Bus Control Interface.
Properties.
The first subsection provides an overview of operations used in the Bus Control protocol to configure the communication domain.
The second subsection provides a detailed description of the Bus Control DDI used locally between xH, NX and xM communication drivers.
The third subsection addresses a message based interface used remotely between LM/RM peer driver instances managing the same bus bridge device.
The last subsection describes some generic properties specified by the Bus Control interface for communication device nodes.
The Bus Control interface is used by physical drivers for two purposes:
To propagate a cross interrupt received by an xM driver upstream in order to deliver it to the xH driver.
To configure the communication domain physical drivers according to the communication hardware.
The following basic operations are defined by the Bus Control interface in order to support the dynamic configuration of the communication domain:
Host declaration.
Site declaration.
Site insertion and connection.
Site removal and disconnection.
Most of the Bus Control operations may be considered as asynchronous broadcast messages sent to all xH, xM and NX communication drivers involved in the communication domain. Note that RS drivers do not take part in the Bus Control protocol.
The following message propagation algorithm is used by the communication drivers in order to implement such a broadcast transmission. If a message is received from the parent, it is sent to all children. If a message is received from a child, it is sent to the parent and to all children except the message transmitter.
The remote Bus Control connection between peer LM and RM driver instances (managing the same bridge device) is always considered as being in the downstream direction from both sides. In other words, the LM driver instance is a child of the RM driver instance and vice versa.
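The propagation rule above (from the parent, forward to all children; from a child, forward to the parent and to all other children) can be sketched as a small simulation. The node structure and function names below are illustrative only, not part of the Bus Control DDI.

```c
#include <stddef.h>

/* Minimal sketch of the Bus Control broadcast propagation rule.
 * All names are hypothetical; this is not driver code. */

#define MAX_CHILDREN 4

typedef struct Node {
    struct Node* parent;
    struct Node* child[MAX_CHILDREN];
    int          nchild;
    int          received;   /* messages seen by this node */
} Node;

static void deliver(Node* node, Node* from);

/* Forward a message on every link except the one it arrived on. */
static void propagate(Node* node, Node* from)
{
    int i;
    if (node->parent != NULL && node->parent != from) {
        deliver(node->parent, node);
    }
    for (i = 0; i < node->nchild; i++) {
        if (node->child[i] != from) {
            deliver(node->child[i], node);
        }
    }
}

static void deliver(Node* node, Node* from)
{
    node->received++;        /* process the message locally ... */
    propagate(node, from);   /* ... then keep the broadcast going */
}

/* Inject a broadcast originating at 'origin'. */
void broadcast(Node* origin)
{
    propagate(origin, NULL);
}
```

Because the driver topology is a tree (the remote LM/RM link counting as a downstream edge on both sides), each driver receives the broadcast exactly once and no loop suppression is needed.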
The host declaration message (host_declare()) is issued by the GH driver instance. It is a broadcast message which is forwarded to all LH, NX and xM communication drivers using the propagation mechanism described above. The main purpose of this operation is to establish a route between the GH driver and each other (LH, NX, xM) communication driver in the domain. Actually, the host declaration operation is the first Bus Control action which occurs in the communication domain. This action is initiated by the GH driver instance. All LH drivers are waiting for an incoming host declaration message in order to start their own declaration process.
When a host declaration message is received by a driver, it memorizes the direction from which the message is received (host route direction). In addition, the driver forwards this message in all possible directions (upstream and downstream) except the message transmitter.
Note that, if a given driver instance already knows the host route (that is, it has received the host declaration message) and there is a new child driver which connects to this driver instance later on, the host declaration message is issued to the new driver instance at connection time.
The host declaration message also carries some useful information which is retained by the communication drivers and used later on in the Bus Control protocol:
Communication level.
Communication path.
The communication level is an integer number. It is initialized to zero by the GH driver and incremented each time the host declaration message is sent over a remote Bus Control interface between LM and RM drivers. Basically, the communication level specifies the host route length, that is, the distance between a given driver and the GH driver. This distance is measured in the number of CPU boards residing on the host route.
The communication path is a NULL terminated ASCII string designating a CPU board global path in the communication domain. Initially, the communication path is set to an empty string by the GH driver. The communication path is then updated each time the host declaration message is sent over a remote Bus Control interface between LM and RM drivers. An xM driver appends its local device path to the communication path string. Therefore, the communication path uniquely identifies a CPU board in the domain because a local device path is supposed to be locally unique.
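The way an xM driver might update the two host declaration parameters when forwarding host_declare() across the remote LM/RM link can be sketched as follows. The structure, buffer size and path separator are assumptions for illustration, not the actual DDI encoding.

```c
#include <string.h>

/* Hypothetical sketch of host declaration forwarding by an xM
 * driver. Not actual ChorusOS driver code. */

#define CPATH_MAX 128

typedef struct {
    unsigned int level;             /* distance, in boards, from the GH */
    char         cpath[CPATH_MAX];  /* globally unique board path */
} HostDecl;

/* Forward a host declaration over a remote LM/RM link: increment the
 * communication level and append the local device path. */
void host_decl_forward(HostDecl* decl, const char* local_dev_path)
{
    decl->level++;                                        /* one more hop */
    strncat(decl->cpath, "/",
            CPATH_MAX - strlen(decl->cpath) - 1);
    strncat(decl->cpath, local_dev_path,
            CPATH_MAX - strlen(decl->cpath) - 1);
}
```

Since each local device path is locally unique, the accumulated string uniquely identifies the receiving board within the whole domain.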
Once a host declaration message is received by an LH driver instance, the LH driver issues a site declaration request (site_declare()). The site declaration operation allows an LH driver to obtain its unique site identifier within the communication domain. The site declaration request is treated by the GH driver instance. The GH driver is responsible for assigning a unique site identifier to each LH driver instance present within the communication domain.
When an LH driver is started, it waits until a host declaration down call is issued from one of its child drivers. This identifies the host route direction in which a site declaration request has to be sent. When a site declaration request is received by an NX, xM or another LH driver, it is also forwarded in the host route direction. In this way, a site declaration request follows the host route until it finally reaches the GH driver.
Once the GH driver receives a site declaration request it replies to it with a site unique identifier and a site declaration sequence number (SDSQN). The site unique identifier is just an integer value which uniquely identifies the site in the communication domain. SDSQN is also an integer value which is actually a counter of declaration requests received by the GH driver. This counter is initialized to zero and incremented each time a declaration request is received by the GH driver. SDSQN is used in the configuration protocol as described later on. Basically, it provides the configuration process ordering.
Note that the site declaration request carries the LH driver communication path. In other words, the GH driver receives in the site_declare() request the unique site communication path assigned to the CPU board being declared. This information may be used by the GH driver in the site unique identifier assignment policy. For example, the GH driver may use the site communication path in order to assign the same site identifier to a CPU board which has been removed from the communication domain and then re-inserted again.
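A possible GH-side treatment of site_declare() (assign the next SDSQN to every request, and reuse the old site identifier when a known communication path reappears) is sketched below. The table size, names and the exact SDSQN starting value are assumptions; the document leaves the identifier assignment policy to the GH driver.

```c
#include <string.h>

/* Hypothetical sketch of the GH driver side of site_declare().
 * Not actual ChorusOS code; the id-reuse policy is one example
 * the document mentions, not a mandated behavior. */

#define MAX_SITES 16

typedef struct {
    char         cpath[128];
    unsigned int site;
} SiteEntry;

static SiteEntry    site_table[MAX_SITES];
static unsigned int site_count = 0;  /* also the next fresh site id  */
static unsigned int sdsqn      = 0;  /* declaration sequence counter */

/* Returns the assigned site id; *seq receives the request's SDSQN. */
unsigned int gh_site_declare(const char* cpath, unsigned int* seq)
{
    unsigned int i;

    *seq = ++sdsqn;                     /* one SDSQN per request */
    for (i = 0; i < site_count; i++) {
        if (strcmp(site_table[i].cpath, cpath) == 0) {
            return site_table[i].site;  /* known path: same id back */
        }
    }
    strncpy(site_table[site_count].cpath, cpath,
            sizeof site_table[0].cpath - 1);
    site_table[site_count].site = site_count;
    return site_table[site_count++].site;
}
```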
Unlike all other Bus Control operations, the site declaration operation is synchronous. This means that an LH driver that issues the site declaration request is blocked while waiting for the reply. The communication level is typically used by the xM communication drivers in order to tune a time-out value when waiting for the site_declare() reply message. Usually, the communication level is used as a multiplier for a basic time-out value.
Once the site declaration is done, the LH driver notifies all child drivers about the site unique identifier assigned to the CPU board. This notification is done via a site_enable() up-call and propagated by child drivers downstream. Note that the site_enable() message is local and it never crosses the CPU board boundaries. In other words, the site_enable() message is never sent by an xM driver to the remote side. The site unique identifier is just memorized by xM and NX drivers to be used later on. So, the site_enable() message does not take part in the remote Bus Control protocol implemented by an xM driver.
The next step in the LH driver initialization is a site insertion procedure. Such a procedure is initiated by an LH driver that sends a site insertion message downstream, that is, to all child drivers. Such a site insertion message consists of the following parts:
Site identifier.
Site communication path.
SDSQN.
Interface descriptor.
Mapping descriptor.
The site insertion message is a broadcast message and it is propagated to all communication drivers in the domain using the standard propagation mechanism described above.
The interface descriptor identifies the bus bridge device used for the communication protocol. This descriptor is empty initially, and it is set up when the site insertion message is sent for the first time to a remote CPU board by an xM driver. The xM driver initializes the interface descriptor according to the underlying bus bridge hardware when a site insertion message with an empty interface descriptor is received from the remote side.
The mapping descriptor consists of two parts:
Interface mapping descriptor.
Memory mapping descriptor.
The interface mapping descriptor specifies addresses on the current bus which the bridge interface registers are mapped to. Like the interface descriptor, the interface mapping descriptor is initially empty and it is set up together with the interface descriptor by an xM driver, when a site insertion message with an empty interface descriptor is received from the remote side.
The memory mapping descriptor specifies a memory region exported on the bus for the inter-bus communication purposes. It is initially set up by the LH driver initiating the site insertion message and it specifies addresses on the current bus which the local communication memory is mapped to.
Note that the interface descriptor is initialized only once and never updated after that. Unlike the interface descriptor, the mapping descriptor is updated each time the site insertion request is forwarded by a physical communication driver in order to take into account the bus bridge address translation logic. In general, the memory region may be located at different addresses on the primary and secondary busses that a given bus bridge is connected to. So, xM and NX drivers are responsible for keeping the interface and memory mapping addresses up to date (within the message) during the site insertion message propagation.
When a site insertion message is received by an xM or xH driver, a check is made whether the message should be processed or simply ignored. The message is ignored if the driver SDSQN is greater than the message SDSQN. This means that the site insertion message is ignored on a CPU board which has been declared after the CPU board that initiated the message. In other words, the site insertion message issued by a CPU board is processed only on CPU boards which existed at the moment this CPU board was declared.
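The ordering check reduces to a single comparison between the receiving driver's SDSQN and the SDSQN carried by the message. A sketch (function name is illustrative):

```c
#include <stdbool.h>

/* Sketch of the ordering check applied to an incoming site insertion
 * message: the message is processed only on boards already declared
 * when the initiator was declared. Name is hypothetical. */

typedef unsigned int BusComSeq;

bool site_insertion_accept(BusComSeq local_sdsqn, BusComSeq msg_sdsqn)
{
    /* Ignore insertions initiated by boards declared before us. */
    return local_sdsqn <= msg_sdsqn;
}
```

Boards with a larger SDSQN instead learn about the older board through the site connection message sent back by its xH driver, so the two sides never create duplicate RS instances.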
Together with the site insertion message propagation, xM and xH drivers take some specific actions. Actions taken by the driver are driver class specific and detailed below.
When an xM driver receives a site insertion message from the remote side, it creates a child device node that corresponds to the remote device and starts an appropriate RS driver instance on this node. The device characteristics (for example, the device and vendor IDs for a PCI device) are specified in the interface descriptor. The bus resources needed to access the remote memory region and bridge interface registers are also specified in the site insertion request (interface/memory mapping descriptors). When creating a remote device node, the xM driver has to specify the local and remote site identifiers as device properties. These properties are then used by the multiplexer upper layer driver. The local site identifier is provided to the xM driver by the LH driver running on the board via the site_enable() message. The remote site identifier is specified in the site insertion message. In this way the RS instance is created on a CPU board for a new (remote) LH driver instance running on another CPU board.
When an insertion message is received by an xH driver, the driver sends a site connection message (site_connect()) back to the site insertion initiator. The site connection message content is similar to the site insertion message content except that it contains an extra field that specifies the destination site identifier, that is, the identifier of the site that initiated the site insertion process. In other words, the site connection message includes both source and destination site identifiers. The source site identifier corresponds to the CPU board that sent the site connection message and the destination site identifier corresponds to the CPU board from which the site insertion request has been sent.
The site connection message is a broadcast message and it is propagated using the standard propagation mechanism described above. The interface and mapping descriptors are initialized and updated similarly. The purpose of the site connection message is to create an RS driver instance associated with a given (remote) xH driver on the newly inserted CPU board. So, when a site connection message is received by an xM driver from the remote side, the driver checks whether the destination site identifier matches the local site identifier. If the check is positive, a device node is created and an RS driver instance is started. Otherwise, the site connection message is simply propagated upstream, and no other action is taken. In this way, an RS driver instance associated with an existing xH driver is created on a newly inserted CPU board.
The CPU board removal mechanism is described below. Note that only a non-surprise removal is considered here.
When a CPU board requests to be removed from the communication domain, a shutdown event is received by the xH driver running on the board. The shutdown event is propagated downstream using an event mechanism implemented by the ChorusOS driver framework. So, a shutdown event is received by xM driver instances running locally on the board. When an xM driver instance receives this event it initiates the site removal procedure described below.
The site removal procedure consists of sending a site removal message to the remote side through the remote Bus Control protocol. The removal message contains the site unique identifier of the CPU board being removed from the communication domain.
The site removal message is a broadcast message and it is propagated using the standard propagation mechanism described above.
When a site removal message is received by an xM driver from the remote side, the driver shuts down the child RS driver instance which matches the site identifier given in the message. So, the purpose of the site removal message is to shut down all RS driver instances associated with the xH driver that is being removed. Note that the site removal message should be propagated upstream (that is, to the parent) only when an appropriate RS driver instance is destroyed.
When an xH driver receives a site removal message, analogously to the site insertion process, it sends a site disconnection message (site_disconnect()) back to the CPU board that initiated the site removal process. The site disconnection message is composed of two site identifiers: source and destination. The purpose of the site disconnection message is to destroy the RS driver instance (associated with this xH driver) that is running on the CPU board being removed. The site disconnection message is a broadcast message and it is propagated using the standard propagation mechanism described above. On the other hand, it is only taken into account by an xM driver running on the board matching the destination site identifier. On receiving such a site disconnection message, an xM driver shuts down the RS driver instance that matches the source site identifier.
When the last RS child instance goes away, the xM driver performs a self shutdown and closes the connection to the parent communication driver. In this way, the shutdown process is propagated upstream and terminated by the xH driver.
The local Bus Control DDI is provided by each xH and NX driver instance running on a CPU board.
The character string "buscom-ctl" (alias BUSCOM_CTL_CLASS) names the local Bus Control device class. A pointer to the BusComCtlOps structure is exported by the driver via the svDeviceRegister() microkernel call. A driver client invokes the svDeviceLookup() and svDeviceEntry() microkernel calls in order to obtain a pointer to the device service routines vector. Once the pointer is obtained, the driver client is able to invoke the driver service routines via indirect function calls.
The local Bus Control DDI is a multi-client DDI. Multiple child communication drivers may reside on top of an xH or NX communication driver. Accordingly, the device registry allows multiple lookups to be performed on the same driver instance.
All methods defined by the BusComCtlOps structure must be called in the DKI thread context.
typedef uint32_f BusComSize;
typedef uint32_f BusComSite;
typedef uint32_f BusComSeq;
typedef uint32_f BusComLevel;

typedef struct BusComCtlOps {
    BusComCtlVersion version;
    KnError (*open)            (BusComCtlId id,
                                BusComCtlUpCalls* upcalls,
                                void* cookie,
                                BusComCtlConnId* cid);
    void    (*close)           (BusComCtlConnId cid);
    KnError (*site_declare)    (BusComCtlConnId cid,
                                char* cpath,
                                unsigned int cplen,
                                BusComSite* site,
                                BusComSeq* seq);
    void    (*site_shutdown)   (BusComCtlConnId cid);
    void    (*site_insertion)  (BusComCtlConnId cid,
                                BusComSeq seq,
                                BusComSite src,
                                BusComDevice* dev,
                                BusComMapping* map,
                                char* cpath,
                                unsigned int cplen);
    void    (*site_removal)    (BusComCtlConnId cid,
                                BusComSite src);
    void    (*site_connect)    (BusComCtlConnId cid,
                                BusComSite dst,
                                BusComSite src,
                                BusComDevice* dev,
                                BusComMapping* map,
                                char* cpath,
                                unsigned int cplen);
    void    (*site_disconnect) (BusComCtlConnId cid,
                                BusComSite dst,
                                BusComSite src);
    void    (*host_declare)    (BusComCtlConnId cid,
                                BusComLevel level,
                                char* cpath,
                                unsigned int cplen);
} BusComCtlOps;
The version field specifies the maximum local Bus Control DDI version number supported by the driver.
The version number is incremented each time one of the local Bus Control DDI structures is extended in order to include new service routines. In other words, a new symbol is added to the BusComCtlVersion enumeration each time the API is extended in this way.
A driver client specifies a minimum DDI version number required by the client when calling svDeviceLookup(). The svDeviceLookup() routine does not allow a client to look up a driver instance if the DDI version number provided by the driver is less than the DDI version number required by the client.
A client that is aware of DDI extensions may still specify a minimum DDI version when looking for a device in the registry. Once a device is successfully found, the client may examine the version field in order to take advantage of extended DDI features which may be provided by the device driver.
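The version negotiation described above can be reduced to a simple comparison. The sketch below illustrates it; the `BUSCOM_CTL_VERSION_*` values and the `buscom_has_feature()` helper are hypothetical stand-ins, not part of the DDI.

```c
#include <assert.h>

/* Hypothetical version levels standing in for symbols of the real
 * BusComCtlVersion enumeration in <ddi/buscom/buscom.h>. */
#define BUSCOM_CTL_VERSION_INITIAL   1
#define BUSCOM_CTL_VERSION_EXTENDED  2   /* assumed extension level */

/* Return non-zero if a driver exporting 'drv_version' provides the
 * service routines introduced at 'required_version'. */
static int
buscom_has_feature(unsigned int drv_version, unsigned int required_version)
{
    return drv_version >= required_version;
}
```

A client would pass its minimum required version to svDeviceLookup() and, after a successful lookup, apply this test to the `version` field before calling any extended routine.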
The open() method is the first call a child must make to the parent driver. The open() call is used to establish a connection to the driver. It enables subsequent invocation of all other methods defined by the BusComCtlOps structure.
The id input argument specifies a given communication device driver instance. It is given by the device registry entry. The upcalls input argument specifies the child driver up-call methods. Because the Bus Control interface is bi-directional, there is a significant intersection between the BusComCtlUpCalls and BusComCtlOps structures. The only up-call specific methods are intr_attach(), intr_detach() and site_enable(). The only down-call specific methods are open(), close() and site_shutdown().
All methods specified by the BusComCtlUpCalls structure must be called in the DKI thread context.
typedef uint32_f BusComIntr;

typedef struct BusComCtlUpCalls {
    KnError (*intr_attach)     (void* cookie,
                                BusComIntr intr,
                                BusComIntrHandler intr_handler,
                                void* intr_cookie,
                                BusComIntrId* intr_id);
    void    (*intr_detach)     (BusComIntrId intr_id);
    void    (*site_enable)     (void* cookie,
                                BusComSite site,
                                BusComSeq seq);
    KnError (*site_declare)    (void* cookie,
                                char* cpath,
                                unsigned int cplen,
                                BusComSite* site,
                                BusComSeq* seq);
    void    (*site_insertion)  (void* cookie,
                                BusComSeq seq,
                                BusComSite src,
                                BusComDevice* dev,
                                BusComMapping* map,
                                char* cpath,
                                unsigned int cplen);
    void    (*site_removal)    (void* cookie,
                                BusComSite src);
    void    (*site_connect)    (void* cookie,
                                BusComSite dst,
                                BusComSite src,
                                BusComDevice* dev,
                                BusComMapping* map,
                                char* cpath,
                                unsigned int cplen);
    void    (*site_disconnect) (void* cookie,
                                BusComSite dst,
                                BusComSite src);
    void    (*host_declare)    (void* cookie,
                                BusComLevel level,
                                char* cpath,
                                unsigned int cplen);
} BusComCtlUpCalls;
The cookie input argument is passed back to the client driver each time an up-call method is invoked.
Upon successful completion, the parent driver returns K_OK and passes back to the child the connection identifier via the cid output argument. The connection identifier must be passed back to the parent driver in subsequent invocations of all other methods defined by BusComCtlOps.
The K_ENOMEM error code is returned if there are not enough memory resources to establish a connection. In this case, the cid output argument is not modified.
The close() method is used to close the connection with the parent driver. This call must be the last call made to the driver.
The cid input argument specifies a given connection to the communication driver instance. It is given by the open() routine.
The site_declare() operation is used to declare a new site connected to the communication domain. The site_declare() request is initiated by an LH driver instance running on the site being declared. Then, using the site_declare() up-calls and down-calls, the request is propagated by LH/NX/xM communication drivers up to the GH driver instance, which handles the request and replies to it. The request always moves in the host route direction, established by the host_declare() operation. The reply moves in the opposite direction in order to return to the site_declare() initiator. The main purpose of the site_declare() operation is to assign a unique identifier to the site being declared. Actually, this is the first action (with respect to the Bus Control interface) taken by an LH driver at initialization time. Usually, an LH driver initiates the site_declare() operation as soon as one of its child drivers invokes the host_declare() down-call in order to specify the host route direction.
Note that the site_declare() operation is synchronous: the communication driver is blocked waiting for the site_declare() reply. site_declare() is the only synchronous operation specified by the Bus Control interface; all other operations are asynchronous broadcast messages.
The first argument is down-call/up-call specific but, in both cases, it identifies a given child-to-parent connection. The cookie up-call argument is given by the child at open time. The cid down-call argument is returned to the child by open().
All other arguments are identical for both down-calls and up-calls. Note that the cpath, cplen and site input arguments are set up by the initiator LH driver and they are never changed by the intermediate drivers that forward the site_declare() request. Similarly, the site and seq output arguments are set up by the GH driver and they are never changed by the intermediate drivers that forward the site_declare() reply.
The cpath and cplen input arguments specify the communication path of the site being declared. The communication path is a NULL terminated ASCII string which uniquely identifies the site within the communication domain. The communication path is given to the driver by the host_declare() call. Basically, it is a hint for the GH driver which may be used in the policy of site identifier assignment.
The site argument is both input and output. On the one hand, the site_declare() initiator may specify a suggested identifier to be assigned to the site. For example, a geographical slot number may be used on the cPCI bus as a suggested site identifier. On the other hand, the site argument is set by the GH driver to the unique identifier actually assigned to the site. Note that the GH driver may not honor the LH driver's suggestion. This typically happens when the suggested site identifier is already assigned to another site. The suggested site identifier must be in the range 1 to 0xffffffff. If the site_declare() initiator has no specific suggestion for the site identifier, the site argument must be set to 0 (BUSCOM_SITE_INVALID).
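One possible GH-side assignment policy, honoring the suggestion when it is free and falling back to the first free identifier otherwise, can be sketched as follows. The table size, the `site_used` bitmap, and the `gh_assign_site()` helper are illustrative assumptions; the DDI does not prescribe an allocation algorithm.

```c
#include <assert.h>

#define BUSCOM_SITE_INVALID 0
#define MAX_SITES           16           /* illustrative table size */

static unsigned char site_used[MAX_SITES];   /* identifier 0 is reserved */

/* Assign a site identifier, honoring 'suggested' when it is valid and
 * still free; otherwise fall back to the first free identifier.
 * Returns BUSCOM_SITE_INVALID when no identifier is left (the real
 * driver would report this as K_EFULL). */
static unsigned int
gh_assign_site(unsigned int suggested)
{
    unsigned int s;

    if (suggested != BUSCOM_SITE_INVALID &&
        suggested < MAX_SITES && !site_used[suggested]) {
        site_used[suggested] = 1;
        return suggested;
    }
    for (s = 1; s < MAX_SITES; s++) {
        if (!site_used[s]) {
            site_used[s] = 1;
            return s;
        }
    }
    return BUSCOM_SITE_INVALID;
}
```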
The seq output argument specifies the site declaration sequence number (SDSQN). The SDSQN value should be retained by the site_declare() initiator in order to be used later on. Basically, because the site declaration operation is synchronous, SDSQN provides a kind of ordering for all future asynchronous actions taken by communication drivers at initialization time.
Upon successful completion, the site_declare() routine returns K_OK and passes the assigned site identifier and SDSQN back to the caller.
The K_EINVAL error code is returned if the communication driver does not reside on the host route.
The K_ENOMEM error code is returned if there are not enough memory resources to process the site declaration request.
The K_ETIMEOUT error code is returned if a time-out occurs while waiting for a reply from a remote site.
The K_EFULL error code is returned if there are no more available site identifiers in the domain.
In case of error, the site and seq output arguments are not modified.
The site_insertion() operation is used to establish forward connections between a newly declared site and all other sites existing within the communication domain. The site_insertion() message is initiated by the LH driver instance running on the newly declared site. Then, using the site_insertion() up-calls and down-calls, the message is broadcast by xH/NX/xM communication drivers across the communication domain. The main purpose of the site_insertion() operation is to create, on each site, an RS driver instance representing the LH driver instance that initiated this site_insertion() message.
This allows a remote site to access the local memory exported by this LH driver and to send cross interrupts to it. In this way, the site_insertion() initiator establishes a connection to every other site within the communication domain. In addition, on receiving a site_insertion() message, an xH driver initiates a site_connect() message toward the site_insertion() initiator.
The main purpose of the site_connect() operation is to create, on the site_insertion() initiator site, an RS driver instance representing the LH driver instance that initiated this site_connect() message. This allows the site_insertion() initiator site to access the local memory exported by this LH driver and to send cross interrupts to it. In this way, a connection is established to the site_insertion() initiator from every other site within the communication domain. The site_connect() message is thus an xH driver's reply to an incoming site_insertion() message. Usually, the site_insertion() message is initiated by an LH driver at initialization time, once the site declaration operation has successfully completed.
The first argument is down-call/up-call specific but, in both cases, it identifies a given child-to-parent connection. The cookie up-call argument is given by the child at open time. The cid down-call argument is returned to the child by open().
All other arguments are identical for both down-calls and up-calls.
The seq argument specifies the SDSQN of the site_insertion() initiator. It is given to the driver by the site_declare() operation. This argument is set up by the site_insertion() initiator and is never changed by the intermediate communication drivers that forward the message. Note that when an incoming site_insertion() message is received by a communication driver, the driver must compare the seq argument value with its own SDSQN. If the driver's SDSQN is greater than the seq value, the site_insertion() message must be ignored. This means that the site_insertion() message is not processed on a site which was declared later than the site_insertion() initiator. Indeed, such a site will initiate its own site_insertion() operation, which will then be processed on the site from which this (ignored) site_insertion() message was sent.
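The SDSQN filtering rule above reduces to a single comparison. A minimal sketch, with a hypothetical helper name:

```c
#include <assert.h>

/* A site_insertion() message carries the initiator's declaration
 * sequence number (SDSQN). A driver whose own SDSQN is greater was
 * declared later and must ignore the message: that later site will
 * establish the connection with its own site_insertion() broadcast. */
static int
insertion_must_be_processed(unsigned int local_sdsqn, unsigned int msg_seq)
{
    return local_sdsqn <= msg_seq;
}
```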
The src argument specifies the site_insertion() initiator identifier. It is given to the driver by the site_declare() operation. This argument is set up by the site_insertion() initiator and it is never changed by the intermediate communication drivers that forward the message.
The cpath and cplen arguments specify the communication path of the site_insertion() initiator. The communication path is a NULL terminated ASCII string which uniquely identifies the site within the communication domain. The communication path is given to the driver by the host_declare() operation. These arguments are set up by the site_insertion() initiator and they are never changed by the intermediate communication drivers that forward the message.
The dev argument specifies the bus bridge device (interface) which connects the site_insertion() initiator to the communication domain. Note that this argument is set to NULL by the LH driver that initiated the site_insertion() operation, because the LH driver cannot identify the interfaces through which the site is connected to the domain. The bus bridge device is identified when the site_insertion() message leaves the site_insertion() initiator site, in order to go to a remote site. Therefore, when an xM driver receives a site_insertion() message from the remote site with a NULL interface descriptor, it sets up the descriptor according to the underlying bus bridge device hardware. Once the interface descriptor is set up, it is never changed by the intermediate communication drivers that forward the message.
typedef uint32_f BusComDevType;          /* bridge device architecture */

#define BUSCOM_CTL_DEV_UNKNOWN  0        /* unknown bridge architecture */
#define BUSCOM_CTL_DEV_VME      1        /* VME bridge */
#define BUSCOM_CTL_DEV_PCI      2        /* PCI bridge */

typedef struct BusComVmeDevInfo {
    /* TBD */
} BusComVmeDevInfo;

typedef struct BusComPciDevInfo {
    uint16_f ven_id;     /* PCI vendor ID */
    uint16_f dev_id;     /* PCI device ID */
    uint32_f primary;    /* primary/secondary interface */
} BusComPciDevInfo;

typedef union {
    BusComVmeDevInfo vme;    /* VME device */
    BusComPciDevInfo pci;    /* PCI device */
} BusComDevInfo;

typedef struct BusComDevice {
    BusComDevType type;    /* bridge type (PCI, VME, ...) */
    BusComDevInfo info;    /* bridge info */
} BusComDevice;
The interface descriptor is used by an xM driver in order to identify the RS driver type which should be launched to communicate with the site_insertion() initiator. Basically, when an xM driver creates an RS device node, it attaches the interface descriptor to the node as the "dev-info" property (alias BUSCOM_RS_PROP_DEV_INFO). This allows an RS driver to examine such a property at bind time in order to detect whether the bridge hardware is supported by the RS driver.
The map argument specifies the addresses on the current bus segment to which the exported memory (memory mapping) and the bus bridge registers (interface mapping) are mapped.
typedef uint32_f BusComBusType;

#define BUSCOM_CTL_BUS_UNKNOWN  0    /* unknown bus architecture */
#define BUSCOM_CTL_BUS_VME      1    /* VME 32-bit bus */
#define BUSCOM_CTL_BUS_VME64    2    /* VME 64-bit bus */
#define BUSCOM_CTL_BUS_PCI      3    /* PCI 32-bit bus */
#define BUSCOM_CTL_BUS_PCI64    4    /* PCI 64-bit bus */

typedef struct BusComVmeBusInfo {
    /* TBD */
} BusComVmeBusInfo;

typedef struct BusComVme64BusInfo {
    /* TBD */
} BusComVme64BusInfo;

typedef struct BusComPciBusInfo {
    PciIoSpace reg_space;    /* PCI space where the bridge CSRs are mapped */
    PciAddr    reg_base;     /* CSR base address */
    PciSize    reg_size;     /* CSR size */
    PciAddr    mem_base;     /* memory region base address */
    PciSize    mem_size;     /* memory region size */
} BusComPciBusInfo;

typedef struct BusComPci64BusInfo {
    PciIoSpace reg_space;    /* PCI space where the bridge CSRs are mapped */
    Pci64Addr  reg_base;     /* CSR base address */
    Pci64Size  reg_size;     /* CSR size */
    Pci64Addr  mem_base;     /* memory region base address */
    Pci64Size  mem_size;     /* memory region size */
} BusComPci64BusInfo;

typedef union {
    BusComVmeBusInfo   vme;      /* VME (32-bit) */
    BusComVme64BusInfo vme64;    /* VME (64-bit) */
    BusComPciBusInfo   pci;      /* PCI (32-bit) */
    BusComPci64BusInfo pci64;    /* PCI (64-bit) */
} BusComBusInfo;

typedef struct BusComMapping {
    BusComBusType type;    /* current bus architecture */
    BusComBusInfo info;    /* bridge mapping info */
} BusComMapping;
Obviously, the interface mapping is invalid if the interface descriptor is NULL. So, the site_insertion() initiator sets up the memory mapping only. The interface mapping is set up by an xM driver together with the interface descriptor. Note that the mapping descriptor must be updated by any intermediate communication driver forwarding the message, in order to take into account the underlying bus bridge translation logic. In general, a region may be located at different addresses on the primary and secondary busses that a given bus bridge is connected to. So, xM and NX communication drivers are responsible for keeping the interface and mapping descriptors up to date during the site_insertion() message propagation.
The mapping descriptor is used by an xM driver in order to specify bus resources for an RS driver launched to communicate with the site_insertion() initiator. Basically, when an xM driver creates an RS device node, it attaches the "io-regs" and "mem-rgn" properties to the node. The properties values (that is, space, base address, size) are set up according to the mapping descriptor of the site_insertion() message.
The site_connect() operation is used to establish a backward connection to the site_insertion() initiator. Basically, the site_connect() message is the reply of an xH driver instance to an incoming site_insertion() message. Like all other bus communication operations (except site_declare()), site_connect() is an asynchronous broadcast message. It is initiated by an xH driver instance and then propagated by xH/NX/xM communication drivers across the communication domain using the site_connect() up-calls and down-calls.
The site_connect() arguments are similar to the site_insertion() ones, except for the extra dst argument which designates the destination site, that is, the site that initiated the site_insertion() message. Despite the broadcast nature of the site_connect() message, it is only processed on the destination site. Therefore, on receiving an incoming site_connect() message, a communication driver checks whether the dst site matches the driver's local site. In case of mismatch, the message is simply forwarded to parent/child drivers using the propagation mechanism described above, and no other action is taken. Note, however, that a communication driver that forwards a site_connect() message must update the interface (if needed) and mapping descriptors in the same way as for the site_insertion() message.
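The destination filtering described above amounts to a single test performed on every hop. A sketch, with illustrative names (`msg_action`, `connect_msg_action()`) that are not part of the DDI:

```c
#include <assert.h>

enum msg_action {
    MSG_FORWARD,     /* not for us: forward, keeping descriptors updated */
    MSG_PROCESS      /* destination site: act on the message locally */
};

/* A site_connect() (or site_disconnect()) message is broadcast across
 * the domain, but acted upon only on the destination site. */
static enum msg_action
connect_msg_action(unsigned int dst_site, unsigned int local_site)
{
    return (dst_site == local_site) ? MSG_PROCESS : MSG_FORWARD;
}
```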
An incoming site_connect() message is only taken into account by an xM driver instance when it is received from the remote site and the destination site matches the local site. In this case, analogously to the site_insertion() operation, the xM driver creates a child device node and launches an RS driver instance on this node. Unlike the site_insertion() operation, xH communication drivers do not reply to the site_connect() message.
The site_removal() operation is used to close forward connections to a given site from all other sites within the communication domain. Like all other bus communication operations (except site_declare()), site_removal() is an asynchronous broadcast message. It is initiated by an xM driver instance and then propagated by xH/NX/xM communication drivers across the communication domain using the site_removal() up-calls and down-calls.
Usually, the site_removal() operation is used at site shutdown time. Note that the site shutdown is always initiated by the LH driver instance which sends a shutdown event to all child drivers. Such a shutdown event is propagated downstream by communication drivers running on this board using the standard driver framework mechanism. Finally, such a shutdown event is received by an xM driver instance which sends a site_removal() message to its remote peer xM driver instance.
The first argument is down-call/up-call specific but, in both cases, it identifies a given child-to-parent connection. The cookie up-call argument is given by the child at open time. The cid down-call argument is returned to the child by open().
The src argument specifies the site identifier of the site_removal() initiator. When an xM driver receives an incoming site_removal() message from the remote site, it must shut down the RS driver instance that matches the src site identifier and must delete the associated device node. Only then can the driver forward the site_removal() message upstream.
When an xH driver receives an incoming site_removal() message, analogously to the site_insertion() operation, it replies with a site_disconnect() message sent back to the site_removal() initiator. The main purpose of the site_disconnect() operation is to destroy the RS driver instance, representing this xH driver instance, that runs on the site_removal() initiator board.
The site_disconnect() operation is used to close a backward connection to the site_removal() initiator. Basically, the site_disconnect() message is the reply of an xH driver instance to an incoming site_removal() message. Like all other bus communication operations (except site_declare()), site_disconnect() is an asynchronous broadcast message. It is initiated by an xH driver instance and then propagated by xH/NX/xM communication drivers across the communication domain using the site_disconnect() up-calls and down-calls.
The site_disconnect() arguments are similar to the site_removal() ones, except for the extra dst argument which designates the destination site, that is, the site that initiated the site_removal() message. Despite the broadcast nature of the site_disconnect() message, it is only processed on the destination site. So, on receiving an incoming site_disconnect() message, a communication driver checks whether the dst site matches the driver's local site. In the case of a mismatch, the message is simply forwarded to parent/child drivers using the propagation mechanism described above, and no other action is taken.
An incoming site_disconnect() message is only taken into account by an xM driver instance when it is received from the remote site and the destination site matches the local site. In this case, the xM driver must shut down the RS driver instance that matches the src site identifier and must delete the associated device node. When the last RS child driver goes away, the xM driver performs a self shutdown and closes the connection to the parent communication driver. In this way, the shutdown process is propagated upstream and finally terminated by the LH driver instance.
The host_declare() operation is used to establish a host route between the GH driver instance and all LH driver instances within the communication domain. The host route is used to implement the site_declare() operation. Like all other bus communication operations (except site_declare()), host_declare() is an asynchronous broadcast message. It is initiated by the GH driver instance and then propagated by xH/NX/xM communication drivers across the communication domain using the host_declare() up-calls and down-calls.
Basically, host_declare() is the first operation (with respect to the Bus Control interface) made by the GH driver instance at initialization time. Note that the GH driver initiates neither the site_declare() nor the site_insertion() operation. Indeed, the site_declare() operation is always processed by the GH driver, so the GH driver is able to assign a site identifier to itself. Naturally, the SDSQN is always set to zero for the GH driver. The GH driver then increments the SDSQN each time it processes a new site_declare() request. The GH driver therefore has the minimal SDSQN in the domain, and it is useless for it to send a site_insertion() message: such a message would be ignored by all communication drivers. Instead, the GH driver replies with a site_connect() message to each incoming site_insertion() message.
The first argument of host_declare() is down-call/up-call specific but, in both cases, it identifies a given child-to-parent connection. The cookie up-call argument is given by the child at open time. The cid down-call argument is returned to the child by open().
All other arguments are identical for both down-calls and up-calls. The level argument specifies the distance between the GH driver and a given communication driver. This distance is measured in sites (that is, CPU boards). The GH driver initially sets the level to zero. The level is then incremented each time the host_declare() message is forwarded by an xM driver instance to its remote peer xM partner. The communication level is a hint for a communication driver. It might be used, for example, to tune the time-out period used by an xM driver when waiting for the site_declare() reply from the remote site.
The cpath and cplen arguments specify the current communication path. The communication path is a NULL terminated ASCII string which uniquely identifies the site within the communication domain. Such a path is dynamically constructed by xM communication drivers during the host_declare() message propagation process. The communication path is initially set to an empty string by the GH driver. Then, the local path of the underlying bus bridge device is appended to the string each time the host_declare() message is forwarded by an xM driver instance to its remote peer xM partner. Therefore, the communication path uniquely identifies a site within the domain because a local device path is assumed to be locally unique.
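The incremental path construction can be sketched as a bounded string append performed on each xM hop. The buffer size, the `'/'` separator, and the `cpath_append()` helper are assumptions for illustration; the DDI only requires the resulting path to be unique within the domain.

```c
#include <assert.h>
#include <string.h>

#define CPATH_MAX 128        /* illustrative buffer size */

/* Append the local path of the underlying bus bridge device each time
 * the host_declare() message is forwarded by an xM driver to its
 * remote peer. Returns 0 on success, -1 if the path would overflow. */
static int
cpath_append(char* cpath, unsigned int max, const char* local_dev_path)
{
    /* current length + '/' + appended path + trailing NUL */
    if (strlen(cpath) + strlen(local_dev_path) + 2 > max)
        return -1;
    strcat(cpath, "/");
    strcat(cpath, local_dev_path);
    return 0;
}
```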
Note that any communication driver must support a deferred propagation of the host_declare() message. This means that once the host_declare() message is received by a communication driver, it must retain all needed information in order to be able to re-send this message later on (that is, in a deferred way). Such a deferred host_declare() message re-send must take place each time a new child communication driver is connected (locally or remotely) to the driver.
The site_shutdown() down-call is used to notify the LH driver of a site shutdown request detected by a communication driver. Typically, a shutdown request may initially be received by an LM driver instance from its remote peer RM partner. For example, it may be a board removal request detected on the cPCI bus by an RM driver instance (running on the system controller board) and transmitted (through the remote Bus Control interface) to the peer LM partner. On receiving such a remote shutdown request, the LM driver notifies its parent communication driver by invoking the site_shutdown() routine. In the same way, the parent driver notifies its own parent, and so on. Finally, the shutdown request reaches the LH driver instance, which initiates the site shutdown process as described above (see site_removal()).
The cid input argument specifies a given connection to the driver. It is returned by open().
The site_enable() up-call is used to put local communication drivers into a fully operational state. The site_enable() operation is initiated by the xH driver once the site_declare() operation has successfully completed. Then, the site_enable() operation is propagated downstream by NX communication drivers, down to the leaf xM communication drivers. Note that this operation is local and does not take part in the remote Bus Control protocol. On receiving a site_enable() call, an xM driver becomes fully operational: the driver is now able to process site_insertion() and site_connect() messages. The site_enable() call specifies, to the driver, the unique identifier and SDSQN assigned to the local site.
The cookie argument specifies a given connection to the driver. It is given by the child at open time.
The site argument specifies the unique identifier assigned to the local site.
The seq argument specifies the SDSQN assigned to the local site.
The site and seq arguments are set up by the xH driver and they are never changed by the intermediate NX communication drivers that forward the site_enable() call downstream.
Analogously to the deferred propagation of host_declare(), xH and NX communication drivers must support a deferred site_enable() propagation mechanism. If a new child driver is connected to a fully operational (that is, enabled) xH or NX driver instance, the driver must immediately issue the site_enable() call to this child driver in order to put it into a fully operational state.
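The deferred propagation rule can be sketched as follows: the driver retains the (site, seq) pair when it is enabled and replays it to any child that connects later. The `drv_state` structure and function names are illustrative, not part of the DDI.

```c
#include <assert.h>

/* State retained by an xH/NX driver for deferred site_enable(). */
struct drv_state {
    int          enabled;
    unsigned int site;       /* local site identifier */
    unsigned int seq;        /* local SDSQN */
};

/* Incoming site_enable(): retain the arguments for later replay. */
static void
drv_site_enable(struct drv_state* drv, unsigned int site, unsigned int seq)
{
    drv->enabled = 1;
    drv->site    = site;
    drv->seq     = seq;
}

/* Invoked when a new child connects: returns 1 (and the retained
 * site/seq pair) if site_enable() must be issued to it immediately. */
static int
drv_child_open(const struct drv_state* drv, unsigned int* site, unsigned int* seq)
{
    if (!drv->enabled)
        return 0;            /* not yet enabled: nothing to replay */
    *site = drv->site;
    *seq  = drv->seq;
    return 1;
}
```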
The intr_attach() method connects a given handler to a given (virtual) cross interrupt source.
The cookie input argument specifies a given child communication driver. It is provided at open time.
The intr input argument is an integer value that specifies a given (virtual) cross interrupt source. Note that if a given interrupt number exceeds the number of physical cross interrupts supported by the hardware, the interrupt handler is connected to the last available physical cross interrupt. Note also that multiple handlers may be attached to the same interrupt source.
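The folding of virtual interrupt numbers onto the available physical cross interrupts reduces to a clamp. A minimal sketch, with a hypothetical helper name:

```c
#include <assert.h>

/* Map a virtual cross-interrupt number onto the physical cross
 * interrupts supported by the hardware: numbers beyond the supported
 * range are folded onto the last available physical interrupt. */
static unsigned int
intr_to_physical(unsigned int intr, unsigned int nphys)
{
    return (intr < nphys) ? intr : nphys - 1;
}
```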
The intr_handler input argument specifies an interrupt handler invoked by the child communication driver when a cross interrupt is received.
The intr_cookie input argument specifies a cookie that is passed back to the interrupt handler.
typedef BusComIntrStatus (*BusComIntrHandler) (void* cookie);
Upon successful completion, the child driver returns K_OK and passes the interrupt identifier back to the parent through the intr_id output argument. The intr_id argument is opaque for the parent. It must be passed back to the child driver as an argument in a subsequent invocation of the intr_detach() routine. The K_ENOMEM error code is returned if the system is out of memory resources. In case of error, the intr_id output argument is not modified.
When the interrupt handler is invoked, the parent driver prevents re-entry to the interrupt handler.
An interrupt handler must return a value specified by the BusComIntrStatus type:
typedef enum { BUSCOM_INTR_UNCLAIMED = 0, BUSCOM_INTR_CLAIMED } BusComIntrStatus;
An interrupt handler must return BUSCOM_INTR_UNCLAIMED if the interrupt is unclaimed, that is, no useful work was done in the interrupt handler.
An interrupt handler must return BUSCOM_INTR_CLAIMED if the interrupt has been claimed, that is, useful work was done in the interrupt handler.
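Since multiple handlers may be attached to the same cross-interrupt source, a parent driver typically invokes each attached handler and considers the interrupt claimed if any handler did useful work. The dispatch loop below is a sketch of one such policy (the DDI does not mandate a specific one); `dispatch_cross_intr()` and the sample handlers are illustrative names.

```c
#include <assert.h>
#include <stddef.h>

typedef enum {
    BUSCOM_INTR_UNCLAIMED = 0,
    BUSCOM_INTR_CLAIMED
} BusComIntrStatus;

typedef BusComIntrStatus (*BusComIntrHandler)(void* cookie);

/* Invoke every handler attached to a shared cross-interrupt source;
 * report the interrupt claimed if at least one handler claimed it. */
static BusComIntrStatus
dispatch_cross_intr(BusComIntrHandler* handlers, void** cookies, unsigned int n)
{
    BusComIntrStatus status = BUSCOM_INTR_UNCLAIMED;
    unsigned int i;

    for (i = 0; i < n; i++)
        if (handlers[i](cookies[i]) == BUSCOM_INTR_CLAIMED)
            status = BUSCOM_INTR_CLAIMED;
    return status;
}

/* Trivial handlers used for illustration only. */
static BusComIntrStatus claim_it(void* cookie)  { (void)cookie; return BUSCOM_INTR_CLAIMED; }
static BusComIntrStatus ignore_it(void* cookie) { (void)cookie; return BUSCOM_INTR_UNCLAIMED; }
```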
The intr_detach() up-call disconnects the interrupt handler previously connected by intr_attach().
The intr_id input argument specifies the interrupt handler being disconnected. It is returned by intr_attach().
This section describes a message-based interface used to provide remote Bus Control communication between peer LM and RM driver instances running on different CPU boards but managing the same bus-to-bus bridge device.
The messages described below are basically equivalent to the local Bus Control interface defined in the previous section. In fact, these messages simply make the Bus Control DDI distributed across the communication domain.
Note that this section does not specify which hardware mechanism should be used to transfer a message from one site to another. Such a mechanism is LM/RM driver implementation specific and generally depends on the underlying hardware. For example, if a bridge supports I2O messaging, I2O FIFOs may be used for the Bus Control message transfer. Otherwise, scratchpad registers may be used for this purpose.
Each message has a standard header defined by the BusComMsg structure.
typedef struct BusComMsg {
    BusComSize    size;    /* total message size (including header) */
    BusComMsgType type;    /* message type */
} BusComMsg;

typedef uint32_f BusComMsgType;

#define BUSCOM_MSG_UNKNOWN            0
#define BUSCOM_MSG_SITE_DECLARE       1    /* site_declare() */
#define BUSCOM_MSG_SITE_SHUTDOWN      2    /* site_shutdown() */
#define BUSCOM_MSG_SITE_INSERTION     3    /* site_insertion() */
#define BUSCOM_MSG_SITE_REMOVAL       4    /* site_removal() */
#define BUSCOM_MSG_SITE_CONNECT       5    /* site_connect() */
#define BUSCOM_MSG_SITE_DISCONNECT    6    /* site_disconnect() */
#define BUSCOM_MSG_HOST_DECLARE       7    /* host_declare() */
#define BUSCOM_MSG_SITE_DECLARE_ACK   8    /* site_declare() ack */
#define BUSCOM_MSG_SITE_SHUTDOWN_ACK  9    /* site_shutdown() ack */
The size field specifies the message size including the BusComMsg header.
The type field specifies the message type as listed above. On receiving an incoming message, the communication driver should cast it to the appropriate message structure according to the message type. Message specific structures are described in the rest of the document.
The BusComMsg_site_declare structure defines the site_declare request layout. Note that because the site_declare operation is synchronous, the Bus Control interface also specifies the BusComMsg_site_declare_ack structure which defines the site_declare reply layout.
typedef uint32_f BusComMsgToken;

typedef struct BusComMsg_site_declare {
    BusComMsg      header;  /* generic message header */
    BusComMsgToken token;   /* token */
    BusComSite     site;    /* requested site UID */
    char           path;    /* communication path in the domain */
} BusComMsg_site_declare;

typedef struct BusComMsg_site_declare_ack {
    BusComMsg      header;  /* generic message header */
    BusComMsgToken token;   /* token specified in site_declare */
    KnError        res;     /* call result */
    BusComSite     site;    /* assigned site UID */
    BusComSeq      seq;     /* assigned declaration sequence number */
} BusComMsg_site_declare_ack;
The token field is used to associate the site_declare acknowledgment received from a remote site to the site_declare request issued by the local site. The token is set up by the local site when a site_declare request is sent. A remote site copies the token to the site_declare acknowledgment message when replying to the site_declare request. Basically, the token allows a communication driver to implement the site_declare synchronous call via two asynchronous messages: the site_declare request and site_declare acknowledgment.
The site field of the site_declare request specifies a suggested site unique identifier.
The path field specifies the start location of the communication path. The path size must be calculated using the total message size given by the message header.
The res field of the site_declare reply specifies the site_declare operation result. If the operation failed, that is, if the res value is not K_OK, the site and seq fields are meaningless.
The site field of the site_declare reply specifies the site unique identifier assigned to the site_declare initiator.
The seq field of the site_declare reply specifies the SDSQN assigned to the site_declare initiator.
typedef struct BusComMsg_site_insertion {
    BusComMsg     header;  /* generic message header */
    BusComSeq     seq;     /* inserted site declaration sequence number */
    BusComSite    src;     /* inserted site UID */
    BusComDevice  dev;     /* bridge device descriptor */
    BusComMapping map;     /* bridge mapping descriptor */
    char          path;    /* inserted site communication path */
} BusComMsg_site_insertion;
The seq field specifies the SDSQN assigned to the site_insertion initiator.
The src field specifies the unique site identifier assigned to the site_insertion initiator.
The dev field specifies the interface (bus bridge device) descriptor.
The map field specifies the interface and memory mappings on the current bus segment.
The path field specifies the start location of the communication path. The path size has to be calculated using the total message size given by the message header.
typedef struct BusComMsg_site_connect {
    BusComMsg     header;  /* generic message header */
    BusComSite    dst;     /* destination (inserted) site UID */
    BusComSite    src;     /* source (connecting) site UID */
    BusComDevice  dev;     /* source bridge device descriptor */
    BusComMapping map;     /* source bridge mapping descriptor */
    char          path;    /* source site communication path */
} BusComMsg_site_connect;
The dst field specifies the unique identifier of the destination site, that is, the site_insertion initiator.
The src field specifies the unique identifier of the source site, that is, the site_connect initiator.
The dev field specifies the interface (bus bridge device) descriptor.
The map field specifies the interface and memory mappings on the current bus segment.
The path field specifies the start location of the communication path. The path size must be calculated using the total message size given by the message header.
typedef struct BusComMsg_site_removal {
    BusComMsg  header;  /* generic message header */
    BusComSite src;     /* removed site UID */
} BusComMsg_site_removal;
The src field specifies the unique site identifier of the site_removal initiator.
typedef struct BusComMsg_site_disconnect {
    BusComMsg  header;  /* generic message header */
    BusComSite dst;     /* destination (removed) site UID */
    BusComSite src;     /* source (disconnecting) site UID */
} BusComMsg_site_disconnect;
The dst field specifies the unique identifier of the destination site, that is, the site_removal initiator.
The src field specifies the unique identifier of the source site, that is, the site_disconnect initiator.
typedef struct BusComMsg_host_declare {
    BusComMsg   header;  /* generic message header */
    BusComLevel level;   /* local site communication level */
    char        path;    /* local site communication path */
} BusComMsg_host_declare;
The level field specifies the current communication level.
The path field specifies the start location of the current communication path. The path size has to be calculated using the total message size given by the message header.
The BusComMsg_site_shutdown structure defines the site_shutdown request layout. Such a message is sent to the remote side in order to request the site shutdown.
typedef struct BusComMsg_site_shutdown {
    BusComMsg header;  /* generic message header */
} BusComMsg_site_shutdown;
The BusComMsg_site_shutdown_ack structure defines the site_shutdown acknowledgment layout. Such a message is sent to the site_shutdown initiator in order to notify it that the site shutdown process has entered its final phase.
typedef struct BusComMsg_site_shutdown_ack {
    BusComMsg header;  /* generic message header */
} BusComMsg_site_shutdown_ack;
For example, on the cPCI bus, a hot-swap removal event is received by an RM driver instance running on the system controller board. On receiving such an event, the RM driver sends the site_shutdown request to the peer LM driver instance. On receiving this message, the LM driver initiates the board shutdown process. At the final phase of the board shutdown, the LM driver sends the site_shutdown acknowledgment message back to the peer RM driver instance.
This section specifies some generic properties related to a device tree node representing a bus communication device. The section is divided into two subsections which address the xH and GH specific properties respectively. Note that an xH device node is typically created statically in the device tree. Therefore, a system administrator is typically responsible for configuring xH drivers via device node properties.
The "site" property (alias BUSCOM_LH_PROP_SITE) specifies a suggested site identifier to be assigned to an xH driver. The property value type is BusComSite. This property is optional.
An LH driver uses the property value in the site_declare() operation in order to specify a suggested site identifier. If the property is not present, the BUSCOM_SITE_INVALID constant (0x0) is used in the site_declare() operation. This means that the driver has no suggestion for the site identifier.
A GH driver uses the property value as the unique identifier assigned to the local site. If the property is not present or the property value is invalid, the minimal site identifier assigned to the domain is used.
The "mem-size" property (alias BUSCOM_LH_MEM_SIZE) specifies the memory size which should be allocated by an xH driver for communication purposes. The property value type is BusComSize. This property is optional. If the property is not present, the driver uses a default value which is driver implementation specific.
The "host" property (alias BUSCOM_GH_CLASS) attached to a node specifies that an xH driver instance running on the node should act as the GH driver. The property has no value.
The "site-min" (alias BUSCOM_GH_PROP_SITE_MIN) and "site-max" (alias BUSCOM_GH_PROP_SITE_MAX) properties specify a range of unique site identifiers assigned to the communication domain. Both properties use BusComSite as the value type. Both properties are optional.
The BUSCOM_DEF_SITE_MIN constant (0x1) is used as the default value for the BUSCOM_GH_PROP_SITE_MIN property, if the property is not present in the node.
The BUSCOM_DEF_SITE_MAX constant (0xffffffff) is used as the default value for the BUSCOM_GH_PROP_SITE_MAX property, if the property is not present in the node.
Unlike the Bus Control DDI which is a private interface for the physical communication layer, there are two public DDIs provided by the physical communication layer to the upper (logical) communication layer:
Local bus communication DDI.
Remote bus communication DDI.
The local BusCom DDI is provided by an xH driver instance running on the local site.
First of all, the local BusCom driver is responsible for allocating a system memory region for communication purposes and for making it accessible on all remote sites involved in the communication domain. In addition, the local communication driver allows a client to receive a cross interrupt sent from any remote site involved in the communication domain.
The character string "buscom-loc" (alias BUSCOM_LOCAL_CLASS) names the local BusCom device class. A pointer to the BusComLocOps structure is exported by the driver via the svDeviceRegister() microkernel call. A driver client invokes the svDeviceLookup() and svDeviceEntry() microkernel calls in order to obtain a pointer to the device service routines vector. Once the pointer is obtained, the driver client is able to invoke the driver service routines via indirect function calls.
A local BusCom driver is a mono-client device driver. The device registry prevents multiple lookups being done on the same driver instance.
All methods defined by the BusComLocOps structure must be called in the DKI thread context.
typedef struct BusComLocOps {
    BusComLocVersion version;
    KnError (*open)        (BusComId id,
                            BusComConfig* config);
    KnError (*intr_attach) (BusComId id,
                            BusComIntr intr,
                            BusComIntrHandler handler,
                            void* cookie,
                            BusComIntrId* intr_id);
    void    (*intr_detach) (BusComIntrId intr_id);
    void    (*close)       (BusComId id);
} BusComLocOps;
The version field specifies the maximum local BusCom DDI version number supported by the driver.
The version number is incremented each time one of the local BusCom DDI structures is extended in order to include new service routines. In other words, a new symbol is added to the BusComLocVersion enum each time the API is extended in this way.
A driver client specifies a minimum DDI version number required by the client when calling svDeviceLookup(). The svDeviceLookup() routine does not allow a client to look up a driver instance if the DDI version number supported by the driver is less than the DDI version number required by the client.
A client that is aware of DDI extensions may still specify a minimum DDI version when looking for a device in the registry. Once a device is successfully found, the client may examine the version field in order to take advantage of the extended DDI features which may be supported by the device driver.
The open() method is the first call a client must make to a local BusCom device driver. The open() call is used to establish a connection to the driver. It enables subsequent invocation of the intr_attach(), intr_detach(), and close() routines.
The id input argument specifies a given local BusCom device driver instance. It is provided by the device registry entry.
Upon successful completion, the local BusCom driver returns K_OK and passes the communication resources back to the client through the config output argument.
The BusComConfig structure specifies the memory region allocated for the inter-bus communication.
typedef struct BusComConfig {
    void*      mem_base;  /* memory region base (virtual) address */
    BusComSize mem_size;  /* memory region size (in bytes) */
} BusComConfig;
The mem_base field specifies the region base address in the supervisor virtual address space. mem_size specifies the region size in bytes. This region is located in the local system memory and it is also accessible (through a bus) from any remote site in the communication domain.
Note that the memory region is zeroed by the local BusCom driver, except for the BusComHeader structure located at the beginning of the region. The BusComHeader structure is initialized by the local BusCom driver in the following way. Each byte of the lborder field contains its own offset, that is, byte 0 contains 0, byte 1 contains 1, and so forth. The rborder field is initialized to the BYTE_ORDER_LITTLE constant. The BusComHeader structure is typically used on a remote site in order to detect the memory byte order.
typedef struct BusComHeader {
    PropByteOrder lborder;  /* 0,1,2,3 byte constant */
    PropByteOrder rborder;  /* BYTE_ORDER_LITTLE constant */
} BusComHeader;
The local BusCom driver returns K_ENOMEM if the system is out of memory resources. In this case, the config output argument is not modified.
The intr_attach() method connects a given client specific handler to a given (virtual) cross interrupt source.
The id input argument specifies a given local BusCom device driver instance. It is provided by the device registry entry.
The intr input argument is an integer value that specifies a given (virtual) cross interrupt source. Note that if a given interrupt number exceeds the number of physical cross interrupts supported by the hardware, the interrupt handler is connected to the last available physical cross interrupt. Note also that multiple handlers may be attached to the same interrupt source.
The intr_handler input argument specifies a client specific interrupt handler invoked by the local BusCom driver when a cross interrupt is received.
The intr_cookie input argument specifies a cookie being passed back to the interrupt handler.
typedef BusComIntrStatus (*BusComIntrHandler)(void* cookie);
Upon successful completion, the local BusCom driver returns K_OK and passes the interrupt identifier (back to the client) through the intr_id output argument.
The intr_id argument is opaque for the client. It must be passed back to the local BusCom driver as an argument in a subsequent invocation of the intr_detach() service routine.
When the interrupt handler is invoked, the local BusCom driver prevents re-entry to the interrupt handler.
An interrupt handler must return a value specified by the BusComIntrStatus type:
typedef enum {
    BUSCOM_INTR_UNCLAIMED = 0,
    BUSCOM_INTR_CLAIMED
} BusComIntrStatus;
An interrupt handler must return BUSCOM_INTR_UNCLAIMED if the interrupt is unclaimed, that is, no useful work was done in the interrupt handler.
An interrupt handler must return BUSCOM_INTR_CLAIMED if the interrupt has been claimed, that is, useful work was done in the interrupt handler.
The local BusCom driver returns K_ENOMEM if the system is out of memory resources. In this case, the intr_id output argument is not altered.
The intr_detach() method disconnects the interrupt handler previously connected by intr_attach().
The intr_id input argument specifies the interrupt handler being disconnected. It is returned by intr_attach().
The close() method is used to close the connection to a local BusCom driver. This call must be the last call made to the local BusCom driver. The client is responsible for issuing intr_detach() for each attached interrupt handler prior to calling the close routine.
The id input argument specifies a given local BusCom device driver instance. It is given by the device registry entry.
The remote bus communication DDI is provided by each RS driver instance running on the local site and representing a remote site involved in the communication domain.
First of all, the remote BusCom driver is responsible for mapping a shared memory region allocated on the associated remote site into the supervisor address space in order to make it available for the communication protocol. In addition, the remote BusCom driver allows a client to send a cross interrupt to the associated remote site.
Such a cross interrupt will be received by an LH (or GH) driver running on this remote site. This will result in the interrupt handlers attached to this cross interrupt source being invoked.
The character string "buscom-rem" (alias BUSCOM_REMOTE_CLASS) names the remote BusCom device class. A pointer to the BusComRemOps structure is exported by the driver via the svDeviceRegister() microkernel call. A driver client invokes the svDeviceLookup() and svDeviceEntry() microkernel calls in order to obtain a pointer to the device service routines vector. Once the pointer is obtained, the driver client is able to invoke the driver service routines via indirect function calls.
A remote BusCom driver is a mono-client device driver. The device registry prevents multiple lookups being done on the same driver instance.
The open() and close() methods defined by the BusComRemOps structure must be called in the DKI thread context. The intr_trigger() method may be called at interrupt level.
typedef struct BusComRemOps {
    BusComRemVersion version;
    KnError (*open)         (BusComId id,
                             BusComConfig* config);
    void    (*intr_trigger) (BusComId id,
                             BusComIntr intr);
    void    (*close)        (BusComId id);
} BusComRemOps;
The version field specifies the maximum remote BusCom DDI version number supported by the driver.
The version number is incremented each time one of the remote BusCom DDI structures is extended in order to include new service routines.
In other words, a new symbol is added to the BusComRemVersion enum each time the API is extended in this way.
A driver client specifies a minimum DDI version number required by the client when calling svDeviceLookup(). The svDeviceLookup() routine does not allow a client to look up a driver instance if the DDI version number supported by the driver is less than the DDI version number required by the client.
A client that is aware of DDI extensions may still specify a minimum DDI version when looking for a device in the registry. Once a device is successfully found, the client may examine the version field in order to take advantage of extended DDI features which may be supported by the device driver.
In the following description, a local site means the site on which the remote BusCom driver instance is running while a remote site means the remote site which is represented by this remote BusCom driver instance.
The open() method is the first call a client must make to a remote BusCom device driver. The open() call is used to establish a connection to the driver. It enables the subsequent invocation of the intr_trigger() and close() routines.
The id input argument specifies a given remote BusCom device driver instance. It is provided by the device registry entry.
Upon successful completion, the remote BusCom driver returns K_OK and passes the communication resources (back to the client) through the config output argument.
The BusComConfig structure specifies the memory region allocated for the inter-bus communication.
The mem_base field specifies the region base address in the supervisor virtual address space. mem_size specifies the region size in bytes. This region is located in the system memory of the remote site and it is accessible (through a bus) on this local site.
The BusComHeader structure is located at the beginning of the region. It is initialized on the remote site in the following way. Each byte of the lborder field contains its own offset, that is, byte 0 contains 0, byte 1 contains 1, and so forth. The rborder field is initialized to the BYTE_ORDER_LITTLE constant.
The BusComHeader structure fields are used to detect the memory byte order with respect to the system memory (lborder) and to the shared memory mapping on the remote site (rborder). The value read from the lborder field specifies the shared memory byte order on the local site (BYTE_ORDER_LITTLE or BYTE_ORDER_BIG). The value read from the rborder field specifies whether the byte order is inverted with respect to the memory mapping on the remote site. If the BYTE_ORDER_LITTLE value is read, the byte order is the same, otherwise the byte order is inverted.
The remote BusCom driver returns K_ENOMEM if the system is out of memory resources. In this case, the config output argument is not modified.
The intr_trigger() method is used to send a cross interrupt to the remote site.
The id input argument specifies a given remote BusCom device driver instance. It is given by the device registry entry.
The intr argument specifies the (virtual) cross interrupt event to send. Note that if a given cross interrupt number exceeds the number of physical cross interrupts supported by the hardware, the last available physical cross interrupt is sent.
As was mentioned above, the intr_trigger() method may be called at interrupt level.
The close() method is used to close the connection to a remote BusCom driver. This call must be the last call made to the remote BusCom driver.
The id input argument specifies a given remote BusCom device driver instance. It is given by the device registry entry.
A device node associated with a remote or local BusCom device driver instance has two properties:
Device position in the communication domain.
Device path in the communication domain.
The "domain" (alias BUSCOM_PROP_DOMAIN) property specifies the communication device position in the domain. The property value is a BusComPropDomain structure.
typedef struct BusComPropDomain {
    BusComSite local;
    BusComSite remote;
} BusComPropDomain;
The BusComSite type (an integer 32-bit value) is used to enumerate all sites within a communication domain.
The local field of the BusComPropDomain structure specifies the site on which the driver instance is running. The remote field of the BusComPropDomain structure specifies the site which is represented by the driver instance.
Obviously, for a local BusCom driver instance both fields have the same value which designates the local site. On the other hand, for a remote BusCom driver instance these fields normally have different values.
The "path" (alias BUSCOM_PROP_PATH) property specifies the communication device path in the domain. The property value is a NULL terminated ASCII string. This path uniquely designates the remote site represented by the driver instance. Note that the remote site is equal to the local one for a local BusCom driver instance. So, for a local BusCom driver instance this property designates the site path in the communication domain.
A BusCom driver sends a shutdown event to its client in order to notify it about a site shutdown condition.
There are two events which may be delivered to a BusCom driver client through the device registry event mechanism:
DEV_EVENT_SHUTDOWN
DEV_EVENT_REMOVAL
The DEV_EVENT_SHUTDOWN event sent by a local BusCom device means that the local system is going to be shut down. So, the driver client is requested to gracefully shut down all connections in the communication domain and release all (local and remote) BusCom driver instances. Note that the DEV_EVENT_SHUTDOWN event will also be signaled on each remote site for a remote BusCom driver instance representing this site.
The DEV_EVENT_REMOVAL event sent by a local BusCom device means that the local system has detected a fatal error. The driver client is requested to stop its activity as soon as possible and release all (local and remote) BusCom driver instances.
The DEV_EVENT_SHUTDOWN event sent by a remote BusCom device means that the remote system is going to be shut down. So, the driver client is requested to gracefully shut down all connections with this remote site and release the BusCom driver instance.
The DEV_EVENT_REMOVAL event sent by a remote BusCom device means that a fatal error (for example, bus time-out) has been detected while accessing remote memory or bridge interface registers. The driver client is requested to stop communication with this remote site as soon as possible and release the BusCom driver instance.
See attributes(5) for descriptions of the following attributes:
ATTRIBUTE TYPE | ATTRIBUTE VALUE
---|---
Interface Stability | Evolving