CHAPTER 5

Interprocess Communication Software

This chapter describes the interprocess communication (IPC) software. Topics include:

IPC Introduction
Programming Interfaces Overview
Configuring the Environment for IPC
Example Environment for UltraSPARC T1 Based Servers
Example Environment for UltraSPARC T2 Based Servers
Reference Applications

IPC Introduction

The interprocess communication (IPC) mechanism provides a means to communicate between processes that run in a domain under the Netra DPS Lightweight Runtime Environment (LWRTE) and processes in a domain with a control plane operating system. This chapter gives an overview of the programming interfaces, shows how to set up an LDoms environment in which the IPC mechanism can be used, and explains the IPC-specific portions of the IP forwarding reference application (see Chapter 9, Reference Applications).


Programming Interfaces Overview

Chapter 5, Interprocess Communication API, of the Netra Data Plane Software Suite 2.0 Reference Manual contains a detailed description of all APIs needed to use IPC. The common API can be used in an operating system to connect to an IPC channel, and to transmit and receive packets. First, the user must connect to the channel and register a function to receive packets. Once the channel is established this way, the ipc_tx() function can be used to transmit. The framework calls the registered callback function when a message is received.
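
The following C sketch illustrates this usage pattern. Only ipc_tx() is named in this chapter; ipc_connect(), ipc_register_read_callback(), and all of the prototypes shown are placeholder assumptions for this sketch, so consult the reference manual for the actual API.

/*
 * Minimal sketch of the common IPC API usage pattern (illustration only).
 * ipc_connect() and ipc_register_read_callback() are hypothetical names,
 * and all prototypes here are assumptions.
 */
#include <stddef.h>

#define EXAMPLE_CHANNEL_ID  4    /* data channel ID used later in this chapter */

/* Assumed prototypes, for illustration only. */
extern int  ipc_connect(int chan_id);
extern void ipc_register_read_callback(int chan_id,
                void (*cb)(void *msg, size_t len));
extern int  ipc_tx(int chan_id, void *msg, size_t len);

/* Called by the framework when a message arrives on the channel. */
static void
example_rx(void *msg, size_t len)
{
        /* ... process the received message ... */
}

int
example_ipc_setup_and_send(void *msg, size_t len)
{
        /* Connect to the channel and register the receive callback. */
        if (ipc_connect(EXAMPLE_CHANNEL_ID) != 0)
                return (-1);
        ipc_register_read_callback(EXAMPLE_CHANNEL_ID, example_rx);

        /* Once the channel is established, transmit with ipc_tx(). */
        return (ipc_tx(EXAMPLE_CHANNEL_ID, msg, len));
}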

In a Netra DPS application, the programmer is responsible for calling the framework initialization routines for the IPC and LDC frameworks before using IPC, and must ensure that polling happens periodically.
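
A rough sketch of this responsibility is shown below, assuming the initialization calls described later for the ipfwd_ldom application (mach_descrip_init(), lwrte_cnex_init(), lwrte_init_ldc(), and tnipc_init()). Their return types and argument lists, as well as the polling call lwrte_ipc_poll(), are placeholders for illustration.

/*
 * Sketch of LWRTE-side framework setup and polling (illustration only).
 * The four initialization calls and their order are taken from the
 * ipfwd_ldom reference application; the declarations below and the
 * polling call are assumptions.
 */
extern int  mach_descrip_init();
extern int  lwrte_cnex_init();
extern int  lwrte_init_ldc();
extern int  tnipc_init();
extern void lwrte_ipc_poll();           /* hypothetical polling entry point */

void
example_ipc_strand(void)
{
        /* Initialize the LDC framework, then the IPC framework. */
        mach_descrip_init();
        lwrte_cnex_init();
        lwrte_init_ldc();
        tnipc_init();

        /* Poll periodically so channel events and messages are processed. */
        for (;;) {
                lwrte_ipc_poll();
                /* ... other periodic work ... */
        }
}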

In a Solaris domain, the IPC mechanism can be accessed from either user or kernel space. Before any API can be used, you must install the SUNWndpsd package using the pkgadd command, and you must add the tnsm driver to the system using add_drv. Refer to the respective man pages for detailed instructions. From the Solaris kernel, the common APIs mentioned above are used for IPC. In user space, the tnsm driver is seen as a character driver. The open(), ioctl(), read(), write(), and close() interfaces are used to connect to a channel, and send and receive messages.
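
As an illustration of the user-space path, the sketch below opens the tnsm device and issues the TNIPC_IOC_CH_STATUS ioctl mentioned later in this chapter. The device path /dev/tnsm, the header name, and the ioctl argument are assumptions; the channel-connect and message-format details are defined by the driver's own headers and are not shown.

/*
 * Sketch of user-space access through the tnsm character driver
 * (illustration only).
 */
#include <fcntl.h>
#include <unistd.h>
#include <stropts.h>
#include <stdio.h>
#include <sys/tnsm.h>   /* assumed header defining the TNIPC_IOC_* ioctls */

int
main(void)
{
        int fd = open("/dev/tnsm", O_RDWR);     /* assumed device path */

        if (fd < 0) {
                perror("open tnsm");
                return (1);
        }

        /* Query channel status (ioctl name taken from the fibctl utility;
         * the argument here is an assumption). */
        if (ioctl(fd, TNIPC_IOC_CH_STATUS, 0) < 0)
                perror("TNIPC_IOC_CH_STATUS");

        /*
         * write() sends a message on the connected channel, and read()
         * retrieves a received message.
         */

        (void) close(fd);
        return (0);
}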


Configuring the Environment for IPC

This section describes how to configure the environment needed to use the IPC framework, including the setup of memory pools for the LWRTE application, the LDoms environment, and the IPC channels.

Memory Management

The IPC framework shares its memory pools with the basic LDoms framework. These pools are accessed through malloc() and free() functions that are implemented in the application. The ipfwd_ldom reference application contains an example implementation.

The file ldc_malloc_config.h contains definitions of the memory pools and their sizes. ldc_malloc.c contains the implementation of the malloc() and free() routines. These functions have the expected signatures:
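
/*
 * Standard C allocation signatures; the application provides its own
 * implementations of these in ldc_malloc.c.
 */
void *malloc(size_t size);
void free(void *ptr);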

In addition to these implementation files, the memory pools must be declared to the Netra DPS runtime. This declaration is done in the software architecture definition in ipfwd_swarch.c.

IPC in the LDoms Environment

In the LDoms environment, the IPC channels use Logical Domain Channels (LDCs) as their transport. Each channel is set up as a Virtual Data Plane Channel using the ldm command (see the LDoms documentation) and consists of a server and a client. Some basic configuration channels must be defined adhering to the naming convention described in LDoms Channel Setup. Each channel has a server defined in the LWRTE domain and a client defined in the link partner domain.

LDoms Channel Setup

There must be a domain that has the right to set up IPC channels in the LWRTE domain. This domain can be the primary domain or a guest domain that hosts the client for the configuration service. The administrator needs to set up only this channel. When the service (LWRTE) and client domains are up (and the tnsm driver is attached at the client), the special IPC channel with ID 0 is established automatically between them. The tnsmctl utility can then be used in the configuring domain to set up additional IPC channels (provided that the required Virtual Data Plane Channels have been configured).

To enable IPC communications between the LWRTE domain and additional domains, a special configuration channel must be set up between these domains. Again, the channel names must adhere to a naming convention: in the LWRTE domain, the service name must begin with the prefix config-tnsm, whereas the client in the other domain must be named config-tnsm0. For example, such a channel could be established using the ldm commands shown below.
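
Using the domain names from the sample environment later in this chapter (LWRTE domain ldg1, client domain ldg2), the channel would be created with:

ldm add-vdpcs config-tnsm-ldg2 ldg1
ldm add-vdpcc config-tnsm0 config-tnsm-ldg2 ldg2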

Additional channels can be added for data traffic between these domains; there are no naming conventions to follow for these channels. These channels are also configured using ldm commands.
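
For example, a data channel between ldg1 and ldg2, using the names from the sample environment later in this chapter, would be created with:

ldm add-vdpcs ldg2-vdpcs0 ldg1
ldm add-vdpcc vdpcc0 ldg2-vdpcs0 ldg2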

Names for data plane channel servers and clients cannot be longer than 48 characters. This limit includes the prefixes of configuration channels.



Note - A Solaris domain may only have one configuration channel. In the configuration domain, where the channel client tnsm-gc0 is present, a channel client with the name config-tnsm0 must not be configured.


IPC Channel Setup

Once the data plane channels are set up by the administrator in the primary domain, the tnsmctl utility is used to set up IPC channels from the IPC control domain. This utility is part of the SUNWndpsd package and is located in the bin directory. tnsmctl uses the following syntax:


tnsmctl -S -C channel-id -L local-ldc -R remote-ldc -F control-channel-id

The parameters to tnsmctl are described in TABLE 5-1. All of these parameters need to be present to set up an IPC channel.


TABLE 5-1 tnsmctl Parameters

Parameter

Description

-S

Set up IPC channel.

-C channel-id

Channel ID of the new channel to be set up.

-L local-ldc

Local LDC ID of the Virtual Data Plane Channel to be used for this IPC channel. Local here always means local to the LWRTE domain. Obtain this LDC ID using the ldm list-bindings command.

-R remote-ldc

Remote LDC ID of the Virtual Data Plane Channel to be used for this IPC channel, that is, the LDC ID seen in the client domain. Obtain this LDC ID using the ldm list-bindings command.

-F control-channel-id

IPC channel ID of the control channel between the LWRTE and the client domain. If the client domain is the control domain, this channel ID is 0. For all other client domains, the control channel must be set up by the administrator. To set up the control channel, use the same ID for both the -C and the -F options.



Example Environment for UltraSPARC T1 Based Servers

The following is a sample environment, complete with all commands needed to set it up on a Sun Fire T2000 server.

Domains

TABLE 5-2 describes the four environment domains.


TABLE 5-2 Environment Domains

Domain

Description

primary

Owns one of the PCI buses, and uses the physical disks and networking interfaces to provide virtual I/O to the Solaris guest domains.

ldg1

Owns the other PCI bus (bus_b) with its two network interfaces and runs an LWRTE application.

ldg2

Runs control plane applications and uses IPC channels to communicate with the LWRTE domain (ldg1).

ldg3

Controls the LWRTE domain through the global control channel. The tnsmctl utility is used here to set up IPC channels.


The primary domain, as well as the guest domains ldg2 and ldg3, runs the Solaris 10 11/06 Operating System (or later) with the patch level required for LDoms operation. The SUNWldm package is installed in the primary domain. The SUNWndpsd package is installed in both ldg2 and ldg3.

Assuming 4GByte of memory for each of the domains, and starting with the factory default configuration, the environment can be set up using the following domain commands:

primary

ldm remove-mau 8 primary
ldm remove-vcpu 28 primary
ldm remove-mem 28G primary
(This assumes 32GByte of total memory. Adjust accordingly.)
ldm remove-io bus_b primary
ldm add-vsw mac-addr=your-mac-address net-dev=e1000g0 primary-vsw0 primary
ldm add-vds primary-vds0 primary
ldm add-vcc port-range=5000-5100 primary-vcc0 primary
ldm add-spconfig 4G4Csplit

ldg1 - LWRTE

ldm add-domain ldg1
ldm add-vcpu 20 ldg1
ldm add-mem 4G ldg1
ldm add-vnet mac-addr=your-mac-address-2 vnet0 primary-vsw0 ldg1
ldm add-var auto-boot\?=false ldg1
ldm add-io bus_b ldg1

ldg2 - Control Plane Application

ldm add-domain ldg2
ldm add-vcpu 4 ldg2
ldm add-mem 4G ldg2
ldm add-vnet mac-addr=your-mac-address-3 vnet0 primary-vsw0 ldg2
ldm add-vdsdev your-disk-file vol2@primary-vds0
ldm add-vdisk vdisk1 vol2@primary-vds0 ldg2
ldm add-var auto-boot\?=false ldg2
ldm add-var boot-device=/virtual-devices@100/channel-devices@200/disk@0 ldg2

ldg3 - Solaris Control Domain

ldm add-domain ldg3
ldm add-vcpu 4 ldg3
ldm add-mem 4G ldg3
ldm add-vnet mac-addr=your-mac-address-4 vnet0 primary-vsw0 ldg3
ldm add-vdsdev your-disk-file-2 vol3@primary-vds0
ldm add-vdisk vdisk1 vol3@primary-vds0 ldg3
ldm add-var auto-boot\?=false ldg3
ldm add-var boot-device=/virtual-devices@100/channel-devices@200/disk@0 ldg3

The disk files are created using the mkfile command. Solaris is installed after the domains are bound and started, as described in the LDoms administrator's guide.
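
For example, a backing file for one of the virtual disks could be created as follows (the size and path here are arbitrary choices for this example):

mkfile 10g /ldoms/ldg2-disk.img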

Virtual Data Plane Channels

While the domains are unbound, the Virtual Data Plane Channels are configured in the primary domain as follows:

Global Control Channel

ldm add-vdpcs primary-gc ldg1
ldm add-vdpcc tnsm-gc0 primary-gc ldg3

Client Control Channel

ldm add-vdpcs config-tnsm-ldg2 ldg1
ldm add-vdpcc config-tnsm0 config-tnsm-ldg2 ldg2

Data Channel

ldm add-vdpcs ldg2-vdpcs0 ldg1
ldm add-vdpcc vdpcc0 ldg2-vdpcs0 ldg2

Additional data channels can be added with names selected by the system administrator. Once all channels are configured, the domains can be bound and started.

IPC Channels

The IPC channels are configured using the /opt/SUNWndpsd/bin/tnsmctl utility in ldg3.

Before you can use the utility, you must install the SUNWndpsd package in both ldg3 and ldg2, using the pkgadd system administration command. After installing the package, you must add the tnsm driver by using the add_drv system administration command.
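
For example, assuming the package is available in the current directory, the installation steps in ldg2 and ldg3 would look like the following (any additional add_drv options required by the driver are described in its documentation):

pkgadd -d . SUNWndpsd
add_drv tnsm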

To be able to configure these channels, the output of ldm ls-bindings -e in the primary domain is needed to determine the LDC IDs. As an example, the relevant parts of the output for the configuration channel between ldg1 and ldg2 might appear as follows:

For ldg1, the relevant entry is the server (VDPCS) side of the channel:


VDPCS
        NAME               CLIENT                      LDC
        config-tnsm-ldg2   config-tnsm0@ldg2           6


For ldg2, the corresponding client (VDPCC) entry shows the LDC ID as seen in that domain, in this example 5.

The channel therefore uses the local LDC ID 6 in the LWRTE domain (ldg1) and the remote LDC ID 5 in the Solaris domain (ldg2). Given this information, and choosing channel ID 3 for the control channel, the channel is set up using the following command line:


tnsmctl -S -C 3 -L 6 -R 5 -F 3

After the control channel is set up, you can then set up the data channel between ldg1 and ldg2. Assuming local LDC ID 7, remote LDC ID 6, and IPC channel ID 4 (again, the LDC IDs must be determined using ldm ls-bindings -e), the following command line sets up the channel:


tnsmctl -S -C 4 -L 7 -R 6 -F 3

Note that the -C 4 parameter is the ID of the new channel, and -F 3 specifies the channel ID of the control channel set up previously. After this command completes, the IPC channel is ready to be used by an application connecting to channel 4 on both sides. An example application using this channel is contained in the SUNWndps package and is described in Reference Applications.


Example Environment for UltraSPARC T2 Based Servers

The example configuration described in Example Environment for UltraSPARC T1 Based Servers can be used with UltraSPARC T2 based servers with some minor modifications.

The UltraSPARC T2 chip has eight threads per core, so changing the number of vcpus in the primary from four to eight aligns the second domain to a core boundary.

In the environment in Example Environment for UltraSPARC T1 Based Servers, the primary domain owned one of the PCI buses (bus_a), while the Netra DPS Runtime Environment domain owned the other (bus_b). With an UltraSPARC T2 based server, there is only one PCI bus (pci) and the Network Interface Unit (niu). To set up an environment on such a system, remove the NIU from the primary domain and add it to the Netra DPS Runtime Environment domain (ldg1).

In addition, the IP forwarding and RLP reference applications use forty threads in the UltraSPARC T2 LDoms configurations, and the Netra DPS Runtime Environment domain must be sized accordingly.


Reference Applications

The Netra DPS package contains an IP forwarding reference application that uses the IPC mechanism. The application consists of an IP forwarding application that runs in LWRTE (see Forwarding Application) and a Solaris utility that uses an IPC channel to upload the forwarding tables to the LWRTE domain, choose which table to use, and gather and display some simple statistics in the Solaris domain. The application is designed to operate in the example setup shown in IPC Channels.

Common Header

The common header file fibtable.h, located in the src/common/include subdirectory, contains the data structures shared between the Solaris and the LWRTE domains. In particular, this header file contains the message formats for the communication protocol used between the domains and the IPC protocol number (201) that it uses. The file also contains the format of the forwarding table entries.

Solaris Utility Code

The code for the Solaris utility is in the src/solaris subdirectory and consists of the single file fibctl.c. This file implements a simple CLI to control the forwarding application running in the LWRTE domain. The utility is built by running gmake in that directory and is deployed in a domain that has an IPC channel established to the LWRTE domain. The program opens the tnsm driver and offers the following commands (a sample session follows the command list):

connect Channel_ID

Connects to the channel with ID Channel_ID. The forwarding application is hard coded to use channel ID 4. The IPC type is hard coded on both sides. This command must be issued before any of the other commands.

use-table Table_ID

Instructs the forwarding application to use the specified table. In the current code, the table ID must be 0 or 1.

write-table Table_ID

Transmits the table with the indicated ID to the forwarding application. There are two predefined tables in the application.

stats

Requests statistics from the forwarding application and displays them.

read

Reads an IPC message that has been received from the forwarding application. Currently not used.

status

Issues the TNIPC_IOC_CH_STATUS ioctl.

exit / x / quit / q

Exits the program.

help

Displays program help information.
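
A typical session on the data channel with ID 4 configured earlier in this chapter might consist of the following command sequence (shown without the program's prompt or output):

connect 4
write-table 0
use-table 0
stats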

Forwarding Application

The code that implements the forwarding application consists of the following components:

The hardware architecture is identical to the default architecture in all other reference applications.

The software architecture differs from other applications in that it contains code for the specific number of strands that the target logical domain will have. Also, the memory pools used in the malloc() and free() implementation for the LDoms and IPC frameworks are declared here.

The mapping file contains a mapping for each strand of the target logical domain.

The rx.c and tx.c files contain simple functions that use the Ethernet driver to receive and transmit a packet, respectively.

ldc_malloc.c contains the implementation of the memory allocation algorithm. The corresponding header file, ldc_malloc_config.h, contains some configuration for the memory pools used.

user_common.c contains the memory allocation provided for the Ethernet driver, as well as the definition for the queues used to communicate between the strands. The corresponding header file, user_common.h, contains function prototypes for the routines used in the application, as well as declarations for the common data structures.

ipfwd.c contains the definitions of the functions that run on the different strands. In this version of the application, all strands start in the _main() function. Based on the thread ID, _main() calls the respective function for rx, tx, forwarding, the IPC thread, the CLI, and statistics gathering.
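
A simplified sketch of this dispatch pattern is shown below. The thread-ID ranges, the helper names, and the thread-ID query are placeholders and do not reproduce the actual ipfwd.c code.

/*
 * Illustrative sketch of per-strand dispatch in _main() (not the actual
 * ipfwd.c code). Thread-ID ranges and helper names are assumptions.
 */
void
_main(void)
{
        int tid = example_get_thread_id();      /* hypothetical helper */

        if (tid < NUM_RX_THREADS)
                example_rx_loop(tid);           /* receive packets */
        else if (tid < NUM_RX_THREADS + NUM_FWD_THREADS)
                example_fwd_loop(tid);          /* forwarding lookup */
        else if (tid < NUM_RX_THREADS + NUM_FWD_THREADS + NUM_TX_THREADS)
                example_tx_loop(tid);           /* transmit packets */
        else if (tid == IPC_THREAD_ID)
                example_ipc_loop();             /* IPC polling and messages */
        else
                example_cli_and_stats_loop();   /* CLI and statistics */
}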

The main functionality is provided by the rx, tx, forwarding, IPC, CLI, and statistics-gathering processes started from _main().

The IP forwarding algorithm called by the forwarding thread is implemented in ipfwd_lib.c. The lookup algorithm used is a simple linear search through the forwarding table. The destination MAC address is set according to the forwarding entry found, and the TTL is decremented.
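
The following self-contained sketch shows the general shape of such a lookup. It is not the actual ipfwd_lib.c code: the entry layout, table representation, and header offsets are assumptions, and a real implementation would also update the IP header checksum after changing the TTL.

/*
 * Illustrative sketch of a linear-search forwarding lookup (not the
 * actual ipfwd_lib.c code).
 */
#include <stdint.h>
#include <string.h>

typedef struct example_fib_entry {
        uint32_t dest_addr;             /* destination network (IPv4) */
        uint32_t dest_mask;             /* network mask */
        uint8_t  next_hop_mac[6];       /* MAC address written into the frame */
} example_fib_entry_t;

/*
 * Look up dst_ip in the active table, rewrite the destination MAC of the
 * Ethernet header, and decrement the IP TTL. Returns 0 on success.
 */
int
example_ipfwd_lookup(example_fib_entry_t *table, int nentries,
    uint32_t dst_ip, uint8_t *eth_hdr, uint8_t *ip_hdr)
{
        int i;

        for (i = 0; i < nentries; i++) {
                if ((dst_ip & table[i].dest_mask) == table[i].dest_addr) {
                        /* Destination MAC occupies the first 6 bytes of the
                         * Ethernet header. */
                        memcpy(eth_hdr, table[i].next_hop_mac, 6);
                        /* TTL is byte 8 of the IPv4 header; the checksum
                         * update is omitted in this sketch. */
                        ip_hdr[8]--;
                        return (0);
                }
        }
        return (-1);                    /* no matching entry */
}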

ipfwd_config.h contains configuration for the forwarding application, such as the number of strands and memory sizes used.

init.c contains the initialization code for the application. First, the queues are initialized. Initialization of the Ethernet interfaces is left to the rx strands, but the tx strands must wait until that initialization is done before they can proceed. The LDoms framework is initialized with calls to mach_descrip_init(), lwrte_cnex_init(), and lwrte_init_ldc(). After this initialization, the IPC framework is initialized by a call to tnipc_init(). These four functions must be called in this specific order. Finally, the data structures for the forwarding tables are initialized.

The forwarding application for an LDoms environment can be built using the build script located in the main application directory.

To deploy the application, the image must be copied to a tftp server. The image can then be booted using a network boot from either one of the Ethernet ports, or from a virtual network interface. See the README file for details. After booting the application, the IPC channels are initialized as described in Example Environment for UltraSPARC T1 Based Servers. Once the IPC channels are up, you can use the fibctl utility to manipulate the forwarding tables and gather statistics.