CHAPTER 5

Interprocess Communication Software

This chapter describes the Interprocess Communication (IPC) software. Topics include:

IPC Introduction
Programming Interfaces Overview
Configuring the Environment for IPC
Example Environment for UltraSPARC T1 Based Servers
Example Environment for UltraSPARC T2 Based Servers
IPC Reference Applications


IPC Introduction

The Interprocess Communication (IPC) mechanism provides a means to communicate between processes that run in a domain under the Sun Netra DPS Lightweight Runtime Environment (LWRTE) and processes in a domain with a control plane operating system. This chapter gives an overview of the programming interfaces, shows how to set up a logical domains environment in which the IPC mechanism can be used, and explains the IPC-specific portions of the IP forwarding reference application (see Reference Applications).


Programming Interfaces Overview

Chapter 5, Interprocess Communication API, of the Sun Netra Data Plane Software Suite 2.1 Update 1 Reference Manual contains a detailed description of all APIs needed to use IPC. The common API can be used in an operating system to connect to an IPC channel and to transmit and receive packets. First, the user must connect to the channel and register a function to receive packets. Once the channel is established this way, the ipc_tx() function can be used to transmit. The framework calls the registered callback function when a message is received.
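
The following minimal sketch illustrates this flow. Apart from ipc_tx(), which is named above, the header, function names, and signatures shown here are hypothetical placeholders; consult the Reference Manual for the actual prototypes.

/*
 * Illustrative sketch only. Except for ipc_tx(), the names below
 * (ipc.h, ipc_connect(), the callback signature) are assumptions,
 * not the documented API.
 */
#include <stddef.h>
#include "ipc.h"                 /* hypothetical header exporting the IPC API */

#define MY_CHANNEL_ID 4          /* IPC channel previously set up with tnsmctl */

/* Called by the framework whenever a message arrives on the channel. */
static void
my_rx_callback(void *msg, size_t len)
{
        /* process the received message here */
}

static int
ipc_example(void)
{
        char buf[] = "hello";

        /* Connect to the channel and register the receive callback. */
        if (ipc_connect(MY_CHANNEL_ID, my_rx_callback) != 0)
                return (-1);

        /* Once the channel is established, ipc_tx() transmits a message. */
        return (ipc_tx(MY_CHANNEL_ID, buf, sizeof (buf)));
}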

In a Sun Netra DPS application, the programmer is responsible for calling the framework initialization routines for the IPC and LDC frameworks before using IPC, and must ensure that polling happens periodically.

In an Oracle Solaris domain, the IPC mechanism can be accessed from either user or kernel space. Before any API can be used, you must install the SUNWndpsd package using the pkgadd command, and you must add the tnsm driver to the system using add_drv. Refer to the respective man pages for detailed instructions. From the Oracle Solaris kernel, the common APIs mentioned above are used for IPC. In user space, the tnsm driver is exposed as a character device driver. The open(), ioctl(), read(), write(), and close() interfaces are used to connect to a channel and to send and receive messages.
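
The sketch below shows the general shape of this user-space access path. The device path and the ioctl request code are illustrative assumptions; the actual device node, ioctl commands, and message framing are defined by the tnsm driver and described in the Reference Manual.

/*
 * Illustrative sketch only. The device path and TNSM_IOC_CONNECT request
 * code are placeholders, not the real driver interface.
 */
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>

#define TNSM_IOC_CONNECT 0       /* placeholder: the real request code comes from the driver header */

int
tnsm_user_example(void)
{
        int  fd;
        int  channel = 4;        /* IPC channel previously set up with tnsmctl */
        char msg[]   = "hello";
        char reply[256];

        fd = open("/dev/tnsm", O_RDWR);         /* assumed device node name */
        if (fd < 0)
                return (-1);

        /* Bind the descriptor to an IPC channel (request code assumed). */
        if (ioctl(fd, TNSM_IOC_CONNECT, &channel) < 0) {
                (void) close(fd);
                return (-1);
        }

        (void) write(fd, msg, sizeof (msg));    /* send a message on the channel */
        (void) read(fd, reply, sizeof (reply)); /* wait for a reply */

        (void) close(fd);
        return (0);
}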


Configuring the Environment for IPC

This section describes the configuration of the environment needed to use the IPC framework, including setup of the memory pools for the LWRTE application, the logical domains environment, and the IPC channels.

Memory Management

The IPC framework shares its memory pools with the basic logical domains framework. These pools are accessed through malloc() and free() functions that are implemented in the application. The ipfwd_ldom reference application contains an example implementation.

The file ldc_malloc_config.h contains definitions of the memory pools and their sizes. ldc_malloc.c contains the implementation of the malloc() and free() routines. These functions have the expected signatures:
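
/*
 * Presumably the standard C prototypes; the application-local
 * implementations in ldc_malloc.c are expected to match these.
 */
#include <stddef.h>              /* for size_t */

void *malloc(size_t size);
void free(void *ptr);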

In addition to these implementation files, the memory pools must be declared to the Sun Netra DPS runtime. This declaration is done in the software architecture definition in ipfwd_swarch.c.

IPC in the Logical Domains Environment

In a logical domains environment, the IPC channels use logical domain channels (LDCs) as their transport media. These channels are set up as virtual data plane channels using the ldm command (see the Oracle VM Server for SPARC documentation). Each channel is set up between a server and a client. Some basic configuration channels must be defined adhering to the naming convention described in Logical Domain Channel Setup. Each channel has a server defined in the LWRTE domain and a client defined in the link partner domain.

Logical Domain Channel Setup

There must be a domain that has the right to set up IPC channels in the LWRTE domain. This domain can be the primary domain or a guest domain with the client for the configuration service. The administrator needs to set up only this channel manually. When the service (LWRTE) and client domains are up (and the tnsm driver attached at the client), the special IPC channel with ID 0 is established automatically between the devices. The tnsmctl utility can then be used in the configuring domain to set up additional IPC channels (provided that the required virtual data plane channels have been configured).

To enable IPC communications between the LWRTE domain and additional domains, a special configuration channel must be set up between these domains. Again, the channel names must adhere to a naming convention: in the LWRTE domain, the service name must begin with the prefix config-tnsm, whereas the client in the other domain must be named config-tnsm0. Such a channel is established using the ldm commands, as shown under Client Control Channel in the example environment below.

Additional channels can be added for data traffic between these domains; there are no naming conventions to follow for these channels. These channels are also configured using the ldm commands.

Names for data plane channel servers and clients cannot be longer than 48 characters. This limit includes the prefixes of configuration channels.



Note - An Oracle Solaris domain can have only one configuration channel. In the configuration domain, where the channel client tnsm-gc0 is present, a channel client with the name config-tnsm0 must not be configured.


IPC Channel Setup

Once the data plane channels are set up by the administrator in the primary domain, the tnsmctl utility is used to set up IPC channels from the IPC control domain. This utility is part of the SUNWndpsd package and is located in the bin directory. tnsmctl uses the following syntax:


# tnsmctl -S -C channel-id -L local-ldc -R remote-ldc -F control-channel-id

The parameters to tnsmctl are described in TABLE 5-1. All of these parameters need to be present to set up an IPC channel.


TABLE 5-1 tnsmctl Parameters

-S
    Set up an IPC channel.

-C channel-id
    Channel ID of the new channel to be set up.

-L local-ldc
    Local LDC ID of the Virtual Data Plane Channel to be used for this IPC channel. Local here always means local to the LWRTE domain. Obtain this LDC ID using the ldm list-bindings command.

-R remote-ldc
    Remote LDC ID of the Virtual Data Plane Channel to be used for this IPC channel, that is, the LDC ID seen in the client domain. Obtain this LDC ID using the ldm list-bindings command with the -e flag.

-F control-channel-id
    IPC channel ID of the control channel between the LWRTE and the client domain. If the client domain is the control domain, this channel ID is 0. For all other client domains, the control channel must be set up by the administrator. To set up the control channel, use the same ID for both the -C and the -F options.


The tnsm driver stores the channel configuration so it can be replayed when the Sun Netra DPS domain reboots. This stored configuration can be purged through the following command:


# tnsmctl -p



Note - This option clears the stored configuration, but does not affect the currently operating channels.



Example Environment for UltraSPARC T1 Based Servers

The following is a sample environment, complete with all commands needed to set up the environment on a Sun Fire T2000 server.

Domains

TABLE 5-2 describes the four environment domains.


TABLE 5-2 Environment Domains

primary
    Owns one of the PCI buses, and uses the physical disks and networking interfaces to provide virtual I/O to the Oracle Solaris guest domains.

ldg1
    Owns the other PCI bus (bus_b) with its two network interfaces and runs an LWRTE application.

ldg2
    Runs control plane applications and uses IPC channels to communicate with the LWRTE domain (ldg1).

ldg3
    Controls the LWRTE domain through the global control channel. The tnsmctl utility is used here to set up IPC channels.


The primary domain as well as the guest domains ldg2 and ldg3 run the Oracle Solaris 10 11/06 operating system (or later) with the patch level required for logical domain operation. The SUNWldm package is installed in the primary domain. The SUNWndpsd package is installed in both ldg2 and ldg3.

Assuming 4 Gbyte of memory for each of the domains, and starting with the factory-default configuration, the environment can be set up using the following commands for each domain:

primary

ldm remove-mau 8 primary
ldm remove-vcpu 28 primary
ldm remove-mem 28G primary
(This assumes 32 Gbyte of total memory. Adjust accordingly.)
ldm remove-io bus_b primary
ldm add-vsw mac-addr=your-mac-address net-dev=e1000g0 primary-vsw0 primary
ldm add-vds primary-vds0 primary
ldm add-vcc port-range=5000-5100 primary-vcc0 primary
ldm add-spconfig 4G4Csplit

ldg1 - LWRTE

ldm add-domain ldg1
ldm add-vcpu 20 ldg1
ldm add-mem 4G ldg1
ldm add-vnet mac-addr=your-mac-address-2 vnet0 primary-vsw0 ldg1
ldm add-var auto-boot\?=false ldg1
ldm add-io bus_b ldg1

ldg2 - Control Plane Application

ldm add-domain ldg2
ldm add-vcpu 4 ldg2
ldm add-mem 4G ldg2
ldm add-vnet mac-addr=your-mac-address-3 vnet0 primary-vsw0 ldg2
ldm add-vdsdev your-disk-file vol2@primary-vds0
ldm add-vdisk vdisk1 vol2@primary-vds0 ldg2
ldm add-var auto-boot\?=false ldg2
ldm add-var boot-device=/virtual-devices@100/channel-devices@200/disk@0 ldg2

ldg3 - Solaris Control Domain

ldm add-domain ldg3
ldm add-vcpu 4 ldg3
ldm add-mem 4G ldg3
ldm add-vnet mac-addr=your-mac-address-4 vnet0 primary-vsw0 ldg3
ldm add-vdsdev your-disk-file-2 vol3@primary-vds0
ldm add-vdisk vdisk1 vol3@primary-vds0 ldg3
ldm add-var auto-boot\?=false ldg3
ldm add-var boot-device=/virtual-devices@100/channel-devices@200/disk@0 ldg3

The disk files are created using the mkfile command. Oracle Solaris is installed once the domains are bound and started, as described in the Oracle VM Server for SPARC software documentation.

Virtual Data Plane Channels

While the domains are unbound, the virtual data plane channels are configured in the primary domain as follows:

Global Control Channel

ldm add-vdpcs primary-gc ldg1
ldm add-vdpcc tnsm-gc0 primary-gc ldg3

Client Control Channel

ldm add-vdpcs config-tnsm-ldg2 ldg1
ldm add-vdpcc config-tnsm0 config-tnsm-ldg2 ldg2

Data Channel

ldm add-vdpcs ldg2-vdpcs0 ldg1
ldm add-vdpcc vdpcc0 ldg2-vdpcs0 ldg2

Additional data channels can be added with names selected by the system administrator. Once all channels are configured, the domains can be bound and started.

IPC Channels

The IPC channels are configured using the /opt/SUNWndpsd/bin/tnsmctl utility in ldg3.

Before you can use the utility, you must install the SUNWndpsd package in both ldg3 and ldg2, using the pkgadd system administration command. After installing the package, you must add the tnsm driver by using the add_drv system administration command.

To configure these channels, the LDC IDs must be determined from the output of ldm ls-bindings -e in the primary domain. As an example, the relevant parts of the output for the configuration channel between ldg1 and ldg2 might appear as follows:

For ldg1:


VDPCS
        NAME               CLIENT                      LDC
        config-tnsm-ldg2   config-tnsm0@ldg2           6


For ldg2:


VDPCC
        NAME               SERVICE                     LDC
        config-tnsm0       config-tnsm-ldg2@ldg1       5


The channel uses the local LDC ID 6 in the LWRTE domain (ldg1) and the remote LDC ID 5 in the Oracle Solaris domain (ldg2). Given this information, and choosing channel ID 3 for the control channel, this channel is set up using the following command line:


# tnsmctl -S -C 3 -L 6 -R 5 -F 3

After the control channel is set up, you can then set up the data channel between ldg1 and ldg2. Assuming local LDC ID 7, remote LDC ID 6, and IPC channel ID 4 (again, the LDC IDs must be determined using ldm ls-bindings -e), the following command line sets up the channel:


# tnsmctl -S -C 4 -L 7 -R 6 -F 3

Note that the -C 4 parameter is the ID of the new channel, while -F 3 specifies the channel ID of the control channel set up previously. After the completion of this command, the IPC channel is ready to be used by an application connecting to channel 4 on both sides. An example application using this channel is contained in the SUNWndps package and described in the following section.


Example Environment for UltraSPARC T2 Based Servers

The example configuration described in Example Environment for UltraSPARC T1 Based Servers can be used with UltraSPARC T2 based servers with some minor modifications.

The UltraSPARC T2 chip has eight threads per core, so changing the number of vcpus in the primary from four to eight aligns the second domain to a core boundary.

In the environment in Example Environment for UltraSPARC T1 Based Servers, the primary domain owned one of the PCI buses (bus_a), while the Sun Netra DPS Runtime Environment domain owned the other one (bus_b). On an UltraSPARC T2 based server there is only one PCI bus (pci) and the network interface unit (niu). To set up an environment on such a system, remove the NIU from the primary domain and add it to the Sun Netra DPS Runtime Environment domain (ldg1) so that the LWRTE domain can use the NIU for fast packet processing applications.

In addition, the IP forwarding and RLP reference applications can use up to fifty-six threads in the UltraSPARC T2 logical domain configurations, depending on the configuration, so the Sun Netra DPS Runtime Environment domain must be sized accordingly.


IPC Reference Applications

The Sun Netra DPS package contains an IP forwarding reference application that uses the IPC mechanism. The application consists of an IP forwarding application running in LWRTE and an Oracle Solaris utility that uses an IPC channel to upload the forwarding tables to the LWRTE domain, choose which table to use, and gather simple statistics, which are displayed in the Oracle Solaris domain. The application is designed to operate in the example setup shown in IPC Channels.

Refer to IP Packet Forwarding Reference Applications for details on how the IPC mechanism is used.

Common Header

The common header file fibtable.h, located in the src/common/include subdirectory, contains the data structures shared between the Oracle Solaris and the LWRTE domains. In particular, the common header file contains the message formats for the communication protocol used between the domains, and the IPC protocol number (201) that this protocol uses. This file also contains the format of the forwarding table entries.
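
As a purely illustrative sketch, and not the actual layout defined in fibtable.h, a forwarding table entry of this kind typically carries a destination prefix, network mask, next hop, and output port, roughly as follows:

/*
 * Hypothetical illustration only: the real entry format is defined in
 * src/common/include/fibtable.h and may differ in field names, widths,
 * and ordering.
 */
#include <stdint.h>

typedef struct fib_entry_example {
        uint32_t dest_addr;     /* destination IPv4 address (network order) */
        uint32_t dest_mask;     /* network mask for the destination prefix  */
        uint32_t next_hop;      /* next-hop IPv4 address                    */
        uint16_t out_port;      /* egress port or interface index           */
        uint16_t flags;         /* entry state flags                        */
} fib_entry_example_t;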