CHAPTER 5
Interprocess Communication Software
This chapter describes the Interprocess Communication (IPC) software.
The Interprocess Communication (IPC) mechanism provides a means to communicate between processes that run in a domain under the Sun Netra DPS Lightweight Runtime Environment (LWRTE) and processes in a domain with a control plane operating system. This chapter gives an overview of the programming interfaces, shows how to set up a logical domains environment in which the IPC mechanism can be used, and explains the IPC-specific portions of the IP forwarding reference application (see Reference Applications).
Chapter 5, Interprocess Communication API, of the Sun Netra Data Plane Software Suite 2.1 Update 1 Reference Manual contains a detailed description of all APIs needed to use IPC. The common API can be used in an operating system to connect to an IPC channel, and to transmit and receive packets. First, the user must connect to the channel and register a function to receive packets. Once the channel is established this way, the ipc_tx() function can be used to transmit. The framework calls the registered callback function when a message is received.
In a Sun Netra DPS application, the programmer is responsible for calling the framework initialization routines for the IPC and LDC frameworks before using IPC, and must ensure that polling happens periodically.
In an Oracle Solaris domain, the IPC mechanism can be accessed from either user or kernel space. Before any API can be used, you must install the SUNWndpsd package using the pkgadd command, and you must add the tnsm driver to the system using add_drv. Refer to the respective man pages for detailed instructions. From the Oracle Solaris kernel, the common APIs mentioned above are used for IPC. In user space, the tnsm driver is seen as a character driver. The open(), ioctl(), read(), write(), and close() interfaces are used to connect to a channel, and to send and receive messages.
This section describes the configuration of the environment needed to use the IPC framework. This section also covers setup of memory pools for the LWRTE application, the logical domains environment, and the IPC channels.
The IPC framework shares its memory pools with the basic logical domains framework. These pools are accessed through malloc() and free() functions that are implemented in the application. The ipfwd_ldom reference application contains an example implementation.
The file ldc_malloc_config.h contains definitions of the memory pools and their sizes. ldc_malloc.c contains the implementation of the malloc() and free() routines. These functions have the expected signatures:

void *malloc(size_t size);
void free(void *ptr);
In addition to these implementation files, the memory pools must be declared to the Sun Netra DPS runtime. This declaration is done in the software architecture definition in ipfwd_swarch.c.
In a logical domains environment, the IPC channels use logical domain channels (LDCs) as their transport media. These channels are set up as virtual data plane channels using the ldm command (see the Oracle VM Server for SPARC documentation). These channels are set up between a server and a client. Some basic configuration channels must be defined adhering to the naming convention described in Logical Domain Channel Setup. Each channel has a server defined in the LWRTE domain and a client defined in the link partner domain.
There must be a domain that has the right to set up IPC channels in the LWRTE domain. This domain can be the primary domain or a guest domain with the client for the configuration service. This channel is the only one the administrator must set up manually. When the service (LWRTE) and client domains are up (and the tnsm driver is attached in the client), the special IPC channel with ID 0 is established automatically between them. The tnsmctl utility can then be used in the configuring domain to set up additional IPC channels (provided that the required virtual data plane channels have been configured).
To enable IPC communications between the LWRTE domain and additional domains, a special configuration channel must be set up between these domains. Again, the channel names must adhere to a naming convention. In the LWRTE domain, the service name must begin with the prefix config-tnsm, whereas the client in the other domain must be named config-tnsm0. For example, such a channel can be established using the ldm add-vdpcs and ldm add-vdpcc commands.
Additional channels can be added for data traffic between these domains; there are no naming conventions to follow for these channels. These channels are also configured using the ldm commands.
Names for data plane channel servers and clients cannot be longer than 48 characters. This limit includes the prefixes of configuration channels.
Once the data plane channels are set up by the administrator in the primary domain, the tnsmctl utility is used to set up IPC channels from the IPC control domain. This utility is part of the SUNWndpd package and is located in the bin directory. tnsmctl uses the following syntax:
The parameters to tnsmctl are described in TABLE 5-1. All of these parameters need to be present to set up an IPC channel.
The tnsm driver stores the channel configuration so it can be replayed when the Sun Netra DPS domain reboots. This stored configuration can be purged through the following command:
Note - This option clears the stored configuration, but does not affect the currently operating channels.
The following is a sample environment, complete with all commands needed to set up the environment in a Sun Fire T2000 server.
TABLE 5-2 describes the four environment domains.
The primary domain as well as the guest domains ldg2 and ldg3 run the Oracle Solaris 10 11/06 operating system (or later) with the patch level required for logical domain operation. The SUNWldm package is installed in the primary domain. The SUNWndpsd package is installed in both ldg2 and ldg3.
Assuming 4 Gbytes of memory for each of the domains, and starting with the factory default configuration, the environment can be set up using the following ldm commands:
ldm remove-mau 8 primary
ldm remove-vcpu 28 primary
ldm remove-mem 28G primary (This assumes 32 Gbytes of total memory. Adjust accordingly.)
ldm remove-io bus_b primary
ldm add-vsw mac-addr=your-mac-address net-dev=e1000g0 primary-vsw0 primary
ldm add-vds primary-vds0 primary
ldm add-vcc port-range=5000-5100 primary-vcc0 primary
ldm add-spconfig 4G4Csplit
ldm add-domain ldg1
ldm add-vcpu 20 ldg1
ldm add-mem 4G ldg1
ldm add-vnet mac-addr=your-mac-address-2 vnet0 primary-vsw0 ldg1
ldm add-var auto-boot\?=false ldg1
ldm add-io bus_b ldg1
ldm add-domain ldg2
ldm add-vcpu 4 ldg2
ldm add-mem 4G ldg2
ldm add-vnet mac-addr=your-mac-address-3 vnet0 primary-vsw0 ldg2
ldm add-vdsdev your-disk-file vol2@primary-vds0
ldm add-vdisk vdisk1 vol2@primary-vds0 ldg2
ldm add-var auto-boot\?=false ldg2
ldm add-var boot-device=/virtual-devices@100/channel-devices@200/disk@0 ldg2
ldm add-domain ldg3
ldm add-vcpu 4 ldg3
ldm add-mem 4G ldg3
ldm add-vnet mac-addr=your-mac-address-4 vnet0 primary-vsw0 ldg3
ldm add-vdsdev your-disk-file-2 vol3@primary-vds0
ldm add-vdisk vdisk1 vol3@primary-vds0 ldg3
ldm add-var auto-boot\?=false ldg3
ldm add-var boot-device=/virtual-devices@100/channel-devices@200/disk@0 ldg3
The disk files are created using the mkfile command. Oracle Solaris is installed once the domains are bound and started in a manner described in the Oracle VM Server for SPARC software documentation.
While the domains are unbound, the virtual data plane channels are configured in the primary domain as follows:
ldm add-vdpcs primary-gc ldg1
ldm add-vdpcc tnsm-gc0 primary-gc ldg3
ldm add-vdpcs config-tnsm-ldg2 ldg1
ldm add-vdpcc config-tnsm0 config-tnsm-ldg2 ldg2
ldm add-vdpcs ldg2-vdpcs0 ldg1
ldm add-vdpcc vdpcc0 ldg2-vdpcs0 ldg2
Additional data channels can be added with names selected by the system administrator. Once all channels are configured, the domains can be bound and started.
The IPC channels are configured using the /opt/SUNWndpsd/bin/tnsmctl utility in ldg3.
Before you can use the utility, you must install the SUNWndpsd package in both ldg3 and ldg2, using the pkgadd system administration command. After installing the package, you must add the tnsm driver by using the add_drv system administration command.
To be able to configure these channels, the output of ldm ls-bindings -e in the primary domain is needed to determine the LDC IDs. As an example, the relevant parts of the output for the configuration channel between ldg1 and ldg2 might appear as follows:
For ldg1:
For ldg2:
The channel uses the local LDC ID 6 in the LWRTE domain (ldg1) and remote LDC ID 5 in the Oracle Solaris domain. Given this information, and choosing channel ID 3 for the control channel, this channel is set up using the following command line:
After the control channel is set up, you can then set up the data channel between ldg1 and ldg2. Assuming local LDC ID 7, remote LDC ID 6, and IPC channel ID 4 (again, the LDC IDs must be determined using ldm ls-bindings -e), the following command line sets up the channel:
Note that the -C 4 parameter is the ID for the new channel, and the -F 3 parameter gives the channel ID of the control channel set up previously. After the completion of this command, the IPC channel is ready to be used by an application connecting to channel 4 on both sides. An example application using this channel is contained in the SUNWndps package and described in the following section.
The example configuration described in Example Environment for UltraSPARC T1 Based Servers can be used with UltraSPARC T2 based servers with some minor modifications.
The UltraSPARC T2 chip has eight threads per core, so changing the number of vcpus in the primary from four to eight aligns the second domain to a core boundary.
In the environment in Example Environment for UltraSPARC T1 Based Servers, the primary domain owned one of the PCI buses (bus_a), while the Sun Netra DPS Runtime Environment domain owned the other (bus_b). With an UltraSPARC T2 there is only one PCI bus (pci) and the network interface unit (niu). To set up an environment on such a system, the NIU should be removed from the primary domain and added to the Sun Netra DPS Runtime Environment domain (ldg1) so that the LWRTE domain can use the NIU for fast packet processing applications.
In addition, the IP forwarding and RLP reference applications can use up to 56 threads in UltraSPARC T2 logical domain configurations, depending on the configuration, so the Sun Netra DPS Runtime Environment domain must be sized accordingly.
The Sun Netra DPS package contains an IP forwarding reference application that uses the IPC mechanism. It consists of an IP forwarding application in LWRTE and an Oracle Solaris utility that uses an IPC channel to upload the forwarding tables to the LWRTE domain, choose which table to use, and gather simple statistics, which are displayed in the Oracle Solaris domain. The application is designed to operate in the example setup shown in IPC Channels.
Refer to IP Packet Forwarding Reference Applications for details on how the IPC mechanism is used.
The common header file fibtable.h, located in the src/common/include subdirectory, contains the data structures shared between the Oracle Solaris and the LWRTE domains. In particular, this header file contains the message formats for the communication protocol used between the domains, and the IPC protocol number (201) that the protocol uses. The file also contains the format of the forwarding table entries.
Copyright © 2011, Oracle and/or its affiliates. All rights reserved.