Network Interface Guide

Chapter 3 Programming With XTI and TLI

The X/Open Transport Interface (XTI) and the Transport Layer Interface (TLI) are a set of functions that constitute a network programming interface. XTI is an evolution from the older TLI interface available on SunOS 4. Both interfaces are supported, though XTI represents the future direction of this set of interfaces.

XTI/TLI Is Multithread Safe

The interfaces described in this chapter are multithread safe. This means that applications containing XTI/TLI function calls can be used freely in a multithreaded application. However, the degree of concurrency available to applications is not specified.

XTI/TLI Are Not Asynchronous Safe

The XTI/TLI interface behavior has not been well specified in an asynchronous environment. It is not recommended that these interfaces be used from signal handler routines.

What Are XTI and TLI?

TLI was introduced with AT&T's System V, Release 3 in 1986. It provided a transport layer interface API. TLI was modeled after the ISO Transport Service Definition and provides an API between the OSI transport and session layers. TLI interfaces evolved further in the AT&T System V, Release 4 version of UNIX and are also available in the SunOS 5.6 operating system.

XTI interfaces are an evolution of TLI interfaces and represent the future direction of this family of interfaces. Compatibility for applications using TLI interfaces is available. There is no intrinsic need to port TLI applications to XTI immediately. New applications can use the XTI interfaces and older applications can be ported to XTI when necessary.

TLI is implemented as a set of function calls in a library (libnsl) with which the applications link. XTI applications are compiled using the c89 front end and must be linked with the xnet library (libxnet). For additional information on compiling with XTI, see standards(5).
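For example, assuming default library locations, the link lines might look like the following (the source file names are illustrative, and exact compiler options vary by release):

cc  -o tli_app tli_app.c -lnsl       (TLI application, linked with libnsl)
c89 -o xti_app xti_app.c -lxnet      (XTI application, linked with libxnet)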


Note -

An application using the XTI interface uses the xti.h header file, whereas an application using the TLI interface includes the tiuser.h header file.


Intrinsic to XTI/TLI are the notions of transport endpoints and a transport provider. The transport endpoints are two entities that are communicating, and the transport provider is the set of routines on the host that provides the underlying communication support. XTI/TLI is the interface to the transport provider, not the provider itself. See Figure 3-1.

Figure 3-1 How XTI/TLI Works


XTI/TLI code can be written to be independent of current transport providers in conjunction with some additional interfaces and mechanisms described in Chapter 4. The SunOS 5 product includes some transport providers (TCP, for example) as part of the base operating system. A transport provider performs services, and the transport user requests the services. The transport user issues service requests to the transport provider. An example is a request to transfer data over a connection-oriented transport such as TCP or a connectionless transport such as UDP.

XTI/TLI can also be used for transport-independent programming; the additional interfaces and mechanisms that support this are described in Chapter 4.

XTI/TLI provides two modes of service: connection mode and connectionless mode. The next two sections give an overview of these modes.

Connectionless Mode

Connectionless mode is message oriented. Data are transferred in self-contained units with no relationship between the units. This service requires only an established association between the peer users that determines the characteristics of the data. All information required to deliver a message (such as the destination address) is presented to the transport provider, with the data to be transmitted, in one service request. Each message is entirely self-contained. Use connectionless mode service for applications that involve short-term request/response interactions and do not require guaranteed, in-sequence delivery of data.

Connectionless transports can be unreliable. They need not necessarily maintain message sequence, and messages are sometimes lost.

Connectionless Mode Routines

Connectionless-mode transport service has two phases: local management and data transfer. The local management phase defines the same local operations as for the connection mode service.

The data transfer phase lets a user transfer data units (usually called datagrams) to the specified peer user. Each data unit must be accompanied by the transport address of the destination user. t_sndudata(3NSL) sends and t_rcvudata(3NSL) receives messages. Table 3-1 summarizes all routines for connectionless mode data transfer.

Table 3-1 Routines for Connectionless-Mode Data Transfer

Command        Description
t_sndudata     Sends a message to another user of the transport
t_rcvudata     Receives a message sent by another user of the transport
t_rcvuderr     Retrieves error information associated with a previously sent message

Connectionless Mode Service

Connectionless mode service is appropriate for short-term request/response interactions, such as transaction-processing applications. Data are transferred in self-contained units with no logical relationship required among multiple units.

Endpoint Initiation

Transport users must initialize XTI/TLI endpoints before transferring data. They must choose the appropriate connectionless service provider using t_open(3NSL) and establish its identity using t_bind(3NSL).

Use t_optmgmt(3NSL) to negotiate protocol options. As in connection mode service, each transport provider specifies the options, if any, that it supports. Option negotiation is a protocol-specific activity. In Example 3-1, the server waits for incoming queries, then processes and responds to each query. The example also shows the definitions and initiation sequence of the server.


Example 3-1 CLTS Server

#include <stdio.h>
#include <fcntl.h>
#include <xti.h>	/* TLI applications use <tiuser.h>  */
#define SRV_ADDR 2	/* server's well known address */

main()
{
   int fd;
   int flags;
   struct t_bind *bind;
   struct t_unitdata *ud;
   struct t_uderr *uderr;
   extern int t_errno;

	if ((fd = t_open("/dev/exmp", O_RDWR, (struct t_info *) NULL))
	        == -1) {
      t_error("unable to open /dev/exmp");
      exit(1);
   }
 	if ((bind = (struct t_bind *)t_alloc(fd, T_BIND, T_ADDR))
         == (struct t_bind *) NULL) {
      t_error("t_alloc of t_bind structure failed");
      exit(2);
   }
   bind->addr.len = sizeof(int);
   *(int *)bind->addr.buf = SRV_ADDR;
   bind->qlen = 0;
   if (t_bind(fd, bind, bind) == -1) {
      t_error("t_bind failed");
      exit(3);
   }
   /*
    * TLI applications need the following check, which is no longer
    * needed for XTI applications.
    * -------------------------------------
    * Is the bound address correct?
    *
    * if (bind -> addr.len != sizeof(int) ||
    *      *(int *)bind->addr.buf != SRV_ADDR) {
    *	fprintf(stderr, "t_bind bound wrong address\n");
    *	exit(4);
    * }
    * ---------------------------------------
    */

The server establishes a transport endpoint with the desired transport provider using t_open(3NSL). Each provider has an associated service type, so the user can choose a particular service by opening the appropriate transport provider file. This connectionless mode server ignores the characteristics of the provider returned by t_open(3NSL) by setting the third argument to NULL. The transaction server assumes that the transport provider supports the T_CLTS (connectionless) service type and that its transport addresses are integers that uniquely identify each user.

The connectionless server binds a transport address to the endpoint so that potential clients can access the server. A t_bind structure is allocated using t_alloc(3NSL) and the buf and len fields of the address are set accordingly.

One difference between a connection mode server and a connectionless mode server is that the qlen field of the t_bind structure is 0 for connectionless mode service. There are no connection requests to queue.

XTI/TLI interfaces define an inherent client-server relationship between two users while establishing a transport connection in the connection mode service. No such relationship exists in connectionless mode service.

TLI requires that the server check the bound address returned by t_bind(3NSL) to ensure that it is the same as the one supplied. t_bind(3NSL) can also bind the endpoint to a separate, free address if the one requested is busy.

Data Transfer

After a user has bound an address to the transport endpoint, datagrams can be sent or received over the endpoint. Each outgoing message carries the address of the destination user. XTI/TLI also lets you specify protocol options to the transfer of the data unit (for example, transit delay). Each transport provider defines the set of options on a datagram. When the datagram is passed to the destination user, the associated protocol options can be passed, too.

Example 3-2 illustrates the data transfer phase of the connectionless mode server.


Example 3-2 Data Transfer Routine

	if ((ud = (struct t_unitdata *) t_alloc(fd, T_UNITDATA,T_ALL))
         == (struct t_unitdata *) NULL) {
      t_error("t_alloc of t_unitdata struct failed");
      exit(5);
   }
   if ((uderr = (struct t_uderr *) t_alloc(fd, T_UDERROR, T_ALL))
         == (struct t_uderr *) NULL) {
      t_error("t_alloc of t_uderr struct failed");
      exit(6);
   }
   while(1) {
      if (t_rcvudata(fd, ud, &flags) == -1) {
         if (t_errno == TLOOK) {
               /* Error on previously sent datagram */
               if(t_rcvuderr(fd, uderr) == -1) {
                  exit(7);
               }
            fprintf(stderr, "bad datagram, error=%d\n",
               uderr->error);
            continue;
         }
         t_error("t_rcvudata failed");
         exit(8);
      }
      /*
       * Query() processes the request and places the response in
       * ud->udata.buf, setting ud->udata.len
       */
      query(ud);
      if (t_sndudata(fd, ud) == -1) {
         t_error("t_sndudata failed");
         exit(9);
      }
   }
}

/* ARGSUSED */
void
query(ud)
struct t_unitdata *ud;
{
   /* Merely a stub for simplicity */
}

To buffer datagrams, the server first allocates a t_unitdata structure, which has the following format:

struct t_unitdata {
 	struct netbuf addr;
 	struct netbuf opt;
 	struct netbuf udata;
};

addr holds the source address of incoming datagrams and the destination address of outgoing datagrams. opt holds any protocol options on the datagram. udata holds the data. The addr, opt, and udata fields must all be allocated with buffers large enough to hold any possible incoming values. The T_ALL argument of t_alloc(3NSL) ensures this and sets the maxlen field of each netbuf structure accordingly. The provider does not support protocol options in this example, so maxlen is set to 0 in the opt netbuf structure. The server also allocates a t_uderr structure for datagram errors.

The transaction server loops forever, receiving queries, processing the queries, and responding to the clients. It first calls t_rcvudata(3NSL) to receive the next query. t_rcvudata(3NSL) blocks until a datagram arrives, and returns it.

The second argument of t_rcvudata(3NSL) identifies the t_unitdata structure in which to buffer the datagram.

The third argument, flags, points to an integer variable and can be set to T_MORE on return from t_rcvudata(3NSL) to indicate that the user's udata buffer is too small to store the full datagram.

If this happens, the next call to t_rcvudata(3NSL) retrieves the rest of the datagram. Because t_alloc(3NSL) allocates a udata buffer large enough to store the maximum size datagram, this transaction server does not have to check flags. This is true only of t_rcvudata(3NSL) and not of any other receive primitives.
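A receiver that cannot pre-allocate a maximum-size buffer would have to test the flag. The following fragment is a minimal sketch of that pattern, using the fd, ud, and flags variables from Example 3-2:

   /* Sketch: collect a datagram that may be delivered in several pieces */
   do {
      if (t_rcvudata(fd, ud, &flags) == -1) {
         t_error("t_rcvudata failed");
         exit(8);
      }
      /* consume or append ud->udata.len bytes from ud->udata.buf here */
   } while (flags & T_MORE);   /* more of the same datagram to come */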

When a datagram is received, the transaction server calls its query routine to process the request. This routine stores a response in the structure pointed to by ud, and sets ud->udata.len to the number of bytes in the response. The source address returned by t_rcvudata(3NSL) in ud->addr is the destination address for t_sndudata(3NSL). When the response is ready, t_sndudata(3NSL) is called to send the response to the client.

Datagram Errors

If the transport provider cannot process a datagram sent by t_sndudata(3NSL), it returns a unit data error event, T_UDERR, to the user. This event includes the destination address and options of the datagram, and a protocol-specific error value that identifies the error. Datagram errors are protocol specific.


Note -

A unit data error event does not always indicate success or failure in delivering the datagram to the specified destination. Remember, connectionless service does not guarantee reliable delivery of data.


The transaction server is notified of an error when it tries to receive another datagram. In this case, t_rcvudata(3NSL) fails, setting t_errno to TLOOK. If TLOOK is set, the only possible event is T_UDERR, so the server calls t_rcvuderr(3NSL) to retrieve the event. The second argument of t_rcvuderr(3NSL) is the t_uderr structure that was allocated earlier. This structure is filled in by t_rcvuderr(3NSL) and has the following format:

struct t_uderr {
 	struct netbuf addr;
 	struct netbuf opt;
 	t_scalar_t error;
};

where addr and opt identify the destination address and protocol options specified in the bad datagram, and error is a protocol-specific error code. The transaction server prints the error code, then continues.

Connection Mode

Connection mode is circuit oriented. Data are transmitted in sequence over an established connection. The mode also provides an identification procedure that avoids address resolution and transmission in the data transfer phase. Use this service for applications that require data-stream-oriented interactions. Connection mode transport service has four phases: local management, connection establishment, data transfer, and connection release.

The local management phase defines local operations between a transport user and a transport provider, as shown in Figure 3-2. For example, a user must establish a channel of communication with the transport provider. Each channel between a transport user and transport provider is a unique endpoint of communication, and is called the transport endpoint. t_open(3NSL) lets a user choose a particular transport provider to supply the connection mode services, and establishes the transport endpoint.

Figure 3-2 Transport Endpoint


Connection Mode Routines

Each user must establish an identity with the transport provider. A transport address is associated with each transport endpoint. One user process can manage several transport endpoints. In connection mode service, one user requests a connection to another user by specifying the other's address. The structure of a transport address is defined by the transport provider. An address can be as simple as an unstructured character string (for example, file_server), or as complex as an encoded bit pattern that specifies all information needed to route data through a network. Each transport provider defines its own mechanism for identifying users. Addresses can be assigned to the endpoint of a transport by t_bind(3NSL).

In addition to t_open(3NSL) and t_bind(3NSL), several routines support local operations. Table 3-2 summarizes all local management routines of XTI/TLI.

Table 3-2 Routines of XTI/TLI for Operating on the Endpoint

Command         Description
t_alloc         Allocates XTI/TLI data structures
t_bind          Binds a transport address to a transport endpoint
t_close         Closes a transport endpoint
t_error         Prints an XTI/TLI error message
t_free          Frees structures allocated using t_alloc(3NSL)
t_getinfo       Returns a set of parameters associated with a particular transport provider
t_getprotaddr   Returns the local and/or remote address associated with an endpoint (XTI only)
t_getstate      Returns the state of a transport endpoint
t_look          Returns the current event on a transport endpoint
t_open          Establishes a transport endpoint connected to a chosen transport provider
t_optmgmt       Negotiates protocol-specific options with the transport provider
t_sync          Synchronizes a transport endpoint with the transport provider
t_unbind        Unbinds a transport address from a transport endpoint

The connection phase lets two users create a connection, or virtual circuit, between them, as shown in Figure 3-3.

Figure 3-3 Transport Connection


For example, the connection phase occurs when a server advertises its service to a group of clients, then blocks on t_listen(3NSL) to wait for a request. A client tries to connect to the server at the advertised address by a call to t_connect(3NSL). The connection request causes t_listen(3NSL) to return to the server, which can call t_accept(3NSL) to complete the connection.

Table 3-3 summarizes all routines available for establishing a transport connection. Refer to man pages for the specifications on these routines.

Table 3-3 Routines for Establishing a Transport Connection

Command        Description
t_accept       Accepts a request for a transport connection
t_connect      Establishes a connection with the transport user at a specified destination
t_listen       Listens for connect requests from another transport user
t_rcvconnect   Completes connection establishment if t_connect(3NSL) was called in asynchronous mode (see "Advanced Topics")

The data transfer phase lets users transfer data in both directions through the connection. t_snd(3NSL) sends and t_rcv(3NSL) receives data through the connection. All data sent by one user is guaranteed to be delivered to the other user in the order in which it was sent. Table 3-4 summarizes the connection mode data-transfer routines.

Table 3-4 Connection Mode Data Transfer Routines

Command       Description
t_rcv(3NSL)   Receives data that has arrived over a transport connection
t_snd(3NSL)   Sends data over an established transport connection

XTI/TLI has two types of connection release. The abortive release directs the transport provider to release the connection immediately. Any previously sent data that has not yet been transmitted to the other user can be discarded by the transport provider. t_snddis(3NSL) initiates the abortive disconnect. t_rcvdis(3NSL) receives the abortive disconnect. Transport providers usually support some form of abortive release procedure.

Some transport providers also support an orderly release that terminates communication without discarding data. t_sndrel(3NSL) and t_rcvrel(3NSL) perform this function. Table 3-5 summarizes the connection release routines. Refer to man pages for the specifications on these routines.

Table 3-5 Connection Release Routines

Command          Description
t_rcvdis(3NSL)   Returns a reason code for a disconnection and any remaining user data
t_rcvrel(3NSL)   Acknowledges receipt of an orderly release of a connection request
t_snddis(3NSL)   Aborts a connection or rejects a connect request
t_sndrel(3NSL)   Requests the orderly release of a connection

Connection Mode Service

The main concepts of connection mode service are illustrated through a client program and its server. The examples are presented in segments.

In the examples, the client establishes a connection to a server process. The server transfers a file to the client. The client receives the file contents and writes them to standard output.

Endpoint Initiation

Before a client and server can connect, each must first open a local connection to the transport provider (the transport endpoint) through t_open(3NSL), and establish its identity (or address) through t_bind(3NSL).

Many protocols perform a subset of the services defined in XTI/TLI. Each transport provider has characteristics that determine the services it provides and limit the services. Data defining the transport characteristics are returned by t_open(3NSL) in a t_info structure. Table 3-6 shows the fields in a t_info structure.

Table 3-6 t_info Structure

Field      Content
addr       Maximum size of a transport address
options    Maximum bytes of protocol-specific options that can be passed between the transport user and transport provider
tsdu       Maximum message size that can be transmitted in either connection mode or connectionless mode
etsdu      Maximum expedited data message size that can be sent over a transport connection
connect    Maximum number of bytes of user data that can be passed between users during connection establishment
discon     Maximum bytes of user data that can be passed between users during the abortive release of a connection
servtype   The type of service supported by the transport provider
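For reference, the structure corresponding to this table is declared along the following lines (a sketch based on the fields above; exact field types differ between tiuser.h and xti.h, and the XTI version also carries an additional flags member):

struct t_info {
   t_scalar_t addr;      /* maximum size of a transport address */
   t_scalar_t options;   /* maximum bytes of protocol-specific options */
   t_scalar_t tsdu;      /* maximum message (TSDU) size */
   t_scalar_t etsdu;     /* maximum expedited data (ETSDU) size */
   t_scalar_t connect;   /* maximum user data at connection establishment */
   t_scalar_t discon;    /* maximum user data at abortive release */
   t_scalar_t servtype;  /* service type supported by the provider */
};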

The three service types defined by XTI/TLI are:

  1. T_COTS -- The transport provider supports connection mode service but does not provide the orderly release facility. Connection termination is abortive, and any data not already delivered is lost.

  2. T_COTS_ORD -- The transport provider supports connection mode service with the orderly release facility.

  3. T_CLTS -- The transport provider supports connectionless mode service.

Only one such service can be associated with the transport provider identified by t_open(3NSL).

t_open(3NSL) returns the default provider characteristics of a transport endpoint. Some characteristics can change after an endpoint has been opened. This happens with negotiated options (option negotiation is described later in this section). t_getinfo(3NSL) returns the current characteristics of a transport endpoint.
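For example, a program that needs to confirm the current characteristics of an endpoint it has already opened might do something like the following sketch (fd is assumed to be a descriptor returned by t_open(3NSL)):

   struct t_info info;

   if (t_getinfo(fd, &info) == -1) {
      t_error("t_getinfo failed");
      exit(1);
   }
   if (info.servtype != T_COTS_ORD) {
      fprintf(stderr, "provider does not support orderly release\n");
      exit(1);
   }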

After a user establishes an endpoint with the chosen transport provider, the client and server must establish their identities. t_bind(3NSL) does this by binding a transport address to the transport endpoint. For servers, this routine informs the transport provider that the endpoint is used to listen for incoming connect requests.

t_optmgmt(3NSL) can be used during the local management phase. It lets a user negotiate the values of protocol options with the transport provider. Each transport protocol defines its own set of negotiable protocol options, such as quality-of-service parameters. Because the options are protocol-specific, only applications written for a specific protocol use this function.

Client

The local management requirements of the example client and server are used to discuss details of these facilities. Example 3-3 shows the definitions needed by the client program, followed by its necessary local management steps.


Example 3-3 Client Implementation of Open and Bind

#include <stdio.h>
#include <tiuser.h>
#include <fcntl.h>
#define SRV_ADDR 1 									/* server's address */

main()
{
   int fd;
   int nbytes;
   int flags = 0;
   char buf[1024];
   struct t_call *sndcall;
   extern int t_errno;

   if ((fd = t_open("/dev/exmp", O_RDWR, (struct t_info *) NULL))
         == -1) {
      t_error("t_open failed");
      exit(1);
   }
   if (t_bind(fd, (struct t_bind *) NULL, (struct t_bind *) NULL)
         == -1) {
      t_error("t_bind failed");
      exit(2);
   }

The first argument of t_open(3NSL) is the path of a file system object that identifies the transport protocol. /dev/exmp is the example name of a special file that identifies a generic, connection-based transport protocol. The second argument, O_RDWR, specifies to open for both reading and writing. The third argument points to a t_info structure in which to return the service characteristics of the transport.

This data is useful for writing protocol-independent software (see "Guidelines to Protocol Independence"). In this example, a NULL pointer is passed. For Example 3-3, the transport provider must support the T_COTS_ORD service type, and its transport addresses must be integers that uniquely identify each user.

If the user needs a service other than T_COTS_ORD, another transport provider can be opened. An example of the T_CLTS service invocation is shown in the section "Read/Write Interface".

t_open(3NSL) returns the transport endpoint file handle that is used by all subsequent XTI/TLI function calls. The identifier is a file descriptor from opening the transport protocol file. See open(2).

The client then calls t_bind(3NSL) to assign an address to the endpoint. The first argument of t_bind(3NSL) is the transport endpoint handle. The second argument points to a t_bind structure that describes the address to bind to the endpoint. The third argument points to a t_bind structure that describes the address that the provider has bound.

The address of a client is rarely important because no other process tries to access it. That is why the second and third arguments to t_bind(3NSL) are NULL. The second NULL argument directs the transport provider to choose an address for the user.

If t_open(3NSL) or t_bind(3NSL) fails, the program calls t_error(3NSL) to display an appropriate error message on stderr. The global integer t_errno is assigned an error value. A set of error values is defined in tiuser.h.

t_error(3NSL) is analogous to perror(3C). If the transport function error is a system error, t_errno(3NSL) is set to TSYSERR, and errno is set to the appropriate value.
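The following fragment sketches this error handling, distinguishing a system error from a transport error (it assumes <errno.h> and <string.h> are included; the messages are illustrative):

   if (t_bind(fd, (struct t_bind *) NULL, (struct t_bind *) NULL) == -1) {
      t_error("t_bind failed");      /* XTI/TLI-specific message on stderr */
      if (t_errno == TSYSERR)
         fprintf(stderr, "system error: %s\n", strerror(errno));
      exit(2);
   }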

Server

The server example must also establish a transport endpoint at which to listen for connection requests. Example 3-4 shows the definitions and local management steps.


Example 3-4 Server Implementation of Open and Bind

#include <tiuser.h>
#include <stropts.h>
#include <fcntl.h>
#include <stdio.h>
#include <signal.h>

#define DISCONNECT -1
#define SRV_ADDR 1								/* server's address */
int conn_fd;			/* connection established here */
extern int t_errno;

main()
{
   int listen_fd;							/* listening transport endpoint */
   struct t_bind *bind;
   struct t_call *call;

   if ((listen_fd = t_open("/dev/exmp", O_RDWR,
      (struct t_info *) NULL)) == -1) {
      t_error("t_open failed for listen_fd");
      exit(1);
   }
   if ((bind = (struct t_bind *)t_alloc( listen_fd, T_BIND, T_ALL))
         == (struct t_bind *) NULL) {
      t_error("t_alloc of t_bind structure failed");
      exit(2);
   }
   bind->qlen = 1;
   
   /*
    * Because it assumes the format of the provider's address,
    * this program is transport-dependent
    */
    bind->addr.len = sizeof(int);
   *(int *) bind->addr.buf = SRV_ADDR;
   if (t_bind (listen_fd, bind, bind) < 0 ) {
      t_error("t_bind failed for listen_fd");
      exit(3);
   }

   #if (!defined(_XOPEN_SOURCE) ||(_XOPEN_SOURCE_EXTENDED -0 != 1))
   /* 
    * Was the correct address bound? 
    * 
    * When using XTI, this test is unnecessary 
    */

   if (bind->addr.len != sizeof(int) ||
      *(int *)bind->addr.buf != SRV_ADDR) {
      fprintf(stderr, "t_bind bound wrong address\n");
      exit(4);
    }
    #endif

Like the client, the server first calls t_open(3NSL) to establish a transport endpoint with the desired transport provider. The endpoint, listen_fd, is used to listen for connect requests.

Next, the server binds its address to the endpoint. This address is used by each client to access the server. The second argument points to a t_bind structure that specifies the address to bind to the endpoint. The t_bind structure has the following format:

struct t_bind {
 	struct netbuf addr;
 	unsigned qlen;
};

Where addr describes the address to be bound, and qlen specifies the maximum number of outstanding connect requests. All XTI structure and constant definitions are made visible to application programs through xti.h. All TLI structure and constant definitions are in tiuser.h.

The address is specified in the netbuf structure with the following format:

struct netbuf {
 	unsigned int maxlen;
 	unsigned int len;
 	char *buf;
};

Where maxlen specifies the maximum length of the buffer in bytes, len specifies the bytes of data in the buffer, and buf points to the buffer that contains the data.

In the t_bind structure, the data identifies a transport address. qlen specifies the maximum number of connect requests that can be queued. If the value of qlen is positive, the endpoint can be used to listen for connect requests. t_bind(3NSL) directs the transport provider to queue connect requests for the bound address immediately. The server must dequeue each connect request and accept or reject it. For a server that fully processes a single connect request and responds to it before receiving the next request, a value of 1 is appropriate for qlen. Servers that dequeue several connect requests before responding to any should specify a longer queue. The server in this example processes connect requests one at a time, so qlen is set to 1.

t_alloc(3NSL) is called to allocate the t_bind structure. t_alloc(3NSL) has three arguments: a file descriptor of a transport endpoint; the identifier of the structure to allocate; and a flag that specifies which, if any, netbuf buffers to allocate. T_ALL specifies to allocate all netbuf buffers, and causes the addr buffer to be allocated in this example. Buffer size is determined automatically and stored in the maxlen field.

Each transport provider manages its address space differently. Some transport providers allow a single transport address to be bound to several transport endpoints, while others require a unique address per endpoint. XTI and TLI differ in some significant ways in providing the address binding.

In TLI, based on its rules, a provider determines if it can bind the requested address. If not, it chooses another valid address from its address space and binds it to the transport endpoint. The application program must check the bound address to ensure that it is the one previously advertised to clients. In XTI, if the provider determines it cannot bind to the requested address, it fails the t_bind(3NSL) request with an error.

If t_bind(3NSL) succeeds, the provider begins queueing connect requests, entering the next phase of communication.

Connection Establishment

XTI/TLI imposes different procedures in this phase for clients and servers. The client starts connection establishment by requesting a connection to a specified server using t_connect(3NSL). The server receives a client's request by calling t_listen(3NSL). The server must accept or reject the client's request. It calls t_accept(3NSL) to establish the connection, or t_snddis(3NSL) to reject the request. The client is notified of the result when t_connect(3NSL) returns.

TLI supports two facilities during connection establishment that might not be supported by all transport providers: the ability to transfer user data between the two users while the connection is being established, and the negotiation of protocol options.

These facilities produce protocol-dependent software (see "Guidelines to Protocol Independence").

Client

The steps for the client to establish a connection are shown in Example 3-5.


Example 3-5 Client-to-Server Connection

if ((sndcall = (struct t_call *) t_alloc(fd, T_CALL, T_ADDR))
      == (struct t_call *) NULL) {
   t_error("t_alloc failed");
   exit(3);
}

/*
 * Because it assumes it knows the format of the provider's
 * address, this program is transport-dependent
 */
sndcall->addr.len = sizeof(int);
*(int *) sndcall->addr.buf = SRV_ADDR;
if (t_connect( fd, sndcall, (struct t_call *) NULL) == -1 ) {
   t_error("t_connect failed for fd");
   exit(4);
}

The t_connect(3NSL) call connects to the server. The first argument of t_connect(3NSL) identifies the client's endpoint, and the second argument points to a t_call structure that identifies the destination server. This structure has the following format:

struct t_call {
 	struct netbuf addr;
 	struct netbuf opt;
 	struct netbuf udata;
 	int sequence;
};

addr identifies the address of the server, opt specifies protocol-specific options to the connection, and udata identifies user data that can be sent with the connect request to the server. The sequence field has no meaning for t_connect(3NSL). In this example, only the server's address is passed.

t_alloc(3NSL) allocates the t_call structure dynamically. The third argument of t_alloc(3NSL) is T_ADDR, which specifies that the addr netbuf buffer should be allocated. The server's address is then copied to buf, and len is set accordingly.

The third argument of t_connect(3NSL) can be used to return information about the newly established connection, and can return any user data sent by the server in its response to the connect request. The third argument here is set to NULL by the client. The connection is established on successful return of t_connect(3NSL). If the server rejects the connect request, t_connect(3NSL) sets t_errno to TLOOK.

Event Handling

The TLOOK error has special significance. TLOOK is set if an XTI/TLI routine is interrupted by an unexpected asynchronous transport event on the endpoint. TLOOK does not report an error with an XTI/TLI routine, but the normal processing of the routine is not done because of the pending event. The events defined by XTI/TLI are listed in Table 3-7.

Table 3-7 Asynchronous Endpoint Events

Name           Description
T_LISTEN       Connection request arrived at the transport endpoint
T_CONNECT      Confirmation of a previous connect request arrived (generated when a server accepts a connect request)
T_DATA         User data has arrived
T_EXDATA       Expedited user data arrived
T_DISCONNECT   Notice that an aborted connection or a rejected connect request arrived
T_ORDREL       A request for orderly release of a connection arrived
T_UDERR        Notice of an error in a previous datagram arrived (see "Read/Write Interface")

The state table in "State Transitions" shows which events can happen in each state. t_look(3NSL) lets a user determine what event has occurred if a TLOOK error is returned. In the example, if a connect request is rejected, the client exits.
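On the client side, that check might look roughly like the following sketch of the connect call from Example 3-5 (the exit codes are illustrative):

   if (t_connect(fd, sndcall, (struct t_call *) NULL) == -1) {
      if (t_errno == TLOOK && t_look(fd) == T_DISCONNECT) {
         /* the server rejected the connect request */
         (void) t_rcvdis(fd, (struct t_discon *) NULL);
         fprintf(stderr, "connect request rejected\n");
         exit(4);
      }
      t_error("t_connect failed for fd");
      exit(4);
   }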

Server

When the client calls t_connect(3NSL), a connect request is sent to the server's transport endpoint. For each client, the server accepts the connect request and spawns a process to service the connection.

if ((call = (struct t_call *) t_alloc(listen_fd, T_CALL, T_ALL))
      == (struct t_call *) NULL) {
   t_error("t_alloc of t_call structure failed");
   exit(5);
}
while(1) {
   if (t_listen( listen_fd, call) == -1) {
      t_error("t_listen failed for listen_fd");
      exit(6);
   }
   if ((conn_fd = accept_call(listen_fd, call)) != DISCONNECT)
      run_server(listen_fd);
}

The server allocates a t_call structure, then enters an endless loop. The loop blocks on t_listen(3NSL) for a connect request. When a request arrives, the server calls accept_call() to accept the connect request. accept_call accepts the connection on an alternate transport endpoint (as discussed below) and returns the handle of that endpoint. (conn_fd is a global variable.) Because the connection is accepted on an alternate endpoint, the server can continue to listen on the original endpoint. If the call is accepted without error, run_server spawns a process to service the connection.

XTI/TLI supports an asynchronous mode for these routines that prevents a process from blocking. See "Advanced Topics".

When a connect request arrives, the server calls accept_call to accept the client's request, as Example 3-6 shows.


Note -

It is implicitly assumed that this server only needs to handle a single connection request at a time. This is not normally true of a server. The code required to handle multiple simultaneous connection requests is complicated because of XTI/TLI event mechanisms. See "Advanced Programming Example" for such a server.



Example 3-6 accept_call Function

accept_call(listen_fd, call)
int listen_fd;
struct t_call *call;
{
   int resfd;

   if ((resfd = t_open("/dev/exmp", O_RDWR, (struct t_info *) NULL))
         == -1) {
      t_error("t_open for responding fd failed");
      exit(7);
 	}
   if (t_bind(resfd, (struct t_bind *) NULL, (struct t_bind *) NULL)
         == -1) {
      t_error("t_bind for responding fd failed");
      exit(8);
   }
   if (t_accept(listen_fd, resfd, call) == -1) {
      if (t_errno == TLOOK) {								/* must be a disconnect */
         if (t_rcvdis(listen_fd,(struct t_discon *) NULL) == -1) {
            t_error("t_rcvdis failed for listen_fd");
            exit(9);
         }
         if (t_close(resfd) == -1) {
            t_error("t_close failed for responding fd");
            exit(10);
         }
         /* go back up and listen for other calls */
         return(DISCONNECT);
      }
      t_error("t_accept failed");
      exit(11);
   }
   return(resfd);
}

accept_call() has two arguments:

listen_fd    The file handle of the transport endpoint where the connect request arrived.
call         Points to a t_call structure that contains all information associated with the connect request.

The server first opens another transport endpoint by opening the clone device special file of the transport provider and binding an address. A NULL specifies not to return the address bound by the provider. The new transport endpoint, resfd, accepts the client's connect request.

The first two arguments of t_accept(3NSL) specify the listening transport endpoint and the endpoint where the connection is accepted, respectively. Accepting a connection on the listening endpoint prevents other clients from accessing the server for the duration of the connection.

The third argument of t_accept(3NSL) points to the t_call structure containing the connect request. This structure should contain the address of the calling user and the sequence number returned by t_listen(3NSL). The sequence number is significant if the server queues multiple connect requests. "Advanced Topics" shows an example of this. The t_call structure also identifies protocol options and user data to pass to the client. Because this transport provider does not support protocol options or the transfer of user data during connection, the t_call structure returned by t_listen(3NSL) is passed without change to t_accept(3NSL).

The example is simplified. The server exits if either the t_open(3NSL) or t_bind(3NSL) call fails. exit(2) closes the transport endpoint of listen_fd, causing a disconnect request to be sent to the client. The client's t_connect(3NSL) call fails, setting t_errno to TLOOK.

t_accept(3NSL) can fail if an asynchronous event occurs on the listening endpoint before the connection is accepted, and t_errno is set to TLOOK. Table 3-8 shows that only a disconnect request can be sent in this state with only one queued connect request. This event can happen if the client undoes a previous connect request. If a disconnect request arrives, the server must respond by calling t_rcvdis(3NSL). The argument to this routine is a pointer to a t_discon structure, which is used to retrieve the data of the disconnect request. In this example, the server passes a NULL.

After receiving a disconnect request, accept_call closes the responding transport endpoint and returns DISCONNECT, which informs the server that the connection was disconnected by the client. The server then listens for further connect requests.

Figure 3-4 illustrates how the server establishes connections:

Figure 3-4 Listening and Responding Transport Endpoints


The transport connection is established on the new responding endpoint, and the listening endpoint is freed to retrieve further connect requests.

Data Transfer

After the connection is established, both the client and the server can transfer data through the connection using t_snd(3NSL) and t_rcv(3NSL). XTI/TLI does not differentiate the client from the server from this point on. Either user can send data, receive data, or release the connection.

The two classes of data on a transport connection are:

  1. Normal data

  2. Expedited data

Expedited data is for urgent data. The exact semantics of expedited data vary between transport providers. Not all transport protocols support expedited data (see t_open(3NSL)).

Most connection-oriented mode protocols transfer data in byte streams. "Byte stream" implies no message boundaries in data sent over a connection. Some transport protocols preserve message boundaries over a transport connection. This service is supported by XTI/TLI, but protocol-independent software must not rely on it.

Message boundaries are indicated by the T_MORE flag of t_snd(3NSL) and t_rcv(3NSL). The messages, called transport service data units (TSDU), can be transferred between two transport users as distinct units. The maximum message size is defined by the underlying transport protocol. Get the message size through t_open(3NSL) or t_getinfo(3NSL).

You can send a message in multiple units. Set the T_MORE flag on every t_snd(3NSL) call, except the last to send a message in multiple units. The flag specifies that the data in the current and the next t_snd(3NSL) calls are a logical unit. Send the last message unit with T_MORE turned off to specify the end of the logical unit.

Similarly, a logical data unit can be received in multiple units. If t_rcv(3NSL) returns with the T_MORE flag set, the user must call t_rcv(3NSL) again to receive the rest of the message. The last unit in the message is identified by a call to t_rcv(3NSL) that does not set T_MORE.

The T_MORE flag implies nothing about how the data is packaged below XTI/TLI or how the data is delivered to the remote user. Each transport protocol, and each implementation of a protocol, can package and deliver the data differently.

For example, if a user sends a complete message in a single call to t_snd(3NSL), there is no guarantee that the transport provider delivers the data in a single unit to the receiving user. Similarly, a message transmitted in two units can be delivered in a single unit to the remote transport user.

If supported by the transport, the message boundaries are preserved only by setting the value of T_MORE for t_snd(3NSL) and testing it after t_rcv(3NSL). This guarantees that the receiver sees a message with the same contents and message boundaries as was sent.
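The following fragment is a minimal sketch of that pattern, sending one logical message in two pieces and reassembling it on the receiving side (the buffer names and sizes are illustrative):

   /* Sender: the two t_snd(3NSL) calls form one logical message */
   if (t_snd(fd, part1, len1, T_MORE) == -1 ||
       t_snd(fd, part2, len2, 0) == -1) {      /* 0 ends the logical unit */
      t_error("t_snd failed");
      exit(1);
   }

   /* Receiver: keep reading until T_MORE is no longer set */
   do {
      if ((nbytes = t_rcv(fd, buf, sizeof(buf), &flags)) == -1) {
         t_error("t_rcv failed");
         exit(1);
      }
      /* append nbytes bytes from buf to the reassembled message here */
   } while (flags & T_MORE);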

Client

The example server transfers a log file to the client over the transport connection. The client receives the data and writes it to its standard output file. A byte stream interface is used by the client and server, with no message boundaries. The client receives data by the following:

while ((nbytes = t_rcv(fd, buf, 1024, &flags)) != -1) {
   if (fwrite(buf, 1, nbytes, stdout) < nbytes) {
      fprintf(stderr, "fwrite failed\n");
      exit(5);
   }
}

The client repeatedly calls t_rcv(3NSL) to receive incoming data. t_rcv(3NSL) blocks until data arrives. t_rcv(3NSL) writes up to 1024 bytes of the available data into buf and returns the number of bytes received. The client writes the data to standard output and continues. The data transfer loop ends when t_rcv(3NSL) fails. t_rcv(3NSL) fails when an orderly release or disconnect request arrives. If fwrite(3C) fails for any reason, the client exits, which closes the transport endpoint. If the transport endpoint is closed (either by exit(2) or t_close(3NSL)) during data transfer, the connection is aborted and the remote user receives a disconnect request.

Server

The server manages its data transfer by spawning a child process to send the data to the client. The parent process continues the loop to listen for more connect requests. run_server is called by the server to spawn this child process, as shown in Example 3-7.


Example 3-7 Spawning Child Process to Loopback and Listen

connrelease()
{
   /* conn_fd is global because needed here */
   if (t_look(conn_fd) == T_DISCONNECT) {
      fprintf(stderr, "connection aborted\n");
      exit(12);
   }
   /* else orderly release request - normal exit */
   exit(0);
}
run_server(listen_fd)
int listen_fd;
{
   int nbytes;
   FILE *logfp;                    /* file pointer to log file */
   char buf[1024];

   switch(fork()) {
   case -1:
      perror("fork failed");
      exit(20);
   default:									/* parent */
      /* close conn_fd and then go up and listen again*/
      if (t_close(conn_fd) == -1) {
         t_error("t_close failed for conn_fd");
         exit(21);
      }
      return;
   case 0:                        /* child */
      /* close listen_fd and do service */
      if (t_close(listen_fd) == -1) {
         t_error("t_close failed for listen_fd");
         exit(22);
      }
      if ((logfp = fopen("logfile", "r")) == (FILE *) NULL) {
         perror("cannot open logfile");
         exit(23);
      }
      signal(SIGPOLL, connrelease);
      if (ioctl(conn_fd, I_SETSIG, S_INPUT) == -1) {
         perror("ioctl I_SETSIG failed");
         exit(24);
      }
      if (t_look(conn_fd) != 0){      /*disconnect there?*/
         fprintf(stderr, "t_look: unexpected event\n");
         exit(25);
      }
      while ((nbytes = fread(buf, 1, 1024, logfp)) > 0)
         if (t_snd(conn_fd, buf, nbytes, 0) == -1) {
            t_error("t_snd failed");
            exit(26);
         }

After the fork, the parent process returns to the main listening loop. The child process manages the newly established transport connection. If the fork fails, exit(2) closes both transport endpoints, sending a disconnect request to the client, and the client's t_connect(3NSL) call fails.

The server process reads 1024 bytes of the log file at a time and sends the data to the client using t_snd(3NSL). buf points to the start of the data buffer, and nbytes specifies the number of bytes to transmit. The fourth argument can be zero or one of two optional flags: T_EXPEDITED, which specifies that the data is expedited, or T_MORE, which sets message boundaries (see "Data Transfer").

Neither flag is set by the server in this example.

If the user floods the transport provider with data, t_snd(3NSL) blocks until enough data is removed from the transport.

t_snd(3NSL) does not look for a disconnect request (showing that the connection was broken). If the connection is aborted, the server should be notified, since data can be lost. One solution is to call t_look(3NSL) to check for incoming events before each t_snd(3NSL) call or after a t_snd(3NSL) failure. The example has a cleaner solution. The I_SETSIG ioctl(2) lets a user request a signal when a specified event occurs. See the streamio(7I) manpage. S_INPUT causes a signal to be sent to the user process when any input arrives at the endpoint conn_fd. If a disconnect request arrives, the signal-catching routine (connrelease) prints an error message and exits.

If the server alternates t_snd(3NSL) and t_rcv(3NSL) calls, it can use t_rcv(3NSL) to recognize an incoming disconnect request.

Connection Release

At any time during data transfer, either user can release the transport connection and end the conversation. There are two forms of connection release: the abortive release and the orderly release, described in "Connection Mode Routines".

See "Transport Selection" for information on how to select a transport that supports orderly release.

Server

This example assumes that the transport provider supports orderly release. When all the data has been sent by the server, the connection is released as follows:

if (t_sndrel(conn_fd) == -1) {
   t_error("t_sndrel failed");
   exit(27);
}
pause(); /* until orderly release request arrives */

Orderly release requires two steps by each user. The server can call t_sndrel(3NSL). This routine sends a release request. When the client receives the request, it can continue sending data back to the server. When all data have been sent, the client calls t_sndrel(3NSL) to send a release request back. The connection is released only after both users have received a release request.

In this example, data is transferred only from the server to the client. So there is no provision to receive data from the client after the server initiates release. The server calls pause(2) after initiating the release.

The client responds with its orderly release request, which generates a signal caught by connrelease(). (In Example 3-7, the server issued an I_SETSIG ioctl(2) to generate a signal on any incoming event.) The only XTI/TLI event possible in this state is a disconnect request or an orderly release request, so connrelease exits normally when the orderly release request arrives. exit(2) from connrelease closes the transport endpoint and frees the bound address. To close a transport endpoint without exiting, call t_close(3NSL).

Client

The client releases the connection similar to the way the server releases it. The client processes incoming data until t_rcv(3NSL) fails. When the server releases the connection (using either t_snddis(3NSL) or t_sndrel(3NSL)), t_rcv(3NSL) fails and sets t_errno to TLOOK. The client then processes the connection release as follows:

if ((t_errno == TLOOK) && (t_look(fd) == T_ORDREL)) {
   if (t_rcvrel(fd) == -1) {
      t_error("t_rcvrel failed");
      exit(6);
 	}
 	if (t_sndrel(fd) == -1) {
      t_error("t_sndrel failed");
      exit(7);
 	}
 	exit(0);
 }

Each event on the client's transport endpoint is checked for an orderly release request. When one is received, the client calls t_rcvrel(3NSL) to process the request and t_sndrel(3NSL) to send the response release request. The client then exits, closing its transport endpoint.

If a transport provider does not support the orderly release, use abortive release with t_snddis(3NSL) and t_rcvdis(3NSL). Each user must take steps to prevent data loss. For example, use a special byte pattern in the data stream to indicate the end of a conversation.
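The following sketch shows what the abortive exchange might look like: one side aborts the connection with t_snddis(3NSL), and the peer, whose next t_rcv(3NSL) fails with TLOOK, retrieves the disconnect with t_rcvdis(3NSL). The details are a sketch, not part of the examples above:

   /* Initiator: abort the connection, discarding undelivered data */
   if (t_snddis(fd, (struct t_call *) NULL) == -1) {
      t_error("t_snddis failed");
      exit(1);
   }

   /* Peer: after t_rcv(3NSL) fails with t_errno == TLOOK */
   if (t_look(fd) == T_DISCONNECT) {
      if (t_rcvdis(fd, (struct t_discon *) NULL) == -1) {
         t_error("t_rcvdis failed");
         exit(1);
      }
      /* the connection is now released; the endpoint can be closed or reused */
   }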

Read/Write Interface

A user might want to establish a transport connection and then use exec(2) to run an existing program (such as /usr/bin/cat) to process the data as it arrives over the connection. Existing programs use read(2) and write(2). XTI/TLI does not directly support a read/write interface to a transport provider, but one is available. The interface lets you issue read(2) and write(2) calls over a transport connection in the data transfer phase. This section describes the read/write interface to the connection mode service of XTI/TLI. This interface is not available with the connectionless mode service.

The read/write interface is presented using the client example (with modifications) of "Connection Mode Service". The clients are identical until the data transfer phase. Then the client uses the read/write interface and cat(1) to process incoming data. cat(1) is run without change over the transport connection. Only the differences between this client and the client in Example 3-3 are shown in Example 3-8.


Example 3-8 Read/Write Interface

#include <stropts.h>
   .
   .   /* Same local management and connection establishment steps. */
   .
   if (ioctl(fd, I_PUSH, "tirdwr") == -1) {
      perror("I_PUSH of tirdwr failed");
      exit(5);
 	}
   close(0);
   dup(fd);
   execl("/usr/bin/cat", "/usr/bin/cat", (char *) 0);
   perror("exec of /usr/bin/cat failed");
   exit(6);
}

The client invokes the read/write interface by pushing tirdwr onto the stream associated with the transport endpoint. See I_PUSH in streamio(7I). tirdwr converts XTI/TLI above the transport provider into a pure read/write interface. With the module in place, the client calls close(2) and dup(2) to establish the transport endpoint as its standard input file, and uses /usr/bin/cat to process the input.

Pushing tirdwr onto the transport provider changes XTI/TLI: the semantics of read(2) and write(2) must be used, and message boundaries are not preserved. tirdwr can be popped from the transport provider to restore XTI/TLI semantics (see I_POP in streamio(7I)).


Caution -

The tirdwr module can only be pushed onto a stream when the transport endpoint is in the data transfer phase. After the module is pushed, the user cannot call any XTI/TLI routines. If an XTI/TLI routine is invoked, tirdwr generates a fatal protocol error, EPROTO, on the stream, rendering it unusable. If you then pop the tirdwr module off the stream, the transport connection is aborted. See I_POP in streamio(7I).


Write

Send data over the transport connection with write(2). tirdwr passes data through to the transport provider. If you send a zero-length data packet, which the mechanism allows, tirdwr discards the message. If the transport connection is aborted--for example, because the remote user aborts the connection using t_snddis(3NSL)--a hang-up condition is generated on the stream, further write(2) calls fail, and errno is set to ENXIO. You can still retrieve any available data after a hang-up.

Read

Receive data that arrives at the transport connection with read(2). tirdwr passes the data through from the transport provider. Any other event or request passed to the user from the provider is processed by tirdwr as follows:

Close

With tirdwr on a stream, you can send and receive data over a transport connection for the duration of the connection. Either user can terminate the connection by closing the file descriptor associated with the transport endpoint or by popping the tirdwr module off the stream. In either case, tirdwr does the following:

A process cannot initiate an orderly release after tirdwr is pushed onto a stream. tirdwr handles an orderly release if it is initiated by the user on the other side of a transport connection. If the client in this section is communicating with the server program in "Connection Mode Service", the server terminates the transfer of data with an orderly release request. The server then waits for the corresponding request from the client. At that point, the client exits and the transport endpoint is closed. When the file descriptor is closed, tirdwr initiates the orderly release request from the client's side of the connection. This generates the request that the server is blocked on.

Some protocols, like TCP, require this orderly release to ensure that the data is delivered intact.

Advanced Topics

This section presents additional XTI/TLI concepts: the asynchronous execution mode and an advanced programming example.

Asynchronous Execution Mode

Many XTI/TLI library routines block to wait for an incoming event. However, some time-critical applications should not block for any reason. An application can do local processing while waiting for some asynchronous XTI/TLI event.

Asynchronous processing of XTI/TLI events is available to applications through the combination of asynchronous features and the non-blocking mode of XTI/TLI library routines. Use of the poll(2) system call and the I_SETSIG ioctl(2) command to process events asynchronously is described in ONC+ Developer's Guide.

Each XTI/TLI routine that blocks for an event can be run in a special non-blocking mode. For example, t_listen(3NSL) normally blocks for a connect request. A server can periodically poll a transport endpoint for queued connect requests by calling t_listen(3NSL) in the non-blocking (or asynchronous) mode. The asynchronous mode is enabled by setting O_NDELAY or O_NONBLOCK in the file descriptor. These modes can be set as a flag through t_open(3NSL), or by calling fcntl(2) before calling the XTI/TLI routine. fcntl(2) enables or disables this mode at any time. All program examples in this chapter use the default synchronous processing mode.

O_NDELAY or O_NONBLOCK affect each XTI/TLI routine differently. You will need to determine the exact semantics of O_NDELAY or O_NONBLOCK for a particular routine.
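As an illustration, a server could put its listening endpoint into non-blocking mode with fcntl(2) and then poll it for connect requests; in this mode, t_listen(3NSL) returns -1 with t_errno set to TNODATA when no request is queued. This sketch assumes the listen_fd and call variables of Example 3-4:

   if (fcntl(listen_fd, F_SETFL,
         fcntl(listen_fd, F_GETFL, 0) | O_NONBLOCK) == -1) {
      perror("fcntl failed");
      exit(1);
   }
   if (t_listen(listen_fd, call) == -1) {
      if (t_errno == TNODATA) {
         /* no connect request queued; do other work and try again later */
      } else {
         t_error("t_listen failed");
         exit(1);
      }
   }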

Advanced Programming Example

The following example demonstrates two important concepts. The first is a server's ability to manage multiple outstanding connect requests. The second is event-driven use of XTI/TLI and the system call interface.

The server example in Example 3-4 supports only one outstanding connect request, but XTI/TLI lets a server manage multiple outstanding connect requests. One reason to receive several simultaneous connect requests is to prioritize the clients. A server can receive several connect requests, and accept them in an order based on the priority of each client.

The second reason for handling several outstanding connect requests is the limits of single-threaded processing. Depending on the transport provider, while a server processes one connect request, other clients find it busy. If multiple connect requests are processed simultaneously, the server will be found busy only if more than the maximum number of clients try to call the server simultaneously.

The server example is event-driven: the process polls a transport endpoint for incoming XTI/TLI events, and takes the appropriate actions for the event received. The example demonstrates the ability to poll multiple transport endpoints for incoming events.

The definitions and endpoint establishment functions of Example 3-9 are similar to those of the server example in Example 3-4.


Example 3-9 Endpoint Establishment (Convertible to Multiple Connections)

#include <tiuser.h>
#include <fcntl.h>
#include <stdio.h>
#include <poll.h>
#include <stropts.h>
#include <signal.h>

#define NUM_FDS 1
#define MAX_CONN_IND 4
#define SRV_ADDR 1                 /* server's well known address */

int conn_fd;                       /* server connection here */
extern int t_errno;
/* holds connect requests */
struct t_call *calls[NUM_FDS][MAX_CONN_IND];

main()
{
   struct pollfd pollfds[NUM_FDS];
   struct t_bind *bind;
   int i;

   /*
    * Only opening and binding one transport endpoint, but more can
    * be supported
    */
   if ((pollfds[0].fd = t_open("/dev/tivc", O_RDWR,
         (struct t_info *) NULL)) == -1) {
      t_error("t_open failed");
      exit(1);
   }
   if ((bind = (struct t_bind *) t_alloc(pollfds[0].fd, T_BIND,
         T_ALL)) == (struct t_bind *) NULL) {
      t_error("t_alloc of t_bind structure failed");
      exit(2);
   }
   bind->qlen = MAX_CONN_IND;
   bind->addr.len = sizeof(int);
   *(int *) bind->addr.buf = SRV_ADDR;
   if (t_bind(pollfds[0].fd, bind, bind) == -1) {
      t_error("t_bind failed");
      exit(3);
   }
   /* Was the correct address bound? */
   if (bind->addr.len != sizeof(int) ||
      *(int *)bind->addr.buf != SRV_ADDR) {
      fprintf(stderr, "t_bind bound wrong address\n");
      exit(4);
   }
}

The file descriptor returned by t_open(3NSL) is stored in a pollfd structure that controls polling the transport endpoints for incoming data. See poll(2). Only one transport endpoint is established in this example. However, the remainder of the example is written to manage multiple transport endpoints. Several endpoints could be supported with minor changes to Example 3-9.

This server sets qlen to a value greater than 1 for t_bind(3NSL). This specifies that the server queues multiple outstanding connect requests. The server accepts the current connect request before accepting additional connect requests. This example can queue up to MAX_CONN_IND connect requests. The transport provider can negotiate the value of qlen downward if it cannot support MAX_CONN_IND outstanding connect requests.
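
As a sketch (not part of Example 3-9), a check such as the following could be added after the t_bind(3NSL) call to see whether the provider negotiated qlen downward:

   if (bind->qlen < MAX_CONN_IND)
      fprintf(stderr, "provider reduced qlen to %u\n", bind->qlen);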

After the server has bound its address and is ready to process connect requests, it behaves as shown in Example 3-10.


Example 3-10 Processing Connection Requests

pollfds[0].events = POLLIN;

while (TRUE) {
   if (poll(pollfds, NUM_FDS, -1) == -1) {
      perror("poll failed");
      exit(5);
   }
   for (i = 0; i < NUM_FDS; i++) {
      switch (pollfds[i].revents) {
      default:
         perror("poll returned error event");
         exit(6);
      case 0:
         continue;
      case POLLIN:
         do_event(i, pollfds[i].fd);
         service_conn_ind(i, pollfds[i].fd);
      }
   }
}

The events field of the pollfd structure is set to POLLIN, which notifies the server of any incoming XTI/TLI events. The server then enters an infinite loop in which it polls the transport endpoint(s) for events, and processes events as they occur.

The poll(2) call blocks indefinitely for an incoming event. On return, each entry (one per transport endpoint) is checked for a new event. If revents is 0, no event has occurred on the endpoint and the server continues to the next endpoint. If revents is POLLIN, there is an event on the endpoint, and do_event is called to process it. Any other value in revents indicates an error on the endpoint, and the server exits. A server managing multiple endpoints would do better to close the failing descriptor and continue with the remaining endpoints.

For each iteration of the loop, service_conn_ind is called to process any outstanding connect requests. If another connect request is pending, service_conn_ind saves the new connect request and responds to it later.

The do_event routine shown in Example 3-11 is called to process an incoming event.


Example 3-11 Event Processing Routine

do_event( slot, fd)
int slot;
int fd;
{
   struct t_discon *discon;
   int i;

   switch (t_look(fd)) {
   default:
      fprintf(stderr, "t_look: unexpected event\n");
      exit(7);
   case T_ERROR:
      fprintf(stderr, "t_look returned T_ERROR event\n");
      exit(8);
   case -1:
      t_error("t_look failed");
      exit(9);
   case 0:
      /* since POLLIN returned, this should not happen */
      fprintf(stderr,"t_look returned no event\n");
      exit(10);
   case T_LISTEN:
      /* find free element in calls array */
      for (i = 0; i < MAX_CONN_IND; i++) {
         if (calls[slot][i] == (struct t_call *) NULL)
            break;
      }
      if ((calls[slot][i] = (struct t_call *) t_alloc( fd, T_CALL,
               T_ALL)) == (struct t_call *) NULL) {
         t_error("t_alloc of t_call structure failed");
         exit(11);
      }
      if (t_listen(fd, calls[slot][i] ) == -1) {
         t_error("t_listen failed");
         exit(12);
      }
      break;
   case T_DISCONNECT:
      discon = (struct t_discon *) t_alloc(fd, T_DIS, T_ALL);
      if (discon == (struct t_discon *) NULL) {
         t_error("t_alloc of t_discon structure failed");
         exit(13);
      }
      if(t_rcvdis( fd, discon) == -1) {
         t_error("t_rcvdis failed");
         exit(14);
      }
      /* find call ind in array and delete it */
      for (i = 0; i < MAX_CONN_IND; i++) {
         if (calls[slot][i] != (struct t_call *) NULL &&
               discon->sequence == calls[slot][i]->sequence) {
            t_free(calls[slot][i], T_CALL);
            calls[slot][i] = (struct t_call *) NULL;
         }
      }
      t_free(discon, T_DIS);
      break;
   }
}

The arguments are a number (slot) and a file descriptor (fd). slot is the index into the global array calls, which has an entry for each transport endpoint. Each entry is an array of t_call structures that hold incoming connect requests for the endpoint.

do_event calls t_look(3NSL) to identify the XTI/TLI event on the endpoint specified by fd. If the event is a connect request (T_LISTEN event) or disconnect request (T_DISCONNECT event), the event is processed. Otherwise, the server prints an error message and exits.

For connect requests, do_event scans the array of outstanding connect requests for the first free entry. A t_call structure is allocated for the entry, and the connect request is received by t_listen(3NSL). The array is large enough to hold the maximum number of outstanding connect requests. The processing of the connect request is deferred.

A disconnect request must correspond to an earlier connect request. do_event allocates a t_discon structure to receive the request. This structure has the following fields:

struct t_discon {
	struct netbuf udata;
	int reason;
	int sequence;
};

udata contains any user data sent with the disconnect request. reason contains a protocol-specific disconnect reason code. sequence identifies the connect request that matches the disconnect request.

t_rcvdis(3NSL) is called to receive the disconnect request. The array of connect requests is scanned for one that contains the sequence number that matches the sequence number in the disconnect request. When the connect request is found, its structure is freed and the entry is set to NULL.

When an event is found on a transport endpoint, service_conn_ind is called to process all queued connect requests on the endpoint, as Example 3-12 shows.


Example 3-12 Process All Connect Requests

service_conn_ind(slot, fd)
int slot;
int fd;
{
   int i;

   for (i = 0; i < MAX_CONN_IND; i++) {
      if (calls[slot][i] == (struct t_call *) NULL)
         continue;
      if((conn_fd = t_open( "/dev/tivc", O_RDWR,
            (struct t_info *) NULL)) == -1) {
         t_error("open failed");
         exit(15);
      }
      if (t_bind(conn_fd, (struct t_bind *) NULL,
            (struct t_bind *) NULL) == -1) {
         t_error("t_bind failed");
         exit(16);
      }
      if (t_accept(fd, conn_fd, calls[slot][i]) == -1) {
         if (t_errno == TLOOK) {
            t_close(conn_fd);
            return;
         }
         t_error("t_accept failed");
         exit(17);
      }
      t_free(calls[slot][i], T_CALL);
      calls[slot][i] = (struct t_call *) NULL;
      run_server(fd);
   }
}

For each transport endpoint, the array of outstanding connect requests is scanned. For each request, the server opens a responding transport endpoint, binds an address to the endpoint, and accepts the connection on the endpoint. If another event (connect request or disconnect request) arrives before the current request is accepted, t_accept(3NSL) fails and sets t_errno to TLOOK. (You cannot accept an outstanding connect request if any pending connect request events or disconnect request events exist on the transport endpoint.)

If this error occurs, the responding transport endpoint is closed and service_conn_ind returns immediately (saving the current connect request for later processing). This causes the server's main processing loop to be entered, and the new event is discovered by the next call to poll(2). In this way, multiple connect requests can be queued by the user.

Eventually, all events are processed, and service_conn_ind is able to accept each connect request in turn. After the connection has been established, the run_server routine used by the server in Example 3-5 is called to manage the data transfer.

Asynchronous Networking

This section discusses the techniques of asynchronous network communication using XTI/TLI for real-time applications. SunOS provides support for asynchronous network processing of XTI/TLI events using a combination of STREAMS asynchronous features and the non-blocking mode of the XTI/TLI library routines.

Networking Programming Models

Like file and device I/O, network transfers can be done synchronously or asynchronously with respect to the requesting process.

Synchronous Networking

Synchronous networking proceeds similarly to synchronous file and device I/O. Like the write(2) function, the request to send returns after buffering the message, but might suspend the calling process if buffer space is not immediately available. Like the read(2) function, a request to receive suspends execution of the calling process until data arrives to satisfy the request. Because SunOS provides no guaranteed bounds for transport services, synchronous networking is inappropriate for processes that must have real-time behavior with respect to other devices.
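
As an illustrative sketch (not one of this chapter's examples; fd is assumed to be a connected endpoint in the data transfer state), a synchronous receive simply blocks in t_rcv(3NSL) until data arrive:

	char buf[1024];
	int flags, nbytes;

	/* suspends the calling process until data are available */
	while ((nbytes = t_rcv(fd, buf, sizeof (buf), &flags)) > 0) {
		... /* process nbytes bytes of data in buf */
	}
	if (nbytes == -1)
		t_error("t_rcv failed");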

Asynchronous Networking

Asynchronous networking is provided by non-blocking service requests. Additionally, applications can request asynchronous notification when a connection can be established, when data can be sent, or when data can be received.

Asynchronous Connectionless-Mode Service

Asynchronous connectionless mode networking is conducted by configuring the endpoint for non-blocking service, and either polling for or receiving asynchronous notification when data might be transferred. If asynchronous notification is used, the actual receipt of data typically takes place within a signal handler.

Making the Endpoint Asynchronous

After the endpoint has been established using t_open(3NSL), and its identity established using t_bind(3NSL), the endpoint can be configured for asynchronous service. This is done by using the fcntl(2) function to set the O_NONBLOCK flag on the endpoint. Thereafter, calls to t_sndudata(3NSL) for which no buffer space is immediately available return -1 with t_errno set to TFLOW. Likewise, calls to t_rcvudata(3NSL) for which no data are available return -1 with t_errno set to TNODATA.
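
For example (a sketch, not one of this chapter's numbered examples), a sender can treat TFLOW as a cue to retry after the next poll(2) or SIGPOLL notification:

	struct t_unitdata ud;

	... /* fill in ud with the destination address and the data */
	if (t_sndudata(fd, &ud) == -1) {
		if (t_errno == TFLOW)
			... /* no buffer space now; retry after poll(2) or SIGPOLL */
		else
			t_error("t_sndudata failed");
	}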

Asynchronous Network Transfers

Although an application can use the poll(2) function to check periodically for the arrival of data or to wait for the receipt of data on an endpoint, it might be necessary to receive asynchronous notification when data has arrived. This can be done by using the ioctl(2) function with the I_SETSIG command to request that a SIGPOLL signal be sent to the process upon receipt of data at the endpoint. Applications should check for the possibility of multiple messages causing a single signal.

In the following example, protocol is the name of the application-chosen transport protocol.


#include <sys/types.h>
#include <tiuser.h>
#include <fcntl.h>
#include <signal.h>
#include <stropts.h>

int             fd;
struct t_bind   *bind;
void            sigpoll(int);

	fd = t_open(protocol, O_RDWR, (struct t_info *) NULL);

	bind = (struct t_bind *) t_alloc(fd, T_BIND, T_ADDR);
	...     /* set up binding address */
	t_bind(fd, bind, bind);

	/* make endpoint non-blocking */
	fcntl(fd, F_SETFL, fcntl(fd, F_GETFL) | O_NONBLOCK);

	/* establish signal handler for SIGPOLL */
	signal(SIGPOLL, sigpoll);

	/* request SIGPOLL signal when receive data is available */
	ioctl(fd, I_SETSIG, S_INPUT | S_HIPRI);

	...

void sigpoll(int sig)
{
	int flags;
	struct t_unitdata ud;

	for (;;) {
		... /* initialize ud */
		if (t_rcvudata(fd, &ud, &flags) < 0) {
			if (t_errno == TNODATA)
				break;  /* no more messages */
			... /* process other error conditions */
		}
		... /* process message in ud */
	}
}

Asynchronous Connection-Mode Service

For connection-mode service, an application can arrange for not only the data transfer but also the establishment of the connection itself to be done asynchronously. The sequence of operations depends on whether the process is attempting to connect to another process or is awaiting connection attempts.

Asynchronously Establishing a Connection

A process can attempt a connection and asynchronously complete the connection. The process first creates the connecting endpoint and, using fcntl(2), configures the endpoint for non-blocking operation. As with connectionless data transfers, the endpoint can also be configured for asynchronous notification upon completion of the connection and subsequent data transfers. The connecting process then calls t_connect(3NSL) to initiate the connection, and the t_rcvconnect(3NSL) function is used later to confirm that the connection has been established.

Asynchronous Use of a Connection

To asynchronously await connections, a process first establishes a non-blocking endpoint bound to a service address. When either the result of poll(2) or an asynchronous notification indicates that a connection request has arrived, the process can get the connection request by using the t_listen(3NSL) function. To accept the connection, the process uses the t_accept(3NSL) function. The responding endpoint must be separately configured for asynchronous data transfers.

The following example illustrates how to request a connection asynchronously.


#include <tiuser.h>
int             fd;
struct t_call   *call;

	fd = .../* establish a non-blocking endpoint */

	call = (struct t_call *) t_alloc(fd, T_CALL, T_ADDR);
	.../* initialize call structure */
	t_connect(fd, call, call);

	/* connection request is now proceeding asynchronously */

	.../* receive indication that connection has been accepted */
	t_rcvconnect(fd, call);

The following example illustrates listening for connections asynchronously.


#include <tiuser.h>
int             fd, res_fd;
struct t_call   *call;

	fd = ... /* establish non-blocking endpoint */

	.../* receive indication that connection request has arrived */
	call = (struct t_call *) t_alloc(fd, T_CALL, T_ALL);
	t_listen(fd, call);

	.../* determine whether or not to accept connection */
	res_fd = ... /* establish non-blocking endpoint for response */
	t_accept(fd, res_fd, call);

Asynchronous Open

Occasionally, an application might be required to dynamically open a regular file in a file system mounted from a remote host, or on a device whose initialization might be prolonged. However, while such an open is in progress, the application is unable to achieve real-time response to other events. Fortunately, SunOS provides a means of solving this problem by having a second process perform the actual open and then pass the file descriptor to the real-time process.

Transferring a File Descriptor

The STREAMS interface under SunOS provides a mechanism for passing an open file descriptor from one process to another. The process with the open file descriptor uses the ioctl(2) function with a command argument of I_SENDFD. The second process obtains the file descriptor by calling ioctl(2) with a command argument of I_RECVFD.

In this example, the parent process prints out information about the test file, and creates a pipe. Next, the parent creates a child process, which opens the test file, and passes the open file descriptor back to the parent through the pipe. The parent process then displays the status information on the new file descriptor.


Example 3-13 File Descriptor Transfer

#include <sys/types.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <stropts.h>
#include <stdio.h>

#define TESTFILE "/dev/null"
main(int argc, char *argv[])
{
	int fd;
	int pipefd[2];
	struct stat statbuf;

	stat(TESTFILE, &statbuf);
	statout(TESTFILE, &statbuf);
	pipe(pipefd);
	if (fork() == 0) {
		close(pipefd[0]);
		sendfd(pipefd[1]);
	} else {
		close(pipefd[1]);
		recvfd(pipefd[0]);
	}
}

sendfd(int p)
{
	int tfd;

	tfd = open(TESTFILE, O_RDWR);
	ioctl(p, I_SENDFD, tfd);
}

recvfd(int p)
{
	struct strrecvfd rfdbuf;
	struct stat statbuf;
	char fdbuf[32];

	ioctl(p, I_RECVFD, &rfdbuf);
	fstat(rfdbuf.fd, &statbuf);
	sprintf(fdbuf, "recvfd=%d", rfdbuf.fd);
	statout(fdbuf, &statbuf);	
}

statout(char *f, struct stat *s)
{
	printf("stat: from=%s mode=0%o, ino=%ld, dev=%lx, rdev=%lx\n",
		f, s->st_mode, s->st_ino, s->st_dev, s->st_rdev);
	fflush(stdout);
}

State Transitions

These tables describe all state transitions associated with XTI/TLI. First, however, the states and events are described.

XTI/TLI States

Table 3-8 defines the states used in XTI/TLI state transitions, along with the service types.

Table 3-8 XTI/TLI State Transitions and Service Types

State        Description                                                    Service Type
T_UNINIT     Uninitialized - initial and final state of interface           T_COTS, T_COTS_ORD, T_CLTS
T_UNBND      Initialized but not bound                                      T_COTS, T_COTS_ORD, T_CLTS
T_IDLE       No connection established                                      T_COTS, T_COTS_ORD, T_CLTS
T_OUTCON     Outgoing connection pending for client                         T_COTS, T_COTS_ORD
T_INCON      Incoming connection pending for server                         T_COTS, T_COTS_ORD
T_DATAXFER   Data transfer                                                  T_COTS, T_COTS_ORD
T_OUTREL     Outgoing orderly release (waiting for an orderly release       T_COTS_ORD
             indication)
T_INREL      Incoming orderly release (waiting to send an orderly           T_COTS_ORD
             release request)

Outgoing Events

The outgoing events described in Table 3-9 correspond to the status returned from the specified transport routines, where these routines send a request or response to the transport provider. In the table, some events, such as 'accept', are distinguished by the context in which they occur. The context is based on the values of ocnt (the count of outstanding connect requests on the endpoint), fd (the file descriptor of the current transport endpoint), and resfd (the file descriptor of the endpoint on which a connection is accepted).

Table 3-9 Outgoing Events

Event      Description                                                             Service Type
opened     Successful return of t_open(3NSL)                                       T_COTS, T_COTS_ORD, T_CLTS
bind       Successful return of t_bind(3NSL)                                       T_COTS, T_COTS_ORD, T_CLTS
optmgmt    Successful return of t_optmgmt(3NSL)                                    T_COTS, T_COTS_ORD, T_CLTS
unbind     Successful return of t_unbind(3NSL)                                     T_COTS, T_COTS_ORD, T_CLTS
closed     Successful return of t_close(3NSL)                                      T_COTS, T_COTS_ORD, T_CLTS
connect1   Successful return of t_connect(3NSL) in synchronous mode                T_COTS, T_COTS_ORD
connect2   TNODATA error on t_connect(3NSL) in asynchronous mode, or TLOOK         T_COTS, T_COTS_ORD
           error due to a disconnect request arriving on the transport endpoint
accept1    Successful return of t_accept(3NSL) with ocnt == 1, fd == resfd         T_COTS, T_COTS_ORD
accept2    Successful return of t_accept(3NSL) with ocnt == 1, fd != resfd         T_COTS, T_COTS_ORD
accept3    Successful return of t_accept(3NSL) with ocnt > 1                       T_COTS, T_COTS_ORD
snd        Successful return of t_snd(3NSL)                                        T_COTS, T_COTS_ORD
snddis1    Successful return of t_snddis(3NSL) with ocnt <= 1                      T_COTS, T_COTS_ORD
snddis2    Successful return of t_snddis(3NSL) with ocnt > 1                       T_COTS, T_COTS_ORD
sndrel     Successful return of t_sndrel(3NSL)                                     T_COTS_ORD
sndudata   Successful return of t_sndudata(3NSL)                                   T_CLTS

Incoming Events

The incoming events correspond to the successful return of the specified routines. These routines return data or event information from the transport provider. The only incoming event not associated directly with the return of a routine is pass_conn, which occurs when a connection is transferred to another endpoint. The event occurs on the endpoint that is being passed the connection, although no XTI/TLI routine is called on the endpoint.

In Table 3-10, the rcvdis events are distinguished by the value of ocnt, the count of outstanding connect requests on the endpoint.

Table 3-10 Incoming Events

Event        Description                                                  Service Type
listen       Successful return of t_listen(3NSL)                          T_COTS, T_COTS_ORD
rcvconnect   Successful return of t_rcvconnect(3NSL)                      T_COTS, T_COTS_ORD
rcv          Successful return of t_rcv(3NSL)                             T_COTS, T_COTS_ORD
rcvdis1      Successful return of t_rcvdis(3NSL) with ocnt <= 0           T_COTS, T_COTS_ORD
rcvdis2      Successful return of t_rcvdis(3NSL) with ocnt == 1           T_COTS, T_COTS_ORD
rcvdis3      Successful return of t_rcvdis(3NSL) with ocnt > 1            T_COTS, T_COTS_ORD
rcvrel       Successful return of t_rcvrel(3NSL)                          T_COTS_ORD
rcvudata     Successful return of t_rcvudata(3NSL)                        T_CLTS
rcvuderr     Successful return of t_rcvuderr(3NSL)                        T_CLTS
pass_conn    Receive a passed connection                                  T_COTS, T_COTS_ORD

Transport User Actions

Some state transitions in the tables below are annotated with actions the transport user must take. Each action is represented by a bracketed digit:

  1. Set the count of outstanding connect requests to zero.

  2. Increment the count of outstanding connect requests.

  3. Decrement the count of outstanding connect requests.

  4. Pass a connection to another transport endpoint, as indicated in t_accept(3NSL).

State Tables

The tables describe the XTI/TLI state transitions. Each box contains the next state, given the current state (column) and the current event (row). An empty box is an invalid state/event combination. Each box can also have an action list. Actions must be done in the order specified in the box.

The following should be understood when studying the state tables:

Table 3-11 shows endpoint establishment. Table 3-12 and Table 3-13 show connection establishment, data transfer, and connection release in connection mode. Table 3-14 shows data transfer in connectionless mode.

Table 3-11 Connection Establishment State

Event/State         T_UNINIT    T_UNBND       T_IDLE
opened              T_UNBND
bind                            T_IDLE [1]
optmgmt (TLI only)                            T_IDLE
unbind                                        T_UNBND
closed                          T_UNINIT

Table 3-12 Connection Mode State--Part 1

Event/State   T_IDLE        T_OUTCON      T_INCON           T_DATAXFER
connect1      T_DATAXFER
connect2      T_OUTCON
rcvconnect                  T_DATAXFER
listen        T_INCON [2]                 T_INCON [2]
accept1                                   T_DATAXFER [3]
accept2                                   T_IDLE [3] [4]
accept3                                   T_INCON [3] [4]
snd                                                         T_DATAXFER
rcv                                                         T_DATAXFER
snddis1                     T_IDLE        T_IDLE [3]        T_IDLE
snddis2                                   T_INCON [3]
rcvdis1                     T_IDLE                          T_IDLE
rcvdis2                                   T_IDLE [3]
rcvdis3                                   T_INCON [3]
sndrel                                                      T_OUTREL
rcvrel                                                      T_INREL
pass_conn     T_DATAXFER
optmgmt       T_IDLE        T_OUTCON      T_INCON           T_DATAXFER
closed        T_UNINIT      T_UNINIT      T_UNINIT          T_UNINIT

Table 3-13 Connection Mode State--Part 2

Event/State   T_OUTREL      T_INREL       T_UNBND
connect1
connect2
rcvconnect
listen
accept1
accept2
accept3
snd                         T_INREL
rcv           T_OUTREL
snddis1       T_IDLE        T_IDLE
snddis2
rcvdis1       T_IDLE        T_IDLE
rcvdis2
rcvdis3
sndrel                      T_IDLE
rcvrel        T_IDLE
pass_conn                                 T_DATAXFER
optmgmt       T_OUTREL      T_INREL       T_UNBND
closed        T_UNINIT      T_UNINIT

Table 3-14 Connectionless Mode State

Event/State   T_IDLE
sndudata      T_IDLE
rcvudata      T_IDLE
rcvuderr      T_IDLE

Guidelines to Protocol Independence

XTI/TLI's set of services, common to many transport protocols, offers protocol independence to applications. Not all transport protocols support all XTI/TLI services, however. If software must run in a variety of protocol environments, use only the common services. The following guidelines cover services that might not be common to all transport protocols.

  1. In connection mode service, a transport service data unit (TSDU) might not be supported by all transport providers. Make no assumptions about preserving logical data boundaries across a connection.

  2. Protocol and implementation specific service limits are returned by the t_open(3NSL) and t_getinfo(3NSL) routines. Use these limits to allocate buffers to store protocol-specific transport addresses and options.

  3. Do not send user data with connect requests or disconnect requests, such as t_connect(3NSL) and t_snddis(3NSL). Not all transport protocols work this way.

  4. The buffers in the t_call structure used for t_listen(3NSL) must be large enough to hold any data sent by the client during connection establishment. Use the T_ALL argument to t_alloc(3NSL) to set maximum buffer sizes to store the address, options, and user data for the current transport provider.

  5. Do not specify a protocol address on t_bind(3NSL) on a client side endpoint. Let the transport provider assign an appropriate address to the transport endpoint. A server should retrieve its address for t_bind(3NSL) in such a way that it does not require knowledge of the transport provider's name space.

  6. Do not make assumptions about formats of transport addresses. Transport addresses should not be constants in a program. Chapter 4, Transport Selection and Name-to-Address Mapping contains detailed information.

  7. The reason codes associated with t_rcvdis(3NSL) are protocol-dependent. Do not interpret this information if protocol independence is important.

  8. The t_rcvuderr(3NSL) error codes are protocol dependent. Do not interpret this information if protocol independence is a concern.

  9. Do not code the names of devices into programs. The device node identifies a particular transport provider and is not protocol independent. See Chapter 4, Transport Selection and Name-to-Address Mapping for details.

  10. Do not use the optional orderly release facility of the connection mode service--provided by t_sndrel(3NSL) and t_rcvrel(3NSL)--in programs targeted for multiple protocol environments. This facility is not supported by all connection-based transport protocols. Its use can prevent programs from successfully communicating with open systems.

XTI/TLI Versus Socket Interfaces

XTI/TLI and sockets are different methods of handling the same tasks. Mostly, they provide mechanisms and services that are functionally similar. They do not provide one-to-one compatibility of routines or low-level services. Observe the similarities and differences between the XTI/TLI and socket-based interfaces before you decide to port an application.

The following issues are related to transport independence, and can have some bearing on RPC applications:

Socket-to-XTI/TLI Equivalents

Table 3-15 shows approximate equivalents between XTI/TLI functions and socket functions. The comment field describes the differences. If there is no comment, either the functions are similar or there is no equivalent function in either interface.

Table 3-15 TLI and Socket Equivalent Functions

TLI function         Socket function        Comments
t_open(3NSL)         socket(3SOCKET)
-                    socketpair(3SOCKET)
t_bind(3NSL)         bind(3SOCKET)          t_bind(3NSL) sets the queue depth for passive
                                            sockets, but bind(3SOCKET) doesn't. For sockets,
                                            the queue length is specified in the call to
                                            listen(3SOCKET).
t_optmgmt(3NSL)      getsockopt(3SOCKET),   t_optmgmt(3NSL) manages only transport options.
                     setsockopt(3SOCKET)    getsockopt(3SOCKET) and setsockopt(3SOCKET) can
                                            manage options at the transport layer, but also
                                            at the socket layer and at the arbitrary protocol
                                            layer.
t_unbind(3NSL)       -
t_close(3NSL)        close(2)
t_getinfo(3NSL)      getsockopt(3SOCKET)    t_getinfo(3NSL) returns information about the
                                            transport. getsockopt(3SOCKET) can return
                                            information about the transport and the socket.
t_getstate(3NSL)     -
t_sync(3NSL)         -
t_alloc(3NSL)        -
t_free(3NSL)         -
t_look(3NSL)         -                      getsockopt(3SOCKET) with the SO_ERROR option
                                            returns the same kind of error information as
                                            t_look(3NSL).
t_error(3NSL)        perror(3C)
t_connect(3NSL)      connect(3SOCKET)       A connect(3SOCKET) can be done without first
                                            binding the local endpoint. The endpoint must be
                                            bound before calling t_connect(3NSL). A
                                            connect(3SOCKET) can be done on a connectionless
                                            endpoint to set the default destination address
                                            for datagrams. Data can be sent on a
                                            connect(3SOCKET).
t_rcvconnect(3NSL)   -
t_listen(3NSL)       listen(3SOCKET)        t_listen(3NSL) waits for connection indications.
                                            listen(3SOCKET) merely sets the queue depth.
t_accept(3NSL)       accept(3SOCKET)
t_snd(3NSL)          send(3SOCKET),         sendto(3SOCKET) and sendmsg(3SOCKET) operate in
                     sendto(3SOCKET),       connection mode as well as datagram mode.
                     sendmsg(3SOCKET)
t_rcv(3NSL)          recv(3SOCKET),         recvfrom(3SOCKET) and recvmsg(3SOCKET) operate in
                     recvfrom(3SOCKET),     connection mode as well as datagram mode.
                     recvmsg(3SOCKET)
t_snddis(3NSL)       -
t_rcvdis(3NSL)       -
t_sndrel(3NSL)       shutdown(3SOCKET)
t_rcvrel(3NSL)       -
t_sndudata(3NSL)     sendto(3SOCKET)
t_rcvudata(3NSL)     recvmsg(3SOCKET)
t_rcvuderr(3NSL)     -
read(2), write(2)    read(2), write(2)      In XTI/TLI you must push the tirdwr(7M) module
                                            before calling read(2) or write(2); in sockets,
                                            just call read(2) or write(2).
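
For example, the following sketch (illustrative only; fd, buf, and n are assumed to be set up elsewhere) pushes the tirdwr(7M) module onto a connected XTI/TLI endpoint so that ordinary read(2) and write(2) calls can then be used on it:

#include <stdio.h>
#include <stropts.h>
#include <unistd.h>

	/* fd is a connected XTI/TLI endpoint in the T_DATAXFER state */
	if (ioctl(fd, I_PUSH, "tirdwr") == -1)
		perror("I_PUSH of tirdwr failed");
	...
	write(fd, buf, n);   /* the endpoint now behaves like a byte stream */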

Additions to XTI Interface

The XNS 5 (Unix98) standard introduces some new XTI interfaces. These are briefly described below. The details may be found in the relevant manual pages. These interfaces are not available for TLI users.

Scatter/Gather Data Transfer Interfaces

t_sndvudata(3NSL)

Send a data unit from one or more non-contiguous buffers

t_rcvvudata(3NSL)

Receive a data unit into one or more non-contiguous buffers

t_sndv(3NSL)

Send data or expedited data from one or more non-contiguous buffers on a connection

t_rcvv(3NSL)

Receive data or expedited data sent over a connection and put the data into one or more non-contiguous buffers

XTI Utility Functions

t_sysconf(3NSL)

Get configurable XTI variables
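
As an illustrative sketch (not taken from the manual pages; fd, hdr, hdrlen, body, and bodylen are hypothetical names), an XTI application could query the scatter/gather limit with t_sysconf(3NSL) and send from two non-contiguous buffers with t_sndv(3NSL):

#include <xti.h>

	struct t_iovec iov[2];
	int iovmax;

	/* maximum number of t_iovec elements the implementation accepts */
	iovmax = t_sysconf(T_IOV_MAX);

	/* hdr/body are application buffers; both are sent as one unit */
	iov[0].iov_base = hdr;   iov[0].iov_len = hdrlen;
	iov[1].iov_base = body;  iov[1].iov_len = bodylen;
	if (t_sndv(fd, iov, 2, 0) == -1)
		t_error("t_sndv failed");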

Additional Connection Release Interfaces

t_sndreldata(3NSL)

Initiate/respond to an orderly release with user data

t_rcvreldata(3NSL)

Receive an orderly release indication or confirmation containing user data


Note -

The additional interfaces t_sndreldata(3NSL) and t_rcvreldata(3NSL) are only for use with a specific transport called "minimal OSI", which is not available on the Solaris platform. These interfaces are not available for use in conjunction with the Internet transports (TCP or UDP).