Programming Interfaces Guide

Chapter 9 Programming With XTI and TLI

This chapter describes the Transport Layer Interface (TLI) and the X/Open Transport Interface (XTI). Advanced topics such as asynchronous execution mode are discussed in Advanced XTI/TLI Topics.

Some recent additions to XTI, such as scatter/gather data transfer, are discussed in Additions to the XTI Interface.

The transport layer of the OSI model (layer 4) is the lowest layer of the model that provides applications and higher layers with end-to-end service. This layer hides the topology and characteristics of the underlying network from users. The transport layer also defines a set of services common to many contemporary protocol suites, including the OSI protocols, the Transmission Control Protocol/Internet Protocol (TCP/IP) suite, Xerox Network Systems (XNS), and Systems Network Architecture (SNA).

TLI is modeled on the industry-standard Transport Service Definition (ISO 8072). It can also be used to access both TCP and UDP. XTI and TLI are a set of interfaces that constitute a network programming interface. XTI is an evolution of the older TLI interface available on the SunOS 4 platform. The Solaris operating system supports both interfaces, although XTI represents the future direction of this set of interfaces. The Solaris software implements XTI and TLI as a user library that uses the STREAMS I/O mechanism.

What Are XTI and TLI?


Note –

The interfaces described in this chapter are multithread safe. This means that applications containing XTI/TLI interface calls can be used freely in a multithreaded application. Because these interface calls are not re-entrant, they do not provide linear scalability.



Caution –

The XTI/TLI interface behavior has not been well specified in an asynchronous environment. Do not use these interfaces from signal handler routines.


TLI was introduced with AT&T System V, Release 3 in 1986, providing a transport layer interface API. The ISO Transport Service Definition provided the model on which TLI is based. TLI provides an API between the OSI transport and session layers. TLI interfaces evolved further in the AT&T System V, Release 4 version of UNIX and were also made available in the SunOS 5.6 operating system.

XTI interfaces are an evolution of TLI interfaces and represent the future direction of this family of interfaces. Compatibility for applications using TLI interfaces is available. You do not need to port TLI applications to XTI immediately. New applications can use the XTI interfaces and you can port older applications to XTI when necessary.

TLI is implemented as a set of interface calls in a library (libnsl) to which the applications link. XTI applications are compiled using the c89 front end and must be linked with the xnet library (libxnet). For additional information on compiling with XTI, see the standards(5) man page.
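For example, assuming hypothetical source files named tli_app.c and xti_app.c, the two link lines would look something like the following:

cc -o tli_app tli_app.c -lnsl        # TLI application, links with libnsl
c89 -o xti_app xti_app.c -lxnet      # XTI application, links with libxnet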


Note –

An application using the XTI interface uses the xti.h header file, whereas an application using the TLI interface includes the tiuser.h header file.


XTI/TLI code can be independent of current transport providers when used in conjunction with some additional interfaces and mechanisms described in Chapter 4. The SunOS 5 product includes some transport providers (TCP, for example) as part of the base operating system. A transport provider performs services, and the transport user requests those services by issuing service requests to the provider. An example is a request to transfer data over a connection; TCP and UDP are examples of transport providers.

XTI/TLI can also be used for transport-independent programming by taking advantage of two additional components, described in the sections that follow.

XTI/TLI Read/Write Interface

A user might want to establish a transport connection and then run an existing program, such as /usr/bin/cat, by calling exec(2) to process the data as it arrives over the connection. Existing programs use read(2) and write(2). XTI/TLI does not directly support a read/write interface to a transport provider, but one is available. The interface enables you to issue read(2) and write(2) calls over a transport connection that is in the data transfer phase. This section describes the read/write interface to the connection mode service of XTI/TLI. This interface is not available with the connectionless mode service.


Example 9–1 Read/Write Interface

#include <stropts.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

/* Same local management and connection establishment steps. */

if (ioctl(fd, I_PUSH, "tirdwr") == -1) {
    perror("I_PUSH of tirdwr failed");
    exit(5);
}
/* Make the transport endpoint the standard input, then exec cat. */
close(0);
dup(fd);
execl("/usr/bin/cat", "/usr/bin/cat", (char *) 0);
perror("exec of /usr/bin/cat failed");
exit(6);

The client invokes the read/write interface by pushing tirdwr onto the stream associated with the transport endpoint. See the description of I_PUSH in the streamio(7I) man page. The tirdwr module converts XTI/TLI above the transport provider into a pure read/write interface. With the module in place, the client calls close(2) and dup(2) to establish the transport endpoint as its standard input file, and uses /usr/bin/cat to process the input.

Pushing tirdwr onto the transport provider forces XTI/TLI to use read(2) and write(2) semantics. XTI/TLI does not preserve message boundaries when using read and write semantics. Pop tirdwr from the transport provider to restore XTI/TLI semantics (see the description of I_POP in the streamio(7I) man page).
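A minimal sketch of restoring XTI/TLI semantics, assuming fd is the transport endpoint from Example 9–1:

/* Remove tirdwr; XTI/TLI calls are again valid on the endpoint. */
if (ioctl(fd, I_POP, 0) == -1) {
    perror("I_POP of tirdwr failed");
}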


Caution –

Push the tirdwr module onto a stream only when the transport endpoint is in the data transfer phase. After pushing the module, the user cannot call any XTI/TLI routines. If the user invokes an XTI/TLI routine, tirdwr generates a fatal protocol error, EPROTO, on the stream, rendering it unusable. If you then pop the tirdwr module off the stream, the transport connection aborts. See the description of I_POP in the streamio(7I) man page.


Write Data

After you send data over the transport connection with write(2), tirdwr passes data through to the transport provider. If you send a zero-length data packet, which the mechanism allows, tirdwr discards the message. If the transport connection is aborted, a hang-up condition is generated on the stream, further write(2) calls fail, and errno is set to ENXIO. This problem might occur, for example, because the remote user aborts the connection using t_snddis(3NSL). You can still retrieve any available data after a hang-up.
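The following fragment sketches this behavior; the descriptor fd, the buffer buf, and its length len are assumed to be set up elsewhere:

#include <errno.h>
#include <stdio.h>
#include <unistd.h>

if (write(fd, buf, len) == -1) {
    if (errno == ENXIO)
        fprintf(stderr, "transport connection was aborted\n");  /* hang-up */
    else
        perror("write over transport connection failed");
}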

Read Data

Receive data that arrives over the transport connection with read(2); tirdwr passes the data up from the transport provider. The tirdwr module also processes any other event or request passed to the user from the provider. For example, tirdwr delivers an incoming orderly release request as a zero-length read(2) result, the usual end-of-file convention.

Close Connection

With tirdwr on a stream, you can send and receive data over a transport connection for the duration of the connection. Either user can terminate the connection by closing the file descriptor associated with the transport endpoint or by popping the tirdwr module off the stream. In either case, tirdwr completes the appropriate connection release before the stream is dismantled.

A process cannot initiate an orderly release after pushing tirdwr onto a stream. tirdwr handles an orderly release if the user on the other side of a transport connection initiates the release. If the client in this section is communicating with a server program, the server terminates the transfer of data with an orderly release request. The server then waits for the corresponding request from the client. At that point, the client exits and closes the transport endpoint. After closing the file descriptor, tirdwr initiates the orderly release request from the client's side of the connection. This release generates the request on which the server blocks.

Some protocols, like TCP, require this orderly release to ensure intact delivery of the data.
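On the server side, which still calls XTI/TLI routines directly, the orderly release exchange might look like the following sketch (error handling omitted; fd is assumed to be the server's connected endpoint):

/* Finish sending data, then initiate the orderly release. */
t_sndrel(fd);

/* Block until the client's corresponding release request arrives.
 * tirdwr sends that request when the client closes its endpoint. */
t_rcvrel(fd);

t_close(fd);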

Advanced XTI/TLI Topics

This section presents additional XTI/TLI concepts, covered in the following subsections.

Asynchronous Execution Mode

Many XTI/TLI library routines block to wait for an incoming event. However, some time-critical applications should not block for any reason. An application can do local processing while waiting for some asynchronous XTI/TLI event.

Applications can access asynchronous processing of XTI/TLI events through the combination of asynchronous features and the non-blocking mode of XTI/TLI library routines. See the ONC+ Developer’s Guide for information on use of the poll(2) system call and the I_SETSIG ioctl(2) command to process events asynchronously.

You can run each XTI/TLI routine that blocks for an event in a special non-blocking mode. For example, t_listen(3NSL) normally blocks for a connect request. A server can periodically poll a transport endpoint for queued connect requests by calling t_listen(3NSL) in the non-blocking (or asynchronous) mode. You enable the asynchronous mode by setting O_NDELAY or O_NONBLOCK in the file descriptor. Set these modes as a flag through t_open(3NSL), or by calling fcntl(2) before calling the XTI/TLI routine. Use fcntl(2) to enable or disable this mode at any time. All program examples in this chapter use the default synchronous processing mode.

Use of O_NDELAY or O_NONBLOCK affects each XTI/TLI routine differently. You need to determine the exact semantics of O_NDELAY or O_NONBLOCK for a particular routine.
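For example, t_listen(3NSL) in non-blocking mode fails with t_errno set to TNODATA when no connect request is queued. The following sketch assumes an endpoint fd that was opened and bound elsewhere and an allocated t_call structure call:

#include <fcntl.h>
#include <tiuser.h>

/* Enable non-blocking mode on the existing endpoint. */
fcntl(fd, F_SETFL, fcntl(fd, F_GETFL) | O_NONBLOCK);

/* Poll for a queued connect request without blocking. */
if (t_listen(fd, call) == -1) {
    if (t_errno == TNODATA) {
        /* no connect request is queued; do other work and poll again */
    } else {
        t_error("t_listen failed");
    }
}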

Advanced XTI/TLI Programming Example

Example 9–2 demonstrates two important concepts. The first is a server's ability to manage multiple outstanding connect requests. The second is event-driven use of XTI/TLI and the system call interface.

By using XTI/TLI, a server can manage multiple outstanding connect requests. One reason to receive several simultaneous connect requests is to prioritize the clients. A server can receive several connect requests, and accept them in an order based on the priority of each client.

The second reason for handling several outstanding connect requests is to overcome the limits of single-threaded processing. Depending on the transport provider, while a server is processing one connect request, other clients see the server as busy. If multiple connect requests are processed simultaneously, the server is busy only if more than the maximum number of clients try to call the server simultaneously.

The server example is event-driven: the process polls a transport endpoint for incoming XTI/TLI events and takes the appropriate actions for the event received. The following example demonstrates the ability to poll multiple transport endpoints for incoming events.


Example 9–2 Endpoint Establishment (Convertible to Multiple Connections)

#include <tiuser.h>
#include <fcntl.h>
#include <stdio.h>
#include <poll.h>
#include <stropts.h>
#include <signal.h>

#define NUM_FDS 1
#define MAX_CONN_IND 4
#define SRV_ADDR 1                 /* server's well known address */

int conn_fd;                       /* server connection here */
extern int t_errno;
/* holds connect requests */
struct t_call *calls[NUM_FDS][MAX_CONN_IND];

main()
{
   struct pollfd pollfds[NUM_FDS];
   struct t_bind *bind;
   int i;

   /*
    * Only opening and binding one transport endpoint, but more can
    * be supported
    */
   if ((pollfds[0].fd = t_open("/dev/tivc", O_RDWR,
         (struct t_info *) NULL)) == -1) {
      t_error("t_open failed");
      exit(1);
   }
   if ((bind = (struct t_bind *) t_alloc(pollfds[0].fd, T_BIND,
         T_ALL)) == (struct t_bind *) NULL) {
      t_error("t_alloc of t_bind structure failed");
      exit(2);
   }
   bind->qlen = MAX_CONN_IND;
   bind->addr.len = sizeof(int);
   *(int *) bind->addr.buf = SRV_ADDR;
   if (t_bind(pollfds[0].fd, bind, bind) == -1) {
      t_error("t_bind failed");
      exit(3);
   }
   /* Was the correct address bound? */
   if (bind->addr.len != sizeof(int) ||
      *(int *)bind->addr.buf != SRV_ADDR) {
      fprintf(stderr, "t_bind bound wrong address\n");
      exit(4);
   }
}

The file descriptor returned by t_open(3NSL) is stored in a pollfd structure that controls polling of the transport endpoints for incoming data. See the poll(2) man page. Only one transport endpoint is established in this example. However, the remainder of the example is written to manage multiple transport endpoints. Several endpoints could be supported with minor changes to Example 9–2.

This server sets qlen to a value greater than 1 for t_bind(3NSL). This value specifies that the server should queue multiple outstanding connect requests. The server accepts the current connect request before accepting additional connect requests. This example can queue up to MAX_CONN_IND connect requests. The transport provider can negotiate the value of qlen to be smaller if the provider cannot support MAX_CONN_IND outstanding connect requests.

After the server binds its address and is ready to process connect requests, it behaves as shown in the following example.


Example 9–3 Processing Connection Requests

pollfds[0].events = POLLIN;

while (TRUE) {
    if (poll(pollfds, NUM_FDS, -1) == -1) {
        perror("poll failed");
        exit(5);
    }
    for (i = 0; i < NUM_FDS; i++) {
        switch (pollfds[i].revents) {
            default:
                perror("poll returned error event");
                exit(6);
            case 0:
                continue;
            case POLLIN:
                do_event(i, pollfds[i].fd);
                service_conn_ind(i, pollfds[i].fd);
        }
    }
}

The events field of the pollfd structure is set to POLLIN, which notifies the server of any incoming XTI/TLI events. The server then enters an infinite loop in which it polls the transport endpoints for events, and processes events as they occur.

The poll(2) call blocks indefinitely for an incoming event. On return, the server checks the value of revents for each entry, one per transport endpoint, for new events. If revents is 0, the endpoint has generated no events and the server continues to the next endpoint. If revents is POLLIN, there is an event on the endpoint. The server calls do_event to process the event. Any other value in revents indicates an error on the endpoint, and the server exits. With multiple endpoints, the server should close this descriptor and continue.

Each time the server iterates the loop, it calls service_conn_ind to process any outstanding connect requests. If another connect request is pending, service_conn_ind saves the new connect request and responds to it later.

The server calls do_event in the following example to process an incoming event.


Example 9–4 Event Processing Routine

do_event( slot, fd)
int slot;
int fd;
{
   struct t_discon *discon;
   int i;

   switch (t_look(fd)) {
   default:
      fprintf(stderr, "t_look: unexpected event\n");
      exit(7);
   case T_ERROR:
      fprintf(stderr, "t_look returned T_ERROR event\n");
      exit(8);
   case -1:
      t_error("t_look failed");
      exit(9);
   case 0:
      /* since POLLIN returned, this should not happen */
      fprintf(stderr,"t_look returned no event\n");
      exit(10);
   case T_LISTEN:
      /* find free element in calls array */
      for (i = 0; i < MAX_CONN_IND; i++) {
         if (calls[slot][i] == (struct t_call *) NULL)
            break;
      }
      if ((calls[slot][i] = (struct t_call *) t_alloc( fd, T_CALL,
               T_ALL)) == (struct t_call *) NULL) {
         t_error("t_alloc of t_call structure failed");
         exit(11);
      }
      if (t_listen(fd, calls[slot][i] ) == -1) {
         t_error("t_listen failed");
         exit(12);
      }
      break;
   case T_DISCONNECT:
      discon = (struct t_discon *) t_alloc(fd, T_DIS, T_ALL);
      if (discon == (struct t_discon *) NULL) {
         t_error("t_alloc of t_discon structure failed");
         exit(13);
      }
      if (t_rcvdis(fd, discon) == -1) {
         t_error("t_rcvdis failed");
         exit(14);
      }
      /* find call ind in array and delete it */
      for (i = 0; i < MAX_CONN_IND; i++) {
         if (calls[slot][i] != (struct t_call *) NULL &&
               discon->sequence == calls[slot][i]->sequence) {
            t_free(calls[slot][i], T_CALL);
            calls[slot][i] = (struct t_call *) NULL;
         }
      }
      t_free(discon, T_DIS);
      break;
   }
}

The arguments in Example 9–4 are a number (slot) and a file descriptor (fd). A slot is the index into the global array calls, which has an entry for each transport endpoint. Each entry is an array of t_call structures that hold incoming connect requests for the endpoint.

The do_event module calls t_look(3NSL) to identify the XTI/TLI event on the endpoint specified by fd. If the event is a connect request (T_LISTEN event) or disconnect request (T_DISCONNECT event), the event is processed. Otherwise, the server prints an error message and exits.

For connect requests, do_event scans the array of outstanding connect requests for the first free entry. A t_call structure is allocated for the entry, and the connect request is received by t_listen(3NSL). The array is large enough to hold the maximum number of outstanding connect requests. The processing of the connect request is deferred.

A disconnect request must correspond to an earlier connect request. The do_event module allocates a t_discon structure to receive the request. This structure has the following fields:

struct t_discon {
    struct netbuf    udata;
    int              reason;
    int              sequence;
};

The udata structure contains any user data sent with the disconnect request. The value of reason contains a protocol-specific disconnect reason code. The value of sequence identifies the connect request that matches the disconnect request.

The server calls t_rcvdis(3NSL) to receive the disconnect request. The array of connect requests is scanned for one that contains the sequence number that matches the sequence number in the disconnect request. When the connect request is found, its structure is freed and the entry is set to NULL.

When an event is found on a transport endpoint, service_conn_ind is called to process all queued connect requests on the endpoint, as the following example shows.


Example 9–5 Process All Connect Requests

service_conn_ind(slot, fd)
int slot;
int fd;
{
   int i;

   for (i = 0; i < MAX_CONN_IND; i++) {
      if (calls[slot][i] == (struct t_call *) NULL)
         continue;
      if ((conn_fd = t_open("/dev/tivc", O_RDWR,
            (struct t_info *) NULL)) == -1) {
         t_error("t_open failed");
         exit(15);
      }
      if (t_bind(conn_fd, (struct t_bind *) NULL,
            (struct t_bind *) NULL) == -1) {
         t_error("t_bind failed");
         exit(16);
      }
      if (t_accept(fd, conn_fd, calls[slot][i]) == -1) {
         if (t_errno == TLOOK) {
            t_close(conn_fd);
            return;
         }
         t_error("t_accept failed");
         exit(17);
      }
      t_free(calls[slot][i], T_CALL);
      calls[slot][i] = (struct t_call *) NULL;
      run_server(fd);
   }
}

For each transport endpoint, the array of outstanding connect requests is scanned. For each request, the server opens a responding transport endpoint, binds an address to the endpoint, and accepts the connection on the endpoint. If another connect or disconnect request arrives before the current request is accepted, t_accept(3NSL) fails and sets t_errno to TLOOK. You cannot accept an outstanding connect request if any pending connect request events or disconnect request events exist on the transport endpoint.

If this error occurs, the responding transport endpoint is closed and service_conn_ind returns immediately, saving the current connect request for later processing. This activity causes the server's main processing loop to be entered, and the new event is discovered by the next call to poll(2). In this way, the user can queue multiple connect requests.

Eventually, all events are processed, and service_conn_ind is able to accept each connect request in turn.

Asynchronous Networking

This section discusses the techniques of asynchronous network communication using XTI/TLI for real-time applications. The SunOS platform provides support for asynchronous network processing of XTI/TLI events using a combination of STREAMS asynchronous features and the non-blocking mode of the XTI/TLI library routines.

Networking Programming Models

Like file and device I/O, network transfers can be done synchronously or asynchronously with process service requests.

Synchronous networking proceeds similarly to synchronous file and device I/O. Like the write(2) interface, the send request returns after buffering the message, but might suspend the calling process if buffer space is not immediately available. Like the read(2) interface, a receive request suspends execution of the calling process until data arrives to satisfy the request. Because there are no guaranteed bounds for transport services, synchronous networking is inappropriate for processes that must have real-time behavior with respect to other devices.

Asynchronous networking is provided by non-blocking service requests. Additionally, applications can request asynchronous notification when a connection might be established, when data might be sent, or when data might be received.

Asynchronous Connectionless-Mode Service

Asynchronous connectionless mode networking is conducted by configuring the endpoint for non-blocking service, and either polling for or receiving asynchronous notification when data might be transferred. If asynchronous notification is used, the actual receipt of data typically takes place within a signal handler.

Making the Endpoint Asynchronous

After the endpoint has been established using t_open(3NSL), and its identity established using t_bind(3NSL), the endpoint can be configured for asynchronous service. Use the fcntl(2) interface to set the O_NONBLOCK flag on the endpoint. Thereafter, calls to t_sndudata(3NSL) for which no buffer space is immediately available return -1 with t_errno set to TFLOW. Likewise, calls to t_rcvudata(3NSL) for which no data are available return -1 with t_errno set to TNODATA.
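The following fragment sketches both checks; fd, an int flags, and the t_unitdata structure ud are assumed to be set up elsewhere:

/* Non-blocking send: TFLOW means the provider is flow-controlled. */
if (t_sndudata(fd, &ud) == -1 && t_errno == TFLOW) {
    /* no buffer space yet; queue the message and retry later */
}

/* Non-blocking receive: TNODATA means nothing has arrived. */
if (t_rcvudata(fd, &ud, &flags) == -1 && t_errno == TNODATA) {
    /* no data pending; continue with other work */
}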

Asynchronous Network Transfers

Although an application can use poll(2) to check periodically for the arrival of data or to wait for the receipt of data on an endpoint, receiving asynchronous notification when data arrives might be necessary. Use ioctl(2) with the I_SETSIG command to request that a SIGPOLL signal be sent to the process upon receipt of data at the endpoint. Applications should check for the possibility of multiple messages causing a single signal.

In the following example, protocol is the name of the application-chosen transport protocol.

#include <sys/types.h>
#include <tiuser.h>
#include <signal.h>
#include <stropts.h>

int              fd;
struct t_bind    *bind;
void             sigpoll(int);

	fd = t_open(protocol, O_RDWR, (struct t_info *) NULL);

	bind = (struct t_bind *) t_alloc(fd, T_BIND, T_ADDR);
	...     /* set up binding address */
	t_bind(fd, bind, bind);

	/* make endpoint non-blocking */
	fcntl(fd, F_SETFL, fcntl(fd, F_GETFL) | O_NONBLOCK);

	/* establish signal handler for SIGPOLL */
	signal(SIGPOLL, sigpoll);

	/* request SIGPOLL signal when receive data is available */
	ioctl(fd, I_SETSIG, S_INPUT | S_HIPRI);

	...

void sigpoll(int sig)
{
	int                  flags;
	struct t_unitdata    ud;

	for (;;) {
		... /* initialize ud */
		if (t_rcvudata(fd, &ud, &flags) < 0) {
			if (t_errno == TNODATA)
				break;  /* no more messages */
			... /* process other error conditions */
		}
		... /* process message in ud */
	}
}

Asynchronous Connection-Mode Service

For connection-mode service, an application can arrange not only for the data transfer, but also for the establishment of the connection itself to be done asynchronously. The sequence of operations depends on whether the process is attempting to connect to another process or is awaiting connection attempts.

Asynchronously Establishing a Connection

A process can attempt a connection and asynchronously complete the connection. The process first creates the connecting endpoint and, using fcntl(2), configures the endpoint for non-blocking operation. As with connectionless data transfers, the endpoint can also be configured for asynchronous notification upon completion of the connection and subsequent data transfers. The connecting process then uses t_connect(3NSL) to initiate setting up the transfer. Then t_rcvconnect(3NSL) is used to confirm the establishment of the connection.

Asynchronous Use of a Connection

To asynchronously await connections, a process first establishes a non-blocking endpoint bound to a service address. When either the result of poll(2) or an asynchronous notification indicates that a connection request has arrived, the process can get the connection request by using t_listen(3NSL). To accept the connection, the process uses t_accept(3NSL). The responding endpoint must be separately configured for asynchronous data transfers.

The following example illustrates how to request a connection asynchronously.

#include <tiuser.h>
int             fd;
struct t_call   *call;

fd = /* establish a non-blocking endpoint */

call = (struct t_call *) t_alloc(fd, T_CALL, T_ADDR);
/* initialize call structure */
t_connect(fd, call, call);

/* connection request is now proceeding asynchronously */

/* receive indication that connection has been accepted */
t_rcvconnect(fd, call);

The following example illustrates listening for connections asynchronously.

#include <tiuser.h>
int             fd, res_fd;
struct t_call   *call;

fd = /* establish non-blocking endpoint */

/* receive indication that connection request has arrived */
call = (struct t_call *) t_alloc(fd, T_CALL, T_ALL);
t_listen(fd, call);

/* determine whether or not to accept connection */
res_fd = /* establish non-blocking endpoint for response */
t_accept(fd, res_fd, call);

Asynchronous Open

Occasionally, an application might need to dynamically open a regular file in a file system mounted from a remote host, or on a device whose initialization might be prolonged. However, while such a request to open a file is being processed, the application is unable to achieve real-time response to other events. The SunOS software solves this problem by having a second process handle the actual opening of the file and then pass the open file descriptor to the real-time process.

Transferring a File Descriptor

The STREAMS interface provided by the SunOS platform provides a mechanism for passing an open file descriptor from one process to another. The process with the open file descriptor uses ioctl(2) with a command argument of I_SENDFD. The second process obtains the file descriptor by calling ioctl(2) with a command argument of I_RECVFD.

In the following example, the parent process prints out information about the test file, and creates a pipe. Next, the parent creates a child process that opens the test file and passes the open file descriptor back to the parent through the pipe. The parent process then displays the status information on the new file descriptor.


Example 9–6 File Descriptor Transfer

#include <sys/types.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <stropts.h>
#include <stdio.h>

#define TESTFILE "/dev/null"
main(int argc, char *argv[])
{
	int fd;
	int pipefd[2];
	struct stat statbuf;

	stat(TESTFILE, &statbuf);
	statout(TESTFILE, &statbuf);
	pipe(pipefd);
	if (fork() == 0) {
		close(pipefd[0]);
		sendfd(pipefd[1]);
	} else {
		close(pipefd[1]);
		recvfd(pipefd[0]);
	}
}

sendfd(int p)
{
	int tfd;

	tfd = open(TESTFILE, O_RDWR);
	ioctl(p, I_SENDFD, tfd);
}

recvfd(int p)
{
	struct strrecvfd rfdbuf;
	struct stat statbuf;
	char fdbuf[32];

	ioctl(p, I_RECVFD, &rfdbuf);
	fstat(rfdbuf.fd, &statbuf);
	sprintf(fdbuf, "recvfd=%d", rfdbuf.fd);
	statout(fdbuf, &statbuf);	
}

statout(char *f, struct stat *s)
{
	printf("stat: from=%s mode=0%o, ino=%ld, dev=%lx, rdev=%lx\n",
		f, s->st_mode, s->st_ino, s->st_dev, s->st_rdev);
	fflush(stdout);
}

State Transitions

The tables in the following sections describe all state transitions associated with XTI/TLI.

XTI/TLI States

The following table defines the states used in XTI/TLI state transitions, along with the service types.

Table 9–1 XTI/TLI State Transitions and Service Types

State        Description                                                      Service Type
T_UNINIT     Uninitialized; initial and final state of interface              T_COTS, T_COTS_ORD, T_CLTS
T_UNBND      Initialized but not bound                                        T_COTS, T_COTS_ORD, T_CLTS
T_IDLE       No connection established                                        T_COTS, T_COTS_ORD, T_CLTS
T_OUTCON     Outgoing connection pending for client                           T_COTS, T_COTS_ORD
T_INCON      Incoming connection pending for server                           T_COTS, T_COTS_ORD
T_DATAXFER   Data transfer                                                    T_COTS, T_COTS_ORD
T_OUTREL     Outgoing orderly release (waiting for orderly release request)   T_COTS_ORD
T_INREL      Incoming orderly release (waiting to send orderly release request)   T_COTS_ORD

Outgoing Events

The outgoing events described in the following table correspond to the status returned from the specified transport routines, where these routines send a request or response to the transport provider. In the table, some events, such as “accept,” are distinguished by the context in which they occur. The context is based on the values of the following variables:

ocnt — Count of outstanding connect requests on the endpoint
fd — File descriptor of the current transport endpoint
resfd — File descriptor of the transport endpoint where a connection is accepted

Table 9–2 Outgoing Events

Event      Description                                                      Service Type
opened     Successful return of t_open(3NSL)                                T_COTS, T_COTS_ORD, T_CLTS
bind       Successful return of t_bind(3NSL)                                T_COTS, T_COTS_ORD, T_CLTS
optmgmt    Successful return of t_optmgmt(3NSL)                             T_COTS, T_COTS_ORD, T_CLTS
unbind     Successful return of t_unbind(3NSL)                              T_COTS, T_COTS_ORD, T_CLTS
closed     Successful return of t_close(3NSL)                               T_COTS, T_COTS_ORD, T_CLTS
connect1   Successful return of t_connect(3NSL) in synchronous mode         T_COTS, T_COTS_ORD
connect2   TNODATA error on t_connect(3NSL) in asynchronous mode, or TLOOK error due to a disconnect request arriving on the transport endpoint   T_COTS, T_COTS_ORD
accept1    Successful return of t_accept(3NSL) with ocnt == 1, fd == resfd  T_COTS, T_COTS_ORD
accept2    Successful return of t_accept(3NSL) with ocnt == 1, fd != resfd  T_COTS, T_COTS_ORD
accept3    Successful return of t_accept(3NSL) with ocnt > 1                T_COTS, T_COTS_ORD
snd        Successful return of t_snd(3NSL)                                 T_COTS, T_COTS_ORD
snddis1    Successful return of t_snddis(3NSL) with ocnt <= 1               T_COTS, T_COTS_ORD
snddis2    Successful return of t_snddis(3NSL) with ocnt > 1                T_COTS, T_COTS_ORD
sndrel     Successful return of t_sndrel(3NSL)                              T_COTS_ORD
sndudata   Successful return of t_sndudata(3NSL)                            T_CLTS

Incoming Events

The incoming events correspond to the successful return of the specified routines. These routines return data or event information from the transport provider. The only incoming event not associated directly with the return of a routine is pass_conn, which occurs when a connection is transferred to another endpoint. The event occurs on the endpoint that is being passed the connection, although no XTI/TLI routine is called on the endpoint.

In the following table, the rcvdis events are distinguished by the value of ocnt, the count of outstanding connect requests on the endpoint.

Table 9–3 Incoming Events

Event        Description                                           Service Type
listen       Successful return of t_listen(3NSL)                   T_COTS, T_COTS_ORD
rcvconnect   Successful return of t_rcvconnect(3NSL)               T_COTS, T_COTS_ORD
rcv          Successful return of t_rcv(3NSL)                      T_COTS, T_COTS_ORD
rcvdis1      Successful return of t_rcvdis(3NSL) with ocnt <= 0    T_COTS, T_COTS_ORD
rcvdis2      Successful return of t_rcvdis(3NSL) with ocnt == 1    T_COTS, T_COTS_ORD
rcvdis3      Successful return of t_rcvdis(3NSL) with ocnt > 1     T_COTS, T_COTS_ORD
rcvrel       Successful return of t_rcvrel(3NSL)                   T_COTS_ORD
rcvudata     Successful return of t_rcvudata(3NSL)                 T_CLTS
rcvuderr     Successful return of t_rcvuderr(3NSL)                 T_CLTS
pass_conn    Receive a passed connection                           T_COTS, T_COTS_ORD

State Tables

The state tables describe the XTI/TLI state transitions. Each box contains the next state, given the current state (column) and the current event (row). An empty box is an invalid state/event combination. Each box can also have an action list. Actions must be done in the order specified in the box.

Some of the state transitions listed in the tables below are accompanied by one or more bracketed digits. Each digit identifies an action that the transport user must take, in the order shown:

[1] Set the count of outstanding connect requests to zero.
[2] Increment the count of outstanding connect requests.
[3] Decrement the count of outstanding connect requests.
[4] Pass a connection to another transport endpoint, as indicated in t_accept(3NSL).

The following table shows endpoint establishment states.

Table 9–4 Connection Establishment State

Event/State          T_UNINIT    T_UNBND      T_IDLE
opened               T_UNBND
bind                             T_IDLE [1]
optmgmt (TLI only)                            T_IDLE
unbind                                        T_UNBND
closed                           T_UNINIT

The following table shows data transfer in connection mode.

Table 9–5 Connection Mode State: Part 1

Event/State   T_IDLE        T_OUTCON     T_INCON          T_DATAXFER
connect1      T_DATAXFER
connect2      T_OUTCON
rcvconnect                  T_DATAXFER
listen        T_INCON [2]                T_INCON [2]
accept1                                  T_DATAXFER [3]
accept2                                  T_IDLE [3] [4]
accept3                                  T_INCON [3] [4]
snd                                                       T_DATAXFER
rcv                                                       T_DATAXFER
snddis1                     T_IDLE       T_IDLE [3]       T_IDLE
snddis2                                  T_INCON [3]
rcvdis1                     T_IDLE                        T_IDLE
rcvdis2                                  T_IDLE [3]
rcvdis3                                  T_INCON [3]
sndrel                                                    T_OUTREL
rcvrel                                                    T_INREL
pass_conn     T_DATAXFER
optmgmt       T_IDLE        T_OUTCON     T_INCON          T_DATAXFER
closed        T_UNINIT      T_UNINIT     T_UNINIT         T_UNINIT

The following table shows connection establishment/connection release/data transfer in connection mode.

Table 9–6 Connection Mode State: Part 2

Event/State   T_OUTREL    T_INREL    T_UNBND
connect1
connect2
rcvconnect
listen
accept1
accept2
accept3
snd                       T_INREL
rcv           T_OUTREL
snddis1       T_IDLE      T_IDLE
snddis2
rcvdis1       T_IDLE      T_IDLE
rcvdis2
rcvdis3
sndrel                    T_IDLE
rcvrel        T_IDLE
pass_conn                            T_DATAXFER
optmgmt       T_OUTREL    T_INREL    T_UNBND
closed        T_UNINIT    T_UNINIT

The following table shows connectionless mode states.

Table 9–7 Connectionless Mode State

Event/State   T_IDLE
sndudata      T_IDLE
rcvudata      T_IDLE
rcvuderr      T_IDLE

Guidelines to Protocol Independence

The set of XTI/TLI services, common to many transport protocols, offers protocol independence to applications. Not all transport protocols support all XTI/TLI services. If software must run in a variety of protocol environments, use only the common services.

Services that are not common to all transport protocols include, for example, orderly release, expedited data, and the transmission of user data with connect or disconnect requests. The limits and characteristics of a particular provider are returned by t_open(3NSL) and t_getinfo(3NSL).
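One way to check what a particular provider supports is t_getinfo(3NSL), which fills in a t_info structure describing the transport's characteristics. A minimal sketch, assuming an open endpoint fd:

#include <tiuser.h>

struct t_info info;

if (t_getinfo(fd, &info) == -1) {
    t_error("t_getinfo failed");
} else {
    /* servtype is T_COTS, T_COTS_ORD, or T_CLTS; a value of -2 in a
     * size field such as etsdu means the service is not supported. */
    if (info.servtype != T_COTS_ORD) {
        /* orderly release is unavailable; avoid t_sndrel/t_rcvrel */
    }
}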

XTI/TLI Versus Socket Interfaces

XTI/TLI and sockets are different methods of handling the same tasks. Although they provide mechanisms and services that are functionally similar, they do not provide one-to-one compatibility of routines or low-level services. Observe the similarities and differences between the XTI/TLI and socket-based interfaces before you decide to port an application.

Several issues that are related to transport independence can also have some bearing on RPC applications.

Socket-to-XTI/TLI Equivalents

The following table shows approximate equivalents between XTI/TLI interfaces and socket interfaces. The comment field describes the differences. If the comment column is blank, either the interfaces are similar or no equivalent interface exists in the other set.

Table 9–8 TLI and Socket Equivalent Functions

TLI interface        Socket interface                            Comments
t_open(3NSL)         socket(3SOCKET), socketpair(3SOCKET)
t_bind(3NSL)         bind(3SOCKET)                               t_bind(3NSL) sets the queue depth for passive sockets, but bind(3SOCKET) does not. For sockets, the queue length is specified in the call to listen(3SOCKET).
t_optmgmt(3NSL)      getsockopt(3SOCKET), setsockopt(3SOCKET)   t_optmgmt(3NSL) manages only transport options. getsockopt(3SOCKET) and setsockopt(3SOCKET) can manage options at the transport layer, but also at the socket layer and at the arbitrary protocol layer.
t_unbind(3NSL)       -
t_close(3NSL)        close(2)
t_getinfo(3NSL)      getsockopt(3SOCKET)                         t_getinfo(3NSL) returns information about the transport. getsockopt(3SOCKET) can return information about the transport and the socket.
t_getstate(3NSL)     -
t_sync(3NSL)         -
t_alloc(3NSL)        -
t_free(3NSL)         -
t_look(3NSL)         -                                           getsockopt(3SOCKET) with the SO_ERROR option returns the same kind of error information as t_look(3NSL).
t_error(3NSL)        perror(3C)
t_connect(3NSL)      connect(3SOCKET)                            You do not need to bind the local endpoint before invoking connect(3SOCKET). Bind the endpoint before calling t_connect(3NSL). You can use connect(3SOCKET) on a connectionless endpoint to set the default destination address for datagrams. You can send data using connect(3SOCKET).
t_rcvconnect(3NSL)   -
t_listen(3NSL)       listen(3SOCKET)                             t_listen(3NSL) waits for connection indications. listen(3SOCKET) sets the queue depth.
t_accept(3NSL)       accept(3SOCKET)
t_snd(3NSL)          send(3SOCKET), sendto(3SOCKET), sendmsg(3SOCKET)     sendto(3SOCKET) and sendmsg(3SOCKET) operate in connection mode as well as in datagram mode.
t_rcv(3NSL)          recv(3SOCKET), recvfrom(3SOCKET), recvmsg(3SOCKET)   recvfrom(3SOCKET) and recvmsg(3SOCKET) operate in connection mode as well as in datagram mode.
t_snddis(3NSL)       -
t_rcvdis(3NSL)       -
t_sndrel(3NSL)       shutdown(3SOCKET)
t_rcvrel(3NSL)       -
t_sndudata(3NSL)     sendto(3SOCKET)
t_rcvudata(3NSL)     recvmsg(3SOCKET)
t_rcvuderr(3NSL)     -
read(2), write(2)    read(2), write(2)                           In XTI/TLI you must push the tirdwr(7M) module before calling read(2) or write(2). In sockets, calling read(2) or write(2) suffices.

Additions to the XTI Interface

The XNS 5 (UNIX03) standard introduces some new XTI interfaces, briefly described below. You can find the details in the relevant manual pages. These interfaces are not available to TLI users. The scatter/gather data transfer interfaces, illustrated by a short sketch after this list, are:

t_sndvudata(3NSL)

Send a data unit from one or more non-contiguous buffers

t_rcvvudata(3NSL)

Receive a data unit into one or more non-contiguous buffers

t_sndv(3NSL)

Send data or expedited data from one or more non-contiguous buffers on a connection

t_rcvv(3NSL)

Receive data or expedited data sent over a connection and put the data into one or more non-contiguous buffers
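As a brief illustration of the scatter/gather style, the following sketch sends a header and a payload from two separate buffers over a connection with t_sndv(3NSL); the buffers hdr and payload and their lengths are placeholders, and error handling is abbreviated:

#include <xti.h>

struct t_iovec iov[2];

iov[0].iov_base = hdr;          /* first non-contiguous buffer */
iov[0].iov_len  = hdr_len;
iov[1].iov_base = payload;      /* second non-contiguous buffer */
iov[1].iov_len  = payload_len;

/* Send the contents of both buffers as a single unit. */
if (t_sndv(fd, iov, 2, 0) == -1)
    t_error("t_sndv failed");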

The XTI utility interface t_sysconf(3NSL) gets configurable XTI variables. The t_sndreldata(3NSL) interface initiates and responds to an orderly release with user data. The t_rcvreldata(3NSL) interface receives an orderly release indication or confirmation containing user data.


Note –

The additional interfaces t_sndreldata(3NSL) and t_rcvreldata(3NSL) are used only with a specific transport called minimal OSI, which is not available on the Solaris platform. These interfaces are not available for use in conjunction with Internet Transports (TCP or UDP).