ONC+ Developer's Guide

Chapter 6 Porting From TS-RPC to TI-RPC

The transport-independent RPC (TI-RPC) routines provide the developer with stratified levels of access to the transport layer. The highest-level routines provide complete abstraction from the transport and true transport independence. Lower levels provide access similar to the TS-RPC interfaces of previous releases.

This section is an informal guide to porting transport-specific RPC (TS-RPC) applications to TI-RPC. Table 6–1 summarizes the differences between the two interfaces and between selected routines and their counterparts. For information on porting issues that concern sockets and the transport layer interface (TLI), see the Programming Interfaces Guide.

Porting an Application

An application based on either TCP or UDP can run in binary-compatibility mode. For some applications, you need only recompile and relink all source files. Such applications might use simple RPC calls and rely on no socket, TCP, or UDP specifics.

You might need to edit code and write new code if an application depends on socket semantics or features specific to TCP or UDP. For example, the code might use the format of host addresses or rely on the Berkeley UNIX concept of privileged ports.

Applications that are dependent on the internals of the library or the socket implementation, or applications that depend on specific transport addressing, probably require more effort to port and might require substantial modification.

Benefits of Porting

Some of the benefits of porting are transport independence, access to any transport listed in /etc/netconfig, and support for multithreaded clients and servers.

IPv6 Considerations for RPC

IPv6 is the successor of IPv4, the most commonly used network-layer (layer 3) protocol. IPv6 is also known as IP next generation (IPng). For more information, see System Administration Guide: IP Services.

Both IPv4 and IPv6 are available to users. An application chooses which stack to use, whether it uses COTS (connection-oriented transport service, for example TCP) or CLTS (connectionless transport service, for example UDP).

The following figure illustrates a typical RPC application running over an IPv4 or IPv6 protocol stack.

Figure 6–1 RPC Applications

The RPC applications use TCP or UDP, each of which can use either an IPv4 or IPv6 stack to reach the network.

IPv6 is supported only for TI-RPC applications. TS-RPC does not currently support IPv6. Transport selection in TI-RPC is governed either by the NETPATH environment variable or by the order of the entries in /etc/netconfig.

Whether TCP or UDP runs over the IPv4 or the IPv6 stack depends on the order in which the corresponding entries appear in /etc/netconfig. Two new entries are associated with IPv6 in /etc/netconfig, and by default they are the first two entries of the file. TI-RPC therefore tries IPv6 first and falls back to IPv4 if that fails. No change is required in the RPC application itself, provided that the application has no knowledge of the transport and is written using the top-level interface.
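
For example, the order can be inspected programmatically. The following minimal sketch, which assumes only the standard netconfig routines in libnsl, walks /etc/netconfig with setnetconfig() and getnetconfig() and prints each transport in the order in which TI-RPC considers it. On a default configuration, the IPv6 entries appear before their IPv4 counterparts.

	#include <netconfig.h>
	#include <stdio.h>

	int
	main(void)
	{
		void *handle;
		struct netconfig *nconf;

		/* Iterate over the /etc/netconfig entries in file order. */
		if ((handle = setnetconfig()) == NULL) {
			fprintf(stderr, "setnetconfig failed\n");
			return (1);
		}
		while ((nconf = getnetconfig(handle)) != NULL)
			printf("%s\t%s\t%s\n", nconf->nc_netid,
			    nconf->nc_protofmly, nconf->nc_proto);
		(void) endnetconfig(handle);
		return (0);
	}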

Porting Issues

Differences Between TI-RPC and TS-RPC

The major differences between transport-independent RPC and transport-specific RPC are illustrated in the following table. Also see Comparison Examples for code examples comparing TS-RPC with TI-RPC.

Table 6–1 Differences Between TI-RPC and TS-RPC

Default Transport Selection
    TI-RPC: Uses the TLI interface.
    TS-RPC: Uses the socket interface.

RPC Address Binding
    TI-RPC: Uses rpcbind for service binding. rpcbind keeps addresses in universal address format.
    TS-RPC: Uses portmap for service binding.

Transport Information
    TI-RPC: Transport information is kept in a local file, /etc/netconfig. Any transport identified in netconfig is accessible.
    TS-RPC: Only the TCP and UDP transports are supported.

Loopback Transports
    TI-RPC: The rpcbind service requires a secure loopback transport for server registration.
    TS-RPC: Services do not require a loopback transport.

Host Name Resolution
    TI-RPC: The order of host-name resolution depends on the order of the dynamic libraries identified by the entries in /etc/netconfig.
    TS-RPC: Host-name resolution is done by name services. The order is set by the state of the hosts database.

File Descriptors
    TI-RPC: Descriptors are assumed to be TLI endpoints.
    TS-RPC: Descriptors are assumed to be sockets.

rpcgen
    TI-RPC: The rpcgen tool adds support for multiple arguments, passing arguments by value, sample client files, and sample server files.
    TS-RPC: rpcgen in SunOS 4.1 and earlier releases does not support the features listed for TI-RPC rpcgen.

Libraries
    TI-RPC: Applications must be linked with the libnsl library.
    TS-RPC: All TS-RPC functionality is provided in libc.

MT Support
    TI-RPC: Multithreaded RPC clients and servers are supported.
    TS-RPC: Multithreaded RPC is not supported.

Function Compatibility Lists

This section lists the RPC library functions and groups them into functional areas. Each section includes lists of functions that are unchanged, have added functionality, and are new to this release.


Note –

Functions marked with an asterisk are retained for ease of porting.


Creating and Destroying Services

The following functions are unchanged from previous releases and are available in the current SunOS release:

svc_destroy
svcfd_create
*svc_raw_create
*svc_tp_create
*svcudp_create
*svcudp_bufcreate
svc_create
svc_dg_create
svc_fd_create
svc_raw_create
svc_tli_create
svc_tp_create
svc_vc_create
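
Of the newer routines, svc_create() is the top-level call: it creates a server handle on each transport of the class named by its nettype argument and registers the service with rpcbind. The following minimal sketch uses a hypothetical program number and a dispatch routine that answers only the null procedure:

	#include <rpc/rpc.h>
	#include <stdio.h>
	#include <stdlib.h>

	#define	PROGRAM	0x20000099	/* hypothetical program number */
	#define	VERSION	1

	/* Dispatch routine: handle the null procedure, reject the rest. */
	static void
	dispatch(struct svc_req *rqstp, SVCXPRT *xprt)
	{
		switch (rqstp->rq_proc) {
		case NULLPROC:
			(void) svc_sendreply(xprt, (xdrproc_t) xdr_void, NULL);
			break;
		default:
			svcerr_noproc(xprt);
			break;
		}
	}

	int
	main(void)
	{
		/*
		 * Create one server handle per transport selected by NETPATH
		 * (by default, the visible entries in /etc/netconfig).
		 */
		if (svc_create(dispatch, PROGRAM, VERSION, "netpath") == 0) {
			fprintf(stderr, "svc_create failed\n");
			exit(1);
		}
		svc_run();		/* never returns */
		return (1);
	}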

Registering and Unregistering Services

The following functions are unchanged from previous releases and are available in the current SunOS release:

*registerrpc
*svc_register
*svc_unregister
xprt_register
xprt_unregister
rpc_reg
svc_reg
svc_unreg
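
As a sketch of the newer interface, rpc_reg() plays the role that registerrpc() played in TS-RPC: it registers a single procedure on every transport of the requested type. The program and procedure numbers below are hypothetical, and an int argument and int result are assumed so that the standard xdr_int() routine serves for both:

	#include <rpc/rpc.h>
	#include <stdio.h>
	#include <stdlib.h>

	#define	PROGRAM		0x20000099	/* hypothetical program number */
	#define	VERSION		1
	#define	DOUBLEPROC	1		/* hypothetical procedure number */

	/* Service procedure: receives the decoded argument, returns the result. */
	static char *
	double_proc(char *argp)
	{
		static int result;

		result = *(int *) argp * 2;
		return ((char *) &result);
	}

	int
	main(void)
	{
		if (rpc_reg(PROGRAM, VERSION, DOUBLEPROC, double_proc,
		    (xdrproc_t) xdr_int, (xdrproc_t) xdr_int, "netpath") == -1) {
			fprintf(stderr, "rpc_reg failed\n");
			exit(1);
		}
		svc_run();		/* never returns */
		return (1);
	}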

SunOS Compatibility Calls

The following functions are unchanged from previous releases and are available in the current SunOS release:

*callrpc
clnt_call
*svc_getcaller - works only with IP-based transports
rpc_call
svc_getrpccaller
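
On the client side, rpc_call() is the transport-independent counterpart of callrpc(). The following sketch calls the hypothetical DOUBLEPROC procedure registered in the rpc_reg() sketch above; the "netpath" nettype lets NETPATH and /etc/netconfig choose the transport:

	#include <rpc/rpc.h>
	#include <stdio.h>

	#define	PROGRAM		0x20000099	/* hypothetical program number */
	#define	VERSION		1
	#define	DOUBLEPROC	1		/* hypothetical procedure number */

	int
	main(void)
	{
		int arg = 21;
		int result;
		enum clnt_stat stat;

		/* Single-shot call; the library picks a transport for us. */
		stat = rpc_call("host", PROGRAM, VERSION, DOUBLEPROC,
		    (xdrproc_t) xdr_int, (char *) &arg,
		    (xdrproc_t) xdr_int, (char *) &result, "netpath");
		if (stat != RPC_SUCCESS) {
			clnt_perrno(stat);
			return (1);
		}
		printf("result = %d\n", result);
		return (0);
	}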

Broadcasting

The clnt_broadcast call has the same functionality as in previous releases, although it is supported for backward compatibility only.

clnt_broadcast() can broadcast only to the portmap service. It does not support rpcbind.

The rpc_broadcast function broadcasts to both portmap and rpcbind and is also available in the current SunOS release.

Address Management Functions

The TI-RPC library functions interface with either portmap or rpcbind. Because the services of the programs differ, there are two sets of functions, one for each service.

The following functions work with portmap:

pmap_set
pmap_unset
pmap_getport
pmap_getmaps
pmap_rmtcall

The following functions work with rpcbind:

rpcb_set
rpcb_unset
rpcb_getaddr
rpcb_getmaps
rpcb_rmtcall

Authentication Functions

The following calls have the same functionality as in previous releases. They are supported for backward compatibility only.

authdes_create
authunix_create
authunix_create_default
authdes_seccreate
authsys_create
authsys_create_default
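
For example, where a TS-RPC client attached UNIX-style credentials with authunix_create_default(), a TI-RPC client calls authsys_create_default(). A minimal sketch, assuming cl is a client handle obtained from clnt_create() or clnt_tli_create():

	CLIENT *cl;
	...
	/* Attach AUTH_SYS (UNIX-style) credentials to the client handle. */
	cl->cl_auth = authsys_create_default();
	...
	/* When the handle is no longer needed, release both. */
	auth_destroy(cl->cl_auth);
	clnt_destroy(cl);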

Other Functions

rpcbind provides a time service, primarily for use in secure RPC client-server time synchronization, which is available through the rpcb_gettime() function. pmap_getport() and rpcb_getaddr() can be used to get the port number of a registered service. rpcb_getaddr() communicates with any server running version 2, 3, or 4 of rpcbind; pmap_getport() can communicate only with version 2.
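
For example, the time service can be queried with rpcb_gettime(), which fills in the remote host's time in seconds since the epoch. A minimal sketch, assuming a remote host named host:

	time_t remote_time;
	...
	/* Ask rpcbind on the remote host for its current time. */
	if (rpcb_gettime("host", &remote_time) == FALSE)
		fprintf(stderr, "rpcb_gettime failed\n");
	else
		printf("remote time: %s", ctime(&remote_time));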

Comparison Examples

The changes in client creation from TS-RPC to TI-RPC are illustrated in Example 6–1 and Example 6–2. Each example creates a client handle for PROGRAM and VERSION on the host named host over the UDP transport and sets a 25-second timeout.


Example 6–1 Client Creation in TS-RPC

	struct hostent *h;
	struct sockaddr_in sin;
	int sock = RPC_ANYSOCK;
	u_short port;
	struct timeval wait;
	CLIENT *cl;

	if ((h = gethostbyname( "host" )) == (struct hostent *) NULL) {
		syslog(LOG_ERR, "gethostbyname failed");
		exit(1);
	}
	sin.sin_family = AF_INET;
	sin.sin_addr.s_addr = *(u_int *) h->h_addr;
	if ((port = pmap_getport(&sin, PROGRAM, VERSION, "udp")) == 0) {
		syslog (LOG_ERR, "pmap_getport failed");
		exit(1);
	} else
		sin.sin_port = htons(port);
	wait.tv_sec = 25;
	wait.tv_usec = 0;
	cl = clntudp_create(&sin, PROGRAM, VERSION, wait, &sock);
	if (cl == (CLIENT *) NULL) {
		syslog(LOG_ERR, "clntudp_create failed");
		exit(1);
	}

The TI-RPC version of client creation, shown in the following example, assumes that the UDP transport has the netid udp. A netid is not necessarily a well-known name.


Example 6–2 Client Creation in TI-RPC

	struct netconfig *nconf;
	struct netconfig *getnetconfigent();
	struct t_bind *tbind;
	struct timeval wait;
	int fd;
	CLIENT *cl;

	nconf = getnetconfigent("udp");
	if (nconf == (struct netconfig *) NULL) {
		syslog(LOG_ERR, "getnetconfigent for udp failed");
		exit(1);
	}
	fd = t_open(nconf->nc_device, O_RDWR, (struct t_info *)NULL);
	if (fd == -1) {
		syslog(LOG_ERR, "t_open failed");
		exit(1);
	}
	tbind = (struct t_bind *) t_alloc(fd, T_BIND, T_ADDR);
	if (tbind == (struct t_bind *) NULL) {
		syslog(LOG_ERR, "t_bind failed");
		exit(1);
	}
	if (rpcb_getaddr( PROGRAM, VERSION, nconf, &tbind->addr, "host")
								== FALSE) {
		syslog(LOG_ERR, "rpcb_getaddr failed");
		exit(1);
	}
	cl = clnt_tli_create(fd, nconf, &tbind->addr, PROGRAM, VERSION,
	                      0, 0);
	(void) t_free((char *) tbind, T_BIND);
	if (cl == (CLIENT *) NULL) {
		syslog(LOG_ERR, "clnt_tli_create failed");
		exit(1);
	}
	wait.tv_sec = 25;
	wait.tv_usec = 0;
	clnt_control(cl, CLSET_TIMEOUT, (char *) &wait);

Example 6–3 and Example 6–4 show the differences between broadcast in TS-RPC and TI-RPC. The older clnt_broadcast() is similar to the newer rpc_broadcast(). The primary difference is in the collectnames() callback: the TS-RPC version receives the reply address as a struct sockaddr_in, while the TI-RPC version receives a struct t_bind address together with a struct netconfig pointer. In both versions, collectnames() weeds out duplicate addresses and displays the names of the hosts that reply to the broadcast.


Example 6–3 Broadcast in TS-RPC

statstime sw;
extern int collectnames();

clnt_broadcast(RSTATPROG, RSTATVERS_TIME, RSTATPROC_STATS,         
    	xdr_void, NULL, xdr_statstime, &sw, collectnames);
	...
collectnames(resultsp, raddrp)
	char *resultsp;
	struct sockaddr_in *raddrp;
{
	u_int addr;
	struct entry *entryp, *lim;
	struct hostent *hp;
	extern struct entry entry[];	/* replies collected so far */
	extern int curentry;

	/* weed out duplicates */
	addr = raddrp->sin_addr.s_addr;
	lim = entry + curentry;
	for (entryp = entry; entryp < lim; entryp++)
		if (addr == entryp->addr)
			return (0);
	...
	/* print the host's name (if possible) or address */
	hp = gethostbyaddr(&raddrp->sin_addr.s_addr, sizeof(u_int),
	    AF_INET);
	if( hp == (struct hostent *) NULL)
		printf("0x%x", addr);
	else
		printf("%s", hp->h_name);
}

The following code example shows broadcast in TI-RPC.


Example 6–4 Broadcast in TI-RPC

statstime sw;
extern int collectnames();

rpc_broadcast(RSTATPROG, RSTATVERS_TIME, RSTATPROC_STATS,
    xdr_void, NULL, xdr_statstime, &sw, collectnames, (char *) 0);
	...

collectnames(resultsp, taddr, nconf)
	char *resultsp;
	struct t_bind *taddr;
	struct netconfig *nconf;
{
	struct entry *entryp, *lim;
	struct nd_hostservlist *hs;
	extern struct entry entry[];	/* replies collected so far */
	extern int curentry;
	extern int netbufeq();

	/* weed out duplicates */
	lim = entry + curentry;
	for (entryp = entry; entryp < lim; entryp++)
		if (netbufeq( &taddr->addr, entryp->addr))
			return (0);
	...
	/* print the host's name (if possible) or address */
	if (netdir_getbyaddr( nconf, &hs, &taddr->addr ) == ND_OK)
		printf("%s", hs->h_hostservs->h_host);
	else {
		char *uaddr = taddr2uaddr(nconf, &taddr->addr);
		if (uaddr) {
			printf("%s\n", uaddr);
			(void) free(uaddr);
		} else
			printf("unknown");
	}
	return (0);	/* keep waiting for more replies */
}

netbufeq(a, b)
	struct netbuf *a, *b;
{
	return(a->len == b->len && !memcmp( a->buf, b->buf, a->len));
}