ONC+ Developer's Guide

Porting From TS-RPC to TI-RPC

The transport-independent RPC (TI-RPC) routines give the developer stratified levels of access to the transport layer. The highest-level routines provide complete abstraction from the transport and thus true transport independence. Lower levels provide access similar to the TS-RPC of previous releases.

This section is an informal guide to porting transport-specific RPC (TS-RPC) applications to TI-RPC. Table 4-11 shows the differences between selected routines and their counterparts. For information on porting issues concerning sockets and transport layer interface (TLI), see the Transport Interfaces Programming Guide.

Porting an Application

An application based on either TCP or UDP can run in binary-compatibility mode. For some applications, you need only recompile and relink all source files. This can be true of applications that use simple RPC calls and no socket-level or TCP- or UDP-specific features.

Some editing and new code may be needed if an application depends on socket semantics or on features specific to TCP or UDP. Examples are applications that use the format of host addresses or that rely on the Berkeley UNIX concept of privileged ports.

Applications that depend on the internals of the library or the socket implementation, or that depend on specific transport addressing, probably require more effort to port and might need substantial modification.

Benefits of Porting

The chief benefit of porting is transport independence: the ported application can run over any transport configured on the system, not just TCP and UDP.

IPv6 Considerations for RPC

IPv6 is the successor of IPv4, the most widely used network-layer protocol in today's Internet technology. IPv6 is also known as IP next generation (IPng). For more information, see System Administration Guide, Volume 3.

Both IPv4 and IPv6 are available to users. An application chooses which "stack" to use. On either stack, it can use TCP, a connection-oriented transport service (COTS), or UDP, a connectionless transport service (CLTS).

The following figure illustrates a typical RPC application running over an IPv4 or IPv6 protocol stack.

Figure 4-4 RPC Applications


IPv6 is supported only for TI-RPC applications; TS-RPC does not currently support IPv6. Transport selection in TI-RPC is governed either by the NETPATH environment variable or by /etc/netconfig. The selection of TCP or UDP, and of IPv4 or IPv6, depends on the order in which the corresponding entries appear in /etc/netconfig. Two new entries are associated with IPv6 in /etc/netconfig, and by default they are the first two entries of the file. TI-RPC first tries IPv6; failing that, it falls back to IPv4. This fallback requires no change in the RPC application itself, provided the application has no knowledge of the transport and is written using the top-level interface:

clnt_create()  

svc_create() 

clnt_call() 

clnt_create_timed() 

These interfaces choose IPv6 automatically if the IPv6 entries appear first in /etc/netconfig.
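For illustration, the default ordering can be pictured with a /etc/netconfig fragment along these lines (the field layout is the standard netconfig format, but exact flags and device paths vary by release, so treat this as a sketch rather than the shipped file):

```
# netid  semantics     flags  family  proto  device     nametoaddr_libs
tcp6     tpi_cots_ord  v      inet6   tcp    /dev/tcp6  -
udp6     tpi_clts      v      inet6   udp    /dev/udp6  -
tcp      tpi_cots_ord  v      inet    tcp    /dev/tcp   -
udp      tpi_clts      v      inet    udp    /dev/udp   -
```

With this ordering, the top-level interfaces try the tcp6 and udp6 transports before falling back to tcp and udp.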

An IPv6-enabled application uses only RPCBIND protocol versions 3 and 4 to locate services.

clnt_tli_create()  

svc_tli_create() 

clnt_dg_create() 

svc_dg_create() 

clnt_vc_create() 

svc_vc_create() 

If the application uses one of the above interfaces, it might be necessary to port the code.

Porting Issues

libnsl Library

libc no longer includes networking functions. libnsl must be explicitly specified at compile time (for example, cc app.c -lnsl) to link the network services routines.

Old Interfaces

Many old interfaces are supported in the libnsl library, but they work only with TCP or UDP transports. To take advantage of new transports, you must use the new interfaces.

Name-to-Address Mapping

Transport independence requires opaque addressing. This has implications for applications that interpret addresses.
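In practice, opaque addressing means a TI-RPC address is a variable-length byte buffer (a netbuf) rather than a sockaddr_in, so portable code compares and copies addresses bytewise instead of interpreting fields; the document's own netbufeq() in Example 4-50 works this way. A minimal sketch follows. The struct definition mirrors the TLI/XTI netbuf normally obtained from a system header, and the helper names netbuf_equal and netbuf_copy are hypothetical, chosen for illustration:

```c
#include <stdlib.h>
#include <string.h>

/* Mirrors the TLI/XTI netbuf: an opaque, variable-length address. */
struct netbuf {
	unsigned int maxlen;	/* bytes allocated for buf */
	unsigned int len;	/* bytes actually used */
	char *buf;		/* opaque address bytes */
};

/* Compare two opaque addresses without interpreting their contents. */
int
netbuf_equal(const struct netbuf *a, const struct netbuf *b)
{
	return (a->len == b->len && memcmp(a->buf, b->buf, a->len) == 0);
}

/* Duplicate an opaque address, e.g. to cache a caller's address. */
int
netbuf_copy(struct netbuf *dst, const struct netbuf *src)
{
	dst->buf = malloc(src->len);
	if (dst->buf == NULL)
		return (-1);
	memcpy(dst->buf, src->buf, src->len);
	dst->maxlen = dst->len = src->len;
	return (0);
}
```

Code written this way keeps working when the transport, and therefore the address format, changes underneath it.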

Differences Between TI-RPC and TS-RPC

The major differences between transport-independent RPC and transport-specific RPC are illustrated in Table 4-11. Also see section "Comparison Examples" for code examples comparing TS-RPC with TI-RPC.

Table 4-11 Differences Between TI-RPC and TS-RPC

Category 

TI-RPC 

TS-RPC 

Default Transport Selection 

TI-RPC uses the TLI interface. 

TS-RPC uses the socket interface. 

RPC Address Binding 

TI-RPC uses rpcbind for service binding. rpcbind keeps addresses in universal address format.

TS-RPC uses portmap for service binding.

 
 

Transport Information 

Transport information is kept in a local file, /etc/netconfig. Any transport identified in netconfig is accessible.

Only TCP and UDP transports are supported. 

 
 

Loopback Transports 

The rpcbind service requires a secure loopback transport for server registration.

TS-RPC services do not require a loopback transport. 

 
 

Host Name Resolution 

The order of host name resolution in TI-RPC depends on the order of dynamic libraries identified by entries in /etc/netconfig.

Host name resolution is done by name services. The order is set by the state of the hosts database. 

File Descriptors 

Descriptors are assumed to be TLI endpoints. 

Descriptors are assumed to be sockets. 

rpcgen

The TI-RPC rpcgen tool adds support for multiple input arguments, pass-by-value arguments, sample client files, and sample server files.

rpcgen in SunOS 4.1 and previous releases does not support the features listed for TI-RPC rpcgen.

Libraries 

TI-RPC requires that applications be linked to the libnsl library.

All TS-RPC functionality is provided in libc.

MT Support 

Multithreaded RPC clients and servers are supported. 

Multithreaded RPC is not supported. 

Function Compatibility Lists

The RPC library functions are listed in this section and grouped into functional areas. Each section includes lists of functions that are unchanged, have added functionality, and are new relative to previous releases.


Note -

Functions marked with an asterisk are retained for ease of porting and might not be supported in future releases of Solaris.


Creating Client Handles

The following functions are unchanged from the previous release and available in the current SunOS release:

clnt_destroy
clnt_pcreateerror
*clntraw_create
clnt_spcreateerror
*clnttcp_create
*clntudp_bufcreate
*clntudp_create
clnt_control

The following functions are new in this release:

clnt_create
clnt_create_timed
clnt_create_vers
clnt_dg_create
clnt_raw_create
clnt_tli_create
clnt_tp_create
clnt_tp_create_timed
clnt_vc_create

Creating and Destroying Services

The following functions are unchanged from the previous releases and available in the current SunOS release:

svc_destroy
svcfd_create
*svc_raw_create
*svc_tp_create
*svcudp_create
*svcudp_bufcreate

The following functions are new in this release:

svc_create
svc_dg_create
svc_fd_create
svc_raw_create
svc_tli_create
svc_tp_create
svc_vc_create

Registering and Unregistering Services

The following functions are unchanged from the previous releases and available in the current SunOS release:

*registerrpc
*svc_register
*svc_unregister
xprt_register
xprt_unregister

The following functions are new in this release:

rpc_reg
svc_reg
svc_unreg

SunOS 4.x Compatibility Calls

The following functions are unchanged from previous releases and available in the current SunOS release:

*callrpc
clnt_call
*svc_getcaller - works only with IP-based transports

The following functions are new in this release:

rpc_call
svc_getrpccaller

Broadcasting

The following call has the same functionality as in previous releases, although it is supported for backward compatibility only:

*clnt_broadcast

clnt_broadcast() can broadcast only to the portmap service. It does not support rpcbind.

The following function that broadcasts to both portmap and rpcbind is also available in the current release of SunOS:

rpc_broadcast

Address Management Functions

The TI-RPC library functions interface with either portmap or rpcbind. Because the two services differ, there are two sets of functions, one for each service.

The following functions work with portmap:

pmap_set
pmap_unset
pmap_getport
pmap_getmaps
pmap_rmtcall

The following functions work with rpcbind:

rpcb_set
rpcb_unset
rpcb_getaddr
rpcb_getmaps
rpcb_rmtcall
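The rpcb_* calls traffic in universal addresses, the string form in which rpcbind stores every address (see Table 4-11). For IP-based transports such as TCP and UDP, the universal address is the dotted IP address followed by two more dotted octets holding the port. The sketch below shows the conversion; the helper name ip_port_to_uaddr is hypothetical, and in a real TI-RPC program the library performs this conversion for you via taddr2uaddr():

```c
#include <stdio.h>

/*
 * Build the universal address string for an IP-based transport:
 * "h1.h2.h3.h4.p1.p2", where p1 and p2 are the high and low
 * octets of the port number.
 */
void
ip_port_to_uaddr(const unsigned char ip[4], unsigned short port,
    char *uaddr, size_t len)
{
	snprintf(uaddr, len, "%u.%u.%u.%u.%u.%u",
	    ip[0], ip[1], ip[2], ip[3],
	    (unsigned)(port >> 8), (unsigned)(port & 0xff));
}
```

For example, port 2049 on host 192.168.1.10 yields the universal address 192.168.1.10.8.1, since 2049 = 8 x 256 + 1.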

Authentication Functions

The following calls have the same functionality as in previous releases. They are supported for backward compatibility only:

authdes_create
authunix_create
authunix_create_default
authdes_seccreate
authsys_create
authsys_create_default

Other Functions

rpcbind provides a time service, used primarily for secure RPC client-server time synchronization and available through the rpcb_gettime() function. pmap_getport() and rpcb_getaddr() can be used to get the port number of a registered service. rpcb_getaddr() communicates with any server running version 2, 3, or 4 of rpcbind; pmap_getport() can communicate only with version 2.

Comparison Examples

The changes in client creation from TS-RPC to TI-RPC are illustrated in Example 4-47 and Example 4-48. Each example creates a client handle for the UDP transport.


Example 4-47 Client Creation in TS-RPC

	struct hostent *hp;
	struct sockaddr_in sin;
	int sock = RPC_ANYSOCK;
	u_short port;
	struct timeval wait;

	if ((hp = gethostbyname("host")) == (struct hostent *) NULL) {
		syslog(LOG_ERR, "gethostbyname failed");
		exit(1);
	}
	sin.sin_family = AF_INET;
	sin.sin_addr.s_addr = *(u_int *) hp->h_addr;
	if ((port = pmap_getport(&sin, PROGRAM, VERSION, IPPROTO_UDP)) == 0) {
		syslog(LOG_ERR, "pmap_getport failed");
		exit(1);
	} else
		sin.sin_port = htons(port);
	wait.tv_sec = 25;
	wait.tv_usec = 0;
	clntudp_create(&sin, PROGRAM, VERSION, wait, &sock);

The TI-RPC version assumes that the UDP transport has the netid udp. A netid is not necessarily a well-known name.


Example 4-48 Client Creation in TI-RPC

	CLIENT *cl;
	int fd;
	struct netconfig *nconf;
	struct netconfig *getnetconfigent();
	struct t_bind *tbind;
	struct timeval wait;

	nconf = getnetconfigent("udp");
	if (nconf == (struct netconfig *) NULL) {
		syslog(LOG_ERR, "getnetconfigent for udp failed");
		exit(1);
	}
	fd = t_open(nconf->nc_device, O_RDWR, (struct t_info *)NULL);
	if (fd == -1) {
		syslog(LOG_ERR, "t_open failed");
		exit(1);
	}
	tbind = (struct t_bind *) t_alloc(fd, T_BIND, T_ADDR);
	if (tbind == (struct t_bind *) NULL) {
		syslog(LOG_ERR, "t_alloc failed");
		exit(1);
	}
	if (rpcb_getaddr( PROGRAM, VERSION, nconf, &tbind->addr, "host")
								== FALSE) {
		syslog(LOG_ERR, "rpcb_getaddr failed");
		exit(1);
	}
	cl = clnt_tli_create(fd, nconf, &tbind->addr, PROGRAM, VERSION,
	                      0, 0);
	(void) t_free((char *) tbind, T_BIND);
	if (cl == (CLIENT *) NULL) {
		syslog(LOG_ERR, "clnt_tli_create failed");
		exit(1);
	}
	wait.tv_sec = 25;
	wait.tv_usec = 0;
	clnt_control(cl, CLSET_TIMEOUT, (char *) &wait);

Example 4-49 and Example 4-50 show the differences between broadcast in TS-RPC and TI-RPC. The older clnt_broadcast() is similar to the newer rpc_broadcast(). The primary difference is in the collectnames() function, which deletes duplicate addresses and displays the names of hosts that reply to the broadcast.


Example 4-49 Broadcast in TS-RPC

statstime sw;
extern int collectnames();

clnt_broadcast(RSTATPROG, RSTATVERS_TIME, RSTATPROC_STATS,         
    	xdr_void, NULL, xdr_statstime, &sw, collectnames);
	...
collectnames(resultsp, raddrp)
	char *resultsp;
	struct sockaddr_in *raddrp;
{
	u_int addr;
	struct entry *entryp, *lim;
	struct hostent *hp;
	extern struct entry entry[];
	extern int curentry;

	/* weed out duplicates */
	addr = raddrp->sin_addr.s_addr;
	lim = entry + curentry;
	for (entryp = entry; entryp < lim; entryp++)
		if (addr == entryp->addr)
			return (0);
	...
	/* print the host's name (if possible) or address */
	hp = gethostbyaddr((char *) &raddrp->sin_addr.s_addr,
	    sizeof (u_int), AF_INET);
	if (hp == (struct hostent *) NULL)
		printf("0x%x", addr);
	else
		printf("%s", hp->h_name);
	return (0);
}

Example 4-50 shows the same broadcast in TI-RPC:


Example 4-50 Broadcast in TI-RPC

statstime sw;
extern int collectnames();

rpc_broadcast(RSTATPROG, RSTATVERS_TIME, RSTATPROC_STATS,
    xdr_void, NULL, xdr_statstime, &sw, collectnames, (char *) 0);
	...

collectnames(resultsp, taddr, nconf)
	char *resultsp;
	struct t_bind *taddr;
	struct netconfig *nconf;
{
	struct entry *entryp, *lim;
	struct nd_hostservlist *hs;
	extern struct entry entry[];
	extern int curentry;
	extern int netbufeq();

	/* weed out duplicates */
	lim = entry + curentry;
	for (entryp = entry; entryp < lim; entryp++)
		if (netbufeq( &taddr->addr, entryp->addr))
			return (0);
	...
	/* print the host's name (if possible) or address */
	if (netdir_getbyaddr( nconf, &hs, &taddr->addr ) == ND_OK)
		printf("%s", hs->h_hostservs->h_host);
	else {
		char *uaddr = taddr2uaddr(nconf, &taddr->addr);
		if (uaddr) {
			printf("%s\n", uaddr);
			(void) free(uaddr);
		} else
			printf("unknown");
	}
	return (0);
}

netbufeq(a, b)
	struct netbuf *a, *b;
{
	return(a->len == b->len && !memcmp( a->buf, b->buf, a->len));
}