ONC+ Developer's Guide     Oracle Solaris 11 Information Library


Standard Interfaces

Interfaces to standard levels of the RPC package provide increasing control over RPC communications. Programs that use this control are more complex. Effective programming at these lower levels requires more knowledge of computer network fundamentals. The top, intermediate, expert, and bottom levels are part of the standard interfaces.

This section describes how to control RPC details by using lower levels of the RPC library. For example, you can select the transport protocol directly, which at the simplified interface level is possible only through the NETPATH environment variable. You should be familiar with the Transport Layer Interface (TLI) before you use these routines.

The routines shown below cannot be used through the simplified interface because they require a transport handle. For example, there is no way to allocate and free memory while serializing or deserializing with XDR routines at the simplified interface.

clnt_call()
clnt_destroy()
clnt_control()
clnt_perrno()
clnt_pcreateerror()
clnt_perror()
svc_destroy()

Top-Level Interface

At the top level, the application can specify the type of transport to use but not the specific transport. This level differs from the simplified interface in that the application creates its own transport handles in both the client and server.

Client Side of the Top-Level Interface

Assume the header file in the following code example.

Example 4-7 time_prot.h Header File

/* time_prot.h */

#include <rpc/rpc.h>
#include <rpc/types.h>

struct timev {
    int second;
    int minute;
    int hour;
};
typedef struct timev timev;
bool_t xdr_timev();

#define TIME_PROG 0x40000001
#define TIME_VERS 1
#define TIME_GET 1

The following example shows the client side of a trivial date service using top-level service routines. The transport type is specified as an invocation argument of the program.

Example 4-8 Client for Trivial Date Service

#include <stdio.h>
#include "time_prot.h"
 
#define TOTAL (30)
/*
 * Caller of trivial date service
 * usage: calltime hostname
 */
main(argc, argv)
    int argc;
    char *argv[];
{
    struct timeval time_out;
    CLIENT *client;
    enum clnt_stat stat;
    struct timev timev;
    char *nettype;
 
    if (argc != 2 && argc != 3) {
        fprintf(stderr, "usage: %s host [nettype]\n", argv[0]);
        exit(1);
    }
    if (argc == 2)
        nettype = "netpath";        /* Default */    
    else
        nettype = argv[2];
    client = clnt_create(argv[1], TIME_PROG, TIME_VERS, nettype);
    if (client == (CLIENT *) NULL) {
        clnt_pcreateerror("Couldn't create client");
        exit(1);
    }
    time_out.tv_sec = TOTAL;
    time_out.tv_usec = 0;
    stat = clnt_call( client, TIME_GET, 
        xdr_void, (caddr_t)NULL,
        xdr_timev, (caddr_t)&timev,
        time_out);
    if (stat != RPC_SUCCESS) {
        clnt_perror(client, "Call failed");
        exit(1);
    }
    fprintf(stderr, "%s: %02d:%02d:%02d GMT\n",
        nettype, timev.hour, timev.minute,
        timev.second);
    (void) clnt_destroy(client);
    exit(0);    
}

If nettype is not specified in the invocation of the program, the string netpath is substituted. When RPC library routines encounter this string, the value of the NETPATH environment variable governs transport selection.

If the client handle cannot be created, display the reason for the failure with clnt_pcreateerror(). You can also get the error status by reading the contents of the global variable rpc_createerr.

After the client handle is created, clnt_call() is used to make the remote call. Its arguments are the remote procedure number, an XDR filter for the input argument, the argument pointer, an XDR filter for the result, the result pointer, and the time-out period of the call. The program has no arguments, so xdr_void() is specified. Clean up by calling clnt_destroy().

To bound the time allowed for client handle creation in the previous example to 30 seconds, replace the call to clnt_create() with a call to clnt_create_timed() as shown in the following code segment:

struct timeval timeout;
timeout.tv_sec = 30;        /* 30 seconds */
timeout.tv_usec = 0;

client = clnt_create_timed(argv[1], TIME_PROG, TIME_VERS, nettype,
                           &timeout);

The following example shows a top-level implementation of a server for the trivial date service.

Example 4-9 Server for Trivial Date Service

#include <stdio.h>
#include <rpc/rpc.h>
#include "time_prot.h"
 
static void time_prog();
 
main(argc,argv)
    int argc;
    char *argv[];
{
    int transpnum;
    char *nettype;
 
    if (argc > 2) {
        fprintf(stderr, "usage: %s [nettype]\n", argv[0]);
        exit(1);
    }
    if (argc == 2)
        nettype = argv[1];
    else
        nettype = "netpath";        /* Default */
    transpnum = svc_create(time_prog,TIME_PROG,TIME_VERS,nettype);
    if (transpnum == 0) {
        fprintf(stderr, "%s: cannot create %s service.\n",
                    argv[0], nettype);
        exit(1);
    }
    svc_run();
}
 
/*
 * The server dispatch function
 */
static void
time_prog(rqstp, transp)
    struct svc_req *rqstp;
    SVCXPRT *transp;
{
    struct timev rslt;
    time_t thetime;
 
    switch(rqstp->rq_proc) {
        case NULLPROC:
            svc_sendreply(transp, xdr_void, NULL);
            return;
        case TIME_GET:
            break;
        default:
            svcerr_noproc(transp);
            return;
        }
    thetime = time((time_t *) 0);
    rslt.second = thetime % 60;
    thetime /= 60;
    rslt.minute = thetime % 60;
    thetime /= 60;
    rslt.hour = thetime % 24;
    if (!svc_sendreply( transp, xdr_timev, &rslt)) {
        svcerr_systemerr(transp);
        }
}

svc_create() returns the number of transports on which it created server handles. time_prog() is the service function called by svc_run() when a request specifies its program and version numbers. The server returns the results to the client through svc_sendreply().

When you use rpcgen to generate the dispatch function, svc_sendreply() is called only after the actual procedure returns, so rslt would have to be declared static in that procedure. In this example, svc_sendreply() is called from inside the dispatch function itself, so rslt need not be static.

In this example, the remote procedure takes no arguments. When arguments must be passed, the calls listed below fetch, deserialize (XDR decode), and free the arguments.

svc_getargs( SVCXPRT_handle, XDR_filter, argument_pointer );
svc_freeargs( SVCXPRT_handle, XDR_filter, argument_pointer );

Intermediate-Level Interface

At the intermediate level, the application directly chooses the transport to use.

Client Side of the Intermediate-Level Interface

The following example shows the client side of the time service from Top-Level Interface, written at the intermediate level of RPC. In this example, the user must name the transport over which the call is made on the command line.

Example 4-10 Client for Time Service, Intermediate Level

#include <stdio.h>
#include <rpc/rpc.h>
#include <netconfig.h>        /* For netconfig structure */
#include "time_prot.h"
 
#define TOTAL (30)
 
main(argc,argv)
    int argc;
    char *argv[];
{
    CLIENT *client;
    struct netconfig *nconf;
    char *netid;
    /* Declarations from previous example */
 
    if (argc != 3) {
        fprintf(stderr, "usage: %s host netid\n", argv[0]);
        exit(1);
    }
    netid = argv[2];
    if ((nconf = getnetconfigent( netid)) ==
        (struct netconfig *) NULL) {
        fprintf(stderr, "Bad netid type: %s\n", netid);
        exit(1);
    }
    client = clnt_tp_create(argv[1], TIME_PROG,
                                        TIME_VERS, nconf);
    if (client == (CLIENT *) NULL) {
        clnt_pcreateerror("Could not create client");
        exit(1);
    }
    freenetconfigent(nconf);
 
    /* Same as previous example after this point */
}

In this example, the netconfig structure is obtained by a call to getnetconfigent(netid). See the getnetconfig(3NSL) man page and Programming Interfaces Guide for more details. At this level, the program explicitly selects the network.

To bound the time allowed for client handle creation in the previous example to 30 seconds, replace the call to clnt_tp_create() with a call to clnt_tp_create_timed() as shown in the following code segment:

 struct timeval timeout;
 timeout.tv_sec = 30; /* 30 seconds */
 timeout.tv_usec = 0;

 client = clnt_tp_create_timed(argv[1], 
                TIME_PROG, TIME_VERS, nconf,
                &timeout);

Server Side of the Intermediate-Level Interface

The following example shows the corresponding server. The command line that starts the service must specify the transport over which the service is provided.

Example 4-11 Server for Time Service, Intermediate Level

/*
 * This program supplies Greenwich mean
 * time to the client that invokes it.
 * The call format is: server netid
 */
#include <stdio.h>
#include <rpc/rpc.h>

#include <netconfig.h>    /* For netconfig structure */
#include "time_prot.h"
 
static void time_prog();
 
main(argc, argv)
    int argc;
    char *argv[];
{
    SVCXPRT *transp;
    struct netconfig *nconf;
 
    if (argc != 2) {
        fprintf(stderr, "usage: %s netid\n", argv[0]);
        exit(1);
    }
    if ((nconf = getnetconfigent( argv[1])) ==
                                      (struct netconfig *) NULL) {
        fprintf(stderr, "Could not find info on %s\n", argv[1]);
        exit(1);
    }
    transp = svc_tp_create(time_prog, TIME_PROG,
                                        TIME_VERS, nconf);
    if (transp == (SVCXPRT *) NULL) {
        fprintf(stderr, "%s: cannot create %s service\n",
                        argv[0], argv[1]);
        exit(1);
    }
    freenetconfigent(nconf);
    svc_run();
}
 
static void
time_prog(rqstp, transp)
    struct svc_req *rqstp;
    SVCXPRT *transp;
{
    /* Code identical to the top-level version */
}

Expert-Level Interface

At the expert level, network selection is done in the same way as at the intermediate level. The only difference is the increased control that the application has over the details of the CLIENT and SVCXPRT handles. These examples illustrate this control, which is exercised using the clnt_tli_create() and svc_tli_create() routines. For more information on TLI, see Programming Interfaces Guide.

Client Side of the Expert-Level Interface

Example 4-12 shows a version of clntudp_create(), the client creation routine for the UDP transport, built on clnt_tli_create(). The example shows how to do network selection based on the family of the transport you choose. clnt_tli_create() creates the client handle from the file descriptor, netconfig entry, and server address that the routine assembles.

Example 4-12 Client for RPC Lower Level

#include <stdio.h>
#include <rpc/rpc.h>
#include <netconfig.h>
#include <netinet/in.h>
/*
 * In earlier implementations of RPC,
 * only TCP/IP and UDP/IP were supported.
 * This version of clntudp_create()
 * is based on TLI/Streams.
 */
CLIENT *
clntudp_create(raddr, prog, vers, wait, sockp)
    struct sockaddr_in *raddr;        /* Remote address */
    rpcprog_t prog;                /* Program number */
    rpcvers_t vers;                /* Version number */
    struct timeval wait;            /* Time to wait */
    int *sockp;                                /* fd pointer */
{
    CLIENT *cl;                                /* Client handle */
    int madefd = FALSE;            /* Is fd opened here */
    int fd = *sockp;            /* TLI fd */
    struct t_bind *tbind;            /* bind address */
    struct netconfig *nconf;        /* netconfig structure */
    struct t_info tinfo;            /* transport info from t_open() */
    void *handlep;
 
    if ((handlep = setnetconfig() ) == (void *) NULL) {
        /* Error starting network configuration */
        rpc_createerr.cf_stat = RPC_UNKNOWNPROTO;
        return((CLIENT *) NULL);
    }
    /*
     * Try all the transports until it gets one that is
     * connectionless, family is INET, and preferred name is UDP
     */
    while (nconf = getnetconfig( handlep)) {
        if ((nconf->nc_semantics == NC_TPI_CLTS) &&
            (strcmp( nconf->nc_protofmly, NC_INET ) == 0) &&
            (strcmp( nconf->nc_proto, NC_UDP ) == 0))
            break;
    }
    if (nconf == (struct netconfig *) NULL) {
        rpc_createerr.cf_stat = RPC_UNKNOWNPROTO;
        goto err;
    }
    if (fd == RPC_ANYFD) {
        fd = t_open(nconf->nc_device, O_RDWR, &tinfo);
        if (fd == -1) {
            rpc_createerr.cf_stat = RPC_SYSTEMERROR;
            goto err;
        }
        madefd = TRUE;            /* fd was opened here */
    } else
        (void) t_getinfo(fd, &tinfo);
    if (raddr->sin_port == 0) { /* remote addr unknown */
        u_short sport;
        /*
         * rpcb_getport() is a user-provided routine that calls
         * rpcb_getaddr and translates the netbuf address to port
         * number in host byte order.
         */
        sport = rpcb_getport(raddr, prog, vers, nconf);
        if (sport == 0) {
            rpc_createerr.cf_stat = RPC_PROGUNAVAIL;
            goto err;
        }
        raddr->sin_port = htons(sport);
    }
    /* Transform sockaddr_in to netbuf */
    tbind = (struct t_bind *) t_alloc(fd, T_BIND, T_ADDR);
    if (tbind == (struct t_bind *) NULL) {
        rpc_createerr.cf_stat = RPC_SYSTEMERROR;
        goto err;
    }
    if (tbind->addr.maxlen < sizeof( struct sockaddr_in))
        goto err;
    (void) memcpy( tbind->addr.buf, (char *)raddr,
                   sizeof(struct sockaddr_in));
    tbind->addr.len = sizeof(struct sockaddr_in);
    /* Bind fd */
    if (t_bind( fd, NULL, NULL) == -1) {
        rpc_createerr.cf_stat = RPC_TLIERROR;
        goto err;
    }
    cl = clnt_tli_create(fd, nconf, &(tbind->addr), prog, vers,
                          tinfo.tsdu, tinfo.tsdu);
    /* Close the netconfig file */
    (void) endnetconfig( handlep);
    (void) t_free((char *) tbind, T_BIND);
    if (cl) {
        *sockp = fd;
        if (madefd == TRUE) {
            /* fd should be closed while destroying the handle */
            (void)clnt_control(cl,CLSET_FD_CLOSE, (char *)NULL);
        }
        /* Set the retry time */
        (void) clnt_control( cl, CLSET_RETRY_TIMEOUT,
                             (char *) &wait);
        return(cl);
    }
err:
    if (madefd == TRUE)
        (void) t_close(fd);
    (void) endnetconfig(handlep);
    return((CLIENT *) NULL);
}

The network is selected using setnetconfig(), getnetconfig(), and endnetconfig(). endnetconfig() is not called until after the call to clnt_tli_create(), near the end of the example.

clntudp_create() can be passed an open TLI fd. If passed none (fd == RPC_ANYFD), clntudp_create() opens its own using the netconfig structure for UDP to find the name of the device to pass to t_open().

If the remote address is not known (raddr->sin_port == 0), it is obtained from the remote rpcbind.

After the client handle has been created, you can modify it using calls to clnt_control(). If clntudp_create() opened the file descriptor itself, it sets CLSET_FD_CLOSE so that the RPC library closes the descriptor when the handle is destroyed by clnt_destroy(). The routine then sets the retry timeout period.

Server Side of the Expert-Level Interface

Example 4-13 shows the server side of Example 4-12. It is called svcudp_create(). The server side uses svc_tli_create().

svc_tli_create() is used when the application needs a fine degree of control, for example to register the service with rpcbind by calling rpcb_set().

Example 4-13 Server for RPC Lower Level

#include <stdio.h>
#include <rpc/rpc.h>
#include <netconfig.h>
#include <netinet/in.h>
 
SVCXPRT *
svcudp_create(fd)
    register int fd;
{
    struct netconfig *nconf;
    SVCXPRT *svc;
    int madefd = FALSE;
    int port;
    void *handlep;
    struct  t_info tinfo;
 
    /* If no transports available */
    if ((handlep = setnetconfig() ) == (void *) NULL) {
        nc_perror("server");
        return((SVCXPRT *) NULL);
    }
    /*
     * Try all the transports until one is found that is
     * connectionless, with family INET and name UDP
     */
    while (nconf = getnetconfig( handlep)) {
        if ((nconf->nc_semantics == NC_TPI_CLTS) &&
            (strcmp( nconf->nc_protofmly, NC_INET) == 0 )&&
            (strcmp( nconf->nc_proto, NC_UDP) == 0 ))
            break;
    }
    if (nconf == (struct netconfig *) NULL) {
        endnetconfig(handlep);
        return((SVCXPRT *) NULL);
    }
    if (fd == RPC_ANYFD) {
        fd = t_open(nconf->nc_device, O_RDWR, &tinfo);
        if (fd == -1) {
            (void) endnetconfig(handlep);
            return((SVCXPRT *) NULL);
        }
        madefd = TRUE;
    } else
        t_getinfo(fd, &tinfo);
    svc = svc_tli_create(fd, nconf, (struct t_bind *) NULL,
                          tinfo.tsdu, tinfo.tsdu);
    (void) endnetconfig(handlep);
    if (svc == (SVCXPRT *) NULL) {
        if (madefd)
            (void) t_close(fd);
        return((SVCXPRT *)NULL);
    }
    return (svc);
}

Network selection here is accomplished in the same way as in clntudp_create(). The file descriptor is not bound explicitly to a transport address because svc_tli_create() does that binding.

svcudp_create() can use an open fd. It opens one itself using the selected netconfig structure if none is provided.

Bottom-Level Interface

The bottom-level interface to RPC enables the application to control all options. clnt_tli_create() and the other expert-level RPC interface routines are based on these routines. You rarely use these low-level routines.

Bottom-level routines handle the internal data structures, buffer management, RPC headers, and so on. Callers of these routines, such as the expert-level routine clnt_tli_create(), must initialize the cl_netid and cl_tp fields in the client handle. For a created handle, cl_netid is the network identifier (for example, udp) of the transport and cl_tp is the device name of that transport (for example, /dev/udp). The routines clnt_dg_create() and clnt_vc_create() set the cl_ops and cl_private fields.

Client Side of the Bottom-Level Interface

The following code example shows calls to clnt_vc_create() and clnt_dg_create().

Example 4-14 Client for Bottom Level

/*
 * variables are:
 * cl: CLIENT *
 * tinfo: struct t_info returned from either t_open or t_getinfo
 * svcaddr: struct netbuf *
 */
    switch(tinfo.servtype) {
        case T_COTS:
        case T_COTS_ORD:
            cl = clnt_vc_create(fd, svcaddr,
             prog, vers, sendsz, recvsz);
            break;
        case T_CLTS:
            cl = clnt_dg_create(fd, svcaddr,
             prog, vers, sendsz, recvsz);
            break;
        default:
            goto err;
    }

These routines require that the file descriptor be open and bound. svcaddr is the address of the server.

Server Side of the Bottom-Level Interface

The following code example is an example of creating a bottom-level server.

Example 4-15 Server for Bottom Level

/*
 * variables are:
 * xprt: SVCXPRT *
 */
switch(tinfo.servtype) {
    case T_COTS_ORD:
    case T_COTS:
        xprt = svc_vc_create(fd, sendsz, recvsz);
        break;
    case T_CLTS:
        xprt = svc_dg_create(fd, sendsz, recvsz);
        break;
    default:
        goto err;
}

Server Caching

svc_dg_enablecache() initiates service caching for datagram transports. Caching should be used only when a server procedure is a “once only” kind of operation, that is, when executing it multiple times would yield different results. With the cache enabled, a retransmitted request is answered from the cached reply rather than by re-executing the procedure.

svc_dg_enablecache(xprt, cache_size)
    SVCXPRT *xprt;
    unsigned int cache_size;

This function allocates a duplicate request cache for the service endpoint xprt, large enough to hold cache_size entries. A duplicate request cache is needed if the service contains procedures with varying results. After caching is enabled, it cannot be disabled.

Low-Level Data Structures

The following data structure information is for reference only. The implementation might change.

The first structure is the client RPC handle, defined in <rpc/clnt.h>. Low-level implementations must provide and initialize one handle per connection, as shown in the following code example.

Example 4-16 RPC Client Handle Structure

typedef struct {
    AUTH *cl_auth;                                /* authenticator */
    struct clnt_ops {
        enum clnt_stat    (*cl_call)();      /* call remote procedure */
        void        (*cl_abort)();      /* abort a call */
        void        (*cl_geterr)();      /* get specific error code */
        bool_t        (*cl_freeres)();  /* frees results */
        void        (*cl_destroy)();  /* destroy this structure */
        bool_t        (*cl_control)();  /* the ioctl() of rpc */
    } *cl_ops;
    caddr_t        cl_private;      /* private stuff */
    char            *cl_netid;      /* network token */
    char            *cl_tp;          /* device name */
} CLIENT;

The first field of the client-side handle is an authentication structure, defined in <rpc/auth.h>. By default, this field is set to AUTH_NONE. A client program must initialize cl_auth appropriately, as shown in the following code example.

Example 4-17 Client Authentication Handle

typedef struct {
    struct        opaque_auth  ah_cred;
    struct        opaque_auth  ah_verf;
    union        des_block    ah_key;
    struct auth_ops {
        void    (*ah_nextverf)();
        int    (*ah_marshal)();   /* nextverf & serialize */
        int    (*ah_validate)();  /* validate verifier */
        int    (*ah_refresh)();   /* refresh credentials */
        void    (*ah_destroy)();   /* destroy this structure */
    } *ah_ops;
    caddr_t ah_private;
} AUTH;

In the AUTH structure, ah_cred contains the caller's credentials, and ah_verf contains the data to verify the credentials. See Authentication for details.

The following code example shows the server transport handle.

Example 4-18 Server Transport Handle

typedef struct {
    int        xp_fd;
#define xp_sock        xp_fd
    u_short xp_port;        /* associated port number; obsolete */
    struct xp_ops {
        bool_t    (*xp_recv)();        /* receive incoming requests */
        enum xprt_stat (*xp_stat)();    /* get transport status */
        bool_t    (*xp_getargs)();    /* get arguments */
        bool_t    (*xp_reply)();        /* send reply */
        bool_t    (*xp_freeargs)();    /* free mem alloc for args */
        void    (*xp_destroy)();    /* destroy this struct */
    } *xp_ops;
    int        xp_addrlen;        /* length of remote addr. Obsolete */
    char        *xp_tp;            /* transport provider device name */
    char        *xp_netid;        /* network token */
    struct netbuf   xp_ltaddr;        /* local transport address */
    struct netbuf   xp_rtaddr;        /* remote transport address */
    char        xp_raddr[16];    /* remote address; obsolete */
    struct opaque_auth xp_verf;    /* raw response verifier */
    caddr_t        xp_p1;        /* private: for use by svc ops */
    caddr_t        xp_p2;        /* private: for use by svc ops */
    caddr_t        xp_p3;        /* private: for use by svc lib */
} SVCXPRT;

The following list describes the fields of the server transport handle.

xp_fd
The file descriptor associated with the handle. Two or more server handles can share the same file descriptor.

xp_netid
The network identifier (for example, udp) of the transport on which the handle is created. xp_tp is the device name associated with that transport.

xp_ltaddr
The server's own bind address.

xp_rtaddr
The address of the remote caller, which can therefore change from call to call.

The xp_netid, xp_tp, and xp_ltaddr fields are initialized by svc_tli_create() and other expert-level routines.

The rest of the fields are initialized by the bottom-level server routines svc_dg_create() and svc_vc_create().

For connection-oriented endpoints, the following fields are not valid until a connection has been requested and accepted for the server:

xp_fd
xp_ops
xp_p1
xp_p2
xp_verf
xp_tp
xp_ltaddr
xp_rtaddr
xp_netid