RPC servers are built and configured in much the same way as ATMI Request/Response servers. In fact, the service name space for RPC and Request/Response servers is the same; however, the names advertised for RPC services are different. For Request/Response servers, a service name is mapped to a procedure. For RPC servers, a service name is mapped to an IDL interface name. The RPC service advertised is <interface>v<major>_<minor>, where <interface> is the interface name, and <major> and <minor> are the major and minor numbers of the version, as specified (or defaulted to 0.0) in the interface definition. Because the service name is limited to 127 characters, the length of the interface name is limited to 125 characters minus the number of digits in the major and minor version numbers. This also implies that an exact match is used on major AND minor version numbers because of the way name serving is done in the Oracle Tuxedo system. Note that the interface, not the individual operations, is advertised (similar to DCE/RPC). The server stub automatically takes care of calling the correct operation within the interface.
RPC servers are built using the buildserver(1) command. We recommend using the -s option to specify the service (interface) names at compilation time. The server can then be booted with the -A option so that the services are advertised automatically. This approach is used in the sample application, as shown in the appendix "A Sample Application."
The buildserver(1) command automatically links in the Oracle Tuxedo libraries. However, the RPC run time must be linked in explicitly. This is done by specifying the -f -ltrpc option after any application files on the buildserver command line. Normally, the output of the tidl(1) command is a server stub object file, which can be passed directly to the buildserver command. Note that the server stub and the application source, object, and library files implementing the operations should be specified ahead of the run-time library, also using the -f option. For an example, see the makefile rpcsimp.mk in the appendix "A Sample Application."
A native RPC client is built using the buildclient(1) command. This command automatically links in the Oracle Tuxedo libraries. However, the RPC run time must be linked in explicitly. This is done by specifying the -f -ltrpc option after any application files on the buildclient command line. Generally, the output of the tidl(1) command is a client stub object file, which can be passed directly to the buildclient command. Note that the client stub and the application source, object, and library files executing the remote procedure calls should be specified ahead of the run-time library, also using the -f option. For an example, see the makefile rpcsimp.mk in the appendix "A Sample Application."
Compilation of the client stub for Windows requires the -D_TM_WIN definition as a compilation option. This ensures that the correct function prototypes for the TxRPC and Oracle Tuxedo ATMI run-time functions are used. While the client stub source is the same, it must be compiled specially to handle the fact that the text and data segments for the DLL will be different from those of the code calling it. The header file and stub are automatically generated to allow the declarations to be changed easily, using C preprocessor definitions. The definition _TMF (for "far") appears before all pointers in the header file, and _TMF is automatically defined as "_far" if _TM_WIN is defined.
blds_dce [-o output_file] [-i idl_options] [-f firstfiles] [-l lastfiles] [idl_file . . .]
This command compiles the source files in such a way (with -DTMDCEGW defined) that memory allocation is always done using rpc_ss_allocate(3c) and rpc_ss_free(3c), as described in the Oracle Tuxedo C Function Reference. This ensures that memory is freed on return from the Oracle Tuxedo ATMI server. The use of -DTMDCEGW also includes DCE header files instead of Oracle Tuxedo TxRPC header files.
Oracle Tuxedo TxRPC does not support binding handles. When sending an RPC from the requester's client stub to the server stub within the gateway, the Oracle Tuxedo system handles all of the name resolution and server selection, load-balancing between available servers. However, when going from the gateway to the DCE server, it is possible to use DCE binding. If this is done, it is recommended that you either use two versions of the IDL file in the same directory, or use two different directories: one to build the requester, and one to build the gateway and server. The former approach, using two different filenames, is shown in the example with the IDL file linked to a second name. In the initial IDL file, no binding handles or binding attributes are specified. The second IDL file, which is used to generate the gateway and the DCE server, has an associated ACF file that specifies [explicit_handle] so that a binding handle is inserted as the first parameter of each operation. From the Oracle Tuxedo server stub in the gateway, a NULL handle is generated (because handles are not supported). That means that somewhere between the Oracle Tuxedo ATMI server stub and the DCE client stub in the gateway, a valid binding handle must be generated.
<INTERF>_v<major>_<minor>_epv_t <INTERF>_v<major>_<minor>_s_epv

where <INTERF> is the interface name and <major>_<minor> is the interface version. This variable is dereferenced when calling the server stub functions. The IDL compiler option -no_mepv inhibits the definition and initialization of this variable, allowing the application to provide it in cases where there is a conflict or difference between function names and operation names. In the case where an application wants to provide explicit or implicit binding instead of automatic binding, the -no_mepv option can be specified, and the application can provide a structure definition that points to functions taking the same parameters as the operations but having different (or static) names. These functions can then create a valid binding handle that is passed, either explicitly or implicitly, to the DCE/RPC client stub functions (using the actual operation names).
The main() for the DCE server should be based on the code provided in $TUXDIR/lib/dceserver.c. If you already have your own template for the main() of a DCE server, there are a few things that may need to be added or modified. First, tpinit(3c) should be called to join the ATMI application. If application security is configured, additional information may be needed in the TPINIT buffer, such as the username and application password. Prior to exiting, tpterm(3c) should be called to cleanly terminate participation in the ATMI application. If you look at dceserver.c, you will see that compiling it with -DTCLIENT includes code that calls tpinit and tpterm. The code that sets up the TPINIT buffer must be modified appropriately for your application. To provide more information for administration, it might be helpful to indicate that the client is a DCE client in either the user name or the client name (the example sets the client name to DCECLIENT). This information shows up when client information is printed from the administration interface.
bldc_dce [-o output_file] [-w] [-i idl_options] [-f firstfiles] [-l lastfiles] [idl_file . . .]
Again, -DTMDCE must be used at compilation time, both for the client and server stub files and for your application code. In this case, the requester must be an Oracle Tuxedo ATMI client:
Note that dceserver.c should call tpinit(3c) to join the application and tpterm(3c) to leave the application, as discussed earlier.
cc <DCE options> -DTMDCE=1 -c -I. -I$(TUXDIR)/include -I/usr/include/dce simp_cstub.c
• If the server makes an RPC call, then set_client_alloc_free() should be called to set the use of rpc_ss_allocate() and rpc_ss_free(), as described earlier. (For more information, refer to the Oracle Tuxedo C Function Reference.)
Assume that simp_cstub.o was generated by tidl(1) and dce_cstub.o was generated by idl. The first example shows building the client without a DCE compiler shell; in this case, the DCE library (-ldce), threads library (-lpthreads), and re-entrant C library (-lc_r) must be explicitly specified. The second example shows the use of a DCE compiler shell, which transparently includes the necessary libraries. In some environments, the libraries included by buildserver and buildclient for networking and XDR conflict with the libraries included by the DCE compiler shell (there may be re-entrant versions of these libraries). In this case, the libraries used by buildserver(1) and buildclient(1) may be modified using the -d option. If a link problem occurs, try using -d " " to leave out the networking and XDR libraries, as shown in the example above. If the link still fails, try running the command without the -d option and with the -v option to determine the libraries that are used by default; then use the -d option to specify a subset of those libraries if there is more than one. The correct combination of libraries is environment-dependent because the networking, XDR, and DCE libraries vary from one environment to another.