BEA Tuxedo Release 7.1


   Programming a BEA Tuxedo Application Using TxRPC

Interoperating with DCE/RPC

The BEA Tuxedo TxRPC compiler uses the same IDL interface as OSF/DCE but the generated stubs do not use the same protocol. Thus, a BEA Tuxedo TxRPC stub cannot directly communicate with a stub generated by the DCE IDL compiler.

However, several forms of interoperation between DCE/RPC and BEA Tuxedo TxRPC are possible.

The following sections show possible interactions between BEA Tuxedo TxRPC and OSF/DCE. In each case, the originator of the request is called the requester. This term is used instead of "client" because the requester could, in fact, be a DCE or BEA Tuxedo service making a request of another service. The terms "client" and "server" refer to the client and server stubs generated by the IDL compilers (either DCE idl(1) or BEA Tuxedo tidl(1)); these terms are used for consistency with the DCE and TxRPC terminology. Finally, the term "application service" is used for the application code that implements the procedure that is being called remotely (it is generally transparent whether the invoking software is the server stub generated by DCE or BEA Tuxedo).

BEA Tuxedo Requester to DCE Service via BEA Tuxedo Gateway


The first approach uses a "gateway": the BEA Tuxedo client stub invokes, via TxRPC, a BEA Tuxedo server stub that has a DCE client stub linked in (instead of the application services); the DCE client stub then invokes the DCE services via DCE RPC. The advantage of this approach is that it is not necessary to have DCE on the client platform. In fact, the set of machines running BEA Tuxedo and the set of machines running DCE could be disjoint except for one machine where all such gateways are running. This also provides a migration path with the ability to move services between BEA Tuxedo and DCE. A sample application that implements this approach is described in A DCE-Gateway Application.

In this configuration, the requester is built as a normal BEA Tuxedo client or server. Similarly, the server is built as a normal DCE server. The additional step is to build the gateway process which acts as a BEA Tuxedo server using a TxRPC server stub and a DCE client using a DCE/RPC client stub.

The process of running the two IDL compilers and linking the resultant files is simplified with the use of the blds_dce(1) command, which builds a BEA Tuxedo server with DCE linked in.

The usage for blds_dce is as follows.

blds_dce [-o output_file] [-i idl_options] [-f firstfiles] [-l lastfile] \
[idl_file . . . ]

The command takes as input one or more IDL files so that the gateway can handle one or more interfaces. For each one of these files, tidl is run to generate a server stub and idl is run to generate a client stub.

This command knows about various DCE environments and provides the necessary compilation flags and DCE libraries for compilation and linking. If you are developing in a new environment, it may be necessary to modify the command to add the options and libraries for your environment.

This command compiles the source files in such a way (with -DTMDCEGW defined) that memory allocation is always done using rpc_ss_allocate(3c) and rpc_ss_free(3c), as described in the BEA Tuxedo C Function Reference. This ensures that memory is freed on return from the BEA Tuxedo server. The use of -DTMDCEGW also includes DCE header files instead of BEA Tuxedo TxRPC header files.

The IDL output object files are compiled, optionally with specified application files (using the -f and -l options), to generate a BEA Tuxedo server using buildserver(1). The name of the executable server can be specified with the -o option.

When running this configuration, start the DCE server first in the background, then boot the BEA Tuxedo configuration (including the DCE gateway), and then run the requester. Note that the DCE gateway is single-threaded, so you must configure and boot as many gateway servers as the number of services you want executing concurrently.

There are several options to consider when building this gateway.

Setting the DCE Login Context

First, as a DCE client, the gateway process normally runs as some DCE principal. There are two approaches to getting a login context. One approach is to "log in" to DCE. In some environments, this occurs simply by virtue of logging into the operating system; in many others, it requires running dce_login. If the BEA Tuxedo server is booted on the local machine, you can run dce_login and then tmboot(1), and the booted server will inherit the login context. If the server is to be booted on a remote machine (which is done indirectly via tlisten(1)), you must run dce_login before starting tlisten. In each of these cases, all servers booted in the session are run by the same principal. The other drawback to this approach is that the credentials eventually expire.

The other alternative is to have the process set up and maintain its own login context. The tpsvrinit(3c) function provided for the server can set up the context and then start a thread that will refresh the login context before it expires. Sample code to do this is provided in $TUXDIR/lib/dceserver.c; it must be compiled with the -DTPSVRINIT option to generate a simple tpsvrinit() function. (It can also be used as the main() for a DCE server, as described in the following section.) This code is described in further detail in A DCE-Gateway Application.

Using DCE Binding Handles

BEA Tuxedo TxRPC does not support binding handles. When sending an RPC from the requester's client stub to the server stub within the gateway, the BEA Tuxedo system handles all of the name resolution and server selection, load balancing between the available servers. However, when going from the gateway to the DCE server, it is possible to use DCE binding. If this is done, it is recommended that either two versions of the IDL file be used in the same directory, or that two different directories be used: one to build the requester, the other to build the gateway and server. The former approach, using two different file names, is shown in the example, with the IDL file linked to a second name. In the initial IDL file, no binding handles or binding attributes are specified. The second IDL file, which is used to generate the gateway and DCE server, has an associated ACF file that specifies [explicit_handle], so that a binding handle is inserted as the first parameter of each operation. From the BEA Tuxedo server stub in the gateway, a NULL handle will be generated (because handles are not supported). That means that somewhere between the BEA Tuxedo server stub and the DCE client stub in the gateway, a valid binding handle must be generated.

This can be done by making use of the manager entry point vector. By default, the IDL compiler defines a structure type containing a function-pointer prototype for each operation in the interface, and defines and initializes a variable of that type with default function names based on the operation names. The declaration is

<INTERF>_v<major>_<minor>_epv_t <INTERF>_v<major>_<minor>_s_epv

where <INTERF> is the interface name and <major>.<minor> is the interface version. This variable is dereferenced when calling the server stub functions. The IDL compiler option, -no_mepv, inhibits the definition and initialization of this variable, allowing the application to provide it in cases where there is a conflict or difference in function names and operation names. In the case where an application wants to provide explicit or implicit binding instead of automatic binding, the -no_mepv option can be specified, and the application can provide a structure definition that points to functions taking the same parameters as the operations but different (or static) names. The functions can then create a valid binding handle that is passed, either explicitly or implicitly, to the DCE/RPC client stub functions (using the actual operation names).

This is shown in the example in A DCE-Gateway Application. The file dcebind.c generates the binding handle, and the entry point vector and associated functions are shown in dceepv.c.

Note that to specify the -no_mepv option when using blds_dce, you must specify -i -no_mepv so that the option is passed through to the IDL compiler. This is shown in the makefile, rpcsimp.mk, in A DCE-Gateway Application.

Authenticated RPC

Now that we have a login context and a handle, it is possible to use authenticated RPC calls. As part of setting up the binding handle, it is also possible to annotate the binding handle for authentication by calling rpc_binding_set_auth_info(), as described in the BEA Tuxedo C Function Reference. This is shown as part of generating the binding handle in dcebind.c in A DCE-Gateway Application. This sets up the authentication (and potentially encryption) between the gateway and the DCE server. If the requester is a BEA Tuxedo server, then it is guaranteed to be running as the BEA Tuxedo administrator. For more information about authentication for BEA Tuxedo clients, see Administering the BEA Tuxedo System.

Transactions

OSF/DCE does not support transactions. This means that if the gateway is running in a group with a resource manager and the RPC comes into the BEA Tuxedo client stub in transaction mode, the transaction will not carry over to the DCE server. There is not much you can do to solve this; just be aware of it.

DCE Requester to BEA Tuxedo Service Using BEA Tuxedo Gateway


In this configuration, the DCE requester uses a DCE client stub to invoke a DCE service, which calls the BEA Tuxedo client stub (instead of the application services), which in turn invokes the BEA Tuxedo service (via TxRPC). Note that here the client has complete control over the DCE binding and authentication. Because the application programmer builds the middle server, the application also controls the binding of the DCE server to the BEA Tuxedo service. This approach would be used in cases where the DCE requester does not want to directly link in and call the BEA Tuxedo system.

The main() for the DCE server should be based on the code provided in $TUXDIR/lib/dceserver.c. If you already have your own template for the main() of a DCE server, there are a few things that may need to be added or modified.

First, tpinit(3c) should be called to join the BEA Tuxedo application. If application security is configured, then additional information may be needed in the TPINIT buffer such as the user name and application password. Prior to exiting, tpterm(3c) should be called to cleanly terminate participation in the BEA Tuxedo application. If you look at dceserver.c, you will see that by compiling it with -DTCLIENT, code is included that calls tpinit and tpterm. The code that sets up the TPINIT buffer must be modified appropriately for your application. To provide more information with respect to administration, it might be helpful to indicate that the client is a DCE client in either the user or client name (the example sets the client name to DCECLIENT). This information shows up when printing client information from the administration interface.

Second, since the BEA Tuxedo system software is not thread-safe, the threading level passed to rpc_server_listen must be set to one. In the sample dceserver.c, the threading level is set to 1 if compiled with -DTCLIENT and to the default, rpc_c_listen_max_calls_default, otherwise. (For more information, refer to BEA Tuxedo C Function Reference.)

In this configuration, the requester is built as a normal DCE client or server. Similarly, the server is built as a normal BEA Tuxedo server. The additional step is to build the gateway process, which acts as a BEA Tuxedo client using a TxRPC client stub, and a DCE server, using a DCE/RPC server stub.

The process of running the two IDL compilers and linking the resultant files is simplified with the use of the bldc_dce(1) command which builds a BEA Tuxedo client with DCE linked in.

The usage for bldc_dce is as follows.

bldc_dce [-o output_file] [-w] [-i idl_options] [-f firstfiles] \
[-l lastfiles] [idl_file . . . ]

The command takes as input one or more IDL files so that the gateway can handle one or more interfaces. For each one of these files, tidl is run to generate a client stub and idl is run to generate a server stub.

This command knows about various DCE environments and provides the necessary compilation flags and DCE libraries. If you are developing in a new environment, it may be necessary to modify the command to add the options and libraries for your environment. The source is compiled in such a way (with -DTMDCEGW defined) that memory allocation is always done using rpc_ss_allocate and rpc_ss_free (described in BEA Tuxedo C Function Reference) to ensure that memory is freed on return. The use of -DTMDCEGW also includes DCE header files instead of BEA Tuxedo TxRPC header files.

The IDL output object files are compiled, optionally with specified application files (using the -f and -l options), to generate a BEA Tuxedo client using buildclient(1). Note that one of the files included should be the equivalent of dceserver.o, compiled with the -DTCLIENT option.

The name of the executable client can be specified with the -o option.

When running this configuration, the BEA Tuxedo configuration must be booted before starting the DCE server so that it can join the BEA Tuxedo application before listening for DCE requests.

BEA Tuxedo Requester to DCE Service Using DCE-only


This approach assumes that the DCE environment is directly available to the client (which can be a restriction or disadvantage in some configurations). The client program has direct control over the DCE binding and authentication. Note that this is presumably a mixed environment in which the requester is either a BEA Tuxedo service that calls DCE services, or a BEA Tuxedo client (or server) that calls both BEA Tuxedo and DCE services.

When compiling BEA Tuxedo TxRPC code that will be mixed with DCE code, the code must be compiled so that DCE header files are used instead of the TxRPC header files. This is done by defining -DTMDCE at compilation time, both for the client and server stub files and for your application code. If you are generating object files from tidl(1), you must add the -cc_opt -DTMDCE option to the command line. The alternative is to generate C source from the IDL compiler and pass this C source (not object files) to bldc_dce or blds_dce, as in the following examples.

tidl -keep c_source -server none t.idl
idl -keep c_source -server none dce.idl
bldc_dce -o output_file -f client.c -f t_cstub.c -f dce_cstub.c

or

blds_dce -o output_file -s service -f server.c -f t_cstub.c -f dce_cstub.c

In this example, we are not building a gateway process so .idl files cannot be specified to the build commands. Also note that the blds_dce command cannot figure out the service name associated with the server so it must be supplied on the command line using the -s option.

DCE Requester to BEA Tuxedo Service Using BEA Tuxedo-only


In this final case, the DCE requester calls the BEA Tuxedo client stub directly.

Again, -DTMDCE must be used at compilation time, both for client and server stub files and for your application code. In this case the requester must be a BEA Tuxedo client.

tidl -keep c_source -client none t.idl
bldc_dce -o output_file -f -DTCLIENT -f dceserver.c -f t_cstub.c

Note that dceserver.c should call tpinit(3c) to join the application and tpterm(3c) to leave the application, as was discussed earlier.

Building Mixed DCE/RPC and BEA Tuxedo TxRPC Clients and Servers

This section summarizes the rules to follow if you are compiling a mixed client or server without using the bldc_dce(1) or blds_dce(1) commands: