Oracle Tuxedo on Exalogic Users Guide


Oracle Tuxedo/Oracle Exalogic
Users Guide

This chapter contains the following topics:

Overview
Tuxedo Optimizations on Exalogic
Oracle Tuxedo Configuration
Best Practices to Optimize Performance
Running Oracle Tuxedo
Running Oracle Tuxedo on OVM
Upgrade
Appendix

Overview

This section contains the following topics:

About this Guide
About Oracle Exalogic
About Oracle Tuxedo optimizations for Oracle Exalogic

About this Guide

This document describes the Oracle Tuxedo optimizations for Oracle Exalogic and explains how to install, configure, and run Oracle Tuxedo on Exalogic.

About Oracle Exalogic

Oracle Exalogic is an engineered system that integrates compute, networking, and storage hardware with virtualization, operating system, and management software. It provides breakthrough performance, reliability, availability, scalability, and investment protection for the widest possible range of business application workloads.

About Oracle Tuxedo optimizations for Oracle Exalogic

Starting with release 11.1.1.3.0, Oracle Tuxedo provides a number of optimizations for Oracle Exalogic platforms. Table 1 lists the supported features:

Table 1 Exalogic Supported Oracle Tuxedo Features

Feature Name                                            Oracle Tuxedo Version
Direct Cross Node Communication Leveraging RDMA         Oracle Tuxedo 11gR1 (11.1.1.3.0) or above
Direct Cross Domain Communication Leveraging RDMA       Oracle Tuxedo 12cR2 (12.1.3) or above
Self-tuning Lock Mechanism                              Oracle Tuxedo 11gR1 (11.1.1.3.0) or above
Oracle Tuxedo SDP Support                               Oracle Tuxedo 11gR1 (11.1.1.3.0) or above
Use of Shared Memory for Inter Process Communication    Oracle Tuxedo 12cR1 (12.1.1) or above
Read-Only Optimization for XA                           Oracle Tuxedo 12cR1 (12.1.1) or above
Shared Applications Staging                             Oracle Tuxedo 12cR1 (12.1.1) or above
Tightly Coupled Transaction Branches Crossing Domain    Oracle Tuxedo 12cR1 (12.1.1) or above
XA Affinity                                             Oracle Tuxedo 12cR2 (12.1.3) or above
Common XID                                              Oracle Tuxedo 12cR2 (12.1.3) or above
Single Group Multiple Branches (SGMB)                   Oracle Tuxedo 12cR2 (12.1.3) or above
Fast Application Notification (FAN)                     Oracle Tuxedo 12cR2 (12.1.3) or above

Note: From Oracle Tuxedo 12cR2 (12.1.3), all optimizations support both Exalogic Linux 64-bit and SPARC 64-bit, except for "Direct Cross Node Communication Leveraging RDMA" and "Direct Cross Domain Communication Leveraging RDMA".
Note: For more information about these features, please see "Tuxedo Optimizations on Exalogic".

Tuxedo Optimizations on Exalogic

Direct Cross Node Communication Leveraging RDMA

This feature, new in Tuxedo 11.1.1.3.0, can significantly improve the performance of Tuxedo applications running in MP mode.

In previous releases, messages between a local client and a remote server had to go through the BRIDGE. First the message was sent to the local BRIDGE through an IPC queue, next the local BRIDGE sent it to the remote BRIDGE through the network, then the remote BRIDGE put the message on the server's IPC queue, and finally the server retrieved the message from that queue. Under high concurrency the BRIDGE therefore becomes a bottleneck. By utilizing the RDMA capabilities of InfiniBand, Tuxedo 11.1.1.3.0 introduced "Direct Cross Node Communication Leveraging RDMA", which lets a local client transfer messages to a remote server directly.

For more information about configuration, see Oracle Tuxedo Configuration.

Direct Cross Domain Communication Leveraging RDMA

In previous releases, messages between a local domain and a remote domain had to go through the domain gateways (GWTDOMAIN). First the message was sent to the local GWTDOMAIN through an IPC queue, next the local GWTDOMAIN sent it to the remote GWTDOMAIN through the network, then the remote GWTDOMAIN put the message on the server's IPC queue, and finally the server retrieved the message from that queue. Under high concurrency the domain gateways therefore become a bottleneck. In this release, if Direct Cross Domain Communication Leveraging RDMA is enabled in the TUXCONFIG file, the local client and remote server can bypass the domain gateways and transfer messages directly.

For more information about configuration, see Oracle Tuxedo Configuration.

Self-Tuning Lock Mechanism

This feature, new in Tuxedo 11.1.1.3.0, adjusts the value of SPINCOUNT dynamically to make the best use of CPU cycles.

The Tuxedo bulletin board (BB) is a memory segment in which all the application configuration and dynamic processing information is held at run time. For some Tuxedo system operations (such as service name lookups and transactions), the BB must be locked for exclusive access: that is, it must be accessible by only one process. If a process or thread finds that the BB is locked by another process or thread, it retries, or spins on the lock, for SPINCOUNT number of times (the user level method, via spinning) before giving up and going to sleep on a waiting queue (the system level method, via a system semaphore). Because sleeping is a costly operation, it is efficient to do some amount of spinning before sleeping.

Because the value of the SPINCOUNT parameter is application- and system-dependent, the administrator otherwise has to tune SPINCOUNT to a proper value manually by observing the application throughput under different SPINCOUNT values.

The Self-Tuning Lock Mechanism performs this tuning automatically. It is designed to find a proper value of SPINCOUNT so that most requests to lock the BB are satisfied by spinning instead of sleeping on a waiting queue.

The Self-Tuning Lock Mechanism algorithm is improved in Tuxedo 12cR2 to make the tuning more accurate than before.

For more information about configuration, see Oracle Tuxedo Configuration.

Oracle Tuxedo SDP Support

One of the benefits of InfiniBand based network hardware is the ability to use the Sockets Direct Protocol (SDP). This protocol allows applications to communicate with each other through the normal socket interface while bypassing the TCP/IP network processing, including ordering, fragmentation, timeouts, and retries, because the InfiniBand hardware takes care of those concerns. SDP can also support zero-copy transfers, as the InfiniBand hardware is capable of transferring buffers directly from the caller's address space.

By utilizing SDP, Tuxedo applications can reduce the CPU consumed by networking operations and increase the overall throughput of network operations. SDP can be used on all Tuxedo network connections: BRIDGE-to-BRIDGE communication, the domain gateway (GWTDOMAIN) for communication with other Tuxedo domains, workstation and Jolt clients, and communication with WebLogic Server via the WebLogic Tuxedo Connector.

For more information about configuration, see Oracle Tuxedo Configuration.

Use of Shared Memory for Inter Process Communication

Oracle Tuxedo 12c significantly enhances the performance of Tuxedo applications on Exalogic by using shared memory queues instead of IPC message queues for inter-process communication on the same Tuxedo node. With shared memory queues, the sender and receiver processes exchange pre-allocated messages in shared memory, eliminating the need to copy a message several times before it reaches its intended target and resulting in much better throughput and lower latency.

For more information about configuration, see Oracle Tuxedo Configuration.

Read-Only Optimization for XA

This feature, new in Tuxedo 11.1.1.3.0, takes advantage of the resource manager's read-only optimization for XA. In a two-phase commit scenario, prepare requests are sent to all participating groups except one reserved group. If all transaction branches in those groups are read-only, Tuxedo performs a one-phase commit on the reserved group directly: one prepare request (to the reserved group) is saved and the TLOG write is skipped.

Transactions within or across domains are supported, including global transactions spanning a Tuxedo domain and WLS via WTC (in WLS 12.1.1, contact Oracle Support for a patch; or use a higher WLS release).

For more information about configuration, see Oracle Tuxedo Configuration.

Shared Applications Staging

With Oracle Tuxedo 12c, you can share the application directory (APPDIR) on the storage appliance among many compute nodes of an Exalogic system, making it easier to manage application deployment.

For more information about configuration, see Oracle Tuxedo Configuration.

Tightly Coupled Transaction Branches Crossing Domain

This is a new feature in Tuxedo 12.1.1.

In Tuxedo 11.1.1.3.0 or earlier, a transaction crossing domains is loosely coupled, even if its branches run on the same database, because different global transaction identifiers (GTRIDs) are used in different domains. Since Tuxedo 12.1.1, a common GTRID is used by default, so that the branches of a global transaction crossing domains share one GTRID. The branches are tightly coupled if they run on the same database (and the database allows it).

XA Affinity

XA affinity provides the ability to route all Oracle database requests within one global transaction to the same Oracle RAC instance when possible, regardless of whether the requests come from an Oracle Tuxedo application server or Oracle WebLogic Server. This feature reduces the cost of redirecting database requests to a new Oracle RAC instance, and thus can improve overall application performance.

For more information about configuration, see Oracle Tuxedo Configuration.

Common XID

In previous releases, each participating group in a global transaction has its own transaction branch, identified by a distinct transaction branch identifier (XID). If a global transaction involves multiple groups, Tuxedo performs two-phase commit across the branches, taking the first participating group as the coordinator.

With the common XID (transaction branch identifier) feature in this release, Tuxedo shares the XID of the coordinator group with all other groups in the same global transaction, as opposed to each group having its own XID and thus requiring two-phase commit when multiple groups participate.

Common XID eliminates the need for XA commit operations on groups that connect to the same Oracle RAC instance through the same service, because they use the coordinator branch directly.

In cases where all groups in a global transaction use the coordinator branch directly, the one-phase commit protocol is used instead of two-phase commit, which also avoids writing the TLOG.

For more information about configuration, see Oracle Tuxedo Configuration.

Single Group Multiple Branches (SGMB)

In previous releases, servers in the same participating group use the same transaction branch in a global transaction; if these servers connect to different instances of the same RAC database, the transaction branch may fail with the XA error XAER_AFFINITY, meaning that one branch cannot span different instances. For this reason, Tuxedo groups could only use singleton RAC services. A DTP service (one with the DTP option, -x in srvctl, specified) or a service offered by only one instance can be a singleton RAC service.

In this release, this feature eliminates the need for a singleton RAC service when multiple servers in a server group participate in the same global transaction. If servers in the same server group and the same global transaction connect to different RAC instances, a separate transaction branch is used for each instance. This enables such applications to load balance across the available RAC instances.

For more information about configuration, see Oracle Tuxedo Configuration.

Note: The transaction still fails if more than 16 instances are involved in a single group.

FAN Integration

Fast Application Notification (FAN) events are published by Oracle RAC to indicate configuration changes. A system server, TMFAN, is introduced to monitor FAN events and automatically reconfigure Tuxedo server connections to the appropriate Oracle RAC instance in response to planned DOWN events, UP events, LBA (Load Balancing Advisory) notifications, and so on.

For more information about configuration, see Oracle Tuxedo Configuration.

 


Oracle Tuxedo Configuration

This section introduces the basic Oracle Tuxedo feature configuration on Exalogic. For more information, see the Oracle Tuxedo 12c Release 2 (12.1.3) Release Notes and Setting Up an Oracle Tuxedo Application.

Direct Cross Node Communication Leveraging RDMA

The configuration for "Direct Cross Node Communication Leveraging RDMA" includes the following:

UBBCONFIG File

Direct Cross Node Communication Leveraging RDMA is only supported in MP mode. To enable this feature, you must specify EECS in OPTIONS; otherwise, messages go through the BRIDGE.

There is one attribute for Direct Cross Node Communication Leveraging RDMA in the *RESOURCES section.

EXALOGIC_SHARED_PATH

The directory name for Oracle Tuxedo file transfer. The function of EXALOGIC_SHARED_PATH here is the same as that of the environment variable EXALOGIC_SHARED_PATH; however, at Tuxedo run time the environment variable has higher priority. EXALOGIC_SHARED_PATH must be a shared directory with read/write permission for all Tuxedo nodes, and it can be specified in the *RESOURCES section only if RDMA is enabled.

There are five attributes for Direct Cross Node Communication Leveraging RDMA in the *MACHINES section.

RDMADAEMONIP

The IP address to which the Msgq_daemon is bound. It must be configured, and it must be an IPoIB address (not an Ethernet based IP address). You should configure one Msgq_daemon per logical machine.

RDMADAEMONPORT

The port number on which Msgq_daemon listens. It must be configured.

RDMAQSIZE

The EMSQ queue size. The default value is 65536 bytes if not defined in the UBBCONFIG file.

RDMAQENTRIES

The EMSQ queue entry number, that is, the maximum number of messages allowed in the queue. The default value is 64 if not defined in the UBBCONFIG file.

EXALOGIC_MSGQ_CACHE_SIZE

The entry number for the Oracle Tuxedo EMSQ cache. The function of EXALOGIC_MSGQ_CACHE_SIZE here is the same as that of the environment variable EXALOGIC_MSGQ_CACHE_SIZE; however, the environment variable has higher priority. The value must be between 32 and 2048 inclusive. EXALOGIC_MSGQ_CACHE_SIZE can be specified in *MACHINES only when RDMA is enabled. The default value is 32 if it is not defined in UBBCONFIG. In some scenarios, Tuxedo performance can be improved by increasing this number. For more details, see Setting EXALOGIC_MSGQ_CACHE_SIZE.

After the RDMA option is enabled in the *RESOURCES section, the TYPE attribute of the *MACHINES section cannot be set, because by default all machines in MP mode must be Exalogic machines (of the same type) to support the RDMA feature.

You can also get/change the configuration via TM_MIB. For more information, see File Formats, Data Descriptions, MIBs, and System Processes Reference.

Listing 1 shows an example UBBCONFIG File with Direct Cross Node Communication Leveraging RDMA enabled.

Listing 1 UBBCONFIG File Example with Direct Cross Node Communication Leveraging RDMA Enabled
*RESOURCES
IPCKEY          87654
MASTER          site1,site2
MAXACCESSERS    40
MAXSERVERS      40
MAXSERVICES     40
MODEL           MP
OPTIONS         LAN,EECS
LDBAL           Y
*MACHINES
slce04cn01      LMID=site1
                APPDIR="/home/oracle/tuxedo12.1.1.0/samples/atmi/simpapp"
                TUXCONFIG="/home/oracle/tuxedo12.1.1.0/samples/atmi/simpapp/tuxconfig"
                TUXDIR="/home/oracle/tuxedo12.1.1.0"
                UID=601
                GID=601
                RDMADAEMONIP=192.168.10.1
                RDMADAEMONPORT=9800
                RDMAQSIZE=65536
                RDMAQENTRIES=64
slce04cn02      LMID=site2
                APPDIR="/home/oracle/tuxedo12.1.1.0/samples/atmi/simpapp/slave"
                TUXCONFIG="/home/oracle/tuxedo12.1.1.0/samples/atmi/simpapp/slave/tuxconfig"
                TUXDIR="/home/oracle/tuxedo12.1.1.0"
                UID=601
                GID=601
                RDMADAEMONIP=192.168.10.2
                RDMADAEMONPORT=9800
                RDMAQSIZE=65536
                RDMAQENTRIES=64
*GROUPS
GROUP1
        LMID=site1      GRPNO=1         OPENINFO=NONE
GROUP2
        LMID=site2      GRPNO=2         OPENINFO=NONE
*NETWORK
site1   NADDR="//slce04cn01:5432"
        NLSADDR="//slce04cn01:5442"
site2   NADDR="//slce04cn02:5432"
        NLSADDR="//slce04cn02:5442"
*SERVERS
DEFAULT:
        CLOPT="-A"
simpserv        SRVGRP=GROUP2 SRVID=3
*SERVICES
TOUPPER

Setting Shell Limit for Memory Lock

The shared memory used by Msgq_daemon is locked into physical memory to avoid being paged to the swap area, so it is necessary to set a proper memlock value in /etc/security/limits.conf.

Please use the following formula to get the minimum value for memlock:

[Msgq_daemon shared memory size] * 2 + MAXACCESSERS * 14000 KB

Msgq_daemon shared memory size: The size of shared memory allocated by Msgq_daemon. For more information, see "Calculating Shared Memory Size for Msgq_daemon".

MAXACCESSERS: An attribute in the UBBCONFIG file.

For example:

Msgq_daemon shared memory size: 200*1024 kb

MAXACCESSERS: 100

200*1024*2 + 100 * 14000 = 1809600

Specify it in /etc/security/limits.conf as follows:

* hard memlock 1809600

* soft memlock 1809600

Setting Default Directory Name for File Transfer

Before starting Oracle Tuxedo, ensure that there is a shared directory for all Exalogic nodes when Direct Cross Node Communication Leveraging RDMA is enabled. Make sure that access permissions are properly set.

The default name is /u01/common/patches/tuxtmpfile; you can also set your own directory using the EXALOGIC_SHARED_PATH environment variable. The directory is used for Oracle Tuxedo file transfer. When the EMSQ is full, or the message size exceeds the queue size, Oracle Tuxedo puts the message into a temporary file under this directory and sends a notification directly to the remote process queue. The remote process can then get the file as soon as it receives the notification.
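For illustration, a minimal sketch of overriding the default location in the shell before booting Oracle Tuxedo (the path is illustrative; any directory shared among all nodes with read/write permission works):

export EXALOGIC_SHARED_PATH=/u01/common/tuxshared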

Direct Cross Domain Communication Leveraging RDMA

Using Direct Cross Domain Communication Leveraging RDMA requires UBBCONFIG file configuration.

Note: Direct Cross Domain Communication Leveraging RDMA requires that you enable Direct Cross Node Communication Leveraging RDMA first.

UBBCONFIG File

To enable this feature, in the *RESOURCES section of UBBCONFIG, you must specify the BYPASSDOM_ID, BYPASSDOM_SEQ_NUM, and BYPASSDOM_SHARED_DIR parameters, as well as the EECS flag of the OPTIONS parameter.

There is an optional attribute, MAXDOMAINS, which specifies the maximum number of domains within one domain group. The default is 32.

You can also get or change the configuration via TM_MIB. For more information, see File Formats, Data Descriptions, MIBs, and System Processes Reference.

Listing 2 shows a UBBCONFIG file example of enabling Direct Cross Domain Communication Leveraging RDMA.

Listing 2 UBBCONFIG File Example of Enabling Direct Cross Domain Communication Leveraging RDMA
*RESOURCES
IPCKEY		87654
MASTER		site1 
MAXACCESSERS		40
MAXSERVERS		40
MAXSERVICES		40 
MODEL		SHM
OPTIONS		EECS
LDBAL		Y
BYPASSDOM_ID bddomgrp1 
BYPASSDOM_SEQ_NUM 0
BYPASSDOM_SHARED_DIR "/nfs/bypassdom/bddomgrp1/shareddir" 
MAXDOMAINS 16
 
*MACHINES
slce04cn01		LMID=site1
    APPDIR="/home/oracle/tuxedo12.1.3.0/samples/atmi/simpapp"
    TUXCONFIG="/home/oracle/tuxedo12.1.3.0/samples/atmi/simpapp/tuxconfig" 
    TUXDIR="/home/oracle/tuxedo12.1.3.0"
    UID=601
    GID=601 
    RDMADAEMONIP="192.168.10.1" 
    RDMADAEMONPORT=9800 
    RDMAQSIZE=65536 
    RDMAQENTRIES=64
*GROUPS 
GROUP1		LMID=site1 GRPNO=1 OPENINFO=NONE
*SERVERS 
DEFAULT:
CLOPT="-A"
simpserv		SRVGRP=GROUP1 SRVID=3
*SERVICES 
 TOUPPER

Self-Tuning Lock Mechanism

As long as the EECS option in OPTIONS of the UBBCONFIG *RESOURCES section is specified, this feature is enabled by default. A new option, NO_SPINTUNING, is introduced to disable this feature explicitly.

Two other optional attributes are supported in the *MACHINES section:

SPINTUNING_FACTOR

The SPINTUNING_FACTOR option controls the tuning target. The default value is 100, which is good enough in most scenarios; it can be changed to any value from 1 to 10000 if necessary. A value of 100 means that SPINCOUNT stops being tuned as long as fewer than 1 in 100 lock attempts fall back to the system level method to get the BB lock and there is sufficient idle CPU. If the proportion of lock attempts falling back to the system level method is higher than 1 in 100 and there is sufficient idle CPU time, SPINCOUNT is increased.

SPINTUNING_MINIDLECPU: Specifies the CPU idle time.

The negative impact of the user level method is the extra CPU cost. Too many retries of the user level method consume excessive CPU time, so this option limits the CPU used by the user level method. The Self-Tuning Lock Mechanism does not increase SPINCOUNT when the SPINTUNING_MINIDLECPU limit is reached, even if the tuning target is not met. Conversely, SPINCOUNT is decreased when the SPINTUNING_MINIDLECPU limit is violated, whether or not the tuning target is met. For example, given a value of 20, the Self-Tuning Lock Mechanism keeps the idle CPU time at no less than 20% during the adjustment. The default value is 20. (A configuration sketch combining these attributes appears after Listing 4.)
Note: If not specified, the default values for these attributes are used.
Note: The Self-Tuning Lock Mechanism may adjust the SPINCOUNT at each scan unit but may need to adjust by several times to achieve the target.
Note: In Oracle Tuxedo 11.1.1.3.0, SPINCOUNT in *MACHINES cannot be set when this feature is enabled; conversely, the feature cannot be enabled when SPINCOUNT is set.

For more information, see UBBCONFIG(5) and UBBCONFIG(5) Additional Information, Example 2 Self-Tuning Lock Mechanism Configuration, in File Formats, Data Descriptions, MIBs, and System Processes Reference.

You can also set the configuration via TM_MIB. For more information, see TM_MIB(5) in File Formats, Data Descriptions, MIBs, and System Processes Reference.

Listing 3 shows a UBBCONFIG file example of enabling Self-Tuning Lock Mechanism.

Listing 3 UBBCONFIG File Example of Enabling Self-Tuning Lock Mechanism
*RESOURCES
OPTIONS         EECS
...

Listing 4 shows a UBBCONFIG file example of disabling Self-Tuning Lock Mechanism.

Listing 4 UBBCONFIG File Example of Disabling Self-Tuning Lock Mechanism
*RESOURCES
OPTIONS         EECS,NO_SPINTUNING
...
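For illustration, a minimal sketch combining the tuning attributes described above (the values 1000 and 30 are illustrative, not recommendations):

*RESOURCES
OPTIONS         EECS
*MACHINES
slce04cn01      LMID=site1
                SPINTUNING_FACTOR=1000
                SPINTUNING_MINIDLECPU=30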

Oracle Tuxedo SDP Support

To enable Oracle Tuxedo SDP Support, you must specify EECS for OPTIONS in the *RESOURCES section, and set the relevant configuration in the UBBCONFIG or DMCONFIG file.

You can also get or change the configuration via TM_MIB. For more information, see File Formats, Data Descriptions, MIBs, and System Processes Reference.

This section covers the following configurations:

MP

MP mode is expected to work entirely inside the InfiniBand cluster, that is, both master and slave machines are in the IB cluster, so only SDP and IPoIB are considered inside the cluster. In the bootstrap phase, tmboot, tlisten, bsbridge, and bridge use the socket API to communicate with each other.

GWTDOMAIN

If the node running GWTDOMAIN has multiple network interfaces (multi-homed) with multiple IP addresses, it is better to use an explicit IP address, rather than a host name, when configuring GWTDOMAIN in the DMCONFIG file. Typically, every Exalogic node has at least two types of network interface: an IB interface and an Ethernet interface. To demonstrate how to configure GWTDOMAIN, presume the IB interface is bound to IP address IB_IP and the Ethernet interface to IP address ETH_IP.

Functionally, GWTDOMAIN acts in both server and client roles. As a server, it listens on the IP address and port number configured in the DMCONFIG file to accept connection requests from other GWTDOMAIN gateways; as a client, it initiates connection requests to other GWTDOMAIN gateways according to the policy configured in the DMCONFIG file.
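For illustration, a minimal DMCONFIG sketch with GWTDOMAIN listening on the IB address; the domain and group names, port, and the IB_IP/REMOTE_IB_IP placeholders are illustrative (the "sdp://IB_IP:port" form matches the WTC NWAddr configuration described later):

*DM_LOCAL
LDOM1	GWGRP=GWGRP1
	TYPE=TDOMAIN
	DOMAINID="DOM1"
*DM_REMOTE
RDOM1	TYPE=TDOMAIN
	DOMAINID="DOM2"
*DM_TDOMAIN
LDOM1	NWADDR="sdp://IB_IP:9500"
RDOM1	NWADDR="sdp://REMOTE_IB_IP:9500"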

WSL
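By analogy with the JSL configuration in Listing 13, the WSL listen address can presumably take the same "sdp:" prefix on an IPoIB address; a hedged sketch (the group name, server ID, port, and handler counts are illustrative):

*SERVERS
WSL     SRVGRP=WSGRP SRVID=1000
        CLOPT="-A -- -n sdp://IB_IP:11100 -m1 -M10 -x1"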

/WS client

JSL

Configure JSL listen on SDP

Prefixed "sdp:" to the network address, and the network address must be an IPoIB address as shown in Listing 13.

Listing 13 JSL Listening on SDP UBBCONFIG File Configuration Example
*SERVERS
DEFAULT:        CLOPT="-A"
JSL             SRVGRP=WSGRP SRVID=1001
                CLOPT="-A -- -nsdp: //IB_IP: 11101 -m1 -M10 -x1"

WTC

To enable SDP connection between WTC and Oracle Tuxedo, do the following steps:

  1. Specify the NWAddr of the WTC service Local/Remote Access Points as follows:

     sdp://IB_IP:port

     It is the same as the GWTDOMAIN NWADDR configuration in the DMCONFIG file.

  2. Add the additional Java option "-Djava.net.preferIPv4Stack=true" to the java command line used to start the WLS server.
Note: If the WTC access point has SSL enabled, the SSL configuration is ignored after SDP is configured.
Note: Only WebLogic Server 12c (12.1.1) and higher can connect to Oracle Tuxedo via SDP. For more information, see Enable IPv4 for SDP transport, NWAddr attribute for WTC local Tuxedo Domain configuration, and NWAddr attribute for WTC remote Tuxedo Domain configuration.

Use of Shared Memory for Inter Process Communication

As long as the EECS option in OPTIONS of the UBBCONFIG *RESOURCES section is specified, this feature is enabled by default. A new option, NO_SHMQ, is introduced to disable this feature explicitly.

Another optional attribute is provided in the *RESOURCES section:

SHMQMAXMEM numeric_value

Specifies the maximum shared memory size (in megabytes) used for message buffers. To use SHMQMAXMEM, the SHMQ option must be enabled. The range of numeric_value is 1 to 96000 inclusive. If SHMQ is enabled while SHMQMAXMEM is either not configured or set too small, a recommended minimum value is used, which is good enough for almost all scenarios. Run tmloadcf -c to get the recommended minimum value. For more information, refer to tmloadcf(1).
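For illustration, a minimal sketch of the attribute (the 1024 MB value is illustrative; run tmloadcf -c for the recommended minimum for your configuration):

*RESOURCES
OPTIONS		EECS
SHMQMAXMEM	1024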

Read-only Optimization for XA

As long as the EECS option in OPTIONS of the UBBCONFIG *RESOURCES section is specified, this feature is enabled by default. A new option, NO_RDONLY1PC, is introduced to disable this feature explicitly.

Listing 15 Configuration Example
*RESOURCES
OPTIONS		LAN,EECS

Listing 16 Configuration Example 2
*RESOURCES
OPTIONS		LAN,EECS,NO_RDONLY1PC

You can also get/change the configuration via TM_MIB. For more information, see File Formats, Data Descriptions, MIBs, and System Processes Reference.

Database

The Oracle Tuxedo system uses the X/Open XA interface for communicating with the various resource managers. The XA Standard is widely supported in all the major database vendor products.

You can use SDP (Sockets Direct Protocol) for Oracle Database invocations. There is no special requirement for the Oracle Tuxedo application.

Configure the database to support InfiniBand, as described in Configuring SDP Protocol Support for InfiniBand Network Communication to the Database Server in the Oracle Database Net Services Administrator's Guide.

Note: The sdp_zcopy_thresh and recv_poll SDP parameters affect performance when sending large data blocks to the database.

For example, you can set the two parameters in “/etc/modprobe.conf” on the server node as follows:

options ib_sdp sdp_zcopy_thresh=0 recv_poll=0

Choosing APPDIR

You can deploy your Oracle Tuxedo application to a shared directory on Exalogic in an MP environment (a setup named Shared Applications Staging), provided that the EECS option is enabled and MP mode is used. Before booting the Oracle Tuxedo application, ensure the following parameters are set correctly in the UBBCONFIG file:

TUXCONFIG

The TUXCONFIG must be different for each node.

TLOGDEVICE

The TLOGDEVICE must be different for each node.

ULOGPFX

Set a different path for ULOGPFX on each node if you want a separate ULOG.

Access Permission for shared APPDIR

Users from different Exalogic nodes must have the same OS uid and gid.

In addition, each node should use a distinct TMIFRSVR repository_file, standard output/error files, AUDITLOG file, and ALOGPFX so that logging remains clearly separated. Applications should also be given distinct names to make the best use of the Shared Applications Staging feature.

Listing 17 shows a UBBCONFIG file shared APPDIR example.

Listing 17 UBBCONFIG File Shared APPDIR
...
*MACHINES
slce04cn01 LMID=site1
          APPDIR="/home/oracle/tuxapp"
          TUXCONFIG="/home/oracle/tuxapp/tuxconfig_cn01"
          TUXDIR="/home/oracle/tuxedo11gR1"
          TLOGDEVICE=/home/oracle/tuxapp/TLOG1
          ULOGPFX="/home/oracle/tuxapp/ULOG_cn01"
          RDMADAEMONIP="192.168.10.1"
          RDMADAEMONPORT=9800
          RDMAQSIZE=1048576
          RDMAQENTRIES=1024
slce04cn02 LMID=site2
          APPDIR="/home/oracle/tuxapp"
          TUXCONFIG="/home/oracle/tuxapp/tuxconfig_cn02"
          TUXDIR="/home/oracle/tuxedo11gR1"
          TLOGDEVICE=/home/oracle/tuxapp/TLOG2
          ULOGPFX="/home/oracle/tuxapp/ULOG_cn02"
          RDMADAEMONIP="192.168.10.2"
          RDMADAEMONPORT=9800
          RDMAQSIZE=1048576
          RDMAQENTRIES=1024

If SECURITY is set in the UBBCONFIG file while the EECS option and MP mode are not set, the shared APPDIR cannot be used. In that case, you must use a different APPDIR and keep a copy for each node.

XA Affinity

As long as the EECS option in OPTIONS of the UBBCONFIG *RESOURCES section is specified, the XA affinity feature is enabled by default. A new option, NO_XAAFFINITY, is introduced in RMOPTIONS of the UBBCONFIG *RESOURCES section to disable XA affinity explicitly.

RMOPTIONS {[...|NO_XAAFFINITY],*}

Listing 18 Configuration Example of Enabling XA Affinity by Default
*RESOURCES
OPTIONS		EECS
Listing 19 Configuration Example of Disabling XA Affinity Explicitly
*RESOURCES
OPTIONS		EECS
RMOPTIONS		NO_XAAFFINITY

You can also set this flag when the Tuxedo application is inactive through the T_DOMAIN class in TM_MIB. For more information, see File Formats, Data Descriptions, MIBs, and System Processes Reference.

Common XID

As long as the EECS option in OPTIONS of the UBBCONFIG *RESOURCES section is specified, the common XID feature is enabled by default. A new option, NO_COMMONXID, is introduced in RMOPTIONS of the UBBCONFIG *RESOURCES section to disable common XID explicitly.

RMOPTIONS {[...|NO_COMMONXID],*}

Listing 20 Configuration Example of Enabling Common XID by Default
*RESOURCES 
OPTIONS		EECS
Listing 21 Configuration Example of Disabling Common XID Explicitly
*RESOURCES 
OPTIONS		EECS
RMOPTIONS		NO_COMMONXID

You can also set this flag when the Tuxedo application is inactive through the T_DOMAIN class in TM_MIB. For more information, see File Formats, Data Descriptions, MIBs, and System Processes Reference.

Single Group Multiple Branches (SGMB)

As long as the EECS option in OPTIONS of the UBBCONFIG *RESOURCES section is specified, this feature is enabled by default. A new option, SINGLETON, is introduced in RMOPTIONS of the UBBCONFIG *RESOURCES section to disable this feature explicitly.

RMOPTIONS {[...|SINGLETON],*}

Note: This option indicates that all RAC services used in the domain are singleton services.
Listing 22 Configuration Example of Enabling SGMB by Default
*RESOURCES 
OPTIONS		EECS
Listing 23 Configuration Example of Disabling SGMB Explicitly
*RESOURCES 
OPTIONS		EECS
RMOPTIONS		SINGLETON

You can also set this flag when the Tuxedo application is inactive through the T_DOMAIN class in TM_MIB. For more information, see File Formats, Data Descriptions, MIBs, and System Processes Reference.

FAN Integration

As long as the EECS option in OPTIONS of the UBBCONFIG *RESOURCES section is specified, FAN integration is enabled by default. A new option, NO_FAN, is introduced in RMOPTIONS of the UBBCONFIG *RESOURCES section to disable FAN integration explicitly.

RMOPTIONS {[...|NO_FAN],*}

Listing 24 Configuration Example of Enabling FAN by Default
*RESOURCES 
OPTIONS		EECS
Listing 25 Configuration Example of Disabling FAN Explicitly
*RESOURCES 
OPTIONS		EECS
RMOPTIONS		NO_FAN

You can also set this flag when the Tuxedo application is inactive through the T_DOMAIN class in TM_MIB. For more information, see File Formats, Data Descriptions, MIBs, and System Processes Reference.

To monitor FAN events, specify Tuxedo system server TMFAN in SERVERS section. For more information about TMFAN, see File Formats, Data Descriptions, MIBs, and System Processes Reference.

To support Oracle TAF (Transparent Application Failover) for a Tuxedo XA server, threads=t must be included in OPENINFO.
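For illustration, a sketch appending the flag stated above to an OPENINFO string of the style used throughout this document (the net service name and credentials are placeholders):

*GROUPS
GRP1	LMID=L1 GRPNO=10 TMSNAME="TMSORA1"
	OPENINFO="Oracle_XA:ORACLE_XA+SqlNet=orcl.tux1+ACC=P/scott/tiger+SesTM=120+threads=t"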

 


Best Practices to Optimize Performance

This section contains the following topics:

Direct Cross Node Communication Leveraging RDMA
Self-Tuning Lock Mechanism
Use of Shared Memory for Inter Process Communication
Read-Only Optimization for XA
XA Affinity
Common XID
FAN Integration
Single Group Multiple Branches (SGMB)
Direct Cross Domain Communication Leveraging RDMA
Oracle Tuxedo SDP Support

Direct Cross Node Communication Leveraging RDMA

Scenarios recommended

This feature gives a client the ability to access a remote server directly, eliminating the bottleneck on the BRIDGE. When Tuxedo is under highly concurrent remote access in MP mode, throughput improves significantly if this feature is enabled in UBBCONFIG.

Note: The following scenario is not recommended for this feature:
Note: The client connects to the remote server through the BRIDGE and works on it for a relatively short duration, for example, tpinit() followed by several tpcall() invocations, then tpterm(). The overhead of creating, opening, and closing an RDMA connection is much higher than that of a Unix IPC queue, so no obvious performance improvement can be expected in this scenario.

Setting EXALOGIC_MSGQ_CACHE_SIZE

Each Oracle Tuxedo thread has an EMSQ runtime cache; the default entry number is 32. You can change it to a value between 32 and 2048 using the EXALOGIC_MSGQ_CACHE_SIZE environment variable before the Oracle Tuxedo application starts, or by setting it in UBBCONFIG. In some scenarios, increasing the number can improve Oracle Tuxedo performance.
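For illustration, a minimal sketch of raising the cache in the shell before booting (the value 512 is illustrative):

export EXALOGIC_MSGQ_CACHE_SIZE=512
tmboot -y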

Calculating Shared Memory Size for Msgq_daemon

Using tmloadcf

To get the recommended value, please run tmloadcf -c ubb as shown in Listing 26.

Listing 26 UBBCONFIG File *MACHINES Section
*MACHINES
ex03	LMID=site1
	...
	RDMADAEMONIP="192.168.10.1"
	RDMADAEMONPORT=9800
	RDMAQSIZE=100000
	RDMAQENTRIES=100
	MAXACCESSERS=100
	...
ex03_1	LMID=site2
	...
	RDMADAEMONIP="192.168.10.2"
	RDMADAEMONPORT=9800
	RDMAQENTRIES=1000
	MAXACCESSERS=200
	...
ex04	LMID=site3
	...
	RDMADAEMONIP="192.168.10.3"
	RDMADAEMONPORT=9800
	RDMAQSIZE=100000
	RDMAQENTRIES=100
	MAXACCESSERS=200
	MAXSERVERS=100
	...
ex04_1	LMID=site4
	...
	RDMADAEMONIP="192.168.10.4"
	RDMADAEMONPORT=9800
	RDMAQSIZE=1000000
	RDMAQENTRIES=1000
	MAXACCESSERS=100
	...

Run the command tmloadcf -c ubb to get the output shown in Listing 27.

Listing 27 tmloadcf -c ubb Output Example
Ipc sizing (minimum /T values only) ...

                  Fixed Minimums Per Node

SHMMIN: 1
SHMALL: 1
SEMMAP: SEMMNI

                  Variable Minimums Per Node

                           SEMUME,  A                          SHMMAX
                           SEMMNU,  *                          *
Node       SEMMNS  SEMMSL  SEMMSL   SEMMNI  MSGMNI  MSGMAP  SHMSEG  RCDMSZ
------     ------  ------  ------   ------  ------  ------  ------  ------
ex03          126      15     120   A + 2       26      52   1178K    220M
ex04          221      28     220   A + 1       26      52   1340K    340M
ex04_1        121      15     120   A + 1       26      52   1178K   1300M
ex03_1        221      28     220   A + 1       25      50   1340K   2500M

RCDMSZ increases linearly as any of the following items configured in the UBBCONFIG file increases:

Adjusting the shared memory size:

After getting the RCDMSZ from tmloadcf, you can adjust the actual size according to the following runtime factors:

Note: For detailed information about configuration, please see Direct Cross Node Communication Leveraging RDMA in Oracle Tuxedo Configuration.

Self-Tuning Lock Mechanism

Scenarios recommended

A proper SPINCOUNT means a server can obtain the BB lock via the user level method most of the time, which can significantly improve performance in scenarios where BB lock contention is heavy. The typical scenario is a transactional application using the Tuxedo XA mechanism. It is therefore recommended to enable this feature by default in Tuxedo applications on Oracle Exalogic, unless CPU resources are insufficient.

Setting the Number of Lock Spins

A process or thread locks the bulletin board through either the user level method or the system level method. Because the system level method is costly, it is efficient to set a proper number of lock spins so that most lock attempts succeed through the user level method.

A process on a uniprocessor system should not spin. A SPINCOUNT value of 1 is appropriate for uniprocessors. On multiprocessors, the value of the SPINCOUNT parameter is application- and system-dependent. Self-Tuning Lock Mechanism can figure out the proper SPINCOUNT automatically.

For detailed information about configuration, please see Oracle Tuxedo Configuration.

Use of Shared Memory for Inter Process Communication

Scenarios recommended

SHMQ helps you gain higher performance in native Tuxedo applications by reducing unnecessary message copies. Consider enabling this feature when one or more of the following cases apply:

Adjust SHMQMAXMEM

The default value is good enough for almost all scenarios, but you need to adjust the value of SHMQMAXMEM in UBBCONFIG if the message size is greater than 32 KB, as follows:

Memory Usage

Given a specific amount of shared memory for SHMQ, Tuxedo divides it into several pools for different buffer sizes. In general, the bigger the buffer size, the fewer the total entries for that size. If the buffers of a given size are exhausted, Tuxedo falls back to local memory, even though the shared memory for SHMQ as a whole is not full.

In this release, there are two new MIB fields, TA_SHMQSTAT and TA_MSG_SHMQNUM, which are used to get the detailed information about shared memory usage. For more details about TA_SHMQSTAT and TA_MSG_SHMQNUM, please see TM_MIB.

Programming with SHMQ

A new flag, TPNOCOPY, is introduced for tpcall() when using SHMQ messages.

A typical Tuxedo use case of zero-copy messaging:

  1. The client allocates a request SHMMSG buffer with tpalloc()
  2. The client sends the request to the server's request SHMQ with tpcall(), and waits for the reply
  3. The server receives the request from its request SHMQ and processes it
  4. The server uses the same SHMMSG buffer for the reply
  5. The server sends the reply to the client's reply SHMQ via tpreturn()
  6. The client receives the reply from its reply SHMQ

Zero-copy messaging is the ideal case, with the pre-condition that the sender and receiver never access the shared buffer at the same time. In practice, to guarantee safe memory access, the sender makes one copy and sends the copy instead of the original SHMMSG. To gain maximum performance, however, the new TPNOCOPY flag is provided for tpcall() to avoid this copy cost. If an application uses this flag, it must take responsibility for not accessing the SHMMSG buffer after tpcall() fails, except to call tpfree().

When TPNOCOPY is set in the tpcall() flags and the send buffer is a SHMMSG buffer, no safety copy is made during message sending. After tpcall() succeeds, the sender application has full access to the send buffer as normal. But if tpcall() fails for any reason, the sender application must not access the send buffer any more; the recommended action is to tpfree() the buffer, which is the only safe operation on it.

TPNOCOPY cannot be set for tpacall(); otherwise tpacall() fails with tperrno set to TPEINVAL.
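The following is a minimal ATMI sketch of the TPNOCOPY contract described above (the TOUPPER service name is taken from Listing 1; error reporting is trimmed):

#include <stdio.h>
#include <string.h>
#include <atmi.h>

int main(int argc, char *argv[])
{
    char *buf;
    long olen;

    if (tpinit(NULL) == -1)
        return 1;

    /* Request buffer; with SHMQ enabled this is a SHMMSG buffer. */
    if ((buf = tpalloc("STRING", NULL, 64)) == NULL) {
        tpterm();
        return 1;
    }
    strcpy(buf, "hello");

    /* TPNOCOPY: no safety copy is made while sending. */
    if (tpcall("TOUPPER", buf, 0, &buf, &olen, TPNOCOPY) == -1) {
        /* After a failed TPNOCOPY call, tpfree() is the only safe
         * operation on the buffer. */
        tpfree(buf);
        tpterm();
        return 1;
    }

    printf("reply: %s\n", buf);
    tpfree(buf);
    tpterm();
    return 0;
}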

Exceptions

In general, Tuxedo native request/reply messages are transferred using shared memory queues (SHMQ) when the feature is available, but the IPC queue is used instead in the following cases:

For detailed information about configuration, please see Oracle Tuxedo Configuration.

Read-Only Optimization for XA

In general, Tuxedo performs one-phase commit if only one group participates in a global transaction, and two-phase commit if more than one group participates. Two-phase commit means that Tuxedo sends one prepare request to each branch of the global transaction, followed by one commit request per branch if all prepare requests succeed.

With read-only optimization available, Tuxedo saves one prepare request and the TLOG write for a tightly coupled global transaction by invoking one-phase commit on the reserved branch instead of two-phase commit.

If the Tuxedo application runs against a database that supports read-only optimization, such as Oracle Database, you can take advantage of this feature when the application involves multiple groups. In addition, the branches must be tightly coupled, which is the default behavior of the Oracle OPENINFO string.

The typical scenario is that the participating groups connect to different RAC instances or use different database services. A typical UBB configuration is as follows.

Listing 28 UBB Configuration
*RESOURCES
MODEL		SHM
OPTIONS		EECS
...
*MACHINES
"mach1"		LMID=L1
...
*GROUPS
GRP1		LMID=L1 GRPNO=10 TMSNAME="TMSORA1"
		OPENINFO="Oracle_XA:ORACLE_XA+SqlNet=orcl.tux1+ACC=P/scott/tiger+SesTM=120"
GRP2	LMID=L1 GRPNO=20 TMSNAME="TMSORA2"
		OPENINFO="Oracle_XA:ORACLE_XA+SqlNet=orcl.tux2+ACC=P/scott/tiger+SesTM=120"
*SERVERS
server1		SRVGRP=GRP1 SRVID=10 MIN=2
server2		SRVGRP=GRP2 SRVID=10 MIN=2
...

GRP1 uses net service orcl.tux1 to connect to the resource manager; orcl.tux1 is configured to database service tux1, which RAC instance1 supports. GRP2 uses net service orcl.tux2 to connect to the resource manager; orcl.tux2 is configured to database service tux2, which RAC instance2 supports. Server1 offers Tuxedo service svc1 and server2 offers Tuxedo service svc2. The transactional business A depends on svc1 and svc2, so it involves server1 and server2.

With read-only optimization enabled, one prepare request is saved, the TLOG write is skipped, and a one-phase commit is performed.

If the participating groups connect to the same Oracle instance through the same database service, it is better to enable the Common XID feature, which turns the global transaction into a one-phase commit. Common XID can skip all prepare requests and TLOG writing, so it performs better than read-only optimization.

If the transactional business is known not to benefit from read-only optimization, do not enable it, to avoid a negative performance impact. The typical scenario is a business that uses more than one resource manager.

For detailed information about configuration, please see Oracle Tuxedo Configuration.

XA Affinity

Scenarios Recommended

It is recommended to enable this feature when Tuxedo server has multiple instances running on different Oracle RAC instances via the same Oracle database service.

When XA affinity is enabled, the Oracle RAC routing rule specified by the TUXRACGROUPS environment variable is not needed, and that rule is disabled.

Figure: changes made when XA affinity is enabled (illustration omitted).

Limitations

Common XID

Common XID shares the coordinator's instance information and branch (the common XID) with all participating groups. The servers in a participating group reuse the common XID if they have the same instance information as the coordinator. This feature brings significant performance improvement when a global transaction involves multiple groups, especially when all participating groups associate with the same database instance through the same database service.

Scenarios Recommended

Scenario A

Only one Oracle database instance is used in a Tuxedo application. A typical UBB configuration is as follows.

Listing 29 UBB Configuration
*RESOURCES
MODEL		SHM
OPTIONS		EECS
...
*MACHINES
"mach1"		LMID=L1
...
*GROUPS
GRP1	LMID=L1 GRPNO=10 TMSNAME="TMSORA1"
		OPENINFO="Oracle_XA:ORACLE_XA+SqlNet=orcl.tux1+ACC=P/scott/tiger+SesTM=120"
GRP2	LMID=L1 GRPNO=20 TMSNAME="TMSORA2"
		OPENINFO="Oracle_XA:ORACLE_XA+SqlNet=orcl.tux1+ACC=P/scott/tiger+SesTM=120"
*SERVERS
server1		SRVGRP=GRP1 SRVID=10 MIN=2
server2		SRVGRP=GRP2 SRVID=10 MIN=2
...

In the above configuration, GRP1 and GRP2 use the same net service (orcl.tux1, which is configured to an Oracle database) to connect to the resource manager. Server1 offers Tuxedo service svc1 and server2 offers Tuxedo service svc2. The transactional business A calls svc1 followed by svc2, so it involves server1 and server2. When Common XID is enabled, all transactions of business A become one-phase commits.

Scenario B

All participating groups associate with the same database instance via the same database service when the Tuxedo application runs on Oracle RAC.

The typical UBB sample is the same as Listing 29, with the net service orcl.tux1 configured to Oracle RAC instance1 through database service tux1. When Common XID is enabled, all transactions of business A become one-phase commits.

Scenario C

Redundant servers or groups are configured and run on different Oracle RAC instances. In this scenario, the XA affinity feature should be enabled too: it steers the business toward the servers/groups that associate with the same database instance, via the same database service, as the coordinator.

Listing 30 UBB Configuration
*RESOURCES
MODEL		SHM
OPTIONS		EECS
...
*MACHINES
"mach1"		LMID=L1
...
*GROUPS
GRP1	LMID=L1 GRPNO=10 TMSNAME="TMSORA1"
		OPENINFO="Oracle_XA:ORACLE_XA+SqlNet=orcl.tux1+ACC=P/scott/tiger+SesTM=120"
GRP2	LMID=L1 GRPNO=20 TMSNAME="TMSORA2"
		OPENINFO="Oracle_XA:ORACLE_XA+SqlNet=orcl.tux1+ACC=P/scott/tiger+SesTM=120"
GRP3	LMID=L1 GRPNO=30 TMSNAME="TMSORA3"
		OPENINFO="Oracle_XA:ORACLE_XA+SqlNet=orcl.tux2+ACC=P/scott/tiger+SesTM=120"
*SERVERS
server1		SRVGRP=GRP1 SRVID=10 MIN=2
server2		SRVGRP=GRP2 SRVID=10 MIN=2
server3		SRVGRP=GRP3 SRVID=10 MIN=2
...

GRP1 and GRP2 use the same net service orcl.tux1 to connect to the resource manager; orcl.tux1 is configured to database service tux1, which RAC instance1 supports. GRP3 uses net service orcl.tux2 to connect to the resource manager; orcl.tux2 is configured to database service tux2, which RAC instance2 supports. Server1 offers Tuxedo service svc1; both server2 and server3 offer Tuxedo service svc2. The transactional business A calls svc1 and then svc2.

In general, business A may involve server1 and server2, or server1 and server3, because of Tuxedo load balancing. When Common XID is enabled, the transactions that involve server1 and server2 become one-phase commits; when XA affinity is enabled as well, business A always involves server1 and server2, so all of its transactions become one-phase commits.

Scenario D

Some of the participating groups associate with the same instance, through the same database service, as the coordinator. In this scenario, it is better to enable both the Common XID and Read-Only Optimization features.

A typical UBB configuration is as follows.

Listing 31 UBB Configuration
*RESOURCES
MODEL		SHM
OPTIONS		EECS
...
*MACHINES
"mach1"	LMID=L1
...
*GROUPS
GRP1	LMID=L1 GRPNO=10 TMSNAME="TMSORA1"
		OPENINFO="Oracle_XA:ORACLE_XA+SqlNet=orcl.tux1+ACC=P/scott/tiger+SesTM=120"
GRP2	LMID=L1 GRPNO=20 TMSNAME="TMSORA2"
		OPENINFO="Oracle_XA:ORACLE_XA+SqlNet=orcl.tux1+ACC=P/scott/tiger+SesTM=120"
GRP3	LMID=L1 GRPNO=30 TMSNAME="TMSORA3"
		OPENINFO="Oracle_XA:ORACLE_XA+SqlNet=orcl.tux2+ACC=P/scott/tiger+SesTM=120"
*SERVERS
server1		SRVGRP=GRP1 SRVID=10 MIN=2
server2		SRVGRP=GRP2 SRVID=10 MIN=2
server3		SRVGRP=GRP3 SRVID=10 MIN=2
...

GRP1 and GRP2 use the same net service orcl.tux1 to connect to the resource manager; orcl.tux1 is configured to database service tux1, which RAC instance1 supports. GRP3 uses net service orcl.tux2 to connect to the resource manager; orcl.tux2 is configured to database service tux2, which RAC instance2 supports. Server1 offers Tuxedo service svc1, server2 offers svc2, and server3 offers svc3. The transactional business B calls svc1, then svc2, and then svc3.

Business B involves server1/GRP1, server2/GRP2, and server3/GRP3. When Common XID is enabled, the prepare request to GRP2 is saved. With Read-Only Optimization enabled as well, the prepare request to GRP1 is also saved, and a one-phase commit is performed on GRP1, avoiding the TLOG write.

Limitations

FAN Integration

Recommendation for Configuration on Oracle Database

To benefit from Oracle FAN (Fast Application Notification), it is recommended to enable this feature whenever Tuxedo works with Oracle RAC. Besides the UBBCONFIG settings, configure the Oracle Database properly with respect to the following:

Recommendation for Non-XA Application

To monitor FAN events for the instance associated with a specific non-XA application server, $TUXDIR/lib/tuxociucb.so.1.0 must be deployed in $ORACLE_HOME/lib, and the name of this binary must be specified in the ORA_OCI_UCBPKG environment variable.
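A hedged sketch of that deployment, assuming the package name registered in ORA_OCI_UCBPKG matches the binary base name (confirm the exact value against your release documentation):

cp $TUXDIR/lib/tuxociucb.so.1.0 $ORACLE_HOME/lib
export ORA_OCI_UCBPKG=tuxociucb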

To support TAF, follow these rules:

Note: The -L option in servopts must be used for a non-XA server to indicate that the server connects to the Oracle Database. Because ECID is enabled when -L is specified, a new option, -F, is introduced in servopts to turn ECID off; the usage is -F noECID. For example:
Note: *SERVERS
Note: server1
Note: SRVGRP=GRP1 SRVID=1 CLOPT="-L libclntsh.so -F noECID"

Limitations

Single Group Multiple Branches (SGMB)

If a Tuxedo application runs on Oracle RAC, you may want to take advantage of non-singleton database services for capabilities such as load balancing and service failover.

Tuxedo groups can use non-singleton RAC services by enabling this feature. Given that the business may involve multiple groups, it is better to also enable Common XID and XA Affinity to achieve good performance.

A typical UBB configuration is as follows.

Listing 32 UBB Configuration
*RESOURCES
MODEL		SHM
OPTIONS		EECS
...
*MACHINES
"mach1"		LMID=L1
...
*GROUPS
GRP1	LMID=L1 GRPNO=10 TMSNAME="TMSORA1"
		OPENINFO="Oracle_XA:ORACLE_XA+SqlNet=orcl.tux3+ACC=P/scott/tiger+SesTM=120"
GRP2	LMID=L1 GRPNO=20 TMSNAME="TMSORA2"
		OPENINFO="Oracle_XA:ORACLE_XA+SqlNet=orcl.tux3+ACC=P/scott/tiger+SesTM=120"
*SERVERS
server1		SRVGRP=GRP1 SRVID=10 MIN=4
server2		SRVGRP=GRP2 SRVID=10 MIN=4
...

GRP1 and GRP2 use the same net service orcl.tux3 to connect to the resource manager; orcl.tux3 is configured to database service tux3, which both RAC instance1 and instance2 support. Server1 offers Tuxedo service svc1 and server2 offers Tuxedo service svc2. The transactional business A calls svc1 and then svc2, and so involves both server1 and server2. Because orcl.tux3 is a non-singleton database service, copies of server1 associate with either instance1 or instance2, as do copies of server2.

SGMB ensures the business works correctly and that business A transactions are distributed evenly across instance1 and instance2.

With both Common XID and XA affinity enabled, all business A transactions become one-phase commits.

Limitations

Direct Cross Domain Communication Leveraging RDMA

Scenarios Recommended

This feature gives a client the ability to access a remote service in another domain directly, eliminating the bottleneck on GWTDOMAIN. When Tuxedo is under highly concurrent remote access across domains, this feature significantly improves throughput.

Note: The following scenario is not recommended for this feature:
Note: The client accesses a remote service in a remote domain and works on it for a relatively short time, for example, tpinit() followed by several tpcall() invocations, then tpterm(). The overhead of creating, opening, and closing an RDMA connection is much higher than that of a Unix IPC queue. Consequently, this feature cannot bring an obvious performance improvement in this scenario.

Oracle Tuxedo SDP Support

SDP can be used on all Tuxedo network communications, but it is not recommended for scenarios where Direct Cross Node Communication Leveraging RDMA or Direct Cross Domain Communication Leveraging RDMA applies.

 


Running Oracle Tuxedo

Running Oracle Tuxedo with Direct Cross Node Communication Leveraging RDMA enabled differs from running it on a non-Exalogic platform: tux_msgq_monitor must be started before booting an Oracle Tuxedo application. This section includes the following topics:

Start/Stop tux_msgq_monitor

Assistant Tools

Shell Scripts for Start/Stop Oracle Tuxedo

Several shell scripts simplify the startup/shutdown procedure. Using these tools, you need to run only one command to start or stop both tux_msgq_monitor and an Oracle Tuxedo application. Before running these commands, ensure that the environment variables TUXCONFIG, LD_LIBRARY_PATH, and APPDIR are set properly.

On the master node, there are two shell scripts:

tmboot.sh -i daemon_ip -d daemon_port -M shm_size -K shm_key [-l nlsaddr]

This script starts tux_msgq_monitor, runs tmboot to start the Oracle Tuxedo application, and starts tlisten if the "-l" option is specified.

tmshut.sh

Stops both the Oracle Tuxedo application and tux_msgq_monitor.

On each slave node, there are two shell scripts:

tlisten_start.sh -l nlsaddr -i daemon_ip -d daemon_port -M shm_size -K shm_key

This script starts the tux_msgq_monitor and tlisten.

tlisten_stop.sh

This script terminates tlisten and tux_msgq_monitor.
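For illustration, hypothetical invocations using the daemon addresses from Listing 1 (the shared memory size, IPC key, and addresses are illustrative):

# On the master node: start monitor, application, and tlisten
tmboot.sh -i 192.168.10.1 -d 9800 -M 200 -K 87654 -l //slce04cn01:5442
# ... and stop them
tmshut.sh

# On a slave node: start monitor and tlisten
tlisten_start.sh -l //slce04cn02:5442 -i 192.168.10.2 -d 9800 -M 200 -K 87654
# ... and stop them
tlisten_stop.sh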
Note: In MP mode, for startup, you should run the commands in the following sequence:
Note: For shutdown, you should run commands in the following sequence:

 


Running Oracle Tuxedo on OVM

There are no special requirements for Oracle Tuxedo running on OVM.

Upgrade

There are no special requirements if you do not use any Exalogic optimization. For more information, see Oracle Tuxedo Interoperability Guide and Upgrading the Oracle Tuxedo System to 12c Release 1 (12.1.1).

Note: If any Exalogic optimization is specified in OPTIONS, you cannot perform a hot upgrade from any previous release.

 


Appendix

Terminology

SDP: Sockets Direct Protocol

Oracle Tuxedo installation

This section contains the following topics:

Configuration for Exalogic
Platform Requirements
Choosing Oracle Tuxedo Home
Start Installation

Configuration for Exalogic

Before the Oracle Tuxedo installation, you should understand the current state of the Exalogic environment.

It is assumed that you have completed all tasks described in the Oracle Exalogic Machine Owner's Guide, which discusses your data center site preparation, Oracle Exalogic machine commissioning, initial networking configuration including IP address assignments, and initial setup of the Sun ZFS Storage 7320 appliance.

Platform Requirements

The Oracle Tuxedo optimizations can run on both Exalogic Linux and SPARC servers. For more details, see Oracle Tuxedo 12c Release 2 (12.1.3) Platform Data Sheets.

Choosing Oracle Tuxedo Home

We recommend installing the Oracle Tuxedo product binaries in one of the shares on the Sun ZFS Storage 7320 appliance, so that you can run Oracle Tuxedo on any Exalogic node from a single binary copy.

Note: The share, which is a shared file system, must be accessible by all compute nodes. You can create a local user account on each node with the same UID and GID (to avoid permission issues), or create NIS accounts for users.

Oracle Tuxedo must be installed in a different directory for each installation if you want to develop the Oracle Tuxedo plug-in interface with a different implementation per installation.

Start Installation

The Oracle Tuxedo 12c Release 2 (12.1.3) installer is based on the Oracle Universal Installer (OUI). For more information, see Installing the Oracle Tuxedo System.

