Oracle Tuxedo on Exalogic Users Guide


Oracle Tuxedo/Oracle Exalogic
Users Guide

This chapter contains the following topics:

Overview
Oracle Tuxedo Configuration
Best Practices to Optimize Performance
Running Oracle Tuxedo
Running Oracle Tuxedo on OVM
Appendix


Overview

This section contains the following topics:

About this Guide
About Oracle Exalogic
About Oracle Tuxedo optimizations for Oracle Exalogic
Tuxedo Optimizations on Exalogic

About this Guide

This document introduces all Oracle Tuxedo optimizations for Exalogic. With this document, you can easily install, configure, and run Tuxedo on Exalogic.

About Oracle Exalogic

Oracle Exalogic is an engineered system that integrates compute, networking, and storage hardware with virtualization, operating system, and management software. It provides breakthrough performance, reliability, availability, scalability, and investment protection for the widest possible range of business application workloads.

About Oracle Tuxedo optimizations for Oracle Exalogic

From release 11.1.1.3.0, Oracle Tuxedo provides many optimizations for Oracle Exalogic platforms. Table 1 lists the Exalogic-supported features:

Table 1 Exalogic Supported Oracle Tuxedo Features

Feature Name                                        | Oracle Tuxedo Version
----------------------------------------------------|---------------------------------------------------------
Direct Cross Node Communication Leveraging RDMA     | Oracle Tuxedo 11gR1 (11.1.1.3.0) or above
Direct Cross Domain Communication Leveraging RDMA   | Oracle Tuxedo 12cR2 (12.1.3) or above
Self-Tuning Lock Mechanism                          | Oracle Tuxedo 11gR1 (11.1.1.3.0) or above
Oracle Tuxedo SDP Support                           | Oracle Tuxedo 11gR1 (11.1.1.3.0) or above
Shared Memory Interprocess Communication            | Oracle Tuxedo 12cR1 (12.1.1) or above
Partial One Phase Read-Only Optimization for RAC    | Oracle Tuxedo 12cR1 (12.1.1) or above
Shared Applications Staging                         | Oracle Tuxedo 12cR1 (12.1.1) or above
Tightly Coupled Transactions Spanning Domains       | Oracle Tuxedo 12cR1 (12.1.1) or above
XA Transaction Affinity                             | Oracle Tuxedo 12cR2 (12.1.3) or above
Common XID                                          | Oracle Tuxedo 12cR2 (12.1.3) or above
Single Group Multiple Branches (SGMB)               | Oracle Tuxedo 12cR2 (12.1.3) or above
Failover/Failback across Database Instances         | Oracle Tuxedo 12cR2 (12.1.3) or above
Load Balancing across RAC Instances                 | Oracle Tuxedo 12cR2 (12.1.3) or above
Concurrent Global Transaction Table Lock            | Oracle Tuxedo 12cR2 (12.1.3) with Rolling Patch 040 or above

Note: From Oracle Tuxedo 12cR2 (12.1.3), all optimizations support both Exalogic Linux 64-bit and SPARC 64-bit, except for "Direct Cross Node Communication Leveraging RDMA" and "Direct Cross Domain Communication Leveraging RDMA".
Note: Since Oracle Tuxedo 12.1.3.0.0 RP020 on Oracle Linux 32-bit platforms, all optimizations support Exalogic Linux 32-bit, except for "Direct Cross Node Communication Leveraging RDMA", "Direct Cross Domain Communication Leveraging RDMA", "Oracle Tuxedo SDP Support", and "Shared Applications Staging".
Note: For more information about these features, please see Tuxedo Optimizations on Exalogic.

Tuxedo Optimizations on Exalogic

Direct Cross Node Communication Leveraging RDMA

This is a new feature in Tuxedo 11.1.1.3.0 that can significantly improve the performance of Tuxedo applications running in MP mode.

In previous releases, messages between a local client and a remote server had to go through the BRIDGE. First the message was sent to the local BRIDGE through an IPC queue, then the local BRIDGE sent it to the remote BRIDGE over the network, the remote BRIDGE placed the message on the server's IPC queue, and finally the server retrieved the message from its IPC queue. Under high concurrency, the BRIDGE therefore becomes a bottleneck. By utilizing the RDMA capabilities of InfiniBand, Tuxedo 11.1.1.3.0 introduced "Direct Cross Node Communication Leveraging RDMA", which allows a local client to transfer messages to a remote server directly.

For more information about configuration, see Oracle Tuxedo Configuration.

Direct Cross Domain Communication Leveraging RDMA

In previous releases, messages between a local domain and a remote domain had to go through the domain gateways (GWTDOMAIN). First the message was sent to the local GWTDOMAIN through an IPC queue, then the local GWTDOMAIN sent it to the remote GWTDOMAIN over the network, the remote GWTDOMAIN placed the message on the server's IPC queue, and finally the server retrieved the message from its IPC queue. Under high concurrency, the domain gateways therefore become a bottleneck. In this release, if Direct Cross Domain Communication Leveraging RDMA is enabled in the TUXCONFIG file, the local client and remote server can bypass the domain gateways and transfer messages directly.

For more information about configuration, see Oracle Tuxedo Configuration.

Self-Tuning Lock Mechanism

This feature can adjust the value of SPINCOUNT dynamically for the best use of CPU cycle.

The Tuxedo bulletin board (BB) is a memory segment in which all the application configuration and dynamic processing information is held at run time. For some Tuxedo system operations (such as service name lookups and transactions), the BB must be locked for exclusive access: that is, it must be accessible by only one process. If a process or thread finds that the BB is locked by another process or thread, it retries, or spins, on the lock up to SPINCOUNT times (a user-level method via spinning) before giving up and going to sleep on a waiting queue (a system-level method via a system semaphore). Because sleeping is a costly operation, it is efficient to do some amount of spinning before sleeping.

Because the value of the SPINCOUNT parameter is application- and system-dependent, the administrator has to tune SPINCOUNT to a proper value manually by observing the application throughput under different SPINCOUNT values.
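As a hedged illustration of the manual approach that this feature automates, a fixed SPINCOUNT would be set in the *RESOURCES section of the UBBCONFIG file; the value below is illustrative only, not a recommendation:

*RESOURCES
...
SPINCOUNT    5000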

Self-Tuning Lock Mechanism takes the job of tuning automatically. It is designed to figure out a proper value of SPINCOUNT so that most requests to lock BB are completed by spinning instead of sleeping on a waiting queue.

The algorithm of the Self-Tuning Lock Mechanism is improved in Oracle Tuxedo 12c Release 2 (12.1.3) to make the tuning more accurate than before.

For more information, see Using Oracle Tuxedo Advanced Performance Pack.

Oracle Tuxedo SDP Support

One of the benefits of using InfiniBand-based network hardware is the ability to utilize the Sockets Direct Protocol (SDP). This protocol allows applications to communicate with each other via the normal socket interface while bypassing the network processing associated with TCP/IP (ordering, fragmentation, timeouts, retries, and the like), because the InfiniBand hardware takes care of those concerns. SDP can also support zero-copy transfers, as the InfiniBand hardware is capable of transferring buffers directly from the caller's address space.

By utilizing SDP, Tuxedo applications can reduce the CPU consumed by networking operations and increase the overall throughput of network operations. SDP can be used on all Tuxedo network connections, including BRIDGE-to-BRIDGE communication, the domain gateway (GWTDOMAIN) for communication with other Tuxedo domains, workstation and Jolt clients, and communication with WebLogic Server via the WebLogic Tuxedo Connector.

For more information about configuration, see Oracle Tuxedo Configuration.

Shared Memory Interprocess Communication

Oracle Tuxedo 12c Release 2 (12.1.3) significantly enhances the performance of Tuxedo applications on Exalogic by using shared memory queues instead of IPC message queues for interprocess communication on the same Tuxedo node. With shared memory queues, the sender and receiver processes exchange pre-allocated messages in shared memory, eliminating the need to copy messages several times before a message reaches its intended target and resulting in much better throughput and lower latency.

For more information, see Using Oracle Tuxedo Advanced Performance Pack.

Partial One Phase Read-Only Optimization for RAC

This feature takes advantage of the resource manager's read-only optimization for XA. In a two-phase commit scenario, prepare requests are sent to all participating groups except one reserved group. If all transaction branches in those groups are read-only, Tuxedo performs a one-phase commit on the reserved group directly; one prepare request (to the reserved group) is saved, and no TLOG write is needed.

Transactions both within and across domains are supported, including global transactions spanning a Tuxedo domain and WLS via WTC (in WLS 12.1.1 with a patch from Oracle Support, or a higher release of WLS).

For more information, see Using Oracle Tuxedo Advanced Performance Pack.

Shared Applications Staging

With Oracle Tuxedo 12c, you can share the application directory (APPDIR), hosted on the storage appliance of an Exalogic system, among many compute nodes, making it easier to manage application deployment.

For more information about configuration, see Oracle Tuxedo Configuration.

Tightly Coupled Transactions Spanning Domains

In Oracle Tuxedo 11.1.1.3.0 and earlier, a transaction crossing domains is loosely coupled even if its branches run on the same database, because different global transaction identifiers (GTRIDs) are used in different domains. Since Oracle Tuxedo 12.1.1, a common GTRID is used by default, so that branches of a global transaction crossing domains share the same GTRID. The branches are tightly coupled if they run on the same database (and the database allows it).

For more information, see Using Oracle Tuxedo Advanced Performance Pack.

XA Transaction Affinity

XA Transaction Affinity provides the ability to route all Oracle database requests within one global transaction to the same Oracle RAC instance when possible, regardless of whether the requests come from an Oracle Tuxedo application server or Oracle WebLogic Server. This feature reduces the cost of redirecting database requests to a new Oracle RAC instance, and thus can improve overall application performance.

For more information, see Using Oracle Tuxedo Advanced Performance Pack.

Common XID

In previous releases, for global transactions, each participating group has its own transaction branch, and a distinguished transaction branch identifier (XID) identifies each branch. If a global transaction involves multiple groups, Tuxedo adopts two-phase commit on each branch, taking the first participating group as the coordinator.

With the common XID (transaction branch identifier) feature in this release, Tuxedo shares the XID of the coordinator group with all other groups within the same global transaction. This is as opposed to each group having its own XID and thus requiring two-phase commit in earlier releases if multiple groups are participating.

Common XID eliminates XA commit operations for groups that connect to the same Oracle RAC instance through the same service, because those groups use the coordinator branch directly.

In cases where all groups in a global transaction use the coordinator branch directly, the one-phase commit protocol is used instead of two-phase commit, which avoids writing the TLOG.

For more information, see Using Oracle Tuxedo Advanced Performance Pack.

Single Group Multiple Branches (SGMB)

In previous releases, servers in the same participating group use the same transaction branch in a global transaction; if these servers connect to different instances of the same RAC database, the transaction branch may fail with the XA error XAER_AFFINITY, meaning one branch cannot span different instances. For this reason, Tuxedo groups could only use singleton RAC services. A DTP service (one with the DTP option, -x in srvctl, specified) or a service offered by only one instance can be a singleton RAC service.

In this release, this feature eliminates the need to use a singleton RAC service when multiple servers in a server group participate in the same global transaction. If servers in the same server group and the same global transaction happen to connect to different RAC instances, a different transaction branch is used. This enables such applications to perform load balancing across available RAC instances.

For more information, see Using Oracle Tuxedo Advanced Performance Pack.

Note: The transaction still fails if more than 16 instances are involved in a single group.

Failover/Failback across Database Instances

Fast Application Notification (FAN) is a facility provided by Oracle Database that lets database clients learn about changes in the state of the database. These notifications let an application respond proactively to events such as a planned outage of a RAC node or an imbalance in database load. Tuxedo supports FAN notifications through a new system server, TMFAN, which monitors Oracle RAC instances and notifies Tuxedo application servers to establish new database connections when a database instance goes up or down.

For more information, see Using Oracle Tuxedo Advanced Performance Pack.

Load Balancing across RAC Instances

Based on FAN notifications, the Tuxedo TMFAN server can receive load balancing advisories that include the load of each RAC instance. If the change in advised load exceeds the threshold specified in the TMFAN command-line switches, Tuxedo requests are routed to the Tuxedo application server with the lower database load.

For more information, see Using Oracle Tuxedo Advanced Performance Pack.

Concurrent Global Transaction Table Lock

Oracle Tuxedo manages global transactions by maintaining a table of active global transactions and their participants, called the Global Transaction Table (GTT), in the Oracle Tuxedo bulletin board. As this table is accessed by multiple concurrent processes, it must be protected with a semaphore. Normally, the bulletin board lock is used to serialize access to this table. However, under heavy transaction load, contention for this lock can become substantial, resulting in an artificial performance bottleneck.

The XPP moves the serialization of access to the GTT from the bulletin board lock to a number of other locks, one for accessing the GTT, and one for each entry in the GTT. This allows a much greater level of concurrency when accessing the GTT and eliminates this bottleneck.

For more information, see Using Oracle Tuxedo Advanced Performance Pack.

 


Oracle Tuxedo Configuration

This section introduces the basic Oracle Tuxedo feature configuration on Exalogic. For more information, see the Oracle Tuxedo 12c Release 2 (12.1.3) Release Notes and Setting Up an Oracle Tuxedo Application.

Direct Cross Node Communication Leveraging RDMA

The configuration for "Direct Cross Node Communication Leveraging RDMA" includes the following.

UBBCONFIG File

Direct Cross Node Communication Leveraging RDMA is only supported in MP mode. To enable this feature, you must specify EECS in OPTIONS; otherwise, messages go through the BRIDGE.

There is one attribute for Direct Cross Node Communication Leveraging RDMA in the *RESOURCES section.

EXALOGIC_SHARED_PATH

The directory name for Oracle Tuxedo file transfer. The function of EXALOGIC_SHARED_PATH here is the same as that of the environment variable EXALOGIC_SHARED_PATH; however, at Tuxedo runtime the environment variable has higher priority. EXALOGIC_SHARED_PATH must be a shared directory with read/write permissions for all Tuxedo nodes, and can be specified in the *RESOURCES section only if RDMA is enabled.

There are five attributes for Direct Cross Node Communication Leveraging RDMA in the *MACHINES section.

RDMADAEMONIP

The IP address to which Msgq_daemon is bound. It must be configured, and it must be an IPoIB address (not an Ethernet-based IP address). Configure one Msgq_daemon per logical machine.

RDMADAEMONPORT

The port number on which Msgq_daemon listens. It must be configured.

RDMAQSIZE

The EMSQ queue size. The default value is 65536 bytes if not defined in the UBBCONFIG file.

RDMAQENTRIES

The EMSQ queue entry number, that is, the maximum number of messages allowed in this queue. The default value is 64 if not defined in the UBBCONFIG file.

EXALOGIC_MSGQ_CACHE_SIZE

The entry number for the Oracle Tuxedo EMSQ cache. The function of EXALOGIC_MSGQ_CACHE_SIZE here is the same as that of the environment variable EXALOGIC_MSGQ_CACHE_SIZE; however, the environment variable has higher priority. The value must be between 32 and 2048 inclusive. EXALOGIC_MSGQ_CACHE_SIZE can be specified in *MACHINES only when RDMA is enabled. The default value is 32 if it is not defined in UBBCONFIG. In some scenarios, Tuxedo performance can be improved by increasing this number. For more details, see Setting EXALOGIC_MSGQ_CACHE_SIZE.

After the RDMA option is enabled in the *RESOURCES section, the "TYPE" attribute of the *MACHINES section cannot be set, since by default all machines in MP mode must be Exalogic machines (of the same type) to support the RDMA feature.

You can also get/change the configuration via TM_MIB. For more information, see File Formats, Data Descriptions, MIBs, and System Processes Reference.
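For example, a minimal TM_MIB query through ud32 might look like the following sketch; it retrieves the T_MACHINE entry for one logical machine, and whether the RDMA attributes appear in the reply depends on the release (field names and values are tab-separated):

SRVCNM	.TMIB
TA_OPERATION	GET
TA_CLASS	T_MACHINE
TA_LMID	site1

Save the lines to a file and run ud32 < query.ud32 against the booted application.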

Listing 1 shows an example UBBCONFIG File with Direct Cross Node Communication Leveraging RDMA enabled.

Listing 1 UBBCONFIG File Example with Direct Cross Node Communication Leveraging RDMA Enabled
*RESOURCES
IPCKEY        87654
MASTER        site1,site2
MAXACCESSERS  40
MAXSERVERS    40
MAXSERVICES   40
MODEL         MP
OPTIONS       LAN,EECS
LDBAL         Y

*MACHINES
slce04cn01    LMID=site1
              APPDIR="/home/oracle/tuxedo12.1.1.0/samples/atmi/simpapp"
              TUXCONFIG="/home/oracle/tuxedo12.1.1.0/samples/atmi/simpapp/tuxconfig"
              TUXDIR="/home/oracle/tuxedo12.1.1.0"
              UID=601
              GID=601
              RDMADAEMONIP=192.168.10.1
              RDMADAEMONPORT=9800
              RDMAQSIZE=65536
              RDMAQENTRIES=64
slce04cn02    LMID=site2
              APPDIR="/home/oracle/tuxedo12.1.1.0/samples/atmi/simpapp/slave"
              TUXCONFIG="/home/oracle/tuxedo12.1.1.0/samples/atmi/simpapp/slave/tuxconfig"
              TUXDIR="/home/oracle/tuxedo12.1.1.0"
              UID=601
              GID=601
              RDMADAEMONIP=192.168.10.2
              RDMADAEMONPORT=9800
              RDMAQSIZE=65536
              RDMAQENTRIES=64

*GROUPS
GROUP1        LMID=site1    GRPNO=1    OPENINFO=NONE
GROUP2        LMID=site2    GRPNO=2    OPENINFO=NONE

*NETWORK
site1    NADDR="//slce04cn01:5432"
         NLSADDR="//slce04cn01:5442"
site2    NADDR="//slce04cn02:5432"
         NLSADDR="//slce04cn02:5442"

*SERVERS
DEFAULT:
         CLOPT="-A"
simpserv      SRVGRP=GROUP2 SRVID=3

*SERVICES
TOUPPER

Setting Shell Limit for Memory Lock

The shared memory used by Msgq_daemon is locked into physical memory to avoid being paged to the swap area, so it is necessary to set a proper memlock value in /etc/security/limits.conf.

Please use the following formula to get the minimum value for memlock:

[Msgq_daemon shared memory size] * 2 + MAXACCESSERS * 14,000 KB

Msgq_daemon shared memory size: The size of shared memory allocated by Msgq_daemon. For more information, see "Calculating Shared Memory Size for Msgq_daemon".

MAXACCESSERS: An attribute in the UBBCONFIG file.

For example:

Msgq_daemon shared memory size: 200*1024 kb

MAXACCESSERS: 100

200*1024*2 + 100 * 14000 = 1809600

Specify it in /etc/security/limits.conf as follows:

* hard memlock 1809600

* soft memlock 1809600
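After editing /etc/security/limits.conf, log in again and verify the effective limit with ulimit; this is a Linux shell sketch, and the value is reported in KB:

$ ulimit -l
1809600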

Setting Default Directory Name for File Transfer

Before starting Oracle Tuxedo, ensure that there is a shared directory for all Exalogic nodes when Direct Cross Node Communication Leveraging RDMA is enabled. Make sure that access permissions are properly set.

The default name is /u01/common/patches/tuxtmpfile; you can also set your own directory using the EXALOGIC_SHARED_PATH environment variable. This directory is used for Oracle Tuxedo file transfer. When the EMSQ is full, or the message size exceeds the queue size, Oracle Tuxedo puts the message into a temporary file under this directory and sends a notification directly to the remote process queue. The remote process can then retrieve the file once it receives the notification.
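A minimal shell sketch for preparing the directory, assuming /u01/common/patches is a share mounted on every node and that all Tuxedo OS accounts belong to a common group named tuxedo (both are assumptions for this sketch):

mkdir -p /u01/common/patches/tuxtmpfile
chgrp tuxedo /u01/common/patches/tuxtmpfile      # hypothetical group shared by all Tuxedo accounts
chmod 770 /u01/common/patches/tuxtmpfile         # read/write for users on every Tuxedo node
export EXALOGIC_SHARED_PATH=/u01/common/patches/tuxtmpfile   # only needed for a non-default path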

Direct Cross Domain Communication Leveraging RDMA

Using Direct Cross Domain Communication Leveraging RDMA requires UBBCONFIG file configuration.

Note: Direct Cross Domain Communication Leveraging RDMA requires you to enable Direct Cross Node Communication Leveraging RDMA first.

UBBCONFIG File

To enable this feature, you must specify the BYPASSDOM_ID, BYPASSDOM_SEQ_NUM, and BYPASSDOM_SHARED_DIR parameters in the *RESOURCES section of UBBCONFIG, as well as the EECS flag of the OPTIONS parameter.

There is also an optional attribute, MAXDOMAINS, which specifies the maximum number of domains within one domain group. The default is 32.

You can also get or change the configuration via TM_MIB. For more information, see File Formats, Data Descriptions, MIBs, and System Processes Reference.

Listing 2 shows a UBBCONFIG file example of enabling Direct Cross Domain Communication Leveraging RDMA.

Listing 2 UBBCONFIG File Example of Enabling Direct Cross Domain Communication Leveraging RDMA
*RESOURCES
IPCKEY        87654
MASTER        site1
MAXACCESSERS  40
MAXSERVERS    40
MAXSERVICES   40
MODEL         SHM
OPTIONS       EECS
LDBAL         Y
BYPASSDOM_ID  bddomgrp1
BYPASSDOM_SEQ_NUM  0
BYPASSDOM_SHARED_DIR  "/nfs/bypassdom/bddomgrp1/shareddir"
MAXDOMAINS    16

*MACHINES
slce04cn01    LMID=site1
              APPDIR="/home/oracle/tuxedo12.1.3.0/samples/atmi/simpapp"
              TUXCONFIG="/home/oracle/tuxedo12.1.3.0/samples/atmi/simpapp/tuxconfig"
              TUXDIR="/home/oracle/tuxedo12.1.3.0"
              UID=601
              GID=601
              RDMADAEMONIP="192.168.10.1"
              RDMADAEMONPORT=9800
              RDMAQSIZE=65536
              RDMAQENTRIES=64

*GROUPS
GROUP1        LMID=site1 GRPNO=1 OPENINFO=NONE

*SERVERS
DEFAULT:
              CLOPT="-A"
simpserv      SRVGRP=GROUP1 SRVID=3

*SERVICES
TOUPPER

Self-Tuning Lock Mechanism

For configuration information, see "Self-Tuning Lock Mechanism" in Using Oracle Tuxedo Advanced Performance Pack.

Oracle Tuxedo SDP Support

To enable Oracle Tuxedo SDP Support, you must specify EECS for OPTIONS in *RESOURCES section, and set the relevant configuration in UBBCONFIG file or DMCONFIG file.

You can also get or change the configuration via TM_MIB. For more information, see File Formats, Data Descriptions, MIBs, and System Processes Reference.

This section covers the following configurations:

MP
GWTDOMAIN
WSL
/WS client
JSL
WTC
Database

MP

MP mode is expected to work entirely inside the InfiniBand cluster, i.e., both master and slave machines are inside the IB cluster, so only SDP and IPoIB are considered for communication inside the cluster. In the bootstrap phase, tmboot, tlisten, bsbridge, and bridge use the socket API to communicate with each other.

GWTDOMAIN

If the node running GWTDOMAIN has multiple network interfaces (multi-homed) with multiple IP addresses, it is better to use an explicit IP address, rather than a host name, when configuring GWTDOMAIN in the DMCONFIG file. Typically, every Exalogic node has at least two types of network interface: an IB interface and an Ethernet interface. To illustrate how to configure GWTDOMAIN, assume the IB interface is bound to IP address IB_IP and the Ethernet interface to IP address ETH_IP.

Functionally, GWTDOMAIN acts in both server and client roles. As a server, it listens on the IP address and port number configured in the DMCONFIG file to accept connection requests from other GWTDOMAIN processes; as a client, it initiates connection requests to other GWTDOMAIN processes according to the policy configured in the DMCONFIG file.
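A minimal DMCONFIG sketch with both the listening and connecting addresses on the IB interface; the domain names, gateway group, and port are illustrative assumptions:

*DM_LOCAL
LDOM GWGRP=GWGRP1 TYPE=TDOMAIN DOMAINID="LDOM"

*DM_REMOTE
RDOM TYPE=TDOMAIN DOMAINID="RDOM"

*DM_TDOMAIN
LDOM NWADDR="sdp://IB_IP:5555"
RDOM NWADDR="sdp://REMOTE_IB_IP:5555"

With this configuration, GWTDOMAIN both listens and connects over SDP on the IB fabric; ETH_IP-based addresses would be used instead for peers outside the fabric.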

WSL
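By analogy with the JSL configuration shown below, the WSL listening address would carry the "sdp:" prefix and must be an IPoIB address; this is a hedged sketch, with the server group, server ID, and port chosen for illustration:

*SERVERS
WSL     SRVGRP=WSGRP SRVID=1000
        CLOPT="-A -- -n sdp://IB_IP:11100 -m1 -M10 -x1"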

/WS client
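For /WS clients, the workstation listener address in the WSNADDR environment variable would use the same SDP form; a hedged sketch, assuming the WSL address above:

export WSNADDR="sdp://IB_IP:11100"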

JSL

Configure JSL to listen on SDP

Prefixed "sdp:" to the network address, and the network address must be an IPoIB address as shown in Listing 11.

Listing 11 JSL Listening on SDP UBBCONFIG File Configuration Example
*SERVERS
DEFAULT:        CLOPT="-A"
JSL             SRVGRP=WSGRP SRVID=1001
                CLOPT="-A -- -nsdp: //IB_IP: 11101 -m1 -M10 -x1"

WTC

To enable an SDP connection between WTC and Oracle Tuxedo, perform the following steps:

  1. Specify the NWAddr of the WTC service Local/Remote Access Points in the following form:

     sdp://IB_IP:port

     This is the same as the GWTDOMAIN NWADDR configuration in the DMCONFIG file.

  2. Add the Java option "-Djava.net.preferIPv4Stack=true" to the java command line that starts the WLS server.

Note: If the WTC access point has SSL enabled, the SSL configuration is ignored after SDP is configured.
Note: Only WebLogic Server 12c (12.1.1) and higher can connect to Oracle Tuxedo via SDP. For more information, see Enable IPv4 for SDP transport, NWAddr attribute for WTC local Tuxedo Domain configuration, and NWAddr attribute for WTC remote Tuxedo Domain configuration.

Shared Memory Interprocess Communication

For configuration information, see "Shared Memory Interprocess Communication" in Using Oracle Tuxedo Advanced Performance Pack.

Partial One Phase Read-Only Optimization for RAC

For configuration information, see "Partial One Phase Read-Only Optimization for RAC" in Using Oracle Tuxedo Advanced Performance Pack.

Database

The Oracle Tuxedo system uses the X/Open XA interface for communicating with the various resource managers. The XA Standard is widely supported in all the major database vendor products.

You can use SDP (Sockets Direct Protocol) for Oracle Database invocations. There is no special requirement for the Oracle Tuxedo application.

Please configure the database to support InfiniBand, as described in Configuring SDP Protocol Support for Infiniband Network Communication to the Database Server in the Oracle Database Net Services Administrators Guide.
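As a sketch, an SDP connect descriptor in tnsnames.ora sets PROTOCOL=SDP; the alias, host, port, and service name below are assumptions for illustration:

ORCL_SDP =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = SDP)(HOST = IB_IP)(PORT = 1522))
    (CONNECT_DATA = (SERVICE_NAME = orcl)))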

Note: The following SDP parameters affect performance when sending large data blocks to the database.

For example, you can set the two parameters in “/etc/modprobe.conf” on the server node as follows:

options ib_sdp sdp_zcopy_thresh=0 recv_poll=0

Choosing APPDIR

You can deploy your Oracle Tuxedo application to a shared directory on Exalogic in an MP environment (called Shared Applications Staging), provided that the EECS option is enabled and MP mode is used. Before booting the Oracle Tuxedo application, ensure the following parameters are set correctly in the UBBCONFIG file:

TUXCONFIG

The TUXCONFIG must be different for each node.

TLOGDEVICE

The TLOGDEVICE must be different for each node.

ULOGPFX

Set a different path for ULOGPFX if you want a separate ULOG for each node.

Access Permission for shared APPDIR

Users on different Exalogic nodes must have the same OS uid and gid.

In addition, each node should use a distinct TMIFRSVR repository_file, standard output/error files, AUDITLOG file, and ALOGPFX to keep logging cleanly separated. All applications should be given distinct names to make the best use of the Shared Applications Staging feature.

Listing 13 shows a UBBCONFIG file shared APPDIR example.

Listing 13 UBBCONFIG File Shared APPDIR
...
*MACHINES
slce04cn01 LMID=site1
          APPDIR="/home/oracle/tuxapp"
          TUXCONFIG="/home/oracle/tuxapp/tuxconfig_cn01"
          TUXDIR="/home/oracle/tuxedo11gR1"
          TLOGDEVICE=/home/oracle/tuxapp/TLOG1
          ULOGPFX="/home/oracle/tuxapp/ULOG_cn01"
          RDMADAEMONIP="192.168.10.1"
          RDMADAEMONPORT=9800
          RDMAQSIZE=1048576
          RDMAQENTRIES=1024
slce04cn02 LMID=site2
          APPDIR="/home/oracle/tuxapp"
          TUXCONFIG="/home/oracle/tuxapp/tuxconfig_cn02"
          TUXDIR="/home/oracle/tuxedo11gR1"
          TLOGDEVICE=/home/oracle/tuxapp/TLOG2
          ULOGPFX="/home/oracle/tuxapp/ULOG_cn02"
          RDMADAEMONIP="192.168.10.2"
          RDMADAEMONPORT=9800
          RDMAQSIZE=1048576
          RDMAQENTRIES=1024

If SECURITY is set in the UBBCONFIG file, only MP domains with EECS enabled can share a common APPDIR.

Tightly Coupled Transactions Spanning Domains

For configuration information, see "Tightly Coupled Transactions Spanning Domains" in Using Oracle Tuxedo Advanced Performance Pack.

XA Transaction Affinity

For configuration information, see "XA Transaction Affinity" in Using Oracle Tuxedo Advanced Performance Pack.

Common XID

For configuration information, see "Common XID" in Using Oracle Tuxedo Advanced Performance Pack.

Single Group Multiple Branches (SGMB)

For configuration information, see "Single Group Multiple Branches (SGMB)" in Using Oracle Tuxedo Advanced Performance Pack.

Failover/Failback across Database Instances

For configuration information, see "Failover/Failback across Database Instances" in Using Oracle Tuxedo Advanced Performance Pack.

Load Balancing across RAC Instances

For configuration information, see "Load Balancing across RAC Instances" in Using Oracle Tuxedo Advanced Performance Pack.

Concurrent Global Transaction Table Lock

For configuration information, see "Concurrent Global Transaction Table Lock" in Using Oracle Tuxedo Advanced Performance Pack.

 


Best Practices to Optimize Performance

This section contains the following topics:

Direct Cross Node Communication Leveraging RDMA

Scenarios recommended

This feature provides the ability for a client to directly access a remote server, eliminating the bottleneck on the BRIDGE. When Tuxedo handles highly concurrent remote access in MP mode, throughput improves significantly if this feature is enabled in UBBCONFIG.

Note: The following scenario is not recommended for this feature: the client connects to the remote server through the BRIDGE and works with it for only a relatively short duration, for example, tpinit() followed by several tpcall() invocations, then tpterm(). The overhead of creating/opening/closing an RDMA connection is much higher than that of a Unix IPC queue, so no obvious performance improvement can be expected in this scenario.
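A minimal ATMI sketch of the short-lived client pattern described above (error handling is trimmed, and the TOUPPER service comes from the simpapp sample used elsewhere in this guide):

#include <stdio.h>
#include <string.h>
#include <atmi.h>

int main(void)
{
    char *sbuf, *rbuf;
    long rlen;

    if (tpinit(NULL) == -1) {                /* join the application */
        fprintf(stderr, "tpinit failed: %s\n", tpstrerror(tperrno));
        return 1;
    }
    sbuf = tpalloc("STRING", NULL, 16);      /* request buffer */
    rbuf = tpalloc("STRING", NULL, 16);      /* reply buffer */
    strcpy(sbuf, "hello");
    if (tpcall("TOUPPER", sbuf, 0, &rbuf, &rlen, 0) == -1)
        fprintf(stderr, "tpcall failed: %s\n", tpstrerror(tperrno));
    else
        printf("reply: %s\n", rbuf);
    tpfree(sbuf);
    tpfree(rbuf);
    tpterm();                                /* leave the application */
    return 0;
}

Every tpinit()/tpterm() pair sets up and tears down the client's RDMA resources, so the setup cost pays off only when amortized over many calls.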

Setting EXALOGIC_MSGQ_CACHE_SIZE

Each Oracle Tuxedo thread has an EMSQ runtime cache; the default entry number is 32. You can change it to any value between 32 and 2048 using the EXALOGIC_MSGQ_CACHE_SIZE environment variable before the Oracle Tuxedo application starts, or by setting it in UBBCONFIG. In some scenarios, increasing this number can improve Oracle Tuxedo performance.
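For example, a hedged shell sketch that raises the cache before booting (512 is an illustrative value within the 32-2048 range):

export EXALOGIC_MSGQ_CACHE_SIZE=512
tmboot -y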

Calculating Shared Memory Size for Msgq_daemon

Using tmloadcf

To get the recommended value, run tmloadcf -c ubb against a UBBCONFIG file like the one shown in Listing 14.

Listing 14 UBBCONFIG File *MACHINES Section
*MACHINES
ex03    LMID=site1
        ...
        RDMADAEMONIP="192.168.10.1"
        RDMADAEMONPORT=9800
        RDMAQSIZE=100000
        RDMAQENTRIES=100
        MAXACCESSERS=100
        ...
ex03_1  LMID=site2
        ...
        RDMADAEMONIP="192.168.10.2"
        RDMADAEMONPORT=9800
        RDMAQENTRIES=1000
        MAXACCESSERS=200
        ...
ex04    LMID=site3
        ...
        RDMADAEMONIP="192.168.10.3"
        RDMADAEMONPORT=9800
        RDMAQSIZE=100000
        RDMAQENTRIES=100
        MAXACCESSERS=200
        MAXSERVERS=100
        ...
ex04_1  LMID=site4
        ...
        RDMADAEMONIP="192.168.10.4"
        RDMADAEMONPORT=9800
        RDMAQSIZE=1000000
        RDMAQENTRIES=1000
        MAXACCESSERS=100
        ...

Running tmloadcf -c ubb on this file produces the output shown in Listing 15.

Listing 15 tmloadcf -c ubb Output Example
Ipc sizing (minimum /T values only) ...

                   Fixed Minimums Per Node

SHMMIN: 1
SHMALL: 1
SEMMAP: SEMMNI

                   Variable Minimums Per Node

                     SEMUME,      A                                   SHMMAX
                     SEMMNU,      *                                   *
Node       SEMMNS   SEMMSL   SEMMSL   SEMMNI   MSGMNI   MSGMAP   SHMSEG   RCDMSZ
------     ------   ------   ------   ------   ------   ------   ------   ------
ex03          126       15      120    A + 2       26       52    1178K     220M
ex04          221       28      220    A + 1       26       52    1340K     340M
ex04_1        121       15      120    A + 1       26       52    1178K    1300M
ex03_1        221       28      220    A + 1       25       50    1340K    2500M

RCDMSZ increases linearly as any of the following UBBCONFIG attributes increases: RDMAQSIZE, RDMAQENTRIES, MAXACCESSERS, and MAXSERVERS.

Adjusting the shared memory size:

After getting the RCDMSZ value from tmloadcf, you can adjust the actual size according to runtime factors, such as the number of clients and servers actually attached at peak load.

Note: For detailed information about configuration, please see Direct Cross Node Communication Leveraging RDMA in Oracle Tuxedo Configuration.

Self-Tuning Lock Mechanism

For best practices, see "Self-Tuning Lock Mechanism" in Using Oracle Tuxedo Advanced Performance Pack.

Shared Memory Interprocess Communication

For best practices, see "Shared Memory Interprocess Communication" in Using Oracle Tuxedo Advanced Performance Pack.

Partial One Phase Read-Only Optimization for RAC

For best practices, see "Partial One Phase Read-Only Optimization for RAC" in Using Oracle Tuxedo Advanced Performance Pack.

XA Transaction Affinity

For best practices, see "XA Transaction Affinity" in Using Oracle Tuxedo Advanced Performance Pack.

Common XID

For best practices, see "Common XID" in Using Oracle Tuxedo Advanced Performance Pack.

Failover/Failback across Database Instances

For best practices, see "Failover/Failback across Database Instances" in Using Oracle Tuxedo Advanced Performance Pack.

Load Balancing across RAC Instances

For best practices, see "Load Balancing across RAC Instances" in Using Oracle Tuxedo Advanced Performance Pack.

Single Group Multiple Branches (SGMB)

For best practices, see "Single Group Multiple Branches (SGMB)" in Using Oracle Tuxedo Advanced Performance Pack.

Direct Cross Domain Communication Leveraging RDMA

Scenarios Recommended

This feature provides the ability for a client to directly access a remote service across domains, eliminating the bottleneck on GWTDOMAIN. When Tuxedo handles highly concurrent access to remote domains, this feature significantly improves throughput.

Note: The following scenario is not recommended for this feature: the client accesses a service in a remote domain for only a relatively short time, for example, tpinit() followed by several tpcall() invocations, then tpterm(). The overhead of creating/opening/closing an RDMA connection is much higher than that of a Unix IPC queue, so this feature cannot bring an obvious performance improvement in this scenario.

Oracle Tuxedo SDP Support

SDP can be used for all Tuxedo network communications, but it is not recommended for the scenarios addressed by Direct Cross Node Communication Leveraging RDMA or Direct Cross Domain Communication Leveraging RDMA.

 


Running Oracle Tuxedo

Running Oracle Tuxedo with Direct Cross Node Communication Leveraging RDMA enabled differs from running on a non-Exalogic platform in one respect: tux_msgq_monitor must be started before booting an Oracle Tuxedo application. This section includes the following topics:

Start/Stop tux_msgq_monitor

Assistant Tools

Shell Scripts for Start/Stop Oracle Tuxedo

There are shell scripts that simplify the startup/shutdown procedure. With these tools, you can run a single command to start/stop both tux_msgq_monitor and the Oracle Tuxedo application. Before running these commands, ensure the environment variables TUXCONFIG, LD_LIBRARY_PATH, and APPDIR are set properly.

On the master node, there are two shell scripts:

tmboot.sh -i daemon_ip -d daemon_port -M shm_size -K shm_key [-l nlsaddr]

This script starts tux_msgq_monitor, executes tmboot to start the Oracle Tuxedo application, and starts tlisten if the "-l" option is specified. (A combined usage sketch follows this list.)

tmshut.sh

Stops both the Oracle Tuxedo application and tux_msgq_monitor.

On each slave node, there are two shell scripts:

tlisten_start.sh -l nlsaddr -i daemon_ip -d daemon_port -M shm_size -K shm_key

This script starts the tux_msgq_monitor and tlisten.

tlisten_stop.sh

This script terminates tlisten and tux_msgq_monitor.
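A hedged end-to-end sketch using the addresses from the earlier listings; the shm_size and shm_key values are illustrative assumptions:

# On each slave node:
tlisten_start.sh -l //slce04cn02:5442 -i 192.168.10.2 -d 9800 -M 209715200 -K 88888

# Then on the master node:
tmboot.sh -i 192.168.10.1 -d 9800 -M 209715200 -K 88888 -l //slce04cn01:5442

# Shutdown reverses the order:
tmshut.sh          # on the master node
tlisten_stop.sh    # on each slave node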
Note: In MP mode, for startup, run the commands in the following sequence: first run tlisten_start.sh on each slave node, then run tmboot.sh on the master node.
Note: For shutdown, run the commands in the reverse sequence: first run tmshut.sh on the master node, then run tlisten_stop.sh on each slave node.

 


Running Oracle Tuxedo on OVM

There are no special requirements for Oracle Tuxedo running on OVM.

Upgrade

There are no special requirements if you do not use any Exalogic optimization. For more information, see Oracle Tuxedo Interoperability Guide and Upgrading the Oracle Tuxedo System to 12c Release 1 (12.1.1).

Note: If any Exalogic optimization is specified in OPTIONS, you cannot perform a hot upgrade from any previous release.

 


Appendix

Terminology

SDP: Sockets Direct Protocol

Oracle Tuxedo installation

This section contains the following topics:

Configuration for Exalogic
Platform Requirements
Choosing Oracle Tuxedo Home
Start Installation

Configuration for Exalogic

Before the Oracle Tuxedo installation, you should understand the current state of the Exalogic environment.

It is assumed that you have completed all tasks described in the Oracle Exalogic Machine Owner's Guide, which discusses your data center site preparation, Oracle Exalogic machine commissioning, initial networking configuration including IP address assignments, and initial setup of the Sun ZFS Storage 7320 appliance.

Platform Requirements

Oracle Tuxedo optimizations can run on both Exalogic Linux and SPARC servers. For more details, see Oracle Tuxedo 12c Release 2 (12.1.3) Platform Data Sheets.

Choosing Oracle Tuxedo Home

We recommend that you install the Oracle Tuxedo product binaries in one of the shares on the Sun ZFS Storage 7320 appliance, so you can run Oracle Tuxedo on any Exalogic node from a single binary copy.

Note: The share, which is a shared file system, must be accessible from all compute nodes. You can create a local user account on each node with the same UID and GID (to avoid permission issues), or create NIS accounts for users.

Oracle Tuxedo must be installed in a different directory for each installation if you want to develop the Oracle Tuxedo plug-in interface with a different implementation per installation.

Start Installation

The Oracle Tuxedo 12c Release 2 (12.1.3) installer is based on the Oracle Universal Installer (OUI). For more information, see Installing the Oracle Tuxedo System.

