6 Setting Up Geographic Redundancy

This chapter describes how to set up geographically redundant site sets for Oracle Communications Services Gatekeeper. The attributes and operations supporting geographic redundancy are explained, and a configuration workflow is provided.

Understanding Geographic Redundancy

The Geographic Redundancy service replicates data between geographically distant sites so that applications can switch to a different site to process traffic (for example, in case of the catastrophic failure of one site). All geographically redundant sites have available all of the configuration data (account information, system Service Level Agreements (SLAs), and budgets) necessary for SLA enforcement. For an overview of geographic redundancy, see "Redundancy, Notifications, Load Balancing, and High Availability" in Services Gatekeeper Concepts.

Each set of geographically redundant sites has a single geomaster system, which is connected to one or more slave systems. These sites are frequently set up in geomaster-slave pairs, but a single geomaster can have any number of slaves. Each geographic site has a name, which is used for looking up data relevant to the site's systems. The names of the slave sites are defined in the geomaster system. Depending on how you configure geographic redundancy, each application may have to configure communication with all sites, or with just one. In the latter case, the application needs to register for subscriptions at one site only; during a site failure, the other site automatically handles additional subscription messages.

Understanding the Geographic Redundancy Configuration Options

You have these options for setting up geographic redundancy:

  • Configuring basic geographic redundancy so that all applications must register with each site. This option is most practical if you do not have very many applications. It requires the most up-front configuration, but less database storage space. See "Configuring Basic Geographic Redundancy" for instructions.

  • Configuring geographic redundancy so that applications need only register with one site. This option is most practical if you have many applications (or sites) to administer. It requires less up-front configuration, but more database storage space. See "Configuring Geographic Redundancy Without Registering Applications at Every Site" for instructions.

    This style of geographic redundancy is available only with a MySQL database and for the following types of SMS, MMS, and terminal location traffic:

    • SMS

      • ParlayX

      • OneAPI

      • Native SMPP

    • MMS

      • ParlayX

      • OneAPI

      • Native MM7

    • Terminal Location

      • ParlayX

Best Practices for Domain-Specific GeoRedundant Services

The following best practices are recommended when configuring domain-specific georedundant services:

  • When configuring a domain-specific georedundant service on two domains that share hardware, ensure the following:

    • The domains are set up as separate installations with separate databases.

    • Each domain has a unique cluster configuration.

  • To avoid any issues when a change is made to the configuration, do the following when you configure Coherence clusters:

    • Use the unicast type of addressing.

    • Add Well Known Addresses (WKA) for the NT servers.

    • Configure Coherence servers for each NT server, if necessary.

About Unicast Addressing with WKA

By using unicast addressing and WKA when you configure the Coherence cluster, you configure servers to use specific IP addresses for Coherence. Doing so gives you control over which servers can join the Coherence cluster.

With multicast addressing, there is no restriction on which servers are allowed to join a cluster. Any server in your network that uses the same multicast address can potentially join the cluster, which can lead to a server joining the wrong Coherence cluster.

Unicast addressing with WKA is important for a geo-redundant setup and generally recommended for every installation of Services Gatekeeper.
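
One common way to pin Coherence to unicast with WKA is through standard Coherence system properties in each NT server's start script. The following is a minimal sketch only: the property names are standard Coherence system properties, but the script location, address, and port shown here are illustrative assumptions; adapt them to your installation.

# Illustrative only: add to the JAVA_OPTIONS used to start each NT server
# (for example, in setDomainEnv.sh). The address and port are examples.
JAVA_OPTIONS="${JAVA_OPTIONS} \
  -Dtangosol.coherence.localhost=10.161.159.189 \
  -Dtangosol.coherence.wka=10.161.159.189 \
  -Dtangosol.coherence.wka.port=8088"
export JAVA_OPTIONS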

About Recovering from a Geographically Redundant Domain Failure

If a geographically redundant domain fails:

  • The application's connection and login attempts time out or receive an error response from the domain load balancer.

  • The application then logs in to the peer geographically redundant domain (if not already logged in) and resumes sending and receiving traffic.

  • SLA and policy requests, and budget limits, can optionally continue to be enforced in the peer domain. You control this behavior with configuration settings.

  • Applications should periodically attempt to log in to the failed domain.

  • When the failed domain recovers, the application logs back into the original domain, and logs out of the peer.

  • Alternatively, an application could continue to use the peer domain until reset by the operator.

Configuring Basic Geographic Redundancy

The following tasks are required to configure basic geographic redundancy. You must perform all of them at each site.

Configuring Both Sites for Geographic Redundancy

To use geographic redundancy, each site must be appropriately configured using GeoRedundantServiceMBean. Using this service, you:

  • Define the ID of the local site in the GeoSiteId field.

  • Define the number of failed attempts to reach a remote site before an alarm should be raised in the RemoteSiteReachabilityAlarmThreshold field.

  • Define the remote site using the setSiteAddress method.

See "GeoRedundantService Reference" for details.

Defining One Site as the GeoMaster

One site of the site pair must be designated the geomaster site using the GeoStorageServiceMBean. You define the geomaster site using the GeoMasterSiteId field.

See "GeoStorageService Reference" for details.

Configuring Geographic Redundancy Without Registering Applications at Every Site

This section explains how to configure geographic redundancy so that applications are only required to register for notifications at a single geographically redundant site. Delivery receipts can be delivered to any site and are replicated across all sites.

Each geographically dispersed site hosts a Services Gatekeeper domain (a collection of managed servers and clusters). You deploy and manage these geographically redundant sites independently, and you can distribute load across them. That is, multiple active geographically redundant domains are supported, so applications may connect to any of the geographically redundant domains. These domains enforce SLAs and policies by periodically synchronizing traffic data with each other, at a configurable synchronization interval.

The geographically redundant implementations:

  • Are deployed and managed independently

  • Perform bi-directional data synchronization between geographic domain pairs

  • Have failover monitoring mechanisms, and raise alarms when they detect problems

You configure this geographically redundant option by replicating the database tables that contain messaging information among the geographically redundant sites, and by using the WebLogic Coherence write-through mode. Once these tasks are complete, your applications need only register for subscriptions at one of the geographically redundant sites.

Understanding the Configuration Prerequisites

Before starting the configuration process, confirm that:

  • Services Gatekeeper is installed on all of the hosts.

  • A MySQL database is installed on all of the hosts.

  • The Services Gatekeeper domains are installed on all of the hosts.

  • You have followed the instructions in "Configuring Basic Geographic Redundancy" and configured basic geographic redundancy.

Understanding the Configuration Example

The configuration instructions in this section use the example systems listed here. Both of these systems have Services Gatekeeper and MySQL 5.6 installed, and have been configured as master/slaves in a geographically redundant cluster:

  • geotest1, with an IP address of 10.161.159.189

  • geotest2, with an IP address of 10.161.159.166

This guide uses the geotest1 and geotest2 host names in the configuration steps for clarity; however, it is a best practice to use IP addresses in a production system.

Table 6-1 lists values that are used in the configuration example.

Table 6-1 Geographically Redundant Configuration Elements

Configuration Element: db_name
  geotest1 value: ocsg60_geosite1
  geotest2 value: ocsg60_geosite2

Configuration Element: GeoRedundantService.GeoSiteId
  geotest1 value: site1
  geotest2 value: site2

Configuration Element: GeoRedundantService.setSiteAddress(...)
  geotest1 value: (site2) -> t3://10.161.159.166:9001,10.161.159.166:9101
  geotest2 value: (site1) -> t3://10.161.159.189:9001,10.161.159.189:9101

Configuration Element: GeoStorageService.GeoMasterSiteId
  geotest1 value: site1
  geotest2 value: site2

Configuration Element: AccountService.SessionRequired
  geotest1 value: Disabled
  geotest2 value: Disabled

Configuration Element: SMPPServiceMBean.ConnectionBasedRouting
  geotest1 value: False
  geotest2 value: False

Configuration Element: SMPPServiceMBean.OfflineMO
  geotest1 value: True
  geotest2 value: True

Configuration Element: SMPPServiceMBean.SkipAddressrangeCheckInBindRequest
  geotest1 value: True
  geotest2 value: True


Configuring the MySQL Database

Configure the MySQL servers on your master/slave nodes as both masters and slaves. You do this by using the my.cnf file. The example my.cnf files in this section show you how.

Example 6-1 lists the my.cnf configuration settings used by geotest1:

Example 6-1 Example geotest1 my.cnf file

[mysqld]
server-id              = 1
log_bin                = /var/lib/mysql/mysql-bin.log
#This is the database to log statements for
binlog-do-db           = ocsg60_geosite1
 
#The remote site uses a different name, need to map it to our local name
replicate-rewrite-db   = ocsg60_geosite2->ocsg60_geosite1
 
##Tables to replicate (local db name, i.e., after rewrite)
#SMS
replicate-do-table     = ocsg60_geosite1.native_smpp_session_store
replicate-do-table     = ocsg60_geosite1.pl_sms_delivery_notif
replicate-do-table     = ocsg60_geosite1.pl_sms_offline_notif
replicate-do-table     = ocsg60_geosite1.pl_sms_online_notif
replicate-do-table     = ocsg60_geosite1.pl_sms_smpp_mt_sms
replicate-do-table     = ocsg60_geosite1.pl_sms_smpp_mt_dr
replicate-do-table     = ocsg60_geosite1.pl_sms_smpp_mo_sms
 
#MMS
replicate-do-table     = ocsg60_geosite1.pl_mms_mt_dr_mms
replicate-do-table     = ocsg60_geosite1.pl_mms_mo_mms
replicate-do-table     = ocsg60_geosite1.pl_mms_mo_content_mms
replicate-do-table     = ocsg60_geosite1.pl_legacy_mms_mt
replicate-do-table     = ocsg60_geosite1.pl_mms_dr_subscribe
replicate-do-table     = ocsg60_geosite1.pl_mms_offline_notif
replicate-do-table     = ocsg60_geosite1.pl_mms_online_notif
replicate-do-table     = ocsg60_geosite1.pl_legacy_mm7_notif
replicate-do-table     = ocsg60_geosite1.pl_legacy_mms_status
replicate-do-table     = ocsg60_geosite1.pl_legacy_mms_vas_id
replicate-do-table     = ocsg60_geosite1.pl_legacy_mms_vasp_id
 
#TL
replicate-do-table     = ocsg60_geosite1.pl_tl_mlp_trigger_info

Example 6-2 lists the my.cnf configuration file used by geotest2:

Example 6-2 Example geotest2 my.cnf File

[mysqld]
server-id              = 2
log_bin                = /var/lib/mysql/mysql-bin.log
 
#The database to log statements for
binlog-do-db           = ocsg60_geosite2
 
#The remote site uses a different name, need to map it to our local name
replicate-rewrite-db   = ocsg60_geosite1->ocsg60_geosite2
 
##Tables to replicate (local db name, i.e., after rewrite)
#SMS
replicate-do-table     = ocsg60_geosite2.native_smpp_session_store
replicate-do-table     = ocsg60_geosite2.pl_sms_delivery_notif
replicate-do-table     = ocsg60_geosite2.pl_sms_offline_notif
replicate-do-table     = ocsg60_geosite2.pl_sms_online_notif
replicate-do-table     = ocsg60_geosite2.pl_sms_smpp_mt_sms
replicate-do-table     = ocsg60_geosite2.pl_sms_smpp_mt_dr
replicate-do-table     = ocsg60_geosite2.pl_sms_smpp_mo_sms
 
#MMS
replicate-do-table     = ocsg60_geosite2.pl_mms_mt_dr_mms
replicate-do-table     = ocsg60_geosite2.pl_mms_mo_mms
replicate-do-table     = ocsg60_geosite2.pl_mms_mo_content_mms
replicate-do-table     = ocsg60_geosite2.pl_legacy_mms_mt
replicate-do-table     = ocsg60_geosite2.pl_mms_dr_subscribe
replicate-do-table     = ocsg60_geosite2.pl_mms_offline_notif
replicate-do-table     = ocsg60_geosite2.pl_mms_online_notif
replicate-do-table     = ocsg60_geosite2.pl_legacy_mm7_notif
replicate-do-table     = ocsg60_geosite2.pl_legacy_mms_status
replicate-do-table     = ocsg60_geosite2.pl_legacy_mms_vas_id
replicate-do-table     = ocsg60_geosite2.pl_legacy_mms_vasp_id
 
#TL
replicate-do-table     = ocsg60_geosite2.pl_tl_mlp_trigger_info

Configuring the Services Gatekeeper Cache

Change the relevant Services Gatekeeper cache configuration so that data is read and written directly to the database instead of being cached.

This server-cache configuration guarantees, for example, that when you send an SMS message, the correlation data is available to both geographically redundant sites when the delivery receipt is received. There is a trade-off in performance.

Replicate the following database tables for geographic redundancy, both in MySQL and in the Services Gatekeeper cache:

  • native_smpp_session_store

  • pl_legacy_mm7_notif

  • pl_legacy_mms_mt

  • pl_legacy_mms_status

  • pl_legacy_mms_vas_id

  • pl_legacy_mms_vasp_id

  • pl_mms_dr_subscribe

  • pl_mms_mo_content_mms

  • pl_mms_mo_mms

  • pl_mms_mt_dr_mms

  • pl_mms_offline_notif

  • pl_mms_online_notif

  • pl_sms_delivery_notif

  • pl_sms_offline_notif

  • pl_sms_online_notif

  • pl_sms_smpp_mo_sms

  • pl_sms_smpp_mt_dr

  • pl_sms_smpp_mt_sms

  • pl_tl_mlp_trigger_info

Replicating SMS Configuration Database Tables

To replicate SMS messaging database tables across geographically redundant sites:

  1. Navigate to domain_home/config/store_schema on your administration server.

  2. Expand the com.bea.wlcp.wlng.plugin.sms.common.store_release.jar file.

  3. Open the wlng-cachestore-config-extensions.xml file for editing.

  4. Make these changes for the type of application you are using (a scripted version of this edit appears after these steps):

    • OneAPI Applications:

      Change: wlng.db.wt.plugin.sms.common.sms_delivery_notif_store

      To: geo.wlng.local.wt.plugin.sms.common.sms_delivery_notif_store

    • OneAPI and Parlay X applications:

      Change: wlng.db.wt.plugin.sms.common.sms_offline_notif_store

      To: geo.wlng.local.wt.plugin.sms.common.sms_offline_notif_store

      And

      Change: wlng.db.wt.plugin.sms.common.sms_online_notif_store

      To: geo.wlng.local.wt.plugin.sms.common.sms_online_notif_store

  5. Save and close wlng-cachestore-config-extensions.xml file.

  6. If you use Extended Web Service or Binary SMS applications, expand the oracle.ocsg.plugin.sms.px21.smpp.store_release.jar file, open its wlng-cachestore-config-extensions.xml file for editing, and make this change:

    • Change: wlng.db.wt.plugin.sms.common.sms_binary_notif_store

    • To: geo.wlng.local.wt.plugin.sms.common.sms_binary_notif_store

  7. Stop and restart all NT servers in your Services Gatekeeper implementation.
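
If you prefer to script the edit in step 4 rather than make it by hand, the following shell sketch shows one way to do it for the delivery-notification store. It assumes GNU sed and the jar tool are on your path; the file and store names are taken from the steps above, and the same pattern applies to the MMS changes in the next section.

% cd domain_home/config/store_schema
% jar xf com.bea.wlcp.wlng.plugin.sms.common.store_release.jar wlng-cachestore-config-extensions.xml
% sed -i 's/wlng\.db\.wt\.plugin\.sms\.common\.sms_delivery_notif_store/geo.wlng.local.wt.plugin.sms.common.sms_delivery_notif_store/g' wlng-cachestore-config-extensions.xml
% jar uf com.bea.wlcp.wlng.plugin.sms.common.store_release.jar wlng-cachestore-config-extensions.xml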

Replicating MMS Configuration Database Tables

To replicate MMS messaging database tables across geographically redundant sites:

  1. Navigate to domain_home/domain_name/config/store_schema.

  2. For OneAPI applications, expand the com.bea.wlcp.wlng.plugin.multimediamessaging.px21.mm7.store_release.jar file.

  3. Open the wlng-cachestore-config-extensions.xml file for editing.

  4. Make these changes:

    • Change: wlng.db.wt.plugin.mms.common.mms_dr_subscribe_store

    • To: geo.wlng.local.wt.plugin.mms.common.mms_dr_subscribe_store

  5. Save and close the file.

  6. For OneAPI and Parlay X 2.1 applications, expand the com.bea.wlcp.wlng.plugin.multimediamessaging.px21.mm7.store_release.jar file.

  7. Open the wlng-cachestore-config-extensions.xml file for editing.

  8. Make these changes:

    • Change wlng.db.wt.plugin.mms.common.mms_offline_notif_store

    • To: geo.wlng.local.wt.plugin.mms.common.mms_offline_notif_store

    and

    • Change: wlng.db.wt.plugin.mms.common.mms_online_notif_store

    • To: geo.wlng.local.wt.plugin.mms.common.mms_online_notif_store

  9. Save and close the file.

  10. Stop and restart all NT servers in your Services Gatekeeper implementation.

Replicating Terminal Location Configuration Database Tables

The terminal location pl_tl_mlp_trigger_info database table stores both user and traffic information, which makes it impractical to replicate using the Services Gatekeeper geographic redundancy feature. Instead, use the MySQL database table replication feature to replicate this table. See your MySQL documentation for details. If you use a different database, see that product's documentation for instructions on how to replicate database tables.

Configuring Services Gatekeeper MBeans

See Table 6-1 for the list of MBeans and the settings required.

Configuring Cache Types

Manually modify these storage schema jar files for geographic redundancy:

  • oracle.ocsg.plugin.sms.native.smpp.store_6.0.0.0.jar

  • com.bea.wlcp.wlng.plugin.multimediamessaging.mm7.store_6.0.0.0.jar

  • com.bea.wlcp.wlng.plugin.multimediamessaging.px21.mm7.store_6.0.0.0.jar

  • com.bea.wlcp.wlng.plugin.sms.common.store_6.0.0.0.jar

  • com.bea.wlcp.wlng.plugin.terminallocation.mlp.store_6.0.0.0.jar

  • oracle.ocsg.plugin.sms.px21.smpp.store_6.0.0.0.jar

Protecting the Modified MBean Files

To keep subsequent patches from overwriting the jar files you modified in "Configuring Cache Types", protect them with these steps.

Run these commands on each file in sequence:

% jar xf store_schema_filename.jar wlng-cachestore-config-extensions.xml
% mv wlng-cachestore-config-extensions.xml store_schema_filename.xml
% chmod 640 store_schema_filename.xml

For example:

% jar xf com.bea.wlcp.wlng.plugin.multimediamessaging.mm7.store_6.0.0.0.jar wlng-cachestore-config-extensions.xml
% mv wlng-cachestore-config-extensions.xml com.bea.wlcp.wlng.plugin.multimediamessaging.mm7.store_6.0.0.0.xml
% chmod 640 com.bea.wlcp.wlng.plugin.multimediamessaging.mm7.store_6.0.0.0.xml

Tip:

Modify these files on your administration server, then copy them to each managed server to ensure that every server has an identical configuration.

Note:

When changing cache configuration, you MUST do a full cluster restart (NOT a rolling restart) to avoid having nodes with conflicting cache configurations. Follow these steps:
  1. Stop all nodes in the cluster.

  2. Start the administration server.

  3. Start the rest of the servers in the domain.

Changing the type_id of Tables Replicated by MySQL

For each table that is replicated by MySQL, you must change the type_id (cache_name for the native_smpp_session_store table) to match the value for wlng.db.direct in the wlng-cachestore-config-extensions.xml file.

This example changes the type_id for the pl_legacy_mm7_notif table (a command-line sketch follows these steps):

  1. Open com.bea.wlcp.wlng.plugin.multimediamessaging.mm7.store_6.0.0.0.xml for editing.

  2. Search for the table name (pl_legacy_mm7_notif).

  3. Change the value for type_id from wlng.db.wt.plugin.mms.legacy.mm7.mo_notif_store to wlng.db.direct.plugin.mms.legacy.mm7.mo_notif_store.
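
As a command-line sketch of the same edit (assuming GNU sed; the file name and type_id values come from the steps above):

% sed -i 's/wlng\.db\.wt\.plugin\.mms\.legacy\.mm7\.mo_notif_store/wlng.db.direct.plugin.mms.legacy.mm7.mo_notif_store/' com.bea.wlcp.wlng.plugin.multimediamessaging.mm7.store_6.0.0.0.xml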

Table 6-2 lists the table names and type_ids.

Table 6-2 type_id values for MySQL Tables

In the following list, entries are grouped by file, and each line shows table name -> type_id:

oracle.ocsg.plugin.sms.native.smpp.store_6.0.0.0.xml
  native_smpp_session_store -> wlng.db.direct.native.smpp.sessionInfo.store (cache_name)

com.bea.wlcp.wlng.plugin.multimediamessaging.mm7.store_6.0.0.0.xml
  pl_legacy_mm7_notif -> wlng.db.direct.plugin.mms.legacy.mm7.mo_notif_store
  pl_legacy_mms_mt -> wlng.db.direct.plugin.mms.legacy.mm7.mt_mm7_state
  pl_legacy_mms_status -> wlng.db.direct.plugin.mms.legacy.mm7.mo_status_report
  pl_legacy_mms_vas_id -> wlng.db.direct.plugin.mms.legacy.mm7.mt_mm7_vas_id
  pl_legacy_mms_vasp_id -> wlng.db.direct.plugin.mms.legacy.mm7.mt_mm7_vasp_id

com.bea.wlcp.wlng.plugin.multimediamessaging.px21.mm7.store_6.0.0.0.xml
  pl_mms_dr_subscribe -> wlng.db.direct.plugin.mms.common.mms_dr_subscribe_store
  pl_mms_mo_content_mms -> wlng.db.direct.plugin.mms.common.content.mo_mms_store
  pl_mms_mo_mms -> wlng.db.direct.plugin.mms.common.mo_mms_store
  pl_mms_mt_dr_mms -> wlng.db.direct.plugin.mms.common.deliveryreport.mt_mms_store
  pl_mms_offline_notif -> wlng.db.direct.plugin.mms.common.mms_offline_notif_store
  pl_mms_online_notif -> wlng.db.direct.plugin.mms.common.mms_online_notif_store

com.bea.wlcp.wlng.plugin.sms.common.store_6.0.0.0.xml
  pl_sms_delivery_notif -> wlng.db.direct.plugin.sms.smpp.delivery_notif_store
  pl_sms_offline_notif -> wlng.db.direct.plugin.sms.common.sms_offline_notif_store
  pl_sms_online_notif -> wlng.db.direct.plugin.sms.common.sms_online_notif_store

oracle.ocsg.plugin.sms.px21.smpp.store_6.0.0.0.xml
  pl_sms_smpp_mo_sms -> wlng.db.direct.plugin.sms.smpp.mo_sms_store
  pl_sms_smpp_mt_dr -> wlng.db.direct.tc.plugin.sms.smpp.mt_dr_store
  pl_sms_smpp_mt_sms -> wlng.db.wb.direct.plugin.sms.smpp.mt_sms_store

com.bea.wlcp.wlng.plugin.terminallocation.mlp.store_6.0.0.0.xml
  pl_tl_mlp_trigger_info -> wlng.db.direct.plugin.tl.mlp.trigger_info


Configuring the my.cnf File for MySQL Replication

Table 6-3 lists the my.cnf file configuration settings to configure two systems as a geographically redundant pair. This table continues using the geotest1 and geotest2 example systems.

Table 6-3 my.cnf Configuration Entries

Entry: bind-address
  geotest1 my.cnf value: comment out
  geotest2 my.cnf value: comment out
  Description: This configuration requires that all servers listen on an IP address that is reachable by all peers. Commenting out this line directs the system to listen on all interfaces.

Entry: server-id
  geotest1 my.cnf value: server-id=1
  geotest2 my.cnf value: server-id=2
  Description: Assigns a unique numeric server ID to each MySQL server in our example systems.

Entry: binlog-do-db
  geotest1 my.cnf value: ocsg60_geosite1
  geotest2 my.cnf value: ocsg60_geosite2
  Description: Each site is both a master and a slave. This entry directs each master to log statements for its own local database to the binary log.

Entry: log_bin
  geotest1 my.cnf value: /var/lib/mysql/mysql-bin.log
  geotest2 my.cnf value: /var/lib/mysql/mysql-bin.log
  Description: Specifies the binary log location. Use the same value on both systems.

Entry: replicate-rewrite-db
  geotest1 my.cnf value: ocsg60_geosite2->ocsg60_geosite1
  geotest2 my.cnf value: ocsg60_geosite1->ocsg60_geosite2
  Description: Maps the remote site's database name to the local database name when applying replicated statements. You can omit this entry if you use the same database name on both systems.

Entry: replicate-do-table
  geotest1 my.cnf value: See Example 6-1 for the list of tables to replicate.
  geotest2 my.cnf value: See Example 6-2 for the list of tables to replicate.
  Description: Limits replication to the listed tables.


Merging the Data Tables

Before you connect replication, merge the contents of the replicated tables from both sites so that each database starts with the same data. Once you have merged the tables, avoid writing any data to them until you have completed the configuration.
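
How you merge the tables depends on your installation; one possible approach, sketched below, is to dump the replicated tables from one site with mysqldump and load them into the other. The credentials are assumptions, the table list is abbreviated, and the database and host names follow this chapter's example.

# A sketch only: copy geotest2's replicated rows into geotest1's database.
# Abbreviated table list; repeat in the other direction if both sites hold data.
% mysqldump -u root -p -h geotest2 --no-create-info ocsg60_geosite2 \
    pl_sms_delivery_notif pl_sms_offline_notif pl_sms_online_notif > geosite2_data.sql
% mysql -u root -p -h geotest1 ocsg60_geosite1 < geosite2_data.sql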

Restarting the MySQL Servers

Restart both MySQL servers to make your changes take effect with this command:

sudo service mysql restart

Connecting the Master and Slave Databases

Connect the databases in your geographically redundant systems by following the instructions in this section:

  1. Create a user called replicator on each system to perform the replication, and grant it replication rights with these commands:

    mysql> create user 'replicator'@'%' identified by 'password';
    mysql> grant replication slave on *.* to 'replicator'@'%';
    

    See the MySQL CREATE USER and GRANT command documentation for details.

  2. Obtain each master's File and Position values to connect to. Run these commands on each system (shown here for geotest1; repeat with -h geotest2):

    mysql -u root -p -h geotest1
    mysql> show master status;
    

    Record the File and Position for each system. The output will look something like this:

    geotest1:

    +------------------+----------+-----------------+------------------+
    | File             | Position | Binlog_Do_DB    | Binlog_Ignore_DB |
    +------------------+----------+-----------------+------------------+
    | mysql-bin.000002 | 85527658 | ocsg60_geosite1 |                  |
    +------------------+----------+-----------------+------------------+
    1 row in set (0.03 sec)
    

    geotest2:

    +------------------+----------+-----------------+------------------+
    | File             | Position | Binlog_Do_DB    | Binlog_Ignore_DB |
    +------------------+----------+-----------------+------------------+
    | mysql-bin.000006 | 12927236 | ocsg60_geosite2 |                  |
    +------------------+----------+-----------------+------------------+
    1 row in set (0.07 sec)
    
  3. Run these commands on each system to connect the slaves to the masters:

    On geotest1:

    mysql -u root -p -h geotest1
    mysql> STOP SLAVE;
    mysql> CHANGE MASTER TO MASTER_HOST = '10.161.159.166', MASTER_USER = 'replicator',
        MASTER_PASSWORD = 'password', MASTER_LOG_FILE = 'mysql-bin.000006', MASTER_LOG_POS = 12927236;
    mysql> START SLAVE;

    On geotest2:

    mysql -u root -p -h geotest2
    mysql> STOP SLAVE;
    mysql> CHANGE MASTER TO MASTER_HOST = '10.161.159.189', MASTER_USER = 'replicator',
        MASTER_PASSWORD = 'password', MASTER_LOG_FILE = 'mysql-bin.000002', MASTER_LOG_POS = 85527658;
    mysql> START SLAVE;
    

The MySQL databases are now connected and replicating. You can confirm the replication by sending an MT message to geotest1, then confirming that geotest2 received the delivery report. To test this, configure the geotest1 Parlay X 2.1 SMS plug-in to bind as a transmitter only, and the geotest2 Parlay X 2.1 SMS plug-in to bind as a receiver only.
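
You can also check replication health directly in MySQL. Run this standard command on each system:

mysql> SHOW SLAVE STATUS\G

In the output, confirm that Slave_IO_Running and Slave_SQL_Running both report Yes, and that Seconds_Behind_Master is 0 or close to it.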

Removing Geographic Redundancy

You can remove geographic redundancy from domains by using the Administration Console, or programmatically by using the WebLogic Scripting Tool (WLST) or another MBean editor.

Removing the GeoRedundant Service from the Administration Console

To remove geographic redundancy from domains, and return them to a standalone configuration, repeat these steps on every geographically redundant domain.

To Remove Geographic Redundancy from a Domain: 

  1. Start the Administration Console.

  2. In the Change Center, click Lock & Edit.

  3. In the Domain Structure pane, navigate to domain_name, then OCSG, then server_name, then Container Services, then GeoRedundantService.

  4. In the Attributes tab, clear the text field for the GeoSiteId attribute.

  5. Navigate to Operations.

  6. Select removeSite from the Select an Operation menu.

  7. Click Invoke.

  8. In the Change Center, click Release Configuration.

Removing the GeoRedundant Service Programmatically

This section explains how to remove georedundant services from domains programmatically, using the WebLogic Scripting Tool (WLST) or a different MBean editor. A WLST sketch follows the steps below.

Access the GeoRedundantServiceMBean and change its settings to reconfigure the domains to act as standalone domains.

Note:

To remove geographic redundancy from domains, and return them to a standalone configuration, repeat these steps on every geographically redundant domain.

Retrieve a list of the Remote Sites in the Domain. 

  1. Retrieve the list of all the remote sites in the domain.

    To do so, call the listRemoteSites() method of the GeoRedundantServiceMBean MBean. This method returns the names of all the sites registered in the domain.

    For information about GeoRedundantServiceMBean, see the "All Classes" section of the OAM Java API Reference.

Remove each remote site in the list, one at a time. 

  1. Remove the configuration for the remote site.

    To do so, call the removeSite() method of the GeoRedundantServiceMBean MBean. This method takes the remote site name as a parameter.

  2. Set the local site ID to a blank string.

    To do so, call the setGeoSiteId(String siteName) method of the GeoRedundantServiceMBean MBean. When you call this method, pass a blank string as the value of the input parameter, siteName.

  3. Set the master site ID to a blank string.

    To do so, call the setGeoMasterSiteId(String geoMasterSiteId) method of the GeoStorageServiceMBean MBean. When you call this method, pass a blank string as the value of the input parameter, geoMasterSiteId.

    For information about GeoStorageServiceMBean, see the "All Classes" section of the OAM Java API Reference.
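
The following WLST sketch strings these calls together. As with the earlier sketches, the connection details and MBean object names are illustrative assumptions; verify them in your own domain, and repeat the sequence on every geographically redundant domain.

# A minimal WLST sketch of the removal sequence; object names are illustrative.
import jarray
from java.lang import String, Object

connect('weblogic', 'password', 't3://geotest1:7001')
custom()
cd('com.bea.wlcp.wlng/com.bea.wlcp.wlng:Name=wlng,Type=GeoRedundantService')
# List the remote sites registered in this domain.
sites = invoke('listRemoteSites', jarray.array([], Object), jarray.array([], String))
# Remove the configuration for each remote site, one at a time.
for site in sites:
    invoke('removeSite',
           jarray.array([String(site)], Object),
           jarray.array(['java.lang.String'], String))
# Set the local site ID to a blank string.
set('GeoSiteId', '')
# Set the master site ID to a blank string on the GeoStorageService MBean.
cd('/com.bea.wlcp.wlng/com.bea.wlcp.wlng:Name=wlng,Type=GeoStorageService')
set('GeoMasterSiteId', '')
disconnect()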

Troubleshooting

At times, when you attempt to revert from a georedundant configuration to a standalone domain configuration, some of the georedundancy configuration settings seem to persist in your domain database.

If this scenario occurs, you can remove these georedundancy configuration settings from the database.

Warning!:

The following steps are for testing purposes only. They are provided here to help troubleshoot such issues in a non-production environment.

Do not attempt any deletion of configuration settings from the database in your production environment.

To remove these settings from your testing or non-production database:

  1. Stop the servers.

  2. For each georedundant domain, delete the georedundancy settings by entering the following SQL commands:

      DELETE FROM wlng_configuration WHERE instance = 'GeoStorageService';
      DELETE FROM wlng_configuration WHERE instance = 'GeoRedundantService';
      DELETE FROM wlng_blob_configuration WHERE instance = 'GeoRedundantService';
      
  3. Verify that you have removed the settings from the domains in your non-production environment.

  4. Restart the servers.

GeoStorageService Reference

Set field values and use methods from the Administration Console by selecting Container, then Services, and then GeoStorageService. Alternatively, use a Java application. For information on the methods and fields of the supported MBeans, see the "All Classes" section of Services Gatekeeper OAM Java API Reference.

GeoRedundantService Reference

Set field values and use methods from the Administration Console by selecting Container, then Services, and then GeoRedundantService. Alternatively, use a Java application. For information on the methods and fields of the supported MBeans, see the "All Classes" section of Services Gatekeeper OAM Java API Reference.