Configuring Cross-Site Database Replication

This section details the procedure for configuring database replication between two instances of OCCNE.

Prerequisites

  1. Two separate instances of OCCNE must be installed and running.
  2. Each instance has two SQL nodes. The SQL nodes at each site are assigned node IDs 56 and 57.
  3. There is network connectivity between the two sites.

Procedures

In the following procedures, this shorthand is used:

Site 1 = the first installed instance of OCCNE
Site 2 = the second installed instance of OCCNE
SQL node A = the SQL node with node ID 56, at either site
SQL node B = the SQL node with node ID 57, at either site

Configure replication

This procedure configures the SQL nodes for replication. The steps in Table B-15 are performed at Site 1.

Table B-15 Configure replication

Step No. Procedure Description
1.

Configure SQL node A
At Site 1, log in to any management node. From the MySQL Cluster Manager client, execute:
mcm> set server-id:mysqld:56=11 occnendbclustera;
mcm> set binlog-format:mysqld:56=row occnendbclustera;
mcm> set log_bin:mysqld:56=/var/occnedb/binlogs/mysql-bin.log occnendbclustera;
mcm> set relay_log:mysqld:56=/var/occnedb/mysql/mysql-relay-bin occnendbclustera;
mcm> set relay_log_index:mysqld:56=/var/occnedb/mysql/mysql-relay-bin.index occnendbclustera;
mcm> set expire_logs_days:mysqld:56=10 occnendbclustera;
mcm> set max_binlog_size:mysqld:56=1073741824 occnendbclustera;
mcm> set auto-increment-increment:mysqld:56=2 occnendbclustera;
mcm> set auto-increment-offset:mysqld:56=1 occnendbclustera;
mcm> set ndb-log-update-as-write:mysqld:56=0 occnendbclustera;
mcm> set slave-skip-errors:mysqld:56=1590 occnendbclustera;
mcm> set skip-slave-start:mysqld:56=TRUE occnendbclustera;
2.

Configure SQL node B
At Site 1, log in to any management node. From the MySQL Cluster Manager client, execute:
mcm> set server-id:mysqld:57=12 occnendbclustera;
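The auto-increment-increment and auto-increment-offset settings above partition the AUTO_INCREMENT key space so that rows inserted concurrently at the two sites never receive the same generated key; for this to work, the two sites must use distinct offsets. The following sketch (plain Python, offsets 1 and 2 used as illustrative values) shows the effect:

```python
# Illustration: how auto-increment-increment / auto-increment-offset
# partition generated keys between two replicating sites.
# increment=2 makes each server step by 2; distinct offsets (1 and 2
# here, illustrative values) keep the two sequences disjoint.

def auto_increment_sequence(offset, increment, count):
    """First `count` AUTO_INCREMENT values a server would generate."""
    return [offset + i * increment for i in range(count)]

site1 = auto_increment_sequence(offset=1, increment=2, count=5)
site2 = auto_increment_sequence(offset=2, increment=2, count=5)

print("Site 1:", site1)  # odd keys:  [1, 3, 5, 7, 9]
print("Site 2:", site2)  # even keys: [2, 4, 6, 8, 10]

# No key can be generated at both sites, so writes accepted on either
# side of the replication channel cannot collide on generated ids.
assert not set(site1) & set(site2)
```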

Configure Site 2

The following steps are performed at Site 2.

Table B-16 Configure Site 2

Step No. Procedure Description
1.

  1. Configure SQL node A
    At Site 2, log in to any management node. From the MySQL Cluster Manager client, execute:
    mcm> set server-id:mysqld:56=21 occnendbclustera;
    mcm> set binlog-format:mysqld:56=row occnendbclustera;
    mcm> set log_bin:mysqld:56=/var/occnedb/binlogs/mysql-bin.log occnendbclustera;
    mcm> set relay_log:mysqld:56=/var/occnedb/mysql/mysql-relay-bin occnendbclustera;
    mcm> set relay_log_index:mysqld:56=/var/occnedb/mysql/mysql-relay-bin.index occnendbclustera;
    mcm> set expire_logs_days:mysqld:56=10 occnendbclustera;
    mcm> set max_binlog_size:mysqld:56=1073741824 occnendbclustera;
    mcm> set auto-increment-increment:mysqld:56=2 occnendbclustera;
    mcm> set auto-increment-offset:mysqld:56=2 occnendbclustera;
    mcm> set ndb-log-update-as-write:mysqld:56=0 occnendbclustera;
    mcm> set slave-skip-errors:mysqld:56=1590 occnendbclustera;
    mcm> set skip-slave-start:mysqld:56=TRUE occnendbclustera;
  2. Configure SQL node B
    At Site 2, log in to any management node. From the MySQL Cluster Manager client, execute:
    mcm> set server-id:mysqld:57=22 occnendbclustera;
2.

Restart all SQL nodes
  1. Restart SQL nodes on Site 1
    At Site 1, log in to any management node. From the MySQL Cluster Manager client, execute:
    mcm> stop process 56 occnendbclustera;
    mcm> start process 56 occnendbclustera;
    mcm> stop process 57 occnendbclustera;
    mcm> start process 57 occnendbclustera;
  2. Restart SQL nodes on Site 2

    At Site 2, log in to any management node. From the MySQL Cluster Manager client, execute:

    mcm> stop process 56 occnendbclustera;
    mcm> start process 56 occnendbclustera;
    mcm> stop process 57 occnendbclustera;
    mcm> start process 57 occnendbclustera;
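The restart order above follows a rolling pattern: each mysqld is stopped and restarted before the next one is touched, so the two SQL nodes at a site are never down at the same time. A small sketch (Python; the helper is illustrative, not part of MySQL Cluster Manager) that generates the same command sequence:

```python
# Sketch: generate the mcm stop/start pairs for a one-node-at-a-time
# (rolling) restart. Node IDs and cluster name mirror this procedure;
# the helper itself is illustrative, not an official mcm tool.

def rolling_restart_commands(process_ids, cluster):
    """Stop/start each process in turn so only one node is down at a time."""
    commands = []
    for pid in process_ids:
        commands.append(f"stop process {pid} {cluster};")
        commands.append(f"start process {pid} {cluster};")
    return commands

for command in rolling_restart_commands([56, 57], "occnendbclustera"):
    print("mcm>", command)
```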
3.

Create replication users
  1. Create replication user at Site 1
    At Site 1, log in to SQL node A and start the MySQL client:
    $ /usr/local/mysql/bin/mysql -h 127.0.0.1 -uroot -p<password>
    mysql> GRANT REPLICATION SLAVE ON *.* TO 'myslave'@'<public IP of SQL node A at Site 2>' IDENTIFIED BY '<password>';
  2. Create replication user at Site 2
    At Site 2, log in to SQL node A and start the MySQL client:
    $ /usr/local/mysql/bin/mysql -h 127.0.0.1 -uroot -p<password>
    mysql> GRANT REPLICATION SLAVE ON *.* TO 'myslave'@'<IP of SQL node A at Site 1>' IDENTIFIED BY '<password>';
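The two GRANT statements are symmetric: each site authorizes the other site's SQL node A to connect as a replica, which is what makes the replication channel bidirectional. As a sketch (Python; hosts and passwords are the procedure's placeholders, and the helper is hypothetical):

```python
# Sketch: the symmetric replication grants. Each site grants
# REPLICATION SLAVE to the peer site's SQL node A, so either side can
# pull binlog events from the other. Host and password values are the
# procedure's placeholders, not real credentials.

def replication_grant(user, peer_host):
    """Build the GRANT statement run on one site for the peer's replica user."""
    return (f"GRANT REPLICATION SLAVE ON *.* TO "
            f"'{user}'@'{peer_host}' IDENTIFIED BY '<password>';")

# Run at Site 1, authorizing Site 2's SQL node A:
print(replication_grant("myslave", "<IP of SQL node A at Site 2>"))
# Run at Site 2, authorizing Site 1's SQL node A:
print(replication_grant("myslave", "<IP of SQL node A at Site 1>"))
```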
4.

Initialize replication
  1. Get replication information for Site 2
    At Site 2, log in to SQL node A. From the MySQL client, execute:
    mysql> SHOW MASTER STATUS;
    You will see output similar to the following:
    +------------------+----------+--------------+------------------+-------------------+
    | File             | Position | Binlog_Do_DB | Binlog_Ignore_DB | Executed_Gtid_Set |
    +------------------+----------+--------------+------------------+-------------------+
    | mysql-bin.000003 |      154 |              |                  |                   |
    +------------------+----------+--------------+------------------+-------------------+
    1 row in set (0.00 sec)

    Note: This is just example output; the values you see will be different.

  2. Initialize replication at Site 1
    At Site 1, log in to SQL node A. From the MySQL client, execute:
    mysql> STOP SLAVE;
    Query OK, 0 rows affected, 1 warning (0.00 sec)
    
    mysql> CHANGE MASTER TO master_host='<IP of SQL node A at Site 2>', master_port=3306, master_user='myslave', master_password='<password>', master_log_file='<binlog_file>', master_log_pos=<binlog_position>;
    Query OK, 0 rows affected, 1 warning (0.05 sec)
    
    mysql> START SLAVE;
    Query OK, 0 rows affected (0.00 sec)
    Where <binlog_file> is the value of the File column in the output shown above, and <binlog_position> is the value of the Position column in the output shown above.
  3. Get replication information for Site 1
    At Site 1, log in to SQL node A. From the MySQL client, execute:
    mysql> SHOW MASTER STATUS;
    You will see output similar to the following:
    +------------------+----------+--------------+------------------+-------------------+
    | File             | Position | Binlog_Do_DB | Binlog_Ignore_DB | Executed_Gtid_Set |
    +------------------+----------+--------------+------------------+-------------------+
    | mysql-bin.000001 |      580 |              |                  |                   |
    +------------------+----------+--------------+------------------+-------------------+
    1 row in set (0.00 sec)

    Note: This is just example output; the values you see will be different.

  4. Initialize replication at Site 2
    At Site 2, log in to SQL node A. From the MySQL client, execute:
    mysql> STOP SLAVE;
    Query OK, 0 rows affected, 1 warning (0.00 sec)
    
    mysql> CHANGE MASTER TO master_host='<IP of SQL node A at Site 1>', master_port=3306, master_user='myslave', master_password='<password>', master_log_file='<binlog_file>', master_log_pos=<binlog_position>;
    Query OK, 0 rows affected, 1 warning (0.05 sec)
    
    mysql> START SLAVE;
    Query OK, 0 rows affected (0.00 sec)
    Where <binlog_file> is the value of the File column in the output shown above, and <binlog_position> is the value of the Position column in the output shown above.
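The four sub-steps above amount to copying the File/Position pair from SHOW MASTER STATUS on one site into CHANGE MASTER TO on the other. The sketch below (Python; the parsing helper is hypothetical) extracts the coordinates from the tabular client output and builds the matching statement:

```python
import re

# Sketch: pull binlog coordinates out of SHOW MASTER STATUS output and
# build the CHANGE MASTER TO statement for the peer site. The sample
# mirrors the example output above; the helper is illustrative only.

SAMPLE_STATUS = """\
+------------------+----------+--------------+------------------+-------------------+
| File             | Position | Binlog_Do_DB | Binlog_Ignore_DB | Executed_Gtid_Set |
+------------------+----------+--------------+------------------+-------------------+
| mysql-bin.000003 |      154 |              |                  |                   |
+------------------+----------+--------------+------------------+-------------------+
"""

def parse_master_status(output):
    """Return (binlog_file, binlog_position) from SHOW MASTER STATUS output."""
    match = re.search(r"\|\s*(\S+\.\d{6})\s*\|\s*(\d+)\s*\|", output)
    if match is None:
        raise ValueError("no binlog coordinates found in output")
    return match.group(1), int(match.group(2))

binlog_file, binlog_pos = parse_master_status(SAMPLE_STATUS)

# Statement to run on the opposite site (placeholders as in the procedure):
print(f"CHANGE MASTER TO master_host='<IP of SQL node A at the peer site>', "
      f"master_port=3306, master_user='myslave', master_password='<password>', "
      f"master_log_file='{binlog_file}', master_log_pos={binlog_pos};")
```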
5.

Verify replication status
  1. Verify Site 1
    At Site 1, log in to SQL node A. From the MySQL client, execute:
    mysql> SHOW SLAVE STATUS\G
    *************************** 1. row ***************************
    Slave_IO_State: Waiting for master to send event
    Master_Host: 10.75.212.246
    Master_User: myslave
    Master_Port: 3306
    Connect_Retry: 60
    Master_Log_File: mysql-bin.000001
    Read_Master_Log_Pos: 4175
    Relay_Log_File: mysql-relay-bin.000002
    Relay_Log_Pos: 3810
    Relay_Master_Log_File: mysql-bin.000001
    Slave_IO_Running: Yes
    Slave_SQL_Running: Yes
    Replicate_Do_DB:
    Replicate_Ignore_DB:
    Replicate_Do_Table:
    Replicate_Ignore_Table:
    Replicate_Wild_Do_Table:
    Replicate_Wild_Ignore_Table:
    Last_Errno: 0
    Last_Error:
    Skip_Counter: 0
    Exec_Master_Log_Pos: 4175
    Relay_Log_Space: 4017
    Until_Condition: None
    Until_Log_File:
    Until_Log_Pos: 0
    Master_SSL_Allowed: No
    Master_SSL_CA_File:
    Master_SSL_CA_Path:
    Master_SSL_Cert:
    Master_SSL_Cipher:
    Master_SSL_Key:
    Seconds_Behind_Master: 0
    Master_SSL_Verify_Server_Cert: No
    Last_IO_Errno: 0
    Last_IO_Error:
    Last_SQL_Errno: 0
    Last_SQL_Error:
    Replicate_Ignore_Server_Ids:
    Master_Server_Id: 2155
    Master_UUID: 41ea3e4f-0a6f-11e9-8d7a-fa163fe70331
    Master_Info_File: /usr/local/mysql/data/master.info
    SQL_Delay: 0
    SQL_Remaining_Delay: NULL
    Slave_SQL_Running_State: Slave has read all relay log; waiting for more updates
    Master_Retry_Count: 86400
    Master_Bind:
    Last_IO_Error_Timestamp:
    Last_SQL_Error_Timestamp:
    Master_SSL_Crl:
    Master_SSL_Crlpath:
    Retrieved_Gtid_Set:
    Executed_Gtid_Set:
    Auto_Position: 0
    Replicate_Rewrite_DB:
    Channel_Name:
    Master_TLS_Version:
    1 row in set (0.00 sec)
    Verify that the Slave_IO_Running and Slave_SQL_Running fields both have the value "Yes".
    Slave_IO_Running: Yes
    Slave_SQL_Running: Yes
    Verify that the Last_IO_Errno and Last_SQL_Errno fields both have the value 0. Verify that the Last_IO_Error and Last_SQL_Error field values are both empty.
    Last_IO_Errno: 0
    Last_IO_Error:
    Last_SQL_Errno: 0
    Last_SQL_Error:
    If the values of fields referenced above match the listed values, then replication is working properly at this site.
  2. Verify Site 2
    At Site 2, log in to SQL node A. From the MySQL client, execute:
    mysql> SHOW SLAVE STATUS\G
    
    *************************** 1. row ***************************
    Slave_IO_State: Waiting for master to send event
    Master_Host: 10.75.213.245
    Master_User: myslave
    Master_Port: 3306
    Connect_Retry: 60
    Master_Log_File: mysql-bin.000003
    Read_Master_Log_Pos: 154
    Relay_Log_File: mysql-relay-bin.000002
    Relay_Log_Pos: 320
    Relay_Master_Log_File: mysql-bin.000003
    Slave_IO_Running: Yes
    Slave_SQL_Running: Yes
    Replicate_Do_DB:
    Replicate_Ignore_DB:
    Replicate_Do_Table:
    Replicate_Ignore_Table:
    Replicate_Wild_Do_Table:
    Replicate_Wild_Ignore_Table:
    Last_Errno: 0
    Last_Error:
    Skip_Counter: 0
    Exec_Master_Log_Pos: 154
    Relay_Log_Space: 527
    Until_Condition: None
    Until_Log_File:
    Until_Log_Pos: 0
    Master_SSL_Allowed: No
    Master_SSL_CA_File:
    Master_SSL_CA_Path:
    Master_SSL_Cert:
    Master_SSL_Cipher:
    Master_SSL_Key:
    Seconds_Behind_Master: 0
    Master_SSL_Verify_Server_Cert: No
    Last_IO_Errno: 0
    Last_IO_Error:
    Last_SQL_Errno: 0
    Last_SQL_Error:
    Replicate_Ignore_Server_Ids:
    Master_Server_Id: 1155
    Master_UUID: a9a78193-0a6e-11e9-9dbe-fa163fab3df6
    Master_Info_File: /usr/local/mysql/data/master.info
    SQL_Delay: 0
    SQL_Remaining_Delay: NULL
    Slave_SQL_Running_State: Slave has read all relay log; waiting for more updates
    Master_Retry_Count: 86400
    Master_Bind:
    Last_IO_Error_Timestamp:
    Last_SQL_Error_Timestamp:
    Master_SSL_Crl:
    Master_SSL_Crlpath:
    Retrieved_Gtid_Set:
    Executed_Gtid_Set:
    Auto_Position: 0
    Replicate_Rewrite_DB:
    Channel_Name:
    Master_TLS_Version:
    1 row in set (0.00 sec)
    Verify that the Slave_IO_Running and Slave_SQL_Running fields both have the value "Yes".
    Slave_IO_Running: Yes
    Slave_SQL_Running: Yes
    Verify that the Last_IO_Errno and Last_SQL_Errno fields both have the value 0. Verify that the Last_IO_Error and Last_SQL_Error field values are both empty.
    Last_IO_Errno: 0
    Last_IO_Error:
    Last_SQL_Errno: 0
    Last_SQL_Error:
    If the values of fields referenced above match the listed values, then replication is working properly at this site.
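The verification in this step reduces to a handful of fields in the SHOW SLAVE STATUS output. As a sketch, the same checks can be expressed in code (Python; the sample lines are trimmed from the example output above, and the helpers are hypothetical, not an official tool):

```python
# Sketch: parse `SHOW SLAVE STATUS\G` output and apply the health checks
# described above: both replication threads running, no I/O or SQL
# errors recorded. Sample lines are trimmed from the example output.

SAMPLE_STATUS = """\
             Slave_IO_Running: Yes
            Slave_SQL_Running: Yes
                Last_IO_Errno: 0
                Last_IO_Error:
               Last_SQL_Errno: 0
               Last_SQL_Error:
"""

def parse_slave_status(output):
    """Map field names to values from the \\G-formatted client output."""
    fields = {}
    for line in output.splitlines():
        key, sep, value = line.partition(":")
        if sep:
            fields[key.strip()] = value.strip()
    return fields

def replication_healthy(fields):
    """True when both replication threads run and no errors are recorded."""
    return (fields.get("Slave_IO_Running") == "Yes"
            and fields.get("Slave_SQL_Running") == "Yes"
            and fields.get("Last_IO_Errno") == "0"
            and fields.get("Last_SQL_Errno") == "0"
            and not fields.get("Last_IO_Error")
            and not fields.get("Last_SQL_Error"))

print("healthy:", replication_healthy(parse_slave_status(SAMPLE_STATUS)))
```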