Oracle® Solaris Cluster Data Replication Guide for MySQL

Updated: August 2018

How to Configure MySQL Replication

Before You Begin

Determine which cluster will contain the master database when replication is first started.

  1. Prevent the startup of the slave threads.

    On the primary cluster at the node where the MySQL database is active, add the skip-slave-start keyword to the my.cnf file. For example:

    cl1-node1# echo skip-slave-start >> /mysql-data-directory/my.cnf
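
    To confirm that the keyword was appended, you can inspect the file (a quick check, assuming the same my.cnf path as in the preceding example):

    cl1-node1# grep skip-slave-start /mysql-data-directory/my.cnf
    skip-slave-start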
  2. (Optional) Prevent non-root modifications.

    On the secondary cluster at the node where the MySQL database is active, add the read-only=true directive to the my.cnf file, and restart your database.


    Note -  If you perform this step, you must also set READONLY=true in the mysql_geo_config file that is used to create the MySQL protection group. That task is described in Step 2 of How to Create and Configure a MySQL Protection Group.

    For example:

    cl2-node1# echo read-only=true >> /mysql-data-directory/my.cnf
    cl2-node1# clresource disable newyork-mys-rs
    cl2-node1# clresource enable newyork-mys-rs
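
    After the restart, you can verify that the server is read-only by querying it (a sketch, assuming the secondary cluster socket /tmp/newyork.sock that is used later in this procedure):

    cl2-node1# /usr/local/mysql/bin/mysql -S /tmp/newyork.sock -uroot -proot
    mysql> SHOW VARIABLES LIKE 'read_only';
    +---------------+-------+
    | Variable_name | Value |
    +---------------+-------+
    | read_only     | ON    |
    +---------------+-------+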
  3. Create the replication user on both databases.
    1. On each cluster, pick the node where the MySQL database is active, and connect as an administrative user who can at least create users.
    2. Create the replication user, and stay connected.

      Note -  Be sure to create the replication user with permissions to connect from every node of both clusters.
      • On the primary cluster, which listens to the socket /tmp/paris.sock, issue the following commands:
        cl1-node1:/ # /usr/local/mysql/bin/mysql -S /tmp/paris.sock -uroot -proot
        mysql> use mysql
        mysql> GRANT REPLICATION SLAVE ON *.* TO 'repl'@'phys-paris-1' IDENTIFIED BY 'repl';
        mysql> GRANT REPLICATION SLAVE ON *.* TO 'repl'@'phys-paris-2' IDENTIFIED BY 'repl';
        mysql> GRANT REPLICATION SLAVE ON *.* TO 'repl'@'phys-newyork-1' IDENTIFIED BY 'repl';
        mysql> GRANT REPLICATION SLAVE ON *.* TO 'repl'@'phys-newyork-2' IDENTIFIED BY 'repl';
      • On the secondary cluster, which listens to the socket /tmp/newyork.sock:
        cl2-node1:/ # /usr/local/mysql/bin/mysql -S /tmp/newyork.sock -uroot -proot
        mysql> use mysql
        mysql> GRANT REPLICATION SLAVE ON *.* TO 'repl'@'phys-paris-1' IDENTIFIED BY 'repl';
        mysql> GRANT REPLICATION SLAVE ON *.* TO 'repl'@'phys-paris-2' IDENTIFIED BY 'repl';
        mysql> GRANT REPLICATION SLAVE ON *.* TO 'repl'@'phys-newyork-1' IDENTIFIED BY 'repl';
        mysql> GRANT REPLICATION SLAVE ON *.* TO 'repl'@'phys-newyork-2' IDENTIFIED BY 'repl';
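
      After creating the users, you can optionally confirm that the grants are in place. A quick check, run in the still-open session on each cluster:

        mysql> SELECT User, Host FROM mysql.user WHERE User = 'repl';

      Four rows, one per cluster node, should be returned on each side.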
  4. Establish the replication from the primary cluster to the secondary cluster.
    1. On the primary cluster, issue the following on the MySQL client:
      mysql> FLUSH TABLES WITH READ LOCK;
      mysql> show master status;
      +----------------+----------+--------------+-------------------+
      | File           | Position | Binlog_Do_DB | Binlog_Ignore_DB  |
      +----------------+----------+--------------+-------------------+
      | bin-log.000002 |     1424 |              | sc3_test_database |
      +----------------+----------+--------------+-------------------+
      1 row in set (0.03 sec)

      mysql> unlock tables;

      Note the values of the File and Position columns. In the preceding example, they are bin-log.000002 and 1424, respectively.
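
      If you prefer to capture these values from the shell rather than from the interactive session, the same output is available non-interactively, for example (run it while the read lock is still held):

      cl1-node1# /usr/local/mysql/bin/mysql -S /tmp/paris.sock -uroot -proot -e 'show master status'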

    2. On the MySQL client on the secondary cluster, issue the following commands:
      mysql> change master to master_host='cluster-paris',
          -> master_user='repl',
          -> master_password='repl',
          -> master_log_file='bin-log.000002',
          -> master_log_pos=1424;
      Query OK, 0 rows affected (0.04 sec)

      mysql> start slave;
      Query OK, 0 rows affected (0.03 sec)
    3. Check the slave status.
      mysql> show slave status\G
    4. Verify that the following status is reported in the output of the show slave status command.
      Slave_IO_State: Waiting for master to send event
      Slave_IO_Running: Yes
      Slave_SQL_Running: Yes

      Consult your MySQL documentation to troubleshoot any unexpected status.
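
      In addition to the three fields above, the Seconds_Behind_Master and Last_Error fields of the same output are useful health indicators; on a replicating slave with no errors you would typically see, for example:

      Seconds_Behind_Master: 0
      Last_Error: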

  5. Configure the reverse replication to prepare the two clusters for a role swap.
    1. On the secondary cluster, issue the following commands:
      mysql> FLUSH TABLES WITH READ LOCK;
      Query OK, 0 rows affected (0.01 sec)

      mysql> show master status;
      +----------------+----------+--------------+-------------------+
      | File           | Position | Binlog_Do_DB | Binlog_Ignore_DB  |
      +----------------+----------+--------------+-------------------+
      | bin-log.000020 |     1162 |              | sc3_test_database |
      +----------------+----------+--------------+-------------------+
      1 row in set (0.00 sec)

      mysql> unlock tables;

      Note the values of the File and Position columns. In the preceding example, they are bin-log.000020 and 1162, respectively.

    2. On the MySQL client on the primary cluster, issue the following commands:
      mysql> change master to master_host='cluster-newyork',
          -> master_user='repl',
          -> master_password='repl',
          -> master_log_file='bin-log.000020',
          -> master_log_pos=1162;

      mysql> start slave;
      Query OK, 0 rows affected (0.03 sec)
    3. Check the slave status.
      mysql> show slave status\G
    4. "Waiting for master to sent event" at Slave_IO_State check for the following message:
      Slave_IO_Running: Yes
      
      Slave_SQL_Running: Yes
    5. Stop the slave, and exit the MySQL client.
      mysql> stop slave;
      mysql> exit;
    6. On the MySQL client on the secondary cluster, start the slave, and exit the client.
      mysql> start slave;
      mysql> exit;
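
At this point, only the secondary cluster is replicating; the primary cluster stores its reverse-replication configuration, but its slave threads remain stopped until a role swap occurs. As a final check, you can compare the slave thread states on both clusters (a sketch, assuming the sockets and credentials from the earlier steps; output trimmed to the relevant lines):

cl1-node1# /usr/local/mysql/bin/mysql -S /tmp/paris.sock -uroot -proot -e 'show slave status\G' | grep Running
Slave_IO_Running: No
Slave_SQL_Running: No
cl2-node1# /usr/local/mysql/bin/mysql -S /tmp/newyork.sock -uroot -proot -e 'show slave status\G' | grep Running
Slave_IO_Running: Yes
Slave_SQL_Running: Yes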