Oracle Solaris Cluster Geographic Edition System Administration Guide (Oracle Solaris Cluster 3.3 3/13)

How to Configure the MySQL Replication

Before You Begin

Before you can configure replication, you must decide which cluster will contain the master database when the replication is first started.
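
If you are not sure which node currently hosts the MySQL database on a cluster, you can check the status of the database resource group and resource before you begin. The following is a minimal sketch that assumes the database resource group is named nyc-rg and the database resource is mys-rs, as in the examples later in this appendix; substitute the names from your own configuration.

cl1-node1 # clresourcegroup status nyc-rg
cl1-node1 # clresource status mys-rs

The node that reports the database resource as online is the active node.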

  1. Prevent the startup of the slave threads.

    On the master cluster at the node where the MySQL database is active, add the skip-slave-start keyword to the my.cnf file. For example:

    cl1-node1 # echo skip-slave-start >> /mysql-data-directory/my.cnf
  2. Prevent non-superuser modifications.

    On the slave cluster at the node where the MySQL database is active, add the read-only=true directive to the my.cnf file, and restart your database.

    For example:

    cl2-node1 # echo read-only=true >> /mysql-data-directory/my.cnf
    cl2-node1 # clresource disable mys-rs
    cl2-node1 # clresource enable mys-rs
  3. Create the replication user on both databases.
    1. On each cluster, pick the node where the MySQL database is active, and connect as an administrative user who can at least create users.
    2. Create the replication user, and stay connected.

      Note - Be sure to create the replication user with permissions to connect from any node of either cluster.


      • The following example assumes that the MySQL database on the primary cluster listens on the socket /tmp/nyc.sock:

        cl1-node1:/ # /usr/local/mysql/bin/mysql -S /tmp/nyc.sock -uroot -proot
        mysql> use mysql
        mysql> GRANT REPLICATION SLAVE ON *.* TO 'repl'@'cl1-node1' identified by 'repl';
        mysql> GRANT REPLICATION SLAVE ON *.* TO 'repl'@'cl1-node2' identified by 'repl';
        mysql> GRANT REPLICATION SLAVE ON *.* TO 'repl'@'cl2-node3' identified by 'repl';
        mysql> GRANT REPLICATION SLAVE ON *.* TO 'repl'@'cl2-node4' identified by 'repl';
      • The following example assumes that the MySQL database on the secondary cluster listens on the socket /tmp/sfo.sock:

        cl2-node3:/ # /usr/local/mysql/bin/mysql -S /tmp/sfo.sock -uroot -proot
        mysql> use mysql
        mysql> GRANT REPLICATION SLAVE ON *.* TO 'repl'@'cl1-node1' identified by 'repl';
        mysql> GRANT REPLICATION SLAVE ON *.* TO 'repl'@'cl1-node2' identified by 'repl';
        mysql> GRANT REPLICATION SLAVE ON *.* TO 'repl'@'cl2-node3' identified by 'repl';
        mysql> GRANT REPLICATION SLAVE ON *.* TO 'repl'@'cl2-node4' identified by 'repl';
  4. Establish the replication from the primary cluster to the secondary cluster.
    1. On the primary cluster, issue the following on the MySQL client:
      mysql> FLUSH TABLES WITH READ LOCK;
      mysql> show master status;
      +----------------+----------+--------------+-------------------+
      | File           | Position | Binlog_Do_DB | Binlog_Ignore_DB  |
      +----------------+----------+--------------+-------------------+
      | bin-log.000002 |     1424 |              | sc3_test_database |
      +----------------+----------+--------------+-------------------+
      1 row in set (0.03 sec)

      mysql> unlock tables;

      Note the values for file and position. In the preceding example, they are bin-log.000002 and 1424, respectively.

    2. On the MySQL client on the slave, issue the following commands:
      mysql> change master to master_host='nyc',
          -> master_user='repl',
          -> master_password='repl',
          -> master_log_file='bin-log.000002',
          -> master_log_pos=1424;
      Query OK, 0 rows affected (0.04 sec)

      mysql> start slave;
      Query OK, 0 rows affected (0.03 sec)
    3. Check the slave status.
      mysql> show slave status;
      …
      Check for the following messages:
                  Slave_IO_State: Waiting for master to send event
                  Slave_IO_Running: Yes
                  Slave_SQL_Running: Yes
      …
    4. Stop the slave.
      mysql> stop slave;
  5. Configure the reverse replication to prepare the two clusters for a role swap.
    1. On the secondary cluster, issue the following commands:
      mysql> FLUSH TABLES WITH READ LOCK;
      Query OK, 0 rows affected (0.01 sec)
      mysql> show master status;
      +----------------+----------+--------------+-------------------+
      | File           | Position | Binlog_Do_DB | Binlog_Ignore_DB  |
      +----------------+----------+--------------+-------------------+
      | bin-log.000020 |     1162 |              | sc3_test_database |
      +----------------+----------+--------------+-------------------+
      1 row in set (0.00 sec)

      mysql> unlock tables;

      Note the values for file and position. In the preceding example, they are bin-log.000020 and 1162, respectively.

    2. On the MySQL client on the primary cluster, issue the following commands:
      mysql> change master to master_host='sfo',
          -> master_user='repl',
          -> master_password='repl',
          -> master_log_file='bin-log.000020',
          -> master_log_pos=1162;

      mysql> start slave;
      Query OK, 0 rows affected (0.03 sec)
    3. Check the slave status.
      mysql> show slave status;
      …
      Check for the following messages:
                  Slave_IO_State: Waiting for master to send event
                  Slave_IO_Running: Yes
                  Slave_SQL_Running: Yes
      …
    4. Stop the slave, and exit the MySQL client.
      mysql> stop slave;
      mysql> exit;
    5. On the MySQL client on the secondary cluster, start the slave, and exit the client. A verification sketch follows this procedure.
      mysql> start slave;
      mysql> exit;
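
After you complete this procedure, you can verify the resulting state on the secondary cluster: the slave threads that you started in the last step should be replicating from the primary cluster, and the database should still be read-only. The following sketch is only an illustration that reuses the socket path, node name, and credentials from the earlier examples in this procedure; adjust them to match your configuration.

cl2-node3:/ # /usr/local/mysql/bin/mysql -S /tmp/sfo.sock -uroot -proot
mysql> SHOW SLAVE STATUS\G
mysql> SHOW VARIABLES LIKE 'read_only';
mysql> exit;

In the SHOW SLAVE STATUS output, Slave_IO_Running and Slave_SQL_Running should both report Yes, and the read_only variable should report ON.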

Configuring the MySQL Application Resource Group

At a minimum, you must create a resource group that contains a logical hostname resource. You must leave the resource group in an unmanaged state.

For example, assume a configuration in which the partner clusters are named nyc and sfo, the MySQL database resource groups are nyc-rg and sfo-rg, and the application resource group and logical hostname on each cluster are usa-rg and usa.

On cluster nyc, you would issue the following commands:

cl1-node1 # clresourcegroup create usa-rg
cl1-node1 # clresourcegroup set -p Auto_start_on_new_cluster=false  usa-rg
cl1-node1 # clresourcegroup set -p RG_Affinities=+++nyc-rg  usa-rg
cl1-node1 # clreslogicalhostname create -g usa-rg usa
cl1-node1 # clresource enable usa

On cluster sfo, you would issue the following commands:

cl2-node1 # clresourcegroup create usa-rg
cl2-node1 # clresourcegroup set -p Auto_start_on_new_cluster=false  usa-rg
cl2-node1 # clresourcegroup set -p RG_Affinities=+++sfo-rg  usa-rg
cl2-node1 # clreslogicalhostname create -g usa-rg usa
cl2-node1 # clresource enable usa
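
Before you add the application resource group to the protection group, you can confirm on each cluster that the resource group exists and is still unmanaged. The following check is a sketch that uses the usa-rg name from the preceding examples; the status output should show the resource group in the Unmanaged state.

cl1-node1 # clresourcegroup status usa-rg
cl2-node1 # clresourcegroup status usa-rg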