Oracle Solaris Cluster Data Service for MySQL Guide (Oracle Solaris Cluster 3.3 3/13)
Example: How to Enable the MySQL Software to Run in the Cluster

  1. Start the MySQL database manually on both nodes.
    phys-schost-1# cd /usr/local/mysql
    phys-schost-1# ./bin/mysqld --defaults-file=/local/mysql-data/my.cnf \
    --basedir=/usr/local/mysql --datadir=/local/mysql-data \
    --pid-file=/local/mysql-data/mysqld.pid \
    --user=mysql >> /local/mysql-data/logs/phys-schost-1.log 2>&1 &

    Note - Make sure to change phys-schost-1 to phys-schost-2 on the second node.


  2. Set the MySQL root password for localhost to root on both nodes.
    phys-schost-1# /usr/local/mysql/bin/mysqladmin -S /tmp/phys-schost-1.sock \
    -uroot password 'root'

    Note - Make sure to change phys-schost-1 to phys-schost-2 on the second node.


  3. Add an administrative user for the physical host to the MySQL database on both nodes.
    phys-schost-1# /usr/local/mysql/bin/mysql -S /tmp/phys-schost-1.sock -uroot -proot
    mysql> use mysql;
    mysql> GRANT ALL ON *.* TO 'root'@'phys-schost-1' IDENTIFIED BY 'root';
    mysql> UPDATE user SET Grant_priv='Y' WHERE User='root' AND Host='phys-schost-1';
    mysql> FLUSH PRIVILEGES;
    mysql> exit

    Note - Make sure to change phys-schost-1 to phys-schost-2 on the second node.


  4. Prepare the Oracle Solaris Cluster-specific test database on both nodes.
    phys-schost-1# ksh /opt/SUNWscmys/util/mysql_register -f /local/mysql_config
  5. Stop the MySQL database.
    phys-schost-1# kill -TERM `cat /local/mysql-data/mysqld.pid`
  6. Make the /global/mnt3/ha_mysql_config file available on all nodes that can run the MySQL database.
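    Because /global/mnt3 is a global file system in this example, the file is normally visible on every node once it exists on one. The following sketch, which is not part of the guide, shows one hedged way to check (or, for non-global storage, stage) the file on each node; the node list and the commented ssh/scp fallback are assumptions.

    ```shell
    # Sketch only: confirm the HA config file is visible on every node
    # that can run MySQL. Node names follow this deployment example;
    # the ssh/scp fallback is an assumption for non-global storage.
    NODES="phys-schost-1 phys-schost-2"
    CFG=/global/mnt3/ha_mysql_config
    for node in $NODES; do
        echo "verify $CFG on $node"
        # ssh "$node" ls -l "$CFG"     # check on a live cluster
        # scp "$CFG" "$node:$CFG"      # copy if the path is not global
    done
    ```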
  7. Encrypt the fault monitor user password on all nodes that can run the MySQL database.
    phys-schost-1# ksh /opt/SUNWscmys/util/ha_mysql_register \
    -f /global/mnt3/ha_mysql_config -e
    phys-schost-2# ksh /opt/SUNWscmys/util/ha_mysql_register \
    -f /global/mnt3/ha_mysql_config -e

    Enter the same fault monitor user password that you supplied in Step 4.

  8. Run the ha_mysql_register script to register the resource on one node.
     phys-schost-1# ksh /opt/SUNWscmys/util/ha_mysql_register \
     -f /local/ha_mysql_config
  9. Enable the resource.
    phys-schost-1# clresource enable RS-MYSQL
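
    As a final check, which is not part of the original procedure, the standard Oracle Solaris Cluster status commands show whether RS-MYSQL came online on the expected nodes.

    ```shell
    # Confirm the resource and its resource group are online.
    # clresource and clresourcegroup are the standard Oracle Solaris
    # Cluster 3.3 status commands; run on any cluster node.
    clresource status RS-MYSQL
    clresourcegroup status
    ```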