Oracle Solaris Cluster Data Service for MySQL Guide

Upgrade to Oracle Solaris Cluster 3.3 When Using HA for MySQL

Use the information in this section to understand how to upgrade to Oracle Solaris Cluster 3.3 when using HA for MySQL.


Note - This procedure does not describe how to upgrade to Oracle Solaris Cluster 3.3. It includes only the steps that are required to upgrade HA for MySQL for use with Oracle Solaris Cluster 3.3.


Upgrade From Sun Cluster 3.1 8/05 to Oracle Solaris Cluster 3.3 When Using HA for MySQL

Use this procedure to upgrade HA for MySQL from Sun Cluster 3.1 8/05 to Oracle Solaris Cluster 3.3.

  1. Shut down the HA for MySQL resource.
    # clresource disable MySQL-resource
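    For example, if the resource were named mysql-rs (a hypothetical name), the command would be:

    # clresource disable mysql-rs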
  2. Upgrade the nodes to Oracle Solaris Cluster 3.3 according to the Oracle Solaris Cluster Upgrade Guide.
  3. Start the MySQL Server manually on Oracle Solaris Cluster 3.3.
    # cd MySQL-Base-directory
    
    # ./bin/mysqld --defaults-file=MySQL-Database-directory/my.cnf \
    --basedir=MySQL-Base-directory \
    --datadir=MySQL-Database-directory \
    --user=mysql \
    --pid-file=MySQL-Database-directory/mysqld.pid &
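    The following is an example, assuming an illustrative base directory of /global/mysql and database directory of /global/mysql-data.

    # cd /global/mysql
    # ./bin/mysqld --defaults-file=/global/mysql-data/my.cnf \
    --basedir=/global/mysql \
    --datadir=/global/mysql-data \
    --user=mysql \
    --pid-file=/global/mysql-data/mysqld.pid &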
  4. Access the MySQL instance from the local node with the socket option.
    # cd MySQL-Base-directory
    # ./bin/mysql -S MySQL-socket -uroot \
    -pAdminpassword

    The following is an example for a MySQL instance.

    # mysql -S /tmp/hahostix1.sock -uroot -proot
    mysql>
  5. Drop the HA for MySQL test database sc3_test_database.
    # mysql -S /tmp/hahostix1.sock -uroot -proot
    mysql> DROP DATABASE sc3_test_database;
    Query OK, 0 rows affected (0.03 sec)
  6. Delete all entries from the db table in the mysql database that contain User='MySQL-Faultmonitor-user'.
    # mysql -S /tmp/hahostix1.sock -uroot -proot
    mysql> use mysql;
    Database changed
    mysql> DELETE FROM db WHERE User='fmuser';
    Query OK, 1 row affected (0.03 sec)
  7. Delete all entries from the user table in the mysql database that contain User='MySQL-Faultmonitor-user'.
    # mysql -S /tmp/hahostix1.sock -uroot -proot
    mysql> use mysql;
    Database changed
    mysql> DELETE FROM user WHERE User='fmuser';
    Query OK, 1 row affected (0.03 sec)
  8. Add a fault-monitor user and a test database to MySQL.
    1. Navigate to the util directory.
      # cd /opt/SUNWscmys/util
    2. Copy the mysql_config file to home-dir, then edit the file, following the comments within it.
      # Where is MySQL installed (BASEDIR)
      MYSQL_BASE=
      
      # MySQL admin-user for localhost (Should be root)
      MYSQL_USER=
      
      # Password for MySQL admin user
      MYSQL_PASSWD=
      
      # Configured logicalhost
      MYSQL_HOST=
      
      # Specify a username for a faultmonitor user
      FMUSER=
      
      # Pick a password for that faultmonitor user
      FMPASS=
      
      # Socket name for mysqld ( Should be /tmp/<Logical host>.sock )
      MYSQL_SOCK=
      
      # Specify the physical hostname for the physical NIC that this logicalhostname 
      # belongs to for every node in the cluster this resource group can be located on.
      # If you use the mysql_geocontrol features to implement the MySQL replication as 
      # the replication protocol in Oracle Solaris Cluster geographic edition, specify all
      # physical nodes of all clusters. Specify at least all the nodes on both sites
      # where the MySQL databases can be hosted.
      # IE: The logicalhost lh1 belongs to hme1 for physical-node phys-1 and
      # hme3 for  physical-node phys-2. The hostname for hme1 is phys-1-hme0 and
      # for hme3 on phys-2 it is phys-2-hme3.
      # IE: MYSQL_NIC_HOSTNAME="phys-1-hme0 phys-2-hme3"
      # IE: If two clusters are tied together by the mysql_geocontrol features, assuming the
      # mysql database on cluster one belongs to cl1-phys1-hme0 and cl1-phys2-hme3, the
      # mysql database on cluster two belongs to cl2-phys1-hme2 and cl2-phys2-hme4. Then the  
      # MYSQL_NIC_HOSTNAME variable needs to be set to:
      # MYSQL_NIC_HOSTNAME="cl1-phys1-hme0 cl1-phys2-hme3 cl2-phys1-hme2 cl2-phys2-hme4"
      
      MYSQL_NIC_HOSTNAME=
      
      # Where are your databases installed? (location of my.cnf)
      MYSQL_DATADIR=
      
      # Is MySQL Cluster installed?
      # Any entry here triggers the ndb engine check preparation. If no MySQL cluster should be
      # checked, leave it empty.
      NDB_CHECK=
      
      If you want to monitor the ndb tables of a MySQL Cluster, set NDB_CHECK to yes.

    The following is an example for a MySQL instance.

    MYSQL_BASE=/global/mysql
    MYSQL_USER=root
    MYSQL_PASSWD=root
    MYSQL_HOST=hahostix1
    FMUSER=fmuser
    FMPASS=fmuser
    MYSQL_SOCK=/tmp/hahostix1.sock
    MYSQL_NIC_HOSTNAME="clusterix1 clusterix2"
    MYSQL_DATADIR=/global/mysql-data
    NDB_CHECK=
  9. After editing mysql_config, run the mysql_register script.
    # ./mysql_register -f home-dir/mysql_config
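    The following is an example, assuming mysql_config was copied to the illustrative directory /global/mysql.

    # ./mysql_register -f /global/mysql/mysql_config

    To confirm that the fault-monitor user was re-created, you can optionally query the user table, for example:

    # mysql -S /tmp/hahostix1.sock -uroot -proot
    mysql> SELECT User, Host FROM mysql.user WHERE User='fmuser';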
  10. Stop the MySQL Server manually.
    # kill -TERM `cat MySQL-Database-directory/mysqld.pid`
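    The following is an example, assuming the illustrative database directory /global/mysql-data that is used in the preceding examples.

    # kill -TERM `cat /global/mysql-data/mysqld.pid`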
  11. Start the HA for MySQL resource.
    # clresource enable MySQL-resource
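    For example, if the resource were named mysql-rs (a hypothetical name), the command would be:

    # clresource enable mysql-rs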