Sun Cluster Data Service for MySQL Guide for Solaris OS

Appendix A Deployment Example: Installing MySQL in the Global Zone

This appendix presents a complete example of how to install and configure the MySQL application and data service in the global zone. It presents a simple two-node cluster configuration. If you need to install the application in any other configuration, refer to the general-purpose procedures presented elsewhere in this manual. For an example of MySQL installation in a non-global zone, see Appendix B, Deployment Example: Installing MySQL in the Non-Global Failover Zone, or Appendix C, Deployment Example: Installing MySQL in a Non-Global Zone, according to your zone type.

Target Cluster Configuration

This example uses a two-node cluster with the following node names:

  • phys-schost-1

  • phys-schost-2

This configuration also uses the logical host name ha-host-1.

Software Configuration

This deployment example uses the following software products and versions:

  • Solaris 10 OS

  • Sun Cluster core software

  • Sun Cluster Data Service for MySQL

  • MySQL 5.0.22

This example assumes that you have already installed and established your cluster. It illustrates installation and configuration of the data service application only.

Note –

The steps for installing MySQL in a cluster that runs on the Solaris 9 OS are identical to the steps in this example.


The instructions in this example were developed with the following assumptions:

Installing and Configuring MySQL on Local Storage in the Global Zone

The tasks you must perform to install and configure MySQL in the global zone are as follows:

  • Preparing the cluster for MySQL

  • Configuring cluster resources for MySQL

  • Installing and bootstrapping the MySQL software on local storage

  • Modifying the MySQL configuration files

  • Enabling the MySQL software to run in the cluster

Example: Preparing the Cluster for MySQL

  1. Install and configure the cluster as instructed in Sun Cluster Software Installation Guide for Solaris OS.

    Install the following cluster software components on both nodes.

    • Sun Cluster core software

    • Sun Cluster data service for MySQL

  2. Beginning on the node that owns the file system, add the mysql user.

    phys-schost-1# groupadd -g 1000 mysql
    phys-schost-1# useradd -g 1000 -d /global/mnt3/mysql -m -s /bin/ksh mysql
    phys-schost-2# groupadd -g 1000 mysql
    phys-schost-2# useradd -g 1000 -d /global/mnt3/mysql -m -s /bin/ksh mysql
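    Because both nodes share the mysql home directory on the global file system, the mysql user and group must resolve to the same UID and GID on every node. A minimal sketch of such a consistency check follows; same_identity is a hypothetical helper, not part of Sun Cluster or MySQL, and it assumes you can collect the output of id mysql from each node.

```shell
#!/bin/sh
# Hypothetical helper: compare the `id mysql` output captured on two
# nodes. A mismatch means file ownership on shared storage will not
# agree between the nodes.
same_identity() {
    if [ "$1" = "$2" ]; then
        echo "identities match"
    else
        echo "identities differ"
        return 1
    fi
}

# In the cluster you might run (assumes root ssh access between nodes):
#   a=$(ssh phys-schost-1 id mysql)
#   b=$(ssh phys-schost-2 id mysql)
#   same_identity "$a" "$b"
```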

Example: Configuring Cluster Resources for MySQL

  1. Register the necessary resource types on both nodes.

    phys-schost-1# clresourcetype register SUNW.gds SUNW.HAStoragePlus
  2. Create the MySQL resource group.

    phys-schost-1# clresourcegroup create RG-MYS
  3. Create the logical host.

    phys-schost-1# clreslogicalhostname create -g RG-MYS ha-host-1
  4. Create the HAStoragePlus resource in the RG-MYS resource group.

    phys-schost-1# clresource create -g RG-MYS -t SUNW.HAStoragePlus -p AffinityOn=TRUE \
    -p FilesystemMountPoints=/global/mnt3,/global/mnt4 RS-MYS-HAS
  5. Enable the resource group.

    phys-schost-1# clresourcegroup online -M RG-MYS

Example: Installing and Bootstrapping the MySQL Software on Local Storage

These steps illustrate how to install the MySQL software in the default directory, /usr/local/mysql. Whenever a step shows commands on only one node, run them on the node where the resource group is online.

  1. Install the MySQL binaries on both nodes.

    phys-schost-1# cd /usr/local
    phys-schost-1# gzcat mysql-max-5.0.22-solaris10-architecture_64.tar.gz | tar xvf -
    phys-schost-1# ln -s mysql-max-5.0.22-solaris10-architecture_64 mysql
  2. Change the ownership of the MySQL binaries on both nodes.

    phys-schost-1# chown -R mysql:mysql /usr/local/mysql
  3. Create your database directories.

    phys-schost-1# mkdir -p /global/mnt3/mysql-data/logs
    phys-schost-1# mkdir /global/mnt3/mysql-data/innodb
    phys-schost-1# mkdir /global/mnt3/mysql-data/BDB
  4. Bootstrap MySQL.

    phys-schost-1# cd /usr/local/mysql
    phys-schost-1# ./scripts/mysql_install_db --datadir=/global/mnt3/mysql-data
  5. Create your my.cnf configuration file in /global/mnt3/mysql-data.

    phys-schost-1# cat > /global/mnt3/mysql-data/my.cnf << EOF
    [mysqld]
    # bind-address is the address of the logical host
    bind-address=ha-host-1
    # Innodb
    innodb_data_home_dir = /global/mnt3/mysql-data/innodb
    innodb_data_file_path = ibdata1:10M:autoextend
    innodb_log_group_home_dir = /global/mnt3/mysql-data/innodb
    innodb_log_arch_dir = /global/mnt3/mysql-data/innodb
    # You can set .._buffer_pool_size up to 50 - 80 %
    # of RAM, but beware of setting memory usage too high
    set-variable = innodb_buffer_pool_size=50M
    set-variable = innodb_additional_mem_pool_size=20M
    # Set .._log_file_size to 25 % of buffer pool size
    set-variable = innodb_log_file_size=12M
    set-variable = innodb_log_buffer_size=4M
    set-variable = innodb_lock_wait_timeout=50
    # BDB
    # Uncomment skip-bdb if you used a binary download;
    # binary downloads very often come without BDB support.
    #skip-bdb
    # Replication slave
    # MySQL 4.x
    EOF
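    Before starting mysqld with this file, you can sanity-check that the settings this example relies on are actually present. The sketch below is illustrative only; check_cnf is a hypothetical helper, and the path shown is this example's deployment location.

```shell
#!/bin/sh
# Hypothetical helper: verify that a my.cnf file contains the settings
# this deployment example depends on.
check_cnf() {
    cnf=$1
    for key in bind-address innodb_data_home_dir \
               innodb_data_file_path innodb_log_group_home_dir; do
        if ! grep "^${key}" "$cnf" > /dev/null; then
            echo "missing: ${key}"
            return 1
        fi
    done
    echo "ok: ${cnf}"
}

# In the cluster you might run:
#   check_cnf /global/mnt3/mysql-data/my.cnf
```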
  6. Change the ownership of the MySQL data directory.

    phys-schost-1# chown -R mysql:mysql /global/mnt3/mysql-data
  7. Change the permission of the my.cnf file.

    phys-schost-1# chmod 644 /global/mnt3/mysql-data/my.cnf

Example: Modifying the MySQL Configuration Files

  1. Copy the MySQL configuration files from the agent directory to their deployment location.

    phys-schost-1# cp /opt/SUNWscmys/util/mysql_config /global/mnt3
    phys-schost-1# cp /opt/SUNWscmys/util/ha_mysql_config /global/mnt3
  2. Add this cluster's information to the mysql_config configuration file.

    The following listing shows the relevant file entries and the values to assign to each entry.

    MYSQL_NIC_HOSTNAME="phys-schost-1 phys-schost-2"
  3. Add this cluster's information to the ha_mysql_config configuration file.

    The following listing shows the relevant file entries and the values to assign to each entry.

  4. Save and close the file.

Example: Enabling the MySQL Software to Run in the Cluster

  1. Start the MySQL database manually on the node where the resource group is online.

    phys-schost-1# cd /usr/local/mysql
    phys-schost-1# ./bin/mysqld --defaults-file=/global/mnt3/mysql-data/my.cnf \
    --basedir=/usr/local/mysql --datadir=/global/mnt3/mysql-data \
    --pid-file=/global/mnt3/mysql-data/ \
    --user=mysql >> /global/mnt3/mysql-data/logs/ha-host-1.log 2>&1 &
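    mysqld creates its socket shortly after startup, and the mysqladmin command in the next step fails if it runs before the socket exists. A small wait loop can bridge the gap; wait_for_file below is a hypothetical helper, and the socket path is the one this example uses.

```shell
#!/bin/sh
# Hypothetical helper: wait up to $2 seconds for a file to appear,
# polling once per second.
wait_for_file() {
    file=$1
    tries=${2:-30}
    while [ "$tries" -gt 0 ]; do
        if [ -e "$file" ]; then
            echo "ready: $file"
            return 0
        fi
        sleep 1
        tries=`expr "$tries" - 1`
    done
    echo "timed out waiting for $file"
    return 1
}

# In the cluster you might run:
#   wait_for_file /tmp/ha-host-1.sock 30
```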
  2. Set the password for the MySQL root user on localhost to root.

    phys-schost-1# /usr/local/mysql/bin/mysqladmin -S /tmp/ha-host-1.sock -uroot \
    password 'root'
  3. Add an admin user in the MySQL database for both physical host names.

    phys-schost-1# /usr/local/mysql/bin/mysql -S /tmp/ha-host-1.sock -uroot -proot
    mysql> use mysql;
    mysql> GRANT ALL ON *.* TO 'root'@'phys-schost-1' IDENTIFIED BY 'root';
    mysql> GRANT ALL ON *.* TO 'root'@'phys-schost-2' IDENTIFIED BY 'root';
    mysql> UPDATE user SET Grant_priv='Y' WHERE User='root' AND Host='phys-schost-1';
    mysql> UPDATE user SET Grant_priv='Y' WHERE User='root' AND Host='phys-schost-2';
    mysql> exit
  4. Prepare the Sun Cluster-specific test database.

    phys-schost-1# ksh /opt/SUNWscmys/util/mysql_register -f /global/mnt3/mysql_config
  5. Stop the MySQL database.

    phys-schost-1# kill -TERM `cat /global/mnt3/mysql-data/`
  6. Run the ha_mysql_register script to register the resource.

    phys-schost-1# ksh /opt/SUNWscmys/util/ha_mysql_register \
    -f /global/mnt3/ha_mysql_config
  7. Enable the resource.

    phys-schost-1# clresource enable RS-MYS
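After enabling the resource, you can confirm that mysqld is running under cluster control. The sketch below checks a pid file the way a simple probe might; pid_alive is a hypothetical helper, and the pid-file location depends on the --pid-file value you configured at startup.

```shell
#!/bin/sh
# Hypothetical helper: report whether the process named in a pid file
# is still alive. kill -0 sends no signal; it only tests the pid.
pid_alive() {
    pidfile=$1
    if [ ! -r "$pidfile" ]; then
        echo "no pid file: $pidfile"
        return 1
    fi
    pid=`cat "$pidfile"`
    if kill -0 "$pid" 2>/dev/null; then
        echo "process $pid is running"
    else
        echo "process $pid is not running"
        return 1
    fi
}
```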