Sun Cluster Data Service for MySQL Guide for Solaris OS

Appendix B Deployment Example: Installing MySQL in the Non-Global Failover Zone

This appendix presents a complete example of how to install and configure the MySQL application and data service in a non-global failover zone. It presents a simple two-node cluster configuration. If you need to install the application in any other configuration, refer to the general-purpose procedures presented elsewhere in this manual. For an example of MySQL installation in the global zone, see Appendix A, Deployment Example: Installing MySQL in the Global Zone. For an example of installation in a non-global zone, see Appendix C, Deployment Example: Installing MySQL in a Non-Global Zone.

Target Cluster Configuration

This example uses a two-node cluster with the following node names:

  • phys-schost-1 (the primary node)

  • phys-schost-2 (the secondary node)

This configuration also uses the logical host name ha-host-1.

Software Configuration

This deployment example uses the following software products and versions:

  • Solaris 10 OS

  • Sun Cluster core software

  • Sun Cluster Data Service for MySQL

  • Sun Cluster Data Service for Solaris Containers

  • MySQL 5.0.22

This example assumes that you have already installed and established your cluster. It illustrates installation and configuration of the data service application only.
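
Before you begin, you can optionally verify that both nodes have joined the cluster. This check assumes the Sun Cluster object-oriented command set that is used throughout this example:


    phys-schost-1# clnode status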

Assumptions

The instructions in this example were developed with the following assumptions:

Installing and Configuring MySQL on Local Storage in the Failover Zone

The tasks you must perform to install and configure MySQL in the failover zone are as follows:

  • Example: Preparing the Cluster for MySQL

  • Example: Configuring Cluster Resources for MySQL

  • Example: Configuring the Failover Zone

  • Example: Installing and Bootstrapping the MySQL Software on Local Storage

  • Example: Modifying the MySQL Configuration Files

  • Example: Enabling the MySQL Software to Run in the Cluster

Example: Preparing the Cluster for MySQL

  1. Install and configure the cluster as instructed in Sun Cluster Software Installation Guide for Solaris OS.

    Install the following cluster software components on both nodes.

    • Sun Cluster core software

    • Sun Cluster data service for MySQL

    • Sun Cluster data service for Solaris Containers
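
    To confirm that these components are installed on a node, you can query the package database. The package names below are inferred from the agent installation paths that appear later in this example (/opt/SUNWscmys and /opt/SUNWsczone):


    phys-schost-1# pkginfo -l SUNWscmys SUNWsczone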

  2. Beginning on the node that owns the file system, add the mysql group and user on both nodes.


    phys-schost-1# groupadd -g 1000 mysql
    phys-schost-1# useradd -g 1000 -d /global/mnt3/mysql -m -s /bin/ksh mysql
    phys-schost-2# groupadd -g 1000 mysql
    phys-schost-2# useradd -g 1000 -d /global/mnt3/mysql -m -s /bin/ksh mysql
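
    To verify that the group and user were created identically on both nodes, you can run:


    phys-schost-1# id -a mysql
    phys-schost-2# id -a mysql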
    

Example: Configuring Cluster Resources for MySQL

  1. Register the necessary resource types on one of the nodes.


    phys-schost-1# clresourcetype register SUNW.gds SUNW.HAStoragePlus
    
  2. Create the MySQL resource group.


    phys-schost-1# clresourcegroup create RG-MYS
    
  3. Create the HAStoragePlus resource in the RG-MYS resource group.


    phys-schost-1# clresource create -g RG-MYS -t SUNW.HAStoragePlus -p AffinityOn=TRUE \
    -p FilesystemMountPoints=/global/mnt3,/global/mnt4 RS-MYS-HAS
    
  4. Enable the resource group.


    phys-schost-1# clresourcegroup online -M RG-MYS
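
    To verify that the resource group and the storage resource are online, you can check their status:


    phys-schost-1# clresourcegroup status RG-MYS
    phys-schost-1# clresource status RS-MYS-HAS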
    

Example: Configuring the Failover Zone

  1. On shared cluster storage, create a directory for the failover zone root path.

    This example presents a sparse root zone. You can use a whole root zone if that type better suits your configuration.


    phys-schost-1# mkdir /global/mnt3/zones
    
  2. Create a temporary file, for example /tmp/x, and include the following entries. In the add attr section, put your own descriptive comment between the quotes.


    create -b
    set zonepath=/global/mnt3/zones/clu1
    set autoboot=false
    set pool=pool_default
    add inherit-pkg-dir
    set dir=/lib
    end
    add inherit-pkg-dir
    set dir=/platform
    end
    add inherit-pkg-dir
    set dir=/sbin
    end
    add inherit-pkg-dir
    set dir=/usr
    end
    add net
    set address=ha-host-1
    set physical=hme0
    end
    add attr
    set name=comment
    set type=string
    set value="MySQL cluster zone"
    end
  3. Configure the failover zone, using the file you created.


    phys-schost-1# zonecfg -z clu1 -f /tmp/x
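
    To review the resulting zone configuration before you install the zone, you can run:


    phys-schost-1# zonecfg -z clu1 info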
    
  4. Install the zone.


    phys-schost-1# zoneadm -z clu1 install
    
  5. Log in to the zone.


    phys-schost-1# zlogin -C clu1
    
  6. Open a new window to the same node and boot the zone.


    phys-schost-1# zoneadm -z clu1 boot
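
    From the global zone, you can confirm that the zone reached the running state:


    phys-schost-1# zoneadm list -v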
    
  7. Close the new terminal window, and disconnect from the zone console by typing the console escape sequence.


    phys-schost-1# ~~.
    
  8. Copy the Solaris Containers data service configuration file to a temporary location.


    phys-schost-1# cp /opt/SUNWsczone/sczbt/util/sczbt_config /tmp/sczbt_config
    
  9. Edit the /tmp/sczbt_config file and set variable values as shown:


    RS=RS-MYS-ZONE
    RG=RG-MYS
    PARAMETERDIR=/global/mnt3/zonepar
    SC_NETWORK=false
    SC_LH=
    FAILOVER=true
    HAS_RS=RS-MYS-HAS
    
    
    Zonename=clu1
    Zonebootopt=
    Milestone=multi-user-server
    Mounts=
  10. Complete the zone configuration for cluster control according to the instructions in the Sun Cluster Data Service for Solaris Containers Guide.

  11. Register the zone resource.


    phys-schost-1# ksh /opt/SUNWsczone/sczbt/util/sczbt_register -f /tmp/sczbt_config
    
  12. Enable the zone resource.


    phys-schost-1# clresource enable RS-MYS-ZONE
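
    To verify that the zone is now under cluster control, you can check the status of the zone boot resource:


    phys-schost-1# clresource status RS-MYS-ZONE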
    

Example: Installing and Bootstrapping the MySQL Software on Local Storage

These steps illustrate how to install the MySQL software in the default directory /usr/local/mysql. Wherever only one node is mentioned, perform the step on the node where your resource group is online. This example assumes that the zone inherits /usr, so you cannot write to /usr/local from within the zone and must install the binaries from the global zone. If you linked /usr/local to a writable local directory instead, start at Step 4.

  1. Add the mysql user.


    phys-schost-1# groupadd -g 1000 mysql
    phys-schost-1# useradd -g 1000 -d /global/mnt3/mysql -m -s /bin/ksh mysql
    
  2. Install the MySQL binaries on both nodes.


    phys-schost-1# cd /usr/local
    phys-schost-1# gzcat mysql-max-5.0.22-solaris10-x86_64.tar.gz | tar xvf -
    phys-schost-1# ln -s mysql-max-5.0.22-solaris10-x86_64 mysql
    
  3. Change the ownership of the MySQL binaries on both nodes.


    phys-schost-1# chown -R mysql:mysql /usr/local/mysql
    
  4. Log in to the zone.


    phys-schost-1# zlogin clu1
    
  5. Create the parent for the mysql home directory.


    zone# mkdir -p /global/mnt3
    
  6. Add the mysql user.


    zone# groupadd -g 1000 mysql
    zone# useradd -g 1000 -d /global/mnt3/mysql -m -s /bin/ksh mysql
    
  7. Create your database directories.


    zone# mkdir -p /global/mnt3/mysql-data/logs
    zone# mkdir /global/mnt3/mysql-data/innodb
    zone# mkdir /global/mnt3/mysql-data/BDB
    
  8. Bootstrap MySQL.


    zone# cd /usr/local/mysql
    zone# ./scripts/mysql_install_db --datadir=/global/mnt3/mysql-data
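
    If the bootstrap succeeds, the data directory contains the initial mysql grant tables. You can confirm this with a quick listing:


    zone# ls /global/mnt3/mysql-data/mysql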
    
  9. Create your my.cnf configuration file in /global/mnt3/mysql-data.


    zone# cat > /global/mnt3/mysql-data/my.cnf << EOF
    [mysqld]
    server-id=1
    #port=3306
    # 10.18.5.1 is the address of the logical host
    bind-address=10.18.5.1
    socket=/tmp/ha-host-1.sock
    log=/global/mnt3/mysql-data/logs/log1
    log-bin=/global/mnt3/mysql-data/logs/bin-log
    binlog-ignore-db=sc3_test_database
    log-slow-queries=/global/mnt3/mysql-data/logs/log-slow-queries
    #log-update=/global/mnt3/mysql-data/logs/log-update
    
    # Innodb
    #skip-innodb
    innodb_data_home_dir = /global/mnt3/mysql-data/innodb
    innodb_data_file_path = ibdata1:10M:autoextend
    innodb_log_group_home_dir = /global/mnt3/mysql-data/innodb
    innodb_log_arch_dir = /global/mnt3/mysql-data/innodb
    # You can set .._buffer_pool_size up to 50 - 80 %
    # of RAM but beware of setting memory usage too high
    set-variable = innodb_buffer_pool_size=50M
    set-variable = innodb_additional_mem_pool_size=20M
    # Set .._log_file_size to 25 % of buffer pool size
    set-variable = innodb_log_file_size=12M
    set-variable = innodb_log_buffer_size=4M
    innodb_flush_log_at_trx_commit=1
    set-variable = innodb_lock_wait_timeout=50
    
    # BDB
    # uncomment the skip-bdb if you used a binary download.
    # binary downloads come very often without the bdb support.
    #skip-bdb
    bdb-home=/global/mnt3/mysql-data
    bdb-no-recover
    bdb-lock-detect=DEFAULT
    bdb-logdir=/global/mnt3/mysql-data/BDB
    bdb-tmpdir=/global/mnt3/mysql-data/BDB
    #bdb_max_lock=10000
    
    
    # Replication slave
    #server-id=2
    #master-host=administerix
    #master-user=repl
    #master-password=repl
    #master-info-file=/global/mnt3/mysql-data/logs/master.info
    
    # MySQL 4.x
    #relay-log=/global/mnt3/mysql-data/logs/slave-bin.log
    #relay-log-info-file=/global/mnt3/mysql-data/logs/slave-info
    EOF
  10. Change the ownership of the MySQL data directory.


    zone# chown -R mysql:mysql /global/mnt3/mysql-data
    
  11. Change the permission of the my.cnf file.


    zone# chmod 644 /global/mnt3/mysql-data/my.cnf
    

Example: Modifying the MySQL Configuration Files

  1. Copy the MySQL database configuration file from the agent directory to its deployment location.


    zone# cp /opt/SUNWscmys/util/mysql_config /config-files
    
  2. Add this cluster's information to the mysql_config configuration file.

    The following listing shows the relevant file entries and the values to assign to each entry.


    .
    .
    .
    MYSQL_BASE=/usr/local/mysql
    MYSQL_USER=root
    MYSQL_PASSWD=root
    MYSQL_HOST=ha-host-1
    FMUSER=fmuser
    FMPASS=fmuser
    MYSQL_SOCK=/tmp/ha-host-1.sock
    MYSQL_NIC_HOSTNAME="ha-host-1"
    MYSQL_DATADIR=/global/mnt3/mysql-data
    
  3. Save and close the file.

Example: Enabling the MySQL Software to Run in the Cluster

  1. Start the MySQL database manually in the zone, on the node where the resource group is online.


    zone# cd /usr/local/mysql
    zone# ./bin/mysqld --defaults-file=/global/mnt3/mysql-data/my.cnf \
    --basedir=/usr/local/mysql --datadir=/global/mnt3/mysql-data \
    --pid-file=/global/mnt3/mysql-data/mysqld.pid \
    --user=mysql >> /global/mnt3/mysql-data/logs/ha-host-1.log 2>&1 &
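
    Before you continue, you can confirm that the server is accepting connections on its socket. This check assumes the socket path that is set in the my.cnf file you created earlier:


    zone# /usr/local/mysql/bin/mysqladmin -S /tmp/ha-host-1.sock ping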
    
  2. Set the password for the MySQL root user on localhost to root.


    zone# /usr/local/mysql/bin/mysqladmin -S /tmp/ha-host-1.sock -uroot \
    password 'root'
    
  3. Add an admin user in the MySQL database for the logical host.


    zone# /usr/local/mysql/bin/mysql -S /tmp/ha-host-1.sock -uroot -proot
    mysql> use mysql;
    mysql> GRANT ALL ON *.* TO 'root'@'ha-host-1' IDENTIFIED BY 'root';
    mysql> UPDATE user SET Grant_priv='Y' WHERE User='root' AND Host='ha-host-1';
    mysql> exit
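
    To confirm the grant, you can list the privileges that are now assigned to the new account:


    zone# /usr/local/mysql/bin/mysql -S /tmp/ha-host-1.sock -uroot -proot \
    -e "SHOW GRANTS FOR 'root'@'ha-host-1';"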
    
  4. Prepare the Sun Cluster-specific test database.


    zone# ksh /opt/SUNWscmys/util/mysql_register -f /config-files/mysql_config
    
  5. Stop the MySQL database.


    zone# kill -TERM `cat /global/mnt3/mysql-data/mysqld.pid`
    
  6. Leave the zone.

  7. Copy the MySQL database configuration file from the agent directory to its deployment location.


    phys-schost-1# cp /opt/SUNWscmys/util/ha_mysql_config /global/mnt3
    
  8. Add this cluster's information to the ha_mysql_config configuration file.

    The following listing shows the relevant file entries and the values to assign to each entry.


    .
    .
    .
    RS=RS-MYS
    RG=RG-MYS
    PORT=3306
    LH=ha-host-1
    HAS_RS=RS-MYS-HAS
    .
    .
    .
    ZONE=clu1
    ZONE_BT=RS-MYS-ZONE
    PROJECT=
    .
    .
    .
    BASEDIR=/usr/local/mysql
    DATADIR=/global/mnt3/mysql-data
    MYSQLUSER=mysql
    MYSQLHOST=ha-host-1
    FMUSER=fmuser
    FMPASS=fmuser
    LOGDIR=/global/mnt3/mysql-data/logs
    CHECK=YES
    
  9. Save and close the file.

  10. Run the ha_mysql_register script to register the resource.


    phys-schost-1# ksh /opt/SUNWscmys/util/ha_mysql_register \
    -f /global/mnt3/ha_mysql_config
    
  11. Enable the resource.


    phys-schost-1# clresource enable RS-MYS
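
    To verify the complete configuration, you can check the status of all resources in the resource group and, optionally, test a switchover to the second node. The switchover command assumes the node names that are used throughout this example:


    phys-schost-1# clresource status -g RG-MYS
    phys-schost-1# clresourcegroup switch -n phys-schost-2 RG-MYS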