Sun Cluster Data Service for MySQL Guide for Solaris OS

Appendix C Deployment Example: Installing MySQL in a Non-Global Zone

This appendix presents a complete example of how to install and configure the MySQL application and data service in a non-global zone. It presents a simple two-node cluster configuration. If you need to install the application in any other configuration, refer to the general-purpose procedures presented elsewhere in this manual. For an example of MySQL in the global zone, see Appendix A, Deployment Example: Installing MySQL in the Global Zone. For an installation in a failover zone, see Appendix B, Deployment Example: Installing MySQL in the Non-Global Failover Zone.

Target Cluster Configuration

This example uses a two-node cluster with the following node names:

Software Configuration

This deployment example uses the following software products and versions:

This example assumes that you have already installed and established your cluster. It illustrates installation and configuration of the data service application only.

Assumptions

The instructions in this example were developed with the following assumptions:

Installing and Configuring MySQL on Local Storage in a Non-Global Zone

These instructions assume that you are installing the MySQL software as the mysql user in a local directory.

The tasks you must perform to install and configure MySQL in the zone are as follows:

Example: Preparing the Cluster for MySQL

    Install and configure the cluster as instructed in Sun Cluster Software Installation Guide for Solaris OS.

    Install the following cluster software components on both nodes.

    • Sun Cluster core software

    • Sun Cluster data service for MySQL

Example: Configuring the Zone

In this task you install the Solaris Container on phys-schost-1 and phys-schost-2. Therefore, perform this procedure on both hosts.

  1. On the local cluster storage of each node, create a directory for the zone root path.

    This example presents a sparse root zone. You can use a whole root zone if that type better suits your configuration.


    phys-schost-1# mkdir /zones
    
  2. Create a temporary file, for example /tmp/x, and include the following entries:


    create -b
    set zonepath=/zones/clu1
    set autoboot=true
    set pool=pool_default
    add inherit-pkg-dir
    set dir=/lib
    end
    add inherit-pkg-dir
    set dir=/platform
    end
    add inherit-pkg-dir
    set dir=/sbin
    end
    add inherit-pkg-dir
    set dir=/usr
    end
    add net
    set address=zone-1 Use a different address (zone-2) on the second node.
    set physical=hme0
    end
    add attr
    set name=comment
    set type=string
    set value="MySQL cluster zone" Put your desired comment between the quotes here.
    end
  3. Configure the zone, using the file you created.


    phys-schost-1# zonecfg -z clu1 -f /tmp/x
    
  4. Install the zone.


    phys-schost-1# zoneadm -z clu1 install
    
  5. Log in to the zone.


    phys-schost-1# zlogin -C clu1
    
  6. Open a new window to the same node and boot the zone.


    phys-schost-1# zoneadm -z clu1 boot
    
  7. Close this terminal window and disconnect from the zone console.


    phys-schost-1# ~~.
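The zone configuration in Step 2 is identical on both nodes except for the network address. As a convenience, the command file can be generated per node. The following helper script is a sketch, not part of the product; its name, arguments, and defaults are assumptions made only to show the per-node difference explicitly.

```shell
#!/bin/sh
# Hypothetical helper: write the zonecfg command file for one node.
# The only per-node difference is the network address: zone-1 on
# phys-schost-1, zone-2 on phys-schost-2.
ZONE_ADDR=${1:-zone-1}     # pass zone-2 on the second node
OUT=${2:-/tmp/x}

cat > "$OUT" <<EOF
create -b
set zonepath=/zones/clu1
set autoboot=true
set pool=pool_default
add inherit-pkg-dir
set dir=/lib
end
add inherit-pkg-dir
set dir=/platform
end
add inherit-pkg-dir
set dir=/sbin
end
add inherit-pkg-dir
set dir=/usr
end
add net
set address=$ZONE_ADDR
set physical=hme0
end
add attr
set name=comment
set type=string
set value="MySQL cluster zone"
end
EOF
```

On the second node you would run it as `sh gen-zonecfg.sh zone-2` before passing the resulting file to zonecfg.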
    

Example: Configuring Cluster Resources for MySQL

  1. Register the SUNW.gds and SUNW.HAStoragePlus resource types.


    phys-schost-1# clresourcetype register SUNW.gds SUNW.HAStoragePlus
    
  2. Create the MySQL resource group.


    phys-schost-1# clresourcegroup create -n phys-schost-1:clu1,phys-schost-2:clu1 RG-MYS
    
  3. Create the HAStoragePlus resource in the RG-MYS resource group.


    phys-schost-1# clresource create -g RG-MYS -t SUNW.HAStoragePlus -p AffinityOn=TRUE \
    -p FilesystemMountPoints=/global/mnt3,/global/mnt4 RS-MYS-HAS
    
  4. Enable the resource group.


    phys-schost-1# clresourcegroup online -M RG-MYS
    

Example: Installing the MySQL Software on Local Storage

These steps illustrate how to install the MySQL software in the default directory /usr/local/mysql. Where only one node is mentioned, run the command on the node where your resource group is online. Because the zones inherit /usr, you cannot write to /usr/local from within the zones. If you linked /usr/local to a local directory, start at Step 4.

  1. Add the mysql user.


    phys-schost-1# groupadd -g 1000 mysql
    phys-schost-1# useradd -g 1000 -d /global/mnt3/mysql -m -s /bin/ksh mysql
    phys-schost-2# groupadd -g 1000 mysql
    phys-schost-2# useradd -g 1000 -d /global/mnt3/mysql -m -s /bin/ksh mysql
    
  2. Install the MySQL binaries on both nodes.


    phys-schost-1# cd /usr/local
    phys-schost-1# gzcat mysql-max-5.0.22-solaris10-architecture_64.tar.gz | tar xvf -
    phys-schost-1# ln -s mysql-max-5.0.22-solaris10-architecture_64 mysql
    
  3. Change the ownership of the MySQL binaries on both nodes.


    phys-schost-1# chown -R mysql:mysql /usr/local/mysql
    
  4. Log in to the zone.


    phys-schost-1# zlogin clu1
    phys-schost-2# zlogin clu1
    
  5. Add the mysql group and user.


    zone-1# groupadd -g 1000 mysql
    zone-1# useradd -g 1000 -d /global/mnt3/mysql -m -s /bin/ksh mysql
    zone-2# groupadd -g 1000 mysql
    zone-2# useradd -g 1000 -d /global/mnt3/mysql -m -s /bin/ksh mysql
    
  6. Leave the zone.

Example: Bootstrapping the MySQL Software on Local Storage

These steps illustrate how to bootstrap the MySQL software in the default directory /usr/local/mysql. Where only one node is mentioned, run the command on the node where your resource group is online.

  1. Log in to the zone.


    phys-schost-1# zlogin clu1
    
  2. Create your database directories.


    zone-1# mkdir -p /global/mnt3/mysql-data/logs
    zone-1# mkdir /global/mnt3/mysql-data/innodb
    zone-1# mkdir /global/mnt3/mysql-data/BDB
    
  3. Bootstrap MySQL.


    zone-1# cd /usr/local/mysql
    zone-1# ./scripts/mysql_install_db --datadir=/global/mnt3/mysql-data
    
  4. Create your my.cnf configuration file in /global/mnt3/mysql-data.


    zone-1# cat > /global/mnt3/mysql-data/my.cnf << EOF
    [mysqld]
    server-id=1
    #port=3306
    # 10.18.5.1 is the address of the logical host
    bind-address=10.18.5.1
    socket=/tmp/ha-host-1.sock
    log=/global/mnt3/mysql-data/logs/log1
    log-bin=/global/mnt3/mysql-data/logs/bin-log
    binlog-ignore-db=sc3_test_database
    log-slow-queries=/global/mnt3/mysql-data/logs/log-slow-queries
    #log-update=/global/mnt3/mysql-data/logs/log-update
    
    # Innodb
    #skip-innodb
    innodb_data_home_dir = /global/mnt3/mysql-data/innodb
    innodb_data_file_path = ibdata1:10M:autoextend
    innodb_log_group_home_dir = /global/mnt3/mysql-data/innodb
    innodb_log_arch_dir = /global/mnt3/mysql-data/innodb
    # You can set .._buffer_pool_size up to 50 - 80 %
    # of RAM but beware of setting memory usage too high
    set-variable = innodb_buffer_pool_size=50M
    set-variable = innodb_additional_mem_pool_size=20M
    # Set .._log_file_size to 25 % of buffer pool size
    set-variable = innodb_log_file_size=12M
    set-variable = innodb_log_buffer_size=4M
    innodb_flush_log_at_trx_commit=1
    set-variable = innodb_lock_wait_timeout=50
    
    # BDB
    # uncomment skip-bdb if you used a binary download;
    # binary downloads often come without BDB support.
    #skip-bdb
    bdb-home=/global/mnt3/mysql-data
    bdb-no-recover
    bdb-lock-detect=DEFAULT
    bdb-logdir=/global/mnt3/mysql-data/BDB
    bdb-tmpdir=/global/mnt3/mysql-data/BDB
    #bdb_max_lock=10000
    
    
    # Replication slave
    #server-id=2
    #master-host=administerix
    #master-user=repl
    #master-password=repl
    #master-info-file=/global/mnt3/mysql-data/logs/master.info
    
    # MySQL 4.x
    #relay-log=/global/mnt3/mysql-data/logs/slave-bin.log
    #relay-log-info-file=/global/mnt3/mysql-data/logs/slave-info
    EOF
  5. Change the ownership of the MySQL data directory.


    zone-1# chown -R mysql:mysql /global/mnt3/mysql-data
    
  6. Change the permission of the my.cnf file.


    zone-1# chmod 644 /global/mnt3/mysql-data/my.cnf
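Before starting the server for the first time, it can save a restart cycle to confirm that the my.cnf you just created carries the entries the rest of this example depends on. The following function is only a sketch under that assumption; it is not part of the MySQL distribution or the data service.

```shell
# Hypothetical sanity check: confirm that my.cnf defines the settings this
# example relies on (bind address, socket path, binary log, InnoDB home).
check_mycnf() {
  cnf=$1
  for key in bind-address socket log-bin innodb_data_home_dir; do
    if ! grep -q "^$key" "$cnf"; then
      echo "missing: $key" >&2
      return 1
    fi
  done
  echo "my.cnf looks complete"
}
```

You would call it as `check_mycnf /global/mnt3/mysql-data/my.cnf` inside the zone.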
    

Example: Modifying the MySQL Configuration File

In this task you create the configuration file that is used to prepare the MySQL database. It is assumed that you are still logged in to the zone.

  1. Copy the mysql_config configuration file from the agent directory to its deployment location.


    zone-1# cp /opt/SUNWscmys/util/mysql_config /global/mnt3
    
  2. Add this cluster's information to the mysql_config configuration file.

    The following listing shows the relevant file entries and the values to assign to each entry.


    .
    .
    .
    MYSQL_BASE=/usr/local/mysql
    MYSQL_USER=root
    MYSQL_PASSWD=root
    MYSQL_HOST=ha-host-1
    FMUSER=fmuser
    FMPASS=fmuser
    MYSQL_SOCK=/tmp/ha-host-1.sock
    MYSQL_NIC_HOSTNAME="zone-1 zone-2"
    
  3. Save and close the file.
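The entries above use shell assignment syntax. A quick format check that flags any line which is neither a comment, a blank line, nor a KEY=VALUE assignment can catch typos before the register script runs; the function below is a sketch and is not part of the agent.

```shell
# Hypothetical format check: print any line that is not a comment, a blank
# line, or an UPPER_CASE=value assignment; succeed only if none are found.
check_kv() {
  if grep -vE '^(#|$|[A-Z_]+=)' "$1"; then
    return 1    # offending lines were printed above
  fi
  return 0
}
```

For example, `check_kv /global/mnt3/mysql_config` would print a line such as `MYSQL BASE = oops` and fail, while a clean file passes silently.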

Example: Enabling the MySQL Software to Run in the Cluster

This task initializes and prepares your database. It is essential that you perform it on one node only. It is assumed that you are still logged in to the zone.

  1. Start the MySQL database manually on the zone where the resource group is online.


    zone-1# cd /usr/local/mysql
    zone-1# ./bin/mysqld --defaults-file=/global/mnt3/mysql-data/my.cnf \
    --basedir=/usr/local/mysql --datadir=/global/mnt3/mysql-data \
    --pid-file=/global/mnt3/mysql-data/mysqld.pid \
    --user=mysql >> /global/mnt3/mysql-data/logs/ha-host-1.log 2>&1 &
    
  2. Set the password for localhost in MySQL to root.


    zone-1# /usr/local/mysql/bin/mysqladmin -S /tmp/ha-host-1.sock -uroot \
    password 'root'
    
  3. Add an admin user in the MySQL database for the logical host.


    zone-1# /usr/local/mysql/bin/mysql -S /tmp/ha-host-1.sock -uroot -proot
    mysql> use mysql;
    mysql> GRANT ALL ON *.* TO 'root'@'zone-1' IDENTIFIED BY 'root';
    mysql> GRANT ALL ON *.* TO 'root'@'zone-2' IDENTIFIED BY 'root';
    mysql> UPDATE user SET Grant_priv='Y' WHERE User='root' AND Host='zone-1';
    mysql> UPDATE user SET Grant_priv='Y' WHERE User='root' AND Host='zone-2';
    mysql> exit
    
  4. Prepare the Sun Cluster specific test database.


    zone-1# ksh /opt/SUNWscmys/util/mysql_register -f /global/mnt3/mysql_config
    
  5. Stop the MySQL database.


    zone-1# kill -TERM `cat /global/mnt3/mysql-data/mysqld.pid`
    
  6. Leave the zone.

  7. Copy the ha_mysql_config configuration file from the agent directory to its deployment location.


    phys-schost-1# cp /opt/SUNWscmys/util/ha_mysql_config /global/mnt3
    
  8. Add this cluster's information to the ha_mysql_config configuration file.

    The following listing shows the relevant file entries and the values to assign to each entry.


    .
    .
    .
    RS=RS-MYS
    RG=RG-MYS
    PORT=3306
    LH=ha-host-1
    HAS_RS=RS-MYS-HAS
    .
    .
    .
    BASEDIR=/usr/local/mysql
    DATADIR=/global/mnt3/mysql-data
    MYSQLUSER=mysql
    MYSQLHOST=ha-host-1
    FMUSER=fmuser
    FMPASS=fmuser
    LOGDIR=/global/mnt3/mysql-data/logs
    CHECK=YES
    
  9. Save and close the file.

  10. Run the ha_mysql_register script to register the resource.


    phys-schost-1# ksh /opt/SUNWscmys/util/ha_mysql_register \
    -f /global/mnt3/ha_mysql_config
    
  11. Enable the resource.


    phys-schost-1# clresource enable RS-MYS
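Step 5 of this procedure stops the manually started server with a plain kill against the pid file. When scripting that step, it helps to wait until the process has actually exited before the cluster resource takes over the database. The function below is a hedged sketch; its name and timeout are assumptions, not part of the data service.

```shell
# Hypothetical stop helper: send SIGTERM via the pid file, then wait up to
# about 30 seconds for the process to exit before returning.
stop_mysqld() {
  pidfile=$1
  [ -r "$pidfile" ] || return 1               # no pid file: nothing to stop
  pid=`cat "$pidfile"`
  kill -TERM "$pid" 2>/dev/null || return 0   # process is already gone
  i=0
  while kill -0 "$pid" 2>/dev/null; do
    [ "$i" -ge 30 ] && return 1               # still running: give up
    sleep 1
    i=`expr $i + 1`
  done
  return 0
}
```

In this example you would call it as `stop_mysqld /global/mnt3/mysql-data/mysqld.pid` in place of the bare kill.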