This appendix presents a complete example of how to install and configure the MySQL application and data service in a failover non-global zone. It presents a simple two-node cluster configuration. If you need to install the application in any other configuration, refer to the general-purpose procedures presented elsewhere in this manual. For an example of MySQL installation in the global zone, see Appendix A, Deployment Example: Installing MySQL in the Global Zone. For an installation in a non-global zone, see Appendix C, Deployment Example: Installing MySQL in a Non-Global Zone.
This example uses a two-node cluster with the following node names:
phys-schost-1 (a physical node, which owns the file system)
phys-schost-2 (a physical node)
clu1 (the zone to be failed over)
This configuration also uses the logical host name ha-host-1.
This deployment example uses the following software products and versions:
Solaris 10 6/06 software for SPARC or x86 platforms
Sun Cluster 3.2 core software
Sun Cluster HA for MySQL
Sun Cluster HA for Solaris Container
MySQL max version 5.0.22
Your preferred text editor
This example assumes that you have already installed and established your cluster. It illustrates installation and configuration of the data service application only.
The instructions in this example were developed with the following assumptions:
Shell environment: All commands and the environment setup in this example are for the Korn shell environment. If you use a different shell, replace any Korn shell-specific information or instructions with the appropriate information for your preferred shell environment.
User login: Unless otherwise specified, perform all procedures as superuser or assume a role that provides solaris.cluster.admin, solaris.cluster.modify, and solaris.cluster.read RBAC authorization.
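The Sun Cluster command-line utilities used throughout this example (clresourcetype, clresourcegroup, clresource) are installed in /usr/cluster/bin. The following is a minimal sketch of the assumed Korn shell environment setup; adjust the paths to your installation:

```shell
# Assumed environment for the commands in this appendix:
# prepend the Sun Cluster utility directory to the search path.
PATH=/usr/cluster/bin:/usr/sbin:/usr/bin:$PATH
export PATH
echo "$PATH"
```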
The tasks you must perform to install and configure MySQL in the failover zone are as follows:
Install and configure the cluster as instructed in Sun Cluster Software Installation Guide for Solaris OS.
Install the following cluster software components on both nodes.
Sun Cluster core software
Sun Cluster data service for MySQL
Sun Cluster data service for Solaris Containers
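Before continuing, you can confirm that the agent packages are present on each node. This is a hedged check: the package names SUNWscmys and SUNWsczone are inferred from the agent paths used later in this example (/opt/SUNWscmys, /opt/SUNWsczone); verify them against your installation media.

```shell
# Check for the MySQL and Solaris Containers agent packages (names assumed).
for pkg in SUNWscmys SUNWsczone; do
  if pkginfo "$pkg" >/dev/null 2>&1; then
    echo "$pkg installed"
  else
    echo "$pkg MISSING"
  fi
done
```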
Beginning on the node that owns the file system, add the mysql user.
phys-schost-1# groupadd -g 1000 mysql
phys-schost-1# useradd -g 1000 -d /global/mnt3/mysql -m -s /bin/ksh mysql
phys-schost-2# groupadd -g 1000 mysql
phys-schost-2# useradd -g 1000 -d /global/mnt3/mysql -m -s /bin/ksh mysql
Register the necessary resource types on both nodes.
phys-schost-1# clresourcetype register SUNW.gds SUNW.HAStoragePlus
Create the MySQL resource group.
phys-schost-1# clresourcegroup create RG-MYS
Create the HAStoragePlus resource in the RG-MYS resource group.
phys-schost-1# clresource create -g RG-MYS -t SUNW.HAStoragePlus -p AffinityOn=TRUE \
-p FilesystemMountPoints=/global/mnt3,/global/mnt4 RS-MYS-HAS
Enable the resource group.
phys-schost-1# clresourcegroup online -M RG-MYS
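Before continuing, you can verify that the resource group and the storage resource came online. The exact output format varies with the Sun Cluster release:

```
phys-schost-1# clresourcegroup status RG-MYS
phys-schost-1# clresource status RS-MYS-HAS
```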
On shared cluster storage, create a directory for the failover zone root path.
This example presents a sparse root zone. You can use a whole root zone if that type better suits your configuration.
phys-schost-1# mkdir /global/mnt3/zones
Create a temporary file, for example /tmp/x, and include the following entries:
create -b
set zonepath=/global/mnt3/zones/clu1
set autoboot=false
set pool=pool_default
add inherit-pkg-dir
set dir=/lib
end
add inherit-pkg-dir
set dir=/platform
end
add inherit-pkg-dir
set dir=/sbin
end
add inherit-pkg-dir
set dir=/usr
end
add net
set address=ha-host-1
set physical=hme0
end
add attr
set name=comment
set type=string
set value="MySQL cluster zone"
end

Put your desired comment for the zone between the quotes in the set value line.
Configure the failover zone, using the file you created.
phys-schost-1# zonecfg -z clu1 -f /tmp/x
Install the zone.
phys-schost-1# zoneadm -z clu1 install
Log in to the zone.
phys-schost-1# zlogin -C clu1
Open a new window to the same node and boot the zone.
phys-schost-1# zoneadm -z clu1 boot
Close this terminal window and disconnect from the zone console.
phys-schost-1# ~~.
Copy the Solaris Containers configuration file to a temporary location.
phys-schost-1# cp /opt/SUNWsczone/sczbt/util/sczbt_config /tmp/sczbt_config
Edit the /tmp/sczbt_config file and set variable values as shown:
RS=RS-MYS-ZONE
RG=RG-MYS
PARAMETERDIR=/global/mnt3/zonepar
SC_NETWORK=false
SC_LH=
FAILOVER=true
HAS_RS=RS-MYS-HAS
Zonename=clu1
Zonebootopt=
Milestone=multi-user-server
Mounts=
Create the zone according to the instructions in the Sun Cluster Data Service for Solaris Containers Guide.
Register the zone resource.
phys-schost-1# ksh /opt/SUNWsczone/sczbt/util/sczbt_register -f /tmp/sczbt_config
Enable the zone resource.
phys-schost-1# clresource enable RS-MYS-ZONE
These steps illustrate how to install the MySQL software in the default directory /usr/local/mysql. Whenever only one node is mentioned, it must be the node on which your resource group is online. It is assumed that you inherited /usr, so you cannot write to /usr/local in the zones. If you linked /usr/local to a local directory, start at Step 4.
Add the mysql user.
phys-schost-1# groupadd -g 1000 mysql
phys-schost-1# useradd -g 1000 -d /global/mnt3/mysql -m -s /bin/ksh mysql
phys-schost-2# groupadd -g 1000 mysql
phys-schost-2# useradd -g 1000 -d /global/mnt3/mysql -m -s /bin/ksh mysql
Install the MySQL binaries on both nodes.
phys-schost-1# cd /usr/local
phys-schost-1# gzcat mysql-max-5.0.22-solaris10-architecture_64.tar.gz | tar xvf -
phys-schost-1# ln -s mysql-max-5.0.22-solaris10-x86_64 mysql
Change the ownership of the MySQL binaries on both nodes.
phys-schost-1# chown -R mysql:mysql /usr/local/mysql
Log in to the zone.
phys-schost-1# zlogin clu1
Create the parent of the mysql user's home directory.
zone# mkdir -p /global/mnt3
Add the mysql user.
zone# groupadd -g 1000 mysql
zone# useradd -g 1000 -d /global/mnt3/mysql -m -s /bin/ksh mysql
Create your database directories.
zone# mkdir -p /global/mnt3/mysql-data/logs
zone# mkdir /global/mnt3/mysql-data/innodb
zone# mkdir /global/mnt3/mysql-data/BDB
Bootstrap MySQL.
zone# cd /usr/local/mysql
zone# ./scripts/mysql_install_db --datadir=/global/mnt3/mysql-data
Create your my.cnf configuration file in /global/mnt3/mysql-data.
zone# cat > /global/mnt3/mysql-data/my.cnf << EOF
[mysqld]
server-id=1
#port=3306
# 10.18.5.1 is the address of the logical host
bind-address=10.18.5.1
# the socket name uses the logical host name
socket=/tmp/ha-host-1.sock
log=/global/mnt3/mysql-data/logs/log1
log-bin=/global/mnt3/mysql-data/logs/bin-log
binlog-ignore-db=sc3_test_database
log-slow-queries=/global/mnt3/mysql-data/logs/log-slow-queries
#log-update=/global/mnt3/mysql-data/logs/log-update
# Innodb
#skip-innodb
innodb_data_home_dir = /global/mnt3/mysql-data/innodb
innodb_data_file_path = ibdata1:10M:autoextend
innodb_log_group_home_dir = /global/mnt3/mysql-data/innodb
innodb_log_arch_dir = /global/mnt3/mysql-data/innodb
# You can set .._buffer_pool_size up to 50 - 80 %
# of RAM but beware of setting memory usage too high
set-variable = innodb_buffer_pool_size=50M
set-variable = innodb_additional_mem_pool_size=20M
# Set .._log_file_size to 25 % of buffer pool size
set-variable = innodb_log_file_size=12M
set-variable = innodb_log_buffer_size=4M
innodb_flush_log_at_trx_commit=1
set-variable = innodb_lock_wait_timeout=50
# BDB
# Uncomment skip-bdb if you used a binary download.
# Binary downloads very often come without BDB support.
#skip-bdb
bdb-home=/global/mnt3/mysql-data
bdb-no-recover
bdb-lock-detect=DEFAULT
bdb-logdir=/global/mnt3/mysql-data/BDB
bdb-tmpdir=/global/mnt3/mysql-data/BDB
#bdb_max_lock=10000
# Replication slave
#server-id=2
#master-host=administerix
#master-user=repl
#master-password=repl
#master-info-file=/global/mnt3/mysql-data/logs/master.info
# MySQL 4.x
#relay-log=/global/mnt3/mysql-data/logs/slave-bin.log
#relay-log-info-file=/global/mnt3/mysql-data/logs/slave-info
EOF
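Before starting the server, you can sanity-check the generated file. The following hedged sketch only greps for a few of the keys set above; extend the list as needed:

```shell
# Confirm that key settings made it into my.cnf (path from this example).
CNF=/global/mnt3/mysql-data/my.cnf
for key in bind-address socket innodb_data_home_dir; do
  grep "^$key" "$CNF" 2>/dev/null || echo "$key not set in $CNF"
done
```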
Change the ownership of the MySQL data directory.
zone# chown -R mysql:mysql /global/mnt3/mysql-data
Change the permission of the my.cnf file.
zone# chmod 644 /global/mnt3/mysql-data/my.cnf
Copy the MySQL database configuration file from the agent directory to its deployment location.
phys-schost-1# cp /opt/SUNWscmys/util/mysql_config /config-files
Add this cluster's information to the mysql_config configuration file.
The following listing shows the relevant file entries and the values to assign to each entry.
. . .
MYSQL_BASE=/usr/local/mysql
MYSQL_USER=root
MYSQL_PASSWD=root
MYSQL_HOST=ha-host-1
FMUSER=fmuser
FMPASS=fmuser
MYSQL_SOCK=/tmp/ha-host-1.sock
MYSQL_NIC_HOSTNAME="phys-schost-1 phys-schost-2"
Save and close the file.
Start the MySQL database manually on the node where the resource group is online.
zone# cd /usr/local/mysql
zone# ./bin/mysqld --defaults-file=/global/mnt3/mysql-data/my.cnf \
--basedir=/usr/local/mysql --datadir=/global/mnt3/mysql-data \
--pid-file=/global/mnt3/mysql-data/mysqld.pid \
--user=mysql >> /global/mnt3/mysql-data/logs/ha-host-1.log 2>&1 &
Set the password for localhost in MySQL to root.
zone# /usr/local/mysql/bin/mysqladmin -S /tmp/ha-host-1.sock -uroot \
password 'root'
Add an admin user in the MySQL database for the logical host.
zone# /usr/local/mysql/bin/mysql -S /tmp/ha-host-1.sock -uroot -proot
mysql> use mysql;
mysql> GRANT ALL ON *.* TO 'root'@'zone' IDENTIFIED BY 'root';
mysql> UPDATE user SET Grant_priv='Y' WHERE User='root' AND Host='zone';
mysql> exit
Prepare the Sun Cluster-specific test database.
zone# ksh /opt/SUNWscmys/util/mysql_register -f /config-files/mysql_config
Stop the MySQL database.
zone# kill -TERM `cat /global/mnt3/mysql-data/mysqld.pid`
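The kill command above returns immediately, while mysqld may take a moment to shut down. The following hedged helper (the function name is illustrative, not part of the agent) waits until the process named in the pid file is really gone:

```shell
# Send SIGTERM via a pid file, then poll until the process has exited.
stop_by_pidfile() {
  pid=$(cat "$1")
  kill -TERM "$pid"
  # kill -0 only probes for process existence, it sends no signal
  while kill -0 "$pid" 2>/dev/null; do
    sleep 1
  done
}
# Example: stop_by_pidfile /global/mnt3/mysql-data/mysqld.pid
```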
Leave the zone.
Copy the MySQL database configuration file from the agent directory to its deployment location.
phys-schost-1# cp /opt/SUNWscmys/util/ha_mysql_config /global/mnt3
Add this cluster's information to the ha_mysql_config configuration file.
The following listing shows the relevant file entries and the values to assign to each entry.
. . .
RS=RS-MYS
RG=RG-MYS
PORT=3306
LH=ha-host-1
HAS_RS=RS-MYS-HAS
. . .
ZONE=clu1
ZONE_BT=RS-MYS-ZONE
PROJECT=. . .
BASEDIR=/usr/local/mysql
DATADIR=/global/mnt3/mysql-data
MYSQLUSER=mysql
MYSQLHOST=ha-host-1
FMUSER=fmuser
FMPASS=fmuser
LOGDIR=/global/mnt3/mysql-data/logs
CHECK=YES
Save and close the file.
Run the ha_mysql_register script to register the resource.
phys-schost-1# ksh /opt/SUNWscmys/util/ha_mysql_register \
-f /global/mnt3/ha_mysql_config
Enable the resource.
phys-schost-1# clresource enable RS-MYS
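As a final check, you can switch the resource group to the other node and back to verify that the zone and the MySQL resource fail over cleanly. The exact status output depends on your Sun Cluster release:

```
phys-schost-1# clresourcegroup switch -n phys-schost-2 RG-MYS
phys-schost-1# clresourcegroup status RG-MYS
phys-schost-1# clresourcegroup switch -n phys-schost-1 RG-MYS
```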