This appendix presents a complete example of how to install and configure the MySQL application and data service in a non-global zone. It presents a simple two-node cluster configuration. If you need to install the application in any other configuration, refer to the general-purpose procedures presented elsewhere in this manual. For an example of MySQL in the global zone, see Deployment Example: Installing MySQL in the Global Zone. For installation in an HA container, see Deployment Example: Installing MySQL in the Non-Global HA Container.
This example uses a two-node cluster with the following node names:
phys-schost-1 (a physical node, which owns the file system)
zone-1 (a zone defined on phys-schost-1)
phys-schost-2 (a physical node)
zone-2 (a zone defined on phys-schost-2)
This deployment example uses the following software products and versions:
Solaris 10 6/06 software for SPARC or x86 platforms
Sun Cluster 3.2 core software
Sun Cluster Data Service for MySQL
MySQL version 5.0.22 tarball
This example assumes that you have already installed and established your cluster. It illustrates installation and configuration of the data service application only.
The instructions in this example were developed with the following assumptions:
Shell environment: All commands and the environment setup in this example are for the Korn shell environment. If you use a different shell, replace any Korn shell-specific information or instructions with the appropriate information for your preferred shell environment.
User login: Unless otherwise specified, perform all procedures as superuser or assume a role that provides solaris.cluster.admin, solaris.cluster.modify, and solaris.cluster.read RBAC authorization.
These instructions assume that you are installing the MySQL software as the mysql user in a local directory.
The tasks you must perform to install and configure MySQL in the zone are as follows:
Install and configure the cluster as instructed in Sun Cluster Software Installation Guide for Solaris OS.
Install the following cluster software components on both nodes.
Sun Cluster core software
Sun Cluster data service for MySQL
In this task you install the non-global zone on phys-schost-1 and phys-schost-2. Therefore, perform this procedure on both hosts.
On the local storage of each node, create a directory for the zone root path.
This example presents a sparse root zone. You can use a whole root zone if that type better suits your configuration.
phys-schost-1# mkdir /zones
Create a temporary file, for example /tmp/x, and include the following entries:
create -b
set zonepath=/zones/clu1
set autoboot=true
set pool=pool_default
add inherit-pkg-dir
set dir=/lib
end
add inherit-pkg-dir
set dir=/platform
end
add inherit-pkg-dir
set dir=/sbin
end
add inherit-pkg-dir
set dir=/usr
end
add net
set address=zone-1
set physical=hme0
end
add attr
set name=comment
set type=string
set value="MySQL cluster zone"
end
On the second node, choose a different address (zone-2) for the net resource. Put your desired zone comment between the quotation marks of the value entry.
Configure the zone, using the file you created.
phys-schost-1# zonecfg -z clu1 -f /tmp/x
Install the zone.
phys-schost-1# zoneadm -z clu1 install
Log in to the zone.
phys-schost-1# zlogin -C clu1
Open a new window to the same node and boot the zone.
phys-schost-1# zoneadm -z clu1 boot
Close this terminal window and disconnect from the zone console.
phys-schost-1# ~~.
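Before continuing, you can optionally confirm on each physical node that the zone is installed and running. The following sketch uses the standard Solaris 10 zoneadm command; the exact column layout of the output depends on your system.

```shell
# Optional check: list all configured zones and their states.
# Run on each physical node; the clu1 zone should report "running".
zoneadm list -cv
```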
Register the SUNW.gds and SUNW.HAStoragePlus resource types.
phys-schost-1# clresourcetype register SUNW.gds SUNW.HAStoragePlus
Create the MySQL resource group.
phys-schost-1# clresourcegroup create -n phys-schost-1:clu1,phys-schost-2:clu1 RG-MYS
Create the HAStoragePlus resource in the RG-MYS resource group.
phys-schost-1# clresource create -g RG-MYS -t SUNW.HAStoragePlus -p AffinityOn=TRUE \
-p FilesystemMountPoints=/global/mnt3,/global/mnt4 RS-MYS-HAS
Enable the resource group.
phys-schost-1# clresourcegroup online -M RG-MYS
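At this point you can optionally verify that the resource group and its HAStoragePlus resource came online on one of the zone nodes. This sketch uses the standard Sun Cluster 3.2 status commands.

```shell
# Optional check: RG-MYS should be reported as Online on one
# of the zone nodes, and RS-MYS-HAS as Online within it.
clresourcegroup status RG-MYS
clresource status RS-MYS-HAS
```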
These steps illustrate how to install the MySQL software in the default directory /usr/local/mysql. As long as only one node is mentioned, it must be the node where your resource group is online. It is assumed that you inherited /usr, so you cannot write to /usr/local in the zones. If you linked /usr/local to a local directory, start at step 4.
Add the mysql user.
phys-schost-1# groupadd -g 1000 mysql
phys-schost-1# useradd -g 1000 -d /global/mnt3/mysql -m -s /bin/ksh mysql
phys-schost-2# groupadd -g 1000 mysql
phys-schost-2# useradd -g 1000 -d /global/mnt3/mysql -m -s /bin/ksh mysql
Install the MySQL binaries on both nodes.
phys-schost-1# cd /usr/local
phys-schost-1# gzcat mysql-max-5.0.22-solaris10-architecture_64.tar.gz | tar xvf -
phys-schost-1# ln -s mysql-max-5.0.22-solaris10-architecture_64 mysql
Change the ownership of the MySQL binaries on both nodes.
phys-schost-1# chown -R mysql:mysql /usr/local/mysql
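You can optionally verify the installation on each node before configuring the zones. This sketch assumes the symbolic link created above; the version string printed depends on the tarball you downloaded.

```shell
# Optional check: confirm the symbolic link points to the MySQL
# distribution and that the server binary is executable.
ls -l /usr/local/mysql
/usr/local/mysql/bin/mysqld --version
```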
Log in to the zone.
phys-schost-1# zlogin clu1
phys-schost-2# zlogin clu1
Add the mysql group and user.
zone-1# groupadd -g 1000 mysql
zone-1# useradd -g 1000 -d /global/mnt3/mysql -m -s /bin/ksh mysql
zone-2# groupadd -g 1000 mysql
zone-2# useradd -g 1000 -d /global/mnt3/mysql -m -s /bin/ksh mysql
Leave the zone.
These steps illustrate how to bootstrap the MySQL software installed in the default directory /usr/local/mysql. As long as only one node is mentioned, it must be the node where your resource group is online.
Log in to the zone.
phys-schost-1# zlogin clu1
Create your data base directories.
zone-1# mkdir -p /global/mnt3/mysql-data/logs
zone-1# mkdir /global/mnt3/mysql-data/innodb
zone-1# mkdir /global/mnt3/mysql-data/BDB
Bootstrap MySQL.
zone-1# cd /usr/local/mysql
zone-1# ./scripts/mysql_install_db --datadir=/global/mnt3/mysql-data
Create your my.cnf config-file in /global/mnt3/mysql-data.
zone-1# cat > /global/mnt3/mysql-data/my.cnf << EOF
[mysqld]
server-id=1
#port=3306
# 10.18.5.1 is the address of the logical host
bind-address=10.18.5.1
# this is the socket of the logical host
socket=/tmp/ha-host-1.sock
log=/global/mnt3/mysql-data/logs/log1
log-bin=/global/mnt3/mysql-data/logs/bin-log
binlog-ignore-db=sc3_test_database
log-slow-queries=/global/mnt3/mysql-data/logs/log-slow-queries
#log-update=/global/mnt3/mysql-data/logs/log-update
# Innodb
#skip-innodb
innodb_data_home_dir = /global/mnt3/mysql-data/innodb
innodb_data_file_path = ibdata1:10M:autoextend
innodb_log_group_home_dir = /global/mnt3/mysql-data/innodb
innodb_log_arch_dir = /global/mnt3/mysql-data/innodb
# You can set .._buffer_pool_size up to 50 - 80 %
# of RAM but beware of setting memory usage too high
set-variable = innodb_buffer_pool_size=50M
set-variable = innodb_additional_mem_pool_size=20M
# Set .._log_file_size to 25 % of buffer pool size
set-variable = innodb_log_file_size=12M
set-variable = innodb_log_buffer_size=4M
innodb_flush_log_at_trx_commit=1
set-variable = innodb_lock_wait_timeout=50
# BDB
# Uncomment skip-bdb if you used a binary download.
# Binary downloads very often come without BDB support.
#skip-bdb
bdb-home=/global/mnt3/mysql-data
bdb-no-recover
bdb-lock-detect=DEFAULT
bdb-logdir=/global/mnt3/mysql-data/BDB
bdb-tmpdir=/global/mnt3/mysql-data/BDB
#bdb_max_lock=10000
# Replication slave
#server-id=2
#master-host=administerix
#master-user=repl
#master-password=repl
#master-info-file=/global/mnt3/mysql-data/logs/master.info
# MySQL 4.x
#relay-log=/global/mnt3/mysql-data/logs/slave-bin.log
#relay-log-info-file=/global/mnt3/mysql-data/logs/slave-info
EOF
Change the ownership of the MySQL data directory.
zone-1# chown -R mysql:mysql /global/mnt3/mysql-data
Change the permission of the my.cnf file.
zone-1# chmod 644 /global/mnt3/mysql-data/my.cnf
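Before bootstrapping further, it can be worth confirming that the ownership and permission changes took effect, because the mysqld process runs as the mysql user. A minimal check:

```shell
# Optional check: the data directory tree should be owned by
# mysql:mysql, and my.cnf should have mode 644 (rw-r--r--).
ls -ld /global/mnt3/mysql-data
ls -l /global/mnt3/mysql-data/my.cnf
</imports>
```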
In this task you create the configuration file that prepares the MySQL database. It is assumed that you are still logged in to this zone.
Copy the MySQL database configuration file from the agent directory to its deployment location.
zone-1# cp /opt/SUNWscmys/util/mysql_config /global/mnt3
Add this cluster's information to the mysql_config configuration file.
The following listing shows the relevant file entries and the values to assign to each entry.
. . .
MYSQL_BASE=/usr/local/mysql
MYSQL_USER=root
MYSQL_PASSWD=root
MYSQL_HOST=ha-host-1
FMUSER=fmuser
FMPASS=fmuser
MYSQL_SOCK=/tmp/ha-host-1.sock
MYSQL_NIC_HOSTNAME="zone-1 zone-2"
MYSQL_DATADIR=/global/mnt3/mysql-data
Save and close the file.
This task initializes and prepares your database. It is essential that you perform it on one node only. It is assumed that you are still logged in to this zone.
Start the MySQL database manually on the zone where the resource group is online.
zone-1# cd /usr/local/mysql
zone-1# ./bin/mysqld --defaults-file=/global/mnt3/mysql-data/my.cnf \
--basedir=/usr/local/mysql --datadir=/global/mnt3/mysql-data \
--pid-file=/global/mnt3/mysql-data/mysqld.pid \
--user=mysql >> /global/mnt3/mysql-data/logs/ha-host-1.log 2>&1 &
Set the password for localhost in MySQL to root.
zone-1# /usr/local/mysql/bin/mysqladmin -S /tmp/ha-host-1.sock -uroot \
password 'root'
Add an admin user in the MySQL database for the logical host.
zone-1# /usr/local/mysql/bin/mysql -S /tmp/ha-host-1.sock -uroot -proot
mysql> use mysql;
mysql> GRANT ALL ON *.* TO 'root'@'zone-1' IDENTIFIED BY 'root';
mysql> GRANT ALL ON *.* TO 'root'@'zone-2' IDENTIFIED BY 'root';
mysql> UPDATE user SET Grant_priv='Y' WHERE User='root' AND Host='zone-1';
mysql> UPDATE user SET Grant_priv='Y' WHERE User='root' AND Host='zone-2';
mysql> exit
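You can optionally confirm that the grants were stored before running the registration script. This sketch queries the mysql.user table noninteractively through the same socket used above.

```shell
# Optional check: list the hosts from which root may connect.
# Entries for zone-1 and zone-2 should appear with Grant_priv='Y'.
/usr/local/mysql/bin/mysql -S /tmp/ha-host-1.sock -uroot -proot \
  -e "SELECT User, Host, Grant_priv FROM mysql.user;"
```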
Prepare the Sun Cluster specific test database.
zone-1# ksh /opt/SUNWscmys/util/mysql_register -f /global/mnt3/mysql_config
Stop the MySQL database.
zone-1# kill -TERM `cat /global/mnt3/mysql-data/mysqld.pid`
Leave the zone.
Copy the MySQL database configuration file from the agent directory to its deployment location.
phys-schost-1# cp /opt/SUNWscmys/util/ha_mysql_config /global/mnt3
Add this cluster's information to the ha_mysql_config configuration file.
The following listing shows the relevant file entries and the values to assign to each entry.
. . .
RS=RS-MYS
RG=RG-MYS
PORT=3306
LH=ha-host-1
HAS_RS=RS-MYS-HAS
. . .
BASEDIR=/usr/local/mysql
DATADIR=/global/mnt3/mysql-data
MYSQLUSER=mysql
MYSQLHOST=ha-host-1
FMUSER=fmuser
FMPASS=fmuser
LOGDIR=/global/mnt3/mysql-data/logs
CHECK=YES
Save and close the file.
Run the ha_mysql_register script to register the resource.
phys-schost-1# ksh /opt/SUNWscmys/util/ha_mysql_register \
-f /global/mnt3/ha_mysql_config
Enable the resource.
phys-schost-1# clresource enable RS-MYS
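To conclude, you can optionally verify that the data service is working and fails over cleanly. The following sketch uses standard Sun Cluster 3.2 commands and the socket path configured in my.cnf; run the mysqladmin check from the zone where the resource group is online.

```shell
# Optional check: the resource should report Online.
clresource status RS-MYS

# From the active zone, verify that the database answers.
/usr/local/mysql/bin/mysqladmin -S /tmp/ha-host-1.sock -uroot -proot ping

# Optionally test failover by switching the resource group to the
# other zone node and back again.
clresourcegroup switch -n phys-schost-2:clu1 RG-MYS
clresourcegroup switch -n phys-schost-1:clu1 RG-MYS
```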