This chapter explains how to install and configure Sun Cluster HA for MySQL.
This chapter contains the following sections.
Planning the Sun Cluster HA for MySQL Installation and Configuration
Verifying the Sun Cluster HA for MySQL Installation and Configuration
Upgrade to MySQL 4.x.x from 3.23.54 when using Sun Cluster HA for MySQL
Table 1 lists the tasks for installing and configuring Sun Cluster HA for MySQL. Perform these tasks in the order that they are listed.
Table 1 Task Map: Installing and Configuring Sun Cluster HA for MySQL

| Task | For Instructions, Go To |
|---|---|
| 1. Plan the installation. | Sun Cluster HA for MySQL Overview; Planning the Sun Cluster HA for MySQL Installation and Configuration |
| 2. Install and configure MySQL. | Installing and Configuring MySQL |
| 3. Verify the installation and configuration. | |
| 4. Install the Sun Cluster HA for MySQL packages. | Installing the Sun Cluster HA for MySQL Packages |
| 5. Register and configure Sun Cluster HA for MySQL. | Registering and Configuring Sun Cluster HA for MySQL |
| 6. Verify the Sun Cluster HA for MySQL installation and configuration. | How to Verify the Sun Cluster HA for MySQL Installation and Configuration |
| 7. Understand the Sun Cluster HA for MySQL fault monitor. | |
| 8. Debug Sun Cluster HA for MySQL. | |
| 9. Upgrade to SC3.2 when using Sun Cluster HA for MySQL. | |
| 10. Upgrade to MySQL 4.0.15 from 3.23.54 when using Sun Cluster HA for MySQL. | Upgrade to MySQL 4.x.x from 3.23.54 when using Sun Cluster HA for MySQL |
The MySQL software delivers a fast, multi-threaded, multi-user, and robust SQL (Structured Query Language) database server. MySQL Server is intended for mission-critical, heavy-load production systems as well as for embedding into mass-deployed software. MySQL is a trademark of MySQL AB.
MySQL is freely available under the GNU General Public License, and you can download it from http://www.mysql.com.
The Sun Cluster HA for MySQL data service provides a mechanism for orderly startup and shutdown, fault monitoring, and automatic failover of the MySQL service. The following MySQL components are protected by the Sun Cluster HA for MySQL data service.
Table 2 Protection of Components

| Component | Protected by |
|---|---|
| MySQL server | Sun Cluster HA for MySQL |
This section contains the information you need to plan your Sun Cluster HA for MySQL installation and configuration.
Sun Cluster HA for MySQL is supported in Solaris Containers. Sun Cluster offers two concepts for Solaris Containers.
Zones are containers that are running after a reboot of the node. These containers are used with resource groups by listing nodename:zonename as a valid “nodename” in the resource group's nodename list.
Failover zone containers are managed by the Solaris Container agent and are represented by a resource of a resource group.
This section provides a list of software and hardware configuration restrictions that apply to Sun Cluster HA for MySQL only.
For restrictions that apply to all data services, see the Sun Cluster Release Notes.
Your data service configuration might not be supported if you do not observe these restrictions.
Sun Cluster HA for MySQL can be configured only as a failover data service and not as a scalable data service.
The MySQL configuration file (my.cnf) should be placed only in the MySQL Database directory. If my.cnf must reside on a local file system, place it there and create a symbolic link to it from the MySQL Database directory. Do not place my.cnf in /etc (a global file), because it will override command-line options.
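For illustration, a minimal sketch of the symbolic-link approach, assuming a hypothetical local path /local/etc/my.cnf and the Database directory /global/mysql-data-1:
# mv /global/mysql-data-1/my.cnf /local/etc/my.cnf
# ln -s /local/etc/my.cnf /global/mysql-data-1/my.cnf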
The following configurations are supported with the Sun Cluster HA for MySQL data service.
Single/Multiple MySQL instances in master configuration
Single/Multiple MySQL instances in slave configuration
The master and slave instances should not be on the same physical node.
Regardless of which MySQL delivery method you have chosen, that is, whether you downloaded MySQL from http://www.mysql.com or obtained it from another source, the following restrictions apply.
Each MySQL instance must have a unique Database directory. You can mount this Database directory as either a Failover File System or Global File System.
It is best practice to mount Global File Systems with the /global prefix and to mount Failover File Systems with the /local prefix.
The MySQL configuration in a failover zone uses the smf component of Sun Cluster HA for Solaris Containers. The registration of the MySQL data service in a failover zone defines an smf service to control the MySQL database. The name of this smf service is generated according to the following naming scheme: svc:/application/sczone-agents:resource-name. No other smf service with exactly this name can exist.
The associated smf manifest is automatically created during the registration process in this location and naming scheme: /var/svc/manifest/application/sczone-agents/resource-name.xml. No other manifest can coexist with this name.
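For example, if the MySQL resource were named mysql-res (a hypothetical name), the generated smf service and manifest would be:
svc:/application/sczone-agents:mysql-res
/var/svc/manifest/application/sczone-agents/mysql-res.xml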
The following example shows MySQL installed onto a Global File System with two MySQL instances (mysql-data-1 and mysql-data-2). The final output shows a subset of the /etc/vfstab entries for MySQL deployed using Veritas Volume Manager.
# ls -l /usr/local total 4 drwxrwxrwx 2 root other 512 Oct 1 16:44 bin lrwxrwxrwx 1 root other 13 Oct 11 11:20 mysql -> /global/mysql # # ls -l /global/mysql total 10432 drwxr-xr-x 13 mysql mysql 512 Mar 16 00:03 . drwxrwxrwx 7 root other 2048 Apr 11 09:53 .. -rw-r--r-- 1 mysql mysql 19106 Mar 15 23:29 COPYING -rw-r--r-- 1 mysql mysql 28003 Mar 15 23:29 COPYING.LIB -rw-r--r-- 1 mysql mysql 126466 Mar 15 16:47 ChangeLog -rw-r--r-- 1 mysql mysql 6811 Mar 15 23:29 INSTALL-BINARY -rw-r--r-- 1 mysql mysql 1937 Mar 15 16:47 README drwxr-xr-x 2 mysql mysql 1536 Mar 16 00:03 bin -rwxr-xr-x 1 mysql mysql 773 Mar 16 00:03 configure drwxr-x--- 4 mysql mysql 512 Mar 16 00:03 data drwxr-xr-x 2 mysql mysql 1024 Mar 16 00:03 include drwxr-xr-x 2 mysql mysql 512 Mar 16 00:03 lib drwxr-xr-x 3 mysql mysql 512 Mar 16 00:03 man -rw-r--r-- 1 mysql mysql 2676944 Mar 15 23:23 manual.html -rw-r--r-- 1 mysql mysql 2329252 Mar 15 23:23 manual.txt -rw-r--r-- 1 mysql mysql 98233 Mar 15 23:23 manual_toc.html drwxr-xr-x 6 mysql mysql 512 Mar 16 00:03 mysql-test drwxr-xr-x 2 mysql mysql 512 Mar 16 00:03 scripts drwxr-xr-x 3 mysql mysql 512 Mar 16 00:03 share drwxr-xr-x 7 mysql mysql 1024 Mar 16 00:03 sql-bench drwxr-xr-x 2 mysql mysql 512 Mar 16 00:03 support-files drwxr-xr-x 2 mysql mysql 512 Mar 16 00:03 tests # ls -l /global/mysql-data-1 Total 30 drwxrwxrwx 9 mysql mysql 512 Apr 15 12:06 . drwxrwxrwx 20 root root 1024 Apr 10 12:41 .. drwxr-xr-x 2 mysql mysql 512 Apr 15 12:00 BDB drwxrwxrwx 2 mysql mysql 512 Apr 15 11:59 innodb drwxrwxrwx 2 mysql mysql 2048 Apr 15 14:47 logs -rw-r--r-- 1 mysql mysql 1432 Apr 15 11:58 my.cnf drwx------ 2 mysql mysql 512 Apr 15 11:59 mysql -rw-rw---- 1 mysql mysql 5 Apr 15 14:47 mysqld.pid drwx------ 2 mysql mysql 512 Apr 15 14:53 sc3_test_database drwx------ 2 mysql mysql 512 Apr 15 11:58 test drwx------ 2 mysql mysql 512 Apr 15 12:00 testdb # # ls -l /global/mysql-data-2 total 32 drwxrwxrwx 9 mysql mysql 512 Apr 15 07:49 . drwxrwxrwx 20 root root 1024 Apr 10 12:41 .. drwxr-xr-x 2 mysql mysql 512 Apr 14 11:16 BDB drwxr-xr-x 2 mysql mysql 512 Apr 14 11:14 innodb drwxr-xr-x 2 mysql mysql 2560 Apr 15 10:15 logs -rw-r--r-- 1 mysql mysql 1459 Apr 14 11:13 my.cnf drwx------ 2 mysql mysql 512 Apr 14 11:14 mysql -rw-rw---- 1 mysql mysql 5 Apr 15 10:10 mysqld.pid drwx------ 2 mysql mysql 512 Apr 15 10:10 sc3_test_database drwx------ 2 mysql mysql 512 Apr 14 11:14 test drwx------ 2 mysql mysql 512 Apr 14 11:16 testdb # more /etc/vfstab (Subset of the output) /dev/vx/dsk/dg1/vol01 /dev/vx/rdsk/dg1/vol01 /global/mysql ufs 2 yes global,logging /dev/vx/dsk/dg2/vol01 /dev/vx/rdsk/dg2/vol01 /global/mysql-data-1 ufs 2 yes global,logging /dev/vx/dsk/dg2/vol01 /dev/vx/rdsk/dg2/vol01 /global/mysql-data-2 ufs 2 yes global,logging # |
In the above example the Database directory for the MySQL instance 1 is /global/mysql-data-1, whereas the Database directory for the MySQL instance 2 is /global/mysql-data-2.
The following example shows MySQL installed on Local File Systems and two MySQL instances (mysql-data-1 and mysql-data-2) on Failover File Systems. The final output shows a subset of the /etc/vfstab entries for MySQL deployed using Veritas Volume Manager.
# ls -l /usr/local/mysql total 10432 drwxr-xr-x 13 mysql mysql 512 Mar 16 00:03 . drwxrwxrwx 7 root other 2048 Apr 11 09:53 .. -rw-r--r-- 1 mysql mysql 19106 Mar 15 23:29 COPYING -rw-r--r-- 1 mysql mysql 28003 Mar 15 23:29 COPYING.LIB -rw-r--r-- 1 mysql mysql 126466 Mar 15 16:47 ChangeLog -rw-r--r-- 1 mysql mysql 6811 Mar 15 23:29 INSTALL-BINARY -rw-r--r-- 1 mysql mysql 1937 Mar 15 16:47 README drwxr-xr-x 2 mysql mysql 1536 Mar 16 00:03 bin -rwxr-xr-x 1 mysql mysql 773 Mar 16 00:03 configure drwxr-x--- 4 mysql mysql 512 Mar 16 00:03 data drwxr-xr-x 2 mysql mysql 1024 Mar 16 00:03 include drwxr-xr-x 2 mysql mysql 512 Mar 16 00:03 lib drwxr-xr-x 3 mysql mysql 512 Mar 16 00:03 man -rw-r--r-- 1 mysql mysql 2676944 Mar 15 23:23 manual.html -rw-r--r-- 1 mysql mysql 2329252 Mar 15 23:23 manual.txt -rw-r--r-- 1 mysql mysql 98233 Mar 15 23:23 manual_toc.html drwxr-xr-x 6 mysql mysql 512 Mar 16 00:03 mysql-test drwxr-xr-x 2 mysql mysql 512 Mar 16 00:03 scripts drwxr-xr-x 3 mysql mysql 512 Mar 16 00:03 share drwxr-xr-x 7 mysql mysql 1024 Mar 16 00:03 sql-bench drwxr-xr-x 2 mysql mysql 512 Mar 16 00:03 support-files drwxr-xr-x 2 mysql mysql 512 Mar 16 00:03 tests # ls -l /local/mysql-data-1 Total 30 drwxrwxrwx 9 mysql mysql 512 Apr 15 12:06 . drwxrwxrwx 20 root root 1024 Apr 10 12:41 .. drwxr-xr-x 2 mysql mysql 512 Apr 15 12:00 BDB drwxrwxrwx 2 mysql mysql 512 Apr 15 11:59 innodb drwxrwxrwx 2 mysql mysql 2048 Apr 15 14:47 logs -rw-r--r-- 1 mysql mysql 1432 Apr 15 11:58 my.cnf drwx------ 2 mysql mysql 512 Apr 15 11:59 mysql -rw-rw---- 1 mysql mysql 5 Apr 15 14:47 mysqld.pid drwx------ 2 mysql mysql 512 Apr 15 14:53 sc3_test_database drwx------ 2 mysql mysql 512 Apr 15 11:58 test drwx------ 2 mysql mysql 512 Apr 15 12:00 testdb # #ls -l /local/mysql-data-2 total 32 drwxrwxrwx 9 mysql mysql 512 Apr 15 07:49 . drwxrwxrwx 20 root root 1024 Apr 10 12:41 .. drwxr-xr-x 2 mysql mysql 512 Apr 14 11:16 BDB drwxr-xr-x 2 mysql mysql 512 Apr 14 11:14 innodb drwxr-xr-x 2 mysql mysql 2560 Apr 15 10:15 logs -rw-r--r-- 1 mysql mysql 1459 Apr 14 11:13 my.cnf drwx------ 2 mysql mysql 512 Apr 14 11:14 mysql -rw-rw---- 1 mysql mysql 5 Apr 15 10:10 mysqld.pid drwx------ 2 mysql mysql 512 Apr 15 10:10 sc3_test_database drwx------ 2 mysql mysql 512 Apr 14 11:14 test drwx------ 2 mysql mysql 512 Apr 14 11:16 testdb # more /etc/vfstab (Subset of the output) /dev/vx/dsk/dg2/vol01 /dev/vx/rdsk/dg2/vol01 /local/mysql-data-1 ufs 2 yes logging /dev/vx/dsk/dg2/vol01 /dev/vx/rdsk/dg2/vol01 /local/mysql-data-2 ufs 2 yes logging # |
In the above example the Database directory for the MySQL instance 1 is /local/mysql-data-1, whereas the Database directory for the MySQL instance 2 is /local/mysql-data-2.
The requirements in this section apply to Sun Cluster HA for MySQL only. You must meet these requirements before you proceed with your Sun Cluster HA for MySQL installation and configuration.
Your data service configuration might not be supported if you do not adhere to these requirements.
MySQL components and their dependencies — You can configure the Sun Cluster HA for MySQL data service to protect a MySQL instance and its respective components. The components and the dependencies between them are briefly described below.
| Component | Dependency |
|---|---|
| MySQL resource in a Solaris 10 global zone or non-global zone, or in Solaris 9 | SUNW.HAStoragePlus (required only if the configuration uses a failover file system or file systems in a zone); SUNW.LogicalHostName |
| MySQL resource in a Solaris 10 failover zone | Sun Cluster HA for Solaris Containers boot resource; SUNW.HAStoragePlus; SUNW.LogicalHostName (required only if the zone's boot resource does not manage the zone's IP address) |
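The ha_mysql_register script normally applies these dependencies for you. For illustration only, a hedged sketch of how such a dependency could be expressed on an already created MySQL resource, using hypothetical resource names:
# clresource set -p Resource_dependencies=MySQL-has-resource,MySQL-lh-resource mysql-res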
The MySQL component has two configuration and registration files in /opt/SUNWscmys/util. These files allow you to register the MySQL component with Sun Cluster and prepare a MySQL instance to be registered.
Within these files, the appropriate dependencies have been applied.
# cd /opt/SUNWscmys # more util/*config :::::::::::::: ha_mysql_config :::::::::::::: # # Copyright 2006 Sun Microsystems, Inc. All rights reserved. # Use is subject to license terms. # #ident "@(#)ha_mysql_config.ksh 1.3 06/03/08 SMI" # This file will be sourced in by ha_mysql_register and the parameters # listed below will be used. # # These parameters can be customized in (key=value) form # # RS - name of the resource for the application # RG - name of the resource group containing RS # # To have the mysql agent local zone aware, 4 Variables are needed: # ZONE - the zone name where the Mysql Database should run in # Optional # ZONEBT - The resource name which controls the zone. # Optional # PROJECT - A project in the zone, that will be used for this service # specify it if you have an su - in the start stop or probe, # or to define the smf credentials. If the variable is not set, # it will be translated as :default for the sm and default # for the zsh component # Optional # ZUSER - A user in the the zone which is used for the smf method # credentials. Yur smf servic e will run under this user # Optional # # Mysql specific Variables # # BASEDIR - name of the Mysql bin directory # DATADIR - name of the Mysql Data directory # MYSQLUSER - name of the user Mysql should be started of # LH - name of the LogicalHostname SC resource # MYSQLHOST - name of the host in /etc/hosts # FMUSER - name of the Mysql fault monitor user # FMPASS - name of the Mysql fault monitor user password # LOGDIR - name of the directory mysqld should store it's logfile. # CHECK - should HA-MySQL check MyISAM index files before start YES/NO. # HAS_RS - name of the MySQL HAStoragePlus SC resource # # The following examples illustrate sample parameters # for Mysql # # BASEDIR=/usr/local/mysql # DATADIR=/global/mysqldata # MYSQLUSER=mysql # LH=mysqllh # MYSQLHOST=mysqllh # FMUSER=fmuser # FMPASS=fmuser # LOGDIR=/global/mysqldata/logs # CHECK=YES # RS= RG= PORT= LH= HAS_RS= # local zone specific options ZONE= ZONE_BT= PROJECT= # mysql specifications BASEDIR= DATADIR= MYSQLUSER= MYSQLHOST= FMUSER= FMPASS= LOGDIR= CHECK= :::::::::::::: mysql_config :::::::::::::: # # Copyright 2006 Sun Microsystems, Inc. All rights reserved. # Use is subject to license terms. # #ident "@(#)mysql_config.ksh 1.3 06/03/08 SMI" # This file will be sourced in by mysql_register and the parameters # listed below will be used. # # Where is mysql installed (BASEDIR) MYSQL_BASE= # Mysql admin-user for localhost (Default is root) MYSQL_USER= # Password for mysql admin user MYSQL_PASSWD= # Configured logicalhost MYSQL_HOST= # Specify a username for a faultmonitor user FMUSER= # Pick a password for that faultmonitor user FMPASS= # Socket name for mysqld ( Should be /tmp/<logical-host>.sock ) MYSQL_SOCK= # FOR SC3.1 ONLY, Specify the physical hostname for the # physical NIC that this logicalhostname belongs to for every node in the # cluster this Resourcegroup can located on. # IE: The logicalhost lh1 belongs to hme1 for physical-node phys-1 and # hme3 for physical-node phys-2. The hostname for hme1 is phys-1-hme0 and # for hme3 on phys-2 it is phys-2-hme3. # IE: MYSQL_NIC_HOSTNAME="phys-1-hme0 phys-2-hme3" MYSQL_NIC_HOSTNAME= # |
my.cnf file — The Sun Cluster HA for MySQL data service provides two sample my.cnf files, one sample file for a master configuration and one for a slave configuration. However, ensure that at least the following parameters are set.
The my.cnf file is an important file within MySQL. Refer to the MySQL documentation for complete configuration information on the parameters that follow.
MySQL my.cnf file, [mysqld] section, in a master configuration (a minimal sketch follows this list):
bind-address must be set to the IP name of the defined logical host.
Some MySQL builds do not work with bind-address if the name of the logical host is set. In these cases, do not set the bind-address parameter, or use the absolute IP address of the logical hostname.
socket is defined as /tmp/<logical host's IP name>.sock.
binlog-ignore-db contains sc3_test_database if the log-bin option is being used.
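The following is a minimal, hedged sketch of such a master [mysqld] section. The logical hostname mysqllh, the log path, and the server-id value are illustrative assumptions; adjust them to your environment:
[mysqld]
# hypothetical logical hostname; use its absolute IP address if your MySQL build
# does not accept a name here
bind-address = mysqllh
socket = /tmp/mysqllh.sock
# binary logging is only needed if this master replicates to a slave
log-bin = /global/mysql-data-1/logs/bin-log
binlog-ignore-db = sc3_test_database
# server-id is a MySQL replication requirement, not a Sun Cluster one
server-id = 1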
MySQL my.cnf file, [mysqld] section, in a slave configuration (a minimal sketch follows this list):
bind-address must be set to the IP name of the defined logical host.
Some MySQL builds do not work with names in the bind-address parameter. In these cases, do not set the bind-address parameter, or use the absolute IP address of the logical hostname.
socket is defined as /tmp/<logical host's IP name>.sock.
binlog-ignore-db contains sc3_test_database if the log-bin option is being used.
master-host is the hostname where the master instance resides.
master-user is the username the slave uses to identify itself to the master.
master-password is the password the slave uses to identify itself to the master.
master-info-file is the location of the file that records where MySQL left off on the master during the replication process. This file must be placed on a global or failover file system (GFS/FFS).
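A corresponding hedged sketch for a slave [mysqld] section, again with purely illustrative names (mysqlslh for the slave's logical host, mysqllh for the master, repl for the replication user):
[mysqld]
bind-address = mysqlslh
socket = /tmp/mysqlslh.sock
binlog-ignore-db = sc3_test_database
server-id = 2
# replication settings pointing at the master instance
master-host = mysqllh
master-user = repl
master-password = replpasswd
# must live on a global or failover file system
master-info-file = /global/mysql-data-2/master.info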
This section contains the procedures you need to install and configure MySQL.
References will be made to certain directories for MySQL. The following list shows common pathnames for these references. Refer to Configuration Restrictions, where these examples are described in detail.
MySQL installed from http://www.mysql.com on a Global File System, with a MySQL instance on a Global File System (Example 1)
MySQL Basedirectory — /global/mysql
MySQL Database directory — /global/mysqldata
MySQL installed on a Local File System, with MySQL instances on Failover File Systems (Example 2)
MySQL Basedirectory — /usr/local/mysql
MySQL Database directory — /local/mysqldata
Determine how MySQL will be deployed in Sun Cluster –
Determine how many MySQL instances will be deployed.
Determine which Cluster File System will be used by each MySQL instance.
Determine the type of the target zone where you will install MySQL. Valid zone types are the global zone, a failover zone, and a non-global zone.
To install and configure MySQL in a global zone configuration, complete the following tasks:
To install and configure MySQL in a zone configuration, complete the following tasks:
To install and configure MySQL in a failover zone configuration, complete the following tasks:
You will find installation examples for each zone type in:
Appendix A, Deployment Example: Installing MySQL in the Global Zone
Appendix B, Deployment Example: Installing MySQL in the Non-Global Failover Zone
Appendix C, Deployment Example: Installing MySQL in a Non-Global Zone
Become superuser or assume a role that provides solaris.cluster.verb RBAC authorization on one of the nodes in the cluster that will host MySQL.
Register the SUNW.gds and SUNW.HAStoragePlus resource type.
# clresourcetype register SUNW.gds SUNW.HAStoragePlus |
Create a failover resource group.
# clresourcegroup create MySQL-failover-resource-group |
Create a resource for the MySQL Disk Storage.
# clresource create \ -g MySQL-failover-resource-group \ -t SUNW.HAStoragePlus \ -p FilesystemMountPoints=MySQL-instance-mount-points MySQL-has-resource |
Create a resource for the MySQL Logical Hostname.
# clreslogicalhostname create \ -g MySQL-failover-resource-group \ -h MySQL-logical-hostname \ MySQL-lh-resource |
Enable the failover resource group that now includes the MySQL Disk Storage and Logical Hostname resources.
# clresourcegroup online -M -n current-node MySQL-failover-resource-group |
Ensure that you are on the node where you enabled your resource group.
Install MySQL onto all nodes within Sun Cluster
It is recommended that MySQL be installed onto a Global File System. For a discussion of the advantages and disadvantages of installing the software on local versus cluster files systems, see “Determining the Location of the Application Binaries” in the Sun Cluster Data Services Installation and Configuration Guide.
Download MySQL from http://www.mysql.com — If you intend to use local disks for the MySQL software, you will need to repeat this step on all nodes within Sun Cluster.
Create a mysql-user and mysql-group for MySQL on all nodes in the cluster that will run MySQL.
Create an entry in /etc/group on all nodes of the Sun Cluster (a hedged example of the commands follows).
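For example, a minimal sketch of creating the group and user on Solaris; the numeric IDs, home directory, and shell are illustrative assumptions, not requirements:
# groupadd -g 1000 mysql
# useradd -u 1000 -g mysql -d /global/mysql -s /bin/ksh mysql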
Change owner and group for MySQL binaries
If the MySQL binaries are installed locally on each node, repeat this step on every node.
# chown -R mysql:mysql /global/mysql |
Create your MySQL Database directory for your MySQL Instance(s).
# mkdir <MySQL Database directory> # |
Refer back to Configuration Restrictions for a description of the <MySQL Database directory> and to Installing and Configuring MySQL for a list of common pathnames.
The following listing shows one MySQL instance. MySQL has been installed from http://www.mysql.com in /global/mysql, which is mounted as a Global File System. The MySQL Database directory for the MySQL instance is /global/mysql-data.
# cd /global/mysql # # ls -l -rw-r--r-- 1 mysql mysql 19106 Dec 10 14:52 COPYING -rw-r--r-- 1 mysql mysql 28003 Dec 10 14:52 COPYING.LIB -rw-r--r-- 1 mysql mysql 44577 Dec 5 10:37 ChangeLog -rw-r--r-- 1 mysql mysql 6811 Dec 10 14:53 INSTALL-BINARY -rw-r--r-- 1 mysql mysql 1976 Dec 5 10:37 README drwxr-xr-x 2 mysql mysql 1024 Dec 13 18:05 bin -rwxr-xr-x 1 mysql mysql 773 Dec 10 15:34 configure drwxr-x--- 3 mysql mysql 512 Apr 3 12:23 data drwxr-xr-x 2 mysql mysql 1024 Dec 10 15:35 include drwxr-xr-x 2 mysql mysql 512 Dec 10 15:35 lib drwxr-xr-x 2 mysql mysql 512 Dec 10 15:35 man -rw-r--r-- 1 mysql mysql 2582089 Dec 10 14:47 manual.html -rw-r--r-- 1 mysql mysql 2239278 Dec 10 14:47 manual.txt -rw-r--r-- 1 mysql mysql 94600 Dec 10 14:47 manual_toc.html drwxr-xr-x 6 mysql mysql 512 Dec 10 15:35 mysql-test drwxr-xr-x 2 mysql mysql 512 Dec 10 15:35 scripts drwxr-xr-x 3 mysql mysql 512 Dec 10 15:35 share drwxr-xr-x 7 mysql mysql 1024 Dec 10 15:35 sql-bench drwxr-xr-x 2 mysql mysql 512 Dec 10 15:35 support-files drwxr-xr-x 2 mysql mysql 512 Dec 10 15:35 tests # |
Create the MySQL my.cnf file according to your requirements — The Sun Cluster HA for MySQL data service provides two sample my.cnf files for MySQL: one sample configuration file is for a master configuration and one is for a slave configuration.
If the Sun Cluster HA for MySQL package (SUNWscmys) was not installed during your initial Sun Cluster installation, proceed to Installing the Sun Cluster HA for MySQL Packages to install it on your cluster. Return here to continue the Installation and Configuration of MySQL.
The contents of /opt/SUNWscmys/etc/my.cnf_sample_[master|slave] provide sample MySQL configuration files that you can use to create your MySQL instance's <MySQL Databasedirectory>/my.cnf. You must still edit that file to reflect your configuration values.
# cp /opt/SUNWscmys/etc/my.cnf_sample_master \ <MySQL Databasedirectory>/my.cnf |
Bootstrap the MySQL instance — This creates the privilege tables db, host, user, tables_priv, and columns_priv in the mysql database, as well as the func table.
# cd <MySQL Basedirectory> |
# ./scripts/mysql_install_db \ --datadir=<MySQL Database directory> |
Create a logfile directory in <MySQL Database Directory>
# mkdir <MySQL Database Directory>/logs |
Change owner and group for <MySQL Database Directory>
# chown -R mysql:mysql <MySQL Database Directory> |
Change file permission for <MySQL Database Directory>/my.cnf
# chmod 644 <MySQL Database Directory>/my.cnf |
Become superuser or assume a role that provides solaris.cluster.verb RBAC authorization on one of the nodes in the cluster that will host MySQL.
Create and boot your zone MySQL-zone on all the nodes that will host your MySQL database.
Register the SUNW.gds and SUNW.HAStoragePlus resource type.
# clresourcetype register SUNW.gds SUNW.HAStoragePlus |
Create a failover resource group.
# clresourcegroup create \ -n node1:MySQL-zone,node2:MySQL-zone \ MySQL-failover-resource-group |
Create a resource for the MySQL Disk Storage.
# clresource create \ -g MySQL-failover-resource-group \ -t SUNW.HAStoragePlus \ -p FilesystemMountPoints=MySQL-instance-mount-points MySQL-has-resource |
Create a resource for the MySQL Logical Hostname.
# clreslogicalhostname create \ -g MySQL-failover-resource-group \ -h MySQL-logical-hostname \ MySQL-lh-resource |
Enable the failover resource group that now includes the MySQL Disk Storage and Logical Hostname resources.
# clresourcegroup online -M -n current-node MySQL-failover-resource-group |
Ensure that you are on the node where you enabled your resource group.
Log in to your zone
# zlogin MySQL-zone |
Become superuser or assume a role that provides solaris.cluster.verb RBAC authorization on one of the nodes in the cluster that will host MySQL.
Install MySQL onto all nodes within Sun Cluster
It is recommended that MySQL be installed onto a Global File System. For a discussion of the advantages and disadvantages of installing the software on local versus cluster files systems, see “Determining the Location of the Application Binaries” in the Sun Cluster Data Services Installation and Configuration Guide.
Download MySQL from http://www.mysql.com — If you intend to use local disks for the MySQL software, you will need to repeat this step on all nodes within Sun Cluster.
Create a mysql-user and mysql-group for MySQL in the zones on all nodes in the cluster that will run MySQL.
Create an entry in /etc/group in the zone on each node of the Sun Cluster.
Change owner and group for MySQL binaries
If the MySQL binaries are installed locally on each node, repeat this step on every node.
# chown -R mysql:mysql /global/mysql |
Create your MySQL Database directory for your MySQL Instance(s).
# mkdir <MySQL Database directory> # |
Refer back to Configuration Restrictions for a description of the <MySQL Database directory> and to Installing and Configuring MySQL for a list of common pathnames.
The following listing shows one MySQL instance. MySQL has been installed from http://www.mysql.com in /global/mysql, which is mounted as a Global File System. The MySQL Database directory for the MySQL instance is /global/mysql-data.
# cd /global/mysql # # ls -l -rw-r--r-- 1 mysql mysql 19106 Dec 10 14:52 COPYING -rw-r--r-- 1 mysql mysql 28003 Dec 10 14:52 COPYING.LIB -rw-r--r-- 1 mysql mysql 44577 Dec 5 10:37 ChangeLog -rw-r--r-- 1 mysql mysql 6811 Dec 10 14:53 INSTALL-BINARY -rw-r--r-- 1 mysql mysql 1976 Dec 5 10:37 README drwxr-xr-x 2 mysql mysql 1024 Dec 13 18:05 bin -rwxr-xr-x 1 mysql mysql 773 Dec 10 15:34 configure drwxr-x--- 3 mysql mysql 512 Apr 3 12:23 data drwxr-xr-x 2 mysql mysql 1024 Dec 10 15:35 include drwxr-xr-x 2 mysql mysql 512 Dec 10 15:35 lib drwxr-xr-x 2 mysql mysql 512 Dec 10 15:35 man -rw-r--r-- 1 mysql mysql 2582089 Dec 10 14:47 manual.html -rw-r--r-- 1 mysql mysql 2239278 Dec 10 14:47 manual.txt -rw-r--r-- 1 mysql mysql 94600 Dec 10 14:47 manual_toc.html drwxr-xr-x 6 mysql mysql 512 Dec 10 15:35 mysql-test drwxr-xr-x 2 mysql mysql 512 Dec 10 15:35 scripts drwxr-xr-x 3 mysql mysql 512 Dec 10 15:35 share drwxr-xr-x 7 mysql mysql 1024 Dec 10 15:35 sql-bench drwxr-xr-x 2 mysql mysql 512 Dec 10 15:35 support-files drwxr-xr-x 2 mysql mysql 512 Dec 10 15:35 tests # |
Create the MySQL my.cnf file according to your requirements — The Sun Cluster HA for MySQL data service provides two sample my.cnf files for MySQL: one sample configuration file is for a master configuration and one is for a slave configuration.
If the Sun Cluster HA for MySQL package (SUNWscmys) was not installed during your initial Sun Cluster installation, proceed to Installing the Sun Cluster HA for MySQL Packages to install it on your cluster. Return here to continue the Installation and Configuration of MySQL.
The contents of /opt/SUNWscmys/etc/my.cnf_sample_[master|slave] provide sample MySQL configuration files that you can use to create your MySQL instance's <MySQL Databasedirectory>/my.cnf. You must still edit that file to reflect your configuration values.
# cp /opt/SUNWscmys/etc/my.cnf_sample_master \ <MySQL Databasedirectory>/my.cnf |
Bootstrap the MySQL instance — This creates the privilege tables db, host, user, tables_priv, and columns_priv in the mysql database, as well as the func table.
# cd <MySQL Basedirectory> |
# ./scripts/mysql_install_db \ --datadir=<MySQL Database directory> |
Create a logfile directory in <MySQL Database Directory>
# mkdir <MySQL Database Directory>/logs |
Change owner and group for <MySQL Database Directory>
# chown -R mysql:mysql <MySQL Database Directory> |
Change file permission for <MySQL Database Directory>/my.cnf
# chmod 644 <MySQL Database Directory>/my.cnf |
Become superuser or assume a role that provides solaris.cluster.verb RBAC authorization on one of the nodes in the cluster that will host MySQL.
As superuser register the SUNW.HAStoragePlus and the SUNW.gds resource types.
# clresourcetype register SUNW.HAStoragePlus SUNW.gds |
Create a failover resource group.
# clresourcegroup create MySQL-resource-group |
Create a resource for the MySQL zone's disk storage.
# clresource create -g MySQL-resource-group \ -t SUNW.HAStoragePlus \ -p FilesystemMountPoints=MySQL-instance-mount-points \ MySQL-has-resource |
(Optional) If you want protection against a total adapter failure for your public network, create a resource for MySQL's logical hostname.
# clreslogicalhostname create -g MySQL-resource-group \ -h logical-hostname \ MySQL-logical-hostname-resource-name |
Place the resource group in the managed state.
# clresourcegroup online -M MySQL-resource-group |
Install the zone.
Install the zone according to the Sun Cluster HA for Solaris Containers agent documentation, assuming that the resource name is MySQL-zone-rs and that the zone name is MySQL-zone.
Verify the zone's installation.
# zoneadm -z MySQL-zone boot # zoneadm -z MySQL-zone halt |
Register the zone's boot component.
Copy the container resource boot component configuration file.
# cp /opt/SUNWsczone/sczbt/util/sczbt_config zones-target-configuration-file |
Use a plain text editor to set the following variables:
RS=MySQL-zone-rs RG=MySQL-resource-group PARAMETERDIR=MySQL-zone-parameter-directory SC_NETWORK=true|false SC_LH=MySQL-logical-hostname-resource-name FAILOVER=true|false HAS_RS=MySQL-has-resource Zonename=MySQL-zone Zonebootopt=zone-boot-options Milestone=zone-boot-milestone Mounts=
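A hedged example of these variables with hypothetical values, assuming the zone is named MySQL-zone, a logical hostname resource protects the public network, and the parameter directory resides on shared storage (the milestone shown is only a common choice, not a requirement):
RS=MySQL-zone-rs
RG=MySQL-resource-group
PARAMETERDIR=/global/mysql/zonepar
SC_NETWORK=true
SC_LH=MySQL-logical-hostname-resource-name
FAILOVER=true
HAS_RS=MySQL-has-resource
Zonename=MySQL-zone
Zonebootopt=
Milestone=multi-user-server
Mounts=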
Create the parameter directory for your zone's resource.
# mkdir MySQL-zone-parameter-directory |
Execute the Sun Cluster HA for Solaris Container's registration script.
# /opt/SUNWsczone/sczbt/util/sczbt_register -f zones-target-configuration-file |
Enable the Solaris Container resource
# clresource enable MySQL-zone-rs |
# clresourcegroup online MySQL-resource-group |
Ensure that you are on the node where you enabled your resource group.
Log in to your zone
# zlogin MySQL-zone |
Become superuser or assume a role that provides solaris.cluster.verb RBAC authorization on one of the nodes in the cluster that will host MySQL.
Install MySQL.
It is recommended that MySQL be installed onto a Global File System. For a discussion of the advantages and disadvantages of installing the software on local versus cluster files systems, see “Determining the Location of the Application Binaries” in the Sun Cluster Data Services Installation and Configuration Guide.
Download MySQL from http://www.mysql.com — If you intend to use local disks for the MySQL software, you will need to repeat this step on all nodes within Sun Cluster.
Create a mysql-user and mysql-group for MySQL in the zone that will run MySQL.
Create an entry in /etc/group in the zone.
Change owner and group for MySQL binaries
# chown -R mysql:mysql /global/mysql |
Create your MySQL Database directory for your MySQL Instance(s).
# mkdir <MySQL Database directory> # |
Refer back to Configuration Restrictions for a description of the <MySQL Database directory> and to Installing and Configuring MySQL for a list of common pathnames.
The following listing shows one MySQL instance. MySQL has been installed from http://www.mysql.com in /global/mysql, which is mounted as a Global File System. The MySQL Database directory for the MySQL instance is /global/mysql-data.
# cd /global/mysql # # ls -l -rw-r--r-- 1 mysql mysql 19106 Dec 10 14:52 COPYING -rw-r--r-- 1 mysql mysql 28003 Dec 10 14:52 COPYING.LIB -rw-r--r-- 1 mysql mysql 44577 Dec 5 10:37 ChangeLog -rw-r--r-- 1 mysql mysql 6811 Dec 10 14:53 INSTALL-BINARY -rw-r--r-- 1 mysql mysql 1976 Dec 5 10:37 README drwxr-xr-x 2 mysql mysql 1024 Dec 13 18:05 bin -rwxr-xr-x 1 mysql mysql 773 Dec 10 15:34 configure drwxr-x--- 3 mysql mysql 512 Apr 3 12:23 data drwxr-xr-x 2 mysql mysql 1024 Dec 10 15:35 include drwxr-xr-x 2 mysql mysql 512 Dec 10 15:35 lib drwxr-xr-x 2 mysql mysql 512 Dec 10 15:35 man -rw-r--r-- 1 mysql mysql 2582089 Dec 10 14:47 manual.html -rw-r--r-- 1 mysql mysql 2239278 Dec 10 14:47 manual.txt -rw-r--r-- 1 mysql mysql 94600 Dec 10 14:47 manual_toc.html drwxr-xr-x 6 mysql mysql 512 Dec 10 15:35 mysql-test drwxr-xr-x 2 mysql mysql 512 Dec 10 15:35 scripts drwxr-xr-x 3 mysql mysql 512 Dec 10 15:35 share drwxr-xr-x 7 mysql mysql 1024 Dec 10 15:35 sql-bench drwxr-xr-x 2 mysql mysql 512 Dec 10 15:35 support-files drwxr-xr-x 2 mysql mysql 512 Dec 10 15:35 tests # |
Create the MySQL my.cnf file according to your requirements — The Sun Cluster HA for MySQL data service provides two sample my.cnf files for MySQL: one sample configuration file is for a master configuration and one is for a slave configuration.
If the Sun Cluster HA for MySQL package (SUNWscmys) was not installed during your initial Sun Cluster installation, proceed to Installing the Sun Cluster HA for MySQL Packages to install it on your cluster. Return here to continue the Installation and Configuration of MySQL.
The contents of /opt/SUNWscmys/etc/my.cnf_sample_[master|slave] provide sample MySQL configuration files that you can use to create your MySQL instance's <MySQL Databasedirectory>/my.cnf. You must still edit that file to reflect your configuration values.
# cp /opt/SUNWscmys/etc/my.cnf_sample_master \ <MySQL Databasedirectory>/my.cnf |
Bootstrap the MySQL instance — This creates the privilege tables db, host, user, tables_priv, and columns_priv in the mysql database, as well as the func table.
# cd <MySQL Basedirectory> |
# ./scripts/mysql_install_db \ --datadir=<MySQL Database directory> |
Create a logfile directory in <MySQL Database Directory>
# mkdir <MySQL Database Directory>/logs |
Change owner and group for <MySQL Database Directory>
# chown -R mysql:mysql <MySQL Database Directory> |
Change file permission for <MySQL Database Directory>/my.cnf
# chmod 644 <MySQL Database Directory>/my.cnf |
This section contains the procedure you need to verify the installation and configuration.
This procedure does not verify that your application is highly available because you have not yet installed your data service.
Before verifying the installation and configuration of MySQL, ensure that the logical hostname for MySQL is available. Depending on your zone type, you will need to complete steps 1 to 6 of the task How to Register and Configure Sun Cluster HA for MySQL as a Failover Service in a Global Zone Configuration, steps 1 to 7 of the task How to Register and Configure Sun Cluster HA for MySQL as a Failover Service in a Failover Zone Configuration, or steps 1 to 7 of the task How to Register and Configure Sun Cluster HA for MySQL as a Failover Service in a Zone Configuration.
(Optional) Log in to your target zone.
# zlogin mysql-zone |
Start the MySQL Server for this instance.
# cd <MySQL Basedirectory> |
# ./bin/mysqld --defaults-file=<MySQL Databasedirectory>/my.cnf \ --basedir=<MySQL Basedirectory> \ --datadir=<MySQL Databasedirectory> \ --user=mysql --pid-file=<MySQL Databasedirectory>/mysqld.pid & |
Connect to the MySQL instance.
# <MySQL Basedirectory>/bin/mysql -h <Logical host> -uroot |
Stop the MySQL server instance.
# kill -TERM `cat <MySQL Databasedirectory>/mysqld.pid` |
(Optional) Leave the target zone.
If you did not install the Sun Cluster HA for MySQL packages during your initial Sun Cluster installation, perform this procedure to install the packages. To install the packages, use the Sun Java Enterprise System Installation Wizard.
Perform this procedure on each cluster node where you are installing the Sun Cluster HA for MySQL packages.
You can run the Sun Java Enterprise System Installation Wizard with a command-line interface (CLI) or with a graphical user interface (GUI). The content and sequence of instructions in the CLI and the GUI are similar.
Even if you plan to configure this data service to run in non-global zones, install the packages for this data service in the global zone. The packages are propagated to any existing non-global zones and to any non-global zones that are created after you install the packages.
Ensure that you have the Sun Java Availability Suite DVD-ROM.
If you intend to run the Sun Java Enterprise System Installation Wizard with a GUI, ensure that your DISPLAY environment variable is set.
On the cluster node where you are installing the data service packages, become superuser.
Load the Sun Java Availability Suite DVD-ROM into the DVD-ROM drive.
If the Volume Management daemon vold(1M) is running and configured to manage DVD-ROM devices, the daemon automatically mounts the DVD-ROM on the /cdrom directory.
Change to the Sun Java Enterprise System Installation Wizard directory of the DVD-ROM.
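For example, assuming vold mounted the DVD-ROM on the default /cdrom/cdrom0 mount point and a SPARC system (the exact directory name depends on your media):
# cd /cdrom/cdrom0/Solaris_sparc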
Start the Sun Java Enterprise System Installation Wizard.
# ./installer |
When you are prompted, accept the license agreement.
If any Sun Java Enterprise System components are installed, you are prompted to select whether to upgrade the components or install new software.
From the list of Sun Cluster agents under Availability Services, select the data service for MySQL.
If you require support for languages other than English, select the option to install multilingual packages.
English language support is always installed.
When prompted whether to configure the data service now or later, choose Configure Later.
Choose Configure Later to perform the configuration after the installation.
Follow the instructions on the screen to install the data service packages on the node.
The Sun Java Enterprise System Installation Wizard displays the status of the installation. When the installation is complete, the wizard displays an installation summary and the installation logs.
(GUI only) If you do not want to register the product and receive product updates, deselect the Product Registration option.
The Product Registration option is not available with the CLI. If you are running the Sun Java Enterprise System Installation Wizard with the CLI, omit this step.
Exit the Sun Java Enterprise System Installation Wizard.
Unload the Sun Java Availability Suite DVD-ROM from the DVD-ROM drive.
See Registering and Configuring Sun Cluster HA for MySQL to register Sun Cluster HA for MySQL and to configure the cluster for the data service.
This section contains the procedures you need to configure Sun Cluster HA for MySQL. Depending on your zone type, you need to complete one of the following tasks.
How to Register and Configure Sun Cluster HA for MySQL as a Failover Service in a Zone Configuration
This procedure assumes that you installed the data service packages during your initial Sun Cluster installation.
If you did not install the Sun Cluster HA for MySQL packages as part of your initial Sun Cluster installation, go to Installing the Sun Cluster HA for MySQL Packages.
Become superuser or assume a role that provides solaris.cluster.verb RBAC authorization on the node in the cluster that will host MySQL.
Start the MySQL Server instance manually.
# cd <MySQL Basedirectory> |
# ./bin/mysqld --defaults-file=<MySQL Databasedirectory>/my.cnf \ --basedir=<MySQL Basedirectory> \ --datadir=<MySQL Databasedirectory> \ --user=mysql \ --pid-file=<MySQL Databasedirectory>/mysqld.pid & |
Configure the admin password for the adminuser.
# <MySQL Basedirectory>/bin/mysqladmin \ -S /tmp/<Logical host>.sock password 'admin password' |
Copy the MySQL configuration files to your private place.
# cp /opt/SUNWscmys/util/ha_mysql_config /my-place # cp /opt/SUNWscmys/util/mysql_config /my-place |
Create a faultmonitor-user and a test-database for the MySQL instance.
# cd my-place |
Edit the mysql_config file and follow the comments within that file:
# # Copyright 2006 Sun Microsystems, Inc. All rights reserved. # Use is subject to license terms. # #ident "@(#)mysql_config.ksh 1.3 06/03/08 SMI" # This file will be sourced in by mysql_register and the parameters # listed below will be used. # # Where is mysql installed (BASEDIR) MYSQL_BASE= # Mysql admin-user for localhost (Default is root) MYSQL_USER= # Password for mysql admin user MYSQL_PASSWD= # Configured logicalhost MYSQL_HOST= # Specify a username for a faultmonitor user FMUSER= # Pick a password for that faultmonitor user FMPASS= # Socket name for mysqld ( Should be /tmp/<logical-host>.sock ) MYSQL_SOCK= # FOR SC3.1 ONLY, Specify the physical hostname for the # physical NIC that this logicalhostname belongs to for every node in the # cluster this Resourcegroup can located on. # IE: The logicalhost lh1 belongs to hme1 for physical-node phys-1 and # hme3 for physical-node phys-2. The hostname for hme1 is phys-1-hme0 and # for hme3 on phys-2 it is phys-2-hme3. # IE: MYSQL_NIC_HOSTNAME="phys-1-hme0 phys-2-hme3" MYSQL_NIC_HOSTNAME= |
The following is an example for a MySQL instance on SC3.2.
MYSQL_BASE=/global/mysql MYSQL_USER=root MYSQL_PASSWD=root MYSQL_HOST=hahostix1 FMUSER=fmuser FMPASS=fmuser MYSQL_SOCK=/tmp/hahostix1.sock MYSQL_NIC_HOSTNAME="clusterix1 clusterix2" |
After editing mysql_config, you must run the mysql_register script.
# /opt/SUNWscmys/util/mysql_register -f my-place/mysql_config |
Stop the MySQL Server instance manually.
# kill -TERM `cat <MySQL Databasedirectory>/mysqld.pid` |
Create and register MySQL as a failover data service.
# cd my-place |
Edit the ha_mysql_config file and follow the comments within that file, as shown below:
# # Copyright 2006 Sun Microsystems, Inc. All rights reserved. # Use is subject to license terms. # #ident "@(#)ha_mysql_config.ksh 1.3 06/03/08 SMI" # This file will be sourced in by ha_mysql_register and the parameters # listed below will be used. # # These parameters can be customized in (key=value) form # # RS - name of the resource for the application # RG - name of the resource group containing RS # # To have the mysql agent local zone aware, 4 Variables are needed: # ZONE - the zone name where the Mysql Database should run in # Optional # ZONEBT - The resource name which controls the zone. # Optional # PROJECT - A project in the zone, that will be used for this service # specify it if you have an su - in the start stop or probe, # or to define the smf credentials. If the variable is not set, # it will be translated as :default for the sm and default # for the zsh component # Optional # ZUSER - A user in the the zone which is used for the smf method # credentials. Yur smf servic e will run under this user # Optional # # Mysql specific Variables # # BASEDIR - name of the Mysql bin directory # DATADIR - name of the Mysql Data directory # MYSQLUSER - name of the user Mysql should be started of # LH - name of the LogicalHostname SC resource # MYSQLHOST - name of the host in /etc/hosts # FMUSER - name of the Mysql fault monitor user # FMPASS - name of the Mysql fault monitor user password # LOGDIR - name of the directory mysqld should store it's logfile. # CHECK - should HA-MySQL check MyISAM index files before start YES/NO. # HAS_RS - name of the MySQL HAStoragePlus SC resource # |
The following is an example for a MySQL instance.
RS=mysql-res RG=mysql-rg BASEDIR=/global/mysql DATADIR=/global/mysql-data MYSQLUSER=mysql LH=hahostix1 MYSQLHOST=hahostix1 FMUSER=fmuser FMPASS=fmuser LOGDIR=/global/mysql-data/logs CHECK=YES HAS_RS=mysql-has-res ZONE= ZONE_BT= PROJECT= |
Register the MySQL resource.
# /opt/SUNWscmys/util/ha_mysql_register -f my-place/ha_mysql_config |
Enable each MySQL resource.
Repeat this step for each MySQL instance, if multiple instances were created.
# clresource status # clresource enable MySQL-resource |
Add an admin user for accessing the MySQL instance locally through the MySQL logical host IP name.
If you want to access the MySQL instance only through the socket (localhost), omit this step.
When bootstrapping MySQL, the mysql_install_db command creates two admin users, one belonging to localhost and one belonging to the node on which mysql_install_db was executed.
Add an admin user for every physical node in the cluster that will run this MySQL instance.
If the nodename and the hostname for the physical interface are different, use the hostname for the physical interface.
The following is an example for a MySQL instance on SC3.2.
# mysql -S /tmp/hahostix1.sock -uroot mysql> use mysql; mysql> mysql> GRANT ALL ON *.* TO 'root'@'clusterix2' IDENTIFIED BY 'rootpasswd'; mysql> mysql> GRANT ALL ON *.* TO 'root'@'clusterix1' IDENTIFIED BY 'rootpasswd'; mysql> exit; |
You must manually add Grant_priv to the admin users. See the MySQL administration documentation.
This procedure assumes that you installed the data service packages during your initial Sun Cluster installation.
If you did not install the Sun Cluster HA for MySQL packages as part of your initial Sun Cluster installation, go to Installing the Sun Cluster HA for MySQL Packages.
Log in to the zone on the node that hosts your MySQL resource group.
Become superuser or assume a role that provides solaris.cluster.verb RBAC authorization in the node's zone in the cluster that hosts MySQL.
Start the MySQL Server instance manually.
# cd <MySQL Basedirectory> |
# ./bin/mysqld --defaults-file=<MySQL Databasedirectory>/my.cnf \ --basedir=<MySQL Basedirectory> \ --datadir=<MySQL Databasedirectory> \ --user=mysql \ --pid-file=<MySQL Databasedirectory>/mysqld.pid & |
Configure the admin password for the adminuser.
# <MySQL Basedirectory>/bin/mysqladmin \ -S /tmp/<Logical host>.sock password 'admin password' |
Copy the MySQL configuration file to your private place.
# cp /opt/SUNWscmys/util/mysql_config /my-place |
Create a faultmonitor-user and a test-database for the MySQL instance.
# cd my-place |
Edit the mysql_config file and follow the comments within that file:
# # Copyright 2006 Sun Microsystems, Inc. All rights reserved. # Use is subject to license terms. # #ident "@(#)mysql_config.ksh 1.3 06/03/08 SMI" # This file will be sourced in by mysql_register and the parameters # listed below will be used. # # Where is mysql installed (BASEDIR) MYSQL_BASE= # Mysql admin-user for localhost (Default is root) MYSQL_USER= # Password for mysql admin user MYSQL_PASSWD= # Configured logicalhost MYSQL_HOST= # Specify a username for a faultmonitor user FMUSER= # Pick a password for that faultmonitor user FMPASS= # Socket name for mysqld ( Should be /tmp/<logical-host>.sock ) MYSQL_SOCK= # FOR SC3.1 ONLY, Specify the physical hostname for the # physical NIC that this logicalhostname belongs to for every node in the # cluster this Resourcegroup can located on. # IE: The logicalhost lh1 belongs to hme1 for physical-node phys-1 and # hme3 for physical-node phys-2. The hostname for hme1 is phys-1-hme0 and # for hme3 on phys-2 it is phys-2-hme3. # IE: MYSQL_NIC_HOSTNAME="phys-1-hme0 phys-2-hme3" MYSQL_NIC_HOSTNAME= |
The following is an example for a MySQL instance on SC3.2.
MYSQL_BASE=/global/mysql MYSQL_USER=root MYSQL_PASSWD=root MYSQL_HOST=hahostix1 FMUSER=fmuser FMPASS=fmuser MYSQL_SOCK=/tmp/hahostix1.sock MYSQL_NIC_HOSTNAME="zone1 zone2" |
After editing mysql_config, you must run the mysql_register script.
# /opt/SUNWscmys/util/mysql_register -f my-place/mysql_config |
Stop the MySQL Server instance manually.
# kill -TERM `cat <MySQL Databasedirectory>/mysqld.pid` |
Leave the zone and become superuser or assume a role that provides solaris.cluster.verb RBAC authorization in the node's global zone in the cluster that hosts MySQL.
Copy the MySQL configuration file to your private place.
# cp /opt/SUNWscmys/util/ha_mysql_config /my-place |
Create and register MySQL as a failover data service.
# cd my-place |
Edit the ha_mysql_config file and follow the comments within that file, as shown below:
# # Copyright 2006 Sun Microsystems, Inc. All rights reserved. # Use is subject to license terms. # #ident "@(#)ha_mysql_config.ksh 1.3 06/03/08 SMI" # This file will be sourced in by ha_mysql_register and the parameters # listed below will be used. # # These parameters can be customized in (key=value) form # # RS - name of the resource for the application # RG - name of the resource group containing RS # # To have the mysql agent local zone aware, 4 Variables are needed: # ZONE - the zone name where the Mysql Database should run in # Optional # ZONEBT - The resource name which controls the zone. # Optional # PROJECT - A project in the zone, that will be used for this service # specify it if you have an su - in the start stop or probe, # or to define the smf credentials. If the variable is not set, # it will be translated as :default for the sm and default # for the zsh component # Optional # ZUSER - A user in the the zone which is used for the smf method # credentials. Yur smf servic e will run under this user # Optional # # Mysql specific Variables # # BASEDIR - name of the Mysql bin directory # DATADIR - name of the Mysql Data directory # MYSQLUSER - name of the user Mysql should be started of # LH - name of the LogicalHostname SC resource # MYSQLHOST - name of the host in /etc/hosts # FMUSER - name of the Mysql fault monitor user # FMPASS - name of the Mysql fault monitor user password # LOGDIR - name of the directory mysqld should store it's logfile. # CHECK - should HA-MySQL check MyISAM index files before start YES/NO. # HAS_RS - name of the MySQL HAStoragePlus SC resource # |
The following is an example for a MySQL instance.
RS=mysql-res RG=mysql-rg BASEDIR=/global/mysql DATADIR=/global/mysql-data MYSQLUSER=mysql LH=hahostix1 MYSQLHOST=hahostix1 FMUSER=fmuser FMPASS=fmuser LOGDIR=/global/mysql-data/logs CHECK=YES HAS_RS=mysql-has-res ZONE= ZONE_BT= PROJECT= |
Register the MySQL resource.
# /opt/SUNWscmys/util/ha_mysql_register -f my-place/ha_mysql_config |
Enable each MySQL resource.
Repeat this step for each MySQL instance, if multiple instances were created.
# clresource status # clresource enable MySQL-resource |
Add an admin user for accessing the MySQL instance locally through the MySQL logical host IP name.
If you want to access the MySQL instance only through the socket (localhost), omit this step.
When bootstrapping MySQL, the mysql_install_db command creates two admin users, one belonging to localhost and one belonging to the node on which mysql_install_db was executed.
Add an admin user for every physical node in the cluster that will run this MySQL instance.
If the nodename and the hostname for the physical interface are different, use the hostname for the physical interface.
The following is an example for a MySQL instance on SC3.2.
# mysql -S /tmp/hahostix1.sock -uroot mysql> use mysql; mysql> mysql> GRANT ALL ON *.* TO 'root'@'zone1' IDENTIFIED BY 'rootpasswd'; mysql> mysql> GRANT ALL ON *.* TO 'root'@'zone2' IDENTIFIED BY 'rootpasswd'; mysql> exit; |
You must manually add Grant_priv to the admin users. See the MySQL administration documentation.
This procedure assumes that you installed the data service packages during your initial Sun Cluster installation.
If you did not install the Sun Cluster HA for MySQL packages as part of your initial Sun Cluster installation, go to Installing the Sun Cluster HA for MySQL Packages.
Log in to the zone on the node that hosts your MySQL resource group.
Become superuser or assume a role that provides solaris.cluster.verb RBAC authorization in the node's zone in the cluster that hosts MySQL.
Start the MySQL Server instance manually.
# cd <MySQL Basedirectory> |
# ./bin/mysqld --defaults-file=<MySQL Databasedirectory>/my.cnf \ --basedir=<MySQL Basedirectory> \ --datadir=<MySQL Databasedirectory> \ --user=mysql \ --pid-file=<MySQL Databasedirectory>/mysqld.pid & |
Configure the admin password for the adminuser.
# <MySQL Basedirectory>/bin/mysqladmin \ -S /tmp/<Logical host>.sock password 'admin password' |
Copy the MySQL configuration file to your private place.
# cp /opt/SUNWscmys/util/mysql_config /my-place |
Create a faultmonitor-user and a test-database for the MySQL instance.
# cd my-place |
Edit the mysql_config file and follow the comments within that file:
# # Copyright 2006 Sun Microsystems, Inc. All rights reserved. # Use is subject to license terms. # #ident "@(#)mysql_config.ksh 1.3 06/03/08 SMI" # This file will be sourced in by mysql_register and the parameters # listed below will be used. # # Where is mysql installed (BASEDIR) MYSQL_BASE= # Mysql admin-user for localhost (Default is root) MYSQL_USER= # Password for mysql admin user MYSQL_PASSWD= # Configured logicalhost MYSQL_HOST= # Specify a username for a faultmonitor user FMUSER= # Pick a password for that faultmonitor user FMPASS= # Socket name for mysqld ( Should be /tmp/<logical-host>.sock ) MYSQL_SOCK= # FOR SC3.1 ONLY, Specify the physical hostname for the # physical NIC that this logicalhostname belongs to for every node in the # cluster this Resourcegroup can located on. # IE: The logicalhost lh1 belongs to hme1 for physical-node phys-1 and # hme3 for physical-node phys-2. The hostname for hme1 is phys-1-hme0 and # for hme3 on phys-2 it is phys-2-hme3. # IE: MYSQL_NIC_HOSTNAME="phys-1-hme0 phys-2-hme3" MYSQL_NIC_HOSTNAME= |
The following is an example for a MySQL instance on SC3.2.
MYSQL_BASE=/global/mysql MYSQL_USER=root MYSQL_PASSWD=root MYSQL_HOST=hahostix1 FMUSER=fmuser FMPASS=fmuser MYSQL_SOCK=/tmp/hahostix1.sock MYSQL_NIC_HOSTNAME="zone1" |
After editing mysql_config, you must run the mysql_register script.
# /opt/SUNWscmys/util/mysql_register -f my-place/mysql_config |
Stop the MySQL Server instance manually.
# kill -TERM `cat <MySQL Databasedirectory>/mysqld.pid` |
Leave the zone and become superuser or assume a role that provides solaris.cluster.verb RBAC authorization in the node's global zone in the cluster that hosts MySQL.
Copy the MySQL configuration file to your private place.
# cp /opt/SUNWscmys/util/ha_mysql_config /my-place |
Create and register MySQL as a failover data service.
# cd my-place |
Edit the ha_mysql_config file and follow the comments within that file, as shown below:
# # Copyright 2006 Sun Microsystems, Inc. All rights reserved. # Use is subject to license terms. # #ident "@(#)ha_mysql_config.ksh 1.3 06/03/08 SMI" # This file will be sourced in by ha_mysql_register and the parameters # listed below will be used. # # These parameters can be customized in (key=value) form # # RS - name of the resource for the application # RG - name of the resource group containing RS # # To have the mysql agent local zone aware, 4 Variables are needed: # ZONE - the zone name where the Mysql Database should run in # Optional # ZONEBT - The resource name which controls the zone. # Optional # PROJECT - A project in the zone, that will be used for this service # specify it if you have an su - in the start stop or probe, # or to define the smf credentials. If the variable is not set, # it will be translated as :default for the sm and default # for the zsh component # Optional # ZUSER - A user in the the zone which is used for the smf method # credentials. Yur smf servic e will run under this user # Optional # # Mysql specific Variables # # BASEDIR - name of the Mysql bin directory # DATADIR - name of the Mysql Data directory # MYSQLUSER - name of the user Mysql should be started of # LH - name of the LogicalHostname SC resource # MYSQLHOST - name of the host in /etc/hosts # FMUSER - name of the Mysql fault monitor user # FMPASS - name of the Mysql fault monitor user password # LOGDIR - name of the directory mysqld should store it's logfile. # CHECK - should HA-MySQL check MyISAM index files before start YES/NO. # HAS_RS - name of the MySQL HAStoragePlus SC resource # |
The following is an example for a MySQL instance.
RS=mysql-res RG=mysql-rg BASEDIR=/global/mysql DATADIR=/global/mysql-data MYSQLUSER=mysql LH=hahostix1 MYSQLHOST=hahostix1 FMUSER=fmuser FMPASS=fmuser LOGDIR=/global/mysql-data/logs CHECK=YES HAS_RS=mysql-has-res ZONE=zone1 ZONE_BT=zone1-rs PROJECT=MySQL-project |
Register the MySQL resource.
# /opt/SUNWscmys/util/ha_mysql_register -f my-place/ha_mysql_config |
Enable each MySQL resource.
Repeat this step for each MySQL instance, if multiple instances were created.
# clresource status # clresource enable MySQL-resource |
Add an admin user for accessing the MySQL instance locally through the MySQL logical host IP name.
If you want to access the MySQL instance only through the socket (localhost), omit this step.
When bootstrapping MySQL, the mysql_install_db command creates two admin users: one belonging to localhost and one belonging to the node on which mysql_install_db was executed.
Add an admin user for every physical node in the cluster that will run this MySQL instance.
If the nodename and the hostname for the physical interface are different, use the hostname for the physical interface.
The following is an example for a MySQL instance on SC3.2.
# mysql -S /tmp/hahostix1.sock -uroot
mysql> use mysql;
mysql> GRANT ALL ON *.* TO 'root'@'zone1' IDENTIFIED BY 'rootpasswd';
mysql> exit;
You have to manually add the Grant_priv privilege to the admin users. See the MySQL administration documentation.
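As a hedged illustration only (check the MySQL documentation for your version), the WITH GRANT OPTION clause is one way to set Grant_priv for the admin user created above; zone1 and rootpasswd are the example values from this step:

mysql> GRANT ALL ON *.* TO 'root'@'zone1' IDENTIFIED BY 'rootpasswd' WITH GRANT OPTION;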
Perform this task to change parameters in the Sun Cluster HA for MySQL manifest and to validate the parameters in the failover zone. Parameters for the Sun Cluster HA for MySQL manifest are stored as properties of the SMF service. To modify parameters in the manifest, change the related properties in the SMF service, and then validate the parameter changes.
Become superuser or assume a role that provides solaris.cluster.modify and solaris.cluster.admin RBAC authorizations on the zone's console.
Change the Solaris Service Management Facility (SMF) properties for the Sun Cluster HA for MySQL manifest.
# svccfg -s svc:/application/sczone-agents:resource
For more information, see the svccfg(1M) man page.
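For example, a property of that SMF service can be inspected and changed with the standard svcprop and svccfg subcommands, followed by a refresh. The property group and property name parameters/STOP_TIMEOUT below are placeholders, not necessarily the names the agent uses; list the service properties first to find the real ones:

# svcprop svc:/application/sczone-agents:resource
# svccfg -s svc:/application/sczone-agents:resource \
    setprop parameters/STOP_TIMEOUT = count: 300
# svcadm refresh svc:/application/sczone-agents:resource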
Validate the parameter changes.
# /opt/SUNWscmys/bin/control_mysql validate resource
Messages for this command are stored in the /var/adm/messages file of the failover zone.
Disconnect from the failover zone's console.
Become superuser or assume a role that provides solaris.cluster.modify and solaris.cluster.admin RBAC authorizations.
Disable and remove the resource that is used by the Sun Cluster HA for MySQL data service.
# clresource disable resource
# clresource delete resource
Log in as superuser to the failover zone's console.
Unregister Sun Cluster HA for MySQL from the Solaris Service Management Facility (SMF) service.
# /opt/SUNWscmys/util/ha_mysql_smf_remove -f filename |
The -f filename option specifies the name of the configuration file that you used to register Sun Cluster HA for MySQL with the SMF service.
If you no longer have the configuration file that you used to register Sun Cluster HA for MySQL with the SMF service, create a replacement configuration file:
Make a copy of the default file, /opt/SUNWscmys/util/ha_mysql_config.
Set the ZONE and RS parameters with the values that are used by the data service.
Run the ha_mysql_smf_remove command and use the -f option to specify this configuration file.
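A minimal sketch of creating such a replacement file, assuming the hypothetical file name /my-place/ha_mysql_config.remove, the zone zone1, and the resource mysql-res from the earlier examples:

# cp /opt/SUNWscmys/util/ha_mysql_config /my-place/ha_mysql_config.remove
# vi /my-place/ha_mysql_config.remove          (set ZONE=zone1 and RS=mysql-res)
# /opt/SUNWscmys/util/ha_mysql_smf_remove -f /my-place/ha_mysql_config.remove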
Disconnect from the failover zone's console.
This section contains the procedure you need to verify that you installed and configured your data service correctly.
Become superuser on one of the nodes in the cluster that will host MySQL.
Ensure that all the MySQL resources are online with the cluster status command.
# cluster status |
For each MySQL resource that is not online, use the clresource command as follows.
# clresource enable MySQL-resource
Run the clresourcegroup command to switch the MySQL resource group to another cluster node, such as node2.
# clresourcegroup switch -n node2 MySQL-failover-resource-group
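Putting the verification together, the following is a minimal sketch that assumes the hypothetical names mysql-failover-rg, node1, and node2:

# clresourcegroup switch -n node2 mysql-failover-rg   # move the group to node2
# clresource status                                   # the MySQL resources should report Online on node2
# clresourcegroup switch -n node1 mysql-failover-rg   # optionally switch back to node1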
This section describes the Sun Cluster HA for MySQL fault monitor's probing algorithm and functionality, and states the conditions, messages, and recovery actions that are associated with unsuccessful probing.
For conceptual information on fault monitors, see the Sun Cluster Concepts Guide.
The Sun Cluster HA for MySQL fault monitor uses the same resource properties as resource type SUNW.gds. Refer to the SUNW.gds(5) man page for a complete list of resource properties used.
The probe for MySQL performs the following steps.
Sleeps for Thorough_probe_interval.
Tries to connect to the MySQL instance with the mysqladmin command and the ping argument, using the defined fault monitor user (<fmuser>). If this fails, the probe restarts the MySQL resource. A hand-run version of this check is sketched after this list.
Every 300 seconds, the probe also checks the following:
If the MySQL instance is in a slave configuration, the probe checks whether the instance is connected to its master. If the slave is not connected, the probe writes an error message to syslog.
Verifies that it can list all databases and tables, but not their contents. If the probe receives any errors, it writes an error message to syslog.
Conducts a functional test on the defined test database: create table, insert into table, update table, delete from table, and drop table. If any of these operations fails, the probe restarts the MySQL resource.
If all MySQL processes die, the Process Monitor Facility (PMF) interrupts the probe to immediately restart the MySQL resource.
If the MySQL resource is repeatedly restarted and subsequently exhausts Retry_count within Retry_interval, a failover of the resource group onto another node is initiated, provided Failover_enabled is set to TRUE.
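To make the connection check and the functional test concrete, the following is a minimal sketch that you can run by hand. The socket /tmp/hahostix1.sock and the fault monitor user fmuser are the example values from this chapter, sc3_test_database is the test database used by the agent, and the table name probe_t is purely hypothetical:

# mysqladmin -S /tmp/hahostix1.sock -ufmuser -pfmuser ping      # connection check (mysqladmin ping)
# mysql -S /tmp/hahostix1.sock -ufmuser -pfmuser
mysql> USE sc3_test_database;
mysql> CREATE TABLE probe_t (i INT);
mysql> INSERT INTO probe_t VALUES (1);
mysql> UPDATE probe_t SET i = 2;
mysql> DELETE FROM probe_t;
mysql> DROP TABLE probe_t;
mysql> exit;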
Sun Cluster HA for MySQL has a file named config that enables you to activate debugging for MySQL resources. This file is in the /opt/SUNWscmys/etc directory.
Determine whether you are in a global zone or in a failover zone configuration.
If your operating system is Solaris 10 and your MySQL resource is dependent on a Solaris Container boot component resource, you are in a failover zone configuration. In any other case, especially on a Solaris 9 system, you are in a global zone configuration.
Determine whether debugging for Sun Cluster HA for MySQL is active.
# grep daemon /etc/syslog.conf
*.err;kern.debug;daemon.notice;mail.crit        /var/adm/messages
*.alert;kern.err;daemon.err                     operator
#
If debugging is inactive, daemon.notice is set in the file /etc/syslog.conf of the appropriate zone.
If debugging is inactive, edit the /etc/syslog.conf file in the appropriate zone to change daemon.notice to daemon.debug.
Confirm that debugging for Sun Cluster HA for MySQL is active.
If debugging is active, daemon.debug is set in the file /etc/syslog.conf.
# grep daemon /etc/syslog.conf
*.err;kern.debug;daemon.debug;mail.crit         /var/adm/messages
*.alert;kern.err;daemon.err                     operator
#
Restart the syslogd daemon in the appropriate zone.
If your operating system is Solaris 9, type:
# pkill -1 syslogd |
If your operating system is Solaris 10, type:
# svcadm refresh svc:/system/system-log:default |
Edit the /opt/SUNWscmys/etc/config file to change the DEBUG= variable according to one of the following examples:
DEBUG=ALL
DEBUG=<resource name>
DEBUG=<resource name>,<resource name>, ...
# cat /opt/SUNWscmys/etc/config
#
# Copyright 2006 Sun Microsystems, Inc. All rights reserved.
# Use is subject to license terms.
#
# Usage:
#       DEBUG=<RESOURCE_NAME> or ALL
#
DEBUG=ALL
#
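As a summary, the following is a minimal sketch of activating debugging for a single resource on Solaris 10, assuming the hypothetical resource name mysql-res and that the commands are run in the appropriate zone:

# vi /etc/syslog.conf                             (change daemon.notice to daemon.debug)
# svcadm refresh svc:/system/system-log:default
# vi /opt/SUNWscmys/etc/config                    (set DEBUG=mysql-res)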
To deactivate debugging, repeat steps 1 to 6, changing daemon.debug to daemon.notice and changing the DEBUG variable to DEBUG=.
Use the information in this section to understand how to upgrade to SC3.2 when using Sun Cluster HA for MySQL.
This procedure will not describe how to upgrade to SC3.2. It includes only the steps to upgrade Sun Cluster HA for MySQL to SC3.2.
This procedure shows the steps to upgrade Sun Cluster HA for MySQL from SC3.0 to SC3.2.
Shut down the Sun Cluster HA for MySQL resource with clresource disable MySQL-resource.
# clresource disable MySQL-resource
Upgrade the nodes to SC3.2 according to Sun Cluster documentation.
Start the MySQL Server manually on SC3.2.
# cd <MySQL Basedirectory>
# ./bin/mysqld --defaults-file=<MySQL Databasedirectory>/my.cnf \
    --basedir=<MySQL Basedirectory> \
    --datadir=<MySQL Databasedirectory> \
    --user=mysql \
    --pid-file=<MySQL Databasedirectory>/mysqld.pid &
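For example, with the base directory /global/mysql and the database directory /global/mysql-data that are used in the examples in this chapter (substitute your own paths):

# cd /global/mysql
# ./bin/mysqld --defaults-file=/global/mysql-data/my.cnf \
    --basedir=/global/mysql \
    --datadir=/global/mysql-data \
    --user=mysql \
    --pid-file=/global/mysql-data/mysqld.pid &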
Access the MySQL instance from the local node with the socket option.
# <MySQL Basedirectory>/bin/mysql -S <MySQL Socket> -uroot -p<Adminpassword>
The following is an example for a MySQL instance.
# mysql -S /tmp/hahostix1.sock -uroot -proot
mysql>
Drop the Sun Cluster HA for MySQL test database sc3_test_database.
# mysql -S /tmp/hahostix1.sock -uroot -proot
mysql> DROP DATABASE sc3_test_database;
Query OK, 0 rows affected (0.03 sec)
Delete all entries in the db table of the mysql database that contain User='<MySQL Faultmonitor user>'.
# mysql -S /tmp/hahostix1.sock -uroot -proot
mysql> use mysql;
Database changed
mysql> DELETE FROM db WHERE User='fmuser';
Query OK, 1 row affected (0.03 sec)
Delete all entries in the user table of the mysql database that contain User='<MySQL Faultmonitor user>'.
# mysql -S /tmp/hahostix1.sock -uroot -proot
mysql> use mysql;
Database changed
mysql> DELETE FROM user WHERE User='fmuser';
Query OK, 1 row affected (0.03 sec)
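Because these rows are removed from the grant tables directly with DELETE, the running server does not pick up the change until the grant tables are reloaded. Flushing the privileges (a standard MySQL statement, not a step named in this procedure) takes care of that:

mysql> FLUSH PRIVILEGES;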
Add the fault monitor user and a test database to MySQL.
# cd /opt/SUNWscmys/util |
Copy mysql_config to myplace, then edit the mysql_config file and follow the comments within that file:
# Where is MySQL installed (BASEDIR)
MYSQL_BASE=
# MySQL admin-user for localhost (Should be root)
MYSQL_USER=
# Password for the MySQL admin user
MYSQL_PASSWD=
# Configured logical host
MYSQL_HOST=
# Specify a username for the fault monitor user
FMUSER=
# Pick a password for that fault monitor user
FMPASS=
# Socket name for mysqld (Should be /tmp/<Logical host>.sock)
MYSQL_SOCK=
# FOR SC3.1 ONLY: Specify the physical hostname of the physical NIC
# that this logical hostname belongs to, for every node in the cluster
# on which this resource group can be located.
# IE: The logical host lh1 belongs to hme1 on physical node phys-1 and
# to hme3 on physical node phys-2. The hostname for hme1 is phys-1-hme1
# and for hme3 on phys-2 it is phys-2-hme3.
# IE: MYSQL_NIC_HOSTNAME="phys-1-hme1 phys-2-hme3"
MYSQL_NIC_HOSTNAME=""
The following is an example for a MySQL instance on SC3.2.
MYSQL_BASE=/global/mysql
MYSQL_USER=root
MYSQL_PASSWD=root
MYSQL_HOST=hahostix1
FMUSER=fmuser
FMPASS=fmuser
MYSQL_SOCK=/tmp/hahostix1.sock
MYSQL_NIC_HOSTNAME="clusterix1 clusterix2"
After editing mysql_config, run the mysql_register script.
# ./mysql_register -f myplace/mysql_config |
Stop the MySQL Server manually.
# kill -TERM `cat <MySQL Databasedirectory>/mysqld.pid`
Start up the Sun Cluster HA for MySQL resource with clresource enable MySQL-resource.
# clresource enable MySQL-resource
Change the source addresses of the admin user for accessing the MySQL instance locally through the MySQL logical host.
If you want to access the MySQL instance only through the socket (localhost), omit this step.
If SC3.0U3 was used, delete the root user that belongs to the logical host and add an admin user that belongs to the physical host.
If the nodename and the hostname for the physical interface are different, use the hostname for the physical interface.
The following is an example for a MySQL instance on SC3.2.
# mysql -S /tmp/hahostix1.sock -uroot
mysql> use mysql;
mysql> DELETE FROM user WHERE User='root' AND Host='hahostix1';
mysql> GRANT ALL ON *.* TO 'root'@'clusterix1' IDENTIFIED BY 'rootpasswd';
mysql> GRANT ALL ON *.* TO 'root'@'clusterix2' IDENTIFIED BY 'rootpasswd';
mysql> exit;
You have to manually add the Grant_priv privilege to the admin users. See the MySQL administration documentation.
Use the information in this section to understand how to upgrade to MySQL 4.x.x when using Sun Cluster HA for MySQL.
This procedure does not describe how to upgrade to MySQL 4.x.x. It includes only the steps to upgrade Sun Cluster HA for MySQL to MySQL 4.x.x. These steps assume that the new MySQL binaries are installed in the same place. If the new binaries are installed in a new directory, you must reregister the MySQL resource with the new MySQL base directory.
Procedure to upgrade to MySQL 4.x.x from 3.23.54
Shut down Sun Cluster HA for MySQL with clresource disable MySQL-resource.
# clresource disable MySQL-resource
Install the new MySQL binaries by following Steps 3 and 4 in the section How to Install and Configure MySQL.
Start up Sun Cluster HA for MySQL with clresource enable MySQL-resource.
# clresource enable MySQL-resource
Shut down the Sun Cluster HA for MySQL fault monitor with clresource unmonitor MySQL-resource.
# clresource unmonitor MySQL-resource
Follow the MySQL documentation to upgrade the MySQL database.
Start up the Sun Cluster HA for MySQL fault monitor with clresource monitor MySQL-resource.
# clresource monitor MySQL-resource