
Oracle® Solaris Cluster Data Service for MySQL Cluster Guide


Updated: September 2015
 
 

How to Install MySQL Cluster Software

  1. Create a group and user on both nodes.
    phys-schost-1:/ # groupadd -g 200 dba
    phys-schost-2:/ # groupadd -g 200 dba
    
    phys-schost-1:/ # useradd -u 1000 -g dba -d /export/mysql -s /usr/bin/bash mysql 
    phys-schost-2:/ # useradd -u 1000 -g dba -d /export/mysql -s /usr/bin/bash mysql
  2. Install the tar file on both nodes.
    phys-schost-1:/ # mkdir /usr/local
    phys-schost-2:/ # mkdir /usr/local
    
    phys-schost-1:/ # cd /usr/local
    phys-schost-2:/ # cd /usr/local
    
    phys-schost-1:/usr/local # /usr/sfw/bin/gtar xzf /temp/mysql-7.0.7-solaris10-sparc.tar.gz
    phys-schost-2:/usr/local # /usr/sfw/bin/gtar xzf /temp/mysql-7.0.7-solaris10-sparc.tar.gz
    
    phys-schost-1:/usr/local # ln -s ./mysql-7.0.7-solaris10-sparc ./mysql
    phys-schost-2:/usr/local # ln -s ./mysql-7.0.7-solaris10-sparc ./mysql
  3. Set the ownership.
    phys-schost-1:/usr/local # chown -RL mysql:dba ./mysql
    phys-schost-2:/usr/local # chown -RL mysql:dba ./mysql

Setting Up the MySQL Cluster Control

This section contains the following example procedures:

How to Configure the Management Server on Both Nodes

  1. Create the configuration.
    1. On both nodes, create the data directory for the management server.
      phys-schost-1:/ # mkdir /mgm-data
      phys-schost-2:/ # mkdir /mgm-data
    2. On both nodes, copy the config.ini file from /temp/cluconfig to the mgm-data directory.
      phys-schost-1:/ # cp /temp/cluconfig/config.ini /mgm-data
      phys-schost-2:/ # cp /temp/cluconfig/config.ini /mgm-data
    3. Modify the config.ini file from /temp/cluconfig.

      Alternatively, copy the content from config.ini File for Both Nodes to Store in /mgm-data and overwrite the copied file.

      The configuration in the config.ini file for this example is shown in the following table:

      Server ID   Node Type         Global-Cluster Node to Run On   Private Net Alias
      1           Management node   phys-schost-1                   -
      2           Management node   phys-schost-2                   -
      3           Data node         phys-schost-1                   phys-schost-1-p
      4           Data node         phys-schost-2                   phys-schost-2-p
      7           SQL node          phys-schost-1                   -
      8           SQL node          phys-schost-2                   -
    4. Configure the data nodes to communicate over the private interconnect clprivnet addresses.

      Create aliases in the /etc/hosts table for the clprivnet addresses and use them in the config.ini file as the host names.

    5. Set Arbitration=WaitExternal and an appropriate value for ArbitrationTimeout in the config.ini file.
  2. On one node, set the heartbeat timeouts for Oracle Solaris Cluster software.
    phys-schost-1:/ # cluster set -p heartbeat_quantum=500 -p heartbeat_timeout=5000

    Note -  The heartbeat timeout must be half of the ArbitrationTimeout value in the config.ini file.
  3. Start the management server.
    phys-schost-1:/ # cd /mgm-data
    phys-schost-2:/ # cd /mgm-data
    
    phys-schost-1:/mgm-data # /usr/local/mysql/bin/ndb_mgmd \
    --configdir=/mgm-data -f /mgm-data/config.ini \
    --ndb-nodeid=1
    
    phys-schost-2:/mgm-data # /usr/local/mysql/bin/ndb_mgmd \
    --configdir=/mgm-data -f /mgm-data/config.ini \
    --ndb-nodeid=2
  4. Verify that the management server is running.

    Run the ndb_mgm show command on both nodes until the data nodes are connected to the management server.

    phys-schost-1:/mgm-data # /usr/local/mysql/bin/ndb_mgm \
    --ndb-connectstring=phys-schost-1-p,phys-schost-2-p -e show
    
    phys-schost-2:/mgm-data # /usr/local/mysql/bin/ndb_mgm \
    --ndb-connectstring=phys-schost-2-p,phys-schost-1-p -e show
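As a reference for step 1, a minimal config.ini matching the table above might look like the following sketch. The node IDs, host names, and Arbitration settings follow the steps above; the remaining values are illustrative assumptions, so treat the shipped config.ini File for Both Nodes to Store in /mgm-data as the authoritative source.

```ini
# Sketch of /mgm-data/config.ini -- node IDs and hosts match the table above;
# other values are illustrative assumptions.
[ndb_mgmd]
NodeId=1
HostName=phys-schost-1
DataDir=/mgm-data

[ndb_mgmd]
NodeId=2
HostName=phys-schost-2
DataDir=/mgm-data

[ndbd default]
NoOfReplicas=2
DataDir=/ndbd-data
# Required with Oracle Solaris Cluster: defer arbitration to the cluster.
Arbitration=WaitExternal
ArbitrationTimeout=10000   # twice the 5000 ms cluster heartbeat timeout

[ndbd]
NodeId=3
HostName=phys-schost-1-p   # clprivnet alias from /etc/hosts

[ndbd]
NodeId=4
HostName=phys-schost-2-p   # clprivnet alias from /etc/hosts

[mysqld]
NodeId=7
HostName=phys-schost-1

[mysqld]
NodeId=8
HostName=phys-schost-2
```

The phys-schost-1-p and phys-schost-2-p names are the /etc/hosts aliases for the clprivnet addresses created in step 1.4.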

How to Configure the Data Nodes on Both Nodes

  1. Create the configuration.
    1. On both nodes, create the data directory for the data nodes.
      phys-schost-1:/ # mkdir /ndbd-data
      phys-schost-2:/ # mkdir /ndbd-data
    2. On both nodes, copy the my.cnf_ndbd file from /temp/cluconfig to the ndbd-data directory as my.cnf.
      phys-schost-1:/ # cp /temp/cluconfig/my.cnf_ndbd /ndbd-data/my.cnf
      phys-schost-2:/ # cp /temp/cluconfig/my.cnf_ndbd /ndbd-data/my.cnf
    3. Modify the my.cnf_ndbd file from /temp/cluconfig.

      Alternatively, copy the content from my.cnf File for the Data Nodes to Store in /ndbd-data and overwrite the copied file.

  2. Start the data nodes and verify the settings.
    phys-schost-1:/ # cd /ndbd-data
    phys-schost-2:/ # cd /ndbd-data
    
    phys-schost-1:/ndbd-data # /usr/local/mysql/bin/ndbd \
    --defaults-file=/ndbd-data/my.cnf \
    --ndb-connectstring=phys-schost-1-p:1186,phys-schost-2-p:1186 --ndb-nodeid=3
    
    phys-schost-2:/ndbd-data # /usr/local/mysql/bin/ndbd \
    --defaults-file=/ndbd-data/my.cnf \
    --ndb-connectstring=phys-schost-1-p:1186,phys-schost-2-p:1186 --ndb-nodeid=4
  3. On one node, check the data nodes.
    phys-schost-1:/ndbd-data # /usr/local/mysql/bin/ndb_mgm \
    --ndb-connectstring=phys-schost-1-p,phys-schost-2-p -e show

    Note -  Repeat the show command until both ndbd processes are fully up and running.

    Example output:

    phys-schost-1:/ # /usr/local/mysql/bin/ndb_mgm 
    --ndb-connectstring=phys-schost-1-p,phys-schost-2-p -e show
    Connected to Management Server at: phys-schost-1-p:1186
    Cluster Configuration
    ---------------------
    [ndbd(NDB)]   2 node(s)
    id=3  @172.16.4.2 (mysql-5.1.35 ndb-7.0.7, Nodegroup: 0, Master)
    id=4  @172.16.4.1 (mysql-5.1.35 ndb-7.0.7, Nodegroup: 0)
    
    [ndb_mgmd(MGM)] 2 node(s)
    id=1  @172.16.4.2 (mysql-5.1.35 ndb-7.0.7)
    id=2  @172.16.4.1 (mysql-5.1.35 ndb-7.0.7)
    
    [mysqld(API)]  2 node(s)
    id=7 (not connected, accepting connect from phys-schost-1)
    id=8 (not connected, accepting connect from phys-schost-2)
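The long --ndb-connectstring arguments used throughout these procedures all follow one convention: each management host's private-net alias (the host name plus a -p suffix) with port 1186. As a small sketch, such strings can be generated instead of typed by hand; the make_connstring helper below is hypothetical and not part of MySQL Cluster or Oracle Solaris Cluster.

```shell
#!/bin/sh
# Hypothetical helper: build an NDB connect string from a port and a list
# of node names, assuming the "-p" private-net alias convention used above.
make_connstring() {
    port=$1; shift
    out=""
    for node in "$@"; do
        if [ -z "$out" ]; then
            out="${node}-p:${port}"
        else
            out="${out},${node}-p:${port}"
        fi
    done
    printf '%s\n' "$out"
}

# Prints: phys-schost-1-p:1186,phys-schost-2-p:1186
make_connstring 1186 phys-schost-1 phys-schost-2
```

For example, the ndbd start command could then pass --ndb-connectstring="$(make_connstring 1186 phys-schost-1 phys-schost-2)".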

How to Initialize the MySQL Cluster Server

Create the directory structure and configuration by performing the following steps on both nodes.

  1. Create the directory structure.
    phys-schost-1:/ # mkdir -p /mysql-data/logs /mysql-data/innodb
    phys-schost-2:/ # mkdir -p /mysql-data/logs /mysql-data/innodb
  2. Initialize the MySQL Cluster database.
    phys-schost-1:/ # cd /usr/local/mysql
    phys-schost-1:/usr/local/mysql # ./scripts/mysql_install_db --datadir=/mysql-data
    
    phys-schost-2:/ # cd /usr/local/mysql
    phys-schost-2:/usr/local/mysql # ./scripts/mysql_install_db --datadir=/mysql-data
  3. Prepare the my.cnf-serv file in /temp/cluconfig.

    Use the example values in the my.cnf File for the First SQL Node phys-schost-1 to Store in /mysql-data for phys-schost-1, and my.cnf File for the Second SQL Node phys-schost-2 to Store in /mysql-data for phys-schost-2.

    phys-schost-1:/ # cp /temp/cluconfig/my.cnf-serv /mysql-data/my.cnf
    phys-schost-1:/ # chmod 644 /mysql-data/my.cnf
    
    phys-schost-2:/ # cp /temp/cluconfig/my.cnf-serv /mysql-data/my.cnf
    phys-schost-2:/ # chmod 644 /mysql-data/my.cnf
  4. Adjust the parameters in the copied /mysql-data/my.cnf file as appropriate for each node.
  5. Change the ownership of the data directory.
    phys-schost-1:/ # chown -R mysql:dba /mysql-data
    phys-schost-2:/ # chown -R mysql:dba /mysql-data
  6. Start the MySQL Cluster server for the first time and specify the grants.
    1. Create a start script on both nodes with the appropriate values.
      phys-schost-1:/ # cat >/temp/cluconfig/first <<EOF
      /usr/local/mysql/bin/mysqld --defaults-file=/mysql-data/my.cnf \
      --basedir=/usr/local/mysql --datadir=/mysql-data \
      --pid-file=/mysql-data/mysqld.pid \
      --user=mysql >> /mysql-data/logs/phys-schost-1.log 2>&1 &
      EOF
      
      phys-schost-2:/ # cat >/temp/cluconfig/first <<EOF
      /usr/local/mysql/bin/mysqld --defaults-file=/mysql-data/my.cnf \
      --basedir=/usr/local/mysql --datadir=/mysql-data \
      --pid-file=/mysql-data/mysqld.pid \
      --user=mysql >> /mysql-data/logs/phys-schost-2.log 2>&1 &
      EOF
    2. Execute the start script /temp/cluconfig/first on both nodes.
      phys-schost-1:/ # . /temp/cluconfig/first
      phys-schost-2:/ # . /temp/cluconfig/first
  7. Wait 60 seconds and verify that the MySQL Cluster servers connect to the ndb data nodes.
    1. On one node, issue the following command.
      phys-schost-2:/ # /usr/local/mysql/bin/ndb_mgm \
      --ndb-connectstring=phys-schost-1,phys-schost-2 -e show
    2. Set the administrative password on both nodes.
      phys-schost-1:/ # /usr/local/mysql/bin/mysqladmin -S /tmp/phys-schost-1.sock \
      -uroot password 'root'
      
      phys-schost-2:/ # /usr/local/mysql/bin/mysqladmin -S /tmp/phys-schost-2.sock \
      -uroot password 'root'
    3. Define the administrative user.
      phys-schost-1:/ # /usr/local/mysql/bin/mysql -S /tmp/phys-schost-1.sock -uroot -proot
      mysql> use mysql;
      mysql> grant all on *.* to 'root'@'phys-schost-1' identified by 'root';
      mysql> UPDATE user SET Grant_priv='Y' WHERE User='root' AND Host='phys-schost-1';
      mysql> exit
      
      phys-schost-2:/ # /usr/local/mysql/bin/mysql -S /tmp/phys-schost-2.sock -uroot -proot
      mysql> use mysql;
      mysql> grant all on *.* to 'root'@'phys-schost-2' identified by 'root';
      mysql> UPDATE user SET Grant_priv='Y' WHERE User='root' AND Host='phys-schost-2';
      mysql> exit
  8. Prepare the MySQL Cluster server for Oracle Solaris Cluster usage.
    1. Prepare a mysql_config file under /temp/cluconfig.

      Use the content from mysql_config File for the First SQL Node phys-schost-1 to Store in /temp/cluconfig on phys-schost-1, and the content from mysql_config File for the Second SQL Node phys-schost-2 to Store in /temp/cluconfig on phys-schost-2.

    2. Set the MYSQL_NIC_HOSTNAME values.

      On phys-schost-1:

      MYSQL_NIC_HOSTNAME="phys-schost-1"

      On phys-schost-2:

      MYSQL_NIC_HOSTNAME="phys-schost-2"
    3. On both nodes, execute the following commands.
      phys-schost-1:/ # ksh /opt/SUNWscmys/util/mysql_register \
      -f /temp/cluconfig/mysql_config
      
      phys-schost-2:/ # ksh /opt/SUNWscmys/util/mysql_register \
      -f /temp/cluconfig/mysql_config
    4. Shut down the MySQL Cluster server on both nodes.
      phys-schost-1:/ # pkill -f mysqld
      phys-schost-2:/ # pkill -f mysqld
    5. From the global zone of one node, shut down the MySQL Cluster components.
      phys-schost-1:/ # /usr/local/mysql/bin/ndb_mgm \
      --ndb-connectstring=phys-schost-1-p,phys-schost-2-p -e shutdown
    6. Verify the shutdown on both nodes.
      phys-schost-1:/ # /usr/local/mysql/bin/ndb_mgm \
      --ndb-connectstring=phys-schost-1-p,phys-schost-2-p -e show
      phys-schost-2:/ # /usr/local/mysql/bin/ndb_mgm \
      --ndb-connectstring=phys-schost-2-p,phys-schost-1-p -e show
    7. Shut down any daemons that are still running, where id is the node ID that the show command reports.
      phys-schost-1:/ # /usr/local/mysql/bin/ndb_mgm \
      --ndb-connectstring=phys-schost-1-p,phys-schost-2-p -e "id stop"
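For reference, the my.cnf file for the SQL nodes that step 3 of How to Initialize the MySQL Cluster Server copies from my.cnf-serv would, as a rough sketch for phys-schost-1, contain at least the following. The exact option values are illustrative assumptions; the shipped my.cnf File for the First SQL Node phys-schost-1 to Store in /mysql-data remains the authoritative source.

```ini
# Sketch of /mysql-data/my.cnf on phys-schost-1 -- illustrative values only
[mysqld]
ndbcluster                 # enable the NDB storage engine
ndb-connectstring=phys-schost-1-p:1186,phys-schost-2-p:1186
datadir=/mysql-data
socket=/tmp/phys-schost-1.sock

[mysql_cluster]
ndb-connectstring=phys-schost-1-p:1186,phys-schost-2-p:1186
```

On phys-schost-2, the socket path would instead be /tmp/phys-schost-2.sock, and the connect string would list phys-schost-2-p first.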