Oracle Solaris Cluster Data Service for MySQL Cluster Guide

How to Configure the Management Server on Both Nodes

  1. In the global zone of one node, set the heartbeat timeouts for Oracle Solaris Cluster.
    phys-schost-1:/ # cluster set -p heartbeat_quantum=500 -p heartbeat_timeout=5000

    Note - The heartbeat timeout must be half of the ArbitrationTimeout value in the config.ini file.
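
    For example, the heartbeat_timeout=5000 value set above is in milliseconds, so the corresponding ArbitrationTimeout in the config.ini file would be 10000. The fragment below only illustrates this 1:2 relationship; it is not taken verbatim from the appendix file:

      [ndbd default]
      ArbitrationTimeout=10000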


  2. Define the addresses for the private interconnect for the non-global zones.
    phys-schost-1:/ # scconf -a -P node=phys-schost-1:zone1,zprivatehostname=zone_1_p \
    > -P node=phys-schost-2:zone2,zprivatehostname=zone_2_p
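
    If name resolution for the new private host names is already available inside the zones, you can optionally verify them by pinging the peer zone's private host name. The prompts below assume the zones from this deployment example:

    zone1:/ # ping zone_2_p
    zone2:/ # ping zone_1_p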
  3. Create the configuration.
    1. In both zones, create the data directory for the management server.
      phys-schost-1:/ # zlogin zone1
      zone1:/ # mkdir /mgm-data
      phys-schost-2:/ # zlogin zone2
      zone2:/ # mkdir /mgm-data
    2. Copy the config.ini file from /temp/cluconfig into the /mgm-data directory.
      zone1:/ # cp /temp/cluconfig/config.ini /mgm-data
      zone2:/ # cp /temp/cluconfig/config.ini /mgm-data
    3. Modify the copied config.ini file in the /mgm-data directory.

      Alternatively, copy the content from the config.ini File for Both Nodes to Store in /mgm-data and overwrite the copied file.

      The configuration defined by the config.ini file in the appendix is as follows.


      Server ID  Node Type        Node to Run On       Private Network Alias
      ---------  ---------------  -------------------  ---------------------
      1          Management node  phys-schost-1:zone1  -
      2          Management node  phys-schost-2:zone2  -
      3          Data node        phys-schost-1:zone1  phys-schost-1-p
      4          Data node        phys-schost-2:zone2  phys-schost-2-p
      7          SQL node         phys-schost-1:zone1  -
      8          SQL node         phys-schost-2:zone2  -
    4. Configure the data nodes to communicate over the clprivnet addresses of the private interconnect, as illustrated in the excerpt after this list.

      Create aliases for the clprivnet addresses in the /etc/inet/hosts file and use these aliases as the host names in the config.ini file.

    5. Set Arbitration=WaitExternal and an appropriate value for ArbitrationTimeout in the config.ini file. As noted in step 1, the ArbitrationTimeout must be twice the Oracle Solaris Cluster heartbeat timeout.
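
    The following excerpt sketches how the node table and the two preceding substeps map onto the files involved. The clprivnet addresses and the management node host name shown here are assumptions for illustration only; the complete, authoritative file is shown in config.ini File for Both Nodes to Store in /mgm-data.

      # /etc/inet/hosts (in both zones) - example aliases for the clprivnet addresses
      172.16.4.1   phys-schost-1-p
      172.16.4.2   phys-schost-2-p

      # config.ini (excerpt)
      [ndbd default]
      Arbitration=WaitExternal
      ArbitrationTimeout=10000        # twice the Oracle Solaris Cluster heartbeat timeout

      [ndb_mgmd]
      NodeId=1
      HostName=zone_1_p               # assumed: private host name of the first management node
      DataDir=/mgm-data

      [ndbd]
      NodeId=3
      HostName=phys-schost-1-p        # clprivnet alias from the table above

      [mysqld]
      NodeId=7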
  4. Start the management server.

    Perform the following commands in the target zones.

    zone1:/ # cd /mgm-data
    zone2:/ # cd /mgm-data
    zone1:/mgm-data # /usr/local/mysql/bin/ndb_mgmd --configdir=/mgm-data \
    > -f /mgm-data/config.ini --ndb-nodeid=1
    zone2:/mgm-data # /usr/local/mysql/bin/ndb_mgmd --configdir=/mgm-data \
    > -f /mgm-data/config.ini --ndb-nodeid=2
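
    Optionally, inspect the cluster log that ndb_mgmd writes in its data directory to confirm a clean start. The file name follows the MySQL Cluster convention ndb_<nodeid>_cluster.log:

    zone1:/mgm-data # tail ndb_1_cluster.log
    zone2:/mgm-data # tail ndb_2_cluster.log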
  5. Verify that the management server is running.

    Run the ndb_mgm show command on both nodes until the data nodes are connected to the management server.

    zone1:/mgm-data # /usr/local/mysql/bin/ndb_mgm \
    > --ndb-connectstring=zone_1_p,phys-schost-2-p -e show
    zone2:/mgm-data # /usr/local/mysql/bin/ndb_mgm \
    > --ndb-connectstring=zone_2_p,phys-schost-1-p -e show
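
    The output resembles the following abbreviated, illustrative listing; the addresses and version strings are placeholders. Until the data nodes and SQL nodes are started in the following procedures, their lines show not connected, accepting connect from the configured hosts.

    Cluster Configuration
    ---------------------
    [ndbd(NDB)]     2 node(s)
    id=3 (not connected, accepting connect from phys-schost-1-p)
    id=4 (not connected, accepting connect from phys-schost-2-p)

    [ndb_mgmd(MGM)] 2 node(s)
    id=1    @172.16.4.1  (mysql-5.1.39 ndb-7.0.9)
    id=2    @172.16.4.2  (mysql-5.1.39 ndb-7.0.9)

    [mysqld(API)]   2 node(s)
    id=7 (not connected, accepting connect from any host)
    id=8 (not connected, accepting connect from any host)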