Oracle Solaris Cluster Data Service for MySQL Cluster Guide (Oracle Solaris Cluster 3.3 3/13)


How to Create the HA for MySQL Cluster Configuration With Scalable Services

  1. On one node, create the resource groups in the global zone.
    phys-schost-1:/ # clresourcegroup create access-rg
    phys-schost-1:/ # clressharedaddress create -g access-rg \
    > -n phys-schost-1:zone1,phys-schost-2:zone2 sa_host_1
    phys-schost-1:/ # clresourcegroup online -eM access-rg
    phys-schost-1:/ # clresourcegroup create -p maximum_primaries=2 -p desired_primaries=2 \
    > -n phys-schost-1:zone1,phys-schost-2:zone2 mgm-rg
    phys-schost-1:/ # clresourcegroup create -p maximum_primaries=2 -p desired_primaries=2 \
    > -n phys-schost-1:zone1,phys-schost-2:zone2 ndbd-rg
    phys-schost-1:/ # clresourcegroup create -p maximum_primaries=2 -p desired_primaries=2 \
    > -n phys-schost-1:zone1,phys-schost-2:zone2 mysql-rg
    phys-schost-1:/ # clresourcegroup set -p rg_affinities=++ndbd-rg mysql-rg

    Note - Setting the ++ affinity ensures that on a restart of a single node, the start order of the resources is maintained as set within the resource dependencies.
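    If you want to confirm the affinity setting afterward, you can display the resource group's properties. One possible check (shown without output) is:

```
phys-schost-1:/ # clresourcegroup show -p RG_affinities mysql-rg
```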


  2. In the non-global zone on both nodes, create a configuration directory for the parameter file.
    phys-schost-1:/ # zlogin zone1 mkdir /cluster-pfiles 
    phys-schost-2:/ # zlogin zone2 mkdir /cluster-pfiles 
  3. On one node in the global zone, register the SUNW.gds resource type.
    phys-schost-1:/ # clresourcetype register SUNW.gds
  4. Create the resource for the management daemon.
    1. Create a configuration file on both nodes in the global zone and in the non-global zone under /temp/cluconfig/mysql_ndb_mgmd_config.

      Use the content of mysql_ndb_mgmd_config File for the First Node phys-schost-1 for phys-schost-1 and mysql_ndb_mgmd_config File for the Second Node phys-schost-2 for phys-schost-2.

    2. Make sure that the ID parameter on each node reflects the ID in the config.ini file.

      ID=1 for zone1

      ID=2 for zone2

    3. Ensure that the connect string contains the zone names, with each zone's own management node listed first.
      • Value for zone1:

        CONNECT_STRING=zone1,zone2
      • Value for zone2:

        CONNECT_STRING=zone2,zone1
    4. Create the parameter file in the non-global zone on both nodes.
      zone1:/ # ksh /opt/SUNWscmys/ndb_mgmd/util/mysql_ndb_mgmd_register \
      > -f /temp/cluconfig/mysql_ndb_mgmd_config -p
      zone2:/ # ksh /opt/SUNWscmys/ndb_mgmd/util/mysql_ndb_mgmd_register \
      > -f /temp/cluconfig/mysql_ndb_mgmd_config -p

      Leave the non-global zone on both nodes. Then, from the global zone of one node, create the resource, bring the mgm-rg resource group online, and verify the configuration with the ndb_mgm management client.

      phys-schost-1:/ # ksh /opt/SUNWscmys/ndb_mgmd/util/mysql_ndb_mgmd_register \
      > -f /temp/cluconfig/mysql_ndb_mgmd_config 
      phys-schost-1:/ # clresourcegroup online -eM mgm-rg
      phys-schost-1:/ # /usr/local/mysql/bin/ndb_mgm \
      > --ndb-connectstring=phys-schost-1-p,phys-schost-2-p -e show
      phys-schost-1:/ # /usr/local/mysql/bin/ndb_mgm \
      > --ndb-connectstring=phys-schost-2-p,phys-schost-1-p -e show
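  The ordering rule from Step c, where each zone lists its own management node first, can be sketched as a small shell helper (hypothetical, illustration only; the register scripts above are what actually consume the parameter file):

```shell
# Hypothetical helper (illustration only): build the CONNECT_STRING value so
# that the zone's own management node is listed first, followed by its peer.
build_connect_string() {
  printf 'CONNECT_STRING=%s,%s\n' "$1" "$2"
}

build_connect_string zone1 zone2   # value for zone1
build_connect_string zone2 zone1   # value for zone2
```

  Listing the local management node first presumably lets each zone contact its own management server before falling back to its peer.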
  5. Create the resource for the ndbd daemon.
    1. Create a configuration file on both nodes in the global zone and in the non-global zone under /temp/cluconfig/mysql_ndbd_config.

      Use the content of mysql_ndbd_config File for the First Node phys-schost-1 for phys-schost-1 and mysql_ndbd_config File for the Second Node phys-schost-2 for phys-schost-2.

    2. Ensure that the ID parameter on each node reflects the ID in the config.ini file.

      ID=3 for zone1

      ID=4 for zone2

    3. Create the parameter file in the non-global zones on both nodes.
      zone1:/ # ksh /opt/SUNWscmys/ndbd/util/mysql_ndbd_register \
      > -f /temp/cluconfig/mysql_ndbd_config -p
      zone2:/ # ksh /opt/SUNWscmys/ndbd/util/mysql_ndbd_register \
      > -f /temp/cluconfig/mysql_ndbd_config -p

      Leave the non-global zone on both nodes.

    4. From the global zone of one node, create the resource and bring the ndbd-rg resource group online.
      phys-schost-1:/ # ksh /opt/SUNWscmys/ndbd/util/mysql_ndbd_register \
      > -f /temp/cluconfig/mysql_ndbd_config 
      phys-schost-1:/ # clresourcegroup online -eM ndbd-rg

    Note - Do not try to take the ndbd-rg resource offline until you create and enable the shutdown controller resource.
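    To cross-check that the ID= values in the parameter files match config.ini (Steps a and b above), a simple grep over the file is enough. The fragment below is hypothetical and only mirrors the IDs used in this example; the config.ini file shown in this appendix is authoritative:

```shell
# Hypothetical config.ini fragment mirroring the data-node IDs used in this
# example; the config.ini delivered with this appendix is authoritative.
cat > /tmp/config.ini <<'EOF'
[ndbd]
Id=3
HostName=zone1

[ndbd]
Id=4
HostName=zone2
EOF

# List the configured data-node IDs; each must match the ID= line in
# /temp/cluconfig/mysql_ndbd_config on the corresponding node.
grep '^Id=' /tmp/config.ini
```

    Running the same grep against the config.ini stored in /mgm-data lets you compare the IDs with the ID= line in each node's parameter file.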


  6. Create the resource for the shutdown controller.
    1. On one node, create a configuration file in the global zone under /temp/cluconfig/ndbd_shutdown_config.

      Use the content of ndbd_shutdown_config File for One Node.

    2. On one node, create the resource and bring the ndbd-rg resource group online.
      phys-schost-1:/ # ksh /opt/SUNWscmys/ndbd_shutdown/util/ndbd_shutdown_register \
      > -f /temp/cluconfig/ndbd_shutdown_config 
      phys-schost-1:/ # clresourcegroup online -e ndbd-rg

    Note - From this point on, never disable only the ndbd resource on all nodes at once. To shut down the ndbd daemons completely, either use the clresourcegroup offline ndbd-rg command or first disable the shutdown controller resource.

    To shut down an ndbd resource on one node only (performing a rolling restart), disable it with clresource disable -n phys-schost-1 ndbd-rs. In this case, re-enable the resource before you shut down the ndbd resource on another node.

    For a rolling restart, do not disable the shutdown controller resource. Doing so would lead to a restart of the ndbd without loading data, in which case your database would be unavailable.
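    As a sketch, a rolling restart of the data nodes under these rules might look as follows (illustrative sequence reusing the ndbd-rs resource name from above; wait for each data node to rejoin before continuing, and leave the shutdown controller enabled throughout):

```
phys-schost-1:/ # clresource disable -n phys-schost-1 ndbd-rs
phys-schost-1:/ # clresource enable -n phys-schost-1 ndbd-rs
phys-schost-1:/ # /usr/local/mysql/bin/ndb_mgm \
> --ndb-connectstring=phys-schost-1-p,phys-schost-2-p -e show
phys-schost-1:/ # clresource disable -n phys-schost-2 ndbd-rs
phys-schost-1:/ # clresource enable -n phys-schost-2 ndbd-rs
```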


  7. Create the resource for the MySQL server.
    1. On one node, create a configuration file under /temp/cluconfig/ha_mysql_config.

      Use the content of ha_mysql_config File for One Node.

    2. On one node, create the resource and bring the mysql-rg resource group online.
      phys-schost-1:/ # ksh /opt/SUNWscmys/util/ha_mysql_register \
      > -f /temp/cluconfig/ha_mysql_config 
      phys-schost-1:/ # clresourcegroup online -eM mysql-rg
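  With all resource groups online, the overall state can be verified. A possible final check, reusing the commands from the steps above, is:

```
phys-schost-1:/ # clresourcegroup status
phys-schost-1:/ # /usr/local/mysql/bin/ndb_mgm \
> --ndb-connectstring=phys-schost-1-p,phys-schost-2-p -e show
```

  In a healthy configuration, the show output should list the management, data, and SQL nodes as connected.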