Oracle® Solaris Cluster Data Service for MySQL Cluster Guide

Updated: September 2015

Planning the HA for MySQL Cluster Installation and Configuration

This section contains the information that you need to plan your HA for MySQL Cluster installation and configuration.

MySQL Cluster and Oracle Solaris Containers

Oracle Solaris Cluster HA for MySQL Cluster is supported in the following configurations:

  • Global zones.

  • Zone clusters – A zone cluster is almost a complete virtual cluster. It offers complete isolation between different zone clusters, so a user in zone cluster 1 cannot see anything in zone cluster 2. However, the administrator of the global cluster has access to both zone clusters.

Oracle Solaris Cluster HA for MySQL Cluster Components

HA for MySQL Cluster is a combination of the following components.

Table 2  HA for MySQL Cluster Components

  • ndb management server – MySQL Cluster requires a daemon called the ndb management server to start, stop, and configure a MySQL Cluster. The management server is also required for probing the ndbd daemons.

  • ndbd daemon – The ndbd daemon implements the MySQL Cluster storage engine, ndb.

  • ndbd shutdown controller – The ndbd shutdown controller brings the MySQL Cluster to a state that enables the ndbd daemons to be shut down in any order.

  • MySQL Cluster server – A normal MySQL server that provides the SQL interface for the MySQL Cluster tables.

Configuration Restrictions

This section describes configuration restrictions that apply only to HA for MySQL Cluster.


Caution - Your data service configuration might not be supported if you do not observe these restrictions.


  • Location for the data directories – Each instance of the management server or the ndb daemon must have its own data directory. The ndb daemon instances of one MySQL Cluster that are located on the same node can share that data directory with the management server. The data directory cannot be on a global file system that is shared by all management server or ndb daemon instances of the MySQL Cluster across the nodes.

  • Communication between the ndbd daemons – The MySQL Cluster must be configured so that the ndbd daemons communicate over the clprivnet interfaces of the Oracle Solaris Cluster software. Provide IP aliases for the clprivnet addresses in the /etc/inet/hosts file and configure the ndb nodes with these aliases in the MySQL Cluster configuration file config.ini, as shown in the example after this list. In a non-global zone configuration, you must create the clprivnet addresses for the non-global zones.

  • MySQL Cluster arbitration – MySQL Cluster arbitration must be disabled when MySQL Cluster is configured on Oracle Solaris Cluster nodes. Set the following parameters in the MySQL Cluster config.ini file:

    Arbitration=WaitExternal 
    ArbitrationTimeout=2-times-heartbeat-timeout

    The heartbeat timeout is displayed in the output of the following command:

    # cluster show

  • MySQL Cluster version – The minimum supported MySQL Cluster version is 7.0.7. Older versions do not support disabling MySQL Cluster arbitration.
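
The following fragments sketch how these restrictions might look in practice. The host names ndb1-priv and ndb2-priv, the addresses, node IDs, data directories, and the ArbitrationTimeout value are examples only, not values from your configuration; derive ArbitrationTimeout from twice the heartbeat timeout that the cluster show command reports.

    # /etc/inet/hosts - IP aliases for the clprivnet addresses (example addresses)
    172.16.4.1    ndb1-priv
    172.16.4.2    ndb2-priv

    # config.ini - example fragment for one management server and two data nodes
    [ndbd default]
    NoOfReplicas=2
    Arbitration=WaitExternal
    ArbitrationTimeout=20000

    [ndb_mgmd]
    NodeId=1
    HostName=ndb1-priv
    DataDir=/mgm-data

    [ndbd]
    NodeId=2
    HostName=ndb1-priv
    DataDir=/ndb-data-1

    [ndbd]
    NodeId=3
    HostName=ndb2-priv
    DataDir=/ndb-data-2

    [mysqld]
    HostName=ndb1-priv

    [mysqld]
    HostName=ndb2-priv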

Configuration Requirements

  • Resource group topology – If you create more than one ndb daemon resource for the same cluster, you must place all ndb daemon resources in the same resource group, and the ndb shutdown controller must depend on all of them, as shown in the sketch after this list.

  • Non-global zones – In the underlying non-global zones of zone clusters, you must provide addresses on the private interconnect. Your address range for the private interconnect must have ample spare addresses.
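
A minimal sketch of the shutdown controller dependency, assuming the two ndb daemon resources are named ndbd-rs-1 and ndbd-rs-2 and the shutdown controller resource is named ndbd-shutdown-rs (all names are examples; the resources themselves are created when you register and configure HA for MySQL Cluster):

    # clresource set -p Resource_dependencies=ndbd-rs-1,ndbd-rs-2 ndbd-shutdown-rs
    # clresource show -p Resource_dependencies ndbd-shutdown-rs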

Dependencies Between HA for MySQL Cluster Components

The dependencies between the HA for MySQL Cluster components are described in the following table.

Table 3  Dependencies Between HA for MySQL Cluster Components

  • MySQL Cluster management server resource in a global cluster or zone cluster – SUNW.SharedAddress is required only if the MySQL Cluster management server is to be load balanced in a scalable configuration.

  • MySQL Cluster ndbd daemon resource in a global cluster or zone cluster – The MySQL Cluster management server resource is required.

  • MySQL Cluster shutdown controller resource in a global cluster or zone cluster – The MySQL Cluster ndbd daemon resource is required.

  • MySQL Cluster server resource in a global cluster or zone cluster – The MySQL Cluster shutdown controller resource is required. In addition, SUNW.SharedAddress is required only if the MySQL Cluster server is to be load balanced in a scalable configuration.

For any other possible dependency of a MySQL Cluster server resource, such as on SUNW.HAStoragePlus, a failover container resource, or SUNW.LogicalHostname, see the MySQL Cluster documentation for more details.

You set these dependencies when you register and configure HA for MySQL Cluster. For more information, see Registering and Configuring HA for MySQL Cluster.

If more elaborate dependencies are required, see the r_properties(5) and rg_properties(5) man pages for further dependency and affinity settings. A short example of a resource group affinity follows.
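
For example, a strong positive resource group affinity can keep the MySQL Cluster server resource group on a node where the ndbd resource group is online. This is a sketch only; the group names ndbd-rg and mysql-rg are examples, and whether such an affinity is appropriate depends on your topology:

    # clresourcegroup set -p RG_affinities=++ndbd-rg mysql-rg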

Configuration Guidelines

  • Communication path for all MySQL Cluster resources – Use the IP aliases for the clprivnet addresses as host names for the ndb management server, the ndbd daemons, and the MySQL Cluster server. This practice ensures that all communication between the MySQL Cluster processes is restricted to the private interconnect.

  • Resource group topology – Create separate resource groups for the management server resource, the ndb daemon including the ndbd shutdown controller, and the MySQL Cluster server. This setup greatly decouples administrative restart actions of the management server, the ndb daemons, and the MySQL Cluster server. You can take the ndbd resource group offline if you want to shut down your ndb storage engine.

  • Shutdown and restart procedures – The ndb daemons are grouped in node groups whose members replicate data among each other. Every configured node group must have at least one member. The data of a MySQL Cluster with an empty node group is incomplete and can become inconsistent. To avoid such data inconsistency, all the data nodes (ndb daemons) panic if a node group becomes empty. To prevent this behavior, restart the data nodes without loading data by using the shutdown controller's stop algorithm. After this restart, you can perform an unordered shutdown of the ndb daemons. Note the following statements:

    • You cannot perform a normal shutdown of the ndb daemons one by one. Therefore, restart the ndb daemons without loading data before you perform a shutdown one by one.

    • Upon a stop of the shutdown controller, the data of the MySQL Cluster is unavailable unless the stop action of the shutdown controller is suspended.

    • If the shutdown controller and the ndb daemons are in one resource group, the easiest way to shut down is to take this resource group offline, as shown in the example after this list. Disabling all the data nodes on their own without disabling the shutdown controller leads to an abnormal shutdown of half of the nodes.

    • A rolling restart of the data nodes is possible either by disabling and re-enabling the data node resources one by one, or by shutting down a data node with MySQL Cluster methods. In the latter case, Oracle Solaris Cluster software detects the absence of the process tree and restarts it. You then have to tolerate the error messages about the vanished process tree.
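
The following commands sketch these procedures, assuming that the ndbd daemon resources and the shutdown controller are together in a resource group named ndbd-rg and that one of the data node resources is named ndbd-rs-1 (all names are examples). To shut down the ndb storage engine in an orderly way, take the whole resource group offline:

    # clresourcegroup offline ndbd-rg

To perform a rolling restart of a single data node, disable and then re-enable its resource:

    # clresource disable ndbd-rs-1
    # clresource enable ndbd-rs-1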