MySQL Shell 8.0

7.4.4 Adding Instances to an InnoDB Cluster

You need a minimum of three instances in an InnoDB Cluster to make it tolerant to the failure of one instance. Adding further instances increases the tolerance to failure of an InnoDB Cluster.

From version 8.0.17, Group Replication implements compatibility policies which take into account the patch version of the instances. The Cluster.addInstance() operation detects incompatible versions and, in the event of an incompatibility, terminates with an error. See Checking the MySQL Version on Instances and Combining Different Member Versions in a Group.

If the instance already contains data, use the cluster.checkInstanceState() function first to verify the existing data does not prevent the instance from joining a cluster. See Checking Instance State.

Use the Cluster.addInstance(instance) function to add an instance to the cluster, where instance is the connection information for a configured instance; see Section 7.4.2, “Configuring Production Instances for InnoDB Cluster Usage”. For example:

mysql-js> cluster.addInstance('icadmin@ic-2:3306')
A new instance will be added to the InnoDB cluster. Depending on the amount of
data on the cluster this might take from a few seconds to several hours.
Please provide the password for 'icadmin@ic-2:3306': ********
Adding instance to the cluster ...
Validating instance at ic-2:3306...
This instance reports its own address as ic-2
Instance configuration is suitable.
The instance 'icadmin@ic-2:3306' was successfully added to the cluster.

The addInstance(instance[, options]) function also accepts an options dictionary; the options that control the recovery phase, such as waitRecovery, are described later in this section.

When a new instance is added to the cluster, the local address for this instance is automatically added to the group_replication_group_seeds variable on all online cluster instances in order to allow them to use the new instance to rejoin the group, if needed.

Note

The instances listed in group_replication_group_seeds are used according to the order in which they appear in the list. This ensures that user-specified settings are used first and preferred. See Section 7.5.2, “Customizing InnoDB Cluster Member Servers” for more information.
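
If you want to confirm that the seed list was updated after adding an instance, one possible check (a sketch, assuming a classic MySQL protocol session opened on an online member) is to query the variable directly:

mysql-js> session.runSql("SELECT @@global.group_replication_group_seeds")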

If you are using MySQL 8.0.17 or later you can choose how the instance recovers the transactions it requires to synchronize with the cluster. Only when the joining instance has recovered all of the transactions previously processed by the cluster can it then join as an online instance and begin processing transactions. For more information, see Section 7.4.6, “Using MySQL Clone with InnoDB Cluster”.
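
For example, rather than relying on the interactive prompt, a minimal sketch (the account and host are placeholders in the style of the earlier example) requests clone-based recovery explicitly by passing the recoveryMethod option, which also accepts incremental:

mysql-js> cluster.addInstance('icadmin@ic-3:3306', {recoveryMethod: 'clone'})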

Also in 8.0.17 and later, you can configure how Cluster.addInstance() behaves, letting recovery operations proceed in the background or monitoring different levels of progress in MySQL Shell.

Depending on which option you choose for recovering the instance from the cluster, you see different output in MySQL Shell. Suppose that you are adding the instance ic-2 to the cluster, and that ic-1 is the seed or donor.

To cancel the monitoring of the recovery phase, issue CONTROL+C. This stops the monitoring, but the recovery process continues in the background. The waitRecovery integer option can be used with the Cluster.addInstance() operation to control the behavior of the command regarding the recovery phase. The following values are accepted:

  • 0: do not wait and let the recovery process finish in the background

  • 1: wait for the recovery process to finish

  • 2: wait for the recovery process to finish, and show detailed, static progress information

  • 3: wait for the recovery process to finish, and show detailed, dynamic progress information (progress bars)

By default, if the standard output MySQL Shell is writing to is a terminal, the waitRecovery option defaults to 3. Otherwise, it defaults to 2. See Monitoring Recovery Operations.
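
For example, a sketch of adding an instance without monitoring recovery in MySQL Shell, so that the recovery process runs entirely in the background (the account and host are placeholders):

mysql-js> cluster.addInstance('icadmin@ic-3:3306', {waitRecovery: 0})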

To verify the instance has been added, use the cluster's status() function. For example, this is the status output of a sandbox cluster after adding a second instance:

mysql-js> cluster.status()
{
    "clusterName": "testCluster",
    "defaultReplicaSet": {
        "name": "default",
        "primary": "ic-1:3306",
        "ssl": "REQUIRED",
        "status": "OK_NO_TOLERANCE",
        "statusText": "Cluster is NOT tolerant to any failures.",
        "topology": {
            "ic-1:3306": {
                "address": "ic-1:3306",
                "mode": "R/W",
                "readReplicas": {},
                "role": "HA",
                "status": "ONLINE"
            },
            "ic-2:3306": {
                "address": "ic-2:3306",
                "mode": "R/O",
                "readReplicas": {},
                "role": "HA",
                "status": "ONLINE"
            }
        }
    },
    "groupInformationSourceMember": "mysql://icadmin@ic-1:3306"
}

How you proceed depends on whether the instance is local or remote to the instance MySQL Shell is running on, and whether the instance supports persisting configuration changes automatically, see Section 6.2.4, “Persisting Settings”. If the instance supports persisting configuration changes automatically, you do not need to persist the settings manually and can either add more instances or continue to the next step. If the instance does not support persisting configuration changes automatically, you have to configure the instance locally. See Configuring Instances with dba.configureLocalInstance(). This is essential to ensure that instances can rejoin the cluster after leaving it.
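
As a reminder of the syntax, a minimal sketch of persisting the settings for such an instance by running MySQL Shell locally on it (the connection details are placeholders) is:

mysql-js> dba.configureLocalInstance('icadmin@ic-2:3306')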

Tip

If the instance has super_read_only=ON then you might need to confirm that AdminAPI can set super_read_only=OFF. See Instance Configuration in Super Read-only Mode for more information.

Once you have your cluster deployed you can configure MySQL Router to provide high availability, see Section 6.10, “Using MySQL Router with AdminAPI, InnoDB Cluster, and InnoDB ReplicaSet”.
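
For example, a hedged sketch of bootstrapping MySQL Router against the cluster from the command line on the host where Router is installed (the account, host, and Router system user are placeholders):

$> mysqlrouter --bootstrap icadmin@ic-1:3306 --user=mysqlrouter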

Checking Instance State

The cluster.checkInstanceState() function can be used to verify that the existing data on an instance does not prevent it from joining a cluster. It works by comparing the instance's global transaction identifier (GTID) state with the GTIDs already processed by the cluster. For more information on GTIDs, see GTID Format and Storage. This check enables you to determine whether an instance which has already processed transactions can be added to the cluster.

The following demonstrates issuing this in a running MySQL Shell:

mysql-js> cluster.checkInstanceState('icadmin@ic-4:3306')

The output of this function can be one of the following:

  • OK new: the instance has not executed any GTID transactions, and therefore it cannot conflict with the GTIDs executed by the cluster

  • OK recoverable: the instance has executed GTIDs which do not conflict with the executed GTIDs of the cluster seed instances

  • ERROR diverged: the instance has executed GTIDs which diverge from the executed GTIDs of the cluster seed instances

  • ERROR lost_transactions: the instance has more executed GTIDs than the executed GTIDs of the cluster seed instances

Instances with an OK status can be added to the cluster because any data on the instance is consistent with the cluster. In other words, the instance being checked has not executed any transactions which conflict with the GTIDs executed by the cluster, and it can be recovered to the same state as the rest of the cluster instances.