MySQL Shell 8.0 (part of MySQL 8.0)
This section describes the AdminAPI operations which apply to instances. You can configure instances before using them with InnoDB Cluster, check the state of an instance, and so on.
Before creating a production deployment from server instances
you need to check that MySQL on each instance is correctly
configured. In addition to
dba.configureInstance(), which checks the
configuration as part of configuring an instance, you can use
the dba.checkInstanceConfiguration(instance)
function. This ensures that the instance
satisfies the requirements in
Section 6.2.1, “MySQL InnoDB Cluster Requirements”, without
changing any configuration on the instance. This does not
check any data that is on the instance; see
Checking Instance State for more information.
The user you use to connect to the
instance must have suitable
privileges, for example as configured at
Configuring Users for AdminAPI. The following
example demonstrates issuing this operation in a running MySQL Shell:
mysql-js> dba.checkInstanceConfiguration('icadmin@ic-1:3306')
Please provide the password for 'icadmin@ic-1:3306': ***
Validating MySQL instance at ic-1:3306 for use in an InnoDB cluster...
This instance reports its own address as ic-1
Clients and other cluster members will communicate with it through this address by default.
If this is not correct, the report_host MySQL system variable should be changed.
Checking whether existing tables comply with Group Replication requirements...
No incompatible tables detected
Checking instance configuration...
Some configuration options need to be fixed:
+--------------------------+---------------+----------------+--------------------------------------------------+
| Variable | Current Value | Required Value | Note |
+--------------------------+---------------+----------------+--------------------------------------------------+
| enforce_gtid_consistency | OFF | ON | Update read-only variable and restart the server |
| gtid_mode | OFF | ON | Update read-only variable and restart the server |
| server_id | 1 | | Update read-only variable and restart the server |
+--------------------------+---------------+----------------+--------------------------------------------------+
Please use the dba.configureInstance() command to repair these issues.
{
"config_errors": [
{
"action": "restart",
"current": "OFF",
"option": "enforce_gtid_consistency",
"required": "ON"
},
{
"action": "restart",
"current": "OFF",
"option": "gtid_mode",
"required": "ON"
},
{
"action": "restart",
"current": "1",
"option": "server_id",
"required": ""
}
],
"status": "error"
}
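The report is also returned as a dictionary, so you can capture it
and inspect individual fields in the same session. A sketch based on
the report shown above (the variable name report is illustrative):

mysql-js> var report = dba.checkInstanceConfiguration('icadmin@ic-1:3306')
mysql-js> print(report.status)
error
mysql-js> print(report.config_errors[0].option)
enforce_gtid_consistency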
Repeat this process for each server instance that you plan to
use as part of your cluster. The report generated after
running dba.checkInstanceConfiguration()
provides information about any configuration changes required
before you can proceed. The action field in
the config_errors section of the report
tells you whether MySQL on the instance requires a restart to
detect any change made to the configuration file.
Instances which do not support persisting configuration
changes automatically (see
Persisting Settings) require you
to connect to the server, run MySQL Shell, connect to the
instance locally, and issue
dba.configureLocalInstance(). This enables
MySQL Shell to modify the instance's option file after
running the following commands against a remote instance:

dba.configureInstance()

dba.createCluster()

Cluster.addInstance()

Cluster.removeInstance()

Cluster.rejoinInstance()
Failing to persist configuration changes to an instance's option file can result in the instance not rejoining the cluster after the next restart.
The recommended method is to log in to the remote machine, for
example using SSH, run MySQL Shell as the root user and then
connect to the local MySQL server. For example, use the
--uri option to connect to the
local instance:

shell> sudo -i mysqlsh --uri=instance

Alternatively use the \connect command to
log in to the local instance. Then issue
dba.configureLocalInstance(instance),
where instance is the connection
information to the local instance, to persist any changes made
to the local instance's option file.
mysql-js> dba.configureLocalInstance('icadmin@ic-2:3306')
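If you prefer the \connect approach, the session might look like the
following sketch, assuming MySQL Shell is running locally on the ic-2
host with the icadmin account used in the earlier examples:

mysql-js> \connect icadmin@localhost:3306
mysql-js> dba.configureLocalInstance('icadmin@localhost:3306')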
Repeat this process for each instance in the cluster which does not support persisting configuration changes automatically. For example, if you add two instances to a cluster which do not support persisting configuration changes automatically, you must connect to each server and persist the configuration changes required for InnoDB Cluster before the instance restarts. Similarly, if you modify the cluster structure, for example by changing the number of instances, you need to repeat this process for each server instance to update the InnoDB Cluster metadata accordingly.
The cluster.checkInstanceState()
function can be used to verify that the existing data on an
instance does not prevent it from joining a cluster. This
process works by validating the instance's global transaction
identifier (GTID) state against the GTIDs already processed by
the cluster. For more information on GTIDs see
GTID Format and Storage. This check
enables you to determine whether an instance which has already
processed transactions can be added to the cluster.
The following demonstrates issuing this in a running MySQL Shell:
mysql-js> cluster.checkInstanceState('icadmin@ic-4:3306')
The output of this function can be one of the following:
OK new: the instance has not executed any GTID transactions, therefore it cannot conflict with the GTIDs executed by the cluster
OK recoverable: the instance has executed GTIDs which do not conflict with the executed GTIDs of the cluster seed instances
ERROR diverged: the instance has executed GTIDs which diverge from the executed GTIDs of the cluster seed instances
ERROR lost_transactions: the instance has more executed GTIDs than the executed GTIDs of the cluster seed instances
Instances with an OK status can be added to the cluster because any data on the instance is consistent with the cluster. In other words the instance being checked has not executed any transactions which conflict with the GTIDs executed by the cluster, and can be recovered to the same state as the rest of the cluster instances.
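As with the configuration check, the result is returned as a
dictionary that can be inspected programmatically. A sketch for an
instance that has not executed any transactions (the variable name
state is illustrative):

mysql-js> var state = cluster.checkInstanceState('icadmin@ic-4:3306')
mysql-js> print(state.state)
ok
mysql-js> print(state.reason)
new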