MySQL Shell 8.0 (part of MySQL 8.0)
The following sections describe how to deploy a new production InnoDB Cluster.
AdminAPI provides the
dba.configureInstance()
function that
checks if an instance is suitably configured for
InnoDB Cluster usage, and configures the instance if it
finds any settings which are not compatible with
InnoDB Cluster. You run the
dba.configureInstance()
command against
an instance and it checks all of the settings required to
enable the instance to be used for InnoDB Cluster usage.
If the instance does not require configuration changes,
there is no need to modify the configuration of the
instance, and the dba.configureInstance()
command output confirms that the instance is ready for
InnoDB Cluster usage. If any changes are required to make
the instance compatible with InnoDB Cluster, a report of
the incompatible settings is displayed, and you can choose
to let the command make the changes to the instance's option
file. Depending on the way MySQL Shell is connected to the
instance, and the version of MySQL running on the instance,
you can make these changes permanent by persisting them to a
remote instance's option file, see
Persisting Settings. Instances
which do not support persisting configuration changes
automatically require that you configure the instance
locally, see Configuring Instances with
dba.configureLocalInstance()
.
Alternatively you can make the changes to the
instance's option file manually, see
Using Option Files for more information.
Regardless of the way you make the configuration changes,
you might have to restart MySQL to ensure the configuration
changes are detected.
The syntax of the dba.configureInstance()
command is:
dba.configureInstance([instance][, options])
where instance
is an instance
definition, and options
is a data
dictionary with additional options to configure the
operation. The command returns a descriptive text message
about the operation's result.
The instance
definition is the
connection data for the instance, see
Connecting to the Server Using URI-Like Strings or Key-Value Pairs.
If the target instance already belongs to an
InnoDB Cluster, an error is generated and the process
fails.
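For example, a first check of an instance might look like the following (the icadmin account and ic-1 host name are illustrative):
mysql-js> dba.configureInstance('icadmin@ic-1:3306')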
The options dictionary can contain the following:
mycnfPath
- the path to the MySQL
option file of the instance.
outputMycnfPath
- alternative output
path to write the MySQL option file of the instance.
password
- the password to be used by
the connection.
clusterAdmin
- the name of an
InnoDB Cluster administrator user to be created. The
supported format is the standard MySQL account name
format, and identifiers or strings are supported for the
user name and host name. By default, unquoted input is
treated as a string. See
Creating User Accounts for Administration.
clusterAdminPassword
- the password
for the InnoDB Cluster administrator account being
created using clusterAdmin. Although
you can specify a password using this option, doing so
is a potential security risk. If you do not specify this
option, but do specify the clusterAdmin
option, you are prompted for the password at the
interactive prompt. This option is deprecated, and
scheduled for removal in a future version.
clearReadOnly
- a boolean value used
to confirm that
super_read_only
should
be set to off, see
Super Read-only and Instances.
interactive
- a boolean value used to
disable the interactive wizards in the command
execution, so that prompts, including confirmation
prompts, are not shown to the user.
restart
- a boolean value used to
indicate that a remote restart of the target instance
should be performed to finalize the operation.
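Putting some of these options together, a call that supplies the option file path and creates an administrator account might look like the following (the account name and option file path are illustrative):
mysql-js> dba.configureInstance('icadmin@ic-1:3306', {mycnfPath: '/etc/my.cnf', clusterAdmin: "'icadmin'@'%'"})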
Although the connection password can be contained in the instance definition, this is insecure and not recommended. Use the MySQL Shell Pluggable Password Store (Section 4.4) to store instance passwords securely.
Once dba.configureInstance()
is issued
against an instance, the command checks if the instance's
settings are suitable for InnoDB Cluster usage. A report
is displayed which shows the settings required by
InnoDB Cluster. If the instance does not require any changes to its
settings you can use it in an InnoDB Cluster, and can
proceed to Creating the Cluster. If the
instance's settings are not valid for InnoDB Cluster usage
the dba.configureInstance()
command
displays the settings which require modification. Before
configuring the instance you are prompted to confirm the
changes shown in a table with the following information:
Variable
- the invalid configuration
variable.
Current Value
- the current value for
the invalid configuration variable.
Required Value
- the required value
for the configuration variable.
How you proceed depends on whether the instance supports
persisting settings, see
Persisting Settings. When
dba.configureInstance()
is issued against
the MySQL instance which MySQL Shell is currently running
on, in other words the local instance, it attempts to
automatically configure the instance. When
dba.configureInstance()
is issued against
a remote instance, if the instance supports persisting
configuration changes automatically, you can choose to do
this.
If a remote instance does not support persisting the changes
to configure it for InnoDB Cluster usage, you have to
configure the instance locally. See
Configuring Instances with
dba.configureLocalInstance()
.
In general, a restart of the instance is not required after
dba.configureInstance()
configures the
option file, but for some specific settings a restart might
be required. This information is shown in the report
generated after issuing
dba.configureInstance()
. If the instance
supports the RESTART
statement, MySQL Shell can shut down and then start the
instance. This ensures that the changes made to the
instance's option file are detected by mysqld. For more
information see RESTART
.
After executing a RESTART
statement, the current connection to the instance is lost.
If auto-reconnect is enabled, the connection is
reestablished after the server restarts. Otherwise, the
connection must be reestablished manually.
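For example, to let the operation finalize with a remote restart of the target instance (assuming the instance supports the RESTART statement), you could pass the restart option:
mysql-js> dba.configureInstance('icadmin@ic-2:3306', {restart: true})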
The dba.configureInstance()
method
verifies that a suitable user is available for cluster
usage, which is used for connections between members of the
cluster, see
Creating User Accounts for Administration.
If you do not specify a user to administer the cluster, in interactive mode a wizard enables you to choose one of the following options:
enable remote connections for the root user, not recommended in a production environment
create a new user
no automatic configuration, in which case you need to manually create the user
If the instance has
super_read_only=ON
then
you might need to confirm that AdminAPI can set
super_read_only=OFF
. See
Super Read-only and Instances for more
information.
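For example, to confirm in advance that super_read_only can be disabled, you could pass the clearReadOnly option described above:
mysql-js> dba.configureInstance('icadmin@ic-1:3306', {clearReadOnly: true})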
Once you have prepared your instances, use the
dba.createCluster()
function to create
the cluster, using the instance which MySQL Shell is
connected to as the seed instance for the cluster. The seed
instance is replicated to the other instances that you add
to the cluster, making them replicas of the seed instance.
In this procedure the ic-1 instance is used as the seed.
When you issue
dba.createCluster(name),
MySQL Shell creates a classic MySQL protocol session to the
server instance connected to the MySQL Shell's current
global session. For example, to create a cluster called
testCluster and assign the
returned cluster to a variable called
cluster:
mysql-js> var cluster = dba.createCluster('testCluster')
Validating instance at icadmin@ic-1:3306...
This instance reports its own address as ic-1
Instance configuration is suitable.
Creating InnoDB cluster 'testCluster' on 'icadmin@ic-1:3306'...
Adding Seed Instance...
Cluster successfully created. Use Cluster.addInstance() to add MySQL instances.
At least 3 instances are needed for the cluster to be able to withstand up to
one server failure.
This pattern of assigning the returned cluster to a variable enables you to then execute further operations against the cluster using the Cluster object's methods. The returned Cluster object uses a new session, independent from the MySQL Shell's global session. This ensures that if you change the MySQL Shell global session, the Cluster object maintains its session to the instance.
To be able to administer a cluster, you must ensure that you
have a suitable user which has the required privileges. The
recommended approach is to create an administration user. If
you did not create an administration user when configuring
your instances, use the
Cluster.setupAdminAccount()
operation. For example, to create a user named
icadmin
that can administer the
InnoDB Cluster assigned to the variable
cluster, issue:
mysql-js> cluster.setupAdminAccount('icadmin')
See Configuring Users for AdminAPI for more information on cluster administration users.
The dba.createCluster()
operation
supports MySQL Shell's interactive
option. When interactive
is on, prompts
appear in the following situations:
when run on an instance that already belongs to a
Group Replication group and
the adoptFromGR
option is false, you
are asked if you want to adopt the existing group as a cluster
when the force
option is not used
(not set to true
), you are asked to
confirm the creation of a multi-primary cluster
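For example, to create a multi-primary cluster without being prompted for confirmation, you could combine the multiPrimary and force options (in MySQL Shell versions before 8.0.14 this option is named multiMaster; the cluster name shown is illustrative):
mysql-js> var cluster = dba.createCluster('testCluster', {multiPrimary: true, force: true})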
If you encounter an error related to metadata being inaccessible you might have the loopback network interface configured. For correct InnoDB Cluster usage disable the loopback interface.
To check the cluster has been created, use the cluster
instance's status()
function. See
Checking a Cluster's Status with
Cluster.status().
Once server instances belong to a cluster it is important
to only administer them using MySQL Shell and AdminAPI.
Attempting to manually change the configuration of Group
Replication on an instance once it has been added to a
cluster is not supported. Similarly, modifying server
variables critical to InnoDB Cluster, such as
server_uuid
, after an
instance is configured using AdminAPI is not supported.
When you create a cluster using MySQL Shell 8.0.14 and
later, you can set the timeout before instances are expelled
from the cluster, for example when they become unreachable.
Pass the expelTimeout
option to the
dba.createCluster()
operation, which
configures the
group_replication_member_expel_timeout
variable on the seed instance. The
expelTimeout
option can take an integer
value in the range of 0 to 3600. All instances running MySQL
server 8.0.13 and later which are added to a cluster with
expelTimeout
configured are automatically
configured to have the same expelTimeout
value as configured on the seed instance.
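For example, to allow instances to be unreachable for up to 60 seconds before they are expelled from the cluster:
mysql-js> var cluster = dba.createCluster('testCluster', {expelTimeout: 60})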
For information on the other options which you can pass to
dba.createCluster()
, see
Section 6.2.5, “Working with InnoDB Cluster”.
Use the
Cluster.addInstance(instance)
function to add more instances to the cluster, where
instance
is the connection
information for a configured instance, see
Configuring Production Instances. From
version 8.0.17, Group Replication implements compatibility
policies which consider the patch version of the instances,
and the Cluster.addInstance()
operation detects this; in the event of an
incompatibility, the operation terminates with an error. See
Checking the MySQL Version on Instances and
Combining Different Member Versions in a Group.
You need a minimum of three instances in the cluster to make it tolerant to the failure of one instance. Adding further instances increases the tolerance to failure of an instance. To add an instance to the cluster issue:
mysql-js> cluster.addInstance('icadmin@ic-2:3306')
A new instance will be added to the InnoDB cluster. Depending on the amount of
data on the cluster this might take from a few seconds to several hours.
Please provide the password for 'icadmin@ic-2:3306': ********
Adding instance to the cluster ...
Validating instance at ic-2:3306...
This instance reports its own address as ic-2
Instance configuration is suitable.
The instance 'icadmin@ic-2:3306' was successfully added to the cluster.
When a new instance is added to the cluster, the local
address for this instance is automatically added to the
group_replication_group_seeds
variable on all online cluster instances in order to allow
them to use the new instance to rejoin the group, if needed.
The instances listed in
group_replication_group_seeds
are used according to the order in which they appear in
the list. This ensures user specified settings are used
first and preferred. See
Customizing InnoDB Clusters for more
information.
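You can inspect the resulting seed list on an online instance, for example by querying the system variable from MySQL Shell's JavaScript mode:
mysql-js> session.runSql("SELECT @@group_replication_group_seeds")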
If you are using MySQL 8.0.17 or later you can choose how the instance recovers the transactions it requires to synchronize with the cluster. Only when the joining instance has recovered all of the transactions previously processed by the cluster can it then join as an online instance and begin processing transactions. For more information, see Section 6.2.2.2, “Using MySQL Clone with InnoDB Cluster”.
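For example, assuming MySQL 8.0.17 or later, to select MySQL Clone explicitly rather than being prompted, you can pass the recoveryMethod option to Cluster.addInstance():
mysql-js> cluster.addInstance('icadmin@ic-2:3306', {recoveryMethod: 'clone'})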
Also in 8.0.17 and later, you can configure how
Cluster.addInstance()
behaves, letting recovery operations proceed in the
background or monitoring different levels of progress in
MySQL Shell.
Depending on which option you chose to recover the instance from the cluster, you see different output in MySQL Shell. Suppose that you are adding the instance ic-2 to the cluster, and ic-1 is the seed or donor.
When you use MySQL Clone to recover an instance from the cluster, the output looks like:
Validating instance at ic-2:3306...

This instance reports its own address as ic-2:3306
Instance configuration is suitable.
A new instance will be added to the InnoDB cluster. Depending on the amount of
data on the cluster this might take from a few seconds to several hours.

Adding instance to the cluster...

Monitoring recovery process of the new cluster member. Press ^C to stop monitoring and let it continue in background.
Clone based state recovery is now in progress.

NOTE: A server restart is expected to happen as part of the clone process. If the
server does not support the RESTART command or does not come back after a while,
you may need to manually start it back.

* Waiting for clone to finish...
NOTE: ic-2:3306 is being cloned from ic-1:3306
** Stage DROP DATA: Completed
** Clone Transfer
    FILE COPY  ############################################################  100%  Completed
    PAGE COPY  ############################################################  100%  Completed
    REDO COPY  ############################################################  100%  Completed
NOTE: ic-2:3306 is shutting down...

* Waiting for server restart... ready
* ic-2:3306 has restarted, waiting for clone to finish...
** Stage RESTART: Completed
* Clone process has finished: 2.18 GB transferred in 7 sec (311.26 MB/s)

State recovery already finished for 'ic-2:3306'

The instance 'ic-2:3306' was successfully added to the cluster.
Observe the warnings about server restart; you might have to restart an instance manually. See RESTART Statement.
When you use incremental recovery to recover an instance from the cluster, the output looks like:
Incremental distributed state recovery is now in progress.

* Waiting for incremental recovery to finish...
NOTE: 'ic-2:3306' is being recovered from 'ic-1:3306'
* Distributed recovery has finished
To cancel the monitoring of the recovery phase, issue
CONTROL+C. This stops the monitoring but
the recovery process continues in the background. The
waitRecovery
integer option can be used
with the Cluster.addInstance()
operation to control the behavior of the command regarding
the recovery phase. The following values are accepted:
0: do not wait and let the recovery process finish in the background
1: wait for the recovery process to finish
2: wait for the recovery process to finish, and show detailed static progress information
3: wait for the recovery process to finish, and show detailed dynamic progress information (progress bars)
If the standard output on which MySQL Shell is
running refers to a terminal, the
waitRecovery
option defaults to 3.
Otherwise, it defaults to 2. See
Monitoring Recovery Operations.
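For example, to add an instance and let recovery proceed entirely in the background:
mysql-js> cluster.addInstance('icadmin@ic-3:3306', {waitRecovery: 0})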
To verify the instance has been added, use the cluster
instance's status()
function. For
example this is the status output of a sandbox cluster after
adding a second instance:
mysql-js> cluster.status()
{
"clusterName": "testCluster",
"defaultReplicaSet": {
"name": "default",
"primary": "ic-1:3306",
"ssl": "REQUIRED",
"status": "OK_NO_TOLERANCE",
"statusText": "Cluster is NOT tolerant to any failures.",
"topology": {
"ic-1:3306": {
"address": "ic-1:3306",
"mode": "R/W",
"readReplicas": {},
"role": "HA",
"status": "ONLINE"
},
"ic-2:3306": {
"address": "ic-2:3306",
"mode": "R/O",
"readReplicas": {},
"role": "HA",
"status": "ONLINE"
}
}
},
"groupInformationSourceMember": "mysql://icadmin@ic-1:3306"
}
How you proceed depends on whether the instance is local or
remote to the instance MySQL Shell is running on, and
whether the instance supports persisting configuration
changes automatically, see
Persisting Settings. If the
instance supports persisting configuration changes
automatically, you do not need to persist the settings
manually and can either add more instances or continue to
the next step. If the instance does not support persisting
configuration changes automatically, you have to configure
the instance locally. See
Configuring Instances with
dba.configureLocalInstance()
. This is
essential to ensure that instances rejoin the cluster in the
event of leaving the cluster.
If the instance has
super_read_only=ON
then
you might need to confirm that AdminAPI can set
super_read_only=OFF
. See
Super Read-only and Instances for more
information.
Once you have your cluster deployed you can configure MySQL Router to provide high availability, see Section 6.4, “MySQL Router”.
As part of using Group Replication, InnoDB Cluster creates
internal recovery users which enable connections between the
servers in the cluster. These users are internal to the
cluster, and the user name of the generated users follows a
naming scheme of
mysql_innodb_cluster_
,
where server_id
@%server_id
is unique
to the instance. In versions earlier than 8.0.17 the user
name of the generated users followed a naming scheme of
mysql_innodb_cluster_r[
.
The hostname used for the internal users depends on whether
the 10_numbers
]ipAllowlist
option has been
configured. If ipAllowlist
is not
configured, it defaults to AUTOMATIC
and
the internal users are created using both the wildcard
%
character and
localhost
for the hostname value. When
ipAllowlist
has been configured, for each
address in the ipAllowlist
list an
internal user is created.
For more information, see
Creating an Allowlist of Servers.
Each internal user has a randomly generated password. From version 8.0.18, AdminAPI enables you to change the generated password for internal users. See Resetting Recovery Account Passwords. The randomly generated users are given the following grants:
GRANT REPLICATION SLAVE ON *.* TO internal_user;
The internal user accounts are created on the seed instance and then replicated to the other instances in the cluster. The internal users are:
generated when creating a new cluster by issuing
dba.createCluster()
generated when adding a new instance to the cluster by
issuing Cluster.addInstance().
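You can list the generated internal accounts on an instance, for example with a query like the following (the user names returned depend on your cluster's server IDs):
mysql-js> session.runSql("SELECT user, host FROM mysql.user WHERE user LIKE 'mysql_innodb_cluster_%'")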
In addition, the
Cluster.rejoinInstance()
operation can also result in a new internal user being
generated when the ipAllowlist
option is
used to specify a hostname. For example by issuing:
Cluster.rejoinInstance(instance, {ipAllowlist: "192.168.1.1/22"});
all previously existing internal users are removed and a new
internal user is created, taking into account the
ipAllowlist
value used.
For more information on the internal users required by Group Replication, see User Credentials For Distributed Recovery.
Instances that belong to a cluster use different ports for
different types of communication. In addition to the default
port
at 3306, which is used
for client connections over classic MySQL protocol, and the
mysqlx_port
, which defaults
to 33060 and is used for X Protocol client connections,
there is also a port for internal connections between the
instances in the cluster which is not used for client
connections. This port is configured by the
localAddress
option, which configures the
group_replication_local_address
system variable, and this port must be open so that the
instances in the cluster can communicate with each other.
For example, if your firewall is blocking this port then the
instances cannot communicate with each other, and the
cluster cannot function. Similarly, if your instances are
using SELinux, you need to
ensure that all of the required ports used by
InnoDB Cluster are open so that the instances can
communicate with each other. See
Setting the TCP Port Context for MySQL Features and
MySQL Shell
Ports Reference.
When you create a cluster or add instances to a cluster, by
default the localAddress
port is
calculated by multiplying the target instance's
port
value by 10 and then
adding one to the result. For example, when the
port
of the target instance
is the default value of 3306, the calculated
localAddress
port is 33061. You should
ensure that port numbers used by your cluster instances are
compatible with the way localAddress
is
calculated. For example, if the server instance being used
to create a cluster has a
port
number higher than
6553, the dba.createCluster()
operation
fails because the calculated localAddress
port number exceeds the maximum valid port which is 65535.
To avoid this situation either use a lower
port
value on the instances
you use for InnoDB Cluster, or manually assign the
localAddress
value, for example:
mysql-js> dba.createCluster('testCluster', {'localAddress':'ic-1:33061'})
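The calculation described above can be checked directly in MySQL Shell's JavaScript mode; for the default port of 3306 it yields:
mysql-js> 3306 * 10 + 1
33061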