MySQL Shell 8.0
In the following example, we use AdminAPI with a sandbox deployment to set up an InnoDB Cluster with MySQL Router.
Deploying and using local sandbox instances of MySQL allows you to test out the functionality locally, before deployment on your production servers. AdminAPI has built-in functionality for creating sandbox instances that are pre-configured to work with InnoDB Cluster, InnoDB ClusterSet, and InnoDB ReplicaSet in a locally deployed scenario.
This example covers the following tasks: installing the required components, deploying three sandbox instances, creating an InnoDB Cluster, bootstrapping MySQL Router, and testing the high availability of the Cluster.
Sandbox instances are only suitable for deploying and running on your local machine for testing purposes.
Install the following components:
MySQL Server: For more information, see Installing MySQL.
MySQL Shell: For more information, see Installing MySQL Shell.
MySQL Router: For more information, see Installing MySQL Router.
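After installing these components, you can optionally confirm that they are available on your PATH by checking their versions. A minimal check (the version strings printed will differ on your system; if mysqld is not on your PATH, look in your MySQL installation's bin directory instead):
> mysqlsh --version
> mysqlrouter --version
> mysqld --version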
To provide tolerance for one failure, create an InnoDB Cluster with three instances. In this example, we will be using three sandbox instances running on the same machine. In a real-world setup, those three instances would be running on different hosts on the network.
To start MySQL Shell, issue:
> mysqlsh
To create and start the MySQL sandbox instances, use the dba.deploySandboxInstance() function, which is part of AdminAPI. Issue the following three statements in MySQL Shell and enter a root password for each instance:
mysql-js> dba.deploySandboxInstance(3310)
mysql-js> dba.deploySandboxInstance(3320)
mysql-js> dba.deploySandboxInstance(3330)
Use the same root password for all instances.
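If you prefer to script the deployment rather than type the password at each prompt, dba.deploySandboxInstance() also accepts an options dictionary that includes a password value. A minimal sketch, using a placeholder password of your choosing:
mysql-js> dba.deploySandboxInstance(3310, {password: "MyRootPwd123!"})
mysql-js> dba.deploySandboxInstance(3320, {password: "MyRootPwd123!"})
mysql-js> dba.deploySandboxInstance(3330, {password: "MyRootPwd123!"})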
To create an InnoDB Cluster, complete the following steps:
Connect to the MySQL instance you want to be the primary instance in the InnoDB Cluster by issuing:
mysql-js> shell.connect('root@localhost:3310')
Issue the dba.createCluster() command to create the Cluster, assigning the returned Cluster object to the variable cluster:
mysql-js> cluster = dba.createCluster('devCluster')
This command outputs:
A new InnoDB cluster will be created on instance 'localhost:3310'.
Validating instance configuration at localhost:3310...
NOTE: Instance detected as a sandbox.
Please note that sandbox instances are only suitable for deploying test clusters for use within the same host.
This instance reports its own address as 127.0.0.1:3310
Instance configuration is suitable.
NOTE: Group Replication will communicate with other members using '127.0.0.1:33101'.
Use the localAddress option to override.
Creating InnoDB cluster 'devCluster' on '127.0.0.1:3310'...
Adding Seed Instance...
Cluster successfully created. Use Cluster.addInstance() to add MySQL instances.
At least 3 instances are needed for the cluster to be able to withstand up to
one server failure.
<Cluster:devCluster>
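If you later close MySQL Shell, or the cluster variable goes out of scope, you do not need to recreate anything: reconnect to any member and retrieve the Cluster object again. For example:
mysql-js> shell.connect('root@localhost:3310')
mysql-js> cluster = dba.getCluster('devCluster')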
Verify that the creation was successful by using the cluster.status() function with the assigned variable cluster:
mysql-js> cluster.status()
The following status is output:
{
    "clusterName": "devCluster",
    "defaultReplicaSet": {
        "name": "default",
        "primary": "127.0.0.1:3310",
        "ssl": "REQUIRED",
        "status": "OK_NO_TOLERANCE",
        "statusText": "Cluster is NOT tolerant to any failures.",
        "topology": {
            "127.0.0.1:3310": {
                "address": "127.0.0.1:3310",
                "memberRole": "PRIMARY",
                "mode": "R/W",
                "readReplicas": {},
                "replicationLag": null,
                "role": "HA",
                "status": "ONLINE",
                "version": "8.0.28"
            }
        },
        "topologyMode": "Single-Primary"
    },
    "groupInformationSourceMember": "127.0.0.1:3310"
}
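For more detail than the default report, cluster.status() accepts an extended option, and cluster.describe() returns just the Cluster's topology without state information. For example:
mysql-js> cluster.status({extended: 1})
mysql-js> cluster.describe()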
The Cluster is up and running but not yet tolerant to a failure. Add another MySQL Server instance to the Cluster using the <Cluster>.addInstance() function:
mysql-js> cluster.addInstance('root@localhost:3320')

NOTE: The target instance '127.0.0.1:3320' has not been pre-provisioned (GTID set is empty). The Shell is
unable to decide whether incremental state recovery can correctly provision it.
The safest and most convenient way to provision a new instance is through automatic clone provisioning,
which will completely overwrite the state of '127.0.0.1:3320' with a physical snapshot from an existing
cluster member. To use this method by default, set the 'recoveryMethod' option to 'clone'.

The incremental state recovery may be safely used if you are sure all updates ever executed in the cluster
were done with GTIDs enabled, there are no purged transactions and the new instance contains the same GTID
set as the cluster or a subset of it. To use this method by default, set the 'recoveryMethod' option to
'incremental'.

Please select a recovery method [C]lone/[I]ncremental recovery/[A]bort (default Clone):
Select a recovery method from the prompt. The options are:
Clone: Provisions the joining instance with a physical snapshot taken from an existing Cluster member, deleting any transactions the joining instance already contains. The MySQL Clone plugin is installed automatically. Select Clone if you are adding either an empty instance (one that has not processed any transactions) or an instance whose existing transactions you do not need to retain.
Incremental recovery: Recovers all transactions processed by the Cluster to the joining instance using asynchronous replication. Incremental recovery is appropriate only if you are sure that all updates ever processed by the Cluster were made with global transaction IDs (GTIDs) enabled, that no transactions have been purged, and that the new instance contains the same GTID set as the Cluster or a subset of it. (If you want to skip the interactive prompt, you can set the recovery method directly, as sketched after this list.)
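If you prefer to avoid the interactive prompt, for example when scripting the setup, you can pass the recovery method directly through the recoveryMethod option mentioned in the prompt text above. A minimal sketch:
mysql-js> cluster.addInstance('root@localhost:3320', {recoveryMethod: 'clone'})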
In this example, select C for Clone:
Please select a recovery method [C]lone/[I]ncremental recovery/[A]bort (default Clone): C
Validating instance configuration at localhost:3320...
NOTE: Instance detected as a sandbox.
Please note that sandbox instances are only suitable for deploying test clusters for use within the same host.

This instance reports its own address as 127.0.0.1:3320

Instance configuration is suitable.
NOTE: Group Replication will communicate with other members using '127.0.0.1:33201'.
Use the localAddress option to override.

A new instance will be added to the InnoDB cluster. Depending on the amount of
data on the cluster this might take from a few seconds to several hours.

Adding instance to the cluster...

Monitoring recovery process of the new cluster member. Press ^C to stop monitoring and let it continue in background.
Clone based state recovery is now in progress.

NOTE: A server restart is expected to happen as part of the clone process. If the
server does not support the RESTART command or does not come back after a
while, you may need to manually start it back.

* Waiting for clone to finish...
NOTE: 127.0.0.1:3320 is being cloned from 127.0.0.1:3310
** Stage DROP DATA: Completed
** Clone Transfer
    FILE COPY  ############################################################  100%  Completed
    PAGE COPY  ############################################################  100%  Completed
    REDO COPY  ############################################################  100%  Completed

NOTE: 127.0.0.1:3320 is shutting down...

* Waiting for server restart... ready
* 127.0.0.1:3320 has restarted, waiting for clone to finish...
** Stage RESTART: Completed
* Clone process has finished: 72.61 MB transferred in about 1 second (~72.61 MB/s)

State recovery already finished for '127.0.0.1:3320'

The instance '127.0.0.1:3320' was successfully added to the cluster.
Add the third instance created and again select C for the Clone recovery method:
mysql-js> cluster.addInstance('root@localhost:3330')
Check the status of the Cluster by issuing:
mysql-js> cluster.status()
This outputs the following:
{ "clusterName": "devCluster", "defaultReplicaSet": { "name": "default", "primary": "127.0.0.1:3310", "ssl": "REQUIRED", "status": "OK", "statusText": "Cluster is ONLINE and can tolerate up to ONE failure.", "topology": { "127.0.0.1:3310": { "address": "127.0.0.1:3310", "memberRole": "PRIMARY", "mode": "R/W", "readReplicas": {}, "replicationLag": null, "role": "HA", "status": "ONLINE", "version": "8.0.28" }, "127.0.0.1:3320": { "address": "127.0.0.1:3320", "memberRole": "SECONDARY", "mode": "R/O", "readReplicas": {}, "replicationLag": null, "role": "HA", "status": "ONLINE", "version": "8.0.28" }, "127.0.0.1:3330": { "address": "127.0.0.1:3330", "memberRole": "SECONDARY", "mode": "R/O", "readReplicas": {}, "replicationLag": null, "role": "HA", "status": "ONLINE", "version": "8.0.28" } }, "topologyMode": "Single-Primary" }, "groupInformationSourceMember": "127.0.0.1:3310" } The setup of the InnoDB Cluster was successful!
The Cluster can now tolerate up to one failure. Quit MySQL Shell by issuing:
mysql-js> \q
After MySQL InnoDB Cluster is set up, test the high availability of the Cluster. For this purpose, use MySQL Router. If one instance fails, the MySQL Router updates its routing configuration automatically and ensures that new connections are routed to the remaining instances.
Before MySQL Router can perform its routing operations, make it aware of the new InnoDB Cluster. To do this, use the --bootstrap option and point MySQL Router to the current read-write (R/W) MySQL Server instance (the primary instance) of the Cluster. Store the Router's configuration in a folder called mysql-router by using the -d option.
Open a terminal in your home directory:
On a Linux system, issue:
[demo-user@localhost]$> mysqlrouter --bootstrap root@localhost:3310 -d mysql-router
On a Windows system, issue:
C:\Users\demo-user> mysqlrouter --bootstrap root@localhost:3310 -d mysql-router
MySQL Router then prints the TCP/IP ports that it will use for routing connections. For more information, see Deploying MySQL Router.
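The ports are also recorded in the generated configuration file. Assuming the mysql-router folder created above, a quick way to list them on Linux is:
$> grep bind_port mysql-router/mysqlrouter.conf
By default, a bootstrapped Router listens on port 6446 for read-write and port 6447 for read-only classic MySQL protocol connections, plus two further ports for X Protocol connections; the ports printed during your own bootstrap run are authoritative.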
When MySQL Router has been successfully configured, start it in the background:
On a Windows system, use the start /B command and point the Router to the configuration file that was generated by using the --bootstrap option:
C:\> start /B mysqlrouter -c %HOMEPATH%\mysql-router\mysqlrouter.conf
Or call the Windows PowerShell script in the mysql-router folder created previously:
\mysql-router\start.ps1
On a Linux system using systemd, issue:
sudo systemctl start mysqlrouter.service
Or, on a Linux system, call the shell script in the mysql-router folder created previously:
~/mysql-router/start.sh
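Whichever method you use, you can confirm that Router started cleanly by inspecting the log file inside the bootstrap directory. The path below assumes the mysql-router folder created above and the default logging configuration:
$> tail mysql-router/log/mysqlrouter.log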
Now that an InnoDB Cluster and MySQL Router are running, test the Cluster setup.
Instead of connecting to one of the MySQL Server instances directly, connect through the MySQL Router.
Issue the following connection command:
> mysqlsh root@localhost:6446
Provide the root password to connect to the InnoDB Cluster.
Check the status of the InnoDB Cluster by creating a variable cluster and assigning it the return value of the dba.getCluster() operation:
mysql-js> cluster = dba.getCluster()
mysql-js> cluster.status()
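Because bootstrapping registered the Router in the Cluster's metadata, you can also list it from the same session (available in MySQL Shell 8.0.19 and later):
mysql-js> cluster.listRouters()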
Switch to SQL mode:
mysql-js> \sql
Query the port the instance is running on by issuing:
mysql-sql> SELECT @@port;
+--------+
| @@port |
+--------+
|   3310 |
+--------+
1 row in set (0.0007 sec)
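Optionally, from a second terminal, you can confirm that read-only sessions are routed to a secondary. This one-off check assumes the default read-only port 6447; it should report 3320 or 3330 rather than 3310:
> mysqlsh --sql root@localhost:6447 -e "SELECT @@port"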
Switch back to JavaScript mode:
mysql-js> \js
Use the dba.killSandboxInstance() function to stop the MySQL Server instance:
mysql-js> dba.killSandboxInstance(3310)

Killing MySQL instance...

Instance localhost:3310 successfully killed.
Check that MySQL Router is correctly routing traffic by running the SELECT @@port command against the instance that was just killed and checking the result:
Switch to SQL mode:
mysql-js> \sql
Check the port of MySQL:
mysql-sql> SELECT @@port;
An error is returned: ERROR: 2013 (HY000): Lost connection to MySQL server during query. This error means that the instance that was running on port 3310 is no longer available.
Check the port again:
mysql-sql> SELECT @@port;
+--------+
| @@port |
+--------+
|   3320 |
+--------+
This output shows that the instance running on port 3320 was promoted to be the new Read/Write (R/W) primary instance.
Return to JavaScript mode and check the status of the Cluster:
mysql-js> cluster.status()
{
"clusterName": "devCluster",
"defaultReplicaSet": {
"name": "default",
"primary": "127.0.0.1:3320",
"ssl": "REQUIRED",
"status": "OK_NO_TOLERANCE",
"statusText": "Cluster is NOT tolerant to any failures. 1 member is not active.",
"topology": {
"127.0.0.1:3310": {
"address": "127.0.0.1:3310",
"memberRole": "SECONDARY",
"mode": "n/a",
"readReplicas": {},
"role": "HA",
"shellConnectError": "MySQL Error 2003: Could not open connection to '127.0.0.1:3310':
Can't connect to MySQL server on '127.0.0.1:3310' (10061)",
"status": "(MISSING)"
},
"127.0.0.1:3320": {
"address": "127.0.0.1:3320",
"memberRole": "PRIMARY",
"mode": "R/W",
"readReplicas": {},
"replicationLag": null,
"role": "HA",
"status": "ONLINE",
"version": "8.0.28"
},
"127.0.0.1:3330": {
"address": "127.0.0.1:3330",
"memberRole": "SECONDARY",
"mode": "R/O",
"readReplicas": {},
"replicationLag": null,
"role": "HA",
"status": "ONLINE",
"version": "8.0.28"
}
},
"topologyMode": "Single-Primary"
},
"groupInformationSourceMember": "127.0.0.1:3320"
}
The MySQL Server instance formerly running on port 3310 is MISSING.
Restart this instance by issuing the dba.startSandboxInstance() operation with the port number:
mysql-js> dba.startSandboxInstance(3310)
Checking the status of the Cluster shows that the instance has been restored as active in the Cluster, but as a SECONDARY member:
mysql-js> cluster.status()
{
"clusterName": "devCluster",
"defaultReplicaSet": {
"name": "default",
"primary": "127.0.0.1:3320",
"ssl": "REQUIRED",
"status": "OK",
"statusText": "Cluster is ONLINE and can tolerate up to ONE failure.",
"topology": {
"127.0.0.1:3310": {
"address": "127.0.0.1:3310",
"memberRole": "SECONDARY",
"mode": "R/O",
"readReplicas": {},
"replicationLag": null,
"role": "HA",
"status": "ONLINE",
"version": "8.0.28"
},
"127.0.0.1:3320": {
"address": "127.0.0.1:3320",
"memberRole": "PRIMARY",
"mode": "R/W",
"readReplicas": {},
"replicationLag": null,
"role": "HA",
"status": "ONLINE",
"version": "8.0.28"
},
"127.0.0.1:3330": {
"address": "127.0.0.1:3330",
"memberRole": "SECONDARY",
"mode": "R/O",
"readReplicas": {},
"replicationLag": null,
"role": "HA",
"status": "ONLINE",
"version": "8.0.28"
}
},
"topologyMode": "Single-Primary"
},
"groupInformationSourceMember": "127.0.0.1:3320"
}
All instances are back online, and the Cluster can tolerate one failure again.
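When you have finished testing, you can remove the sandbox deployment. A minimal teardown, repeated for each port (3310, 3320, and 3330): dba.stopSandboxInstance() gracefully stops an instance (it prompts for the root password), and dba.deleteSandboxInstance() then deletes its files:
mysql-js> dba.stopSandboxInstance(3310)
mysql-js> dba.deleteSandboxInstance(3310)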