It is possible to restore from an NDB backup to a cluster having a different number of data nodes than the original from which the backup was taken. The following two sections discuss, respectively, the cases where the target cluster has a lesser or greater number of data nodes than the source of the backup.
You can restore to a cluster having fewer data nodes than the original, provided that the larger number of nodes is an even multiple of the smaller number. In the following example, we restore a backup taken on a cluster having four data nodes to a cluster having two data nodes.
The management server for the original cluster is on host host10. The original cluster has four data nodes, with the node IDs and host names shown in the following extract from the management server's config.ini file:

[ndbd]
NodeId=2
HostName=host2

[ndbd]
NodeId=4
HostName=host4

[ndbd]
NodeId=6
HostName=host6

[ndbd]
NodeId=8
HostName=host8
We assume that each data node was originally started with ndbmtd --ndb-connectstring=host10 or the equivalent.
Perform a backup in the normal manner. See Section 23.5.8.2, “Using The NDB Cluster Management Client to Create a Backup”, for information about how to do this.
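For example, you can start the backup from the system shell by invoking the ndb_mgm client with the START BACKUP command; this minimal sketch assumes that the management server is on host10 and that the desired backup ID is 1:

shell> ndb_mgm -c host10 -e "START BACKUP 1"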
The files created by the backup on each data node are listed here, where N is the node ID and B is the backup ID:

BACKUP-B-0.N.Data
BACKUP-B.N.ctl
BACKUP-B.N.log

These files are found under BackupDataDir/BACKUP/BACKUP-B on each data node. For the rest of this example, we assume that the backup ID is 1.
Have all of these files available for later copying to the new data nodes (where they can be accessed on the data node's local file system by ndb_restore). It is simplest to copy them all to a single location; we assume that this is what you have done.
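As an illustration, you might gather the files on a staging host using scp. In this sketch, the source path /var/lib/mysql-cluster/BACKUP/BACKUP-1 and the staging directory /staging/BACKUP-1 are assumptions for this example; because each file name includes the originating node's ID, all of the files can safely share one directory:

shell> scp host2:/var/lib/mysql-cluster/BACKUP/BACKUP-1/* /staging/BACKUP-1/
shell> scp host4:/var/lib/mysql-cluster/BACKUP/BACKUP-1/* /staging/BACKUP-1/
shell> scp host6:/var/lib/mysql-cluster/BACKUP/BACKUP-1/* /staging/BACKUP-1/
shell> scp host8:/var/lib/mysql-cluster/BACKUP/BACKUP-1/* /staging/BACKUP-1/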
The management server for the target cluster is on host host20, and the target has two data nodes, with the node IDs and host names shown in the following extract from the management server's config.ini file on host20:

[ndbd]
NodeId=3
HostName=host3

[ndbd]
NodeId=5
HostName=host5
Each of the data node processes on host3 and host5 should be started with ndbmtd -c host20 --initial or the equivalent, so that the new (target) cluster starts with clean data node file systems.
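Written out in full, and assuming that ndbmtd is in the shell's search path on both target hosts, this means running the following:

# On host3:
shell> ndbmtd -c host20 --initial

# On host5:
shell> ndbmtd -c host20 --initial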
Copy two different sets of backup files to each of the target data nodes. For this example, copy the backup files from nodes 2 and 4 of the original cluster to node 3 in the target cluster. These files are listed here:
BACKUP-1-0.2.Data
BACKUP-1.2.ctl
BACKUP-1.2.log
BACKUP-1-0.4.Data
BACKUP-1.4.ctl
BACKUP-1.4.log
Then copy the backup files from nodes 6 and 8 to node 5; these files are shown in the following list:
BACKUP-1-0.6.Data
BACKUP-1.6.ctl
BACKUP-1.6.log
BACKUP-1-0.8.Data
BACKUP-1.8.ctl
BACKUP-1.8.log
For the remainder of this example, we assume that the respective backup files have been saved to the directory /BACKUP-1 on each of nodes 3 and 5.
On each of the two target data nodes, you must restore from both sets of backups. First, restore the backups from nodes 2 and 4 to node 3 by invoking ndb_restore on host3 as shown here:

shell> ndb_restore -c host20 --nodeid=2 --backupid=1 --restore-data --backup-path=/BACKUP-1
shell> ndb_restore -c host20 --nodeid=4 --backupid=1 --restore-data --backup-path=/BACKUP-1
Then restore the backups from nodes 6 and 8 to node 5 by invoking ndb_restore on host5, like this:

shell> ndb_restore -c host20 --nodeid=6 --backupid=1 --restore-data --backup-path=/BACKUP-1
shell> ndb_restore -c host20 --nodeid=8 --backupid=1 --restore-data --backup-path=/BACKUP-1
The node ID specified for a given ndb_restore command is that of the node in the original backup and not that of the data node to restore it to. When restoring a backup using the method described in this section, ndb_restore connects to the management server and obtains a list of the data nodes in the cluster to which the backup is being restored. The restored data is distributed accordingly, so the number of nodes in the target cluster does not need to be known or calculated when performing the backup.
To restore to a cluster having more data nodes than the original, perform the steps described in the remainder of this section. When changing the total number of LCP threads or LQH threads per node group, you should re-create the schema from a backup created using mysqldump.
Create the backup of the data. You can do this by invoking the ndb_mgm client START BACKUP command from the system shell, like this:
shell> ndb_mgm -e "START BACKUP 1"
This assumes that the desired backup ID is 1.
Create a backup of the schema. This step is necessary only if the total number of LCP threads or LQH threads per node group is changed.
shell> mysqldump --no-data --routines --events --triggers --databases > myschema.sql
Once you have created the NDB native backup using ndb_mgm, you must not make any schema changes before creating the backup of the schema.
Copy the backup directory to the new cluster. For example, if the backup you want to restore has ID 1 and BackupDataDir = /backups/node_nodeid (where nodeid is the data node's node ID), then the path to the backup on this node is /backups/node_1/BACKUP/BACKUP-1. Inside this directory there are three files, listed here:
BACKUP-1-0.1.Data
BACKUP-1.1.ctl
BACKUP-1.1.log
You should copy the entire directory to the new node.
If you needed to create a schema file, copy this to a location on an SQL node where it can be read by mysqld.
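For example, using scp, with newhost1 standing in for a data node host in the new cluster and sqlhost for an SQL node host (both host names, and the target paths, are placeholders for this example):

shell> scp -r /backups/node_1/BACKUP/BACKUP-1 newhost1:/backups/node_1/BACKUP/
shell> scp myschema.sql sqlhost:/tmp/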
There is no requirement for the backup to be restored from a specific node or nodes.
To restore from the backup just created, perform the following steps:
Restore the schema.
If you created a separate schema backup file using mysqldump, import this file using the mysql client, similar to what is shown here:
shell> mysql < myschema.sql
When importing the schema file, you may need to specify the --user and --password options (and possibly others) in addition to what is shown, in order for the mysql client to be able to connect to the MySQL server.
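For example, to connect as the root user and be prompted for the password (the user name here is an assumption for this example):

shell> mysql --user=root --password < myschema.sql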
If you did not need to create a schema file, you can re-create the schema using ndb_restore --restore-meta (short form -m), similar to what is shown here:
shell> ndb_restore --nodeid=1 --backupid=1 --restore-meta --backup-path=/backups/node_1/BACKUP/BACKUP-1
ndb_restore must be able to contact the management server; add the --ndb-connectstring option if and as needed to make this possible.
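For instance, if the management server is on a host named mgmhost (a placeholder name for this example), the command might be written like this:

shell> ndb_restore --ndb-connectstring=mgmhost --nodeid=1 --backupid=1 --restore-meta --backup-path=/backups/node_1/BACKUP/BACKUP-1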
Restore the data. This needs to be done once for each data node in the original cluster, each time using that data node's node ID. Assuming that there were 4 data nodes originally, the set of commands required would look something like this:
ndb_restore --nodeid=1 --backupid=1 --restore-data --backup-path=/backups/node_1/BACKUP/BACKUP-1 --disable-indexes
ndb_restore --nodeid=2 --backupid=1 --restore-data --backup-path=/backups/node_2/BACKUP/BACKUP-1 --disable-indexes
ndb_restore --nodeid=3 --backupid=1 --restore-data --backup-path=/backups/node_3/BACKUP/BACKUP-1 --disable-indexes
ndb_restore --nodeid=4 --backupid=1 --restore-data --backup-path=/backups/node_4/BACKUP/BACKUP-1 --disable-indexes
These can be run in parallel. Be sure to add the --ndb-connectstring option as needed.
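For example, to run all four from a single shell session, you might background each command and then wait for them all to complete. This is a minimal sketch, which assumes that all of the backup directories are reachable from the host where it is run, and that enough free [api] slots are available, since each ndb_restore instance connects to the cluster as an API node:

shell> ndb_restore --nodeid=1 --backupid=1 --restore-data --backup-path=/backups/node_1/BACKUP/BACKUP-1 --disable-indexes &
shell> ndb_restore --nodeid=2 --backupid=1 --restore-data --backup-path=/backups/node_2/BACKUP/BACKUP-1 --disable-indexes &
shell> ndb_restore --nodeid=3 --backupid=1 --restore-data --backup-path=/backups/node_3/BACKUP/BACKUP-1 --disable-indexes &
shell> ndb_restore --nodeid=4 --backupid=1 --restore-data --backup-path=/backups/node_4/BACKUP/BACKUP-1 --disable-indexes &
shell> wait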
Rebuild the indexes. These were disabled by the --disable-indexes option used in the commands just shown. Recreating the indexes avoids errors due to the restore not being consistent at all points. Rebuilding the indexes can also improve performance in some cases. To rebuild the indexes, execute the following command once, on a single node:
shell> ndb_restore --nodeid=1 --backupid=1 --backup-path=/backups/node_1/BACKUP/BACKUP-1 --rebuild-indexes
As mentioned previously, you may need to add the --ndb-connectstring option, so that ndb_restore can contact the management server.