This section describes the known limitations of InnoDB cluster. As InnoDB cluster uses Group Replication, you should also be aware of its limitations; see Section 17.7.2, “Group Replication Limitations”.
Results which contain multi-byte characters are sometimes formatted with incorrectly aligned columns. Similarly, non-standard character sets can be corrupted in results.
AdminAPI does not support Unix socket connections. MySQL Shell currently does not prevent you from attempting to use a socket connection to a cluster, but doing so can cause unexpected results.
The MySQL Shell help describes an invalid URI: USER[:PASS]@::SOCKET[/DB]. This is invalid because the @ symbol cannot be present if no user information is provided.
If a session type is not specified when creating the global session, MySQL Shell provides automatic protocol detection, which first attempts to create a NodeSession and, if that fails, tries to create a ClassicSession. With an InnoDB cluster that consists of three server instances, where there is one read-write port and two read-only ports, this can cause MySQL Shell to only connect to one of the read-only instances. Therefore it is recommended to always specify the session type when creating the global session.
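For example, the session type can be given explicitly when starting MySQL Shell. The account, host, and ports below are placeholders, and the exact option names depend on your MySQL Shell version (the 1.0 series typically uses --classic and --node, later versions --mysql and --mysqlx; check mysqlsh --help):

shell> mysqlsh --uri root@ic-1:3306 --classic    # ClassicSession (classic MySQL protocol)
shell> mysqlsh --uri root@ic-1:33060 --node      # NodeSession (X Protocol)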
When adding non-sandbox server instances (instances which you have configured manually rather than using dba.deploySandboxInstance()) to a cluster, MySQL Shell is not able to persist any configuration changes in the instance's configuration file. This leads to one or both of the following scenarios:
a. The Group Replication configuration is not persisted in the instance's configuration file and upon restart the instance does not rejoin the cluster.
b. The instance is not valid for cluster usage. Although the instance can be verified with dba.checkInstanceConfiguration(), and MySQL Shell makes the required configuration changes in order to make the instance ready for cluster usage, those changes are not persisted in the configuration file and so are lost once a restart happens.
If only a happens, the instance does not rejoin the cluster after a restart.
If b also happens, and you observe that the instance did not rejoin the cluster after a restart, you cannot use the recommended dba.rebootClusterFromCompleteOutage() in this situation to get the cluster back online. This is because the instance loses any configuration changes made by MySQL Shell, and because they were not persisted, the instance reverts to the previous state before being configured for the cluster. This causes Group Replication to stop responding, and eventually the command times out.
To avoid this problem it is strongly recommended to use dba.configureLocalInstance() before adding instances to a cluster, in order to persist the configuration changes.
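A minimal sketch of this recommendation, assuming a manually configured instance at localhost:3306 whose option file is /etc/my.cnf (the account, port, and path are placeholders); run it in MySQL Shell on the instance's own host so the changes can be written to the local option file:

mysql-js> dba.checkInstanceConfiguration('root@localhost:3306')
mysql-js> dba.configureLocalInstance('root@localhost:3306', {mycnfPath: '/etc/my.cnf'})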
Using MySQL server instances configured with the validate_password plugin and the password policy set to STRONG causes InnoDB cluster createCluster() and MySQL Router bootstrap operations to fail. This is because the internal user required for access to the server instance cannot be validated.
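If you need to inspect the policy on an instance, or to relax it temporarily while creating the cluster, the plugin's variables can be checked and changed at runtime. Whether lowering the policy is acceptable depends on your security requirements; this is shown only as an illustration:

mysql> SHOW VARIABLES LIKE 'validate_password%';
mysql> SET GLOBAL validate_password_policy=MEDIUM;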
The MySQL Router --bootstrap command line option does not accept IPv6 addresses.
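As a workaround, bootstrap against an IPv4 address or a host name that resolves to one; the address and target directory below are placeholders:

shell> mysqlrouter --bootstrap root@192.0.2.10:3306 --directory /opt/myrouter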
The commercial version of MySQL Router does not have the correct setting for AppArmor. A workaround is to edit the AppArmor profile configuration file /etc/apparmor.d/usr.sbin.mysqlrouter and modify the line containing /usr/sbin/mysqld to use the path to MySQL Router, for example /usr/sbin/mysqlrouter.
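A sketch of the edit; only the executable path changes, and the permission flags following the path (shown here as ...) should be left as they appear in your profile. After editing, reload the profile, for example with apparmor_parser:

# /etc/apparmor.d/usr.sbin.mysqlrouter (excerpt)
# change the line
  /usr/sbin/mysqld ...,
# to
  /usr/sbin/mysqlrouter ...,

shell> sudo apparmor_parser -r /etc/apparmor.d/usr.sbin.mysqlrouter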
Using the adoptFromGR option with the dba.createCluster() function to create a cluster based on an existing deployment of Group Replication fails with an error that the instance is already part of a replication group. This happens in MySQL Shell's default wizard mode only. A workaround is to disable wizard mode by launching mysqlsh with the --no-wizard command option.
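For example, starting MySQL Shell with wizard mode disabled and then adopting the existing group (the account, host name, and cluster name are placeholders):

shell> mysqlsh --no-wizard --uri root@gr-member-1:3306
mysql-js> var cluster = dba.createCluster('myCluster', {adoptFromGR: true})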
The use of the --defaults-extra-file option to specify an option file is not supported by InnoDB cluster server instances. InnoDB cluster only supports a single option file per instance; no extra option files are supported. Therefore, for any operation working with the instance's option file, the main option file should be specified. If you want to use multiple option files, you have to configure the files manually, make sure they are updated correctly according to the precedence rules that apply when multiple option files are used, and ensure that the desired settings are not incorrectly overwritten by options in an extra, unrecognized option file.
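For example, when an AdminAPI operation works with the instance's option file, such as dba.configureLocalInstance() with its mycnfPath option, point it at the main option file rather than at a file that is only passed to the server through --defaults-extra-file (the account and path are placeholders):

mysql-js> dba.configureLocalInstance('root@localhost:3306', {mycnfPath: '/etc/mysql/my.cnf'})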