MySQL NDB Cluster 7.2 Release Notes
MySQL NDB Cluster 7.2.4 is the first General Availability release in the MySQL NDB Cluster 7.2 release series, incorporating new features in the NDB storage engine and fixing recently discovered bugs in previous MySQL NDB Cluster 7.2 development releases.
Obtaining MySQL NDB Cluster 7.2. MySQL NDB Cluster 7.2 source code and binaries can be obtained from https://dev.mysql.com/downloads/cluster/.
This release also incorporates all bug fixes and changes made in previous NDB Cluster releases, as well as all bug fixes and feature changes which were added in mainline MySQL 5.5 through MySQL 5.5.19 (see Changes in MySQL 5.5.19 (2011-12-08, General Availability)).
Important Change: A mysqld process joining a MySQL NDB Cluster where distributed privileges are in use now automatically executes a FLUSH PRIVILEGES statement as part of the connection process, so that the cluster's distributed privileges take immediate effect on the new SQL node. (Bug #13340854)
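Prior to this change, the same effect had to be achieved manually; a minimal sketch of that workaround, issued from the newly joined SQL node's mysql client:

    mysql> FLUSH PRIVILEGES;

This reloads the grant tables, so that privileges changed elsewhere in the cluster become effective on the local mysqld.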
Packaging: RPM distributions of MySQL NDB Cluster 7.1 contained a number of packages which in MySQL NDB Cluster 7.2 have been merged into the MySQL-Cluster-server RPM. However, the MySQL NDB Cluster 7.2 MySQL-Cluster-server RPM did not actually obsolete these packages, which meant that they had to be removed manually prior to performing an upgrade from a MySQL NDB Cluster 7.1 RPM installation. These packages include the MySQL-Cluster-clusterj, MySQL-Cluster-extra, MySQL-Cluster-management, MySQL-Cluster-storage, and MySQL-Cluster-tools RPMs.
For more information, see Installing NDB Cluster from RPM. (Bug #13545589)
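On affected versions, the merged packages could be removed manually with rpm before installing the MySQL NDB Cluster 7.2 MySQL-Cluster-server RPM; a sketch, assuming the installed packages use these base names (actual installed names may carry version and platform suffixes):

    shell> rpm -e MySQL-Cluster-clusterj MySQL-Cluster-extra \
               MySQL-Cluster-management MySQL-Cluster-storage \
               MySQL-Cluster-tools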
NDB Replication: Under certain circumstances, the Rows count in the output of SHOW TABLE STATUS for a replicated slave NDB table could be misreported as many times larger than the result of SELECT COUNT(*) on the same table. (Bug #13440282)
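On an affected slave, the mismatch could be observed by comparing the two statements directly; the table name t1 here is hypothetical:

    mysql> SHOW TABLE STATUS LIKE 't1'\G
    mysql> SELECT COUNT(*) FROM t1;

With the fix, the Rows value for the replicated NDB table should no longer be many times larger than the SELECT COUNT(*) result.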
Following a restart of the SQL node, accessing a table having a BLOB column but no primary key failed with Error 1 (Unknown error code). (Bug #13563280)
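A minimal sketch of a table definition matching this description (names are hypothetical; when no explicit primary key is defined, NDB creates a hidden one internally):

    mysql> CREATE TABLE t1 (c1 INT, c2 BLOB) ENGINE=NDBCLUSTER;
    mysql> SELECT c1 FROM t1;  -- failed with Error 1 after an SQL node restart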
At the beginning of a local checkpoint, each data node marks its local tables with a “to be checkpointed” flag. A failure of the master node during this process could cause either the LCP to hang, or one or more data nodes to be forcibly shut down. (Bug #13436481)
A node failure while an ANALYZE TABLE statement was executing resulted in a hung connection (and the user was not informed of any error that would cause this to happen). (Bug #13416603)
References: See also: Bug #13407848.
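The ANALYZE TABLE scenario above involved nothing more than the standard statistics-gathering statement; the table name is hypothetical:

    mysql> ANALYZE TABLE t1;

Before the fix, a data node failure during its execution left the connection hanging without returning an error to the client.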