MySQL NDB Cluster 7.2 Release Notes
MySQL NDB Cluster 7.2.11 is a new release of NDB Cluster,
incorporating new features in the NDB
storage engine, and fixing recently discovered bugs in previous
MySQL NDB Cluster 7.2 releases.
Obtaining MySQL NDB Cluster 7.2. MySQL NDB Cluster 7.2 source code and binaries can be obtained from https://dev.mysql.com/downloads/cluster/.
This release also incorporates all bug fixes and changes made in previous NDB Cluster releases, as well as all bug fixes and feature changes which were added in mainline MySQL 5.5 through MySQL 5.5.29 (see Changes in MySQL 5.5.29 (2012-12-21, General Availability)).
Following an upgrade to MySQL NDB Cluster 7.2.7 or later, it was not possible to downgrade online again to any previous version, due to a change in that version in the default size (number of LDM threads used) for NDB table hash maps. The fix for this issue makes the size configurable, with the addition of the DefaultHashMapSize configuration parameter.
To retain compatibility with an older release that does not support large hash maps, you can set this parameter in the cluster's config.ini file to the value used in older releases (240) before performing an upgrade, so that the data nodes continue to use smaller hash maps that are compatible with the older release. You can also now employ this parameter in MySQL NDB Cluster 7.0 and MySQL NDB Cluster 7.1 to enable larger hash maps prior to upgrading to MySQL NDB Cluster 7.2. For more information, see the description of the DefaultHashMapSize parameter.
(Bug #14800539)
References: See also: Bug #14645319.
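The downgrade-compatible configuration described above might look like the following config.ini fragment (a sketch only; NoOfReplicas and the surrounding section are illustrative, and DefaultHashMapSize belongs in the data node defaults section):

```ini
# config.ini fragment: keep table hash maps at the pre-7.2.7 size (240)
# so that an online downgrade to an older release remains possible.
[NDBD DEFAULT]
NoOfReplicas=2
DefaultHashMapSize=240
```

Set this before performing the upgrade; once all nodes run a release that supports large hash maps, the parameter can be raised or removed.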
Important Change; NDB Cluster APIs: When checking, as part of evaluating an if predicate, which error codes should be propagated to the application, any error code less than 6000 caused the current row to be skipped, even those codes that should have caused the query to be aborted. In addition, a scan that aborted due to an error from DBTUP when no rows had been sent to the API caused DBLQH to send a SCAN_FRAGCONF signal rather than a SCAN_FRAGREF signal to DBTC. This caused DBTC to time out waiting for a SCAN_FRAGREF signal that was never sent, and the scan was never closed.
As part of this fix, the default ErrorCode value used by NdbInterpretedCode::interpret_exit_nok() has been changed from 899 (Rowid already allocated) to 626 (Tuple did not exist). The old value continues to be supported for backward compatibility. User-defined values in the range 6000-6999 (inclusive) are also now supported. You should also keep in mind that the result of using any other ErrorCode value not mentioned here is not defined or guaranteed.
See also The NDB Communication Protocol, and NDB Kernel Blocks, for more information. (Bug #16176006)
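A minimal sketch of how the changed default interacts with an interpreted program built through the NDB API (the function name build_filter and the elided error handling are illustrative, not part of the API):

```cpp
// Sketch only: builds an interpreted program that rejects the current row.
// With this fix, interpret_exit_nok() called with no argument signals
// error 626 (Tuple did not exist) instead of 899 (Rowid already allocated).
#include <NdbApi.hpp>

void build_filter(NdbInterpretedCode &code)
{
  // Reject the current row with the new default error code (626) ...
  code.interpret_exit_nok();        // equivalent to interpret_exit_nok(626)

  // ... or with an application-defined code; values in the range
  // 6000-6999 inclusive are now supported, as is the legacy 899.
  // code.interpret_exit_nok(6001);

  code.finalise();                  // make the program ready for use
}
```

Per the note above, passing any ErrorCode value other than 626, 899, or one in the 6000-6999 range has undefined results.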
NDB Cluster APIs: The Ndb::computeHash() API method performs a malloc() if no buffer is provided for it to use. However, it was assumed that the memory thus returned would always be suitably aligned, which is not always the case. Now when this method allocates a buffer with malloc(), the buffer is aligned after it is allocated, and before it is used.
(Bug #16484617)
When using tables having more than 64 fragments in a MySQL NDB Cluster where multiple TC threads were configured (on data nodes running ndbmtd, using ThreadConfig), AttrInfo and KeyInfo memory could be freed prematurely, before scans relying on these objects could be completed, leading to a crash of the data node.
(Bug #16402744)
References: See also: Bug #13799800. This issue is a regression of: Bug #14143553.
When started with --initial and an invalid --config-file (-f) option, ndb_mgmd removed the old configuration cache before verifying the configuration file. Now in such cases, ndb_mgmd first checks for the file, and continues with removing the configuration cache only if the configuration file is found and is valid.
(Bug #16299289)
Executing a DUMP 2304
command during a data node restart could cause the data node to
crash with a Pointer too large error.
(Bug #16284258)
Including a table as part of a pushed join should be rejected if there are outer joined tables between the table to be included and the tables with which it is joined; however, the check performed for any such outer joined tables compared the join type against the root of the pushed query, rather than against the common ancestor of the tables being joined. (Bug #16199028)
References: See also: Bug #16198866.
Some queries were handled differently with ndb_join_pushdown enabled, because outer join conditions were not always pruned correctly from joins before they were pushed down.
(Bug #16198866)
References: See also: Bug #16199028.
Data nodes could fail during a system restart when the host ran short of memory, due to signals of the wrong types (ROUTE_ORD and TRANSID_AI_R) being sent to the DBSPJ kernel block.
(Bug #16187976)
Attempting to perform additional operations such as ADD COLUMN as part of an ALTER [ONLINE | OFFLINE] TABLE ... RENAME ... statement is not supported, and such a statement now fails with an ER_NOT_SUPPORTED_YET error.
(Bug #16021021)
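The restriction can be sketched as follows (hypothetical table and column names; one way to achieve the combined effect under this restriction is to issue the operations as separate statements):

```sql
-- Not supported: combining RENAME with another operation such as
-- ADD COLUMN in one statement now fails with ER_NOT_SUPPORTED_YET.
-- ALTER TABLE t1 ADD COLUMN c2 INT, RENAME TO t2;

-- Separate statements avoid the unsupported combination.
ALTER TABLE t1 ADD COLUMN c2 INT;
ALTER TABLE t1 RENAME TO t2;
```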
The mysql.server script exited with an error
if the status
command was executed with
multiple servers running.
(Bug #15852074)
Due to a known issue in the MySQL Server, it is possible to drop the PERFORMANCE_SCHEMA database. (Bug #15831748) In addition, when executed on a MySQL Server acting as a MySQL NDB Cluster SQL node, DROP DATABASE caused this database to be dropped on all SQL nodes in the cluster. Now, when executing a distributed drop of a database, NDB does not delete tables that are local only. This prevents MySQL system databases from being dropped in such cases.
(Bug #14798043)
References: See also: Bug #15831748.
When performing large numbers of DDL statements (100 or more) in succession, adding an index to a table sometimes caused mysqld to crash when it could not find the table in NDB. Now when this problem occurs, the DDL statement should fail with an appropriate error. A workaround in such cases may be to create the table with the index as part of the initial CREATE TABLE, rather than adding the index in a subsequent ALTER TABLE statement.
(Bug #14773491)
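The workaround above can be sketched as follows (hypothetical table, column, and index names):

```sql
-- Workaround: define the index in the initial CREATE TABLE
-- rather than adding it later with ALTER TABLE.
CREATE TABLE t1 (
  id INT NOT NULL PRIMARY KEY,
  val INT,
  INDEX idx_val (val)
) ENGINE=NDB;

-- Avoided by the workaround: adding the index afterward.
-- ALTER TABLE t1 ADD INDEX idx_val (val);
```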
Executing OPTIMIZE TABLE on an NDB table containing TEXT or BLOB columns could sometimes cause mysqld to fail.
(Bug #14725833)
Executing a DUMP 1000
command that contained extra or malformed arguments could lead
to data node failures.
(Bug #14537622)
Exhaustion of LongMessageBuffer memory under heavy load could cause data nodes running ndbmtd to fail.
(Bug #14488185)
The ndb_mgm client HELP command did not show the complete syntax for the REPORT command.