MySQL NDB Cluster 7.2 Release Notes

30 Changes in MySQL NDB Cluster 7.2.9 (5.5.28-ndb-7.2.9) (2012-11-22, General Availability)

Note

MySQL NDB Cluster 7.2.9 was withdrawn shortly after release, due to a problem with primary keys and tables with very many rows that was introduced in this release (Bug #16023068, Bug #67928). Users should upgrade to MySQL NDB Cluster 7.2.10, which fixes this issue.

MySQL NDB Cluster 7.2.9 is a new release of NDB Cluster, incorporating new features in the NDB storage engine, and fixing recently discovered bugs in previous MySQL NDB Cluster 7.2 releases.

Obtaining MySQL NDB Cluster 7.2.  MySQL NDB Cluster 7.2 source code and binaries can be obtained from https://dev.mysql.com/downloads/cluster/.

This release also incorporates all bug fixes and changes made in previous NDB Cluster releases, as well as all bug fixes and feature changes which were added in mainline MySQL 5.5 through MySQL 5.5.28 (see Changes in MySQL 5.5.28 (2012-09-28, General Availability)).

Functionality Added or Changed

  • Important Change; MySQL NDB ClusterJ: A new CMake option WITH_NDB_JAVA is introduced. When this option is enabled, the MySQL NDB Cluster build is configured to include Java support, including support for ClusterJ. If the JDK cannot be found, CMake fails with an error. This option is enabled by default; if you do not wish to compile Java support into MySQL NDB Cluster, you must now set this explicitly when configuring the build, using -DWITH_NDB_JAVA=OFF. (Bug #12379755)
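    For example, a source build can be configured without Java support as follows (a sketch; the source directory and any other CMake options are illustrative):

```shell
# Configure the build with Java and ClusterJ support disabled
cmake . -DWITH_NDB_JAVA=OFF
```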

  • Added three new columns to the transporters table in the ndbinfo database. The remote_address, bytes_sent, and bytes_received columns help to provide an overview of data transfer across the transporter links in a MySQL NDB Cluster. This information can be useful in verifying system balance, partitioning, and front-end server load balancing; it may also be of help when diagnosing network problems arising from link saturation, hardware faults, or other causes. (Bug #14685458)
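    As a quick illustration, the new columns can be inspected with a query along the following lines (output values depend on the running cluster):

```sql
-- Per-transporter traffic overview; useful for checking link balance
SELECT node_id, remote_node_id, remote_address,
       bytes_sent, bytes_received
  FROM ndbinfo.transporters;
```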

  • Data node logs now provide tracking information about arbitrations, including which nodes have assumed the arbitrator role and at what times. (Bug #11761263, Bug #53736)

Bugs Fixed

  • NDB Disk Data: Concurrent DML and DDL operations against the same NDB table could cause mysqld to crash. (Bug #14577463)

  • NDB Replication: When the value of ndb_log_apply_status was set to 1, it was theoretically possible for the ndb_apply_status table's server_id column not to be propagated correctly. (Bug #14772503)

  • NDB Replication: Transactions originating on a replication master are applied on slaves as if using AO_AbortError, but transactions replayed from a binary log previously were not. Now transactions being replayed from a log are handled in the same way as those coming from a live replication master.

    See NdbOperation::AbortOption, for more information. (Bug #14615095)

  • A slow filesystem during local checkpointing could exert undue pressure on DBDIH kernel block file page buffers, which in turn could lead to a data node crash when these were exhausted. This fix limits the number of table definition updates that DBDIH can issue concurrently. (Bug #14828998)

  • The management server process, when started with --config-cache=FALSE, could sometimes hang during shutdown. (Bug #14730537)

  • The output from ndb_config --configinfo now contains the same information as that from ndb_config --configinfo --xml, including explicit indicators for parameters that do not require restarting a data node with --initial to take effect. In addition, ndb_config incorrectly indicated that the LogLevelCheckpoint data node configuration parameter requires an initial node restart to take effect, when in fact it does not; this error was also present in the MySQL NDB Cluster documentation, where it has now been corrected. (Bug #14671934)
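    For example, either of the following invocations now reports, for each parameter, whether an initial restart is required (connection options are omitted for brevity):

```shell
ndb_config --configinfo
ndb_config --configinfo --xml
```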

  • Attempting to restart more than 5 data nodes simultaneously could cause the cluster to crash. (Bug #14647210)

  • In MySQL NDB Cluster 7.2.7, the size of the hash map was increased from 240 to 3840. However, when upgrading a MySQL NDB Cluster from a previous release, existing tables could not use, or be modified online to take advantage of, the new size, even when the number of fragments was increased by (for example) adding new data nodes to the cluster. Now in such cases, following an upgrade, and once the number of fragments has been increased, you can run ALTER TABLE ... REORGANIZE PARTITION on tables that were created in MySQL NDB Cluster 7.2.6 or earlier, after which they can use the larger hash map size. (Bug #14645319)
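    A minimal sketch of the procedure, assuming a hypothetical NDB table t1 created in MySQL NDB Cluster 7.2.6 or earlier, and that new data nodes have already been added:

```sql
-- Redistribute t1 across the increased number of fragments,
-- after which it can use the larger hash map size
ALTER TABLE t1 REORGANIZE PARTITION;
```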

  • Concurrent ALTER TABLE with other DML statements on the same NDB table returned Got error -1 'Unknown error code' from NDBCLUSTER. (Bug #14578595)

  • CPU consumption peaked several seconds after the forced termination of an NDB client application because the DBTC kernel block waited in a busy loop for any open transactions owned by the disconnected API client to be terminated, without pausing between checks for the correct state. (Bug #14550056)

  • Receiver threads could wait unnecessarily to process incomplete signals, greatly reducing performance of ndbmtd. (Bug #14525521)

  • On platforms where epoll was not available, setting multiple receiver threads with the ThreadConfig parameter caused ndbmtd to fail. (Bug #14524939)

  • Setting BackupMaxWriteSize to a very large value as compared with DiskCheckpointSpeed caused excessive writes to disk and CPU usage. (Bug #14472648)

  • Added the --connect-retries and --connect-delay startup options for ndbd and ndbmtd. --connect-retries (default 12) controls how many times the data node tries to connect to a management server before giving up; setting it to -1 means that the data node never stops trying to make contact. --connect-delay sets the number of seconds to wait between retries; the default is 5. (Bug #14329309, Bug #66550)
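    For example, to have a data node retry the connection to the management server ten times at three-second intervals (the values shown are illustrative; other startup options are omitted):

```shell
ndbmtd --connect-retries=10 --connect-delay=3
```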

  • Following a failed ALTER TABLE ... REORGANIZE PARTITION statement, a subsequent execution of this statement after adding new data nodes caused a failure in the DBDIH kernel block which led to an unplanned shutdown of the cluster.

    DUMP code 7019 was added as part of this fix. It can be used to obtain diagnostic information relating to a failed data node. See DUMP 7019, for more information. (Bug #14220269)

    References: See also: Bug #18550318.
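    A sketch of issuing the new DUMP code from the management client; whether additional arguments are needed in a given situation is described under DUMP 7019:

```shell
ndb_mgm -e "ALL DUMP 7019"
```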

  • It was possible in some cases for two transactions to try to drop tables at the same time. If the master node failed while one of these operations was still pending, this could lead either to additional node failures (and cluster shutdown) or to new dictionary operations being blocked. This issue is addressed by ensuring that the master rejects requests to start or stop a transaction while there are outstanding dictionary takeover requests. In addition, table-drop operations now correctly signal when they are complete; previously, the DBDICT kernel block could not confirm node takeovers while such operations were still marked as pending completion. (Bug #14190114)

  • The DBSPJ kernel block had no information about which tables or indexes actually existed, or which had been modified or dropped, since execution of a given query began. Thus, DBSPJ might submit dictionary requests for nonexistent tables or versions of tables, which could cause a crash in the DBDIH kernel block.

    This fix introduces a simplified dictionary into the DBSPJ kernel block such that DBSPJ can now check reliably for the existence of a particular table or version of a table on which it is about to request an operation. (Bug #14103195)

  • Previously, it was possible to store a maximum of 46137488 rows in a single MySQL NDB Cluster partition. This limitation has now been removed. (Bug #13844405, Bug #14000373)

    References: See also: Bug #13436216.

  • When using ndbmtd and performing joins, data nodes could fail when ndbmtd processes were configured to use a large number of local query handler threads (as set by the ThreadConfig configuration parameter), when the tables accessed by the join had a large number of partitions, or both. (Bug #13799800, Bug #14143553)