Big Data Service 3.0.27 adds new features

  • Services: Big Data
  • Release Date: May 21, 2024

To view the ODH version, JDK version, and OS version for this release, see Big Data Service Release and Patch Versions.

Big Data Service 3.0.27 has the following new features and updates:

Bug fixes:

  • NameNode start failure during the cleanup of expired delegation tokens from an old Kerberos realm. Reported in the Hadoop community: https://issues.apache.org/jira/browse/HDFS-17181.
  • Kafka service shutdown on an SSL-enabled secure Kafka cluster after keytabs are regenerated.
  • Oozie job and service check failures after adding a third NameNode/ResourceManager (NN/RM).
  • Stale alert notifications persisted in the Ambari UI after a Hive server upgrade in ODH, even though the Hive server itself was healthy, which could mislead users.
  • Bug in Hive that caused failures when running the 'ANALYZE TABLE table_name' query to collect table statistics. The issue was caused by the Thrift MaxMessageSize limit. The maximum Thrift message size can be configured with the 'hive.thrift.client.max.message.size' property; by default it is 1 GB.
  • Ranger synchronization failures for users not in the Hadoop group while running Spark jobs. 
  • Hive Metastore Server (HMS) logs missing the thread ID because of thread-safety issues.
  • Import functionality fixed so that password/secret values aren't imported.
  • Certificate Utility script to support URL validation through OpenSSL.
  • Added support to view more detailed ODH patch logs in the Console UI.
  • Removed public groups from the default Kafka Ranger policies.
  • Incorrect Ambari alert status reporting in ODH 2.0.
  • Backported OOZIE-3578: MapReduce counters can't be used over 120.
  • Ambari alert for missing Hive Kerberos properties on a Secure Spark Profile cluster, which prevented users from adding any services until the alert was resolved.
  • Flink applications not getting auto-killed after service check failure.
  • If DCAT is configured as an external metastore, it can't access the metastore after the cluster is patched. The fix adds the missing DMS libraries required to access metastores on patched clusters.
  • Logging improvements.
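
The 'ANALYZE TABLE' fix above mentions that the Thrift message-size limit is configurable through 'hive.thrift.client.max.message.size'. As a sketch only (assuming the property is set in hive-site.xml and accepts a size string with a unit suffix; the value shown is an illustration, not a recommendation):

```xml
<!-- hive-site.xml: raise the Thrift client maximum message size.
     Property name is from the release note above; the value format
     and the value itself are illustrative assumptions
     (the documented default is 1 GB). -->
<property>
  <name>hive.thrift.client.max.message.size</name>
  <value>2gb</value>
</property>
```

Clusters managed through Ambari typically apply such a change in the Hive service configuration rather than by editing hive-site.xml directly, followed by a restart of the affected Hive components.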