4 Upgrading cnDBTier

This chapter describes the procedure to upgrade an existing cnDBTier deployment.

Note:

  • The OCCNE_NAMESPACE variable in the upgrade procedures must be set to the cnDBTier namespace. Before running any command that contains the OCCNE_NAMESPACE variable, ensure that you have set this variable to the cnDBTier namespace as stated in the following code block:
    export OCCNE_NAMESPACE=<namespace>

    where, <namespace> is the cnDBTier namespace.

  • The namespace name "occne-cndbtier" given in the upgrade procedures is only an example. Ensure that you configure the namespace name according to your environment.
  • cnDBTier 25.2.100 supports Helm 3.12.3 and 3.13.2. Ensure that you upgrade Helm to a supported version.
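The Helm version check can be scripted as a quick pre-upgrade validation. The following is an illustrative sketch: the hard-coded `current` value stands in for the output of the helm CLI, as shown in the comment.

```shell
# Illustrative pre-check: verify that the installed Helm version is one that
# cnDBTier 25.2.100 supports. In a live environment, derive "current" from
# the helm CLI instead of hard-coding it, for example:
#   current=$(helm version --template '{{.Version}}' | tr -d 'v')
supported="3.12.3 3.13.2"
current="3.13.2"
case " $supported " in
  *" $current "*) echo "Helm $current is supported" ;;
  *)              echo "Helm $current is NOT supported; upgrade Helm first" ;;
esac
```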

4.1 Supported Upgrade Paths

The following table provides the upgrade paths that are supported by cnDBTier Release 25.2.100.

Table 4-1 Supported Upgrade Paths

Source Release    Target Release
25.2.1xx          25.2.100
25.1.2xx          25.2.100
25.1.1xx          25.2.100

4.2 Upgrading cnDBTier from Non-TLS to TLS Enabled Version (Replication)

This section describes the procedure to upgrade cnDBTier clusters from a version where TLS is not enabled for replication to a version where TLS is enabled for replication.

Note:

  • In this procedure, the cnDBTier sites are upgraded twice (in step 4 and step 7). Ensure that you follow this procedure as-is to upgrade from a non-TLS version to a TLS enabled version.
  • The namespace name "occne-cndbtier" given in this procedure is only an example. Ensure that you configure the namespace name according to your environment.
  1. Create the necessary secrets in all the cnDBTier sites by following step 7 of the Creating Secrets procedure.
  2. Ensure that TLS is enabled in the custom_values.yaml file for all cnDBTier sites:
    global:
      tls:    
        enable: true
  3. Provide all the necessary certificates, such as the CA certificate, client certificate, and server certificate, for the respective ndbmysqld pods in the custom_values.yaml file for all cnDBTier sites:

    Note:

    Set the TLS mode to NONE for all the cnDBTier sites as seen in the following custom_values.yaml file.
    tls:
      enable: true
      caCertificate: "<ca certificate file name>"
      tlsversion: "TLSv1.3"
      tlsMode: "NONE"
      ciphers:
        - TLS_AES_128_GCM_SHA256
        - TLS_AES_256_GCM_SHA384
        - TLS_CHACHA20_POLY1305_SHA256
        - TLS_AES_128_CCM_SHA256
      certificates:
        - name: ndbmysqld-0
          serverCertificate: "<server certificate name>"
          serverCertificateKey: "<server key name>"     
          clientCertificate: "<client certificate name>"
          clientCertificateKey: "<client key name>"
        - name: ndbmysqld-1
          serverCertificate: "<server certificate name>"
          serverCertificateKey: "<server key name>"     
          clientCertificate: "<client certificate name>"
          clientCertificateKey: "<client key name>"
        ...
    For example:
    tls:
      enable: true
      caCertificate: "combine-ca.pem"
      tlsversion: "TLSv1.3"
      tlsMode: "NONE"
      ciphers:
        - TLS_AES_128_GCM_SHA256
        - TLS_AES_256_GCM_SHA384
        - TLS_CHACHA20_POLY1305_SHA256
        - TLS_AES_128_CCM_SHA256
      certificates:
        - name: ndbmysqld-0
          serverCertificate: "server1-cert.pem"
          serverCertificateKey: "server1-key.pem"     
          clientCertificate: "client1-cert.pem"
          clientCertificateKey: "client1-key.pem"
        - name: ndbmysqld-1
          serverCertificate: "server1-cert.pem"
          serverCertificateKey: "server1-key.pem"     
          clientCertificate: "client1-cert.pem"
          clientCertificateKey: "client1-key.pem"
        ...
  4. Perform the Upgrading cnDBTier Clusters procedure to upgrade all the cnDBTier sites one after the other using the custom_values.yaml file that you updated in the previous steps.
  5. After upgrading each site, run the following command on the site to ensure that the replication is UP:
    $ kubectl -n <namespace> exec -it ndbmysqld-0 -- curl http://mysql-cluster-db-monitor-svc.<namespace>:8080/db-tier/status/replication/realtime

    where, <namespace> is the namespace of the cnDBTier cluster.

    The value of replicationStatus in the output indicates if the local site is able to replicate data from that remote site:
    • "UP": Indicates that the local site is able to replicate data from that remote site.
    • "DOWN": Indicates that the local site is not able to replicate data from the respective remote site.
    For example, run the following command to check the georeplication status of cnDBTier cluster2 configured with other remote sites:
    $ kubectl -n cluster2 exec -it ndbmysqld-0 -- curl http://mysql-cluster-db-monitor-svc.cluster2:8080/db-tier/status/replication/realtime
    
    Sample output:
    [
      {
        "localSiteName": "cluster2",
        "remoteSiteName": "cluster1",
        "replicationStatus": "UP",
        "secondsBehindRemote": 0,
        "replicationGroupDelay": [
          {
            "replchannel_group_id": "1",
            "secondsBehindRemote": 0
          }
        ]
      },
      {
        "localSiteName": "cluster2",
        "remoteSiteName": "cluster3",
        "replicationStatus": "UP",
        "secondsBehindRemote": 0,
        "replicationGroupDelay": [
          {
            "replchannel_group_id": "1",
            "secondsBehindRemote": 0
          }
        ]
      },
      {
        "localSiteName": "cluster2",
        "remoteSiteName": "cluster4",
        "replicationStatus": "UP",
        "secondsBehindRemote": 0,
        "replicationGroupDelay": [
          {
            "replchannel_group_id": "1",
            "secondsBehindRemote": 0
          }
        ]
      }
    ]

    In the sample output, the replicationStatus is "UP" for the localSiteName cluster2 with remoteSiteName cluster1, cluster3, and cluster4. This indicates that the local site cluster2 is able to replicate data from the remote sites cluster1, cluster3, and cluster4.
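    The UP/DOWN check above can also be scripted. The following sketch operates on an embedded copy of the sample output; in a live check, substitute the JSON captured from the curl command shown above.

```shell
# Illustration: fail if any channel in the replication status JSON reports
# DOWN. The embedded JSON stands in for the curl output shown above.
status='[
  {"localSiteName": "cluster2", "remoteSiteName": "cluster1", "replicationStatus": "UP"},
  {"localSiteName": "cluster2", "remoteSiteName": "cluster3", "replicationStatus": "UP"},
  {"localSiteName": "cluster2", "remoteSiteName": "cluster4", "replicationStatus": "UP"}
]'
down=$(printf '%s\n' "$status" | grep -c '"replicationStatus": "DOWN"')
if [ "$down" -eq 0 ]; then
  echo "replication is UP on all channels"
else
  echo "replication is DOWN on $down channel(s)"
fi
```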

  6. In the custom_values.yaml file, set tlsMode to VERIFY_CA or VERIFY_IDENTITY, depending on your requirement, for all the cnDBTier sites. This configuration ensures that the clients use an encrypted connection and verify the server against the CA certificate:
    • VERIFY_CA instructs the client to check that the server's certificate is valid.
    • VERIFY_IDENTITY instructs the client to check that the server's certificate is valid and that the host name used by the client matches the identity in the server's certificate.
    tls:
      enable: true
      caCertificate: "<ca certificate file name>"
      tlsversion: "TLSv1.3"
      tlsMode: "VERIFY_CA/VERIFY_IDENTITY"
      ciphers:
        - TLS_AES_128_GCM_SHA256
        - TLS_AES_256_GCM_SHA384
        - TLS_CHACHA20_POLY1305_SHA256
        - TLS_AES_128_CCM_SHA256
      certificates:
        - name: ndbmysqld-0
          serverCertificate: "<server certificate name>"
          serverCertificateKey: "<server key name>"     
          clientCertificate: "<client certificate name>"
          clientCertificateKey: "<client key name>"
        - name: ndbmysqld-1
          serverCertificate: "<server certificate name>"
          serverCertificateKey: "<server key name>"     
          clientCertificate: "<client certificate name>"
          clientCertificateKey: "<client key name>"
        ...
    For example:
    tls:
      enable: true
      caCertificate: "combine-ca.pem"
      tlsversion: "TLSv1.3"
      tlsMode: "VERIFY_CA"
      ciphers:
        - TLS_AES_128_GCM_SHA256
        - TLS_AES_256_GCM_SHA384
        - TLS_CHACHA20_POLY1305_SHA256
        - TLS_AES_128_CCM_SHA256
      certificates:
        - name: ndbmysqld-0
          serverCertificate: "server1-cert.pem"
          serverCertificateKey: "server1-key.pem"     
          clientCertificate: "client1-cert.pem"
          clientCertificateKey: "client1-key.pem"
        - name: ndbmysqld-1
          serverCertificate: "server1-cert.pem"
          serverCertificateKey: "server1-key.pem"     
          clientCertificate: "client1-cert.pem"
          clientCertificateKey: "client1-key.pem"
        ...
  7. Perform the Upgrading cnDBTier Clusters procedure to upgrade all the cnDBTier sites one after the other using the custom_values.yaml file that you updated in the previous step.
  8. After upgrading each site, run the following command on the site to ensure that the replication is UP:
    $ kubectl -n <namespace> exec -it ndbmysqld-0 -- curl http://mysql-cluster-db-monitor-svc.<namespace>:8080/db-tier/status/replication/realtime

    where, <namespace> is the namespace of the cnDBTier cluster.

    The value of replicationStatus in the output indicates if the local site is able to replicate data from that remote site:
    • "UP": Indicates that the local site is able to replicate data from that remote site.
    • "DOWN": Indicates that the local site is not able to replicate data from the respective remote site.
    For example, run the following command to check the georeplication status of cnDBTier cluster2 configured with other remote sites:
    $ kubectl -n cluster2 exec -it ndbmysqld-0 -- curl http://mysql-cluster-db-monitor-svc.cluster2:8080/db-tier/status/replication/realtime
    Sample output:
    [
      {
        "localSiteName": "cluster2",
        "remoteSiteName": "cluster1",
        "replicationStatus": "UP",
        "secondsBehindRemote": 0,
        "replicationGroupDelay": [
          {
            "replchannel_group_id": "1",
            "secondsBehindRemote": 0
          }
        ]
      },
      {
        "localSiteName": "cluster2",
        "remoteSiteName": "cluster3",
        "replicationStatus": "UP",
        "secondsBehindRemote": 0,
        "replicationGroupDelay": [
          {
            "replchannel_group_id": "1",
            "secondsBehindRemote": 0
          }
        ]
      },
      {
        "localSiteName": "cluster2",
        "remoteSiteName": "cluster4",
        "replicationStatus": "UP",
        "secondsBehindRemote": 0,
        "replicationGroupDelay": [
          {
            "replchannel_group_id": "1",
            "secondsBehindRemote": 0
          }
        ]
      }
    ]
  9. Run the hooks.sh script with the --add-ssltype flag on all cnDBTier sites to set the type of "occnerepluser" as TLS:

    Note:

    Update the values of the environment variables in the following code block as per your cluster.
    export OCCNE_NAMESPACE="occne-cndbtier"
    export MYSQL_CONNECTIVITY_SERVICE="mysql-connectivity-service"
    export MYSQL_USERNAME="occneuser"
    export MYSQL_ACCESS_KEY="<password for the user occneuser>"
    export OCCNE_REPL_USER_NAME="occnerepluser"
    export MYSQL_CMD="kubectl -n <namespace> exec <ndbmysqld-0 pod name> -- mysql --binary-as-hex=0 --show-warnings"
      
    occndbtier/files/hooks.sh --add-ssltype
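As an optional follow-up check, the TLS requirement of the replication user can be inspected in the mysql.user table (the assumption here is that --add-ssltype applies REQUIRE X509 to the user, which surfaces in the ssl_type column). The sketch below only assembles the query; run it through the same $MYSQL_CMD configured above.

```shell
# Hedged verification sketch: build the query that shows the TLS requirement
# of the replication user. Running it via $MYSQL_CMD inside the cluster is
# expected to show ssl_type = X509 after --add-ssltype (assumption).
OCCNE_REPL_USER_NAME="occnerepluser"
QUERY="SELECT user, ssl_type FROM mysql.user WHERE user='${OCCNE_REPL_USER_NAME}';"
echo "$QUERY"
# ${MYSQL_CMD} -e "$QUERY"
```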

4.3 Upgrading cnDBTier from TLS Enabled to Non-TLS Version (Replication)

This section describes the procedure to upgrade cnDBTier clusters from a version where TLS is enabled for replication to a version where TLS is not enabled for replication.

Note:

  • In this procedure, the cnDBTier sites are upgraded twice (in step 3 and step 6). Ensure that you follow this procedure as-is to upgrade from a TLS enabled version to a non-TLS version.
  • The namespace name "occne-cndbtier" given in this procedure is only an example. Ensure that you configure the namespace name according to your environment.
  1. Run the hooks.sh script with the --remove-ssltype flag on all cnDBTier sites to set the type of "occnerepluser" as non-TLS:

    Note:

    Update the values of the environment variables in the following code block as per your cluster.
    export OCCNE_NAMESPACE="occne-cndbtier"
    export MYSQL_CONNECTIVITY_SERVICE="mysql-connectivity-service"
    export MYSQL_USERNAME="occneuser"
    export MYSQL_ACCESS_KEY="<password for the user occneuser>"
    export OCCNE_REPL_USER_NAME="occnerepluser"
    export MYSQL_CMD="kubectl -n <namespace> exec <ndbmysqld-0 pod name> -- mysql --binary-as-hex=0 --show-warnings"
      
    occndbtier/files/hooks.sh --remove-ssltype
  2. In the custom_values.yaml file, set tlsMode to NONE for all the cnDBTier sites:
    tls:
      enable: true
      caCertificate: "<ca certificate file name>"
      tlsversion: "TLSv1.3"
      tlsMode: "NONE"
      ciphers:
        - TLS_AES_128_GCM_SHA256
        - TLS_AES_256_GCM_SHA384
        - TLS_CHACHA20_POLY1305_SHA256
        - TLS_AES_128_CCM_SHA256
      certificates:
        - name: ndbmysqld-0
          serverCertificate: "<server certificate name>"
          serverCertificateKey: "<server key name>"     
          clientCertificate: "<client certificate name>"
          clientCertificateKey: "<client key name>"
        - name: ndbmysqld-1
          serverCertificate: "<server certificate name>"
          serverCertificateKey: "<server key name>"     
          clientCertificate: "<client certificate name>"
          clientCertificateKey: "<client key name>"
        ...
    For example:
    tls:
      enable: true
      caCertificate: "combine-ca.pem"
      tlsversion: "TLSv1.3"
      tlsMode: "NONE"
      ciphers:
        - TLS_AES_128_GCM_SHA256
        - TLS_AES_256_GCM_SHA384
        - TLS_CHACHA20_POLY1305_SHA256
        - TLS_AES_128_CCM_SHA256
      certificates:
        - name: ndbmysqld-0
          serverCertificate: "server1-cert.pem"
          serverCertificateKey: "server1-key.pem"     
          clientCertificate: "client1-cert.pem"
          clientCertificateKey: "client1-key.pem"
        - name: ndbmysqld-1
          serverCertificate: "server1-cert.pem"
          serverCertificateKey: "server1-key.pem"     
          clientCertificate: "client1-cert.pem"
          clientCertificateKey: "client1-key.pem"
        ...
  3. Perform the Upgrading cnDBTier Clusters procedure to upgrade all the cnDBTier sites one after the other using the custom_values.yaml file that you updated in the previous steps.
  4. After upgrading each site, run the following command on the site to ensure that the replication is UP:
    $ kubectl -n <namespace> exec -it ndbmysqld-0 -- curl http://mysql-cluster-db-monitor-svc.<namespace>:8080/db-tier/status/replication/realtime

    where, <namespace> is the namespace of the cnDBTier cluster.

    The value of replicationStatus in the output indicates if the local site is able to replicate data from that remote site:
    • "UP": Indicates that the local site is able to replicate data from that remote site.
    • "DOWN": Indicates that the local site is not able to replicate data from the respective remote site.
    For example, run the following command to check the georeplication status of cnDBTier cluster2 configured with other remote sites:
    $ kubectl -n cluster2 exec -it ndbmysqld-0 -- curl http://mysql-cluster-db-monitor-svc.cluster2:8080/db-tier/status/replication/realtime
    
    Sample output:
    [
      {
        "localSiteName": "cluster2",
        "remoteSiteName": "cluster1",
        "replicationStatus": "UP",
        "secondsBehindRemote": 0,
        "replicationGroupDelay": [
          {
            "replchannel_group_id": "1",
            "secondsBehindRemote": 0
          }
        ]
      },
      {
        "localSiteName": "cluster2",
        "remoteSiteName": "cluster3",
        "replicationStatus": "UP",
        "secondsBehindRemote": 0,
        "replicationGroupDelay": [
          {
            "replchannel_group_id": "1",
            "secondsBehindRemote": 0
          }
        ]
      },
      {
        "localSiteName": "cluster2",
        "remoteSiteName": "cluster4",
        "replicationStatus": "UP",
        "secondsBehindRemote": 0,
        "replicationGroupDelay": [
          {
            "replchannel_group_id": "1",
            "secondsBehindRemote": 0
          }
        ]
      }
    ]

    In the sample output, the replicationStatus is "UP" for the localSiteName cluster2 with remoteSiteName cluster1, cluster3, and cluster4. This indicates that the local site cluster2 is able to replicate data from the remote sites cluster1, cluster3, and cluster4.

  5. Ensure that TLS is disabled in the custom_values.yaml file for all cnDBTier sites:
    global:
      tls:    
        enable: false
  6. Perform the Upgrading cnDBTier Clusters procedure to upgrade all the cnDBTier sites one after the other using the custom_values.yaml file that you updated in the previous step.
  7. After upgrading each site, run the following command on the site to ensure that the replication is UP:
    $ kubectl -n <namespace> exec -it ndbmysqld-0 -- curl http://mysql-cluster-db-monitor-svc.<namespace>:8080/db-tier/status/replication/realtime

    where, <namespace> is the namespace of the cnDBTier cluster.

    The value of replicationStatus in the output indicates if the local site is able to replicate data from that remote site:
    • "UP": Indicates that the local site is able to replicate data from that remote site.
    • "DOWN": Indicates that the local site is not able to replicate data from the respective remote site.

    For example output, see Step 4.

4.4 Upgrading cnDBTier from Non-TLS to TLS Enabled Version (NF Communication)

This section describes the procedure to upgrade cnDBTier clusters from a version where TLS is not enabled in application SQL pods for NF communication to a version where TLS is enabled in application SQL pods for NF communication.

  1. Create the necessary secrets in all the cnDBTier sites by following step 8 of the Creating Secrets procedure.
  2. Ensure that /global/ndbappTLS/enable is set to true in the custom_values.yaml file for all cnDBTier sites. This indicates that TLS is enabled in application SQL pods for NF communication:
    global:
      ndbappTLS:    
        enable: true
  3. Provide the necessary certificates by configuring the caCertificate, serverCertificate, and serverCertificateKey parameters in the custom_values.yaml file for the respective application SQL pods of all cnDBTier sites:
    ndbappTLS:
      enable: true
      caSecret: cndbtier-ndbapp-trust-store-secret
      serverSecret: cndbtier-ndbapp-server-secret
      tlsversion: "TLSv1.3"
      caCertificate: "<caCertificate file name>"
      serverCertificate: "<serverCertificate file name>"
      serverCertificateKey: "<serverCertificateKey file name>"
    where,
    • <caCertificate file name> is the name of the file containing the CA certificate.
    • <serverCertificate file name> is the name of the file containing the server certificate.
    • <serverCertificateKey file name> is the name of the file containing the server certificate key.
    For example:
    ndbappTLS:
      enable: true
      caSecret: cndbtier-ndbapp-trust-store-secret
      serverSecret: cndbtier-ndbapp-server-secret
      tlsversion: "TLSv1.3"
      caCertificate: "combine-ca.pem"
      serverCertificate: "server1-cert.pem"
      serverCertificateKey: "server1-key.pem"
  4. Perform the Upgrading cnDBTier Clusters procedure to upgrade all the cnDBTier sites one after the other using the custom_values.yaml file that you updated in the previous steps.
  5. Ensure that the NF applications connect to cnDBTier using TLS.
  6. At this stage, cnDBTier is upgraded and the application SQL pods are configured with TLS certificates. However, the application SQL pods still allow non-TLS communication. To restrict communication to TLS, set the mode of the NF user to X509 so that any NF using this user must connect over TLS. After the NF user is created, perform the following steps to set the mode of the NF user to X509.

    Note:

    Before performing this step, ensure that the NF is ready to communicate with cnDBTier using TLS.
    1. Log in to the ndbappmysqld pod. Enter the password when prompted.
      $ kubectl -n  <namespace of cnDBTier Cluster> exec -it ndbappmysqld-0 -- mysql -h 127.0.0.1 -uroot -p
      Example:
      $ kubectl -n cluster1 exec -it ndbappmysqld-0 -- mysql -h 127.0.0.1 -uroot -p
      Sample output:
      Enter Password:
    2. Run the following command to check for NF-specific user account. If there are no NF-specific user accounts, create new user accounts:
      $ mysql> select user, host  from mysql.user;
      Example:
      +------------------+-----------+
      | user             | host      |
      +------------------+-----------+
      | occnerepluser    | %         |
      | occneuser        | %         |
      | healthchecker    | localhost |
      | mysql.infoschema | localhost |
      | mysql.session    | localhost |
      | mysql.sys        | localhost |
      | root             | localhost |
      | nfuser           | %         |
      +------------------+-----------+
      8 rows in set (0.00 sec)
    3. Before altering the user and granting privileges, run the following command to turn off binlogging on one of the ndbappmysqld pods:
      $ mysql> SET sql_log_bin = OFF;
    4. Run the following commands to alter the user and grant the privileges:
      $ mysql> ALTER USER '<USERNAME>'@'%' REQUIRE X509;
      $ mysql> FLUSH PRIVILEGES;
    5. Turn on binlogging on the same ndbappmysqld pod before you exit the session:
      $ mysql> SET sql_log_bin = ON;
    6. Exit the session:
      $ mysql> exit;
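The SQL sequence in the sub-steps above can be collected into a single snippet per user. In this sketch, "nfuser" is a placeholder NF user name taken from the sample output; feed the generated statements into the mysql session opened in sub-step 1.

```shell
# Sketch: emit the per-user SQL from the steps above in one block.
# "nfuser" is a placeholder NF user name from the sample output.
USERNAME="nfuser"
cat <<EOF
SET sql_log_bin = OFF;
ALTER USER '${USERNAME}'@'%' REQUIRE X509;
FLUSH PRIVILEGES;
SET sql_log_bin = ON;
EOF
```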

4.5 Upgrading cnDBTier from TLS Enabled to Non-TLS Version (NF Communication)

This section describes the procedure to upgrade cnDBTier clusters from a version where TLS is enabled in application SQL pods for NF communication to a version where TLS is not enabled in application SQL pods for NF communication.

  1. Perform the following steps to set the TLS requirement of the NF user to NONE so that the application SQL pods of cnDBTier accept both TLS and non-TLS communication from the NF that uses this user:
    1. Log in to the ndbappmysqld pod. Enter the password when prompted.
      $ kubectl -n  <namespace of cnDBTier Cluster> exec -it ndbappmysqld-0 -- mysql -h 127.0.0.1 -uroot -p
      Example:
      $ kubectl -n cluster1 exec -it ndbappmysqld-0 -- mysql -h 127.0.0.1 -uroot -p
      Sample output:
      Enter Password:
    2. Run the following command to check for NF-specific user account:
      $ mysql> select user, host  from mysql.user;
      Example:
      +------------------+-----------+
      | user             | host      |
      +------------------+-----------+
      | occnerepluser    | %         |
      | occneuser        | %         |
      | healthchecker    | localhost |
      | mysql.infoschema | localhost |
      | mysql.session    | localhost |
      | mysql.sys        | localhost |
      | root             | localhost |
      | nfuser           | %         |
      +------------------+-----------+
      8 rows in set (0.00 sec)
    3. Before altering the user, run the following command to turn off binlogging on one of the ndbappmysqld pods:
      $ mysql> SET sql_log_bin = OFF;
    4. Run the following commands to alter the user's TLS requirement to NONE:
      $ mysql> ALTER USER '<USERNAME>'@'%' REQUIRE NONE;
      $ mysql> FLUSH PRIVILEGES;
    5. Turn on binlogging on the same ndbappmysqld pod before you exit the session:
      $ mysql> SET sql_log_bin = ON;
    6. Exit the session:
      $ mysql> exit;
  2. Ensure that the NF can communicate with cnDBTier using both TLS and non-TLS. After cnDBTier is upgraded to a non-TLS version, the NF must use non-TLS communication.
  3. Ensure that /global/ndbappTLS/enable is set to false in the custom_values.yaml file for all cnDBTier sites. This indicates that TLS is disabled in application SQL pods for NF communication:
    global:
      ndbappTLS:    
        enable: false
  4. Perform the Upgrading cnDBTier Clusters procedure to upgrade all the cnDBTier sites one after the other using the custom_values.yaml file that you updated in the previous step.

4.6 Upgrading cnDBTier HTTPS Disabled to HTTPS Enabled Version

This section describes the procedure to upgrade cnDBTier from an HTTPS disabled version to an HTTPS enabled version.

Note:

  • While enabling dual protocol, do not assign the same port for both service.httpPort and service.httpsPort.

    For more information about local site and remote site port configuration parameters, see the Global Parameters section.

  • When upgrading from a previous version to the current version, ensure that the value of the service.httpPort parameter is the same as the value of the service.port parameter in the earlier version.

Perform the following steps to enable HTTPS:

  1. Create all the necessary secrets in the cnDBTier site by following step 7 of the Creating Secrets procedure.
  2. Enable the dual protocol by setting the supportDualProtocol parameter to true in the custom_values.yaml file for all the cnDBTier sites:

    Sample:

    global:
      https:
        enable: false
        supportDualProtocol: true
  3. Perform the Upgrading cnDBTier Clusters procedure to upgrade all the cnDBTier sites one after the other using the custom_values.yaml file that you updated in the previous steps.

    Result: cnDBTier is upgraded with dual-protocol support, which allows connections over both HTTP and HTTPS on the configured service ports.

    Note:

    At this stage, the HTTP protocol is still used for the REST APIs across the cnDBTier sites, because HTTPS is not yet enabled.
  4. Enable HTTPS in the custom_values.yaml file for all the cnDBTier sites as shown below.

    Note:

    If HTTPS is enabled, update the following ports in the cnDBTier site where HTTPS is configured. For more information, see the Customizing cnDBTier section.
    • local site port to service.httpsPort
    • remote site port to service.httpsPort

    To enable HTTPS, set the enable parameter under https to true in the custom_values.yaml file as shown below:

    https:
      enable: true
      supportDualProtocol: true
  5. Upgrade the cnDBTier sites one by one with the updated custom_values.yaml file. Follow the upgrade procedure as per the Upgrading cnDBTier Clusters section.

    Result: cnDBTier is upgraded with dual-protocol support and HTTPS enabled, allowing it to use both HTTP and HTTPS. From this point, the HTTPS protocol is used for REST calls across the cnDBTier sites.

  6. Disable the dual-protocol support in the next upgrade.

    Note:

    This step removes the HTTP port so that only HTTPS connection is allowed in cnDBTier. This step can be performed at any phase of the upgrade.
      https:
        enable: true
        supportDualProtocol: false 
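Putting the port note together with steps 4 and 5, a dual-protocol configuration might look as follows. The port values and the placement of service under global are illustrative assumptions; the only firm rule from the note above is that the two ports must differ:

```yaml
global:
  https:
    enable: true
    supportDualProtocol: true
  # Illustrative port layout (values are assumptions): the two ports must differ.
  service:
    httpPort: 8080
    httpsPort: 8443
```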

4.7 Upgrading cnDBTier HTTPS Enabled to HTTPS Disabled Version

This section describes the procedure to upgrade cnDBTier from an HTTPS enabled version to an HTTPS disabled version.

Note:

  • While enabling dual protocol, do not assign the same port for both service.httpPort and service.httpsPort.

    For more information about local site and remote site port configuration parameters, see the Global Parameters section.

  • When upgrading from a previous version to the current version, ensure that the value of the service.httpsPort parameter is the same as the value of the service.port parameter in the earlier version. For more information on these parameters, see the Global Parameters section.

Perform the following steps to disable HTTPS:

  1. Enable the dual protocol by setting the supportDualProtocol parameter to true in the custom_values.yaml file for all the cnDBTier sites.

    Sample:

    global:
      https:
        enable: true
        supportDualProtocol: true
  2. Perform the Upgrading cnDBTier Clusters procedure to upgrade all the cnDBTier sites one after the other using the custom_values.yaml file that you updated in the previous steps.

    Result: cnDBTier is upgraded with dual-protocol support, which allows both HTTP on the specified service.httpPort and HTTPS on the specified service.httpsPort to be used.

    Note:

    After this step, only HTTPS protocol is used for the REST APIs across the cnDBTier sites.
  3. Disable HTTPS in the custom_values.yaml file for all the cnDBTier sites.
    global: 
      https:
        enable: false
        supportDualProtocol: true

    Note:

    Update the local site port to service.httpPort and the remote site port to the service.httpPort of the other cnDBTier site where HTTP is configured.

    For more information about local site and remote site port configuration, see the Global Parameters section.

  4. Perform the Upgrading cnDBTier Clusters procedure to upgrade all the cnDBTier sites one after the other using the custom_values.yaml file that you updated in the previous steps.

    Result: cnDBTier is upgraded with dual-protocol support, which allows both HTTP on the specified service.httpPort and HTTPS on the specified service.httpsPort to be used.

    Note:

    After this step, only HTTP protocol is used for the REST APIs across the cnDBTier sites.
  5. Disable the dual-protocol support in the next upgrade.

    Note:

    This step removes the HTTPS port so that only HTTP connection is allowed in cnDBTier. This step can be performed at any phase of the upgrade.
    global: 
      https:
        enable: false
        supportDualProtocol: false

4.8 Upgrading cnDBTier Clusters

Note:

  • If cnDBTier is configured with a single replication channel group, the upgrade must be performed using a single replication channel group. If cnDBTier is configured with multiple replication channel groups, the upgrade must be performed using multiple replication channel groups.
  • As of 23.4.x, the Upgrade Service Account requires persistentvolumeclaims in its rules.resources. This new rule is necessary for the post-rollback hook to delete mysqld PVCs when rolling back to an earlier MySQL release.
  • The db-backup-manager-svc restarts automatically when it encounters errors. Therefore, if the service encounters a temporary error during the cnDBTier upgrade, it may restart several times. Once cnDBTier reaches a stable state, the db-backup-manager-svc pod is expected to operate without further restarts.
It is recommended to set the cnDBTier HeartbeatIntervalDbDb value to 1250. If the current value of HeartbeatIntervalDbDb is 5000, perform the following steps to reduce it to 1250 in stages:
  1. Set HeartbeatIntervalDbDb to 2500 in the custom values file (/global/additionalndbconfigurations/ndb/) and perform a cnDBTier upgrade.
  2. Once the upgrade is completed successfully, set HeartbeatIntervalDbDb to 1250 in the custom values file (/global/additionalndbconfigurations/ndb/) and perform another cnDBTier upgrade.

If the value of HeartbeatIntervalDbDb is 500, perform the following steps to increase it to 1250 in stages:

  1. Set HeartbeatIntervalDbDb to 900 in the custom values file (/global/additionalndbconfigurations/ndb/) and perform a cnDBTier upgrade.
  2. Once the upgrade is completed, set HeartbeatIntervalDbDb to 1250 in the custom values file (/global/additionalndbconfigurations/ndb/) and perform another cnDBTier upgrade.
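For reference, the staged HeartbeatIntervalDbDb value is set under the path given above; the YAML nesting below is inferred from the /global/additionalndbconfigurations/ndb/ path and the values are the staged ones from the steps:

```yaml
global:
  additionalndbconfigurations:
    ndb:
      # First upgrade: 2500 (or 900 when increasing from 500); second upgrade: 1250
      HeartbeatIntervalDbDb: 2500
```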
When upgrading from any cnDBTier release prior to version 23.4.x, you must disable the network policy in the custom_values.yaml file before initiating the upgrade:
global:
  networkpolicy:
    enabled: false

Following the successful upgrade to cnDBTier version 23.4.x, you can optionally re-enable the network policy in the custom_values.yaml file by performing the upgrade procedure again.

global:
  networkpolicy:
    enabled: true

Note:

  • If the TLS certificates for replication are being modified while upgrading a cnDBTier cluster from TLS to TLS, refer to the section "Modifying cnDBTier certificates for encrypted connection of replication using TLS" to add new certificates by retaining existing certificates and then proceed with the upgrade.
  • If the TLS certificates for APP SQL pods are being modified while upgrading a cnDBTier cluster from TLS to TLS, refer to the section "Modifying cnDBTier certificates for encrypted connection to APP SQL pod using TLS" to add new certificates by retaining existing certificates and then proceed with the upgrade.
  • If you are upgrading from 24.3.x or 25.2.1xx, set service.httpsPort (the corresponding field in the current version) to the same value as service.port (the corresponding field in the earlier version).
  • If you are upgrading from 24.3.x with HTTPS enabled, set the configuration global/https/clientAuthentication to 'WANT' and upgrade all the sites one by one. After all the sites are upgraded, set global/https/clientAuthentication to 'NEED' and perform the upgrade one more time on all the sites, one by one.
    global:
      https:
        clientAuthentication: WANT

If you are upgrading cnDBTier from 24.3.x or 25.1.2xx to the current version with HTTPS enabled, ensure that you create the secrets described in the Creating Secrets section.

$ kubectl -n  <namespace of cnDBTier Cluster> get secrets
     
$ kubectl -n  <namespace of cnDBTier Cluster> delete secrets cndbtier-https-cert-cred cndbtier-https-cert-file
 
# Change the working directory to /var/occne/cluster/${OCCNE_CLUSTER}/cndbtiercerts, where the certificates are kept.
# If the cndbtiercerts directory does not exist, create it and copy the certificates into it.
 
$ cd /var/occne/cluster/${OCCNE_CLUSTER}/cndbtiercerts
$ cat <path/to/ca-cert.pem> > <path/to/combined-ca-cert.pem>
$ kubectl -n <namespace of cnDBTier Cluster> create secret generic cndbtier-https-cert-file \
        --from-file=keystore=<path/to/server-keystore.p12> \
        --from-file=server-cert.pem=<path/to/server-cert.pem> \
        --from-file=server-key.pem=<path/to/server-key.pem> \
        --from-file=client-cert.pem=<path/to/client-cert.pem> \
        --from-file=client-key.pem=<path/to/client-key.pem> \
        --from-file=combine-ca-cert.pem=<path/to/combined-ca-cert.pem>
 
$ kubectl -n ${OCCNE_NAMESPACE} create secret generic cndbtier-https-cert-cred --from-literal="keyalias=<serverAliasName>" --from-literal="keystoretype=<certificate-type>" --from-literal="keystorepassword=<password>" --from-literal="clientkeyalias=<clientAliasName>"
 

Example:

$ kubectl -n cluster1 get secrets
     
$ kubectl -n cluster1 delete secrets cndbtier-https-cert-cred cndbtier-https-cert-file
 
# Change the working directory to /var/occne/cluster/${OCCNE_CLUSTER}/cndbtiercerts, where the certificates are kept.
# If the cndbtiercerts directory does not exist, create it and copy the certificates into it.
$ cd /var/occne/cluster/${OCCNE_CLUSTER}/cndbtiercerts
 
 
$ cat ca-cert.pem > combined-ca-cert.pem
 
$ kubectl -n cluster1 create secret generic cndbtier-https-cert-file \
        --from-file=keystore=server-keystore.p12 \
        --from-file=server-cert.pem=server-cert.pem \
        --from-file=server-key.pem=server-key.pem \
        --from-file=client-cert.pem=client-cert.pem \
        --from-file=client-key.pem=client-key.pem \
        --from-file=combine-ca-cert.pem=combined-ca-cert.pem
 
 
$ kubectl -n ${OCCNE_NAMESPACE} create secret generic cndbtier-https-cert-cred --from-literal="keyalias=serveralias" --from-literal="keystoretype=<certificate-type>" --from-literal="keystorepassword=<password>" --from-literal="clientkeyalias=clientalias"

During the upgrade, you must upgrade only one geo redundant cnDBTier site at a time. After completing the upgrade of a site, move to the next geo redundant cnDBTier site.

From 24.3.x release, you can also encrypt the files in the data nodes by setting EncryptedFileSystem to 1. Therefore, when upgrading from a release prior to 24.3.x or a TDE-disabled setup to a 24.3.x TDE-enabled setup, first create the secret "occne-tde-encrypted-filesystem-secret". Refer to the "Create Secret for encrypting data in data nodes using TDE encryption" section.

The namespace "occne-cndbtier" is provided only as an example. You can configure the namespace name according to your environment.

Note: The PVC value must not be changed during an upgrade. If the PVC size needs to be adjusted according to the dimensioning sheet, follow the vertical scaling procedure to modify the PVC size.

Assumptions

  • All NDB pods of the cnDBTier cluster are up and running.
  • Helm does not allow certain parameters (for example, PVC size) to be changed during an upgrade, so such parameters cannot be modified.
  • The start node ID must be the same as the existing start node ID.

    The starting node IDs can be obtained from the existing cluster using the below command.

    As per the below example, the start node ID must be 49 for MGM, 56 for geo replication SQL, and 70 for non geo replication SQL pods.

    $ kubectl -n ${OCCNE_NAMESPACE} exec ndbmgmd-0 -- ndb_mgm -e show

    Sample output:

    
    Connected to Management Server at: localhost:1186
    Cluster Configuration
    ---------------------
    [ndbd(NDB)]     2 node(s)
    id=1    @10.233.73.51  (mysql-8.4.5 ndb-8.4.5, Nodegroup: 0)
    id=2    @10.233.74.56  (mysql-8.4.5 ndb-8.4.5, Nodegroup: 0, *)
     
    [ndb_mgmd(MGM)] 2 node(s)
    id=49   @10.233.74.55  (mysql-8.4.5 ndb-8.4.5)
    id=50   @10.233.84.60  (mysql-8.4.5 ndb-8.4.5)
     
    [mysqld(API)]   10 node(s)
    id=56   @10.233.84.59  (mysql-8.4.5 ndb-8.4.5)
    id=57   @10.233.78.63  (mysql-8.4.5 ndb-8.4.5)
    id=70   @10.233.78.62  (mysql-8.4.5 ndb-8.4.5)
    id=71   @10.233.73.53  (mysql-8.4.5 ndb-8.4.5)
    id=72 (not connected, accepting connect from ndbappmysqld-2.ndbappmysqldsvc.samar1.svc.occne1-arjun-sreenivasalu)
    id=73 (not connected, accepting connect from ndbappmysqld-3.ndbappmysqldsvc.samar1.svc.occne1-arjun-sreenivasalu)
    id=222 (not connected, accepting connect from any host)
    id=223 (not connected, accepting connect from any host)
    id=224 (not connected, accepting connect from any host)
    id=225 (not connected, accepting connect from any host)

    Note:

    Node id 222 to node id 225 are shown as "not connected" because these are added as empty slot ids which are used for georeplication recovery.
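The start node IDs can also be pulled out of the `ndb_mgm -e show` report with a short script instead of reading it by eye. This is only a sketch; the `start_node_id` helper is hypothetical and assumes section headers such as "[ndb_mgmd(MGM)]" appear at the start of a line:

```shell
# Hypothetical helper: print the first (start) node ID of a given section from
# `ndb_mgm -e show` output read on stdin.
start_node_id() {
  section="$1"
  awk -v sec="$section" '
    index($0, sec)   { in_sec = 1; next }   # enter the requested section
    in_sec && /^\[/  { in_sec = 0 }         # a new section header ends it
    in_sec && /^id=[0-9]+/ {                # first id line is the start node ID
      split($1, a, "="); print a[2]; exit
    }
  '
}

# Usage, for example:
# kubectl -n ${OCCNE_NAMESPACE} exec ndbmgmd-0 -- ndb_mgm -e show \
#   | start_node_id '[ndb_mgmd(MGM)]'
```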

Alternatively, you can run the Helm test to verify the installation:

$ helm test  mysql-cluster --namespace ${OCCNE_NAMESPACE}

Sample output:

NAME: mysql-cluster
LAST DEPLOYED: Mon Aug 25 10:22:58 2025 
NAMESPACE: occne-cndbtier
STATUS: deployed
REVISION: 1
TEST SUITE:     mysql-cluster-node-connection-test
Last Started:   Mon Aug 25 14:15:18 2025 
Last Completed: Mon Aug 25 14:17:58 2025 
Phase:          Succeeded

Upgrading cnDBTier Cluster

Perform the following procedure to upgrade cnDBTier cluster:

  1. Download the latest cnDBTier packages to Bastion Host.
  2. If DB encryption and HTTPS mode were not enabled earlier, ensure that HTTPS mode and DB encryption remain disabled before the upgrade.

    Configure the HTTPS and encryption parameters in the custom_values.yaml file as shown below:

    https:
      enable: false

    encryption:
      enable: false
  3. Before performing the cnDBTier upgrade, run the Helm test on the current cnDBTier deployment at all sites. Proceed with the upgrade only if the Helm test succeeds on all sites. Verify that the current cnDBTier instance is running correctly by running the following Helm test command on the Bastion Host:
    $ helm test  mysql-cluster --namespace ${OCCNE_NAMESPACE}

    Sample output:

    NAME: mysql-cluster
    LAST DEPLOYED: Tue May 20 10:22:58 2025
    NAMESPACE: occne-cndbtier
    STATUS: deployed
    REVISION: 1
    TEST SUITE:     mysql-cluster-node-connection-test
    Last Started:   Tue May 20 14:15:18 2025
    Last Completed: Tue May 20 14:17:58 2025
    Phase:          Succeeded
  4. If the upgrade has to be done with fixed loadBalancer IP for external services, then find the IP addresses of current cnDBTier cluster by running the following command:
    $ kubectl get svc -n ${OCCNE_NAMESPACE}

    Configure the loadBalancer IP addresses obtained from the above command in custom_values.yaml file by referring to the cnDBTier configurations table.
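To collect the IP addresses in a copy-friendly form, a small sketch (the `external_ips` helper name is an assumption) can filter the EXTERNAL-IP column from the `kubectl get svc` table output:

```shell
# Hypothetical helper: print "<service> <external-ip>" pairs from `kubectl get svc`
# table output read on stdin, skipping services without an assigned external IP.
# Assumes the standard kubectl table layout with EXTERNAL-IP as the 4th column.
external_ips() {
  awk 'NR > 1 && $4 != "<none>" && $4 != "<pending>" { print $1, $4 }'
}

# Usage (on the Bastion Host):
# kubectl get svc -n ${OCCNE_NAMESPACE} | external_ips
```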

  5. If backup encryption is required to be enabled, then create the backup encryption secrets and subsequently configure "/global/backupencryption/enable" configuration in the custom_values.yaml as true for enabling the backup encryption.
  6. If password encryption is required to be enabled, then create the password encryption secrets and subsequently configure "/global/encryption/enable" configuration in the custom_values.yaml as true for enabling the password encryption.
  7. If Kubernetes version is above or equal to 1.25 and Kyverno is supported or installed on Kubernetes then run the following commands appropriately:
    # If K8s version is above or equal to 1.25 and Kyverno is supported/installed on K8s and ASM/istio is installed/running on K8s then run the below command
    $ kubectl apply -f namespace/occndbtier_kyvernopolicy_asm_${OCCNE_VERSION}.yaml -n ${OCCNE_NAMESPACE}
     
    # If K8s version is above or equal to 1.25 and Kyverno is supported/installed on K8s and ASM/istio is not installed/running on K8s then run the below command
    $ kubectl apply -f namespace/occndbtier_kyvernopolicy_nonasm_${OCCNE_VERSION}.yaml -n ${OCCNE_NAMESPACE}
    
  8. If secure transfer of backup(s) to remote server needs to be enabled, then create the remote server user name and private key secrets and subsequently configure the following parameters in the custom_values.yaml file for enabling the secure transfer of backup(s):
    • /global/remotetransfer/enable
    • /global/remotetransfer/faultrecoverybackuptransfer
    • /global/remotetransfer/remoteserverip
    • /global/remotetransfer/remoteserverport
    • /global/remotetransfer/remoteserverpath
  9. Starting from version 25.1.2xx, with dual protocol support, HTTP and HTTPS can run concurrently on separate ports. Hence, HTTPS can be enabled or disabled using the procedures Upgrading cnDBTier HTTPS Enabled to HTTPS Disabled Version and Upgrading cnDBTier HTTPS Disabled to HTTPS Enabled Version, respectively.

    The following ports must be configured:

    • httpport (default port 80): HTTP traffic will be served on port 80.
    • httpsport (default port 443): HTTPS traffic will be served on port 443.
    This allows for a smooth migration between HTTP and HTTPS (or vice-versa) without service interruption.

    If you are upgrading from a version prior to 25.1.2xx, perform the following procedure:

    • If you are upgrading from HTTPS to HTTPS:

      The httpsport value must be set to the same value as service.port from your previous release. For example, if service.port was set to 80, then set httpsport to 80 as shown below:

      service:
        type: LoadBalancer
        loadBalancerIP: ""
        httpport: 81
        httpsport: 80
    • If you are upgrading from HTTP to HTTP:

      The httpport value must be set to the same value as service.port from your previous release. For example, if the service.port was set to 80, then set httpport to 80 as shown below:

      service:
        type: LoadBalancer
        loadBalancerIP: ""
        httpport: 80
        httpsport: 443

An automated cnDBTier upgrade needs a Service Account for pod rolling restarts and patches. If you choose an automated cnDBTier upgrade with a Service Account, perform the steps in the section Upgrading cnDBTier Clusters with an Upgrade Service Account. If you choose to upgrade cnDBTier manually without a Service Account, perform the steps in the section Upgrading cnDBTier Clusters without an Upgrade Service Account.

4.8.1 Upgrading cnDBTier Clusters with an Upgrade Service Account

Perform the following procedure to upgrade cnDBTier clusters with an Upgrade Service Account:

Note:

The namespace "occne-cndbtier" is provided only as an example. You must configure the namespace name according to your environment.
  1. Create an Upgrade Service Account manually if it does not exist, so that cnDBTier does not create an automated service account using the Helm charts. If you already have a manually created Service Account with the correct role (which you can verify in namespace.yaml or rbac.yaml) to use for the upgrade, skip this step. Do not perform this step if you want to create the service account using the cnDBTier Helm charts.
    1. Set the ${OCCNE_RELEASE_NAME} environment variable with the Helm value of RELEASE_NAME, which you can find in the NAME column when you run the command helm -n ${OCCNE_NAMESPACE} list.
      export OCCNE_RELEASE_NAME="mysql-cluster"
    2. Update the Service Account, Role and Rolebinding for the upgrade in namespace/rbac.yaml file. Depending upon the CSAR package type, the namespace directory can be either found at /Artifacts/Scripts/ or at /Scripts/ relative path.
      sed -i "s/occne-cndbtier/${OCCNE_NAMESPACE}/" namespace/occndbtier_rbac_${OCCNE_VERSION}.yaml
      sed -i "s/mysql-cluster/${OCCNE_RELEASE_NAME}/" namespace/occndbtier_rbac_${OCCNE_VERSION}.yaml
      sed -i "s/cndbtier-upgrade/${OCCNE_RELEASE_NAME}-upgrade/" namespace/occndbtier_rbac_${OCCNE_VERSION}.yaml
      sed -i "s/rolebinding/role/" namespace/occndbtier_rbac_${OCCNE_VERSION}.yaml
    3. Create an Upgrade Service Account, Upgrade Role, and Upgrade Rolebinding by running the following command:
      kubectl -n ${OCCNE_NAMESPACE} apply -f namespace/occndbtier_rbac_${OCCNE_VERSION}.yaml
  2. Configure the Upgrade Service Account in your custom_values.yaml file. Follow one of these options:
    • If step 1 is performed, or if the Service Account is created manually and you want cnDBTier to use the same service account for the upgrade, then provide the name of that manually created service account in the parameter global.autoCreateResources.serviceAccounts.accounts[1].serviceAccountName and configure the global.autoCreateResources.serviceAccounts.accounts[1].create flag as false.
      autoCreateResources:
          enabled: false
          serviceAccounts:
            create: false
            accounts:
              - serviceAccountName: ""
                type: APP
                create: true
              - serviceAccountName: "<manually_created_service_account_name>"
                type: UPGRADE
                create: false
    • If a Service Account was created manually in the previous release and, from this release onward, you want the cnDBTier Helm charts to create an automated service account, then set the global.autoCreateResources.serviceAccounts.accounts[1].create parameter to true and provide a Service Account name that is different from the manual Service Account existing in the older release. The name must differ from the manually created service account in the previous version in order to support rollback, since rollback requires the manual service account that was configured in the older version's custom_values.yaml file. Configure the Service Account name in the field global.autoCreateResources.serviceAccounts.accounts[1].serviceAccountName. This field can also be left empty, in which case the cnDBTier Helm chart creates a Service Account with the name <release-name>-upgrade-reader (for example, if the release name is mysql-cluster, the Service Account is created with the name mysql-cluster-upgrade-reader).
      autoCreateResources:
          enabled: true/false
          serviceAccounts:
            create: true
            accounts:
              - serviceAccountName: ""
                type: APP
                create: true
              - serviceAccountName: "<service_account_name>"
                type: UPGRADE
                create: true
    • If a Service Account was created by the cnDBTier Helm charts in the previous release (25.1.100 or 24.3.x) with the global.serviceAccountForUpgrade.create parameter set to true, then in the current release set global.autoCreateResources.serviceAccounts.accounts[1].create to the same value as global.serviceAccountForUpgrade.create in the older release, and similarly set global.autoCreateResources.serviceAccounts.accounts[1].serviceAccountName to the same value as global.serviceAccountForUpgrade.name.
    • If you do not have a Service Account created manually or from a previous cnDBTier version, configure the global.autoCreateResources.serviceAccounts.accounts[1].create parameter as true in the custom_values file. If global.autoCreateResources.serviceAccounts.accounts[1].serviceAccountName is configured, the upgrade service account is created with the provided name; if serviceAccountName is set to an empty ("") value, cnDBTier creates a Service Account with the default name.
      autoCreateResources:
          enabled: true
          serviceAccounts:
            create: true
            accounts:
              - serviceAccountName: ""
                type: APP
                create: true
              - serviceAccountName: "<service_account_name>"
                type: UPGRADE
                create: true
  3. Upgrade cnDBTier by running the following commands:

    Set the ${OCCNE_RELEASE_NAME} environment variable with the Helm value of RELEASE_NAME. The release name is found in the NAME column when you run this command:

    helm -n ${OCCNE_NAMESPACE} list
    export OCCNE_RELEASE_NAME="mysql-cluster"
     
    cd /var/occne/cluster/${OCCNE_CLUSTER}
     
    helm -n ${OCCNE_NAMESPACE} upgrade ${OCCNE_RELEASE_NAME} occndbtier -f occndbtier/custom_values.yaml
  4. Wait for all the MGM and NDB pods to restart.
  5. Perform a second restart of the NDB pods to apply the new HeartbeatIntervalDbDb value. This step is required only if you are upgrading from a release that does not have this support.
    kubectl -n $DBTIER_NAMESPACE rollout restart statefulset ndbmtd
  6. Wait for all the NDB pods to restart.
  7. Verify the cluster status from the management pod by running the following command:
    $ kubectl -n ${OCCNE_NAMESPACE} exec -it ndbmgmd-0 -- ndb_mgm -e show

    Sample output:

    Connected to Management Server at: localhost:1186
    Cluster Configuration
    ---------------------
    [ndbd(NDB)]     2 node(s)
    id=1    @10.233.104.176  (mysql-8.4.4 ndb-8.4.4, Nodegroup: 0)
    id=2    @10.233.121.175  (mysql-8.4.4 ndb-8.4.4, Nodegroup: 0, *)
     
    [ndb_mgmd(MGM)] 2 node(s)
    id=49   @10.233.101.154  (mysql-8.4.4 ndb-8.4.4)
    id=50   @10.233.104.174  (mysql-8.4.4 ndb-8.4.4)
     
    [mysqld(API)]   8 node(s)
    id=56   @10.233.92.169  (mysql-8.4.4 ndb-8.4.4)
    id=57   @10.233.101.155  (mysql-8.4.4 ndb-8.4.4)
    id=70   @10.233.92.170  (mysql-8.4.4 ndb-8.4.4)
    id=71   @10.233.121.176  (mysql-8.4.4 ndb-8.4.4)
    id=222 (not connected, accepting connect from any host)
    id=223 (not connected, accepting connect from any host)
    id=224 (not connected, accepting connect from any host)
    id=225 (not connected, accepting connect from any host)

    Note:

    Nodes with id 222 to id 225 will be shown as "not connected" because these are added as empty slot ids which are used for geo replication recovery.
  8. Run the following Helm test command to verify if the cnDBTier services are upgraded successfully:
    $ helm test  mysql-cluster --namespace ${OCCNE_NAMESPACE}

    Sample output:

    NAME: mysql-cluster
    LAST DEPLOYED: Tue May 20 10:22:58 2025
    NAMESPACE: occne-cndbtier
    STATUS: deployed
    REVISION: 1
    TEST SUITE:     mysql-cluster-node-connection-test
    Last Started:   Tue May 20 14:15:18 2025
    Last Completed: Tue May 20 14:17:58 2025
    Phase:          Succeeded
  9. After the successful cnDBTier upgrade, follow the Postinstallation Tasks section.

4.8.2 Upgrading cnDBTier Clusters without an Upgrade Service Account

Perform the following procedure to upgrade cnDBTier clusters without an Upgrade Service Account:

Note:

The namespace "occne-cndbtier" is provided only as an example. You must configure the namespace name according to your environment.
  1. Configure your custom_values.yaml file to not use Upgrade Service Account.
    autoCreateResources:
      enabled: true/false
      serviceAccounts:
        create: true
        accounts:
          - serviceAccountName: ""
            type: APP
            create: true
          - serviceAccountName: ""
            type: UPGRADE
            create: false

    Note:

    If the previous release of cnDBTier used a custom_values file where the parameter global.serviceAccountForUpgrade.create was set to true, then the Helm chart may have created a service account during that deployment.

    If you upgrade to a newer version of cnDBTier with the Service Account disabled, that is the parameter global.serviceAccountForUpgrade.create is set to false or removed, then the previously created service account will be deleted during the upgrade.

    As a result, if a rollback is attempted, then the rollback fails as the required Service Account from the earlier version no longer exists.

  2. Run the preupgrade script if you are upgrading from a previous cnDBTier release or if you need to enable or disable password encryption. The script upgrades the schema.
    Run the following command on the Bastion host to apply the schema changes:

    Note:

    Enabling or disabling encryption may cause a brief disruption to the replication between the sites if a switchover happens between step 2 and step 4. Therefore, perform steps 3 and 4 immediately after completing step 2.
    # replace the values for the environment variables below with the correct ones for your cluster
    export OCCNE_NAMESPACE="occne-cndbtier"
    export MYSQL_CONNECTIVITY_SERVICE="mysql-connectivity-service"
    export MYSQL_USERNAME="occneuser"
    export MYSQL_ACCESS_KEY="<password for the user occneuser>"
    export DBTIER_REPLICATION_SVC_DATABASE="replication_info"
    export DBTIER_BACKUP_SVC_DATABASE="backup_info"
    export DBTIER_HBREPLICAGROUP_DATABASE="hbreplica_info"
    export DBTIER_CLUSTER_INFO_DATABASE="cluster_info"
    export REPLCHANNEL_GROUP_COUNT=<configured number of replication channel groups i.e either 1/2/3>
    export MYSQL_CMD="kubectl -n <namespace> exec <ndbmysqld-0/ndbappmysqld-0 pod name> -c <mysqlndbcluster container name> -- mysql --binary-as-hex=0 --show-warnings"
     
    # To enable or disable password encryption, uncomment the below line and set the variable to true or false to enable or disable it.
    # export ENABLE_ENCRYPTION="<true/false>"
     
    occndbtier/files/hooks.sh --schema-upgrade
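Because a partially configured environment can leave the schema upgrade half-applied, it can help to check the exports before invoking hooks.sh. A sketch of such a guard (the `require_env` helper is hypothetical, not part of cnDBTier):

```shell
# Hypothetical guard: verify that the listed environment variables are set and
# non-empty before running the schema upgrade; print the missing ones otherwise.
require_env() {
  missing=""
  for name in "$@"; do
    eval "val=\${$name}"
    [ -n "$val" ] || missing="$missing $name"
  done
  [ -z "$missing" ] || { echo "missing:$missing" >&2; return 1; }
}

# Usage, for example:
# require_env OCCNE_NAMESPACE MYSQL_USERNAME MYSQL_ACCESS_KEY \
#   DBTIER_REPLICATION_SVC_DATABASE && occndbtier/files/hooks.sh --schema-upgrade
```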
  3. Perform the following steps on the Bastion host to run the preupgrade procedure.
    # replace the values for the environment variables below with the correct ones for your cluster
    export OCCNE_NAMESPACE="occne-cndbtier"
    # if a pod prefix is being used, then the prefix must be added to the below env variables for both the pod and stateful set names
    # example: export NDB_MGMD_PODS="<global.k8sResource.pod.prefix>-ndbmgmd-0 <global.k8sResource.pod.prefix>-ndbmgmd-1"
    # export APP_STS_NAME="<global.k8sResource.pod.prefix>-ndbappmysqld"
    export NDB_MGMD_PODS="ndbmgmd-0 ndbmgmd-1"
    export APP_STS_NAME="ndbappmysqld"
     
    occndbtier/files/hooks.sh --pre-upgrade
  4. Upgrade cnDBTier by running the following commands:
    # Set the ${OCCNE_RELEASE_NAME} environment variable with the helm value of RELEASE_NAME
    # which you can find in the NAME column when you run this command: helm -n ${OCCNE_NAMESPACE} list
    export OCCNE_RELEASE_NAME="mysql-cluster"
     
    cd /var/occne/cluster/${OCCNE_CLUSTER}
     
    helm -n ${OCCNE_NAMESPACE} upgrade ${OCCNE_RELEASE_NAME} occndbtier -f occndbtier/custom_values.yaml --no-hooks
    
  5. Run the post-upgrade script. It deletes all the MGM pods, waits for them to come up, and patches the upgradeStrategy.
    # replace the values for the environment variables below with the correct ones for your cluster
    export OCCNE_NAMESPACE="occne-cndbtier"
    #export API_EMP_TRY_SLOTS_NODE_IDS="id=222"
    export API_EMP_TRY_SLOTS_NODE_IDS="id=222\|id=223\|id=224\|id=225"
    export MGM_NODE_IDS="id=49\|id=50"
    # export all the ndbmtd node ids in the below env variable
    export NDB_NODE_IDS="id=1\|id=2"
    # Export all the ndbmysqld node ids in the below env variable
    # ndbmysqld node_ids starts at global.api.startNodeId and ends at (global.api.startNodeId + global.apiReplicaCount - 1)
    export API_NODE_IDS="id=56\|id=57"
    # if a pod prefix is being used, then the prefix must be added to the below env variables for both the pod and stateful set names
    # example: export NDB_MGMD_PODS="<global.k8sResource.pod.prefix>-ndbmgmd-0 <global.k8sResource.pod.prefix>-ndbmgmd-1"
    # export NDB_STS_NAME="<global.k8sResource.pod.prefix>-ndbmtd"
    export NDB_MGMD_PODS="ndbmgmd-0 ndbmgmd-1"
    export NDB_MTD_PODS="ndbmtd-0 ndbmtd-1"
    export NDB_STS_NAME="ndbmtd"
    export API_STS_NAME="ndbmysqld"
    export APP_STS_NAME="ndbappmysqld"
     
    #If auto scaling for ndbapp sts is enabled then declare the below env variables(NDBAPP_START_NODE_ID and NDBAPP_REPLICA_MAX_COUNT)
    #export NDBAPP_START_NODE_ID="<as configured in values.yaml: global.ndbapp.startNodeId>"
    #export NDBAPP_REPLICA_MAX_COUNT="<as configured in values.yaml: global.ndbappReplicaMaxCount>"    
     
    #If values.global.ndbapp.ndb_cluster_connection_pool is greater than 1 then declare the below env variable(APP_CON_POOL_INGORE_NODE_IDS)
    export APP_CON_POOL_INGORE_NODE_IDS="id=100\|id=101\|id=102 ... \|id=(n-1)\|id=n"
    # Here n can be calculated as n = 100 + (((values.global.ndbapp.ndb_cluster_connection_pool - 1) * values.global.ndbappReplicaMaxCount) - 1)
     
     
    occndbtier/files/hooks.sh --post-upgrade
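The node-ID patterns exported above follow simple arithmetic, so they can be generated rather than typed by hand. A sketch (the helper names `node_id_pattern` and `app_con_pool_ignore_ids` are hypothetical):

```shell
# Hypothetical helper: print "id=START\|...\|id=START+COUNT-1", the escaped
# pattern format that hooks.sh expects. For example, the ndbmysqld IDs run from
# global.api.startNodeId for global.apiReplicaCount nodes.
node_id_pattern() {
  start="$1"; count="$2"
  ids=""
  i="$start"
  while [ "$i" -lt $((start + count)) ]; do
    ids="${ids:+${ids}\\|}id=$i"   # join entries with the escaped pipe
    i=$((i + 1))
  done
  printf '%s\n' "$ids"
}

# Connection-pool ignore IDs per the formula above:
# n = 100 + (((ndb_cluster_connection_pool - 1) * ndbappReplicaMaxCount) - 1)
app_con_pool_ignore_ids() {
  pool="$1"; max_replicas="$2"
  n=$((100 + (((pool - 1) * max_replicas) - 1)))
  node_id_pattern 100 $((n - 100 + 1))
}

# For example (hypothetical values):
# export API_NODE_IDS="$(node_id_pattern 56 2)"
# export APP_CON_POOL_INGORE_NODE_IDS="$(app_con_pool_ignore_ids 2 4)"
```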
  6. Wait for all the MGM and NDB pods to restart.
  7. Perform a second rollout restart of the NDB pods to apply the new HeartbeatIntervalDbDb value. This step is required only if you are upgrading from a release that does not have this support.
    kubectl -n $DBTIER_NAMESPACE rollout restart statefulset ndbmtd
  8. Wait for all the NDB pods to restart.
  9. Verify the cluster status from the management pod by running the following command:
    $ kubectl -n ${OCCNE_NAMESPACE} exec -it ndbmgmd-0 -- ndb_mgm -e show
    Connected to Management Server at: localhost:1186
    Cluster Configuration
    ---------------------
    [ndbd(NDB)]     2 node(s)
    id=1    @10.233.104.176  (mysql-8.4.4 ndb-8.4.4, Nodegroup: 0)
    id=2    @10.233.121.175  (mysql-8.4.4 ndb-8.4.4, Nodegroup: 0, *)
     
    [ndb_mgmd(MGM)] 2 node(s)
    id=49   @10.233.101.154  (mysql-8.4.4 ndb-8.4.4)
    id=50   @10.233.104.174  (mysql-8.4.4 ndb-8.4.4)
     
    [mysqld(API)]   8 node(s)
    id=56   @10.233.92.169  (mysql-8.4.4 ndb-8.4.4)
    id=57   @10.233.101.155  (mysql-8.4.4 ndb-8.4.4)
    id=70   @10.233.92.170  (mysql-8.4.4 ndb-8.4.4)
    id=71   @10.233.121.176  (mysql-8.4.4 ndb-8.4.4)
    id=222 (not connected, accepting connect from any host)
    id=223 (not connected, accepting connect from any host)
    id=224 (not connected, accepting connect from any host)
    id=225 (not connected, accepting connect from any host)

    Note:

    Node IDs 222 to 225 are shown as "not connected" because these are added as empty slot IDs which are used for geo replication recovery.
  10. Run the following Helm test command to verify if the cnDBTier services are upgraded successfully.
    $ helm test  mysql-cluster --namespace ${OCCNE_NAMESPACE}

    Sample output:

    NAME: mysql-cluster
    LAST DEPLOYED: Tue May 20 10:22:58 2025
    NAMESPACE: occne-cndbtier
    STATUS: deployed
    REVISION: 1
    TEST SUITE:     mysql-cluster-node-connection-test
    Last Started:   Tue May 20 14:15:18 2025
    Last Completed: Tue May 20 14:17:58 2025
    Phase:          Succeeded
  11. After the successful cnDBTier upgrade, follow the Postinstallation Tasks section.