4 Upgrading cnDBTier

This chapter describes the procedure to upgrade an existing cnDBTier deployment.

Note:

  • The OCCNE_NAMESPACE variable in the upgrade procedures must be set to the cnDBTier namespace. Before running any command that contains the OCCNE_NAMESPACE variable, ensure that you have set this variable to the cnDBTier namespace as stated in the following code block:
    export OCCNE_NAMESPACE=<namespace>

    where, <namespace> is the cnDBTier namespace.

  • The namespace name "occne-cndbtier" given in the upgrade procedures is only an example. Ensure that you configure the namespace name according to your environment.
  • cnDBTier 25.1.103 supports Helm 3.12.3 and 3.13.2. Ensure that you upgrade Helm to a supported version.
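As a preflight, the Helm version check can be scripted. The following is a minimal sketch in which the `current` value is hardcoded for illustration; in practice it would be derived from the `helm version` output:

```shell
# Preflight sketch: verify that the installed Helm version is one supported
# by cnDBTier 25.1.103. The version string is hardcoded here for
# illustration; in practice, derive it from: helm version --template '{{.Version}}'
supported="3.12.3 3.13.2"
current="3.13.2"
case " $supported " in
  *" $current "*) echo "Helm $current is supported" ;;
  *)              echo "Helm $current is NOT supported; upgrade Helm first" ;;
esac
```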

4.1 Supported Upgrade Paths

The following table provides the upgrade paths that are supported by cnDBTier Release 25.1.103.

Table 4-1 Supported Upgrade Paths

Source Release Target Release
25.1.1xx 25.1.103
24.3.x 25.1.103
24.2.x 25.1.103
23.4.6, 23.4.7 25.1.103

4.2 Upgrading cnDBTier from Non-TLS to TLS Enabled Version (Replication)

This section describes the procedure to upgrade cnDBTier clusters from a version where TLS is not enabled for replication to a version where TLS is enabled for replication.

Note:

  • In this procedure, the cnDBTier sites are upgraded twice (in step 4 and step 7). Ensure that you follow this procedure as-is to upgrade from a non-TLS version to a TLS enabled version.
  • Upgrading from a non-TLS version to a TLS enabled version is a disruptive procedure that temporarily impacts georeplication in the CNE environment. The LoadBalancer service must be recreated for the TLS and non-TLS ports to be published, which requires deleting and recreating the db-replication-svc service.
  • The namespace name "occne-cndbtier" given in this procedure is only an example. Ensure that you configure the namespace name according to your environment.
  1. Create the necessary secrets in all the cnDBTier sites by following step 7 of the Creating Secrets procedure.
  2. Ensure that TLS is enabled in the custom_values.yaml file for all cnDBTier sites:
    global:
      tls:    
        enable: true
  3. Provide all the necessary certificates, such as the CA certificate, client certificate, and server certificate, for the respective ndbmysqld pods in the custom_values.yaml file for all cnDBTier sites:

    Note:

    Set the TLS mode to NONE for all the cnDBTier sites as seen in the following custom_values.yaml file.
    tls:
      enable: true
      caCertificate: "<ca certificate file name>"
      tlsversion: "TLSv1.3"
      tlsMode: "NONE"
      ciphers:
        - TLS_AES_128_GCM_SHA256
        - TLS_AES_256_GCM_SHA384
        - TLS_CHACHA20_POLY1305_SHA256
        - TLS_AES_128_CCM_SHA256
      certificates:
        - name: ndbmysqld-0
          serverCertificate: "<server certificate name>"
          serverCertificateKey: "<server key name>"     
          clientCertificate: "<client certificate name>"
          clientCertificateKey: "<client key name>"
        - name: ndbmysqld-1
          serverCertificate: "<server certificate name>"
          serverCertificateKey: "<server key name>"     
          clientCertificate: "<client certificate name>"
          clientCertificateKey: "<client key name>"
        ...
    For example:
    tls:
      enable: true
      caCertificate: "combine-ca.pem"
      tlsversion: "TLSv1.3"
      tlsMode: "NONE"
      ciphers:
        - TLS_AES_128_GCM_SHA256
        - TLS_AES_256_GCM_SHA384
        - TLS_CHACHA20_POLY1305_SHA256
        - TLS_AES_128_CCM_SHA256
      certificates:
        - name: ndbmysqld-0
          serverCertificate: "server1-cert.pem"
          serverCertificateKey: "server1-key.pem"     
          clientCertificate: "client1-cert.pem"
          clientCertificateKey: "client1-key.pem"
        - name: ndbmysqld-1
          serverCertificate: "server1-cert.pem"
          serverCertificateKey: "server1-key.pem"     
          clientCertificate: "client1-cert.pem"
          clientCertificateKey: "client1-key.pem"
        ...
  4. Perform the Upgrading cnDBTier Clusters procedure to upgrade all the cnDBTier sites one after the other using the custom_values.yaml file that you updated in the previous steps.
  5. After upgrading each site, run the following command on the site to ensure that the replication is UP:
    $ kubectl -n <namespace> exec -it ndbmysqld-0 -- curl http://mysql-cluster-db-monitor-svc.<namespace>:8080/db-tier/status/replication/realtime

    where, <namespace> is the namespace of the cnDBTier cluster.

    The value of replicationStatus in the output indicates if the local site is able to replicate data from that remote site:
    • "UP": Indicates that the local site is able to replicate data from that remote site.
    • "DOWN": Indicates that the local site is not able to replicate data from the respective remote site.
    For example, run the following command to check the georeplication status of cnDBTier cluster2 configured with other remote sites:
    $ kubectl -n cluster2 exec -it ndbmysqld-0 -- curl http://mysql-cluster-db-monitor-svc.cluster2:8080/db-tier/status/replication/realtime
    
    Sample output:
    [
      {
        "localSiteName": "cluster2",
        "remoteSiteName": "cluster1",
        "replicationStatus": "UP",
        "secondsBehindRemote": 0,
        "replicationGroupDelay": [
          {
            "replchannel_group_id": "1",
            "secondsBehindRemote": 0
          }
        ]
      },
      {
        "localSiteName": "cluster2",
        "remoteSiteName": "cluster3",
        "replicationStatus": "UP",
        "secondsBehindRemote": 0,
        "replicationGroupDelay": [
          {
            "replchannel_group_id": "1",
            "secondsBehindRemote": 0
          }
        ]
      },
      {
        "localSiteName": "cluster2",
        "remoteSiteName": "cluster4",
        "replicationStatus": "UP",
        "secondsBehindRemote": 0,
        "replicationGroupDelay": [
          {
            "replchannel_group_id": "1",
            "secondsBehindRemote": 0
          }
        ]
      }
    ]

    In the sample output, replicationStatus is "UP" for the localSiteName cluster2 with remoteSiteName cluster1, cluster3, and cluster4. This indicates that cluster2 is able to replicate data from cluster1, cluster3, and cluster4.
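The status check in this step can also be scripted against the returned JSON. The following sketch counts DOWN channels in a trimmed sample payload; in a live cluster the JSON would come from the curl command shown above (the sample payload and the simple grep-based parsing are illustrative assumptions, not part of the product):

```shell
# Sketch: fail fast if any replication channel reports "DOWN".
# status_json is a trimmed sample; in practice capture it with:
#   kubectl -n <namespace> exec ndbmysqld-0 -- curl -s http://mysql-cluster-db-monitor-svc.<namespace>:8080/db-tier/status/replication/realtime
status_json='[{"remoteSiteName":"cluster1","replicationStatus":"UP"},
{"remoteSiteName":"cluster3","replicationStatus":"UP"}]'
down=$(printf '%s' "$status_json" | grep -c '"replicationStatus":"DOWN"')
if [ "$down" -eq 0 ]; then
  echo "replication UP towards all remote sites"
else
  echo "replication DOWN towards $down remote site(s)"
fi
```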

  6. In the custom_values.yaml file, set tlsMode to VERIFY_CA or VERIFY_IDENTITY, depending on your requirement, for all the cnDBTier sites. This configuration ensures that the clients use an encrypted connection and perform verification against the server CA certificate:
    • VERIFY_CA instructs the client to check that the server’s certificate is valid.
    • VERIFY_IDENTITY instructs the client to check that the server’s certificate is valid and that the host name used by the client matches the identity in the server’s certificate.
    tls:
      enable: true
      caCertificate: "<ca certificate file name>"
      tlsversion: "TLSv1.3"
      tlsMode: "VERIFY_CA/VERIFY_IDENTITY"
      ciphers:
        - TLS_AES_128_GCM_SHA256
        - TLS_AES_256_GCM_SHA384
        - TLS_CHACHA20_POLY1305_SHA256
        - TLS_AES_128_CCM_SHA256
      certificates:
        - name: ndbmysqld-0
          serverCertificate: "<server certificate name>"
          serverCertificateKey: "<server key name>"     
          clientCertificate: "<client certificate name>"
          clientCertificateKey: "<client key name>"
        - name: ndbmysqld-1
          serverCertificate: "<server certificate name>"
          serverCertificateKey: "<server key name>"     
          clientCertificate: "<client certificate name>"
          clientCertificateKey: "<client key name>"
        ...
    For example:
    tls:
      enable: true
      caCertificate: "combine-ca.pem"
      tlsversion: "TLSv1.3"
      tlsMode: "VERIFY_CA"
      ciphers:
        - TLS_AES_128_GCM_SHA256
        - TLS_AES_256_GCM_SHA384
        - TLS_CHACHA20_POLY1305_SHA256
        - TLS_AES_128_CCM_SHA256
      certificates:
        - name: ndbmysqld-0
          serverCertificate: "server1-cert.pem"
          serverCertificateKey: "server1-key.pem"     
          clientCertificate: "client1-cert.pem"
          clientCertificateKey: "client1-key.pem"
        - name: ndbmysqld-1
          serverCertificate: "server1-cert.pem"
          serverCertificateKey: "server1-key.pem"     
          clientCertificate: "client1-cert.pem"
          clientCertificateKey: "client1-key.pem"
        ...
  7. Perform the Upgrading cnDBTier Clusters procedure to upgrade all the cnDBTier sites one after the other using the custom_values.yaml file that you updated in the previous step.
  8. After upgrading each site, run the following command on the site to ensure that the replication is UP:
    $ kubectl -n <namespace> exec -it ndbmysqld-0 -- curl http://mysql-cluster-db-monitor-svc.<namespace>:8080/db-tier/status/replication/realtime

    where, <namespace> is the namespace of the cnDBTier cluster.

    The value of replicationStatus in the output indicates if the local site is able to replicate data from that remote site:
    • "UP": Indicates that the local site is able to replicate data from that remote site.
    • "DOWN": Indicates that the local site is not able to replicate data from the respective remote site.

    For example, see Step 5.

  9. Run the hooks.sh script with the --add-ssltype flag on all cnDBTier sites to set the type of "occnerepluser" as TLS:

    Note:

    Update the values of the environment variables in the following code block as per your cluster.
    export OCCNE_NAMESPACE="occne-cndbtier"
    export MYSQL_CONNECTIVITY_SERVICE="mysql-connectivity-service"
    export MYSQL_USERNAME="occneuser"
    export MYSQL_PASSWORD="<password for the user occneuser>"
    export OCCNE_REPL_USER_NAME="occnerepluser"
    export MYSQL_CMD="kubectl -n <namespace> exec <ndbmysqld-0 pod name> -- mysql --binary-as-hex=0 --show-warnings"
      
    occndbtier/files/hooks.sh --add-ssltype
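To confirm the effect of the hook, one could inspect the ssl_type column of mysql.user for occnerepluser (an SSL/X509 value indicates that TLS is required). This sketch only assembles and prints the verification command; the namespace and pod name are example assumptions, and the ssl_type check itself is a suggested verification rather than part of the product procedure:

```shell
# Sketch: build the command that checks the TLS requirement of occnerepluser.
# The namespace and pod name below are example values; adjust to your cluster.
ns="occne-cndbtier"
pod="ndbmysqld-0"
query="SELECT user, host, ssl_type FROM mysql.user WHERE user='occnerepluser';"
cmd="kubectl -n $ns exec $pod -- mysql --binary-as-hex=0 -e \"$query\""
echo "$cmd"
```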

4.3 Upgrading cnDBTier from TLS Enabled to Non-TLS Version (Replication)

This section describes the procedure to upgrade cnDBTier clusters from a version where TLS is enabled for replication to a version where TLS is not enabled for replication.

Note:

  • In this procedure, the cnDBTier sites are upgraded twice (in step 3 and step 6). Ensure that you follow this procedure as-is to upgrade from a TLS enabled version to a non-TLS version.
  • Upgrading from a TLS enabled version to a non-TLS version is a disruptive procedure that temporarily impacts georeplication in the CNE environment. The LoadBalancer service must be recreated for the TLS and non-TLS ports to be published, which requires deleting and recreating the db-replication-svc service.
  • The namespace name "occne-cndbtier" given in this procedure is only an example. Ensure that you configure the namespace name according to your environment.
  1. Run the hooks.sh script with the --remove-ssltype flag on all cnDBTier sites to set the type of "occnerepluser" as non-TLS:

    Note:

    Update the values of the environment variables in the following code block as per your cluster.
    export OCCNE_NAMESPACE="occne-cndbtier"
    export MYSQL_CONNECTIVITY_SERVICE="mysql-connectivity-service"
    export MYSQL_USERNAME="occneuser"
    export MYSQL_PASSWORD="<password for the user occneuser>"
    export OCCNE_REPL_USER_NAME="occnerepluser"
    export MYSQL_CMD="kubectl -n <namespace> exec <ndbmysqld-0 pod name> -- mysql --binary-as-hex=0 --show-warnings"
      
    occndbtier/files/hooks.sh --remove-ssltype
  2. In the custom_values.yaml file, set tlsMode to NONE for all the cnDBTier sites:
    tls:
      enable: true
      caCertificate: "<ca certificate file name>"
      tlsversion: "TLSv1.3"
      tlsMode: "NONE"
      ciphers:
        - TLS_AES_128_GCM_SHA256
        - TLS_AES_256_GCM_SHA384
        - TLS_CHACHA20_POLY1305_SHA256
        - TLS_AES_128_CCM_SHA256
      certificates:
        - name: ndbmysqld-0
          serverCertificate: "<server certificate name>"
          serverCertificateKey: "<server key name>"     
          clientCertificate: "<client certificate name>"
          clientCertificateKey: "<client key name>"
        - name: ndbmysqld-1
          serverCertificate: "<server certificate name>"
          serverCertificateKey: "<server key name>"     
          clientCertificate: "<client certificate name>"
          clientCertificateKey: "<client key name>"
        ...
    For example:
    tls:
      enable: true
      caCertificate: "combine-ca.pem"
      tlsversion: "TLSv1.3"
      tlsMode: "NONE"
      ciphers:
        - TLS_AES_128_GCM_SHA256
        - TLS_AES_256_GCM_SHA384
        - TLS_CHACHA20_POLY1305_SHA256
        - TLS_AES_128_CCM_SHA256
      certificates:
        - name: ndbmysqld-0
          serverCertificate: "server1-cert.pem"
          serverCertificateKey: "server1-key.pem"     
          clientCertificate: "client1-cert.pem"
          clientCertificateKey: "client1-key.pem"
        - name: ndbmysqld-1
          serverCertificate: "server1-cert.pem"
          serverCertificateKey: "server1-key.pem"     
          clientCertificate: "client1-cert.pem"
          clientCertificateKey: "client1-key.pem"
        ...
  3. Perform the Upgrading cnDBTier Clusters procedure to upgrade all the cnDBTier sites one after the other using the custom_values.yaml file that you updated in the previous steps.
  4. After upgrading each site, run the following command on the site to ensure that the replication is UP:
    $ kubectl -n <namespace> exec -it ndbmysqld-0 -- curl http://mysql-cluster-db-monitor-svc.<namespace>:8080/db-tier/status/replication/realtime

    where, <namespace> is the namespace of the cnDBTier cluster.

    The value of replicationStatus in the output indicates if the local site is able to replicate data from that remote site:
    • "UP": Indicates that the local site is able to replicate data from that remote site.
    • "DOWN": Indicates that the local site is not able to replicate data from the respective remote site.
    For example, run the following command to check the georeplication status of cnDBTier cluster2 configured with other remote sites:
    $ kubectl -n cluster2 exec -it ndbmysqld-0 -- curl http://mysql-cluster-db-monitor-svc.cluster2:8080/db-tier/status/replication/realtime
    
    Sample output:
    [
      {
        "localSiteName": "cluster2",
        "remoteSiteName": "cluster1",
        "replicationStatus": "UP",
        "secondsBehindRemote": 0,
        "replicationGroupDelay": [
          {
            "replchannel_group_id": "1",
            "secondsBehindRemote": 0
          }
        ]
      },
      {
        "localSiteName": "cluster2",
        "remoteSiteName": "cluster3",
        "replicationStatus": "UP",
        "secondsBehindRemote": 0,
        "replicationGroupDelay": [
          {
            "replchannel_group_id": "1",
            "secondsBehindRemote": 0
          }
        ]
      },
      {
        "localSiteName": "cluster2",
        "remoteSiteName": "cluster4",
        "replicationStatus": "UP",
        "secondsBehindRemote": 0,
        "replicationGroupDelay": [
          {
            "replchannel_group_id": "1",
            "secondsBehindRemote": 0
          }
        ]
      }
    ]

    In the sample output, replicationStatus is "UP" for the localSiteName cluster2 with remoteSiteName cluster1, cluster3, and cluster4. This indicates that cluster2 is able to replicate data from cluster1, cluster3, and cluster4.

  5. Ensure that TLS is disabled in the custom_values.yaml file for all cnDBTier sites:
    global:
      tls:    
        enable: false
  6. Perform the Upgrading cnDBTier Clusters procedure to upgrade all the cnDBTier sites one after the other using the custom_values.yaml file that you updated in the previous step.
  7. After upgrading each site, run the following command on the site to ensure that the replication is UP:
    $ kubectl -n <namespace> exec -it ndbmysqld-0 -- curl http://mysql-cluster-db-monitor-svc.<namespace>:8080/db-tier/status/replication/realtime

    where, <namespace> is the namespace of the cnDBTier cluster.

    The value of replicationStatus in the output indicates if the local site is able to replicate data from that remote site:
    • "UP": Indicates that the local site is able to replicate data from that remote site.
    • "DOWN": Indicates that the local site is not able to replicate data from the respective remote site.

    For example, see Step 4.

4.4 Upgrading cnDBTier from Non-TLS to TLS Enabled Version (NF Communication)

This section describes the procedure to upgrade cnDBTier clusters from a version where TLS is not enabled in application SQL pods for NF communication to a version where TLS is enabled in application SQL pods for NF communication.

Note:

The namespace name "occne-cndbtier" given in this procedure is only an example. Ensure that you configure the namespace name according to your environment.
  1. Create the necessary secrets in all the cnDBTier sites by following step 8 of the Creating Secrets procedure.
  2. Ensure that /global/ndbappTLS/enable is set to true in the custom_values.yaml file for all cnDBTier sites. This indicates that TLS is enabled in application SQL pods for NF communication:
    global:
      ndbappTLS:    
        enable: true
  3. Provide the necessary certificates by configuring the caCertificate, serverCertificate, and serverCertificateKey parameters in custom_values.yaml file for the respective application SQL pods of all cnDBTier sites:
    ndbappTLS:
      enable: true
      caSecret: cndbtier-ndbapp-trust-store-secret
      serverSecret: cndbtier-ndbapp-server-secret
      tlsversion: "TLSv1.3"
      caCertificate: "<caCertificate file name>"
      serverCertificate: "<serverCertificate file name>"
      serverCertificateKey: "<serverCertificateKey file name>"
    where,
    • <caCertificate file name> is the name of the file containing the CA certificate.
    • <serverCertificate file name> is the name of the file containing the server certificate.
    • <serverCertificateKey file name> is the name of the file containing the server certificate key.
    For example:
    ndbappTLS:
      enable: true
      caSecret: cndbtier-ndbapp-trust-store-secret
      serverSecret: cndbtier-ndbapp-server-secret
      tlsversion: "TLSv1.3"
      caCertificate: "combine-ca.pem"
      serverCertificate: "server1-cert.pem"
      serverCertificateKey: "server1-key.pem"
  4. Perform the Upgrading cnDBTier Clusters procedure to upgrade all the cnDBTier sites one after the other using the custom_values.yaml file that you updated in the previous steps.
  5. At this stage, cnDBTier is upgraded and the application SQL pods are configured with TLS certificates. However, the application SQL pods still allow non-TLS communication. To restrict the communication, you must set the mode of the NF user to X509 so that any NF using the user strictly follows TLS. After the NF user is created, perform the following steps to set the mode of the NF user to X509.

    Note:

    Before performing this step ensure that the NF is ready to communicate with cnDBTier using TLS.
    1. Log in to the ndbappmysqld pod. Enter the password when prompted.
      $ kubectl -n  <namespace of cnDBTier Cluster> exec -it ndbappmysqld-0 -- mysql -h 127.0.0.1 -uroot -p
      Example:
      $ kubectl -n cluster1 exec -it ndbappmysqld-0 -- mysql -h 127.0.0.1 -uroot -p
      Sample output:
      Enter Password:
    2. Run the following command to check for NF-specific user account. If there are no NF-specific user accounts, create new user accounts:
      $ mysql> select user, host  from mysql.user;
      Example:
      +------------------+-----------+
      | user             | host      |
      +------------------+-----------+
      | occnerepluser    | %         |
      | occneuser        | %         |
      | healthchecker    | localhost |
      | mysql.infoschema | localhost |
      | mysql.session    | localhost |
      | mysql.sys        | localhost |
      | root             | localhost |
      | nfuser           | %         |
      +------------------+-----------+
      8 rows in set (0.00 sec)
    3. Before altering the user and granting privileges, run the following command to turn off binlogging on one of the ndbappmysqld pods:
      $ mysql> SET sql_log_bin = OFF;
    4. Run the following commands to alter the user and grant the privileges:
      $ mysql> ALTER USER '<USERNAME>'@'%' REQUIRE X509;   
      $ mysql> FLUSH PRIVILEGES;
    5. Turn on binlogging on the same ndbappmysqld pod before you exit the session:
      $ mysql> SET sql_log_bin = ON;
    6. Exit the session:
      $ mysql> exit;
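The binlog-safe SQL sequence from the steps above can be generated per NF user before being run through the mysql client. This sketch only builds and prints the statements; 'nfuser' is a hypothetical example name, not a user the product creates:

```shell
# Sketch: generate the binlog-safe SQL sequence that enforces X509 for one
# NF user. 'nfuser' is a hypothetical example name; substitute your NF user.
nf_user="nfuser"
sql="SET sql_log_bin = OFF;
ALTER USER '${nf_user}'@'%' REQUIRE X509;
FLUSH PRIVILEGES;
SET sql_log_bin = ON;"
printf '%s\n' "$sql"
```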

4.5 Upgrading cnDBTier from TLS Enabled to Non-TLS Version (NF Communication)

This section describes the procedure to upgrade cnDBTier clusters from a version where TLS is enabled in application SQL pods for NF communication to a version where TLS is not enabled in application SQL pods for NF communication.

Note:

The namespace name "occne-cndbtier" given in this procedure is only an example. Ensure that you configure the namespace name according to your environment.
  1. Perform the following steps to set the mode of the NF user to NONE so that the application SQL pods of cnDBTier accept both TLS and non-TLS communication from the NF that uses the NF user:
    1. Log in to the ndbappmysqld pod. Enter the password when prompted.
      $ kubectl -n  <namespace of cnDBTier Cluster> exec -it ndbappmysqld-0 -- mysql -h 127.0.0.1 -uroot -p
      Example:
      $ kubectl -n cluster1 exec -it ndbappmysqld-0 -- mysql -h 127.0.0.1 -uroot -p
      Sample output:
      Enter Password:
    2. Run the following command to check for NF-specific user account:
      $ mysql> select user, host  from mysql.user;
      Example:
      +------------------+-----------+
      | user             | host      |
      +------------------+-----------+
      | occnerepluser    | %         |
      | occneuser        | %         |
      | healthchecker    | localhost |
      | mysql.infoschema | localhost |
      | mysql.session    | localhost |
      | mysql.sys        | localhost |
      | root             | localhost |
      | nfuser           | %         |
      +------------------+-----------+
      8 rows in set (0.00 sec)
    3. Before altering the user, run the following command to turn off binlogging on one of the ndbappmysqld pods:
      $ mysql> SET sql_log_bin = OFF;
    4. Run the following commands to alter the user and set the TLS requirement to NONE:
      $ mysql> ALTER USER '<USERNAME>'@'%' REQUIRE NONE;   
      $ mysql> FLUSH PRIVILEGES;
    5. Turn on binlogging on the same ndbappmysqld pod before you exit the session:
      $ mysql> SET sql_log_bin = ON;
    6. Exit the session:
      $ mysql> exit;
  2. Ensure that the NF is flexible enough to communicate with cnDBTier using either TLS or non-TLS. Once cnDBTier is upgraded to a non-TLS version, the NF must use non-TLS communication.
  3. Ensure that /global/ndbappTLS/enable is set to false in the custom_values.yaml file for all cnDBTier sites. This indicates that TLS is disabled in application SQL pods for NF communication:
    global:
      ndbappTLS:    
        enable: false
  4. Perform the Upgrading cnDBTier Clusters procedure to upgrade all the cnDBTier sites one after the other using the custom_values.yaml file that you updated in the previous step.

4.6 Upgrading cnDBTier from HTTPS Enabled to HTTPS Enabled Version

This section describes the procedure to upgrade an HTTPS enabled cnDBTier deployment to a newer HTTPS enabled version.

Note:

  • The namespace name "occne-cndbtier" given in this procedure is only an example. Ensure that you configure the namespace name according to your environment.
  • The existing certificates must have been saved, as these will be reused during the upgrade process.
  • In earlier versions, only a p12 file that contained ca-cert.pem, server-cert.pem, and server-key.pem files was required. However, starting from release 25.1.1xx, the following PEM format certificates are supported directly:
    • ca-cert.pem
    • server-cert.pem
    • server-key.pem
    • client-cert.pem
    • client-key.pem
  • The client-cert.pem must be generated using the same CA certificate (ca-cert.pem).
  • An upgrade from release n-1 to n or from n-2 to n requires the earlier PEM format certificates.

Perform the following steps to enable HTTPS:

  1. Before the upgrade, delete the old secrets and create new secrets in the cnDBTier site by following the Creating Secrets section.
    $ kubectl -n <namespace of cnDBTier Cluster> get secrets
    $ kubectl -n <namespace of cnDBTier Cluster> delete secrets cndbtier-https-cert-cred cndbtier-https-cert-file
  2. Change the working directory to /var/occne/cluster/${OCCNE_CLUSTER}/cndbtiercerts, where the certificates are present, combine the CA certificates, and create the secrets:
    $ cd /var/occne/cluster/${OCCNE_CLUSTER}/cndbtiercerts
    $ cat <path/to/ca-cert.pem> > combined-ca-cert.pem
    $ kubectl -n <namespace of cnDBTier Cluster> create secret generic cndbtier-https-cert-file \
    		--from-file=keystore=<path/to/server-keystore.p12> \
    		--from-file=server-cert.pem=<path/to/server-cert.pem> \
    		--from-file=server-key.pem=<path/to/server-key.pem>\
    		--from-file=client-cert.pem=<path/to/client-cert.pem> \
    		--from-file=client-key.pem=<path/to/client-key.pem>\
    		--from-file=combine-ca-cert.pem=<path/to/combined-ca-cert.pem>
    
    $ kubectl -n ${OCCNE_NAMESPACE} create secret generic cndbtier-https-cert-cred --from-literal="keyalias=<serverAliasName>" --from-literal="keystoretype=<certificate-type>" --from-literal="keystorepassword=<password>" --from-literal="clientkeyalias=<clientAliasName>"

    For example:

    $ cd /var/occne/cluster/${OCCNE_CLUSTER}/cndbtiercerts
    $ cat ca-cert.pem > combined-ca-cert.pem
    $ kubectl -n cluster1 create secret generic cndbtier-https-cert-file \
    		--from-file=keystore=server-keystore.p12 \
    		--from-file=server-cert.pem=server-cert.pem \
    		--from-file=server-key.pem=server-key.pem\
    		--from-file=client-cert.pem=client-cert.pem \
    		--from-file=client-key.pem=client-key.pem \
    		--from-file=combine-ca-cert.pem=combined-ca-cert.pem
    $ kubectl -n ${OCCNE_NAMESPACE} create secret generic cndbtier-https-cert-cred --from-literal="keyalias=serveralias" --from-literal="keystoretype=<certificate-type>" --from-literal="keystorepassword=<password>" --from-literal="clientkeyalias=clientalias" 
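Before recreating the secrets, it can help to confirm that every file referenced by the `--from-file` flags actually exists. This is a minimal sketch under the assumption that the file names match those used in this procedure; the directory variable `CNDBTIER_CERT_DIR` is an illustrative placeholder, not a product variable:

```shell
# Sketch: sanity-check that all files referenced by the secret-creation
# commands exist. CNDBTIER_CERT_DIR is a placeholder; in practice use
# /var/occne/cluster/${OCCNE_CLUSTER}/cndbtiercerts
dir="${CNDBTIER_CERT_DIR:-.}"
required="combined-ca-cert.pem server-cert.pem server-key.pem client-cert.pem client-key.pem server-keystore.p12"
missing=0
for f in $required; do
  if [ ! -f "$dir/$f" ]; then
    echo "missing: $f"
    missing=1
  fi
done
if [ "$missing" -eq 0 ]; then
  echo "all certificate files present; safe to create the secrets"
else
  echo "copy or regenerate the missing files before creating the secrets"
fi
```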

4.7 Handling mysql_native_password Plugin Before the Upgrade

From cnDBTier version 25.1.1xx, the mysql_native_password plugin is not supported.

This section explains the steps to migrate MySQL users from the mysql_native_password plugin to the caching_sha2_password plugin before performing the upgrade.

Procedure to migrate

  1. Identifying users not using caching_sha2_password:

    Run the following SQL queries using any ndbmysqld or ndbappmysqld pod to check existing user authentication plugins:

    SELECT user, host, plugin FROM mysql.user;
    SELECT user, host, plugin FROM mysql.user WHERE plugin <> 'caching_sha2_password';
    If all users are already using the caching_sha2_password plugin, skip the rest of the steps. Otherwise, proceed with the conversion steps below.
  2. Generating the ALTER USER commands:

    Use the following bash command to generate the required ALTER USER statements for users that are not using the caching_sha2_password plugin:

    1. Set the environment variables:
      1. Set user="<occneuser>" to the username stored in the secret 'occne-secret-db-monitor-secret'.
      2. Set password="<password>" to the password stored in the secret 'occne-secret-db-monitor-secret'.
      3. Set host="<mysql_localhost>" to '127.0.0.1' for IPv4 or '::1' for IPv6.
    2. Set the namespace and container or pod details:
      1. Set namespace="<namespace>" to the namespace of the cnDBTier deployment.
      2. Set container="<mysql_ndb_cluster_container_name>" to the MySQL NDB cluster container name, adding a prefix if applicable.
      3. Set pod="<ndbappmysqld_pod_name>" to the ndbappmysqld-0 pod name, adding a prefix if applicable.
    3. Start the execution:

      Run the following command:

      echo; echo "SET sql_log_bin = OFF;"; kubectl -n "$namespace" exec -it -c "$container" "$pod" -- bash -c "mysql -N -s -h$host -u$user -p$password -e \"SELECT CONCAT(\\\"ALTER USER '\\\", user, \\\"'@'\\\", host, \\\"' IDENTIFIED WITH caching_sha2_password BY '\<your_password_here\>';\\\") FROM mysql.user WHERE plugin <> 'caching_sha2_password';\" 2>/dev/null" | sed '/healthchecker/d'; echo "FLUSH PRIVILEGES;"; echo "SET sql_log_bin = ON;"; echo
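The generation step above can equivalently be read as a small loop over the `user host plugin` rows returned by the SELECT. In this sketch the rows are a hardcoded sample and '<password>' is left as a placeholder; in practice the rows come from the query in step 1:

```shell
# Sketch: emit ALTER USER statements for every user not already on
# caching_sha2_password. The rows below are a hardcoded sample of what
# `SELECT user, host, plugin FROM mysql.user` returns.
rows="occneuser % mysql_native_password
occnerepluser % mysql_native_password
root localhost caching_sha2_password"
echo "SET sql_log_bin = OFF;"
printf '%s\n' "$rows" | while read -r user host plugin; do
  [ "$plugin" = "caching_sha2_password" ] && continue
  echo "ALTER USER '${user}'@'${host}' IDENTIFIED WITH caching_sha2_password BY '<password>';"
done
echo "FLUSH PRIVILEGES;"
echo "SET sql_log_bin = ON;"
```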
  3. Update the root user (if required):
    1. If the root user is not already using the caching_sha2_password plugin, run the following SQL commands in MySQL:
      SET sql_log_bin=OFF;
       
      -- Grant temporarily required by cndbtier for password propagation
      GRANT NDB_STORED_USER ON *.* TO 'root'@'localhost';
      FLUSH PRIVILEGES;
       
      SET sql_log_bin=ON;
    2. Update the root password plugin.

      Run the following commands to set the root password:

      SET sql_log_bin = OFF;
       
      ALTER USER 'root'@'localhost' IDENTIFIED WITH caching_sha2_password BY '<root_password>';
      FLUSH PRIVILEGES;
       
      SET sql_log_bin = ON;

      Replace <root_password> with the appropriate password.

  4. Remove the temporary root grant:

    Revoke the temporary NDB_STORED_USER grant from the root user:

    SET sql_log_bin=OFF;
     
    REVOKE NDB_STORED_USER ON *.* FROM 'root'@'localhost';
    FLUSH PRIVILEGES;
     
    SET sql_log_bin=ON;
  5. Handle the healthchecker user, if it exists:

    If the healthchecker user exists and is not using caching_sha2_password, then run the following SQL statements:

    SET sql_log_bin=OFF;
     
    GRANT NDB_STORED_USER ON *.* TO 'healthchecker'@'localhost';
    FLUSH PRIVILEGES;
     
    DROP USER IF EXISTS 'healthchecker'@'localhost';
    FLUSH PRIVILEGES;
     
    SET sql_log_bin=ON;
  6. Run the ALTER USER command for all other affected users:

    Run the ALTER USER commands for the remaining users that are not using the caching_sha2_password plugin (for example, occneuser, occnerepluser, NF users):

    SET sql_log_bin = OFF;
     
    SELECT user, host, plugin FROM mysql.user;
     
    ALTER USER 'occneuser'@'%' IDENTIFIED WITH caching_sha2_password BY '<occneuser_password>';
    ALTER USER 'occnerepluser'@'%' IDENTIFIED WITH caching_sha2_password BY '<occnerepluser_password>';
     
    -- Add ALTER USER commands for NF users or any other relevant users
    -- Example:
    -- ALTER USER 'nfuser'@'%' IDENTIFIED WITH caching_sha2_password BY '<nfuser_password>';
     
    FLUSH PRIVILEGES;
    SELECT user, host, plugin FROM mysql.user;
     
    SET sql_log_bin = ON;
  7. Verify across all the pods that the users are using the caching_sha2_password plugin:

    Run the following script to verify that all the users are now using the caching_sha2_password plugin across all the ndbmysqld and ndbappmysqld pods:

    user="occneuser"
    password="<password>"
    host="::1"
     
    for pod in $(
        kubectl -n "$DBTIER_NAMESPACE" get pods -l dbtierapp=ndbmysqld --no-headers -o=custom-columns='NAME:.metadata.name'
        kubectl -n "$DBTIER_NAMESPACE" get pods -l dbtierapp=ndbappmysqld --no-headers -o=custom-columns='NAME:.metadata.name'
    ); do
        echo "Pod: $pod"
        kubectl -n $DBTIER_NAMESPACE exec -it -c mysqlndbcluster "$pod" -- bash -c "
            mysql -h$host -u$user -p$password -e 'SELECT user, host, plugin FROM mysql.user;' 2>/dev/null
        "
        echo
    done

    Replace the <password> field with the actual password for the occneuser.

4.8 Upgrading cnDBTier Clusters

Note:

  • If cnDBTier is configured with a single replication channel group, then the upgrade must be performed using a single replication channel group. Similarly, if cnDBTier is configured with multiple replication channel groups, then the upgrade must be performed using multiple replication channel groups.
  • Starting with cnDBTier 23.4.x, the Upgrade Service Account requires persistentvolumeclaims in its rules.resources. This rule is necessary for the post-rollback hook to delete mysqld PVCs when rolling back to an earlier MySQL release.
  • The db-backup-manager-svc pod automatically restarts when it encounters errors. Therefore, if the service encounters a temporary error during the cnDBTier upgrade, it may restart several times. Once cnDBTier reaches a stable state, the db-backup-manager-svc pod is expected to operate without further restarts.
It is recommended to set the cnDBTier HeartbeatIntervalDbDb value to 1250. If the current value of HeartbeatIntervalDbDb is 5000, then perform the following steps to reduce it to 1250 in two stages:
  1. Modify HeartbeatIntervalDbDb to 2500 in the custom values file (/global/additionalndbconfigurations/ndb/) and perform a cnDBTier upgrade.
  2. Once the upgrade is completed successfully, modify HeartbeatIntervalDbDb to 1250 in the custom values file (/global/additionalndbconfigurations/ndb/) and perform another cnDBTier upgrade.

If the value of HeartbeatIntervalDbDb is 500, then perform the following steps to increase it to 1250 in two stages:

  1. Modify the HeartbeatIntervalDbDb to 900 in custom values file (/global/additionalndbconfigurations/ndb/) and perform a cnDBTier upgrade.
  2. Once the upgrade is completed, modify the HeartbeatIntervalDbDb to 1250 in custom values file (/global/additionalndbconfigurations/ndb/) and perform a cnDBTier upgrade.
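For reference, a minimal sketch of the corresponding custom_values.yaml fragment, assuming the parameter lives under the /global/additionalndbconfigurations/ndb/ path given above (the exact surrounding keys may differ in your values file):

```yaml
global:
  additionalndbconfigurations:
    ndb:
      HeartbeatIntervalDbDb: 2500   # first upgrade; change to 1250 for the second upgrade
```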
When upgrading from any cnDBTier release prior to 23.4.x, you must disable the network policy in the custom_values.yaml file before initiating the upgrade:
global:
  networkpolicy:
    enabled: false

Following the successful upgrade to cnDBTier version 23.4.x, you can optionally re-enable the network policy in the custom_values.yaml file by following the upgrade procedure again:

global:
  networkpolicy:
    enabled: true

Note:

  • If the TLS certificates for replication are being modified while upgrading a cnDBTier cluster from TLS to TLS, refer to the section "Modifying cnDBTier Certificates to Establish TLS Between Georeplication Sites" in Oracle Communications Cloud Network Core, cnDBTier User Guide to add new certificates by retaining existing certificates and then proceed with the upgrade.
  • If the TLS certificates for APP SQL pods are being modified while upgrading a cnDBTier cluster from TLS to TLS, refer to the section "Modifying cnDBTier Certificates to Establish TLS for Communication with NFs" in Oracle Communications Cloud Network Core, cnDBTier User Guide to add new certificates by retaining existing certificates and then proceed with the upgrade.
  • During an upgrade, upgrade only one georedundant cnDBTier site at a time. After the upgrade of a site is complete, move to the next georedundant cnDBTier site.
  • Starting from the 24.3.x release, files can be encrypted in the data nodes by setting EncryptedFileSystem to 1. Therefore, when upgrading from a release prior to 24.3.x or a TDE-disabled setup to a 24.3.x TDE-enabled setup, first create the secret "occne-tde-encrypted-filesystem-secret" by following the section Creating Secrets.
  • The namespace "occne-cndbtier" is provided only as an example. You can configure the namespace name according to your environment.
  • The PVC size must not be changed during an upgrade. If the PVC size needs to be adjusted according to the dimensioning sheet, follow the vertical scaling procedure to modify it.

Assumptions

  • All NDB pods of the cnDBTier cluster are up and running.
  • Helm restricts certain parameters (for example, PVC size) from being changed during an upgrade; such parameters cannot be modified as part of the upgrade.
  • The Start Node ID must be the same as the existing Start Node ID. The Start Node ID can be obtained from the existing cluster using the following command:
    $ kubectl -n ${OCCNE_NAMESPACE} exec ndbmgmd-0 -- ndb_mgm -e show
    Sample output:
    Connected to Management Server at: localhost:1186
    Cluster Configuration
    ---------------------
    [ndbd(NDB)]     2 node(s)
    id=1    @10.233.94.13  (mysql-8.4.4 ndb-8.4.4, Nodegroup: 0, *)
    id=2    @10.233.124.12  (mysql-8.4.4 ndb-8.4.4, Nodegroup: 0)
     
    [ndb_mgmd(MGM)] 2 node(s)
    id=49   @10.233.124.11  (mysql-8.4.4 ndb-8.4.4)
    id=50   @10.233.93.14  (mysql-8.4.4 ndb-8.4.4)
     
    [mysqld(API)]   5 node(s)
    id=56   @10.233.123.15  (mysql-8.4.4 ndb-8.4.4)
    id=57   @10.233.94.14  (mysql-8.4.4 ndb-8.4.4)
    id=70   @10.233.120.20  (mysql-8.4.4 ndb-8.4.4)
    id=71   @10.233.95.22  (mysql-8.4.4 ndb-8.4.4)
    id=222 (not connected, accepting connect from any host)
    id=223 (not connected, accepting connect from any host)
    id=224 (not connected, accepting connect from any host)
    id=225 (not connected, accepting connect from any host)

    As per the above example, the Start Node ID must be 49 for MGM, 56 for georeplication SQL pods, and 70 for non-georeplication SQL pods.

    Note:

    Node IDs 222 to 225 are shown as "not connected" because they are added as empty slot IDs, which are used for georeplication recovery.
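The start node IDs can also be read off programmatically from the ndb_mgm output. The following sketch parses a captured sample of the show output (on a live cluster, capture it with the kubectl command above); the lowest connected ID in each section is the start node ID for that pod type. Note that this simple parser returns only the first SQL ID; the non-georeplication (ndbappmysqld) start ID is the first ID of the second SQL group, which it does not separate:

```shell
# Derive start node ids from a captured 'ndb_mgm -e show' output.
# On a live cluster, capture instead with:
#   show_output=$(kubectl -n ${OCCNE_NAMESPACE} exec ndbmgmd-0 -- ndb_mgm -e show)
show_output='[ndbd(NDB)]     2 node(s)
id=1    @10.233.94.13  (mysql-8.4.4 ndb-8.4.4, Nodegroup: 0, *)
id=2    @10.233.124.12  (mysql-8.4.4 ndb-8.4.4, Nodegroup: 0)

[ndb_mgmd(MGM)] 2 node(s)
id=49   @10.233.124.11  (mysql-8.4.4 ndb-8.4.4)
id=50   @10.233.93.14  (mysql-8.4.4 ndb-8.4.4)

[mysqld(API)]   5 node(s)
id=56   @10.233.123.15  (mysql-8.4.4 ndb-8.4.4)
id=57   @10.233.94.14  (mysql-8.4.4 ndb-8.4.4)
id=70   @10.233.120.20  (mysql-8.4.4 ndb-8.4.4)
id=222 (not connected, accepting connect from any host)'

# Print the first connected id (a line of the form "id=N   @...") after the
# given section header.
first_id() {
    printf '%s\n' "$show_output" \
        | awk -v section="$1" 'index($0, section) { in_sec = 1; next }
                               /^\[/              { in_sec = 0 }
                               in_sec && /^id=[0-9]+[ \t]+@/ {
                                   sub(/^id=/, ""); print $1; exit }'
}

mgm_start=$(first_id '[ndb_mgmd')
api_start=$(first_id '[mysqld')
echo "MGM start node id: $mgm_start"
echo "SQL start node id: $api_start"
```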

Upgrading cnDBTier Cluster

Perform the following procedure to upgrade cnDBTier cluster:

  1. Download the latest cnDBTier packages to the Bastion Host. For more information, refer to the Downloading cnDBTier Package section.
  2. If HTTPS mode and DB encryption were not enabled earlier, then explicitly disable them before the upgrade.

    Configure the https and encryption parameters in the custom_values.yaml file as shown below:

    https:
      enable: false
     
    encryption:
      enable: false
  3. Before upgrading cnDBTier, run the Helm test on the current cnDBTier deployment at all the sites. Proceed with the upgrade only if the Helm test succeeds at all the sites. Verify that the current cnDBTier instance is running correctly by running the following Helm test command on the Bastion Host:
    $ helm test mysql-cluster --namespace ${OCCNE_NAMESPACE}

    Sample output:

    NAME: mysql-cluster
    LAST DEPLOYED: Tue May 20 10:22:58 2025
    NAMESPACE: occne-cndbtier
    STATUS: deployed
    REVISION: 1
    TEST SUITE:     mysql-cluster-node-connection-test
    Last Started:   Tue May 20 14:15:18 2025
    Last Completed: Tue May 20 14:17:58 2025
    Phase:          Succeeded
  4. If the upgrade must be performed with fixed LoadBalancer IP addresses for external services, then find the IP addresses of the current cnDBTier cluster by running the following command:
    $ kubectl get svc -n ${OCCNE_NAMESPACE}

    Configure the loadBalancer IP addresses obtained from the above command in the custom_values.yaml file by referring to the cnDBTier configurations table given in the section Customizing cnDBTier.

  5. If backup encryption needs to be enabled, then create the backup encryption secrets and set the "/global/backupencryption/enable" configuration in the custom_values.yaml file to true.
  6. If password encryption needs to be enabled, then create the password encryption secrets and set the "/global/encryption/enable" configuration in the custom_values.yaml file to true.
  7. If password encryption was enabled in cnDBTier version 23.4.6, then first disable the password encryption by following the Disabling Password Encryption procedure before performing a leapfrog upgrade to version 25.1.1xx.
  8. If the Kubernetes version is 1.25 or later and Kyverno is supported or installed on Kubernetes, then run the appropriate command:
    • If ASM or Istio is installed or running on Kubernetes, then run the following command:
      $ kubectl apply -f namespace/occndbtier_kyvernopolicy_asm_${OCCNE_VERSION}.yaml -n ${OCCNE_NAMESPACE}
    • If ASM or Istio is not installed or running on Kubernetes, then run the following command:
      $ kubectl apply -f namespace/occndbtier_kyvernopolicy_nonasm_${OCCNE_VERSION}.yaml -n ${OCCNE_NAMESPACE}
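The choice between the two manifests can be scripted. A minimal sketch, assuming a locally determined flag for the presence of ASM/Istio (the flag and the example version are illustrative, not part of the product):

```shell
# Pick the Kyverno policy manifest based on whether a service mesh is present.
OCCNE_VERSION="25.1.103"     # example value; use your actual package version
service_mesh_present="false" # assumed flag; detect ASM/Istio in your environment

if [ "$service_mesh_present" = "true" ]; then
    policy_file="namespace/occndbtier_kyvernopolicy_asm_${OCCNE_VERSION}.yaml"
else
    policy_file="namespace/occndbtier_kyvernopolicy_nonasm_${OCCNE_VERSION}.yaml"
fi
echo "kubectl apply -f $policy_file -n \${OCCNE_NAMESPACE}"
```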
      
  9. If secure transfer of backups to a remote server needs to be enabled, then create the remote server user name and private key secrets, and configure the following parameters in the custom_values.yaml file to enable the secure transfer of backups:
    • /global/remotetransfer/enable
    • /global/remotetransfer/faultrecoverybackuptransfer
    • /global/remotetransfer/remoteserverip
    • /global/remotetransfer/remoteserverport
    • /global/remotetransfer/remoteserverpath
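A sketch of how these parameters might look in the custom_values.yaml file, assuming the slash-separated paths above map one-to-one onto YAML keys (all values shown are placeholders):

```yaml
global:
  remotetransfer:
    enable: true
    faultrecoverybackuptransfer: true
    remoteserverip: "203.0.113.10"        # placeholder
    remoteserverport: "22"                # placeholder
    remoteserverpath: "/backups/cndbtier" # placeholder
```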

Automated cnDBTier upgrade requires a Service Account for pod rolling restarts and patch operations. If you choose the automated cnDBTier upgrade with the Service Account, follow Upgrading cnDBTier Clusters with an Upgrade Service Account. If you choose to upgrade cnDBTier manually without using any Service Account, follow Upgrading cnDBTier Clusters without an Upgrade Service Account.

4.8.1 Upgrading cnDBTier Clusters with an Upgrade Service Account

Perform the following procedure to upgrade cnDBTier clusters with an Upgrade Service Account:

Note:

The namespace "occne-cndbtier" is provided only as an example. You must configure the namespace name according to your environment.
  1. Create an Upgrade Service Account manually if one does not exist, so that cnDBTier does not create an automated service account using Helm charts. If you already have a manually created Service Account with the right role (which you can verify in namespace.yaml or rbac.yaml) to use for the upgrade, skip this step. Do not perform this step if you want to create the service account using cnDBTier Helm charts.
    1. Set the ${OCCNE_RELEASE_NAME} environment variable to the Helm RELEASE_NAME value, which you can find in the NAME column when you run the command helm -n ${OCCNE_NAMESPACE} list.
      export OCCNE_RELEASE_NAME="mysql-cluster"
    2. Update the Service Account, Role, and Rolebinding for the upgrade in the namespace/rbac.yaml file. Depending on the CSAR package type, the namespace directory can be found at either the /Artifacts/Scripts/ or /Scripts/ relative path.
      sed -i "s/occne-cndbtier/${OCCNE_NAMESPACE}/" namespace/occndbtier_rbac_${OCCNE_VERSION}.yaml
      sed -i "s/mysql-cluster/${OCCNE_RELEASE_NAME}/" namespace/occndbtier_rbac_${OCCNE_VERSION}.yaml
      sed -i "s/cndbtier-upgrade/${OCCNE_RELEASE_NAME}-upgrade/" namespace/occndbtier_rbac_${OCCNE_VERSION}.yaml
      sed -i "s/rolebinding/role/" namespace/occndbtier_rbac_${OCCNE_VERSION}.yaml
    3. Create an Upgrade Service Account, Upgrade Role, and Upgrade Rolebinding by running the following command:
      kubectl -n ${OCCNE_NAMESPACE} apply -f namespace/occndbtier_rbac_${OCCNE_VERSION}.yaml
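The effect of the four substitutions above can be previewed without touching the real file. The following dry-run sketch applies the same sed expressions to a miniature stand-in for the rbac manifest (the stand-in content and the example names are illustrative only):

```shell
# Dry run of the rbac substitutions against a miniature stand-in manifest.
# The stand-in lines and example names below are illustrative only.
OCCNE_NAMESPACE="occne-dbtier-site1"
OCCNE_RELEASE_NAME="mysql-cluster-site1"

rbac='namespace: occne-cndbtier
release: mysql-cluster
name: cndbtier-upgrade-serviceaccount
kind: rolebinding'

patched=$(printf '%s\n' "$rbac" \
    | sed "s/occne-cndbtier/${OCCNE_NAMESPACE}/" \
    | sed "s/mysql-cluster/${OCCNE_RELEASE_NAME}/" \
    | sed "s/cndbtier-upgrade/${OCCNE_RELEASE_NAME}-upgrade/" \
    | sed "s/rolebinding/role/")
printf '%s\n' "$patched"
```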
  2. Configure the Upgrade Service Account in your custom_values.yaml file. Follow one of these options:
    1. Set the ${OCCNE_RELEASE_NAME} environment variable to the Helm RELEASE_NAME value, which you can find in the NAME column when you run the command helm -n ${OCCNE_NAMESPACE} list.
      export OCCNE_RELEASE_NAME="mysql-cluster"
      cd /var/occne/cluster/${OCCNE_CLUSTER}
      Update the Service Account information in your custom_values.yaml file.
      sed -i "/  serviceAccountForUpgrade:/,/^$/ { /name:/ s/cndbtier-upgrade-serviceaccount/${OCCNE_RELEASE_NAME}-upgrade-serviceaccount/; }" occndbtier/custom_values.yaml
    2. If you have previously created a service account manually, modify your custom_values.yaml file to set the global.serviceAccountForUpgrade.create parameter to false and the global.serviceAccountForUpgrade.name parameter to the name of your service account. If you created the service account using Helm in the previous release, retain the same configuration for the global.serviceAccountForUpgrade parameters in the custom_values.yaml file.
    3. If you are upgrading from cnDBTier 24.1.x, a fresh installation of 23.4.x or 23.3.x, or a 23.4.x deployment that was upgraded from a fresh installation of 23.3.x, you already have a Service Account for upgrade, and you can keep the custom_values.yaml file with the same values used for the previous installation or upgrade of cnDBTier.
  3. Upgrade cnDBTier by running the following commands:

    Set the ${OCCNE_RELEASE_NAME} environment variable to the Helm RELEASE_NAME value, which is found in the NAME column when you run the following command:

    helm -n ${OCCNE_NAMESPACE} list
    export OCCNE_RELEASE_NAME="mysql-cluster"
     
    cd /var/occne/cluster/${OCCNE_CLUSTER}
     
    helm -n ${OCCNE_NAMESPACE} upgrade ${OCCNE_RELEASE_NAME} occndbtier -f occndbtier/custom_values.yaml
  4. Wait for all the MGM and NDB pods to restart.
  5. Perform a second rollout restart of the NDB pods to apply the new HeartbeatIntervalDbDb value. This step is required only if you are upgrading from a release that does not have this support.
    kubectl -n $DBTIER_NAMESPACE rollout restart statefulset ndbmtd
  6. Wait for all the NDB pods to restart.
  7. Verify the cluster status from the management pod by running the following command:
    $ kubectl -n ${OCCNE_NAMESPACE} exec -it ndbmgmd-0 -- ndb_mgm -e show

    Sample output:

    Connected to Management Server at: localhost:1186
    Cluster Configuration
    ---------------------
    [ndbd(NDB)]     2 node(s)
    id=1    @10.233.104.176  (mysql-8.4.4 ndb-8.4.4, Nodegroup: 0)
    id=2    @10.233.121.175  (mysql-8.4.4 ndb-8.4.4, Nodegroup: 0, *)
     
    [ndb_mgmd(MGM)] 2 node(s)
    id=49   @10.233.101.154  (mysql-8.4.4 ndb-8.4.4)
    id=50   @10.233.104.174  (mysql-8.4.4 ndb-8.4.4)
     
    [mysqld(API)]   8 node(s)
    id=56   @10.233.92.169  (mysql-8.4.4 ndb-8.4.4)
    id=57   @10.233.101.155  (mysql-8.4.4 ndb-8.4.4)
    id=70   @10.233.92.170  (mysql-8.4.4 ndb-8.4.4)
    id=71   @10.233.121.176  (mysql-8.4.4 ndb-8.4.4)
    id=222 (not connected, accepting connect from any host)
    id=223 (not connected, accepting connect from any host)
    id=224 (not connected, accepting connect from any host)
    id=225 (not connected, accepting connect from any host)

    Note:

    Node IDs 222 to 225 are shown as "not connected" because they are added as empty slot IDs, which are used for georeplication recovery.
  8. Run the following Helm test command to verify that the cnDBTier services are upgraded successfully:
    $ helm test mysql-cluster --namespace ${OCCNE_NAMESPACE}

    Sample output:

    NAME: mysql-cluster
    LAST DEPLOYED: Tue May 20 10:22:58 2025
    NAMESPACE: occne-cndbtier
    STATUS: deployed
    REVISION: 1
    TEST SUITE:     mysql-cluster-node-connection-test
    Last Started:   Tue May 20 14:15:18 2025
    Last Completed: Tue May 20 14:17:58 2025
    Phase:          Succeeded
  9. After the successful cnDBTier upgrade, follow the Postinstallation Tasks.

4.8.2 Upgrading cnDBTier Clusters without an Upgrade Service Account

Perform the following procedure to upgrade cnDBTier clusters without an Upgrade Service Account:

Note:

The namespace "occne-cndbtier" is provided only as an example. You must configure the namespace name according to your environment.
  1. Configure your custom_values.yaml file to not use an Upgrade Service Account.
    cd /var/occne/cluster/${OCCNE_CLUSTER}
    Update the Service Account information in your custom_values.yaml file:
    sed -i "/  serviceAccountForUpgrade:/,/^$/ { /create:/ s/true/false/; /name:/ s/cndbtier-upgrade-serviceaccount//; }" occndbtier/custom_values.yaml
    Alternatively, edit your custom_values.yaml file and manually set global.serviceAccountForUpgrade.create to false and global.serviceAccountForUpgrade.name to "" (empty).
  2. Run the preupgrade script if you are upgrading from a previous cnDBTier release or if you need to enable or disable password encryption. This upgrades the schema.
    Run the following command on the Bastion host to apply the schema changes:

    Note:

    Enabling or disabling encryption may cause a brief disruption to the replication between the sites if a switchover happens between step 2 and step 4. Therefore, perform steps 3 and 4 immediately after completing step 2.
    # replace the values for the environment variables below with the correct ones for your cluster
    export OCCNE_NAMESPACE="occne-cndbtier"
    export MYSQL_CONNECTIVITY_SERVICE="mysql-connectivity-service"
    export MYSQL_USERNAME="occneuser"
    export MYSQL_PASSWORD="<password for the user occneuser>"
    export DBTIER_REPLICATION_SVC_DATABASE="replication_info"
    export DBTIER_BACKUP_SVC_DATABASE="backup_info"
    export DBTIER_HBREPLICAGROUP_DATABASE="hbreplica_info"
    export DBTIER_CLUSTER_INFO_DATABASE="cluster_info"
    export REPLCHANNEL_GROUP_COUNT=<configured number of replication channel groups i.e either 1/2/3>
    export MYSQL_CMD="kubectl -n <namespace> exec <ndbmysqld-0/ndbappmysqld-0 pod name> -- mysql --binary-as-hex=0 --show-warnings"
     
    # To enable or disable password encryption, uncomment the below line and set the variable to true or false to enable or disable it.
    # export ENABLE_ENCRYPTION="<true/false>"
     
    occndbtier/files/hooks.sh --schema-upgrade
  3. Perform the following steps on the Bastion Host to run the preupgrade procedure.
    1. Replace the values for the following environment variables with the correct ones for your cluster:
      export OCCNE_NAMESPACE="occne-cndbtier"
    2. If a pod prefix is being used, then the prefix must be added to the following environment variables for both the pod and StatefulSet names. Set the variables with or without the prefix as applicable, and then run the preupgrade hook:

      # With a pod prefix, for example:
      # export NDB_MGMD_PODS="<global.k8sResource.pod.prefix>-ndbmgmd-0 <global.k8sResource.pod.prefix>-ndbmgmd-1"
      # export APP_STS_NAME="<global.k8sResource.pod.prefix>-ndbappmysqld"
      # Without a pod prefix:
      export NDB_MGMD_PODS="ndbmgmd-0 ndbmgmd-1"
      export APP_STS_NAME="ndbappmysqld"
       
      occndbtier/files/hooks.sh --pre-upgrade
  4. Upgrade cnDBTier by running the following commands:

    Set the ${OCCNE_RELEASE_NAME} environment variable with the Helm value of RELEASE_NAME which you can find in the NAME column when you run this command: helm -n ${OCCNE_NAMESPACE} list:

    export OCCNE_RELEASE_NAME="mysql-cluster"
    cd /var/occne/cluster/${OCCNE_CLUSTER}
    helm -n ${OCCNE_NAMESPACE} upgrade ${OCCNE_RELEASE_NAME} occndbtier -f occndbtier/custom_values.yaml --no-hooks
  5. Run the post-upgrade script. It deletes all the MGM pods, waits for them to come up, and patches the upgradeStrategy.
    # replace the values for the environment variables below with the correct ones for your cluster
    export OCCNE_NAMESPACE="occne-cndbtier"
    #export API_EMP_TRY_SLOTS_NODE_IDS="id=222"
    export API_EMP_TRY_SLOTS_NODE_IDS="id=222\|id=223\|id=224\|id=225"
    export MGM_NODE_IDS="id=49\|id=50"
    # export all the ndbmtd node ids in the below env variable
    export NDB_NODE_IDS="id=1\|id=2"
    # Export all the ndbmysqld node ids in the below env variable
    # ndbmysqld node_ids starts at global.api.startNodeId and ends at (global.api.startNodeId + global.apiReplicaCount - 1)
    export API_NODE_IDS="id=56\|id=57"
    # if a pod prefix is being used, then the prefix must be added to the below env variables for both the pod and stateful set names
    # example: export NDB_MGMD_PODS="<global.k8sResource.pod.prefix>-ndbmgmd-0 <global.k8sResource.pod.prefix>-ndbmgmd-1"
    # export NDB_STS_NAME="<global.k8sResource.pod.prefix>-ndbmtd"
    export NDB_MGMD_PODS="ndbmgmd-0 ndbmgmd-1"
    export NDB_MTD_PODS="ndbmtd-0 ndbmtd-1"
    export NDB_STS_NAME="ndbmtd"
    export API_STS_NAME="ndbmysqld"
    export APP_STS_NAME="ndbappmysqld"
     
    #If auto scaling for ndbapp sts is enabled then declare the below env variables(NDBAPP_START_NODE_ID and NDBAPP_REPLICA_MAX_COUNT)
    #export NDBAPP_START_NODE_ID="<as configured in values.yaml: global.ndbapp.startNodeId>"
    #export NDBAPP_REPLICA_MAX_COUNT="<as configured in values.yaml: global.ndbappReplicaMaxCount>"    
     
    # If values.global.ndbapp.ndb_cluster_connection_pool is greater than 1, then declare the below env variable (APP_CON_POOL_INGORE_NODE_IDS)
    export APP_CON_POOL_INGORE_NODE_IDS="id=100\|id=101\|id=102 ... \|id=(n-1)\|id=n"
    # Here, n is calculated as n = 100 + (((values.global.ndbapp.ndb_cluster_connection_pool - 1) * values.global.ndbappReplicaMaxCount) - 1)
     
     
    occndbtier/files/hooks.sh --post-upgrade
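The alternation strings above follow a fixed pattern, so they can be generated rather than typed by hand. The following sketch derives APP_CON_POOL_INGORE_NODE_IDS from the two values.yaml settings using the formula in the comment above (the pool and replica counts are assumed example values):

```shell
# Build the APP_CON_POOL_INGORE_NODE_IDS alternation string from the formula
#   n = 100 + (((ndb_cluster_connection_pool - 1) * ndbappReplicaMaxCount) - 1)
pool=2       # values.global.ndbapp.ndb_cluster_connection_pool (example value)
max_count=4  # values.global.ndbappReplicaMaxCount (example value)

n=$((100 + (((pool - 1) * max_count) - 1)))
ids=""
for id in $(seq 100 "$n"); do
    ids="${ids:+${ids}\\|}id=${id}"   # join entries with a literal \|
done
export APP_CON_POOL_INGORE_NODE_IDS="$ids"
printf '%s\n' "$APP_CON_POOL_INGORE_NODE_IDS"
```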
  6. Wait for all the MGM and NDB pods to restart.
  7. Perform a second rollout restart of the NDB pods to apply the new HeartbeatIntervalDbDb value. This step is required only if you are upgrading from a release that does not have this support.
    kubectl -n $DBTIER_NAMESPACE rollout restart statefulset ndbmtd
  8. Wait for all the NDB pods to restart.
  9. Verify the cluster status from the management pod by running the following command:
    $ kubectl -n ${OCCNE_NAMESPACE} exec -it ndbmgmd-0 -- ndb_mgm -e show

    Sample output:

    Connected to Management Server at: localhost:1186
    Cluster Configuration
    ---------------------
    [ndbd(NDB)]     2 node(s)
    id=1    @10.233.104.176  (mysql-8.4.4 ndb-8.4.4, Nodegroup: 0)
    id=2    @10.233.121.175  (mysql-8.4.4 ndb-8.4.4, Nodegroup: 0, *)
     
    [ndb_mgmd(MGM)] 2 node(s)
    id=49   @10.233.101.154  (mysql-8.4.4 ndb-8.4.4)
    id=50   @10.233.104.174  (mysql-8.4.4 ndb-8.4.4)
     
    [mysqld(API)]   8 node(s)
    id=56   @10.233.92.169  (mysql-8.4.4 ndb-8.4.4)
    id=57   @10.233.101.155  (mysql-8.4.4 ndb-8.4.4)
    id=70   @10.233.92.170  (mysql-8.4.4 ndb-8.4.4)
    id=71   @10.233.121.176  (mysql-8.4.4 ndb-8.4.4)
    id=222 (not connected, accepting connect from any host)
    id=223 (not connected, accepting connect from any host)
    id=224 (not connected, accepting connect from any host)
    id=225 (not connected, accepting connect from any host)

    Note:

    Node IDs 222 to 225 are shown as "not connected" because they are added as empty slot IDs, which are used for georeplication recovery.
  10. Run the following Helm test command to verify that the cnDBTier services are upgraded successfully:
    $ helm test mysql-cluster --namespace ${OCCNE_NAMESPACE}
    NAME: mysql-cluster
    LAST DEPLOYED: Tue May 20 10:22:58 2025
    NAMESPACE: occne-cndbtier
    STATUS: deployed
    REVISION: 1
    TEST SUITE:     mysql-cluster-node-connection-test
    Last Started:   Tue May 20 14:15:18 2025
    Last Completed: Tue May 20 14:17:58 2025
    Phase:          Succeeded
  11. After the successful cnDBTier upgrade, follow the Postinstallation Tasks section.