7 Maintenance Procedures

This section provides the procedures for managing and maintaining cnDBTier.

Note:

  • The "occne-cndbtier" namespace name used in the procedures is only an example. Ensure that you configure the namespace name according to your environment.
  • The OCCNE_NAMESPACE variable in the maintenance procedures is only an example. Before running any command that contains the OCCNE_NAMESPACE variable, ensure that you have set this variable to the cnDBTier namespace as stated in the following code block:
    export OCCNE_NAMESPACE=<namespace>

    where, <namespace> is the cnDBTier namespace.

  • cnDBTier does not support monitoring or raising alarms for certificate expiry. Ensure that you track the certificates and renew them before they expire. Refer to the corresponding certificate renewal procedures depending on the certificates that you want to renew.
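    As a minimal tracking sketch (not one of the renewal procedures), you can check the expiry date of a certificate stored in a Kubernetes secret; the secret name and key below are placeholders:
    $ kubectl -n occne-cndbtier get secret <tls secret name> -o jsonpath='{.data.tls\.crt}' | base64 -d | openssl x509 -noout -enddate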

Note:

If a script fails after completing multiple phases of an operation, use the manual procedures to continue from the phase where the script stopped, or initiate a fatal georeplication recovery (GRR) to recover the setup.

7.1 Starting or Stopping cnDBTier

This section provides the procedures to start or stop cnDBTier.

7.1.1 Starting cnDBTier

To start cnDBTier, perform the following steps:
  1. Run the following command to scale up the management nodes:
    $ kubectl -n <CNDBTIER_NAMESPACE_NAME> scale sts ndbmgmd --replicas=<replica count for MGM pods>
    Example:
    $ kubectl -n occne-cndbtier scale sts ndbmgmd --replicas=2
  2. Run the following command to scale up the data nodes:
    $ kubectl -n <CNDBTIER_NAMESPACE_NAME> scale sts ndbmtd --replicas=<replica count for DATA pods>
    Example:
    $ kubectl -n occne-cndbtier scale sts ndbmtd --replicas=4
  3. Run the following command to scale up the non-georeplication SQL nodes:
    $ kubectl -n <CNDBTIER_NAMESPACE_NAME> scale sts ndbappmysqld --replicas=<replica count for non geo SQL pods>
    
    Example:
    $ kubectl -n cluster1 scale sts ndbappmysqld --replicas=2
  4. Run the following command to scale up the SQL nodes:
    $ kubectl -n <CNDBTIER_NAMESPACE_NAME> scale sts ndbmysqld --replicas=<replica count for SQL pods>
    Example:
    $ kubectl -n cluster1 scale sts ndbmysqld --replicas=2
  5. Run the following commands to scale up db-monitor-svc, db-replication-svc, and db-backup-manager-svc:
    1. To scale up the db-monitor-svc:
      $ kubectl -n <CNDBTIER_NAMESPACE_NAME> scale deploy <cnDBTier monitor svc deployment name> --replicas=1
      Example:
      $ kubectl -n occne-cndbtier scale deploy mysql-cluster-db-monitor-svc --replicas=1
    2. To scale up the db-replication-svc:
      $ kubectl -n <CNDBTIER_NAMESPACE_NAME> scale deploy <cnDBTier replication svc deployment name> --replicas=1
      Example:
      $ kubectl -n occne-cndbtier scale deploy mysql-cluster-Cluster1-Cluster2-replication-svc --replicas=1
    3. To scale up the db-backup-manager-svc:
      $ kubectl -n <CNDBTIER_NAMESPACE_NAME> scale deploy <cnDBTier backup manager svc deployment name> --replicas=1
      Example:
      $ kubectl -n occne-cndbtier scale deploy mysql-cluster-db-backup-manager-svc --replicas=1
  6. Run the following command on each mate site to check the replication status:
    $ kubectl -n <namespace of cnDBTier cluster> exec -it ndbmysqld-0 -- curl http://mysql-cluster-db-monitor-svc.<namespace of cnDBTier cluster>:8080/db-tier/status/replication/realtime
    
    The value of replicationStatus in the output indicates whether the local site is able to replicate data from that remote site:
    • "UP": Indicates that the local site is able to replicate data from that remote site.
    • "DOWN": Indicates that the local site is not able to replicate data from the respective remote site. In this case, perform a georeplication recovery. For georeplication recovery procedure, see the "Restoring Georeplication Failure" section in Oracle Communications Cloud Native Core, cnDBTier Installation, Upgrade, and Fault Recovery Guide.
    For example, run the following command to check the georeplication status of cnDBTier cluster2 configured with other remote sites:
    $ kubectl -n cluster2 exec -it ndbmysqld-0 -- curl http://mysql-cluster-db-monitor-svc.cluster2:8080/db-tier/status/replication/realtime
    
    Sample output:
    [
      {
        "localSiteName": "cluster2",
        "remoteSiteName": "cluster1",
        "replicationStatus": "UP",
        "secondsBehindRemote": 0,
        "replicationGroupDelay": [
          {
            "replchannel_group_id": "1",
            "secondsBehindRemote": 0
          }
        ]
      },
      {
        "localSiteName": "cluster2",
        "remoteSiteName": "cluster3",
        "replicationStatus": "UP",
        "secondsBehindRemote": 0,
        "replicationGroupDelay": [
          {
            "replchannel_group_id": "1",
            "secondsBehindRemote": 0
          }
        ]
      },
      {
        "localSiteName": "cluster2",
        "remoteSiteName": "cluster4",
        "replicationStatus": "UP",
        "secondsBehindRemote": 0,
        "replicationGroupDelay": [
          {
            "replchannel_group_id": "1",
            "secondsBehindRemote": 0
          }
        ]
      }
    ]    

    In the sample output, the replicationStatus is "UP" for the localSiteName cluster2 with respect to the remoteSiteName cluster1, cluster3, and cluster4. This indicates that the localSiteName cluster2 is able to replicate data from the remoteSiteName cluster1, cluster3, and cluster4.
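
    Optionally, you can summarize the status per remote site. The following is a minimal sketch, assuming that the jq utility is available on the host where you run kubectl:
    $ kubectl -n cluster2 exec ndbmysqld-0 -- curl -s http://mysql-cluster-db-monitor-svc.cluster2:8080/db-tier/status/replication/realtime | jq -r '.[] | "\(.remoteSiteName): \(.replicationStatus)"'
    For the sample output shown above, this prints "cluster1: UP", "cluster3: UP", and "cluster4: UP".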

Note:

If georeplication is enabled, then run this procedure on all sites.

7.1.2 Stopping cnDBTier

To stop cnDBTier, perform the following steps:
  1. Run the following command to scale down the non-georeplication SQL nodes:
    $ kubectl -n <CNDBTIER_NAMESPACE_NAME> scale sts ndbappmysqld --replicas=0
    For example:
    $ kubectl -n cluster1 scale sts ndbappmysqld --replicas=0
  2. Run the following commands to scale down db-monitor-svc, db-replication-svc, and db-backup-manager-svc:
    1. Run the following commands to scale down db-monitor-svc:
      $ kubectl -n <CNDBTIER_NAMESPACE_NAME> scale deploy <cnDBTier monitor svc deployment name> --replicas=0
      For example:
      $ kubectl -n cluster1 scale deploy mysql-cluster-db-monitor-svc --replicas=0
    2. Run the following commands to scale down db-replication-svc:
      $ kubectl -n <CNDBTIER_NAMESPACE_NAME> scale deploy <cnDBTier replication svc deployment name> --replicas=0
      For example:
      $ kubectl -n cluster1 scale deploy mysql-cluster-Cluster1-Cluster2-replication-svc --replicas=0
    3. Run the following commands to scale down db-backup-manager-svc:
      $ kubectl -n <CNDBTIER_NAMESPACE_NAME> scale deploy <cnDBTier backup manager svc deployment name> --replicas=0
      For example:
      $ kubectl -n cluster1 scale deploy mysql-cluster-db-backup-manager-svc --replicas=0
  3. Run the following command to scale down the SQL nodes:
    $ kubectl -n <CNDBTIER_NAMESPACE_NAME> scale sts ndbmysqld --replicas=0
    For example:
    $ kubectl -n cluster1 scale sts ndbmysqld --replicas=0
  4. Run the following command to scale down the data nodes:
    $ kubectl -n <CNDBTIER_NAMESPACE_NAME> scale sts ndbmtd --replicas=0
    For example:
    $ kubectl -n cluster1 scale sts ndbmtd --replicas=0
  5. Run the following command to scale down the management nodes:
    $ kubectl -n <CNDBTIER_NAMESPACE_NAME> scale sts ndbmgmd --replicas=0
    For example:
    $ kubectl -n cluster1 scale sts ndbmgmd --replicas=0
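
After completing these steps, you can optionally confirm that all the cnDBTier StatefulSet pods are stopped. The following is a minimal check, assuming the example namespace cluster1:
$ kubectl -n cluster1 get pods | grep -E 'ndbmgmd|ndbmtd|ndbmysqld|ndbappmysqld'
No output from this command indicates that the corresponding pods are scaled down.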

7.2 Starting or Stopping cnDBTier Georeplication Service

This section provides the procedures to start or stop cnDBTier georeplication service.

7.2.1 Starting cnDBTier Georeplication Between Sites

This section provides the procedure to start cnDBTier georeplication service using the cnDBTier switchover APIs.

To start georeplication between the two sites, perform the switchover on each site with respect to the other site.

Following are some example scenarios:

  1. In a 2 site scenario, to start the georeplication between site 1 and site 2, perform the following steps:
    1. Start switchover on site 1 with respect to site 2.
    2. Start switchover on site 2 with respect to site 1.
  2. In a 3 site scenario, to start the georeplication between site 1, site 2, and site 3, perform the following steps:
    1. Start switchover on site 1 with respect to site 2.
    2. Start switchover on site 1 with respect to site 3.
    3. Start switchover on site 2 with respect to site 1.
    4. Start switchover on site 2 with respect to site 3.
    5. Start switchover on site 3 with respect to site 1.
    6. Start switchover on site 3 with respect to site 2.
  3. In a 4 site scenario, to start the georeplication between site 1, site 2, site 3, and site 4, perform the following steps:
    1. Start switchover on site 1 with respect to site 2.
    2. Start switchover on site 1 with respect to site 3.
    3. Start switchover on site 1 with respect to site 4.
    4. Start switchover on site 2 with respect to site 1.
    5. Start switchover on site 2 with respect to site 3.
    6. Start switchover on site 2 with respect to site 4.
    7. Start switchover on site 3 with respect to site 1.
    8. Start switchover on site 3 with respect to site 2.
    9. Start switchover on site 3 with respect to site 4.
    10. Start switchover on site 4 with respect to site 1.
    11. Start switchover on site 4 with respect to site 2.
    12. Start switchover on site 4 with respect to site 3.

cnDBTier provides REST APIs to start the switchover, which can be used to make different API calls depending on your requirement:

Note:

Ensure that you perform the switchover on all the sites to start the replication. For example, if you want to start the georeplication between site 1 and site 2, run the switchover API call on both site 1 and site 2.

Use any one of the following APIs based on your requirement:

  1. PUT /ocdbtier/georeplication/switchover/start/sitename/{siteName}: This call allows you to start the replication switchover on a site (siteName) with respect to all the other sites.
  2. PUT /ocdbtier/georeplication/switchover/start/sitename/{siteName}/remotesitename/{remoteSiteName}: This call allows you to start the replication switchover on a site (siteName) with respect to a remote site (remoteSiteName).
  3. PUT /ocdbtier/georeplication/switchover/start/sitename/{siteName}/remotesitename/{remoteSiteName}/replchannelgroupid/{replChannelGroupId}: This call allows you to start the replication switchover on a site (siteName) with respect to a remote site (remoteSiteName) for a channel group (replChannelGroupId).
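
The following is a minimal scripting sketch, not part of the official procedure, that starts the switchover on every ordered pair of sites in a three-site setup. The IP:port values are placeholders for each site's replication service LoadBalancer address; obtain the actual values as described in steps 1 and 2 of the procedure below.

# Sketch only: start the replication switchover on each site with respect to
# every other site. Replace the placeholder addresses with the LoadBalancer
# IP and port of the replication service on each site.
declare -A REPL_SVC=(
  [cluster1]="10.233.51.13:80"
  [cluster2]="10.233.52.23:80"
  [cluster3]="10.233.53.31:80"
)
for site in "${!REPL_SVC[@]}"; do
  for remote in "${!REPL_SVC[@]}"; do
    [ "$site" = "$remote" ] && continue
    curl -X PUT "http://${REPL_SVC[$site]}/ocdbtier/georeplication/switchover/start/sitename/${site}/remotesitename/${remote}"
    echo
  done
done
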
To start the cnDBTier georeplication service using the switchover API, perform the following:
  1. Run the following command to get the LoadBalancer IP of the replication service on the site:
    $ IP=$(kubectl get svc -n <namespace> | grep repl | awk '{print $4}' | head -n 1 )

    where, <namespace> is the namespace of the failed site.

    For example, run the following command to get the LoadBalancer IP of the replication service on cluster1:
    $ IP=$(kubectl get svc -n cluster1 | grep repl | awk '{print $4}' | head -n 1 )
  2. Run the following command to get the LoadBalancer Port of the replication service on the site:
    $ PORT=$(kubectl get svc -n <namespace> | grep repl | awk '{print $5}' |  cut -d '/' -f 1 |  cut -d ':' -f 1 | head -n 1)

    where, <namespace> is the namespace of the failed site.

    For example, run the following command to get the LoadBalancer port of the replication service on cluster1:
    $ PORT=$(kubectl get svc -n cluster1 | grep repl | awk '{print $5}' |  cut -d '/' -f 1 |  cut -d ':' -f 1 | head -n 1)
  3. To start the replication service in cnDBTier with respect to siteName:

    Run the following command to start the replication service in cnDBTier with respect to siteName:

    Note:

    Replace $IP and $PORT in the command with the LoadBalancer IP and port numbers obtained from steps 1 and 2.
    $ curl -X PUT http://$IP:$PORT/ocdbtier/georeplication/switchover/start/sitename/{siteName}
    For example, run the following command to start the replication service in cnDBTier with respect to cluster1 on all the other mate sites:
    $ curl -X PUT http://$IP:$PORT/ocdbtier/georeplication/switchover/start/sitename/cluster1
    Sample output:
    {
        "replicationSwitchOver":"start"
    }
  4. To start the replication service in cnDBTier with respect to siteName and remoteSiteName:

    Run the following command to start the replication service in cnDBTier with respect to siteName and remoteSiteName:

    Note:

    Replace $IP and $PORT in the command with the LoadBalancer IP and port numbers obtained from steps 1 and 2.
    $ curl -X PUT http://$IP:$PORT/ocdbtier/georeplication/switchover/start/sitename/{siteName}/remotesitename/{remoteSiteName}
    For example, run the following command to start the replication service in cnDBTier with respect to cluster1 and cluster2:
    $ curl -X PUT http://$IP:$PORT/ocdbtier/georeplication/switchover/start/sitename/cluster1/remotesitename/cluster2
    Sample output:
    {
        "replicationSwitchOver":"start"
    }
  5. To start the replication service in cnDBTier with respect to siteName, remoteSiteName, and replChannelGroupId:

    Run the following command to start the replication service in cnDBTier with respect to siteName, remoteSiteName, and replChannelGroupId:

    Note:

    Replace $IP and $PORT in the command with the LoadBalancer IP and port numbers obtained from steps 1 and 2.
    $ curl -X PUT http://$IP:$PORT/ocdbtier/georeplication/switchover/start/sitename/{siteName}/remotesitename/{remoteSiteName}/replchannelgroupid/{replChannelGroupId}

    For example, run the following command to start the replication service in cnDBTier with respect to cluster1, cluster2, and replication channel group ID "1":

    $ curl -X PUT http://$IP:$PORT/ocdbtier/georeplication/switchover/start/sitename/cluster1/remotesitename/cluster2/replchannelgroupid/1
    Sample output:
    {
        "replicationSwitchOver":"start"
    }

Note:

For more information about the API response payloads, error codes, and curl commands for starting georeplication service, see the cnDBTier Switchover APIs section.

Example scenario: 3 site setup (site 1, site 2, and site 3)

The following example provides the sample steps to restart replication in a three-site setup (site 1, site 2, and site 3).

The example is based on the following assumptions:

  • Georeplication must be restarted to and from site 3.
  • Georeplication between site 1 and site 2 is up and running successfully.
Procedure to restart the replication in a 3 site setup:
  1. Get the site IP and PORT details by performing steps 1 and 2 of the preceding procedure. This example considers the following site details:
    • Site 1:
      • Site name: cluster1
      • site1-site3-replication-svc Loadbalancer service IP: 10.233.51.13
      • site1-site3-replication-svc Loadbalancer service PORT: 80
    • Site 2:
      • Site name: cluster2
      • site2-site3-replication-svc Loadbalancer service IP: 10.233.52.23
      • site2-site3-replication-svc Loadbalancer service PORT: 80
    • Site 3:
      • Site name: cluster3
      • site3-site1-replication-svc Loadbalancer service IP: 10.233.53.31
      • site3-site1-replication-svc Loadbalancer service PORT: 80
  2. Start the replication switchover on site 3 with respect to the other sites, that is, site 1 and site 2:

    Note:

    Perform steps 2 to 4 from a bash shell in any db-replication-svc pod.
    curl -X PUT http://10.233.53.31:80/ocdbtier/georeplication/switchover/start/sitename/cluster3
  3. Start the replication switchover on site 1 with respect to site 3:
    curl -X PUT http://10.233.51.13:80/ocdbtier/georeplication/switchover/start/sitename/cluster1/remotesitename/cluster3
  4. Start the replication switchover on site 2 with respect to site 3:
    curl -X PUT http://10.233.52.23:80/ocdbtier/georeplication/switchover/start/sitename/cluster2/remotesitename/cluster3
  5. Verify that the replication to and from site 3 has started, for example, using the check shown below.
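    The following is a minimal check, assuming the db-monitor-svc name and port shown in the "Starting cnDBTier" procedure:
    $ kubectl -n cluster3 exec ndbmysqld-0 -- curl -s http://mysql-cluster-db-monitor-svc.cluster3:8080/db-tier/status/replication/realtime
    A replicationStatus of "UP" for the remote sites cluster1 and cluster2 indicates that site 3 is replicating from those sites. Run the same check on cluster1 and cluster2 to confirm the replication in the other direction.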

Note:

This procedure can be extended to a 4 site setup scenario.

7.2.2 Stopping cnDBTier Georeplication Between Sites

This section provides the procedure to stop the cnDBTier georeplication service using cnDBTier switchover and stop replica APIs.

To stop the georeplication between two sites, stop the switchover and the replica on each site with respect to the other site.

Following are some example scenarios:

  1. In a 2 site setup, to stop the georeplication between site 1 and site 2, perform the following steps:

    Site 1:

    • Stop the switchover on site 1 with respect to site 2.
    • Stop the replica on site 1 with respect to site 2.

    Site 2:

    • Stop the switchover on site 2 with respect to site 1.
    • Stop the replica on site 2 with respect to site 1.
  2. In a 3 site setup, to stop the georeplication between site 1, site 2, and site 3, perform the following steps:

    Site 1:

    • Stop the switchover on site 1 with respect to site 2.
    • Stop the replica on site 1 with respect to site 2.
    • Stop the switchover on site 1 with respect to site 3.
    • Stop the replica on site 1 with respect to site 3.

    Site 2:

    • Stop the switchover on site 2 with respect to site 1.
    • Stop the replica on site 2 with respect to site 1.
    • Stop the switchover on site 2 with respect to site 3.
    • Stop the replica on site 2 with respect to site 3.

    Site 3:

    • Stop the switchover on site 3 with respect to site 1.
    • Stop the replica on site 3 with respect to site 1.
    • Stop the switchover on site 3 with respect to site 2.
    • Stop the replica on site 3 with respect to site 2.
  3. In a 4 site setup, to stop the georeplication between site 1, site 2, site 3, and site 4, perform the following steps:

    Site 1:

    • Stop the switchover on site 1 with respect to site 2.
    • Stop the replica on site 1 with respect to site 2.
    • Stop the switchover on site 1 with respect to site 3.
    • Stop the replica on site 1 with respect to site 3.
    • Stop the switchover on site 1 with respect to site 4.
    • Stop the replica on site 1 with respect to site 4.

    Site 2:

    • Stop the switchover on site 2 with respect to site 1.
    • Stop the replica on site 2 with respect to site 1.
    • Stop the switchover on site 2 with respect to site 3.
    • Stop the replica on site 2 with respect to site 3.
    • Stop the switchover on site 2 with respect to site 4.
    • Stop the replica on site 2 with respect to site 4.

    Site 3:

    • Stop the switchover on site 3 with respect to site 1.
    • Stop the replica on site 3 with respect to site 1.
    • Stop the switchover on site 3 with respect to site 2.
    • Stop the replica on site 3 with respect to site 2.
    • Stop the switchover on site 3 with respect to site 4.
    • Stop the replica on site 3 with respect to site 4.

    Site 4:

    • Stop the switchover on site 4 with respect to site 1.
    • Stop the replica on site 4 with respect to site 1.
    • Stop the switchover on site 4 with respect to site 2.
    • Stop the replica on site 4 with respect to site 2.
    • Stop the switchover on site 4 with respect to site 3.
    • Stop the replica on site 4 with respect to site 3.

cnDBTier provides two REST APIs to stop the switchover and to stop the replica. These APIs can be used to make different API calls depending on your requirement:

Note:

Ensure that you perform the switchover on all the sites to stop the replication. For example, if you want to stop the georeplication between site 1 and site 2, stop the switchover and stop the replica using the APIs on both site 1 and site 2.

Use one of the following API pairs to stop the switchover and stop the replica, based on your requirement:

  1. PUT /ocdbtier/georeplication/switchover/stop/sitename/{siteName}: This call allows you to stop replication switchover on a site (siteName) with respect to all the other mate sites.

    PUT /ocdbtier/georeplication/stopreplica/sitename/{siteName}: This call allows you to stop replica on a site (siteName) with respect to all the other mate sites.

  2. PUT /ocdbtier/georeplication/switchover/stop/sitename/{siteName}/remotesitename/{remoteSiteName}: This call allows you to stop replication switchover on a site (siteName) with respect to a remote site (remoteSiteName).

    PUT /ocdbtier/georeplication/stopreplica/sitename/{siteName}/remotesitename/{remoteSiteName}: This call allows you to stop replica on a site (siteName) with respect to a remote site (remoteSiteName).

  3. PUT /ocdbtier/georeplication/switchover/stop/sitename/{siteName}/remotesitename/{remoteSiteName}/replchannelgroupid/{replChannelGroupId}: This call allows you to stop replication switchover on a site (siteName) with respect to a remote site (remoteSiteName) for a channel group (replChannelGroupId).

    PUT /ocdbtier/georeplication/stopreplica/sitename/{siteName}/remotesitename/{remoteSiteName}/replchannelgroupid/{replChannelGroupId}: This call allows you to stop replica on a site (siteName) with respect to a remote site (remoteSiteName) for a channel group (replChannelGroupId).
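
The following is a minimal sketch, not part of the official procedure, showing how the stop switchover and stop replica calls are typically paired per remote site. The site names are placeholders, and $IP and $PORT are the LoadBalancer IP and port of the replication service on the local site, obtained as described in the procedure below.

# Sketch only: stop both the switchover and the replica on cluster1 with
# respect to each of its mate sites.
for remote in cluster2 cluster3; do
  curl -X PUT "http://$IP:$PORT/ocdbtier/georeplication/switchover/stop/sitename/cluster1/remotesitename/${remote}"
  curl -X PUT "http://$IP:$PORT/ocdbtier/georeplication/stopreplica/sitename/cluster1/remotesitename/${remote}"
done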

Perform the following steps to stop the cnDBTier georeplication service using the stop replication APIs:
  1. Perform the following steps to stop the cnDBTier replication service switchover and stop the replication channel:
    1. Run the following command to get the LoadBalancer IP of the replication service on the site:
      $ IP=$(kubectl get svc -n <namespace> | grep repl | awk '{print $4}' | head -n 1 )

      where, <namespace> is the namespace of the failed site.

      For example, run the following command to get the LoadBalancer IP of the replication service on cluster1:
      $ IP=$(kubectl get svc -n cluster1 | grep repl | awk '{print $4}' | head -n 1 )
    2. Run the following command to get the LoadBalancer Port of the replication service on the site:
      $ PORT=$(kubectl get svc -n <namespace> | grep repl | awk '{print $5}' |  cut -d '/' -f 1 |  cut -d ':' -f 1 | head -n 1)

      where, <namespace> is the namespace of the failed site.

      For example, run the following command to get the LoadBalancer Port of the replication service on cluster1:
      $ PORT=$(kubectl get svc -n cluster1 | grep repl | awk '{print $5}' |  cut -d '/' -f 1 |  cut -d ':' -f 1 | head -n 1)
    3. Run the following command to stop the replication service switchover and stop the replication channel in cnDBTier with respect to siteName:

      Note:

      Replace $IP and $PORT in the command with the LoadBalancer IP and port numbers obtained from steps 1 and 2.
      $ curl -X PUT http://$IP:$PORT/ocdbtier/georeplication/switchover/stop/sitename/{siteName}
      $ curl -X PUT http://$IP:$PORT/ocdbtier/georeplication/stopreplica/sitename/{siteName}
      For example, run the following command to stop the replication service switchover and stop the replication channel in cnDBTier with respect to cluster1:
      $ curl -X PUT http://$IP:$PORT/ocdbtier/georeplication/switchover/stop/sitename/cluster1
      Sample output:
      {
          "replicationSwitchOver":"stop"
      }

      Run the following command to stop the replication channel in cnDBTier with respect to cluster1:

      $ curl -X PUT http://$IP:$PORT/ocdbtier/georeplication/stopreplica/sitename/cluster1
      Sample output:
      {
          "stopReplica":"stop"
      }
      Alternatively, run the following command to stop the replication service switchover and stop the replication channel in cnDBTier with respect to siteName and remoteSiteName:

      Note:

      Replace $IP and $PORT in the command with the LoadBalancer IP and port numbers obtained from steps 1 and 2.
      $ curl -X PUT http://$IP:$PORT/ocdbtier/georeplication/switchover/stop/sitename/{siteName}/remotesitename/{remoteSiteName}
      $ curl -X PUT http://$IP:$PORT/ocdbtier/georeplication/stopreplica/sitename/{siteName}/remotesitename/{remoteSiteName}
      For example, run the following command to stop the replication service switchover and stop the replication channel in cnDBTier with respect to cluster1 and cluster2:
      $ curl -X PUT http://$IP:$PORT/ocdbtier/georeplication/switchover/stop/sitename/cluster1/remotesitename/cluster2
      Sample output:
      {
          "replicationSwitchOver":"stop"
      }
      $ curl -X PUT http://$IP:$PORT/ocdbtier/georeplication/stopreplica/sitename/cluster1/remotesitename/cluster2
      Sample output:
      {
          "stopReplica":"stop"
      }
      Run the following commands to stop the replication service switchover and stop the replication channel in cnDBTier with respect to siteName, remoteSiteName, and replChannelGroupId:

      Note:

      Replace $IP and $PORT in the command with the LoadBalancer IP and port numbers obtained from steps 1 and 2.
      $ curl -X PUT http://$IP:$PORT/ocdbtier/georeplication/switchover/stop/sitename/{siteName}/remotesitename/{remoteSiteName}/replchannelgroupid/{replChannelGroupId}
       $ curl -X PUT http://$IP:$PORT/ocdbtier/georeplication/stopreplica/sitename/{siteName}/remotesitename/{remoteSiteName}/replchannelgroupid/{replChannelGroupId}
      For example, run the following commands to stop the replication service switchover and stop the replication channel in cnDBTier with respect to cluster1, cluster2, and replication channel group ID "1":
      $ curl -X PUT http://$IP:$PORT/ocdbtier/georeplication/switchover/stop/sitename/cluster1/remotesitename/cluster2/replchannelgroupid/1
      Sample output:
      {
          "replicationSwitchOver":"stop"
      }
       $ curl -X PUT http://$IP:$PORT/ocdbtier/georeplication/stopreplica/sitename/cluster1/remotesitename/cluster2/replchannelgroupid/1
      Sample output:
      {
          "stopReplica":"stop"
      }

    Note:

    For more information about the API response payloads, error codes, and curl commands to stop replication service switchover and stop the replication channel, see cnDBTier Switchover APIs and cnDBTier Stop Replica APIs.

Example scenario: 3 site setup (site 1, site 2, site 3)

The following example provides the sample steps to stop replication in a three-site setup (site 1, site 2, and site 3).

The example is based on the following assumptions:

  • Georeplication to and from site 3 must be stopped.
  • Georeplication between site 1 and site 2 must continue to run successfully.
Procedure to stop the replication in a 3 site setup:
  1. Get the site IP and PORT details by performing substeps 1 and 2 of step 1 in the preceding procedure. This example considers the following site details:
    • Site 1:
      • Site name: cluster1
      • site1-site3-replication-svc Loadbalancer service IP: 10.233.51.13
      • site1-site3-replication-svc Loadbalancer service PORT: 80
    • Site 2:
      • Site name: cluster2
      • site2-site3-replication-svc Loadbalancer service IP: 10.233.52.23
      • site2-site3-replication-svc Loadbalancer service PORT: 80
    • Site 3:
      • Site name: cluster3
      • site3-site1-replication-svc Loadbalancer service IP: 10.233.53.31
      • site3-site1-replication-svc Loadbalancer service PORT: 80
  2. Stop the replication switchover and stop the replica on site 3 with respect to the other sites, that is, site 1 and site 2:

    Note:

    Perform steps 2 to 4 from a bash shell in any db-replication-svc pod.
    curl -X PUT http://10.233.53.31:80/ocdbtier/georeplication/switchover/stop/sitename/cluster3
    {
        "replicationSwitchOver":"stop"
    }
    curl -X PUT http://10.233.53.31:80/ocdbtier/georeplication/stopreplica/sitename/cluster3
    {
        "stopReplica":"stop"
    }
  3. Stop the replication switchover and stop the replica on site 1 with respect to site 3:
    curl -X PUT http://10.233.51.13:80/ocdbtier/georeplication/switchover/stop/sitename/cluster1/remotesitename/cluster3
    {
        "replicationSwitchOver":"stop"
    }
    curl -X PUT http://10.233.51.13:80/ocdbtier/georeplication/stopreplica/sitename/cluster1/remotesitename/cluster3
    {
        "stopReplica":"stop"
    }
  4. Stop the replication switchover and stop the replica on site 2 with respect to site 3:
    curl -X PUT http://10.233.52.23:80/ocdbtier/georeplication/switchover/stop/sitename/cluster2/remotesitename/cluster3
    {
        "replicationSwitchOver":"stop"
    }
    curl -X PUT http://10.233.52.23:80/ocdbtier/georeplication/stopreplica/sitename/cluster2/remotesitename/cluster3
    {
        "stopReplica":"stop"
    }
  5. Verify that the replication to and from site 3 has stopped, for example, using the check shown below.
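    The following is a minimal check, assuming the db-monitor-svc name and port shown in the "Starting cnDBTier" procedure:
    $ kubectl -n cluster3 exec ndbmysqld-0 -- curl -s http://mysql-cluster-db-monitor-svc.cluster3:8080/db-tier/status/replication/realtime
    A replicationStatus other than "UP" for the remote sites cluster1 and cluster2 indicates that site 3 is no longer replicating from those sites. Run the same check on cluster1 and cluster2 for the other direction, and confirm that replication between site 1 and site 2 remains "UP".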

    Note:

    This procedure can be extended to a four site setup scenario.

7.3 Scaling cnDBTier Pods

cnDBTier 25.1.103 supports horizontal and vertical scaling of pods for improved performance. This section describes the horizontal and vertical scaling procedures for SQL and database pods.

7.3.1 Horizontal Scaling

This section describes the procedures to horizontally scale ndbappmysqld and ndbmtd pods.

7.3.1.1 Scaling ndbappmysqld Pods
cnDBTier 25.1.103 supports autoscaling of ndbappmysqld pods when cnDBTier is deployed with a service account. For more information, see Scaling ndbappmysqld Pods With Service Account. When cnDBTier is deployed without a service account, you must manually scale the ndbappmysqld pods. This section describes the procedures to scale the cnDBTier ndbappmysqld pods when cnDBTier is deployed without a service account, and the configurations to consider when cnDBTier is deployed with a service account.

Note:

Before scaling, ensure that the worker nodes have adequate resources to support scaling.
Considerations

The examples in these procedures consider a cnDBTier setup deployed with two ndbmgmd, two ndbmtd, two ndbmysqld, and two ndbappmysqld pods. The ndbappmysqld pods of this cnDBTier setup are scaled up from a replica count of two to four in this procedure.

7.3.1.1.1 Scaling ndbappmysqld Pods Without Service Account
This section describes the procedure to manually scale the cnDBTier ndbappmysqld pods when cnDBTier is deployed without a service account.

Note:

ndbappmysqld autoscaling requires a service account. If cnDBTier is deployed without a service account, autoscaling is disabled.
  1. Increase the replica count of the ndbappmysqld pods in the custom_values.yaml file under global.ndbappReplicaCount:
    global:
      ndbappReplicaCount: 4

    Note:

    cnDBTier is initially deployed with two ndbappmysqld pods, and the replica count is increased to four in this step.
  2. Upgrade cnDBTier with the new ndbappReplicaCount by following the upgrade procedure in Oracle Communications Cloud Native Core, cnDBTier Installation, Upgrade, and Fault Recovery Guide.

    Note:

    cnDBTier supports upgrade in a TLS enabled cluster in this scenario only.

    When the Helm upgrade completes, the ndbappmysqld replica count is increased to the value updated in the custom_values.yaml file.
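
    The following sketch shows a typical upgrade invocation, assuming the release name mysql-cluster and the chart directory occndbtier used elsewhere in this section; it is not a substitute for the full upgrade procedure:
    $ helm upgrade mysql-cluster occndbtier --namespace occne-cndbtier -f <path to custom_values.yaml>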

7.3.1.1.2 Scaling ndbappmysqld Pods With Service Account
This section describes the procedure to scale the cnDBTier ndbappmysqld pods when cnDBTier is deployed with a service account.
Scaling cnDBTier ndbappmysqld When Autoscaling is Enabled

When cnDBTier is deployed with a service account and autoscaling is enabled, the ndbappmysqld pods are scaled automatically.

Ensure that autoscaling is enabled for cnDBTier ndbappmysqld pods in the custom_values.yaml file under autoscaling.ndbapp:
autoscaling:
  ndbapp:
    enabled: true

Note:

Apply the above change to the cnDBTier Helm chart by following the "Upgrading cnDBTier Clusters" procedure in the Oracle Communications Cloud Native Core, cnDBTier Installation, Upgrade, and Fault Recovery Guide. cnDBTier supports upgrade in a TLS enabled cluster in this scenario only.
When autoscaling is enabled, ndbappmysqld is scaled between ndbappReplicaCount and ndbappReplicaMaxCount according to the CPU and RAM usage defined in the custom_values.yaml file:
horizontalPodAutoscaler: 
  memory:
    enabled: true
    averageUtilization: 80
  cpu:
    enabled: false
    averageUtilization: 80
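
Once autoscaling is enabled, you can observe the autoscaler state. The following is a minimal sketch; the HPA object name for the ndbappmysqld pods is an assumption, so list the HPAs first to confirm the name used in your deployment:
$ kubectl -n occne-cndbtier get hpa
$ kubectl -n occne-cndbtier describe hpa ndbappmysqld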
Scaling cnDBTier ndbappmysqld When Autoscaling is Disabled
  1. Increase the replica count of the ndbappmysqld pods in the custom_values.yaml file under global.ndbappReplicaCount:
    global:
      ndbappReplicaCount: 4

    Note:

    cnDBTier is initially deployed with two ndbappmysqld pods, and the replica count is increased to four in this step.
  2. Upgrade cnDBTier with the new ndbappReplicaCount by following the upgrade procedure in Oracle Communications Cloud Native Core, cnDBTier Installation, Upgrade, and Fault Recovery Guide.

    Note:

    cnDBTier supports upgrade in a TLS enabled cluster in this scenario only.

    When the Helm upgrade completes, the ndbappmysqld replica count is increased to the value updated in the custom_values.yaml file.

7.3.1.2 Scaling ndbmtd Pods
cnDBTier supports only scaling up of data pods, and does not support scaling down of data pods. This section describes the manual procedure to scale up the cnDBTier ndbmtd pods.

Note:

  • Before scaling, ensure that the worker nodes have adequate resources to support scaling.
  • Divert the traffic during the horizontal scaling of the ndbmtd pods as the MySQL NDB cluster may go offline during the scaling process.
Considerations

The examples in this procedure consider a cnDBTier setup deployed with two ndbmgmd, two ndbmtd, two ndbmysqld, and two ndbappmysqld pods. The ndbmtd pods of this cnDBTier setup are scaled up from a replica count of two to four in this procedure.

Procedure
  1. Before scaling the data pods, perform the Helm test on the cnDBTier cluster to check the health of the cluster:
    $ helm test mysql-cluster -n <namespace>
    Example:
    $ helm test mysql-cluster -n occne-cndbtier
    Sample output:
    NAME: mysql-cluster
    LAST DEPLOYED:  Mon May 20 08:10:11 2025
    NAMESPACE: occne-cndbtier
    STATUS: deployed
    REVISION: 2
    TEST SUITE:     mysql-cluster-node-connection-test
    Last Started:    Mon May 20 10:03:51 2025
    Last Completed:  Mon May 20 10:04:16 2025
    Phase:          Succeeded
    You can also verify that all the NDB pods are connected by checking the status of cnDBTier from the ndb_mgm management console:
    $ kubectl -n occne-cndbtier exec ndbmgmd-0 -- ndb_mgm -e show
    Sample output:
    Connected to Management Server at: localhost:1186
    Cluster Configuration
    ---------------------
    [ndbd(NDB)]     2 node(s)
    id=1    @10.233.74.65  (mysql-8.4.3 ndb-8.4.3, Nodegroup: 0)
    id=2    @10.233.84.68  (mysql-8.4.3 ndb-8.4.3, Nodegroup: 0, *)
     
    [ndb_mgmd(MGM)] 2 node(s)
    id=49   @10.233.73.61  (mysql-8.4.3 ndb-8.4.3)
    id=50   @10.233.84.67  (mysql-8.4.3 ndb-8.4.3)
     
    [mysqld(API)]   10 node(s)
    id=56   @10.233.84.69  (mysql-8.4.3 ndb-8.4.3)
    id=57   @10.233.73.62  (mysql-8.4.3 ndb-8.4.3)
    id=70   @10.233.78.70  (mysql-8.4.3 ndb-8.4.3)
    id=71   @10.233.72.49  (mysql-8.4.3 ndb-8.4.3)
    id=72 (not connected, accepting connect from ndbappmysqld-2.ndbappmysqldsvc.occne-cndbtier.svc.occne1-cgbu-cne-dbtier)
    id=73 (not connected, accepting connect from ndbappmysqld-3.ndbappmysqldsvc.occne-cndbtier.svc.occne1-cgbu-cne-dbtier)
    id=222 (not connected, accepting connect from any host)
    id=223 (not connected, accepting connect from any host)
    id=224 (not connected, accepting connect from any host)
    id=225 (not connected, accepting connect from any host)

    Note:

    The node IDs 222 to 225 in the sample output are shown as "not connected" as these nodes are added as empty slot IDs used for fault recovery. You can ignore these nodes.
  2. Enable replicationskiperrors in the custom_values.yaml file and apply the changes using the Helm upgrade procedure for all cnDBTier sites.

    Note:

    Skip this step if replicationskiperrors is already enabled in the cnDBTier cluster.
    global:
      replicationskiperrors:
        enable: true
  3. Increase the replica count of the ndbmtd pod in the custom_values.yaml file:

    Note:

    • cnDBTier is deployed with two ndbmtd pods, and the replica count is increased to four in this step.
    • You can only scale up the ndbmtd pods in increments of an even number; you cannot scale them down.
    global:
      ndbReplicaCount: 4
  4. Run the following command to upgrade the cnDBTier to increase the ndbmtd replica count:

    Note:

    After the upgrade, the new ndbmtd pods may crash and restart. You can ignore this error. Verify the status of the new pods in step 14, before partitioning the data.
    $ helm upgrade --no-hooks --set global.updateStrategy=OnDelete mysql-cluster --namespace <namespace> occndbtier -f <path to custom_values.yaml>
    
  5. Perform the following steps to scale the ndbmtd pods back to their original replica count (the replica count that the ndbmtd pods had before performing Step 3):

    Note:

    In this case, the replica count was increased from 2 to 4 in Step 3. Therefore, in this step, scale back the replica count to 2.
    1. Run the following commands to patch the horizontal pod autoscaling (hpa):
      $ kubectl -n <namespace> patch hpa ndbmtd -p '{"spec": {"minReplicas": 2}}'
      $ kubectl -n <namespace> patch hpa ndbmtd -p '{"spec": {"maxReplicas": 2}}'
    2. Run the following command to scale down the ndbmtd sts replica count to 2:
      $ kubectl -n <namespace> scale sts ndbmtd --replicas=2
    3. After scaling down the ndbmtd sts replica count, verify that the newly added ndbmtd pods are terminated and that only the original ndbmtd pods remain in the cluster.
  6. Once the upgrade is complete, delete all the management pods simultaneously:
    $ kubectl -n <namespace> delete pods ndbmgmd-0 ndbmgmd-1
  7. Wait for the management pods to come back to the running state and connect to the cluster. Run the following command to verify the status of the pods:
    $ kubectl -n occne-cndbtier exec ndbmgmd-0 -- ndb_mgm -e show
  8. Delete the data pods one at a time. Wait for a deleted pod to return to the running state and connect to the cluster before deleting the next one:

    Note:

    Delete the data pods in descending order (ndbmtd-n, ndbmtd-(n-1), ndbmtd-(n-2), and so on).
    For example:
    1. Run the following command to delete pod ndbmtd-2:
      $ kubectl -n <namespace> delete pod ndbmtd-2
    2. Wait for ndbmtd-2 to return to the running state and connect back to the NDB cluster.
    3. Run the following command to delete pod ndbmtd-1:
      $ kubectl -n <namespace> delete pod ndbmtd-1
    4. Wait for ndbmtd-1 to return to the running state and connect back to the NDB cluster.
    5. Run the following command to delete pod ndbmtd-0:
      $ kubectl -n <namespace> delete pod ndbmtd-0
  9. Wait until all the data pods are up and running, and get connected to the NDB cluster. To check if the data pods are connected to the NDB cluster, log in to one of the management pods and check the status of the NDB cluster from the ndb_mgm console:
    $ kubectl -n occne-cndbtier exec ndbmgmd-0 -- ndb_mgm -e show
    Sample output:
    Connected to Management Server at: localhost:1186
    Cluster Configuration
    ---------------------
    [ndbd(NDB)]     4 node(s)
    id=1    @10.233.74.65  (mysql-8.4.3 ndb-8.4.3, Nodegroup: 0)
    id=2    @10.233.84.68  (mysql-8.4.3 ndb-8.4.3, Nodegroup: 0, *)
    id=3 (not connected, accepting connect from ndbmtd-2.ndbmtdsvc.occne-cndbtier.svc.occne1-cgbu-cne-dbtier)
    id=4 (not connected, accepting connect from ndbmtd-3.ndbmtdsvc.occne-cndbtier.svc.occne1-cgbu-cne-dbtier)
     
    [ndb_mgmd(MGM)] 2 node(s)
    id=49   @10.233.73.61  (mysql-8.4.3 ndb-8.4.3)
    id=50   @10.233.84.67  (mysql-8.4.3 ndb-8.4.3)
     
    [mysqld(API)]   10 node(s)
    id=56   @10.233.84.69  (mysql-8.4.3 ndb-8.4.3)
    id=57   @10.233.73.62  (mysql-8.4.3 ndb-8.4.3)
    id=70   @10.233.78.70  (mysql-8.4.3 ndb-8.4.3)
    id=71   @10.233.72.49  (mysql-8.4.3 ndb-8.4.3)
    id=72 (not connected, accepting connect from ndbappmysqld-2.ndbappmysqldsvc.occne-cndbtier.svc.occne1-cgbu-cne-dbtier)
    id=73 (not connected, accepting connect from ndbappmysqld-3.ndbappmysqldsvc.occne-cndbtier.svc.occne1-cgbu-cne-dbtier)
    id=222 (not connected, accepting connect from any host)
    id=223 (not connected, accepting connect from any host)
    id=224 (not connected, accepting connect from any host)
    id=225 (not connected, accepting connect from any host)

    Note:

    • The newly added data node slots, node IDs 3 and 4 in the sample output, are shown as "not connected" because the corresponding pods are scaled down at this stage. You can ignore these nodes.
    • The node IDs 222 to 225 in the sample output are shown as "not connected" as these nodes are added as empty slot IDs used for fault recovery. You can ignore these nodes.
  10. Delete the ndbmysqld pods one at a time. Wait for a deleted pod to return to the running state and connect to the cluster before deleting the next one:

    Note:

    Delete the pods in descending order (ndbmysqld-n, ndbmysqld-(n-1), ndbmysqld-(n-2), and so on).
    For example:
    1. Run the following command to delete pod ndbmysqld-2:
      $ kubectl -n <namespace> delete pod ndbmysqld-2
    2. Wait for ndbmysqld-2 to return to the running state and connect back to the NDB cluster.
    3. Run the following command to delete pod ndbmysqld-1:
      $ kubectl -n <namespace> delete pod ndbmysqld-1
    4. Wait for ndbmysqld-1 to return to the running state and connect back to the NDB cluster.
    5. Run the following command to delete pod ndbmysqld-0:
      $ kubectl -n <namespace> delete pod ndbmysqld-0
  11. Wait for the ndbmysqld pods to return to running state and connect to the NDB cluster. To check if the data pods are connected to the NDB cluster, log in to one of the management pods and check the status of the NDB cluster from the ndb_mgm console:
    $ kubectl -n occne-cndbtier exec -it ndbmgmd-0 -- ndb_mgm -e show
    Sample output:
    Connected to Management Server at: localhost:1186
    Cluster Configuration
    ---------------------
    [ndbd(NDB)]     4 node(s)
    id=1    @10.233.74.65  (mysql-8.4.3 ndb-8.4.3, Nodegroup: 0)
    id=2    @10.233.84.68  (mysql-8.4.3 ndb-8.4.3, Nodegroup: 0, *)
    id=3 (not connected, accepting connect from ndbmtd-2.ndbmtdsvc.occne-cndbtier.svc.occne1-cgbu-cne-dbtier)
    id=4 (not connected, accepting connect from ndbmtd-3.ndbmtdsvc.occne-cndbtier.svc.occne1-cgbu-cne-dbtier)
     
    [ndb_mgmd(MGM)] 2 node(s)
    id=49   @10.233.73.61  (mysql-8.4.3 ndb-8.4.3)
    id=50   @10.233.84.67  (mysql-8.4.3 ndb-8.4.3)
     
    [mysqld(API)]   10 node(s)
    id=56   @10.233.84.69  (mysql-8.4.3 ndb-8.4.3)
    id=57   @10.233.73.62  (mysql-8.4.3 ndb-8.4.3)
    id=70   @10.233.78.70  (mysql-8.4.3 ndb-8.4.3)
    id=71   @10.233.72.49  (mysql-8.4.3 ndb-8.4.3)
    id=72 (not connected, accepting connect from ndbappmysqld-2.ndbappmysqldsvc.occne-cndbtier.svc.occne1-cgbu-cne-dbtier)
    id=73 (not connected, accepting connect from ndbappmysqld-3.ndbappmysqldsvc.occne-cndbtier.svc.occne1-cgbu-cne-dbtier)
    id=222 (not connected, accepting connect from any host)
    id=223 (not connected, accepting connect from any host)
    id=224 (not connected, accepting connect from any host)
    id=225 (not connected, accepting connect from any host)

    Note:

    The node IDs 222 to 225 in the sample output are shown as "not connected" as these nodes are added as empty slot IDs used for fault recovery. You can ignore these nodes.
  12. Delete the ndbappmysqld pods one at a time. Wait for a deleted pod to return back to the running state before deleting the next pod.

    Note:

    Delete the pods in descending order (ndbappmysqld-n, ndbappmysqld-(n-1), ndbappmysqld-(n-2), and so on).
    For example:
    1. Run the following command to delete pod ndbappmysqld-2:
      $ kubectl -n <namespace> delete pod ndbappmysqld-2
    2. Wait for ndbappmysqld-2 to return to the running state and connect back to the NDB cluster.
    3. Run the following command to delete pod ndbappmysqld-1:
      $ kubectl -n <namespace> delete pod ndbappmysqld-1
    4. Wait for ndbappmysqld-1 to return to the running state and connect back to the NDB cluster.
    5. Run the following command to delete pod ndbappmysqld-0:
      $ kubectl -n <namespace> delete pod ndbappmysqld-0
  13. Wait for the ndbappmysqld pods to return to the running state and connect to the NDB cluster. To check if the data pods are connected to the NDB cluster, log in to one of the management pods and check the status of the NDB cluster from the ndb_mgm console:
    $ kubectl -n occne-cndbtier exec -it ndbmgmd-0 -- ndb_mgm -e show
    Sample output:
    Connected to Management Server at: localhost:1186
    Cluster Configuration
    ---------------------
    [ndbd(NDB)]     4 node(s)
    id=1    @10.233.74.65  (mysql-8.4.3 ndb-8.4.3, Nodegroup: 0)
    id=2    @10.233.84.68  (mysql-8.4.3 ndb-8.4.3, Nodegroup: 0, *)
    id=3 (not connected, accepting connect from ndbmtd-2.ndbmtdsvc.occne-cndbtier.svc.occne1-cgbu-cne-dbtier)
    id=4 (not connected, accepting connect from ndbmtd-3.ndbmtdsvc.occne-cndbtier.svc.occne1-cgbu-cne-dbtier)
     
    [ndb_mgmd(MGM)] 2 node(s)
    id=49   @10.233.73.61  (mysql-8.4.3 ndb-8.4.3)
    id=50   @10.233.84.67  (mysql-8.4.3 ndb-8.4.3)
     
    [mysqld(API)]   10 node(s)
    id=56   @10.233.84.69  (mysql-8.4.3 ndb-8.4.3)
    id=70   @10.233.78.70  (mysql-8.4.3 ndb-8.4.3)
    id=71   @10.233.72.49  (mysql-8.4.3 ndb-8.4.3)
    id=72 (not connected, accepting connect from ndbappmysqld-2.ndbappmysqldsvc.occne-cndbtier.svc.occne1-cgbu-cne-dbtier)
    id=73 (not connected, accepting connect from ndbappmysqld-3.ndbappmysqldsvc.occne-cndbtier.svc.occne1-cgbu-cne-dbtier)
    id=222 (not connected, accepting connect from any host)
    id=223 (not connected, accepting connect from any host)
    id=224 (not connected, accepting connect from any host)
    id=225 (not connected, accepting connect from any host)

    Note:

    The node IDs 222 to 225 in the sample output are shown as "not connected" as these nodes are added as empty slot IDs used for fault recovery. You can ignore these nodes.
  14. Perform the following steps to scale up the ndbmtd pods to their increased replica count, that is, the replica count configured in Step 3:

    Note:

    In this case, the replica count was increased to 4 in Step 3. Therefore, in this step, scale the replica count up to 4.
    1. Run the following commands to patch horizontal pod autoscaler (hpa):
      $ kubectl -n <namespace> patch hpa ndbmtd -p '{"spec": {"maxReplicas": 4}}'
      $ kubectl -n <namespace> patch hpa ndbmtd -p '{"spec": {"minReplicas": 4}}'
    2. Scale up the ndbmtd sts replica count to 4:
      $ kubectl -n <namespace> scale sts ndbmtd --replicas=4
    3. Wait for the ndbmtd pods to return to the running state and connect to the NDB cluster. To check if the data pods are connected to the NDB cluster, log in to one of the management pods and check the status of the NDB cluster from the ndb_mgm console:
      $ kubectl -n occne-cndbtier exec -it ndbmgmd-0 -- ndb_mgm -e show
      Sample output:
      Connected to Management Server at: localhost:1186
      Cluster Configuration
      ---------------------
      [ndbd(NDB)]     4 node(s)
      id=1    @10.233.74.65  (mysql-8.4.3 ndb-8.4.3, Nodegroup: 0)
      id=2    @10.233.84.68  (mysql-8.4.3 ndb-8.4.3, Nodegroup: 0, *)
      id=3    @10.233.117.60  (mysql-8.4.3 ndb-8.4.3, starting, no nodegroup)
      id=4    @10.233.111.64  (mysql-8.4.3 ndb-8.4.3, starting, no nodegroup)
       
      [ndb_mgmd(MGM)] 2 node(s)
      id=49   @10.233.73.61  (mysql-8.4.3 ndb-8.4.3)
      id=50   @10.233.84.67  (mysql-8.4.3 ndb-8.4.3)
       
      [mysqld(API)]   10 node(s)
      id=56   @10.233.84.69  (mysql-8.4.3 ndb-8.4.3)
      id=57   @10.233.73.62  (mysql-8.4.3 ndb-8.4.3)
      id=70   @10.233.78.70  (mysql-8.4.3 ndb-8.4.3)
      id=71   @10.233.72.49  (mysql-8.4.3 ndb-8.4.3)
      id=72 (not connected, accepting connect from ndbappmysqld-2.ndbappmysqldsvc.occne-cndbtier.svc.occne1-cgbu-cne-dbtier)
      id=73 (not connected, accepting connect from ndbappmysqld-3.ndbappmysqldsvc.occne-cndbtier.svc.occne1-cgbu-cne-dbtier)
      id=222 (not connected, accepting connect from any host)
      id=223 (not connected, accepting connect from any host)
      id=224 (not connected, accepting connect from any host)
      id=225 (not connected, accepting connect from any host)

      Note:

      • When the newly added ndbmtd pods are connected to the NDB cluster, they are shown in the "no nodegroup" state.
      • The node IDs 222 to 225 in the sample output are shown as "not connected" as these nodes are added as empty slot IDs used for fault recovery. You can ignore these nodes.
  15. Run the following command in the NDB Cluster management client (ndb_mgm) to create the new node group. The node ID of a newly added ndbmtd pod is <sequence number of the ndbmtd pod + 1>.
    ndb_mgm>  CREATE NODEGROUP <node_id_for_pod_ndbmtd-2>,<node_id_for_pod_ndbmtd-3>
    For example:
    ndb_mgm>  CREATE NODEGROUP <(sequence no of pod ndbmtd-2) + 1>,<(sequence no of pod ndbmtd-3) + 1>
     
    [mysql@ndbmgmd-0 ~]$ ndb_mgm
    -- NDB Cluster -- Management Client --
    ndb_mgm>  CREATE NODEGROUP 3,4
    
    Sample output:
    Connected to Management Server at: localhost:1186
    Nodegroup 1 created

    Note:

    • This example considers that cnDBTier is initially created with two data pods (node IDs 1 and 2). In this step, the data pod count is scaled up from 2 to 4. Therefore, the node IDs 3 and 4 are assigned to the newly created data nodes.
    • If you are adding more than two ndbmtd nodes to the cluster, then create the node groups for the first two nodes, then for the next pair, and so on. For example, if you are adding four ndbmtd nodes with node IDs x1, x2, x3, x4, then first create node groups for x1 and x2 and then for x3 and x4 as shown in the following code block:
      ndb_mgm> CREATE NODEGROUP x1, x2
      ndb_mgm> CREATE NODEGROUP x3, x4
  16. Depending on the following conditions, use the georeplication recovery procedure or the dbt_reorg_table_partition script to redistribute the cluster data:
    • If the mate site is available, then perform the georeplication recovery procedure to redistribute the data across all the data nodes (including the new data nodes). For georeplication procedure, see the "Restoring Georeplication Failure" section in Oracle Communications Cloud Native Core, cnDBTier Installation, Upgrade, and Fault Recovery Guide.
    • If the mate site is not available, perform the following steps to run the dbt_reorg_table_partition script to redistribute the data across all the data nodes (including the new data nodes).
    • If the mate site is available but you do not want to perform the georeplication recovery, perform the following steps to run the dbt_reorg_table_partition script to redistribute the data across all the data nodes (including the new data nodes).

    Note:

    If the data nodes restart or a database backup is in progress while the dbt_reorg_table_partition script is running, rerun the script after the restart or backup completes to ensure that all table partitions are properly reorganized across all data nodes.
    1. Run the following commands to source the source_me file. Sourcing this file adds the bin directory containing the script to the user PATH, prompts you to enter the namespace (which is used to set DBTIER_NAMESPACE), and sets the DBTIER_LIB environment variable to the directory containing the libraries required by dbt_reorg_table_partition:
      $ cd Artifacts/Scripts/tools
      $ source ./source_me
      
      Sample output:
      NOTE: source_me must be sourced while your current directory is the directory with the source_me file.
      
      Enter cndbtier namespace: sitea
      DBTIER_NAMESPACE = "sitea"
      
      DBTIER_LIB=/home/cloud-user/user/deploy_cndbtier_sitea_25.1.103/Artifacts/Scripts/tools/dbtier/lib
      
      Adding /home/cloud-user/user/deploy_cndbtier_sitea_25.1.103/Artifacts/Scripts/tools/dbtier/bin to PATH
      PATH=/home/cloud-user/.local/bin:/home/cloud-user/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/var/occne/cluster/occne3-user/artifacts/istio-1.15.3/bin/:/var/occne/cluster/occne3-user/artifacts:/home/cloud-user/user/deploy_cndbtier_sitea_25.1.103/Artifacts/Scripts/tools/dbtier/bin
    2. Run the dbt_reorg_table_partition script:
      $ dbt_reorg_table_partition
      Sample output:
      dbt_reorg_table_partition 25.1.103
      Copyright (c) 2024 Oracle and/or its affiliates. All rights reserved.
      2024-01-17T05:11:45Z INFO - Getting sts and sts pod info...
      2024-01-17T05:11:45Z INFO - Getting MGM sts and sts pod info...
      2024-01-17T05:11:45Z INFO - MGM_STS="ndbmgmd"
      2024-01-17T05:11:45Z INFO - MGM_REPLICAS="2"
      2024-01-17T05:11:45Z INFO - MGM_PODS:
          ndbmgmd-0
          ndbmgmd-1
      2024-01-17T05:11:45Z INFO - Getting NDB sts and sts pod info...
      2024-01-17T05:11:45Z INFO - NDB_STS="ndbmtd"
      2024-01-17T05:11:45Z INFO - NDB_REPLICAS="2"
      2024-01-17T05:11:45Z INFO - NDB_PODS:
          ndbmtd-0
          ndbmtd-1
      2024-01-17T05:11:45Z INFO - Getting API sts and sts pod info...
      2024-01-17T05:11:45Z INFO - API_STS="ndbmysqld"
      2024-01-17T05:11:45Z INFO - API_REPLICAS="2"
      2024-01-17T05:11:45Z INFO - API_PODS:
          ndbmysqld-0
          ndbmysqld-1
      2024-01-17T05:11:45Z INFO - Getting APP sts and sts pod info...
      2024-01-17T05:11:45Z INFO - APP_STS="ndbappmysqld"
      2024-01-17T05:11:45Z INFO - APP_REPLICAS="2"
      2024-01-17T05:11:45Z INFO - APP_PODS:
          ndbappmysqld-0
          ndbappmysqld-1
      2024-01-17T05:11:45Z INFO - Getting deployment pod info...
      2024-01-17T05:11:45Z INFO - grepping for backup-man (BAK_CHART_NAME)...
      2024-01-17T05:11:46Z INFO - BAK_PODS:
          mysql-cluster-db-backup-manager-svc-57fb8ff49c-49p55
      2024-01-17T05:11:46Z INFO - BAK_DEPLOY:
          mysql-cluster-db-backup-manager-svc
      2024-01-17T05:11:46Z INFO - grepping for db-mon (MON_CHART_NAME)...
      2024-01-17T05:11:46Z INFO - MON_PODS:
          mysql-cluster-db-monitor-svc-7b7559cd45-shm8r
      2024-01-17T05:11:46Z INFO - MON_DEPLOY:
          mysql-cluster-db-monitor-svc
      2024-01-17T05:11:46Z INFO - grepping for repl (REP_CHART_NAME)...
      2024-01-17T05:11:46Z INFO - REP_PODS:
          mysql-cluster-siteb-local-replication-svc-9c4c59f87-6zl57
      2024-01-17T05:11:46Z INFO - REP_DEPLOY:
          mysql-cluster-siteb-local-replication-svc
      2024-01-17T05:11:46Z INFO - Reorganizing table partitions...
      mysql.ndb_apply_status  optimize        status  OK
      backup_info.DBTIER_BACKUP_INFO  optimize        status  OK
      backup_info.DBTIER_BACKUP_COMMAND_QUEUE optimize        status  OK
      backup_info.DBTIER_BACKUP_TRANSFER_INFO optimize        status  OK
      mysql.ndb_replication   optimize        status  OK
      hbreplica_info.DBTIER_HEARTBEAT_INFO    optimize        status  OK
      replication_info.DBTIER_SITE_INFO       optimize        status  OK
      replication_info.DBTIER_REPL_SITE_INFO  optimize        status  OK
      replication_info.DBTIER_REPLICATION_CHANNEL_INFO        optimize        status  OK
      replication_info.DBTIER_INITIAL_BINLOG_POSTION  optimize        status  OK
      replication_info.DBTIER_REPL_ERROR_SKIP_INFO    optimize        status  OK
      replication_info.DBTIER_REPL_EVENT_INFO optimize        status  OK
      replication_info.DBTIER_REPL_SVC_INFO   optimize        status  OK
      2024-01-17T05:12:10Z INFO - Reorganized Tables Partition Successfully
  17. Run the following command in the management client to check if the data is distributed:
    ndb_mgm> ALL REPORT MEMORY
    For example:
    ndb_mgm> ALL REPORT MEMORY
    Sample output:
    Connected to Management Server at: localhost:1186
    Node 1: Data usage is 0%(58 32K pages of total 12755)
    Node 1: Index usage is 0%(45 32K pages of total 12742)
    Node 2: Data usage is 0%(58 32K pages of total 12755)
    Node 2: Index usage is 0%(45 32K pages of total 12742)
    Node 3: Data usage is 0%(15 32K pages of total 12780)
    Node 3: Index usage is 0%(20 32K pages of total 12785)
    Node 4: Data usage is 0%(15 32K pages of total 12780)
    Node 4: Index usage is 0%(20 32K pages of total 12785)
  18. Run the Helm test on the cnDBTier cluster to check the health of the cluster:
    $ helm test mysql-cluster -n occne-cndbtier
    Sample output:
    NAME: mysql-cluster
    LAST DEPLOYED:  Mon May 20 08:10:11 2025
    NAMESPACE: occne-cndbtier
    STATUS: deployed
    REVISION: 2
    TEST SUITE:     mysql-cluster-node-connection-test
    Last Started:    Mon May 20 10:27:49 2025
    Last Completed:  Mon May 20 10:28:11 2025
    Phase:          Succeeded
    You can also verify the status of the cnDBTier from the ndb_mgm console by running the following command:
    $ kubectl -n occne-cndbtier exec ndbmgmd-0 -- ndb_mgm -e show
    Sample output:
    Connected to Management Server at: localhost:1186
    Cluster Configuration
    ---------------------
    [ndbd(NDB)]     4 node(s)
    id=1    @10.233.92.53  (mysql-8.4.3 ndb-8.4.3, Nodegroup: 0)
    id=2    @10.233.72.66  (mysql-8.4.3 ndb-8.4.3, Nodegroup: 0)
    id=3    @10.233.117.60  (mysql-8.4.3 ndb-8.4.3, Nodegroup: 1)
    id=4    @10.233.111.64  (mysql-8.4.3 ndb-8.4.3, Nodegroup: 1, *)
     
    [ndb_mgmd(MGM)] 2 node(s)
    id=49   @10.233.111.62  (mysql-8.4.3 ndb-8.4.3)
    id=50   @10.233.117.59  (mysql-8.4.3 ndb-8.4.3)
     
    [mysqld(API)]   8 node(s)
    id=56   @10.233.72.65  (mysql-8.4.3 ndb-8.4.3)
    id=57   @10.233.92.51  (mysql-8.4.3 ndb-8.4.3)
    id=70   @10.233.72.64  (mysql-8.4.3 ndb-8.4.3)
    id=71   @10.233.92.52  (mysql-8.4.3 ndb-8.4.3)
    id=222 (not connected, accepting connect from any host)
    id=223 (not connected, accepting connect from any host)
    id=224 (not connected, accepting connect from any host)
    id=225 (not connected, accepting connect from any host)

    Note:

    The node IDs 222 to 225 in the sample output are shown as "not connected" as these nodes are added as empty slot IDs used for fault recovery. You can ignore these nodes.
  19. Disable replicationskiperrors in the custom_values.yaml file and apply the changes using the Helm upgrade procedure.

    Note:

    Skip this step if you skipped Step 2.
    global:
      replicationskiperrors:
        enable: false
  20. Run the following command to patch the ndbmtd, ndbmysqld, and ndbappmysqld statefulsets and update updateStrategy to RollingUpdate:
    $ kubectl -n <namespace> patch sts <ndbmtd sts name> -p '{"spec":{"updateStrategy":{"type":"RollingUpdate"}}}' 
     
    $ kubectl -n <namespace> patch sts <ndbmysqld sts name> -p '{"spec":{"updateStrategy":{"type":"RollingUpdate"}}}'  
     
    $ kubectl -n <namespace> patch sts <ndbappmysqld sts name> -p '{"spec":{"updateStrategy":{"type":"RollingUpdate"}}}'  
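
    Optionally, you can verify that the patch took effect. The following quick check (not part of the documented procedure) prints the current update strategy of each statefulset; each should report RollingUpdate:
    $ for sts in ndbmtd ndbmysqld ndbappmysqld; do
        # print the statefulset name followed by its current update strategy
        echo -n "${sts}: "
        kubectl -n <namespace> get sts "${sts}" -o jsonpath='{.spec.updateStrategy.type}{"\n"}'
      done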

7.3.2 Vertical Scaling

Currently, cnDBTier supports only manual vertical scaling of ndbmtd and SQL pods. This section describes the procedure to manually scale ndbmtd, ndbappmysqld, and ndbmysqld pods.

7.3.2.1 Scaling ndbmtd Pods
This section provides the procedures to vertically scale up ndbmtd pods.

Note:

  • Before scaling the pods, ensure that the worker nodes have adequate resources to support scaling.
  • Perform the Helm test on the deployed cnDBTier cluster to check the health of the cluster:
    $ helm test mysql-cluster -n occne-cndbtier
    Sample output:
    NAME: mysql-cluster
    LAST DEPLOYED:  Mon May 20 03:24:03 2025
    NAMESPACE: occne-cndbtier
    STATUS: deployed
    REVISION: 1
    TEST SUITE:     mysql-cluster-node-connection-test
    Last Started:    Mon May 20 04:03:01 2025
    Last Completed:  Mon May 20 04:03:26 2025
    Phase:          Succeeded
    You can also verify the status of the cnDBTier from the ndb_mgm console by running the following command:
    $ kubectl -n occne-cndbtier exec -it ndbmgmd-0 -- ndb_mgm -e show
    Sample output:
    Connected to Management Server at: localhost:1186
    Cluster Configuration
    ---------------------
    [ndbd(NDB)]     2 node(s)
    id=1    @10.233.95.87  (mysql-8.0.34 ndb-8.0.34, Nodegroup: 0, *)
    id=2    @10.233.99.254  (mysql-8.0.34 ndb-8.0.34, Nodegroup: 0)
     
    [ndb_mgmd(MGM)] 2 node(s)
    id=49   @10.233.110.157  (mysql-8.0.34 ndb-8.0.34)
    id=50   @10.233.101.213  (mysql-8.0.34 ndb-8.0.34)
     
    [mysqld(API)]   10 node(s)
    id=56   @10.233.99.243  (mysql-8.0.34 ndb-8.0.34)
    id=57   @10.233.101.221  (mysql-8.0.34 ndb-8.0.34)
    id=70   @10.233.100.228  (mysql-8.0.34 ndb-8.0.34)
    id=71   @10.233.101.217  (mysql-8.0.34 ndb-8.0.34)
    id=222 (not connected, accepting connect from any host)
    id=223 (not connected, accepting connect from any host)
    id=224 (not connected, accepting connect from any host)
    id=225 (not connected, accepting connect from any host)
    The node IDs 222 to 225 in the sample output are shown as "not connected" as these nodes are added as empty slot IDs used for fault recovery. You can ignore these nodes.
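
Before scaling, a quick way to check whether the worker nodes have spare capacity is to compare their current usage against their allocatable resources. This is only a sketch; kubectl top requires the Kubernetes metrics server to be available in your cluster:
    $ kubectl top nodes                                    # current CPU and memory usage per worker node
    $ kubectl describe nodes | grep -A 5 "Allocatable:"    # allocatable CPU and memory per worker node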

Updating CPU and RAM

Perform the following steps to update CPU and RAM using the custom_values.yaml file:
  1. Configure the required CPU and memory values in the global section of the custom_values.yaml file.

    For example:

    The following code block shows the old CPU and RAM values in the custom_values.yaml file:

    global:
      ndb:
        datamemory: 400M
    ndb: 
      resources:
        limits:
          cpu: 1
          memory: 4Gi
        requests:
          cpu: 1
          memory: 4Gi
    The following code block shows the updated CPU and RAM values in the custom_values.yaml file:
    global:
      ndb:
        datamemory: 800M
    ndb: 
      resources:
        limits:
          cpu: 2
          memory: 8Gi
        requests:
          cpu: 2
          memory: 8Gi
  2. Upgrade cnDBTier by performing a Helm upgrade with the modified custom_values.yaml file. For more information about the upgrade procedure, see Oracle Communications Cloud Native Core, cnDBTier Installation, Upgrade, and Fault Recovery Guide.

    Note:

    cnDBTier supports upgrade in a TLS enabled cluster in this scenario only.
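
    After the upgrade completes, you can optionally confirm that the new CPU and RAM values are in effect. The following quick check assumes that the data node resources are rendered into the ndbmtd statefulset and its pods:
    $ kubectl -n occne-cndbtier get sts ndbmtd -o jsonpath='{.spec.template.spec.containers[*].resources}{"\n"}'   # requests and limits in the statefulset template
    $ kubectl -n occne-cndbtier get pod ndbmtd-0 -o jsonpath='{.spec.containers[*].resources}{"\n"}'               # requests and limits on a running data pod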

Updating PVC

Updating PVC Using Helm Upgrade
Perform the following steps to update the PVC value using custom_values.yaml:
  1. Update the custom_values.yaml file with the new PVC values for ndbmtd pods:

    For example:

    The following code block shows the old PVC values in the custom_values.yaml file:
    global:
      ndb:
        ndbdisksize: 3Gi
        ndbbackupdisksize: 3Gi
    The following code block shows the updated PVC values in the custom_values.yaml file:
    global:
      ndb:
        ndbdisksize: 6Gi
        ndbbackupdisksize: 6Gi
  2. Run the following command to delete the ndbmtd statefulset and set the dependency to orphan:
    $ kubectl -n occne-cndbtier delete sts --cascade=orphan ndbmtd
  3. Run the following commands to delete the ndbmtd-1 pod and patch the pod's PVCs with the values updated in Step 1:
    $ kubectl -n occne-cndbtier delete pod ndbmtd-1
     
    $ kubectl -n occne-cndbtier patch -p '{ "spec": { "resources": { "requests": { "storage": "6Gi" }}}}' pvc pvc-ndbmtd-ndbmtd-1
    $ kubectl -n occne-cndbtier patch -p '{ "spec": { "resources": { "requests": { "storage": "6Gi" }}}}' pvc pvc-backup-ndbmtd-ndbmtd-1
  4. After patching the PVC values, wait for the PV to bind to the PVC. Run the following command to check if the PV reflects the updated PVC values:
    $ kubectl get pv | grep -w occne-cndbtier | grep ndbmtd-1
    Sample output:
    pvc-53fe7988-4a81-40d3-a366-fd894de89535  6Gi  RWO  Delete  Bound  occne-cndbtier/pvc-ndbmtd-ndbmtd-1  occne-dbtier-sc  62m
  5. Upgrade cnDBTier with the modified custom_values.yaml file:
    $ helm upgrade mysql-cluster occndbtier -f occndbtier/custom_values.yaml -n occne-cndbtier --no-hooks
  6. Perform Step 2 through Step 5 for the ndbmtd-0 pod.

    Note:

    If you have more than two data pods, then follow Steps 2 to 5 for each ndbmtd pod in the following order: ndbmtd-n, ndbmtd-(n-1), and so on down to ndbmtd-0. A scripted sketch of this loop is provided after this procedure.
  7. As the cnDBTier Helm upgrade is performed with the "--no-hooks" option in Step 5, the updateStrategy of the cnDBTier StatefulSets is changed to OnDelete. Therefore, perform the cnDBTier upgrade one more time to restore the updateStrategy to RollingUpdate:

    Note:

    Before running the following command, ensure that you set the value of OCCNE_NAMESPACE variable in the command with your cnDBTier namespace.
    helm -n ${OCCNE_NAMESPACE} upgrade ${OCCNE_RELEASE_NAME} occndbtier -f occndbtier/custom_values.yaml
  8. Run the Helm test on the cnDBTier cluster to check the health of the cluster:
    $ helm test mysql-cluster -n occne-cndbtier
    Sample output:
    NAME: mysql-cluster
    LAST DEPLOYED:  Mon May 20 04:43:20 2025
    NAMESPACE: occne-cndbtier
    STATUS: deployed
    REVISION: 4
    TEST SUITE:     mysql-cluster-node-connection-test
    Last Started:    Mon May 20 04:45:15 2025
    Last Completed:  Mon May 20 04:45:35 2025
    Phase:          Succeeded
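
For clusters with more than two data pods, the per-pod steps above can be scripted. The following is a minimal sketch, assuming the PVC naming shown in this procedure, a target size of 6Gi, and four data pods; it walks the pods from the highest ordinal down to ndbmtd-0, mirroring Steps 2 to 5, and finishes with the upgrade that restores the RollingUpdate strategy:
    $ NS=occne-cndbtier
    $ SIZE=6Gi
    $ REPLICAS=4                                   # total number of ndbmtd pods
    $ for i in $(seq $((REPLICAS-1)) -1 0); do
        pod="ndbmtd-${i}"
        # delete the statefulset (keeping its pods) and the pod being resized
        kubectl -n "${NS}" delete sts --cascade=orphan ndbmtd
        kubectl -n "${NS}" delete pod "${pod}"
        # patch the data and backup PVCs of this pod with the new size
        kubectl -n "${NS}" patch pvc "pvc-ndbmtd-${pod}" -p "{\"spec\":{\"resources\":{\"requests\":{\"storage\":\"${SIZE}\"}}}}"
        kubectl -n "${NS}" patch pvc "pvc-backup-ndbmtd-${pod}" -p "{\"spec\":{\"resources\":{\"requests\":{\"storage\":\"${SIZE}\"}}}}"
        # the --no-hooks upgrade recreates the statefulset, which recreates the deleted pod
        helm upgrade mysql-cluster occndbtier -f occndbtier/custom_values.yaml -n "${NS}" --no-hooks
        sleep 30   # give the statefulset controller time to recreate the pod
        kubectl -n "${NS}" wait --for=condition=Ready pod "${pod}" --timeout=15m
      done
    $ # final upgrade with hooks to restore updateStrategy to RollingUpdate
    $ helm -n "${NS}" upgrade mysql-cluster occndbtier -f occndbtier/custom_values.yaml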
Updating PVC Using dbtscale_vertical_pvc
dbtscale_vertical_pvc is an automated script to update PVC without manual intervention. Perform the following steps to update the PVC value using custom_values.yaml:
  1. Update the custom_values.yaml file with the new PVC values for ndbmtd pods. Ensure that you modify only the global.ndb.ndbdisksize section, the global.ndb.ndbbackupdisksize section, or both.

    For example:

    The following code block shows the old PVC values in the custom_values.yaml file:
    global:
      ndb:
        ndbdisksize: 2Gi
        ndbbackupdisksize: 3Gi
    The following code block shows the updated PVC values in the custom_values.yaml file:
    global:
      ndb:
        ndbdisksize: 3Gi
        ndbbackupdisksize: 4Gi
  2. Run the following commands to navigate to the Artifacts/Scripts/tools/ directory and source the source_me file:

    Note:

    • Ensure that the Artifacts/Scripts/tools/ directory contains the source_me file.
    • Enter the namespace of the cnDBTier cluster when prompted.
    $ cd Artifacts/Scripts/tools/
    $ source ./source_me
    Sample output:
    Enter cndbtier namespace: occne-cndbtier
    DBTIER_NAMESPACE = "occne-cndbtier"
    
    DBTIER_LIB=/home/dbtuser/occne/cluster/site-1/Artifacts/Scripts/tools/lib
    
    Adding /home/dbtuser/occne/cluster/site-1/Artifacts/Scripts/tools/bin to PATH
    Adding /home/dbtuser/occne/cluster/site-1/Artifacts/Scripts/tools/bin/rollbackscripts to PATH
    PATH=/home/dbtuser/occne/cluster/site-1/Artifacts/Scripts/tools/bin/rollbackscripts:/home/dbtuser/occne/cluster/site-1/Artifacts/Scripts/tools/bin:/home/dbtuser/.local/bin:/home/dbtuser/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin
  3. Run the dbtscale_vertical_pvc script with the --pods=ndbmtd option and pass the Helm charts and the updated custom_values.yaml file as parameters:
    $ dbtscale_vertical_pvc --pods=ndbmtd occndbtier custom_values_scale_pvc.yaml
    Sample output:
    2024-12-17T14:40:15Z INFO - HELM_CHARTS = occndbtier
    2024-12-17T14:40:15Z INFO - CUSTOM_VALUES_FILE = custom_values_scale_pvc.yaml
    dbtscale_vertical_pvc 25.1.100
    Copyright (c) 2024 Oracle and/or its affiliates. All rights reserved.
    2024-12-17T14:40:15Z INFO - Started timer dbtscale_vertical_pvc: 1734446415
    2024-12-17T14:40:15Z INFO - Started timer PHASE 0: 1734446415
    2024-12-17T14:40:15Z INFO - ****************************************************************************************************
    2024-12-17T14:40:15Z INFO - BEGIN PHASE 0: Collect Site information
    2024-12-17T14:40:15Z INFO - ****************************************************************************************************
    2024-12-17T14:40:15Z INFO - Using IPv4: LOOPBACK_IP="127.0.0.1"
    2024-12-17T14:40:15Z INFO - DBTIER_NAMESPACE = occne-cndbtier
    ...
    2024-12-17T14:40:26Z INFO - PODS_TO_SCALE="ndbmtd"
    2024-12-17T14:40:26Z INFO - POD_TYPE="ndb"
    2024-12-17T14:40:26Z INFO - STS_TO_DELETE="ndbmtd"
    2024-12-17T14:40:26Z INFO - PODS_TO_RESTART="NDB_PODS"
    2024-12-17T14:40:27Z INFO - REPL_LEADER_DEPLOY = mysql-cluster-site-1-site-2-replication-svc
    2024-12-17T14:40:27Z INFO - REPL_LEADER_PVC = pvc-site-1-site-2-replication-svc
    2024-12-17T14:40:27Z INFO - REPL_LEADER_POD = mysql-cluster-site-1-site-2-replication-svc-74df5fww98q
    2024-12-17T14:40:27Z INFO - ****************************************************************************************************
    2024-12-17T14:40:27Z INFO - END PHASE 0: Collect Site information
    2024-12-17T14:40:27Z INFO - ****************************************************************************************************
    2024-12-17T14:40:27Z INFO - Ended timer PHASE 0: 1734446427
    2024-12-17T14:40:27Z INFO - PHASE 0 took: 00 hr. 00 min. 12 sec.
    2024-12-17T14:40:27Z INFO - Started timer PHASE 1: 1734446427
    2024-12-17T14:40:27Z INFO - ****************************************************************************************************
    2024-12-17T14:40:27Z INFO - BEGIN PHASE 1: Verify disk sizes are supported
    2024-12-17T14:40:27Z INFO - ****************************************************************************************************
    2024-12-17T14:40:27Z INFO - Current PVC sizes:
    NAME                                        STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   VOLUMEATTRIBUTESCLASS   AGE
    pvc-backup-ndbmtd-ndbmtd-0                  Bound    pvc-55d60044-2df4-47b7-92aa-4d27d166db7f   2Gi        RWO            standard       <unset>                 30h
    pvc-backup-ndbmtd-ndbmtd-1                  Bound    pvc-1d81991c-7c9a-4d79-9265-ee6c5ddaa8f2   2Gi        RWO            standard       <unset>                 30h
    pvc-ndbmtd-ndbmtd-0                         Bound    pvc-8d55a76a-17e8-47d0-bb4b-8a6c31c76c96   3Gi        RWO            standard       <unset>                 30h
    pvc-ndbmtd-ndbmtd-1                         Bound    pvc-2c6dd1ed-ab48-40f4-ad23-43d9b030f03f   3Gi        RWO            standard       <unset>                 30h
    2024-12-17T14:40:27Z INFO - Requested PVC sizes from custom_values_scale_pvc.yaml:
      ndb:
        ndbdisksize: 4Gi
        ndbbackupdisksize: 3Gi
        use_separate_backup_disk: true
    2024-12-17T14:40:27Z INFO - Current and requested disk values:
    2024-12-17T14:40:27Z INFO - current_pvcsz=3Gi (3221225472), requested_pvcsz=4Gi (4294967296)
    2024-12-17T14:40:27Z INFO - requested_use_separate_backup_disk=true
    2024-12-17T14:40:27Z INFO - current_pvcbackupsz=2Gi (2147483648), requested_pvcbackupsz=3Gi (3221225472)
    2024-12-17T14:40:27Z INFO - Verifying current and requested disk sizes...
    2024-12-17T14:40:27Z INFO - Requested PVC values should not be equal to current - PASSED
    2024-12-17T14:40:27Z INFO - No requested value should be smaller than current - PASSED
    2024-12-17T14:40:27Z INFO - No requested value should be zero or negative - PASSED
    2024-12-17T14:40:27Z INFO - At least one requested value should be larger than its current value - PASSED
    2024-12-17T14:40:27Z INFO - Should not remove backup PVC - PASSED
    2024-12-17T14:40:27Z INFO - db-replication-svc requested PVC values should equal to current - PASSED
    2024-12-17T14:40:27Z INFO - ndbmysqld requested PVC values should equal to current - PASSED
    2024-12-17T14:40:27Z INFO - ndbappmysqld requested PVC values should equal to current - PASSED
    2024-12-17T14:40:27Z INFO - Verified current and requested disk sizes.
    2024-12-17T14:40:27Z INFO - ****************************************************************************************************
    2024-12-17T14:40:27Z INFO - END PHASE 1: Verify disk sizes are supported
    2024-12-17T14:40:27Z INFO - ****************************************************************************************************
    2024-12-17T14:40:27Z INFO - Ended timer PHASE 1: 1734446427
    2024-12-17T14:40:27Z INFO - PHASE 1 took: 00 hr. 00 min. 00 sec.
    2024-12-17T14:40:27Z INFO - Started timer PHASE 2: 1734446427
    2024-12-17T14:40:27Z INFO - ****************************************************************************************************
    2024-12-17T14:40:27Z INFO - BEGIN PHASE 2: Delete/scale down statefulset/deployment
    2024-12-17T14:40:27Z INFO - ****************************************************************************************************
    2024-12-17T14:40:27Z INFO - Deleting STS...
    2024-12-17T14:40:27Z INFO - kubectl -n occne-cndbtier delete sts --cascade=orphan "ndbmtd"
    statefulset.apps "ndbmtd" deleted
    2024-12-17T14:40:27Z INFO - STS deleted
    2024-12-17T14:40:27Z INFO - ****************************************************************************************************
    2024-12-17T14:40:27Z INFO - END PHASE 2: Delete/scale down statefulset/deployment
    2024-12-17T14:40:27Z INFO - ****************************************************************************************************
    2024-12-17T14:40:27Z INFO - Ended timer PHASE 2: 1734446427
    2024-12-17T14:40:27Z INFO - PHASE 2 took: 00 hr. 00 min. 00 sec.
    2024-12-17T14:40:27Z INFO - Started timer PHASE 3: 1734446427
    2024-12-17T14:40:27Z INFO - ****************************************************************************************************
    2024-12-17T14:40:27Z INFO - BEGIN PHASE 3: Upgrade cnDBTier --no-hooks or patch db-replication-svc leader PVC
    2024-12-17T14:40:27Z INFO - ****************************************************************************************************
    2024-12-17T14:40:27Z INFO - Upgrading cnDBTier (custom_values_scale_pvc.yaml)...
    Release "mysql-cluster" has been upgraded. Happy Helming!
    NAME: mysql-cluster
    LAST DEPLOYED: Tue Dec 17 14:40:28 2024
    NAMESPACE: occne-cndbtier
    STATUS: deployed
    REVISION: 10
    2024-12-17T14:40:32Z INFO - Helm upgrade returned
    2024-12-17T14:40:32Z INFO - ****************************************************************************************************
    2024-12-17T14:40:32Z INFO - END PHASE 3: Upgrade cnDBTier --no-hooks or patch db-replication-svc leader PVC
    2024-12-17T14:40:32Z INFO - ****************************************************************************************************
    2024-12-17T14:40:32Z INFO - Ended timer PHASE 3: 1734446432
    2024-12-17T14:40:32Z INFO - PHASE 3 took: 00 hr. 00 min. 05 sec.
    2024-12-17T14:40:32Z INFO - Started timer PHASE 4: 1734446432
    2024-12-17T14:40:32Z INFO - ****************************************************************************************************
    2024-12-17T14:40:32Z INFO - BEGIN PHASE 4: Delete PVCs and restart pods or wait for repl PV with new site to bound
    2024-12-17T14:40:32Z INFO - ****************************************************************************************************
    2024-12-17T14:40:32Z INFO - Determine NDB Node President...
    2024-12-17T14:40:33Z INFO - PRESIDENT_NODE=1
    2024-12-17T14:40:33Z INFO - PRESIDENT_GROUP=0
    2024-12-17T14:40:33Z INFO - NODES_IN_PRESIDENTS_GROUP=(1 2)
    2024-12-17T14:40:33Z INFO - pod_name_witout_id = "ndbmtd-"
    2024-12-17T14:40:33Z INFO - pods_in_president_group = (ndbmtd-0 ndbmtd-1)
    2024-12-17T14:40:33Z INFO - Restart pods after deleting their PVCs...
    2024-12-17T14:40:33Z INFO - Deleting pods NOT in president's group (0)...
    2024-12-17T14:40:33Z INFO - first_set_of_pods_to_delete = ()
    2024-12-17T14:40:33Z INFO - delete_sts_pod_and_its_pvcs_one_at_a_time: nothing to do; pods array is empty ()
    2024-12-17T14:40:33Z INFO - Deleting president pod (ndbmtd-0)...
    2024-12-17T14:40:33Z INFO - president_set = (ndbmtd-0)
    2024-12-17T14:40:33Z INFO - Deleting PVCs for ndbmtd-0 (pvc-backup-ndbmtd-ndbmtd-0 pvc-ndbmtd-ndbmtd-0)...
    2024-12-17T14:40:33Z INFO - deleting pod ndbmtd-0 ...
    persistentvolumeclaim "pvc-backup-ndbmtd-ndbmtd-0" deleted
    persistentvolumeclaim "pvc-ndbmtd-ndbmtd-0" deleted
    pod "ndbmtd-0" deleted
    2024-12-17T14:40:45Z INFO - Waiting for condition: is_pod ndbmtd-0...
    2024-12-17T14:40:45Z INFO - Condition occurred
    2024-12-17T14:41:28Z INFO - Waiting for condition: is_pod_ready ndbmtd-0...
    2024-12-17T14:41:28Z INFO - Condition occurred
    2024-12-17T14:41:28Z INFO - kubectl -n occne-cndbtier get pod ndbmtd-0
    NAME       READY   STATUS    RESTARTS   AGE
    ndbmtd-0   3/3     Running   0          43s
    2024-12-17T14:41:28Z INFO - kubectl -n occne-cndbtier get pvc pvc-backup-ndbmtd-ndbmtd-0 pvc-ndbmtd-ndbmtd-0
    NAME                         STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   VOLUMEATTRIBUTESCLASS   AGE
    pvc-backup-ndbmtd-ndbmtd-0   Bound    pvc-83582dba-f394-4a1e-b1b1-3a1466f643e8   3Gi        RWO            standard       <unset>                 43s
    pvc-ndbmtd-ndbmtd-0          Bound    pvc-0aa26c58-471d-4be3-b148-9c192165daf6   4Gi        RWO            standard       <unset>                 43s
    2024-12-17T14:41:28Z INFO - Deleting non-president pods in president's group (0)...
    2024-12-17T14:41:28Z INFO - pods_in_president_group_without_president = (ndbmtd-1)
    2024-12-17T14:41:28Z INFO - Deleting PVCs for ndbmtd-1 (pvc-backup-ndbmtd-ndbmtd-1 pvc-ndbmtd-ndbmtd-1)...
    2024-12-17T14:41:28Z INFO - deleting pod ndbmtd-1 ...
    persistentvolumeclaim "pvc-backup-ndbmtd-ndbmtd-1" deleted
    persistentvolumeclaim "pvc-ndbmtd-ndbmtd-1" deleted
    pod "ndbmtd-1" deleted
    2024-12-17T14:41:39Z INFO - Waiting for condition: is_pod ndbmtd-1...
    2024-12-17T14:41:39Z INFO - Condition occurred
    2024-12-17T14:42:20Z INFO - Waiting for condition: is_pod_ready ndbmtd-1...
    2024-12-17T14:42:20Z INFO - Condition occurred
    2024-12-17T14:42:20Z INFO - kubectl -n occne-cndbtier get pod ndbmtd-1
    NAME       READY   STATUS    RESTARTS   AGE
    ndbmtd-1   3/3     Running   0          41s
    2024-12-17T14:42:20Z INFO - kubectl -n occne-cndbtier get pvc pvc-backup-ndbmtd-ndbmtd-1 pvc-ndbmtd-ndbmtd-1
    NAME                         STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   VOLUMEATTRIBUTESCLASS   AGE
    pvc-backup-ndbmtd-ndbmtd-1   Bound    pvc-419ea3ef-af00-4e58-a5fb-9a774c764a6e   3Gi        RWO            standard       <unset>                 41s
    pvc-ndbmtd-ndbmtd-1          Bound    pvc-fa0c7826-337d-4805-a1dd-2b6f2961270a   4Gi        RWO            standard       <unset>                 41s
    2024-12-17T14:42:20Z INFO - Pods restarted and their PVCs recreated.
    2024-12-17T14:42:20Z INFO - ****************************************************************************************************
    2024-12-17T14:42:20Z INFO - END PHASE 4: Delete PVCs and restart pods or wait for repl PV with new site to bound
    2024-12-17T14:42:20Z INFO - ****************************************************************************************************
    2024-12-17T14:42:20Z INFO - Ended timer PHASE 4: 1734446540
    2024-12-17T14:42:20Z INFO - PHASE 4 took: 00 hr. 01 min. 48 sec.
    2024-12-17T14:42:20Z INFO - Started timer PHASE 5: 1734446540
    2024-12-17T14:42:20Z INFO - ****************************************************************************************************
    2024-12-17T14:42:20Z INFO - BEGIN PHASE 5: Upgrade cnDBTier
    2024-12-17T14:42:20Z INFO - ****************************************************************************************************
    2024-12-17T14:42:20Z INFO - Upgrading cnDBTier (custom_values_scale_pvc.yaml)...
    Release "mysql-cluster" has been upgraded. Happy Helming!
    NAME: mysql-cluster
    LAST DEPLOYED: Tue Dec 17 14:42:21 2024
    NAMESPACE: occne-cndbtier
    STATUS: deployed
    REVISION: 11
    2024-12-17T14:43:51Z INFO - Helm upgrade returned
    2024-12-17T14:43:51Z INFO - ****************************************************************************************************
    2024-12-17T14:43:51Z INFO - END PHASE 5: Upgrade cnDBTier
    2024-12-17T14:43:51Z INFO - ****************************************************************************************************
    2024-12-17T14:43:51Z INFO - Ended timer PHASE 5: 1734446631
    2024-12-17T14:43:51Z INFO - PHASE 5 took: 00 hr. 01 min. 31 sec.
    2024-12-17T14:43:51Z INFO - Started timer PHASE 6: 1734446631
    2024-12-17T14:43:51Z INFO - ****************************************************************************************************
    2024-12-17T14:43:51Z INFO - BEGIN PHASE 6: Post-processing
    2024-12-17T14:43:51Z INFO - ****************************************************************************************************
    2024-12-17T14:43:51Z INFO - CURRENT_PVCS:
    NAME                                        STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   VOLUMEATTRIBUTESCLASS   AGE
    pvc-backup-ndbmtd-ndbmtd-0                  Bound    pvc-55d60044-2df4-47b7-92aa-4d27d166db7f   2Gi        RWO            standard       <unset>                 30h
    pvc-backup-ndbmtd-ndbmtd-1                  Bound    pvc-1d81991c-7c9a-4d79-9265-ee6c5ddaa8f2   2Gi        RWO            standard       <unset>                 30h
    pvc-ndbmtd-ndbmtd-0                         Bound    pvc-8d55a76a-17e8-47d0-bb4b-8a6c31c76c96   3Gi        RWO            standard       <unset>                 30h
    pvc-ndbmtd-ndbmtd-1                         Bound    pvc-2c6dd1ed-ab48-40f4-ad23-43d9b030f03f   3Gi        RWO            standard       <unset>                 30h
    2024-12-17T14:43:51Z INFO - after_pvcs:
    NAME                                        STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   VOLUMEATTRIBUTESCLASS   AGE
    pvc-backup-ndbmtd-ndbmtd-0                  Bound    pvc-83582dba-f394-4a1e-b1b1-3a1466f643e8   3Gi        RWO            standard       <unset>                 3m6s
    pvc-backup-ndbmtd-ndbmtd-1                  Bound    pvc-419ea3ef-af00-4e58-a5fb-9a774c764a6e   3Gi        RWO            standard       <unset>                 2m12s
    pvc-ndbmtd-ndbmtd-0                         Bound    pvc-0aa26c58-471d-4be3-b148-9c192165daf6   4Gi        RWO            standard       <unset>                 3m6s
    pvc-ndbmtd-ndbmtd-1                         Bound    pvc-fa0c7826-337d-4805-a1dd-2b6f2961270a   4Gi        RWO            standard       <unset>                 2m12s
    2024-12-17T14:43:51Z INFO - ****************************************************************************************************
    2024-12-17T14:43:51Z INFO - END PHASE 6: Post-processing
    2024-12-17T14:43:51Z INFO - ****************************************************************************************************
    2024-12-17T14:43:51Z INFO - Ended timer PHASE 6: 1734446631
    2024-12-17T14:43:51Z INFO - PHASE 6 took: 00 hr. 00 min. 00 sec.
    2024-12-17T14:43:51Z INFO - Ended timer dbtscale_vertical_pvc: 1734446631
    2024-12-17T14:43:51Z INFO - dbtscale_vertical_pvc took: 00 hr. 03 min. 36 sec.
    2024-12-17T14:43:51Z INFO - dbtscale_vertical_pvc completed successfully
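
Once the script reports that it completed successfully, you can optionally cross-check that every ndbmtd PVC shows the requested capacity and that all data pods are running:
    $ kubectl -n occne-cndbtier get pvc | grep ndbmtd    # data and backup PVCs should show the new sizes
    $ kubectl -n occne-cndbtier get pods | grep ndbmtd   # all ndbmtd pods should be Running and ready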
7.3.2.2 Scaling ndbappmysqld Pods
This section provides the procedures to vertically scale up ndbappmysqld pods.

Note:

  • Before scaling the pods, ensure that the worker nodes have adequate resources to support scaling.
  • Before you proceed with the vertical scaling procedure for ndbappmysqld, perform a Helm test and ensure that all the cnDBTier services are running smoothly.

Updating CPU and RAM

Perform the following steps to update CPU and RAM using the custom_values.yaml file:
  1. Configure the required CPU and memory values in the global section of the custom_values.yaml file.

    For example:

    ndbapp:
      resources:
        limits:
          cpu: 8
          memory: 10Gi
        requests:
          cpu: 8
          memory: 10Gi
  2. Upgrade cnDBTier by performing a Helm upgrade with the modified custom_values.yaml file. For more information about the upgrade procedure, see Oracle Communications Cloud Native Core, cnDBTier Installation, Upgrade, and Fault Recovery Guide.

    Note:

    cnDBTier supports upgrade in a TLS enabled cluster in this scenario only.

Updating PVC

Updating PVC Using Helm Upgrade
Perform the following steps to update the PVC value using custom_values.yaml:
  1. Update the custom_values.yaml file with the new PVC values for ndbappmysqld pods:
    global:
      ndbapp:
        ndbdisksize: 10Gi
  2. Delete the ndbappmysqld statefulset and set the dependents to orphan:
    $ kubectl -n occne-cndbtier delete sts --cascade=orphan ndbappmysqld
  3. Delete the ndbappmysqld-1 pod and patch its PVCs with the new PVC value:
    1. Delete the ndbappmysqld-1 pod:
      $ kubectl -n occne-cndbtier delete pod ndbappmysqld-1

      Sample output:

      pod "ndbappmysqld-1" deleted

    2. Patch the PVCs of ndbappmysqld-1 pod with the new PVC values:
      $ kubectl -n occne-cndbtier patch -p '{ "spec": { "resources": { "requests": { "storage": "10Gi" }}}}' pvc pvc-ndbappmysqld-ndbappmysqld-1
    3. Wait for the new PV to bind to the PVC.
  4. Upgrade cnDBTier with the modified custom_values.yaml file:
    $ helm upgrade mysql-cluster occndbtier -f occndbtier/custom_values.yaml -n occne-cndbtier --no-hooks
  5. Repeat Steps 2 through 4 for ndbappmysqld-0.
  6. As the cnDBTier Helm upgrade is performed with the "--no-hooks" option in Step 4, the updateStrategy of the cnDBTier StatefulSets is changed to OnDelete. Therefore, perform the cnDBTier upgrade one more time to restore the updateStrategy to RollingUpdate:

    Note:

    Before running the following command, ensure that you set the value of OCCNE_NAMESPACE variable in the command with your cnDBTier namespace.
    helm -n ${OCCNE_NAMESPACE} upgrade ${OCCNE_RELEASE_NAME} occndbtier -f occndbtier/custom_values.yaml
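
    Optionally, you can confirm that the final upgrade carried the new disk size into the chart-managed statefulset and that the patched PVCs report it. This is only a quick check and assumes the PVC size is rendered into the statefulset's volumeClaimTemplates:
    $ kubectl -n occne-cndbtier get sts ndbappmysqld -o jsonpath='{.spec.volumeClaimTemplates[*].spec.resources.requests.storage}{"\n"}'   # size in the volume claim template
    $ kubectl -n occne-cndbtier get pvc pvc-ndbappmysqld-ndbappmysqld-0 pvc-ndbappmysqld-ndbappmysqld-1                                   # live PVC capacities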
Updating PVC Using dbtscale_vertical_pvc
dbtscale_vertical_pvc is an automated script to update PVC without manual intervention. Perform the following steps to update the PVC value using custom_values.yaml:
  1. Update the custom_values.yaml file with the new PVC values for ndbappmysqld pods. Ensure that you modify only the global.ndbapp.ndbdisksize section.

    For example:

    The following code block shows the old PVC values in the custom_values.yaml file:
    global:
      ndbapp:
        ndbdisksize: 3Gi
    The following code block shows the updated PVC values in the custom_values.yaml file:
    global:
      ndbapp:
        ndbdisksize: 4Gi
  2. Run the following commands to navigate to the Artifacts/Scripts/tools/ directory and source the source_me file:

    Note:

    • Ensure that the Artifacts/Scripts/tools/ directory contains the source_me file.
    • Enter the namespace of the cnDBTier cluster when prompted.
    $ cd Artifacts/Scripts/tools/
    $ source ./source_me
    Sample output:
    Enter cndbtier namespace: occne-cndbtier
    DBTIER_NAMESPACE = "occne-cndbtier"
    
    DBTIER_LIB=/home/dbtuser/occne/cluster/site-1/Artifacts/Scripts/tools/lib
    
    Adding /home/dbtuser/occne/cluster/site-1/Artifacts/Scripts/tools/bin to PATH
    Adding /home/dbtuser/occne/cluster/site-1/Artifacts/Scripts/tools/bin/rollbackscripts to PATH
    PATH=/home/dbtuser/occne/cluster/site-1/Artifacts/Scripts/tools/bin/rollbackscripts:/home/dbtuser/occne/cluster/site-1/Artifacts/Scripts/tools/bin:/home/dbtuser/.local/bin:/home/dbtuser/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin
  3. Run the dbtscale_vertical_pvc script with the --pods=ndbappmysqld option and pass the Helm charts and the updated custom_values.yaml file as parameters:
    $ dbtscale_vertical_pvc --pods=ndbappmysqld occndbtier custom_values_scale_pvc.yaml
    Sample output:
    2024-12-17T18:36:39Z INFO - HELM_CHARTS = occndbtier
    2024-12-17T18:36:39Z INFO - CUSTOM_VALUES_FILE = custom_values_scale_pvc.yaml
    dbtscale_vertical_pvc 25.1.100
    Copyright (c) 2024 Oracle and/or its affiliates. All rights reserved.
    2024-12-17T18:36:39Z INFO - Started timer dbtscale_vertical_pvc: 1734460599
    2024-12-17T18:36:39Z INFO - Started timer PHASE 0: 1734460599
    2024-12-17T18:36:39Z INFO - ****************************************************************************************************
    2024-12-17T18:36:39Z INFO - BEGIN PHASE 0: Collect Site information
    2024-12-17T18:36:39Z INFO - ****************************************************************************************************
    2024-12-17T18:36:39Z INFO - Using IPv4: LOOPBACK_IP="127.0.0.1"
    2024-12-17T18:36:39Z INFO - DBTIER_NAMESPACE = occne-cndbtier
    ...
    2024-12-17T18:36:51Z INFO - PODS_TO_SCALE="ndbappmysqld"
    2024-12-17T18:36:51Z INFO - POD_TYPE="ndbapp"
    2024-12-17T18:36:51Z INFO - STS_TO_DELETE="ndbappmysqld"
    2024-12-17T18:36:51Z INFO - PODS_TO_RESTART="APP_PODS"
    2024-12-17T18:36:51Z INFO - REPL_LEADER_DEPLOY = mysql-cluster-site-1-site-2-replication-svc
    2024-12-17T18:36:51Z INFO - REPL_LEADER_PVC = pvc-site-1-site-2-replication-svc
    2024-12-17T18:36:51Z INFO - REPL_LEADER_POD = mysql-cluster-site-1-site-2-replication-svc-58856dds8ql
    2024-12-17T18:36:51Z INFO - ****************************************************************************************************
    2024-12-17T18:36:51Z INFO - END PHASE 0: Collect Site information
    2024-12-17T18:36:51Z INFO - ****************************************************************************************************
    2024-12-17T18:36:51Z INFO - Ended timer PHASE 0: 1734460611
    2024-12-17T18:36:51Z INFO - PHASE 0 took: 00 hr. 00 min. 12 sec.
    2024-12-17T18:36:51Z INFO - Started timer PHASE 1: 1734460611
    2024-12-17T18:36:51Z INFO - ****************************************************************************************************
    2024-12-17T18:36:51Z INFO - BEGIN PHASE 1: Verify disk sizes are supported
    2024-12-17T18:36:51Z INFO - ****************************************************************************************************
    2024-12-17T18:36:51Z INFO - Current PVC sizes:
    NAME                                        STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   VOLUMEATTRIBUTESCLASS   AGE
    pvc-ndbappmysqld-ndbappmysqld-0             Bound    pvc-64e675c9-458f-48dd-a827-f43784f695ae   3Gi        RWO            standard       <unset>                 33h
    pvc-ndbappmysqld-ndbappmysqld-1             Bound    pvc-4e9879d8-ce42-4115-9d7b-4d079d7d84ea   3Gi        RWO            standard       <unset>                 33h
    2024-12-17T18:36:51Z INFO - Requested PVC sizes from custom_values_scale_pvc.yaml:
      ndbapp:
        ndbdisksize: 4Gi
    2024-12-17T18:36:51Z INFO - Current and requested disk values:
    2024-12-17T18:36:51Z INFO - current_pvcsz=3Gi (3221225472), requested_pvcsz=4Gi (4294967296)
    2024-12-17T18:36:51Z INFO - Verifying current and requested disk sizes...
    2024-12-17T18:36:51Z INFO - Requested PVC values should not be equal to current - PASSED
    2024-12-17T18:36:51Z INFO - No requested value should be smaller than current - PASSED
    2024-12-17T18:36:51Z INFO - No requested value should be zero or negative - PASSED
    2024-12-17T18:36:51Z INFO - At least one requested value should be larger than its current value - PASSED
    2024-12-17T18:36:51Z INFO - db-replication-svc requested PVC values should equal to current - PASSED
    2024-12-17T18:36:51Z INFO - ndbmysqld requested PVC values should equal to current - PASSED
    2024-12-17T18:36:51Z INFO - ndbmtd requested PVC values should equal to current - PASSED
    2024-12-17T18:36:51Z INFO - Verified current and requested disk sizes.
    2024-12-17T18:36:51Z INFO - ****************************************************************************************************
    2024-12-17T18:36:51Z INFO - END PHASE 1: Verify disk sizes are supported
    2024-12-17T18:36:51Z INFO - ****************************************************************************************************
    2024-12-17T18:36:51Z INFO - Ended timer PHASE 1: 1734460611
    2024-12-17T18:36:51Z INFO - PHASE 1 took: 00 hr. 00 min. 00 sec.
    2024-12-17T18:36:51Z INFO - Started timer PHASE 2: 1734460611
    2024-12-17T18:36:51Z INFO - ****************************************************************************************************
    2024-12-17T18:36:51Z INFO - BEGIN PHASE 2: Delete/scale down statefulset/deployment
    2024-12-17T18:36:51Z INFO - ****************************************************************************************************
    2024-12-17T18:36:51Z INFO - Deleting STS...
    2024-12-17T18:36:51Z INFO - kubectl -n occne-cndbtier delete sts --cascade=orphan "ndbappmysqld"
    statefulset.apps "ndbappmysqld" deleted
    2024-12-17T18:36:52Z INFO - STS deleted
    2024-12-17T18:36:52Z INFO - ****************************************************************************************************
    2024-12-17T18:36:52Z INFO - END PHASE 2: Delete/scale down statefulset/deployment
    2024-12-17T18:36:52Z INFO - ****************************************************************************************************
    2024-12-17T18:36:52Z INFO - Ended timer PHASE 2: 1734460612
    2024-12-17T18:36:52Z INFO - PHASE 2 took: 00 hr. 00 min. 01 sec.
    2024-12-17T18:36:52Z INFO - Started timer PHASE 3: 1734460612
    2024-12-17T18:36:52Z INFO - ****************************************************************************************************
    2024-12-17T18:36:52Z INFO - BEGIN PHASE 3: Upgrade cnDBTier --no-hooks or patch db-replication-svc leader PVC
    2024-12-17T18:36:52Z INFO - ****************************************************************************************************
    2024-12-17T18:36:52Z INFO - Upgrading cnDBTier (custom_values_scale_pvc.yaml)...
    Release "mysql-cluster" has been upgraded. Happy Helming!
    NAME: mysql-cluster
    LAST DEPLOYED: Tue Dec 17 18:36:52 2024
    NAMESPACE: occne-cndbtier
    STATUS: deployed
    REVISION: 12
    2024-12-17T18:36:56Z INFO - Helm upgrade returned
    2024-12-17T18:36:56Z INFO - ****************************************************************************************************
    2024-12-17T18:36:56Z INFO - END PHASE 3: Upgrade cnDBTier --no-hooks or patch db-replication-svc leader PVC
    2024-12-17T18:36:56Z INFO - ****************************************************************************************************
    2024-12-17T18:36:56Z INFO - Ended timer PHASE 3: 1734460616
    2024-12-17T18:36:56Z INFO - PHASE 3 took: 00 hr. 00 min. 04 sec.
    2024-12-17T18:36:56Z INFO - Started timer PHASE 4: 1734460616
    2024-12-17T18:36:56Z INFO - ****************************************************************************************************
    2024-12-17T18:36:56Z INFO - BEGIN PHASE 4: Delete PVCs and restart pods or wait for repl PV with new site to bound
    2024-12-17T18:36:56Z INFO - ****************************************************************************************************
    2024-12-17T18:36:56Z INFO - Restart pods after deleting their PVCs...
    2024-12-17T18:36:56Z INFO - pods_to_delete = (ndbappmysqld-0 ndbappmysqld-1)
    2024-12-17T18:36:56Z INFO - Deleting PVCs for ndbappmysqld-0 (pvc-ndbappmysqld-ndbappmysqld-0)...
    2024-12-17T18:36:56Z INFO - deleting pod ndbappmysqld-0 ...
    persistentvolumeclaim "pvc-ndbappmysqld-ndbappmysqld-0" deleted
    pod "ndbappmysqld-0" deleted
    2024-12-17T18:37:01Z INFO - Waiting for condition: is_pod ndbappmysqld-0...
    2024-12-17T18:37:01Z INFO - Condition occurred
    2024-12-17T18:37:51Z INFO - Waiting for condition: is_pod_ready ndbappmysqld-0...
    2024-12-17T18:37:51Z INFO - Condition occurred
    2024-12-17T18:37:51Z INFO - Deleting PVCs for ndbappmysqld-1 (pvc-ndbappmysqld-ndbappmysqld-1)...
    2024-12-17T18:37:51Z INFO - deleting pod ndbappmysqld-1 ...
    persistentvolumeclaim "pvc-ndbappmysqld-ndbappmysqld-1" deleted
    pod "ndbappmysqld-1" deleted
    2024-12-17T18:37:55Z INFO - Waiting for condition: is_pod ndbappmysqld-1...
    2024-12-17T18:37:55Z INFO - Condition occurred
    2024-12-17T18:38:36Z INFO - Waiting for condition: is_pod_ready ndbappmysqld-1...
    2024-12-17T18:38:36Z INFO - Condition occurred
    2024-12-17T18:38:36Z INFO - kubectl -n occne-cndbtier get pod ndbappmysqld-0 ndbappmysqld-1
    NAME             READY   STATUS    RESTARTS   AGE
    ndbappmysqld-0   2/2     Running   0          95s
    ndbappmysqld-1   2/2     Running   0          41s
    2024-12-17T18:38:36Z INFO - kubectl -n occne-cndbtier get pvc pvc-ndbappmysqld-ndbappmysqld-0 pvc-ndbappmysqld-ndbappmysqld-1
    NAME                              STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   VOLUMEATTRIBUTESCLASS   AGE
    pvc-ndbappmysqld-ndbappmysqld-0   Bound    pvc-ccd68518-ea7e-46b7-95f3-3db730a0f914   4Gi        RWO            standard       <unset>                 95s
    pvc-ndbappmysqld-ndbappmysqld-1   Bound    pvc-0f5ba7b1-bb0a-4125-bd5e-738b8b486bc6   4Gi        RWO            standard       <unset>                 41s
    2024-12-17T18:38:36Z INFO - Pods restarted and their PVCs recreated.
    2024-12-17T18:38:36Z INFO - ****************************************************************************************************
    2024-12-17T18:38:36Z INFO - END PHASE 4: Delete PVCs and restart pods or wait for repl PV with new site to bound
    2024-12-17T18:38:36Z INFO - ****************************************************************************************************
    2024-12-17T18:38:36Z INFO - Ended timer PHASE 4: 1734460716
    2024-12-17T18:38:36Z INFO - PHASE 4 took: 00 hr. 01 min. 40 sec.
    2024-12-17T18:38:36Z INFO - Started timer PHASE 5: 1734460716
    2024-12-17T18:38:36Z INFO - ****************************************************************************************************
    2024-12-17T18:38:36Z INFO - BEGIN PHASE 5: Upgrade cnDBTier
    2024-12-17T18:38:36Z INFO - ****************************************************************************************************
    2024-12-17T18:38:36Z INFO - Upgrading cnDBTier (custom_values_scale_pvc.yaml)...
    Release "mysql-cluster" has been upgraded. Happy Helming!
    NAME: mysql-cluster
    LAST DEPLOYED: Tue Dec 17 18:38:36 2024
    NAMESPACE: occne-cndbtier
    STATUS: deployed
    REVISION: 13
    2024-12-17T18:40:17Z INFO - Helm upgrade returned
    2024-12-17T18:40:17Z INFO - ****************************************************************************************************
    2024-12-17T18:40:17Z INFO - END PHASE 5: Upgrade cnDBTier
    2024-12-17T18:40:17Z INFO - ****************************************************************************************************
    2024-12-17T18:40:17Z INFO - Ended timer PHASE 5: 1734460817
    2024-12-17T18:40:17Z INFO - PHASE 5 took: 00 hr. 01 min. 41 sec.
    2024-12-17T18:40:17Z INFO - Started timer PHASE 6: 1734460817
    2024-12-17T18:40:17Z INFO - ****************************************************************************************************
    2024-12-17T18:40:17Z INFO - BEGIN PHASE 6: Post-processing
    2024-12-17T18:40:17Z INFO - ****************************************************************************************************
    2024-12-17T18:40:17Z INFO - CURRENT_PVCS:
    NAME                                        STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   VOLUMEATTRIBUTESCLASS   AGE
    pvc-ndbappmysqld-ndbappmysqld-0             Bound    pvc-64e675c9-458f-48dd-a827-f43784f695ae   3Gi        RWO            standard       <unset>                 33h
    pvc-ndbappmysqld-ndbappmysqld-1             Bound    pvc-4e9879d8-ce42-4115-9d7b-4d079d7d84ea   3Gi        RWO            standard       <unset>                 33h
    2024-12-17T18:40:17Z INFO - after_pvcs:
    NAME                                        STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   VOLUMEATTRIBUTESCLASS   AGE
    pvc-ndbappmysqld-ndbappmysqld-0             Bound    pvc-ccd68518-ea7e-46b7-95f3-3db730a0f914   4Gi        RWO            standard       <unset>                 3m16s
    pvc-ndbappmysqld-ndbappmysqld-1             Bound    pvc-0f5ba7b1-bb0a-4125-bd5e-738b8b486bc6   4Gi        RWO            standard       <unset>                 2m22s
    2024-12-17T18:40:17Z INFO - ****************************************************************************************************
    2024-12-17T18:40:17Z INFO - END PHASE 6: Post-processing
    2024-12-17T18:40:17Z INFO - ****************************************************************************************************
    2024-12-17T18:40:17Z INFO - Ended timer PHASE 6: 1734460817
    2024-12-17T18:40:17Z INFO - PHASE 6 took: 00 hr. 00 min. 00 sec.
    2024-12-17T18:40:17Z INFO - Ended timer dbtscale_vertical_pvc: 1734460817
    2024-12-17T18:40:17Z INFO - dbtscale_vertical_pvc took: 00 hr. 03 min. 38 sec.
    2024-12-17T18:40:17Z INFO - dbtscale_vertical_pvc completed successfully
7.3.2.3 Scaling ndbmysqld Pods
This section provides the procedures to vertically scale up ndbmysqld pods.

Note:

  • Before scaling the pods, ensure that the worker nodes have adequate resources to support scaling.
  • Before you proceed with the vertical scaling procedure for ndbmysqld, perform a Helm test and ensure that all the cnDBTier services are running smoothly.

Updating CPU and RAM

Perform the following steps to update CPU and RAM using the custom_values.yaml file:
  1. Configure the required CPU and memory values in the global section of the custom_values.yaml file.

    For example:

    api:
      resources:
        limits:
          cpu: 8
          memory: 10Gi
        requests:
          cpu: 8
          memory: 10Gi
  2. Upgrade cnDBTier by performing a Helm upgrade with the modified custom_values.yaml file. For more information about the upgrade procedure, see Oracle Communications Cloud Native Core, cnDBTier Installation, Upgrade, and Fault Recovery Guide.

Updating PVC

Updating PVC Using Helm Upgrade
Perform the following steps to update the PVC value using custom_values.yaml:
  1. Update the custom_values.yaml file with the new PVC values for ndbmysqld pods:
    global:
      api:
        ndbdisksize: 10Gi
  2. Delete the ndbmysqld statefulset and set the dependents to orphan:
    $ kubectl -n occne-cndbtier delete sts --cascade=orphan ndbmysqld
  3. Delete the ndbmysqld-1 pod and patch its PVCs with the new PVC value:
    1. Delete the ndbmysqld-1 pod:
      $ kubectl -n occne-cndbtier delete pod ndbmysqld-1

      Sample output:

      pod "ndbmysqld-1" deleted

    2. Patch the PVCs of ndbmysqld-1 pod with the new PVC values:
      $ kubectl -n occne-cndbtier patch -p '{ "spec": { "resources": { "requests": { "storage": "10Gi" }}}}' pvc pvc-ndbmysqld-ndbmysqld-1
  4. Wait for the new PV to bind to the PVC. Run the following command to check if the PV reflects the updated PVC values:
    $ kubectl get pv | grep -w occne-cndbtier | grep ndbmysqld-1
    Sample output:
    pvc-ee960e06-6fae-48de-bd8d-1212fb26c24b   10Gi        RWO            Delete           Bound         occne-cndbtier/pvc-ndbmysqld-ndbmysqld-1
  5. Upgrade cnDBTier with the modified custom_values.yaml file:
    $ helm upgrade mysql-cluster occndbtier -f occndbtier/custom_values.yaml -n occne-cndbtier --no-hooks
  6. Repeat Steps 2 through 5 for ndbmysqld-0.
  7. As the cnDBTier Helm upgrade is performed with the "--no-hooks" option in Step 5, the updateStrategy of the cnDBTier StatefulSets is changed to OnDelete. Therefore, perform the cnDBTier upgrade one more time to restore the updateStrategy to RollingUpdate:

    Note:

    Before running the following command, ensure that you set the value of OCCNE_NAMESPACE variable in the command with your cnDBTier namespace.
    helm -n ${OCCNE_NAMESPACE} upgrade ${OCCNE_RELEASE_NAME} occndbtier -f occndbtier/custom_values.yaml
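
    After this final upgrade, the statefulset is back on the RollingUpdate strategy, so you can optionally monitor the rollout of the ndbmysqld pods and confirm the new PVC sizes:
    $ kubectl -n ${OCCNE_NAMESPACE} rollout status sts/ndbmysqld    # waits until all ndbmysqld pods are updated and ready
    $ kubectl -n ${OCCNE_NAMESPACE} get pvc | grep ndbmysqld        # both PVCs should show the new size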
Updating PVC Using dbtscale_vertical_pvc
dbtscale_vertical_pvc is an automated script to update PVC without manual intervention. Perform the following steps to update the PVC value using custom_values.yaml:
  1. Update the custom_values.yaml file with the new PVC values for ndbmysqld pods. Ensure that you modify only the global.api.ndbdisksize section.

    For example:

    The following code block shows the old PVC values in the custom_values.yaml file:
    global:
      api:
        ndbdisksize: 3Gi
    The following code block shows the updated PVC values in the custom_values.yaml file:
    global:
      api:
        ndbdisksize: 4Gi
  2. Run the following commands to navigate to the Artifacts/Scripts/tools/ directory and source the source_me file:

    Note:

    • Ensure that the Artifacts/Scripts/tools/ directory contains the source_me file.
    • Enter the namespace of the cnDBTier cluster when prompted.
    $ cd Artifacts/Scripts/tools/
    $ source ./source_me
    Sample output:
    Enter cndbtier namespace: occne-cndbtier
    DBTIER_NAMESPACE = "occne-cndbtier"
    
    DBTIER_LIB=/home/dbtuser/occne/cluster/site-1/Artifacts/Scripts/tools/lib
    
    Adding /home/dbtuser/occne/cluster/site-1/Artifacts/Scripts/tools/bin to PATH
    Adding /home/dbtuser/occne/cluster/site-1/Artifacts/Scripts/tools/bin/rollbackscripts to PATH
    PATH=/home/dbtuser/occne/cluster/site-1/Artifacts/Scripts/tools/bin/rollbackscripts:/home/dbtuser/occne/cluster/site-1/Artifacts/Scripts/tools/bin:/home/dbtuser/.local/bin:/home/dbtuser/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin
    
  3. Run the dbtscale_vertical_pvc script with the --pods=ndbmysqld option and pass the Helm charts and the updated custom_values.yaml file as parameters:
    $ dbtscale_vertical_pvc --pods=ndbmysqld occndbtier custom_values_scale_pvc.yaml
    Sample output:
    2024-12-17T18:54:40Z INFO - HELM_CHARTS = occndbtier
    2024-12-17T18:54:40Z INFO - CUSTOM_VALUES_FILE = custom_values_scale_pvc.yaml
    dbtscale_vertical_pvc 25.1.100
    Copyright (c) 2024 Oracle and/or its affiliates. All rights reserved.
    2024-12-17T18:54:40Z INFO - Started timer dbtscale_vertical_pvc: 1734461680
    2024-12-17T18:54:40Z INFO - Started timer PHASE 0: 1734461680
    2024-12-17T18:54:40Z INFO - ****************************************************************************************************
    2024-12-17T18:54:40Z INFO - BEGIN PHASE 0: Collect Site information
    2024-12-17T18:54:40Z INFO - ****************************************************************************************************
    2024-12-17T18:54:40Z INFO - Using IPv4: LOOPBACK_IP="127.0.0.1"
    2024-12-17T18:54:40Z INFO - DBTIER_NAMESPACE = occne-cndbtier
    ...
    2024-12-17T18:54:52Z INFO - PODS_TO_SCALE="ndbmysqld"
    2024-12-17T18:54:52Z INFO - POD_TYPE="api"
    2024-12-17T18:54:52Z INFO - STS_TO_DELETE="ndbmysqld"
    2024-12-17T18:54:52Z INFO - PODS_TO_RESTART="API_PODS"
    2024-12-17T18:54:52Z INFO - REPL_LEADER_DEPLOY = mysql-cluster-site-1-site-2-replication-svc
    2024-12-17T18:54:52Z INFO - REPL_LEADER_PVC = pvc-site-1-site-2-replication-svc
    2024-12-17T18:54:52Z INFO - REPL_LEADER_POD = mysql-cluster-site-1-site-2-replication-svc-58856dds8ql
    2024-12-17T18:54:52Z INFO - ****************************************************************************************************
    2024-12-17T18:54:52Z INFO - END PHASE 0: Collect Site information
    2024-12-17T18:54:52Z INFO - ****************************************************************************************************
    2024-12-17T18:54:52Z INFO - Ended timer PHASE 0: 1734461692
    2024-12-17T18:54:52Z INFO - PHASE 0 took: 00 hr. 00 min. 12 sec.
    2024-12-17T18:54:52Z INFO - Started timer PHASE 1: 1734461692
    2024-12-17T18:54:52Z INFO - ****************************************************************************************************
    2024-12-17T18:54:52Z INFO - BEGIN PHASE 1: Verify disk sizes are supported
    2024-12-17T18:54:52Z INFO - ****************************************************************************************************
    2024-12-17T18:54:52Z INFO - Current PVC sizes:
    NAME                                        STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   VOLUMEATTRIBUTESCLASS   AGE
    pvc-ndbmysqld-ndbmysqld-0                   Bound    pvc-09a977c0-8fea-47cc-b010-8ac02f870e53   3Gi        RWO            standard       <unset>                 34h
    pvc-ndbmysqld-ndbmysqld-1                   Bound    pvc-d136cd51-f4f0-4142-b92d-45f40d4fe65c   3Gi        RWO            standard       <unset>                 34h
    2024-12-17T18:54:52Z INFO - Requested PVC sizes from custom_values_scale_pvc.yaml:
      api:
        ndbdisksize: 4Gi
    2024-12-17T18:54:52Z INFO - Current and requested disk values:
    2024-12-17T18:54:52Z INFO - current_pvcsz=3Gi (3221225472), requested_pvcsz=4Gi (4294967296)
    2024-12-17T18:54:52Z INFO - Verifying current and requested disk sizes...
    2024-12-17T18:54:52Z INFO - Requested PVC values should not be equal to current - PASSED
    2024-12-17T18:54:52Z INFO - No requested value should be smaller than current - PASSED
    2024-12-17T18:54:52Z INFO - No requested value should be zero or negative - PASSED
    2024-12-17T18:54:52Z INFO - At least one requested value should be larger than its current value - PASSED
    2024-12-17T18:54:52Z INFO - db-replication-svc requested PVC values should equal to current - PASSED
    2024-12-17T18:54:52Z INFO - ndbappmysqld requested PVC values should equal to current - PASSED
    2024-12-17T18:54:52Z INFO - ndbmtd requested PVC values should equal to current - PASSED
    2024-12-17T18:54:52Z INFO - Verified current and requested disk sizes.
    2024-12-17T18:54:52Z INFO - ****************************************************************************************************
    2024-12-17T18:54:52Z INFO - END PHASE 1: Verify disk sizes are supported
    2024-12-17T18:54:52Z INFO - ****************************************************************************************************
    2024-12-17T18:54:52Z INFO - Ended timer PHASE 1: 1734461692
    2024-12-17T18:54:52Z INFO - PHASE 1 took: 00 hr. 00 min. 00 sec.
    2024-12-17T18:54:52Z INFO - Started timer PHASE 2: 1734461692
    2024-12-17T18:54:52Z INFO - ****************************************************************************************************
    2024-12-17T18:54:52Z INFO - BEGIN PHASE 2: Delete/scale down statefulset/deployment
    2024-12-17T18:54:52Z INFO - ****************************************************************************************************
    2024-12-17T18:54:52Z INFO - Deleting STS...
    2024-12-17T18:54:52Z INFO - kubectl -n occne-cndbtier delete sts --cascade=orphan "ndbmysqld"
    statefulset.apps "ndbmysqld" deleted
    2024-12-17T18:54:52Z INFO - STS deleted
    2024-12-17T18:54:52Z INFO - ****************************************************************************************************
    2024-12-17T18:54:52Z INFO - END PHASE 2: Delete/scale down statefulset/deployment
    2024-12-17T18:54:52Z INFO - ****************************************************************************************************
    2024-12-17T18:54:52Z INFO - Ended timer PHASE 2: 1734461692
    2024-12-17T18:54:52Z INFO - PHASE 2 took: 00 hr. 00 min. 00 sec.
    2024-12-17T18:54:52Z INFO - Started timer PHASE 3: 1734461692
    2024-12-17T18:54:52Z INFO - ****************************************************************************************************
    2024-12-17T18:54:52Z INFO - BEGIN PHASE 3: Upgrade cnDBTier --no-hooks or patch db-replication-svc leader PVC
    2024-12-17T18:54:52Z INFO - ****************************************************************************************************
    2024-12-17T18:54:52Z INFO - Upgrading cnDBTier (custom_values_scale_pvc.yaml)...
    Release "mysql-cluster" has been upgraded. Happy Helming!
    NAME: mysql-cluster
    LAST DEPLOYED: Tue Dec 17 18:54:53 2024
    NAMESPACE: occne-cndbtier
    STATUS: deployed
    REVISION: 14
    2024-12-17T18:54:57Z INFO - Helm upgrade returned
    2024-12-17T18:54:57Z INFO - ****************************************************************************************************
    2024-12-17T18:54:57Z INFO - END PHASE 3: Upgrade cnDBTier --no-hooks or patch db-replication-svc leader PVC
    2024-12-17T18:54:57Z INFO - ****************************************************************************************************
    2024-12-17T18:54:57Z INFO - Ended timer PHASE 3: 1734461697
    2024-12-17T18:54:57Z INFO - PHASE 3 took: 00 hr. 00 min. 05 sec.
    2024-12-17T18:54:57Z INFO - Started timer PHASE 4: 1734461697
    2024-12-17T18:54:57Z INFO - ****************************************************************************************************
    2024-12-17T18:54:57Z INFO - BEGIN PHASE 4: Delete PVCs and restart pods or wait for repl PV with new site to bound
    2024-12-17T18:54:57Z INFO - ****************************************************************************************************
    2024-12-17T18:54:57Z INFO - Restart pods after deleting their PVCs...
    2024-12-17T18:54:57Z INFO - pods_to_delete = (ndbmysqld-0 ndbmysqld-1)
    2024-12-17T18:54:57Z INFO - Deleting PVCs for ndbmysqld-0 (pvc-ndbmysqld-ndbmysqld-0)...
    2024-12-17T18:54:57Z INFO - deleting pod ndbmysqld-0 ...
    persistentvolumeclaim "pvc-ndbmysqld-ndbmysqld-0" deleted
    pod "ndbmysqld-0" deleted
    2024-12-17T18:55:08Z INFO - Waiting for condition: is_pod ndbmysqld-0...
    2024-12-17T18:55:08Z INFO - Condition occurred
    2024-12-17T18:55:49Z INFO - Waiting for condition: is_pod_ready ndbmysqld-0...
    2024-12-17T18:55:49Z INFO - Condition occurred
    2024-12-17T18:55:49Z INFO - Deleting PVCs for ndbmysqld-1 (pvc-ndbmysqld-ndbmysqld-1)...
    2024-12-17T18:55:49Z INFO - deleting pod ndbmysqld-1 ...
    persistentvolumeclaim "pvc-ndbmysqld-ndbmysqld-1" deleted
    pod "ndbmysqld-1" deleted
    2024-12-17T18:56:00Z INFO - Waiting for condition: is_pod ndbmysqld-1...
    2024-12-17T18:56:00Z INFO - Condition occurred
    2024-12-17T18:56:40Z INFO - Waiting for condition: is_pod_ready ndbmysqld-1...
    2024-12-17T18:56:40Z INFO - Condition occurred
    2024-12-17T18:56:40Z INFO - kubectl -n occne-cndbtier get pod ndbmysqld-0 ndbmysqld-1
    NAME          READY   STATUS    RESTARTS   AGE
    ndbmysqld-0   3/3     Running   0          92s
    ndbmysqld-1   3/3     Running   0          41s
    2024-12-17T18:56:40Z INFO - kubectl -n occne-cndbtier get pvc pvc-ndbmysqld-ndbmysqld-0 pvc-ndbmysqld-ndbmysqld-1
    NAME                        STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   VOLUMEATTRIBUTESCLASS   AGE
    pvc-ndbmysqld-ndbmysqld-0   Bound    pvc-84025c03-7f43-4c9a-8042-1c241ba365b5   4Gi        RWO            standard       <unset>                 92s
    pvc-ndbmysqld-ndbmysqld-1   Bound    pvc-7f626556-8516-46fb-ac1c-b0196b02fecf   4Gi        RWO            standard       <unset>                 41s
    2024-12-17T18:56:40Z INFO - Pods restarted and their PVCs recreated.
    2024-12-17T18:56:40Z INFO - ****************************************************************************************************
    2024-12-17T18:56:40Z INFO - END PHASE 4: Delete PVCs and restart pods or wait for repl PV with new site to bound
    2024-12-17T18:56:40Z INFO - ****************************************************************************************************
    2024-12-17T18:56:40Z INFO - Ended timer PHASE 4: 1734461800
    2024-12-17T18:56:40Z INFO - PHASE 4 took: 00 hr. 01 min. 43 sec.
    2024-12-17T18:56:40Z INFO - Started timer PHASE 5: 1734461800
    2024-12-17T18:56:40Z INFO - ****************************************************************************************************
    2024-12-17T18:56:40Z INFO - BEGIN PHASE 5: Upgrade cnDBTier
    2024-12-17T18:56:40Z INFO - ****************************************************************************************************
    2024-12-17T18:56:40Z INFO - Upgrading cnDBTier (custom_values_scale_pvc.yaml)...
    Release "mysql-cluster" has been upgraded. Happy Helming!
    NAME: mysql-cluster
    LAST DEPLOYED: Tue Dec 17 18:56:41 2024
    NAMESPACE: occne-cndbtier
    STATUS: deployed
    REVISION: 15
    2024-12-17T18:59:02Z INFO - Helm upgrade returned
    2024-12-17T18:59:02Z INFO - ****************************************************************************************************
    2024-12-17T18:59:02Z INFO - END PHASE 5: Upgrade cnDBTier
    2024-12-17T18:59:02Z INFO - ****************************************************************************************************
    2024-12-17T18:59:02Z INFO - Ended timer PHASE 5: 1734461942
    2024-12-17T18:59:02Z INFO - PHASE 5 took: 00 hr. 02 min. 22 sec.
    2024-12-17T18:59:02Z INFO - Started timer PHASE 6: 1734461942
    2024-12-17T18:59:02Z INFO - ****************************************************************************************************
    2024-12-17T18:59:02Z INFO - BEGIN PHASE 6: Post-processing
    2024-12-17T18:59:02Z INFO - ****************************************************************************************************
    2024-12-17T18:59:02Z INFO - CURRENT_PVCS:
    NAME                                        STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   VOLUMEATTRIBUTESCLASS   AGE
    pvc-ndbmysqld-ndbmysqld-0                   Bound    pvc-09a977c0-8fea-47cc-b010-8ac02f870e53   3Gi        RWO            standard       <unset>                 34h
    pvc-ndbmysqld-ndbmysqld-1                   Bound    pvc-d136cd51-f4f0-4142-b92d-45f40d4fe65c   3Gi        RWO            standard       <unset>                 34h
    2024-12-17T18:59:02Z INFO - after_pvcs:
    NAME                                        STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   VOLUMEATTRIBUTESCLASS   AGE
    pvc-ndbmysqld-ndbmysqld-0                   Bound    pvc-84025c03-7f43-4c9a-8042-1c241ba365b5   4Gi        RWO            standard       <unset>                 3m54s
    pvc-ndbmysqld-ndbmysqld-1                   Bound    pvc-7f626556-8516-46fb-ac1c-b0196b02fecf   4Gi        RWO            standard       <unset>                 3m3s
    2024-12-17T18:59:02Z INFO - ****************************************************************************************************
    2024-12-17T18:59:02Z INFO - END PHASE 6: Post-processing
    2024-12-17T18:59:02Z INFO - ****************************************************************************************************
    2024-12-17T18:59:02Z INFO - Ended timer PHASE 6: 1734461942
    2024-12-17T18:59:02Z INFO - PHASE 6 took: 00 hr. 00 min. 00 sec.
    2024-12-17T18:59:02Z INFO - Ended timer dbtscale_vertical_pvc: 1734461942
    2024-12-17T18:59:02Z INFO - dbtscale_vertical_pvc took: 00 hr. 04 min. 22 sec.
    2024-12-17T18:59:02Z INFO - dbtscale_vertical_pvc completed successfully
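    Optionally, after the script reports success, verify the resized PVCs directly. The following check repeats the query shown in the output above and assumes the example namespace and PVC names:
    $ kubectl -n occne-cndbtier get pvc pvc-ndbmysqld-ndbmysqld-0 pvc-ndbmysqld-ndbmysqld-1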
7.3.2.4 Vertical Scaling of db-replication-svc Pods

This section provides the procedures to vertically scale up db-replication-svc pods.

Note:

  • Before scaling the pods, ensure that the worker nodes have adequate resources to support scaling.
  • Before you proceed with the vertical scaling of db-replication-svc, perform a Helm test and ensure that all the cnDBTier services are running smoothly.

Updating CPU and RAM

Perform the following steps to update CPU and RAM using the custom_values.yaml file:
  1. Configure the required CPU and memory values in the custom_values.yaml file.

    For example:

    db-replication-svc:
      resources:
        limits:
          cpu: 1
          memory: 2048Mi
          ephemeral-storage: 1Gi
        requests:
          cpu: 0.6
          memory: 1024Mi
          ephemeral-storage: 90Mi
  2. Upgrade cnDBTier by performing a Helm upgrade with the modified custom_values.yaml file. For more information about the upgrade procedure, see Oracle Communications Cloud Native Core, cnDBTier Installation, Upgrade, and Fault Recovery Guide.

    Note:

    cnDBTier supports upgrade in a TLS enabled cluster in this scenario only.
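    For reference, the Helm upgrade with the modified file typically takes the following form, where the release name (mysql-cluster), chart directory (occndbtier), and namespace (occne-cndbtier) are examples that must match your deployment:
    $ helm upgrade mysql-cluster occndbtier -f occndbtier/custom_values.yaml -n occne-cndbtier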

Updating PVC

Updating PVC Using Helm Upgrade
Perform the following steps to update the PVC value using custom_values.yaml:
  1. Perform the following steps to scale down the replication service pod:
    1. Identify the replication service of cnDBTier cluster1 with respect to cnDBTier cluster2:
      $ kubectl get deployment --namespace=cluster1
      Sample output:
      NAME                                              READY   UP-TO-DATE   AVAILABLE   AGE
      mysql-cluster-cluster1-cluster2-replication-svc   1/1     1            1           18m
      mysql-cluster-db-backup-manager-svc               1/1     1            1           18m
      mysql-cluster-db-monitor-svc                      1/1     1            1           18m
    2. Scale down the replication service of cnDBTier cluster1 with respect to cnDBTier cluster2:
      $ kubectl scale deployment mysql-cluster-cluster1-cluster2-replication-svc --namespace=cluster1 --replicas=0
      Sample output:
      deployment.apps/mysql-cluster-cluster1-cluster2-replication-svc scaled
  2. Update the custom_values.yaml file with the new PVC values for the db-replication-svc deployment:

    The following example shows the PVC value in the custom_values.yaml file updated from 8Gi to 10Gi:

    Old PVC value:
    db-replication-svc:
      dbreplsvcdeployments:
        - name: cluster1-cluster2-replication-svc
          enabled: true
          pvc:
            name: pvc-cluster1-cluster2-replication-svc
            disksize: 8Gi

    New PVC value:

    db-replication-svc:
      dbreplsvcdeployments:
        - name: cluster1-cluster2-replication-svc
          enabled: true
          pvc:
            name: pvc-cluster1-cluster2-replication-svc
            disksize: 10Gi

    Note:

    If you provide a pod prefix for the DB replication service name, use a name that is unique and short.
  3. Patch PVCs of the DB replication service with the new PVC value:
    $ kubectl -n cluster1 patch -p '{ "spec": { "resources": { "requests": { "storage": "10Gi" }}}}' pvc pvc-cluster1-cluster2-replication-svc
    Sample output:
    persistentvolumeclaim/pvc-cluster1-cluster2-replication-svc patched
  4. Wait until the resized PV is bound to the existing PVC. Run the following commands to check the events and verify the PV size:
    $ kubectl get events -n cluster1
    $ kubectl get pv | grep cluster1 | grep repl
    Sample output:
    pvc-5b9917ff-53d4-44cb-a5ef-3e37b201c376   8Gi        RWO            Delete           Bound    cluster2/pvc-cluster2-cluster1-replication-svc   occne-dbtier-sc            12m
    pvc-e6526259-d007-4868-a4e2-d53a8569edc8   10Gi       RWO            Delete           Bound    cluster1/pvc-cluster1-cluster2-replication-svc   occne-dbtier-sc            15m
  5. Upgrade cnDBTier with the modified custom_values.yaml file:
    $ helm upgrade mysql-cluster occndbtier -f occndbtier/custom_values.yaml -n cluster1 

    When the helm upgrade is complete, the db-replication-svc pod comes up with the extended PVC size.
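    You can optionally confirm the result using the deployment and PVC names from this example:
    $ kubectl -n cluster1 get deployment mysql-cluster-cluster1-cluster2-replication-svc
    $ kubectl -n cluster1 get pvc pvc-cluster1-cluster2-replication-svc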

Updating PVC Using dbtscale_vertical_pvc
dbtscale_vertical_pvc is an automated script that updates the PVC size without manual intervention. Perform the following steps to update the PVC value using the custom_values.yaml file:
  1. Update the custom_values.yaml file with the new PVC values for the replication service pods. Ensure that you modify the .db-replication-svc.dbreplsvcdeployments[0].pvc.disksize section only.

    For example:

    The following code block shows the old PVC values in the custom_values.yaml file:
    db-replication-svc:
      dbreplsvcdeployments:
        # if pod prefix is given then use the unique smaller name for this db replication service.
        - name: site-1-site-2-replication-svc
          enabled: true
          pvc:
            name: pvc-site-1-site-2-replication-svc
            disksize: 3Gi 
    The following code block shows the updated PVC values in the custom_values.yaml file:
    db-replication-svc:
      dbreplsvcdeployments:
        # if pod prefix is given then use the unique smaller name for this db replication service.
        - name: site-1-site-2-replication-svc
          enabled: true
          pvc:
            name: pvc-site-1-site-2-replication-svc
            disksize: 4Gi 
  2. Run the following commands to navigate to the Artifacts/Scripts/tools/ directory and source the source_me file:

    Note:

    • Ensure that the Artifacts/Scripts/tools/ directory contains the source_me file.
    • After running the script, enter the namespace of the cnDBTier cluster when prompted.
    $ cd Artifacts/Scripts/tools/
    $ source ./source_me
    Sample output:
    Enter cndbtier namespace: occne-cndbtier
    DBTIER_NAMESPACE = "occne-cndbtier"
    
    DBTIER_LIB=/home/dbtuser/occne/cluster/site-1/Artifacts/Scripts/tools/lib
    
    Adding /home/dbtuser/occne/cluster/site-1/Artifacts/Scripts/tools/bin to PATH
    Adding /home/dbtuser/occne/cluster/site-1/Artifacts/Scripts/tools/bin/rollbackscripts to PATH
    PATH=/home/dbtuser/occne/cluster/site-1/Artifacts/Scripts/tools/bin/rollbackscripts:/home/dbtuser/occne/cluster/site-1/Artifacts/Scripts/tools/bin:/home/dbtuser/.local/bin:/home/dbtuser/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin
    
  3. Run the dbtscale_vertical_pvc script with the --pods=db-replication-svc option and pass the Helm charts and the updated custom_values.yaml file as parameters:
    $ dbtscale_vertical_pvc --pods=db-replication-svc occndbtier custom_values_scale_pvc.yaml
    Sample output:
    2024-12-17T19:07:02Z INFO - HELM_CHARTS = occndbtier
    2024-12-17T19:07:02Z INFO - CUSTOM_VALUES_FILE = custom_values_scale_pvc.yaml
    dbtscale_vertical_pvc <24.3.0>
    Copyright (c) 2024 Oracle and/or its affiliates. All rights reserved.
    2024-12-17T19:07:02Z INFO - Started timer dbtscale_vertical_pvc: 1734462422
    2024-12-17T19:07:02Z INFO - Started timer PHASE 0: 1734462422
    2024-12-17T19:07:02Z INFO - ****************************************************************************************************
    2024-12-17T19:07:02Z INFO - BEGIN PHASE 0: Collect Site information
    2024-12-17T19:07:02Z INFO - ****************************************************************************************************
    2024-12-17T19:07:03Z INFO - Using IPv4: LOOPBACK_IP="127.0.0.1"
    2024-12-17T19:07:03Z INFO - DBTIER_NAMESPACE = occne-cndbtier
    ...
    2024-12-17T19:07:15Z INFO - PODS_TO_SCALE="db-replication-svc"
    2024-12-17T19:07:15Z INFO - POD_TYPE="db-replication-svc"
    2024-12-17T19:07:15Z INFO - REPL_LEADER_DEPLOY = mysql-cluster-site-1-site-2-replication-svc
    2024-12-17T19:07:15Z INFO - REPL_LEADER_PVC = pvc-site-1-site-2-replication-svc
    2024-12-17T19:07:15Z INFO - REPL_LEADER_POD = mysql-cluster-site-1-site-2-replication-svc-98476bwhhlp
    2024-12-17T19:07:15Z INFO - ****************************************************************************************************
    2024-12-17T19:07:15Z INFO - END PHASE 0: Collect Site information
    2024-12-17T19:07:15Z INFO - ****************************************************************************************************
    2024-12-17T19:07:15Z INFO - Ended timer PHASE 0: 1734462435
    2024-12-17T19:07:15Z INFO - PHASE 0 took: 00 hr. 00 min. 13 sec.
    2024-12-17T19:07:15Z INFO - Started timer PHASE 1: 1734462435
    2024-12-17T19:07:15Z INFO - ****************************************************************************************************
    2024-12-17T19:07:15Z INFO - BEGIN PHASE 1: Verify disk sizes are supported
    2024-12-17T19:07:15Z INFO - ****************************************************************************************************
    2024-12-17T19:07:15Z INFO - Current PVC sizes:
    NAME                                        STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   VOLUMEATTRIBUTESCLASS   AGE
    pvc-site-1-site-2-replication-svc   Bound    pvc-1212dd3c-5100-467d-ac54-0c569116a507   3Gi        RWO            standard       <unset>                 37h
    2024-12-17T19:07:15Z INFO - Requested PVC sizes from custom_values_scale_pvc.yaml:
        - name: lfg-site-1-site-2-replication-svc
          enabled: true
          pvc:
            name: pvc-site-1-site-2-replication-svc
            disksize: 4Gi
    2024-12-17T19:07:15Z INFO - Current and requested disk values:
    2024-12-17T19:07:15Z INFO - current_pvcsz=3Gi (3221225472), requested_pvcsz=4Gi (4294967296)
    2024-12-17T19:07:15Z INFO - Verifying current and requested disk sizes...
    2024-12-17T19:07:15Z INFO - Requested PVC values should not be equal to current - PASSED
    2024-12-17T19:07:15Z INFO - No requested value should be smaller than current - PASSED
    2024-12-17T19:07:15Z INFO - No requested value should be zero or negative - PASSED
    2024-12-17T19:07:15Z INFO - At least one requested value should be larger than its current value - PASSED
    2024-12-17T19:07:15Z INFO - ndbmysqld requested PVC values should equal to current - PASSED
    2024-12-17T19:07:15Z INFO - ndbappmysqld requested PVC values should equal to current - PASSED
    2024-12-17T19:07:15Z INFO - ndbmtd requested PVC values should equal to current - PASSED
    2024-12-17T19:07:15Z INFO - Verified current and requested disk sizes.
    2024-12-17T19:07:15Z INFO - ****************************************************************************************************
    2024-12-17T19:07:15Z INFO - END PHASE 1: Verify disk sizes are supported
    2024-12-17T19:07:15Z INFO - ****************************************************************************************************
    2024-12-17T19:07:15Z INFO - Ended timer PHASE 1: 1734462435
    2024-12-17T19:07:15Z INFO - PHASE 1 took: 00 hr. 00 min. 00 sec.
    2024-12-17T19:07:15Z INFO - Started timer PHASE 2: 1734462435
    2024-12-17T19:07:15Z INFO - ****************************************************************************************************
    2024-12-17T19:07:15Z INFO - BEGIN PHASE 2: Delete/scale down statefulset/deployment
    2024-12-17T19:07:15Z INFO - ****************************************************************************************************
    2024-12-17T19:07:15Z INFO - Scaling down deployment...
    2024-12-17T19:07:15Z INFO - REPL_LEADER_DEPLOY = mysql-cluster-site-1-site-2-replication-svc
    2024-12-17T19:07:15Z INFO - kubectl -n occne-cndbtier scale deployment mysql-cluster-site-1-site-2-replication-svc --replicas=0
    deployment.apps/mysql-cluster-site-1-site-2-replication-svc scaled
    2024-12-17T19:07:16Z INFO - Deployment scaled down
    2024-12-17T19:07:16Z INFO - ****************************************************************************************************
    2024-12-17T19:07:16Z INFO - END PHASE 2: Delete/scale down statefulset/deployment
    2024-12-17T19:07:16Z INFO - ****************************************************************************************************
    2024-12-17T19:07:16Z INFO - Ended timer PHASE 2: 1734462436
    2024-12-17T19:07:16Z INFO - PHASE 2 took: 00 hr. 00 min. 01 sec.
    2024-12-17T19:07:16Z INFO - Started timer PHASE 3: 1734462436
    2024-12-17T19:07:16Z INFO - ****************************************************************************************************
    2024-12-17T19:07:16Z INFO - BEGIN PHASE 3: Upgrade cnDBTier --no-hooks or patch db-replication-svc leader PVC
    2024-12-17T19:07:16Z INFO - ****************************************************************************************************
    2024-12-17T19:07:16Z INFO - Patch pvc-site-1-site-2-replication-svc with the new value (4294967296).
    persistentvolumeclaim/pvc-site-1-site-2-replication-svc patched
    2024-12-17T19:07:16Z INFO - Replication service PVC patched.
    2024-12-17T19:07:16Z INFO - ****************************************************************************************************
    2024-12-17T19:07:16Z INFO - END PHASE 3: Upgrade cnDBTier --no-hooks or patch db-replication-svc leader PVC
    2024-12-17T19:07:16Z INFO - ****************************************************************************************************
    2024-12-17T19:07:16Z INFO - Ended timer PHASE 3: 1734462436
    2024-12-17T19:07:16Z INFO - PHASE 3 took: 00 hr. 00 min. 00 sec.
    2024-12-17T19:07:16Z INFO - Started timer PHASE 4: 1734462436
    2024-12-17T19:07:16Z INFO - ****************************************************************************************************
    2024-12-17T19:07:16Z INFO - BEGIN PHASE 4: Delete PVCs and restart pods or wait for repl PV with new site to bound
    2024-12-17T19:07:16Z INFO - ****************************************************************************************************
    2024-12-17T19:07:16Z INFO - Wait for the new PV to bound to existing pvc.
    2024-12-17T19:07:19Z INFO - Waiting for condition: is_requested_size_and_bound...
    2024-12-17T19:07:19Z INFO - Condition occurred
    2024-12-17T19:07:19Z INFO - PV attached to pvc-site-1-site-2-replication-svc with requested size: 4294967296
    2024-12-17T19:07:19Z INFO - PV:
    NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                                                                                                                STORAGECLASS        VOLUMEATTRIBUTESCLASS   REASON   AGE
    pvc-1212dd3c-5100-467d-ac54-0c569116a507   4Gi        RWO            Delete           Bound    occne-cndbtier/pvc-site-1-site-2-replication-svc                                                                     standard            <unset>                          37h
    2024-12-17T19:07:19Z INFO - ****************************************************************************************************
    2024-12-17T19:07:19Z INFO - END PHASE 4: Delete PVCs and restart pods or wait for repl PV with new site to bound
    2024-12-17T19:07:19Z INFO - ****************************************************************************************************
    2024-12-17T19:07:19Z INFO - Ended timer PHASE 4: 1734462439
    2024-12-17T19:07:19Z INFO - PHASE 4 took: 00 hr. 00 min. 03 sec.
    2024-12-17T19:07:19Z INFO - Started timer PHASE 5: 1734462439
    2024-12-17T19:07:19Z INFO - ****************************************************************************************************
    2024-12-17T19:07:19Z INFO - BEGIN PHASE 5: Upgrade cnDBTier
    2024-12-17T19:07:19Z INFO - ****************************************************************************************************
    2024-12-17T19:07:19Z INFO - Upgrading cnDBTier (custom_values_scale_pvc.yaml)...
    Release "mysql-cluster" has been upgraded. Happy Helming!
    NAME: mysql-cluster
    LAST DEPLOYED: Tue Dec 17 19:07:20 2024
    NAMESPACE: occne-cndbtier
    STATUS: deployed
    REVISION: 16
    2024-12-17T19:09:19Z INFO - Helm upgrade returned
    2024-12-17T19:09:19Z INFO - ****************************************************************************************************
    2024-12-17T19:09:19Z INFO - END PHASE 5: Upgrade cnDBTier
    2024-12-17T19:09:19Z INFO - ****************************************************************************************************
    2024-12-17T19:09:19Z INFO - Ended timer PHASE 5: 1734462559
    2024-12-17T19:09:19Z INFO - PHASE 5 took: 00 hr. 02 min. 00 sec.
    2024-12-17T19:09:19Z INFO - Started timer PHASE 6: 1734462559
    2024-12-17T19:09:19Z INFO - ****************************************************************************************************
    2024-12-17T19:09:19Z INFO - BEGIN PHASE 6: Post-processing
    2024-12-17T19:09:19Z INFO - ****************************************************************************************************
    2024-12-17T19:09:19Z INFO - CURRENT_PVCS:
    NAME                                        STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   VOLUMEATTRIBUTESCLASS   AGE
    pvc-site-1-site-2-replication-svc   Bound    pvc-1212dd3c-5100-467d-ac54-0c569116a507   3Gi        RWO            standard       <unset>                 37h
    2024-12-17T19:09:19Z INFO - after_pvcs:
    NAME                                        STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   VOLUMEATTRIBUTESCLASS   AGE
    pvc-site-1-site-2-replication-svc   Bound    pvc-1212dd3c-5100-467d-ac54-0c569116a507   4Gi        RWO            standard       <unset>                 37h
    2024-12-17T19:09:19Z INFO - ****************************************************************************************************
    2024-12-17T19:09:19Z INFO - END PHASE 6: Post-processing
    2024-12-17T19:09:19Z INFO - ****************************************************************************************************
    2024-12-17T19:09:19Z INFO - Ended timer PHASE 6: 1734462559
    2024-12-17T19:09:19Z INFO - PHASE 6 took: 00 hr. 00 min. 00 sec.
    2024-12-17T19:09:19Z INFO - Ended timer dbtscale_vertical_pvc: 1734462559
    2024-12-17T19:09:19Z INFO - dbtscale_vertical_pvc took: 00 hr. 02 min. 17 sec.
    2024-12-17T19:09:19Z INFO - dbtscale_vertical_pvc completed successfully
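    Optionally, confirm that the replication service deployment is back to one ready replica and that its PVC reports the new size. The names below match the example output above:
    $ kubectl -n occne-cndbtier get deployment mysql-cluster-site-1-site-2-replication-svc
    $ kubectl -n occne-cndbtier get pvc pvc-site-1-site-2-replication-svc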
7.3.2.5 Scaling of ndbmgmd Pods
This section provides the procedures to vertically scale up ndbmgmd pods.

Note:

  • Before scaling the pods, ensure that the worker nodes have adequate resources to support scaling.
  • Before you proceed with the vertical scaling procedure for ndbmgmd, perform a Helm test and ensure that all the cnDBTier services are running smoothly.

Updating CPU and RAM

Perform the following steps to update CPU and RAM using the custom_values.yaml file:
  1. Configure the required CPU and memory values in the global section of the custom_values.yaml file.

    For example:

    mgm:
      ...
      resources:
        limits:
          cpu: 6
          memory: 12Gi
          ephemeral-storage: 1Gi
        requests:
          cpu: 6
          memory: 12Gi
          ephemeral-storage: 90Mi
  2. Upgrade cnDBTier by performing a Helm upgrade with the modified custom_values.yaml file. For more information about the upgrade procedure, see Oracle Communications Cloud Native Core, cnDBTier Installation, Upgrade, and Fault Recovery Guide.
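    After the upgrade, you can optionally verify that the new requests and limits are applied to a management pod. This is a generic check; the exact output format depends on your Kubernetes version:
    $ kubectl -n occne-cndbtier get pod ndbmgmd-0 -o jsonpath='{.spec.containers[*].resources}'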

Updating PVC

Updating PVC Using Helm Upgrade
Perform the following steps to update the PVC value using custom_values.yaml:
  1. Update the custom_values.yaml file with the new PVC values for ndbmgmd pods:
    global:
      mgm:
        ndbdisksize: 10Gi
  2. Delete the ndbmgmd StatefulSet while leaving its pods running (orphan the dependents):
    $ kubectl -n occne-cndbtier delete sts --cascade=orphan ndbmgmd
  3. Delete the ndbmgmd-1 pod and patch its PVCs with the new PVC value:
    1. Delete the ndbmgmd-1 pod:
      $ kubectl -n occne-cndbtier delete pod ndbmgmd-1

      Sample output:

      pod "ndbmgmd-1" deleted

    2. Patch the PVCs of ndbmgmd-1 pod with the new PVC values:
      $ kubectl -n occne-cndbtier patch -p '{ "spec": { "resources": { "requests": { "storage": "10Gi" }}}}' pvc pvc-ndbmgmd-ndbmgmd-1
      
  4. Wait for the PV to be resized and remain bound to the PVC. Run the following command to check if the PV reflects the updated PVC value:
    $ kubectl get pv | grep -w occne-cndbtier | grep ndbmgmd-1
    Sample output:
    pvc-8a58a7b4-25bb-4b32-a07a-728h8d792835   10Gi        RWO            Delete           Bound         occne-cndbtier/pvc-ndbmgmd-ndbmgmd-1
  5. Upgrade cnDBTier with the modified custom_values.yaml file:
    $ helm upgrade mysql-cluster occndbtier -f occndbtier/custom_values.yaml -n occne-cndbtier --no-hooks
    
  6. Repeat steps 2 through 5 for ndbmgmd-0.
  7. Because the cnDBTier Helm upgrade in step 5 is performed with the "--no-hooks" option, the updateStrategy of the cnDBTier StatefulSets is changed to OnDelete. Therefore, perform the cnDBTier upgrade one more time to restore the updateStrategy to RollingUpdate:
    helm -n ${OCCNE_NAMESPACE} upgrade ${OCCNE_RELEASE_NAME} occndbtier -f occndbtier/custom_values.yaml
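    Optionally, confirm that both ndbmgmd PVCs report the new size and that the updateStrategy has been restored:
    $ kubectl -n ${OCCNE_NAMESPACE} get pvc pvc-ndbmgmd-ndbmgmd-0 pvc-ndbmgmd-ndbmgmd-1
    $ kubectl -n ${OCCNE_NAMESPACE} get sts ndbmgmd -o jsonpath='{.spec.updateStrategy.type}'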
    
Updating PVC Using dbtscale_vertical_pvc
dbtscale_vertical_pvc is an automated script that updates the PVC size without manual intervention. Perform the following steps to update the PVC value using the custom_values.yaml file:
  1. Update the custom_values.yaml file with the new PVC values for the ndbmgmd pods. Ensure that you modify the global.mgm.ndbdisksize section only.

    For example:

    The following code block shows the old PVC values in the custom_values.yaml file:
    global:
      mgm:
        ndbdisksize: 2Gi
    The following code block shows the updated PVC values in the custom_values.yaml file:
    global:
      mgm:
        ndbdisksize: 3Gi
  2. Run the following commands to navigate to the Artifacts/Scripts/tools/ directory and source the source_me file:

    Note:

    • Ensure that the Artifacts/Scripts/tools/ directory contains the source_me file.
    • After running the script, enter the namespace of the cnDBTier cluster when prompted.
    $ cd Artifacts/Scripts/tools/
    $ source ./source_me
    Sample output:
    Enter cndbtier namespace: occne-cndbtier
    DBTIER_NAMESPACE = "occne-cndbtier"
    
    DBTIER_LIB=/home/dbtuser/occne/cluster/site-1/Artifacts/Scripts/tools/lib
    
    Adding /home/dbtuser/occne/cluster/site-1/Artifacts/Scripts/tools/bin to PATH
    Adding /home/dbtuser/occne/cluster/site-1/Artifacts/Scripts/tools/bin/rollbackscripts to PATH
    PATH=/home/dbtuser/occne/cluster/site-1/Artifacts/Scripts/tools/bin/rollbackscripts:/home/dbtuser/occne/cluster/site-1/Artifacts/Scripts/tools/bin:/home/dbtuser/.local/bin:/home/dbtuser/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin
    
  3. Run the dbtscale_vertical_pvc script with the --pods=ndbmgmd option and pass the Helm charts and the updated custom_values.yaml file as parameters:
    $ dbtscale_vertical_pvc --pods=ndbmgmd occndbtier custom_values_scale_pvc.yaml 
    Sample output:
    2024-12-17T22:07:40Z INFO - HELM_CHARTS = occndbtier
    2024-12-17T22:07:40Z INFO - CUSTOM_VALUES_FILE = custom_values_scale_pvc.yaml
    dbtscale_vertical_pvc <24.3.0>
    Copyright (c) 2024 Oracle and/or its affiliates. All rights reserved.
    2024-12-17T22:07:40Z INFO - Started timer dbtscale_vertical_pvc: 1734473260
    2024-12-17T22:07:40Z INFO - Started timer PHASE 0: 1734473260
    2024-12-17T22:07:40Z INFO - ****************************************************************************************************
    2024-12-17T22:07:40Z INFO - BEGIN PHASE 0: Collect Site information
    2024-12-17T22:07:40Z INFO - ****************************************************************************************************
    2024-12-17T22:07:40Z INFO - Using IPv4: LOOPBACK_IP="127.0.0.1"
    2024-12-17T22:07:40Z INFO - DBTIER_NAMESPACE = occne-cndbtier
    ...
    2024-12-17T22:07:51Z INFO - PODS_TO_SCALE="ndbmgmd"
    2024-12-17T22:07:51Z INFO - POD_TYPE="mgm"
    2024-12-17T22:07:51Z INFO - STS_TO_DELETE="ndbmgmd"
    2024-12-17T22:07:51Z INFO - PODS_TO_RESTART="MGM_PODS"
    2024-12-17T22:07:51Z INFO - REPL_LEADER_DEPLOY = mysql-cluster-site-1-site-2-replication-svc
    2024-12-17T22:07:51Z INFO - REPL_LEADER_PVC = pvc-site-1-site-2-replication-svc
    2024-12-17T22:07:51Z INFO - REPL_LEADER_POD = mysql-cluster-site-1-site-2-replication-svc-6f549c46qt9
    2024-12-17T22:07:51Z INFO - ****************************************************************************************************
    2024-12-17T22:07:51Z INFO - END PHASE 0: Collect Site information
    2024-12-17T22:07:51Z INFO - ****************************************************************************************************
    2024-12-17T22:07:51Z INFO - Ended timer PHASE 0: 1734473271
    2024-12-17T22:07:51Z INFO - PHASE 0 took: 00 hr. 00 min. 11 sec.
    2024-12-17T22:07:51Z INFO - Started timer PHASE 1: 1734473271
    2024-12-17T22:07:51Z INFO - ****************************************************************************************************
    2024-12-17T22:07:51Z INFO - BEGIN PHASE 1: Verify disk sizes are supported
    2024-12-17T22:07:51Z INFO - ****************************************************************************************************
    2024-12-17T22:07:51Z INFO - Current PVC sizes:
    NAME                                        STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   VOLUMEATTRIBUTESCLASS   AGE
    pvc-ndbmgmd-ndbmgmd-0                       Bound    pvc-b718e19d-c90a-4b5d-8d92-5c6f528d3643   2Gi        RWO            standard       <unset>                 40m
    pvc-ndbmgmd-ndbmgmd-1                       Bound    pvc-f5de0e40-170e-4e34-9ee0-7166466554ff   2Gi        RWO            standard       <unset>                 40m
    2024-12-17T22:07:51Z INFO - Requested PVC sizes from custom_values_scale_pvc.yaml:
      mgm:
        ndbdisksize: 3Gi
    2024-12-17T22:07:51Z INFO - Current and requested disk values:
    2024-12-17T22:07:51Z INFO - current_pvcsz=2Gi (2147483648), requested_pvcsz=3Gi (3221225472)
    2024-12-17T22:07:51Z INFO - Verifying current and requested disk sizes...
    2024-12-17T22:07:51Z INFO - Requested PVC values should not be equal to current - PASSED
    2024-12-17T22:07:51Z INFO - No requested value should be smaller than current - PASSED
    2024-12-17T22:07:51Z INFO - No requested value should be zero or negative - PASSED
    2024-12-17T22:07:51Z INFO - At least one requested value should be larger than its current value - PASSED
    2024-12-17T22:07:51Z INFO - db-replication-svc requested PVC values should equal to current - PASSED
    2024-12-17T22:07:52Z INFO - ndbmysqld requested PVC values should equal to current - PASSED
    2024-12-17T22:07:52Z INFO - ndbappmysqld requested PVC values should equal to current - PASSED
    2024-12-17T22:07:52Z INFO - ndbmtd requested PVC values should equal to current - PASSED
    2024-12-17T22:07:52Z INFO - Verified current and requested disk sizes.
    2024-12-17T22:07:52Z INFO - ****************************************************************************************************
    2024-12-17T22:07:52Z INFO - END PHASE 1: Verify disk sizes are supported
    2024-12-17T22:07:52Z INFO - ****************************************************************************************************
    2024-12-17T22:07:52Z INFO - Ended timer PHASE 1: 1734473272
    2024-12-17T22:07:52Z INFO - PHASE 1 took: 00 hr. 00 min. 01 sec.
    2024-12-17T22:07:52Z INFO - Started timer PHASE 2: 1734473272
    2024-12-17T22:07:52Z INFO - ****************************************************************************************************
    2024-12-17T22:07:52Z INFO - BEGIN PHASE 2: Delete/scale down statefulset/deployment
    2024-12-17T22:07:52Z INFO - ****************************************************************************************************
    2024-12-17T22:07:52Z INFO - Deleting STS...
    2024-12-17T22:07:52Z INFO - kubectl -n occne-cndbtier delete sts --cascade=orphan "ndbmgmd"
    statefulset.apps "ndbmgmd" deleted
    2024-12-17T22:07:52Z INFO - STS deleted
    2024-12-17T22:07:52Z INFO - ****************************************************************************************************
    2024-12-17T22:07:52Z INFO - END PHASE 2: Delete/scale down statefulset/deployment
    2024-12-17T22:07:52Z INFO - ****************************************************************************************************
    2024-12-17T22:07:52Z INFO - Ended timer PHASE 2: 1734473272
    2024-12-17T22:07:52Z INFO - PHASE 2 took: 00 hr. 00 min. 00 sec.
    2024-12-17T22:07:52Z INFO - Started timer PHASE 3: 1734473272
    2024-12-17T22:07:52Z INFO - ****************************************************************************************************
    2024-12-17T22:07:52Z INFO - BEGIN PHASE 3: Upgrade cnDBTier --no-hooks or patch db-replication-svc leader PVC
    2024-12-17T22:07:52Z INFO - ****************************************************************************************************
    2024-12-17T22:07:52Z INFO - Upgrading cnDBTier (custom_values_scale_pvc.yaml)...
    Release "mysql-cluster" has been upgraded. Happy Helming!
    NAME: mysql-cluster
    LAST DEPLOYED: Tue Dec 17 22:07:52 2024
    NAMESPACE: occne-cndbtier
    STATUS: deployed
    REVISION: 15
    2024-12-17T22:07:55Z INFO - Helm upgrade returned
    2024-12-17T22:07:55Z INFO - ****************************************************************************************************
    2024-12-17T22:07:56Z INFO - END PHASE 3: Upgrade cnDBTier --no-hooks or patch db-replication-svc leader PVC
    2024-12-17T22:07:56Z INFO - ****************************************************************************************************
    2024-12-17T22:07:56Z INFO - Ended timer PHASE 3: 1734473276
    2024-12-17T22:07:56Z INFO - PHASE 3 took: 00 hr. 00 min. 04 sec.
    2024-12-17T22:07:56Z INFO - Started timer PHASE 4: 1734473276
    2024-12-17T22:07:56Z INFO - ****************************************************************************************************
    2024-12-17T22:07:56Z INFO - BEGIN PHASE 4: Delete PVCs and restart pods or wait for repl PV with new site to bound
    2024-12-17T22:07:56Z INFO - ****************************************************************************************************
    2024-12-17T22:07:56Z INFO - Restart pods after deleting their PVCs...
    2024-12-17T22:07:56Z INFO - pods_to_delete = (ndbmgmd-0 ndbmgmd-1)
    2024-12-17T22:07:56Z INFO - Deleting PVCs for ndbmgmd-0 (pvc-ndbmgmd-ndbmgmd-0)...
    2024-12-17T22:07:56Z INFO - deleting pod ndbmgmd-0 ...
    persistentvolumeclaim "pvc-ndbmgmd-ndbmgmd-0" deleted
    pod "ndbmgmd-0" deleted
    2024-12-17T22:08:07Z INFO - Waiting for condition: is_pod ndbmgmd-0...
    2024-12-17T22:08:07Z INFO - Condition occurred
    2024-12-17T22:08:29Z INFO - Waiting for condition: is_pod_ready ndbmgmd-0...
    2024-12-17T22:08:29Z INFO - Condition occurred
    2024-12-17T22:08:29Z INFO - Deleting PVCs for ndbmgmd-1 (pvc-ndbmgmd-ndbmgmd-1)...
    2024-12-17T22:08:29Z INFO - deleting pod ndbmgmd-1 ...
    persistentvolumeclaim "pvc-ndbmgmd-ndbmgmd-1" deleted
    pod "ndbmgmd-1" deleted
    2024-12-17T22:08:37Z INFO - Waiting for condition: is_pod ndbmgmd-1...
    2024-12-17T22:08:37Z INFO - Condition occurred
    2024-12-17T22:08:59Z INFO - Waiting for condition: is_pod_ready ndbmgmd-1...
    2024-12-17T22:08:59Z INFO - Condition occurred
    2024-12-17T22:08:59Z INFO - kubectl -n occne-cndbtier get pod ndbmgmd-0 ndbmgmd-1
    NAME        READY   STATUS    RESTARTS   AGE
    ndbmgmd-0   2/2     Running   0          52s
    ndbmgmd-1   2/2     Running   0          22s
    2024-12-17T22:09:00Z INFO - kubectl -n occne-cndbtier get pvc pvc-ndbmgmd-ndbmgmd-0 pvc-ndbmgmd-ndbmgmd-1
    NAME                    STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   VOLUMEATTRIBUTESCLASS   AGE
    pvc-ndbmgmd-ndbmgmd-0   Bound    pvc-9bb687b0-3bae-42d5-a3f0-5cf0009cf246   3Gi        RWO            standard       <unset>                 53s
    pvc-ndbmgmd-ndbmgmd-1   Bound    pvc-bc40c80c-39b1-4818-a1db-ef204ff55c1f   3Gi        RWO            standard       <unset>                 23s
    2024-12-17T22:09:00Z INFO - Pods restarted and their PVCs recreated.
    2024-12-17T22:09:00Z INFO - ****************************************************************************************************
    2024-12-17T22:09:00Z INFO - END PHASE 4: Delete PVCs and restart pods or wait for repl PV with new site to bound
    2024-12-17T22:09:00Z INFO - ****************************************************************************************************
    2024-12-17T22:09:00Z INFO - Ended timer PHASE 4: 1734473340
    2024-12-17T22:09:00Z INFO - PHASE 4 took: 00 hr. 01 min. 04 sec.
    2024-12-17T22:09:00Z INFO - Started timer PHASE 5: 1734473340
    2024-12-17T22:09:00Z INFO - ****************************************************************************************************
    2024-12-17T22:09:00Z INFO - BEGIN PHASE 5: Upgrade cnDBTier
    2024-12-17T22:09:00Z INFO - ****************************************************************************************************
    2024-12-17T22:09:00Z INFO - Upgrading cnDBTier (custom_values_scale_pvc.yaml)...
    Release "mysql-cluster" has been upgraded. Happy Helming!
    NAME: mysql-cluster
    LAST DEPLOYED: Tue Dec 17 22:09:00 2024
    NAMESPACE: occne-cndbtier
    STATUS: deployed
    REVISION: 16
    2024-12-17T22:11:04Z INFO - Helm upgrade returned
    2024-12-17T22:11:04Z INFO - ****************************************************************************************************
    2024-12-17T22:11:04Z INFO - END PHASE 5: Upgrade cnDBTier
    2024-12-17T22:11:04Z INFO - ****************************************************************************************************
    2024-12-17T22:11:04Z INFO - Ended timer PHASE 5: 1734473464
    2024-12-17T22:11:04Z INFO - PHASE 5 took: 00 hr. 02 min. 04 sec.
    2024-12-17T22:11:04Z INFO - Started timer PHASE 6: 1734473464
    2024-12-17T22:11:04Z INFO - ****************************************************************************************************
    2024-12-17T22:11:04Z INFO - BEGIN PHASE 6: Post-processing
    2024-12-17T22:11:04Z INFO - ****************************************************************************************************
    2024-12-17T22:11:04Z INFO - CURRENT_PVCS:
    NAME                                        STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   VOLUMEATTRIBUTESCLASS   AGE
    pvc-ndbmgmd-ndbmgmd-0                       Bound    pvc-b718e19d-c90a-4b5d-8d92-5c6f528d3643   2Gi        RWO            standard       <unset>                 40m
    pvc-ndbmgmd-ndbmgmd-1                       Bound    pvc-f5de0e40-170e-4e34-9ee0-7166466554ff   2Gi        RWO            standard       <unset>                 40m
    2024-12-17T22:11:04Z INFO - after_pvcs:
    NAME                                        STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   VOLUMEATTRIBUTESCLASS   AGE
    pvc-ndbmgmd-ndbmgmd-0                       Bound    pvc-9bb687b0-3bae-42d5-a3f0-5cf0009cf246   3Gi        RWO            standard       <unset>                 2m57s
    pvc-ndbmgmd-ndbmgmd-1                       Bound    pvc-bc40c80c-39b1-4818-a1db-ef204ff55c1f   3Gi        RWO            standard       <unset>                 2m27s
    2024-12-17T22:11:04Z INFO - ****************************************************************************************************
    2024-12-17T22:11:04Z INFO - END PHASE 6: Post-processing
    2024-12-17T22:11:04Z INFO - ****************************************************************************************************
    2024-12-17T22:11:04Z INFO - Ended timer PHASE 6: 1734473464
    2024-12-17T22:11:04Z INFO - PHASE 6 took: 00 hr. 00 min. 00 sec.
    2024-12-17T22:11:04Z INFO - Ended timer dbtscale_vertical_pvc: 1734473464
    2024-12-17T22:11:04Z INFO - dbtscale_vertical_pvc took: 00 hr. 03 min. 24 sec.
    2024-12-17T22:11:04Z INFO - dbtscale_vertical_pvc completed successfully
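    Optionally, confirm that both management pods rejoined the cluster after the scale-up. The following sketch assumes that the ndb_mgm client is available inside the ndbmgmd container:
    $ kubectl -n occne-cndbtier exec -it ndbmgmd-0 -- ndb_mgm -e show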

7.4 Managing TLS

This section provides the procedures to enable TLS and modify the cnDBTier certificates that are used to establish an encrypted connection for georeplication and for communication with NFs.

7.4.1 Managing TLS for Georeplication

This section provides the procedures to enable TLS for georeplication and modify cnDBTier certificates that are used to establish an encrypted connection between georeplication sites.

7.4.1.1 Enabling TLS for Georeplication

Perform the following steps to enable TLS for georeplication between cnDBTier sites.

  1. Create all the required secrets using the "Creating Secret for TLS Certificates" procedure in Oracle Communications Cloud Native Core, cnDBTier Installation, Upgrade, and Fault Recovery Guide.
  2. Enable TLS feature in the custom_values.yaml file:
    global:
      tls:    
        enable: true
  3. Provide all the required certificates (such as the CA certificate, client certificate, and server certificate) for the respective ndbmysqld pods in the custom_values.yaml file:
    tls:
      enable: true
      caCertificate: "<ca certificate file name>"
      tlsversion: "TLSv1.3"
      tlsMode: "<TLS mode>"
      ciphers:
        - TLS_AES_128_GCM_SHA256
        - TLS_AES_256_GCM_SHA384
        - TLS_CHACHA20_POLY1305_SHA256
        - TLS_AES_128_CCM_SHA256
      certificates:
        - name: ndbmysqld-0
          serverCertificate: "<server certificate name>"
          serverCertificateKey: "<server key name>"      
          clientCertificate: "<client certificate name>"
          clientCertificateKey: "<client key name>"
        - name: ndbmysqld-1
          serverCertificate: "<server certificate name>"
          serverCertificateKey: "<server key name>"      
          clientCertificate: "<client certificate name>"
          clientCertificateKey: "<client key name>"
    For example:
    tls:
      enable: true
      caCertificate: "combine-ca.pem"
      tlsversion: "TLSv1.3"
      tlsMode: "VERIFY_CA"
      ciphers:
        - TLS_AES_128_GCM_SHA256
        - TLS_AES_256_GCM_SHA384
        - TLS_CHACHA20_POLY1305_SHA256
        - TLS_AES_128_CCM_SHA256
      certificates:
        - name: ndbmysqld-0
          serverCertificate: "server1-cert.pem"
          serverCertificateKey: "server1-key.pem"      
          clientCertificate: "client1-cert.pem"
          clientCertificateKey: "client1-key.pem"
        - name: ndbmysqld-1
          serverCertificate: "server1-cert.pem"
          serverCertificateKey: "server1-key.pem"      
          clientCertificate: "client1-cert.pem"
          clientCertificateKey: "client1-key.pem"
  4. Follow the installation procedure in Oracle Communications Cloud Native Core, cnDBTier Installation, Upgrade, and Fault Recovery Guide to perform a Helm install with the updated custom_values.yaml file.
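After the installation completes, one way to spot-check that TLS is configured on a replication SQL pod is to query its TLS-related system variables. The following command is a sketch; substitute the namespace and root password for your deployment:
  $ kubectl -n occne-cndbtier exec -it ndbmysqld-0 -- mysql -h127.0.0.1 -uroot -p<root password> -e "SHOW VARIABLES LIKE 'tls_version'; SHOW VARIABLES LIKE 'ssl_ca';"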
7.4.1.2 Modifying cnDBTier Certificates to Establish TLS Between Georeplication Sites
This section provides the procedure to modify cnDBTier certificates to establish an encrypted connection between georeplication sites using TLS.

Note:

  • If you update cnDBTier certificates in one site, ensure that you update the relevant Certificate Authority (CA) certificates in all the mate sites. This ensures that the new or switchover replication doesn't break due to incorrect certificates.
  • While recreating the secrets, ensure that the file names of the updated certificates are same as the old certificates.
  1. Get the list of existing CA certificates of the cnDBTier cluster.

    Note:

    If the file name has a dot either in the extension or in its name, use an escape character \ before the dot.
    $ kubectl get secret  cndbtier-trust-store-secret  -n <cndbtier_namespace>  -o jsonpath="{.data.<ca_certificate_file_name>}" | base64 --decode
    For example:
    $ kubectl get secret  cndbtier-trust-store-secret  -n occne-cndbtier -o jsonpath="{.data.ca\.pem}" | base64 --decode
  2. If you want to modify the secret of the existing CA certificates in the cnDBTier cluster, delete the secret using the following command. Otherwise, skip this step and move to Step 4.
    $ kubectl -n <namespace> delete secret cndbtier-trust-store-secret
    where, <namespace> is the name of the cnDBTier namespace.
    For example:
    $ kubectl -n occne-cndbtier delete secret cndbtier-trust-store-secret
  3. If you deleted the secret of the existing CA certificates in the previous step, then create the cndbtier-trust-store-secret secret with the new CA certificates. For more information, see the "Creating Secret for TLS Certificates" section in Oracle Communications Cloud Native Core, cnDBTier Installation, Upgrade, and Fault Recovery Guide.
    $ kubectl -n ${OCCNE_NAMESPACE} create secret generic cndbtier-trust-store-secret --from-file=<ca_certificate>
    For example:
    $ kubectl -n ${OCCNE_NAMESPACE} create secret generic cndbtier-trust-store-secret --from-file=combine-ca.pem
  4. Perform the following steps to get the existing server certificate and the server key of the cnDBTier cluster.

    Note:

    If the file names have dots either in the extension or in its name, use an escape character \ before the dots.
    1. Run the following command to get the server certificate:
      $ kubectl get secret  cndbtier-server-secret  -n <cndbtier_namespace>  -o jsonpath="{.data.<server_certificate_file_name>}" | base64 --decode
      For example:
      $ kubectl get secret  cndbtier-server-secret  -n occne-cndbtier -o jsonpath="{.data.server-cert\.pem}" | base64 --decode
    2. Run the following command to get the server certificate key:
      $ kubectl get secret  cndbtier-server-secret  -n <cndbtier_namespace>  -o jsonpath="{.data.<server_certificate_key_file_name>}" | base64 --decode
      For example:
      $ kubectl get secret  cndbtier-server-secret  -n occne-cndbtier -o jsonpath="{.data.server-key\.pem}" | base64 --decode
  5. If you want to modify the secret of the existing server certificate and server key secret, delete the secret using the following command. Otherwise, skip this step and move to Step 7.
    $ kubectl -n <cndbtier_namespace> delete secret cndbtier-server-secret
    For example:
    $ kubectl -n occne-cndbtier delete secret cndbtier-server-secret
  6. If you deleted the secret of the existing server certificate and server key in the previous step, then create the cndbtier-server-secret secret with the new server certificate and server key. For more information, see the "Creating Secrets for TLS Certificates" section in Oracle Communications Cloud Native Core, cnDBTier Installation, Upgrade, and Fault Recovery Guide.
    $ kubectl -n ${OCCNE_NAMESPACE} create secret generic cndbtier-server-secret --from-file=<server_certificate> --from-file=<server_certificate_key>
    For example:
    $ kubectl -n ${OCCNE_NAMESPACE} create secret generic cndbtier-server-secret --from-file=server-cert.pem --from-file=server-key.pem
  7. [Optional] Perform the following steps to get the existing client certificate and client key of the cnDBTier cluster. You can skip this step if you know the list of certificates.

    Note:

    If the file names have dots either in the extension or in its name, use an escape character \ before the dots.
    1. Run the following command to get the client certificate:
      $ kubectl get secret  cndbtier-client-secret  -n <cndbtier_namespace>  -o jsonpath="{.data.<client_certificate_file_name>}" | base64 --decode
      For example:
      $ kubectl get secret  cndbtier-client-secret  -n occne-cndbtier -o jsonpath="{.data.client-cert\.pem}" | base64 --decode
    2. Run the following command to get the client certificate key:
      $ kubectl get secret  cndbtier-client-secret  -n <cndbtier_namespace>  -o jsonpath="{.data.<client_certificate_key_file_name>}" | base64 --decode
      For example:
      $ kubectl get secret  cndbtier-client-secret  -n occne-cndbtier -o jsonpath="{.data.client-key\.pem}" | base64 --decode
  8. If you want to modify the secret of the existing client certificate and client key secret, delete the secret using the following command. Otherwise, skip this step and move to Step 10.
    $ kubectl -n <cndbtier_namespace> delete secret cndbtier-client-secret
    For example:
    $ kubectl -n occne-cndbtier delete secret cndbtier-client-secret
  9. If you deleted the secret of the existing client certificate and client key in the previous step, then create the cndbtier-client-secret secret with the new client certificate and client key. For more information, see the "Creating Secrets for TLS Certificates" section in Oracle Communications Cloud Native Core, cnDBTier Installation, Upgrade, and Fault Recovery Guide.
    $ kubectl -n ${OCCNE_NAMESPACE} create secret generic cndbtier-client-secret --from-file=<client_certificate> --from-file=<client_certificate_key>
    For example:
    $ kubectl -n ${OCCNE_NAMESPACE} create secret generic cndbtier-client-secret --from-file=client-cert.pem --from-file=client-key.pem
  10. If you modified the CA certificates in the current site, then you must update the certificates in other sites too. Perform Steps 1 through 9 to modify the relevant certificates on all the mate sites.
  11. Wait for the new certificates of the recreated secret to be automatically mounted to the replication SQL pods by Kubernetes.

    Note:

    This could take around 30 seconds. However, the wait time depends on the Kubernetes configuration and varies from environment to environment.
  12. Perform the following steps to reload the TLS context on each replication SQL pod in the cnDBTier cluster:
    1. Check the number of replication SQL pods present in the cnDBTier cluster:
      $ kubectl get pods --namespace=<namespace of cnDBTier cluster> | grep ndbmysql
      For example:
      $ kubectl get pods --namespace=occne-cndbtier | grep ndbmysql
      Sample output:
      ndbmysqld-0                                                    3/3     Running   0          103m
      ndbmysqld-1                                                    3/3     Running   0          4h39m
      ndbmysqld-2                                                    3/3     Running   0          4h37m
      ndbmysqld-3                                                    3/3     Running   0          4h36m
      ndbmysqld-4                                                    3/3     Running   0          4h34m
      ndbmysqld-5                                                    3/3     Running   0          4h32m
    2. Reload the TLS context on each replication SQL pod:
      $ kubectl exec -it <replication SQL pod> --namespace=<namespace of cnDBTier cluster> -- mysql -h127.0.0.1 -uroot -p<root password> -e "ALTER INSTANCE RELOAD TLS;"  
      For example:
      • Run the following command to reload the TLS context on ndbmysqld-0:
        $ kubectl exec -it ndbmysqld-0 --namespace=occne-cndbtier -- mysql -h127.0.0.1 -uroot -pNextGenCne -e "ALTER INSTANCE RELOAD TLS;"
        Sample output:
        Defaulted container "mysqlndbcluster" out of: mysqlndbcluster, init-sidecar, db-infra-monitor-svc
        mysql: [Warning] Using a password on the command line interface can be insecure.
      • Run the following command to reload the TLS context on ndbmysqld-1:
        $ kubectl exec -it ndbmysqld-1 --namespace=occne-cndbtier -- mysql -h127.0.0.1 -uroot -pNextGenCne -e "ALTER INSTANCE RELOAD TLS;"
        Sample output:
        Defaulted container "mysqlndbcluster" out of: mysqlndbcluster, init-sidecar, db-infra-monitor-svc
        mysql: [Warning] Using a password on the command line interface can be insecure.
      • Run the following command to reload the TLS context on ndbmysqld-2:
        $ kubectl exec -it ndbmysqld-2 --namespace=occne-cndbtier -- mysql -h127.0.0.1 -uroot -pNextGenCne -e "ALTER INSTANCE RELOAD TLS;"
        Sample output:
        Defaulted container "mysqlndbcluster" out of: mysqlndbcluster, init-sidecar, db-infra-monitor-svc
        mysql: [Warning] Using a password on the command line interface can be insecure.
      • Run the following command to reload the TLS context on ndbmysqld-3:
        $ kubectl exec -it ndbmysqld-3 --namespace=occne-cndbtier -- mysql -h127.0.0.1 -uroot -pNextGenCne -e "ALTER INSTANCE RELOAD TLS;"
        Sample output:
        Defaulted container "mysqlndbcluster" out of: mysqlndbcluster, init-sidecar, db-infra-monitor-svc
        mysql: [Warning] Using a password on the command line interface can be insecure.
      • Run the following command to reload the TLS context on ndbmysqld-4:
        $ kubectl exec -it ndbmysqld-4 --namespace=occne-cndbtier -- mysql -h127.0.0.1 -uroot -pNextGenCne -e "ALTER INSTANCE RELOAD TLS;"
        Sample output:
        Defaulted container "mysqlndbcluster" out of: mysqlndbcluster, init-sidecar, db-infra-monitor-svc
        mysql: [Warning] Using a password on the command line interface can be insecure.
      • Run the following command to reload the TLS context on ndbmysqld-5:
        $ kubectl exec -it ndbmysqld-5 --namespace=occne-cndbtier -- mysql -h127.0.0.1 -uroot -pNextGenCne -e "ALTER INSTANCE RELOAD TLS;"
        Sample output:
        Defaulted container "mysqlndbcluster" out of: mysqlndbcluster, init-sidecar, db-infra-monitor-svc
        mysql: [Warning] Using a password on the command line interface can be insecure.
  13. Repeat Step 12 on all the mate sites where you have updated the certificates.
  14. Perform the following steps to run a rollout restart of the ndbmysqld pods in the cnDBTier cluster:
    1. Run the following command to get the statefulset name of the ndbmysqld pod:
      $ kubectl get statefulset --namespace=<namespace of cnDBTier cluster>
      For example:
      $ kubectl get statefulset --namespace=occne-cndbtier
      Sample output:
      NAME           READY   AGE
      ndbappmysqld   2/2     27h
      ndbmgmd        2/2     27h
      ndbmtd         2/2     27h
      ndbmysqld      2/2     27h
    2. Perform a rollout restart of the ndbmysqld pod:
      $ kubectl rollout restart statefulset <statefulset name of ndbmysqld> --namespace=<namespace of cnDBTier cluster>
      For example:
      $ kubectl rollout restart statefulset ndbmysqld  --namespace=occne-cndbtier
      Sample output:
      statefulset.apps/ndbmysqld restarted
    3. Wait for the rollout restart of the ndbmysqld pod to complete and check the status:
      $ kubectl rollout status statefulset <statefulset name of ndbmysqld> --namespace=<namespace of cnDBTier cluster>
      For example:
      $ kubectl rollout status statefulset ndbmysqld  --namespace=occne-cndbtier
      Sample output:
      Waiting for 1 pods to be ready...
      waiting for statefulset rolling update to complete 1 pods at revision ndbmysqld-7c6b9c9f84...
      Waiting for 1 pods to be ready...
      Waiting for 1 pods to be ready...
      statefulset rolling update complete 2 pods at revision ndbmysqld-7c6b9c9f84...
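    Optionally, after the rollout restart completes, you can verify that the restarted georeplication SQL pods reconnected to their replication sources. This is a minimal sketch assuming ndbmysqld-0, the example namespace, and the example root password used earlier in this procedure; on pods that host the ACTIVE replication channel, both values are expected to be Yes:
      # Print only the replica I/O and SQL thread states
      $ kubectl exec ndbmysqld-0 --namespace=occne-cndbtier -- mysql -h127.0.0.1 -uroot -pNextGenCne -e "SHOW REPLICA STATUS\G" | grep -E "Replica_(IO|SQL)_Running:"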
  15. Repeat Step 14 on all the mate sites where you have updated the certificates.

7.4.2 Managing TLS for Communication with NFs

This section provides the procedures to enable TLS for communication with NFs and modify cnDBTier certificates that are used to establish a TLS connection between cnDBTier and NFs.

7.4.2.1 Enabling TLS for Communication with NFs

Perform the following steps to enable TLS for communication between cnDBTier and NFs.

  1. Create all the required secrets using the "Creating Secret for TLS Certificates" procedure in Oracle Communications Cloud Native Core, cnDBTier Installation, Upgrade, and Fault Recovery Guide.
  2. Set the global/ndbappTLS/enable parameter to true in the custom_values.yaml file to enable the TLS feature for NF communication:
    global:
      ndbappTLS:    
        enable: true
  3. Provide all the required certificates (such as the CA certificate, client certificate, and server certificate) for the application SQL pods in the custom_values.yaml file for the cnDBTier site:
    ndbappTLS:
      enable: true
      caSecret: cndbtier-ndbapp-trust-store-secret
      serverSecret: cndbtier-ndbapp-server-secret
      tlsversion: "TLSv1.3"
      caCertificate: "<ca certificate file name>"
      serverCertificate: "<server certificate name>"
      serverCertificateKey: "<server key name>"
    For example:
    ndbappTLS:
      enable: true
      caSecret: cndbtier-ndbapp-trust-store-secret
      serverSecret: cndbtier-ndbapp-server-secret
      tlsversion: "TLSv1.3"
      caCertificate: "combine-ca.pem"
      serverCertificate: "server1-cert.pem"
      serverCertificateKey: "server1-key.pem"
  4. Follow the installation procedure in Oracle Communications Cloud Native Core, cnDBTier Installation, Upgrade, and Fault Recovery Guide to perform a Helm install with the updated custom_values.yaml file.
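    For reference, the following is a minimal sketch of the Helm command with the updated file, assuming the release name mysql-cluster, the chart directory occndbtier, and the namespace occne-cndbtier used elsewhere in this document; refer to the installation guide for the complete procedure and options:
    $ helm install mysql-cluster occndbtier --namespace occne-cndbtier -f occndbtier/custom_values.yaml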
7.4.2.2 Modifying cnDBTier Certificates to Establish TLS for Communication with NFs
This section provides the procedure to modify cnDBTier certificates to establish an encrypted connection between cnDBTier and NFs using TLS.

Note:

  • If you update the cnDBTier certificates, ensure that you update the relevant Certificate Authority (CA) certificates on all relevant NFs. This ensures that NF traffic to cnDBTier is not disrupted due to incorrect certificates.
  • While recreating the secrets, ensure that the file names of the updated certificates are the same as those of the old certificates.
  1. Get the list of the existing CA certificates that are configured for the application SQL pod of the cnDBTier cluster:

    Note:

    If the file name has a dot either in the extension or in its name, use an escape character \ before the dot.
    $ kubectl get secret cndbtier-ndbapp-trust-store-secret  -n <cndbtier_namespace>  -o jsonpath="{.data.<ca_certificate_file_name>}" | base64 --decode
    For example:
    $ kubectl get secret cndbtier-ndbapp-trust-store-secret  -n occne-cndbtier -o jsonpath="{.data.ca\.pem}" | base64 --decode
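    If you also want to check the validity period of the configured CA certificate (for example, to track an upcoming expiry), you can pipe the decoded certificate to openssl. This is a minimal sketch assuming the example secret, namespace, and file name shown above; if the file bundles multiple CA certificates, only the first certificate is printed:
    # Print the subject and expiry date of the (first) CA certificate in the secret
    $ kubectl get secret cndbtier-ndbapp-trust-store-secret -n occne-cndbtier -o jsonpath="{.data.ca\.pem}" | base64 --decode | openssl x509 -noout -subject -enddate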
  2. If you want to modify the secret of the existing CA certificates in the cnDBTier cluster, delete the existing CA certificate(s) secret configured for the application SQL pod. Otherwise, skip this step and move to Step 4.
    $ kubectl -n <namespace> delete secret cndbtier-ndbapp-trust-store-secret
    where, <namespace> is the name of the cnDBTier namespace.
    For example:
    $ kubectl -n occne-cndbtier delete secret cndbtier-ndbapp-trust-store-secret
  3. If you deleted the secret of the existing CA certificates in the previous step, then create the cndbtier-ndbapp-trust-store-secret secret with the new CA certificates. For more information, see the "Create Secret for TLS Certificates" section in Oracle Communications Cloud Native Core, cnDBTier Installation, Upgrade, and Fault Recovery Guide.
    $ kubectl -n ${OCCNE_NAMESPACE} create secret generic cndbtier-ndbapp-trust-store-secret --from-file=<ca_certificate>
    For example:
    $ kubectl -n ${OCCNE_NAMESPACE} create secret generic cndbtier-ndbapp-trust-store-secret --from-file=combine-ca.pem
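    To confirm that the recreated secret contains the expected file name and data size before you proceed, you can describe it; a minimal sketch using the example secret name:
    $ kubectl -n ${OCCNE_NAMESPACE} describe secret cndbtier-ndbapp-trust-store-secret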
  4. Perform the following steps to get the existing server certificate and the server key of the cnDBTier cluster.

    Note:

    If the file names have dots either in the extension or in the name, use an escape character \ before the dots.
    1. Run the following command to get the server certificate:
      $ kubectl get secret  cndbtier-ndbapp-server-secret  -n <cndbtier_namespace>  -o jsonpath="{.data.<server_certificate_file_name>}" | base64 --decode
      
      For example:
      $ kubectl get secret  cndbtier-ndbapp-server-secret  -n occne-cndbtier -o jsonpath="{.data.server-cert\.pem}" | base64 --decode
      
    2. Run the following command to get the server certificate key:
      $ kubectl get secret  cndbtier-ndbapp-server-secret  -n <cndbtier_namespace>  -o jsonpath="{.data.<server_certificate_key_file_name>}" | base64 --decode
      
      For example:
      $ kubectl get secret  cndbtier-ndbapp-server-secret  -n occne-cndbtier -o jsonpath="{.data.server-key\.pem}" | base64 --decode
      
  5. If you want to modify the existing server certificate and server key secret, delete the secret using the following command. Otherwise, skip this step and move to Step 7.
    $ kubectl -n <cndbtier_namespace> delete secret cndbtier-ndbapp-server-secret
    
    For example:
    $ kubectl -n occne-cndbtier delete secret cndbtier-ndbapp-server-secret
    
  6. If you deleted the secret of the existing server certificate and server key in the previous step, then create the cndbtier-ndbapp-server-secret secret with the new server certificate and server key. For more information, see the "Creating Secrets for TLS Certificates" section in Oracle Communications Cloud Native Core, cnDBTier Installation, Upgrade, and Fault Recovery Guide.
    $ kubectl -n ${OCCNE_NAMESPACE} create secret generic cndbtier-ndbapp-server-secret --from-file=<server_certificate> --from-file=<server_certificate_key>
    For example:
    $ kubectl -n ${OCCNE_NAMESPACE} create secret generic cndbtier-ndbapp-server-secret --from-file=server-cert.pem --from-file=server-key.pem
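    To confirm that the server certificate and key placed in the recreated secret belong together, you can compare their moduli. This is a minimal sketch that assumes RSA key material and the example file names; the two digests must be identical:
    # Digest of the certificate modulus
    $ openssl x509 -noout -modulus -in server-cert.pem | openssl md5
    # Digest of the private key modulus
    $ openssl rsa -noout -modulus -in server-key.pem | openssl md5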
  7. If you modified the CA certificates in the current site, then you must update the certificates in other NFs too. Perform Steps 1 through 6 to modify the relevant certificates on all relevant NFs.
  8. Wait for the new certificates of the recreated secret to be automatically mounted to the application SQL pods by Kubernetes.

    Note:

    This could take around 30 seconds. However, the wait time depends on the Kubernetes configuration and varies from environment to environment.
  9. Perform the following steps to reload the TLS context on each application SQL pod in the cnDBTier cluster:
    1. Check the number of application SQL pods present in the cnDBTier cluster:
      $ kubectl get pods --namespace=<namespace of cnDBTier cluster> | grep ndbappmysql
      For example:
      $ kubectl get pods --namespace=occne-cndbtier | grep ndbappmysql
      Sample output:
      ndbappmysqld-0                                                    3/3     Running   0          103m
      ndbappmysqld-1                                                    3/3     Running   0          4h39m
    2. Reload the TLS context on each application SQL pod:
      $ kubectl exec -it <APP SQL pod> --namespace=<namespace of cnDBTier cluster> -- mysql -h127.0.0.1 -uroot -p<root password> -e "ALTER INSTANCE RELOAD TLS;"
      For example:
      • Run the following command to reload the TLS context on ndbappmysqld-0:
        $ kubectl exec -it ndbappmysqld-0 --namespace=occne-cndbtier -- mysql -h127.0.0.1 -uroot -pNextGenCne -e "ALTER INSTANCE RELOAD TLS;"
        Sample output:
        Defaulted container "mysqlndbcluster" out of: mysqlndbcluster, init-sidecar, db-infra-monitor-svc
        mysql: [Warning] Using a password on the command line interface can be insecure.
      • Run the following command to reload the TLS context on ndbappmysqld-1:
        $ kubectl exec -it ndbappmysqld-1 --namespace=occne-cndbtier -- mysql -h127.0.0.1 -uroot -pNextGenCne -e "ALTER INSTANCE RELOAD TLS;"
        Sample output:
        Defaulted container "mysqlndbcluster" out of: mysqlndbcluster, init-sidecar, db-infra-monitor-svc
        mysql: [Warning] Using a password on the command line interface can be insecure.
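      If the cluster runs more application SQL pods, you can derive the pod list instead of running the command for each pod by hand. This is a minimal sketch assuming the example namespace and root password shown above:
        # Reload the TLS context on every ndbappmysqld pod in the namespace
        $ for pod in $(kubectl get pods --namespace=occne-cndbtier -o name | grep ndbappmysqld); do
            kubectl exec "$pod" --namespace=occne-cndbtier -- \
              mysql -h127.0.0.1 -uroot -pNextGenCne -e "ALTER INSTANCE RELOAD TLS;"
          done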
  10. Perform the following steps to run a rollout restart of the ndbappmysqld pods in the cnDBTier cluster:
    1. Run the following command to get the statefulset name of the ndbappmysqld pod:
      $ kubectl get statefulset --namespace=<namespace of cnDBTier cluster>
      For example:
      $ kubectl get statefulset --namespace=occne-cndbtier
      Sample output:
      NAME           READY   AGE
      ndbappmysqld   2/2     27h
      ndbmgmd        2/2     27h
      ndbmtd         2/2     27h
      ndbmysqld      2/2     27h
    2. Perform a rollout restart of the ndbappmysqld pod:
      $ kubectl rollout restart statefulset <statefulset name of ndbappmysqld> --namespace=<namespace of cnDBTier cluster>
      For example:
      $ kubectl rollout restart statefulset ndbappmysqld  --namespace=occne-cndbtier
      Sample output:
      statefulset.apps/ndbappmysqld restarted
    3. Wait for the rollout restart of the ndbappmysqld pod to complete and check the status:
      $ kubectl rollout status statefulset <statefulset name of ndbappmysqld> --namespace=<namespace of cnDBTier cluster>
      For example:
      $ kubectl rollout status statefulset ndbappmysqld  --namespace=occne-cndbtier
      Sample output:
      Waiting for 1 pods to be ready...
      waiting for statefulset rolling update to complete 1 pods at revision ndbappmysqld-7c6b9c9f84...
      Waiting for 1 pods to be ready...
      Waiting for 1 pods to be ready...
      statefulset rolling update complete 2 pods at revision ndbappmysqld-7c6b9c9f84...
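To confirm that the application SQL pods picked up the new TLS context, you can query the TLS-related server variables and the server certificate validity reported by MySQL. This is a minimal sketch assuming ndbappmysqld-0, the example namespace, and the example root password used in this procedure:
  # tls_version lists the permitted TLS versions; Ssl_server_not_after shows the expiry of the loaded server certificate
  $ kubectl exec ndbappmysqld-0 --namespace=occne-cndbtier -- mysql -h127.0.0.1 -uroot -pNextGenCne -e "SHOW GLOBAL VARIABLES LIKE 'tls_version'; SHOW GLOBAL STATUS LIKE 'Ssl_server_not_after';"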

7.5 Adding a Georedundant cnDBTier Cluster

This section describes the procedures to add a georedundant cnDBTier cluster to an existing single-site, two-site, or three-site georedundant cnDBTier deployment.

Note:

  • If the existing georedundant cnDBTier cluster is configured with a single replication channel, then the new cnDBTier cluster added by this procedure must also be configured with a single replication channel.
  • If the existing georedundant cnDBTier cluster is configured with multiple replication channel groups, then the new cnDBTier cluster added by this procedure must also be configured with multiple replication channel groups.
The procedures in this section use the following terms to identify the clusters:
  1. cnDBTier Cluster1 (a.k.a. cluster1): The first cloud native-based DBTier cluster in a two-site, three-site, or four-site georeplication setup.
  2. cnDBTier Cluster2 (a.k.a. cluster2): The second cloud native-based DBTier cluster in a two-site, three-site, or four-site georeplication setup.
  3. cnDBTier Cluster3 (a.k.a. cluster3): The third cloud native-based DBTier cluster in a two-site, three-site, or four-site georeplication setup.
  4. cnDBTier Cluster4 (a.k.a. cluster4): The fourth cloud native-based DBTier cluster in a two-site, three-site, or four-site georeplication setup.
Prerequisites
  1. All the cnDBTier data nodes and SQL nodes that participate in georeplication, and at least one management node in the existing clusters must be up and running.
  2. Georeplication must be established between the existing cnDBTier clusters.
  3. All the cnDBTier clusters must be installed using cnDBTier 22.1.x or above.
  4. All the cnDBTier clusters must have the same number of data nodes and node groups.
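A quick way to verify the last prerequisite is to compare the data node and node group layout reported by the management node of each cluster. This is a minimal sketch that reuses the ndb_mgm command shown later in this section and assumes the example namespaces cluster1 and cluster2; the node groups listed for the [ndbd(NDB)] section must match across the clusters:
  # List the data nodes and their node groups on each cluster
  $ kubectl -n cluster1 exec ndbmgmd-0 -- ndb_mgm -e show | grep -i nodegroup
  $ kubectl -n cluster2 exec ndbmgmd-0 -- ndb_mgm -e show | grep -i nodegroup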

7.5.1 Adding cnDBTier Georedundant Cluster to Single Site cnDBTier Cluster

This section describes the procedure to add a cnDBTier georedundant cluster (cnDBTier cluster2) to an existing single site cnDBTier cluster (cnDBTier cluster1).

  1. If TLS was configured on cnDBTier cluster1, then follow the Modifying cnDBTier Certificates to Establish TLS Between Georeplication Sites procedure to update the certificates in cnDBTier cluster1.

    Note:

    Perform this step on cnDBTier cluster1 only if you want to reconfigure the CA certificates for the new cnDBTier cluster.
  2. Check if georeplication is enabled in cnDBTier cluster1 by performing the following steps:

    Note:

    The following commands and examples are applicable for a single replication channel group only. If multiple replication channel groups are enabled, then check if georeplication is enabled in cnDBTier cluster1 for every replication group.
    1. Ensure that the DB replication service in cnDBTier cluster1 is enabled and running successfully with respect to cnDBTier cluster2.
      $ kubectl -n cluster1 get pods | grep repl
      Sample output:
      mysql-cluster-cluster1-cluster2-replication-svc-5bdc46crzzhb   1/1     Running   0          3m11s
    2. Ensure that at least two georeplication SQL nodes are configured in cluster1.
      $ kubectl -n cluster1 get pods | grep ndbmysqld
      Sample output:
      ndbmysqld-0                    3/3     Running   0          4m14s
      ndbmysqld-1                    3/3     Running   0          3m53s
    3. Check the status of cnDBTier cluster1.
      $ kubectl -n cluster1 exec -it ndbmgmd-0 -- ndb_mgm -e show
      Sample output:
      Connected to Management Server at: localhost:1186
      Cluster Configuration
      ---------------------
      [ndbd(NDB)]     2 node(s)
      id=1    @10.233.85.92  (mysql-8.4.3 ndb-8.4.3, Nodegroup: 0)
      id=2    @10.233.114.33  (mysql-8.4.3 ndb-8.4.3, Nodegroup: 0, *)
        
      [ndb_mgmd(MGM)] 2 node(s)
      id=49   @10.233.65.167  (mysql-8.4.3 ndb-8.4.3)
      id=50   @10.233.127.115  (mysql-8.4.3 ndb-8.4.3)
        
      [mysqld(API)]   8 node(s)
      id=56   @10.233.114.34   (mysql-8.4.3 ndb-8.4.3)
      id=57   @10.233.85.91    (mysql-8.4.3 ndb-8.4.3)
      id=70   @10.233.127.117  (mysql-8.4.3 ndb-8.4.3)
      id=71   @10.233.85.93    (mysql-8.4.3 ndb-8.4.3)
      id=222 (not connected, accepting connect from any host)
      id=223 (not connected, accepting connect from any host)
      id=224 (not connected, accepting connect from any host)
      id=225 (not connected, accepting connect from any host)   

      Note:

      • If cluster1 satisfies all the conditions in Step 2, then georeplication is enabled in cnDBTier cluster1. Otherwise, georeplication is not enabled in cluster1.
      • The node IDs 222 to 225 in the sample output are shown as "not connected" as these nodes are added as empty slot IDs used for georeplication recovery. You can ignore these nodes. This note is applicable to all similar outputs in this procedure.
  3. If georeplication is not enabled in cluster1 as per Step 2, then perform the following steps to enable georeplication in cluster1. Otherwise, skip to Step 4.

    Note:

    • This step does not enable multiple replication channel groups on the existing cnDBTier cluster1. Multiple replication channel groups with the required number of channel groups must already be configured on cnDBTier cluster1.
    • Do not run a Helm test when performing a conversion procedure.
    1. Configure the remote mate site name (cluster2) in cnDBTier cluster1 by enabling and configuring the replication service using the custom_values.yaml file.
      Example:
      $ vi occndbtier/custom_values.yaml
      Sample output:
      dbreplsvcdeployments:
          # if pod prefix is given then use the unique smaller name for this db replication service.
          - name: cluster1-cluster2-replication-svc
            enabled: true
            .....
            .....
            .....
            replication:
              # Local site replication service LoadBalancer ip can be configured.
              localsiteip: ""
              matesitename: "cluster2"
              remotesiteip: ""
    2. Configure the number of SQL pods in cnDBTier cluster1 to at least two in the custom_values.yaml file.
      Example with output:
      $ vi occndbtier/custom_values.yaml
      apiReplicaCount: 2
    3. Upgrade cnDBTier cluster1 by using the CSAR package.
      $ helm upgrade mysql-cluster occndbtier --namespace cluster1 -f occndbtier/custom_values.yaml
      Sample output:
      Release "mysql-cluster" has been upgraded. Happy Helming!
      NAME: mysql-cluster
      LAST DEPLOYED:  Mon May 20 11:10:32 2025
      NAMESPACE: cluster1
      STATUS: deployed
      REVISION: 2
    4. Wait until the upgrade is complete and all the pods of cnDBTier cluster 1 are restarted. Verify the status of the cluster by performing the following steps:
      1. Check the status of pods running cluster1:
        $ kubectl get pods --namespace=cluster1
        Sample output:
        NAME                                                              READY   STATUS    RESTARTS       AGE
        mysql-cluster-cluster1-cluster2-replication-svc-7676cc7bd62zssl   1/1     Running   0              153m
        mysql-cluster-db-backup-manager-svc-c77df67b7-tqnd2               1/1     Running   0              153m
        mysql-cluster-db-monitor-svc-69ff969477-646tz                     1/1     Running   0              147m
        ndbappmysqld-0                                                    2/2     Running   0              148m
        ndbappmysqld-1                                                    2/2     Running   0              147m
        ndbmgmd-0                                                         1/1     Running   0              152m
        ndbmgmd-1                                                         1/1     Running   0              152m
        ndbmtd-0                                                          2/2     Running   0              150m
        ndbmtd-1                                                          2/2     Running   0              151m
        ndbmysqld-0                                                       2/2     Running   0              147m
        ndbmysqld-1                                                       2/2     Running   0              147m
      2. Check the status of cnDBTier cluster1:
        $ kubectl -n cluster1 exec -it ndbmgmd-0 -- ndb_mgm -e show
        Sample output:
        Connected to Management Server at: localhost:1186
        Cluster Configuration
        ---------------------
        [ndbd(NDB)]     2 node(s)
        id=1    @10.233.85.92  (mysql-8.4.3 ndb-8.4.3, Nodegroup: 0)
        id=2    @10.233.114.33  (mysql-8.4.3 ndb-8.4.3, Nodegroup: 0, *)
          
        [ndb_mgmd(MGM)] 2 node(s)
        id=49   @10.233.65.167  (mysql-8.4.3 ndb-8.4.3)
        id=50   @10.233.127.115  (mysql-8.4.3 ndb-8.4.3)
          
        [mysqld(API)]   8 node(s)
        id=56   @10.233.114.34  (mysql-8.4.3 ndb-8.4.3)
        id=57   @10.233.85.91  (mysql-8.4.3 ndb-8.4.3)
        id=70   @10.233.127.117  (mysql-8.4.3 ndb-8.4.3)
        id=71   @10.233.85.93  (mysql-8.4.3 ndb-8.4.3)
        id=222 (not connected, accepting connect from any host)
        id=223 (not connected, accepting connect from any host)
        id=224 (not connected, accepting connect from any host)
        id=225 (not connected, accepting connect from any host)
        

        Note:

        Node IDs 222 to 225 in the sample output are shown as "not connected" as these are added as empty slot IDs that are used for georeplication recovery. You can ignore these node IDs.
  4. Install cnDBTier cluster2 by configuring cnDBTier cluster1 as mate site with at least two SQL pods. For information on installing a cnDBTier cluster, see Oracle Communications Cloud Native Core, cnDBTier Installation, Upgrade, and Fault Recovery Guide.

    Note:

    • If multiple replication channel groups are enabled, then install cnDBTier cluster2 by configuring cnDBTier cluster1 as mate site with at least four SQL pods.
    • Do not run a Helm test when performing the conversion procedure.
    • If you are going to enable encryption, ensure that the new site uses the same encryption key.
  5. Check if the georeplication channels are established between cnDBTier cluster1 and cnDBTier cluster2 by running the following commands:

    Note:

    The following commands and examples are applicable for a single replication channel group only. If multi replication channel groups are enabled, then check if the georeplication channels are established between cnDBTier cluster1 and cnDBTier cluster2 for every replication channel group.
    $ kubectl -n cluster1 exec -it ndbmysqld-0 -- bash
    $ mysql -h 127.0.0.1 -uroot -p
    Password:
    mysql> select * from replication_info.DBTIER_REPLICATION_CHANNEL_INFO;
    Sample output:
    +------------------+------------------+------------+---------------------+---------+--------------+-----------+-----------+---------------------+----------------------+
    | remote_site_name | remote_server_id | channel_id | remote_signaling_ip | role    | start_epoch  | site_name | server_id | start_ts            | replchannel_group_id |
    +------------------+------------------+------------+---------------------+---------+--------------+-----------+-----------+---------------------+----------------------+
    | cluster1         |             1000 |    2005601 | 10.233.51.94        | ACTIVE  |         NULL | cluster2  |      2000 | 2024-02-19 19:10:45 |                    1 |
    | cluster1         |             1001 |    2005702 | 10.233.10.190       | STANDBY |         NULL | cluster2  |      2001 | 2024-02-19 19:10:44 |                    1 |
    | cluster2         |             2000 |    2005601 | 10.233.8.24         | ACTIVE  |         NULL | cluster1  |      1000 | 2024-02-19 19:10:44 |                    1 |
    | cluster2         |             2001 |    2005702 | 10.233.61.239       | STANDBY |         NULL | cluster1  |      1001 | 2024-02-19 19:10:45 |                    1 |
    +------------------+------------------+------------+---------------------+---------+--------------+-----------+-----------+---------------------+----------------------+
     
    4 rows in set (0.01 sec)
  6. Restore the cnDBTier cluster1 data in cnDBTier cluster2 and reestablish the georeplication channels by performing the fault recovery procedure. You can perform the georeplication recovery using cnDBTier or the CNC Console. For more information, see the Georeplication Failure Between cnDBTier Clusters in Two Site Replication section of Oracle Communications Cloud Native Core, cnDBTier Installation, Upgrade, and Fault Recovery Guide.
  7. Verify the replication status in cnDBTier cluster1 and cnDBTier cluster2 by performing the following steps:

    Note:

    The following commands and examples are applicable for a single replication channel group only. If multiple replication channel groups are enabled, then verify the replication status in cnDBTier cluster1 and cnDBTier cluster2 for every replication group.
    1. Log in to the cnDBTier cluster1 Bastion Host and check the replication status in the active replication channel:
      $ kubectl -n cluster1 exec -it ndbmysqld-0 -- bash
      $ mysql -h 127.0.0.1 -uroot -p
      Password:
      mysql> select * from replication_info.DBTIER_REPLICATION_CHANNEL_INFO;
      Sample output:
      +------------------+------------------+------------+---------------------+---------+--------------+-----------+-----------+---------------------+----------------------+
      | remote_site_name | remote_server_id | channel_id | remote_signaling_ip | role    | start_epoch  | site_name | server_id | start_ts            | replchannel_group_id |
      +------------------+------------------+------------+---------------------+---------+--------------+-----------+-----------+---------------------+----------------------+
      | cluster1         |             1000 |    2005601 | 10.233.51.94        | ACTIVE  |         NULL | cluster2  |      2000 | 2024-02-19 19:10:45 |                    1 |
      | cluster1         |             1001 |    2005702 | 10.233.10.190       | STANDBY |         NULL | cluster2  |      2001 | 2024-02-19 19:10:44 |                    1 |
      | cluster2         |             2000 |    2005601 | 10.233.8.24         | ACTIVE  |         NULL | cluster1  |      1000 | 2024-02-19 19:10:44 |                    1 |
      | cluster2         |             2001 |    2005702 | 10.233.61.239       | STANDBY |         NULL | cluster1  |      1001 | 2024-02-19 19:10:45 |                    1 |
      +------------------+------------------+------------+---------------------+---------+--------------+-----------+-----------+---------------------+----------------------+
      4 rows in set (0.01 sec) 
    2. Check if the replication is turned on (that is, Replica_IO_Running and Replica_SQL_Running are set to Yes) in the ACTIVE replication channel.

      Example for ndbmysqld-0:

      $ kubectl -n cluster1 exec -it ndbmysqld-0 -- bash
      $ mysql -h 127.0.0.1 -uroot -p

      Sample output:

      Password:
      mysql> SHOW REPLICA STATUS\G;
      *************************** 1. row ***************************
                   Replica_IO_State: Waiting for master to send event
                        Source_Host: 10.233.0.147
                        Source_User: occnerepluser
                        Source_Port: 3306
                      Connect_Retry: 60
                    Source_Log_File: mysql-bin.000007
                Read_Source_Log_Pos: 16442
                     Relay_Log_File: mysql-relay-bin.000002
                      Relay_Log_Pos: 11203
              Relay_Source_Log_File: mysql-bin.000007
                 Replica_IO_Running: Yes
                Replica_SQL_Running: Yes
                          ....
                          ....
                          ....
          Replica_SQL_Running_State: Replica has read all relay log; waiting for more updates
                 Source_Retry_Count: 86400
                          ....
                          ....
       
      Example for ndbmysqld-1:
      $ kubectl -n cluster1 exec -it ndbmysqld-1 -- bash
      $ mysql -h 127.0.0.1 -uroot -p
      Sample output:
      Password:
      mysql> SHOW REPLICA STATUS\G;
      *************************** 1. row ***************************
                   Replica_IO_State:
                        Source_Host: 10.233.9.35
                        Source_User: occnerepluser
                        Source_Port: 3306
                      Connect_Retry: 60
                    Source_Log_File: mysql-bin.000005
                Read_Source_Log_Pos: 26970
                     Relay_Log_File: mysql-relay-bin.000002
                      Relay_Log_Pos: 2228
              Relay_Source_Log_File: mysql-bin.000005
                 Replica_IO_Running: No
                Replica_SQL_Running: No
    3. Log in to the cnDBTier cluster2 Bastion Host and check the replication status in the active replication channel:
      $ kubectl -n cluster2 exec -it ndbmysqld-0 -- bash
      $ mysql -h 127.0.0.1 -uroot -p
      Password:
      mysql> select * from replication_info.DBTIER_REPLICATION_CHANNEL_INFO;
      Sample output:
      +------------------+------------------+------------+---------------------+---------+--------------+-----------+-----------+---------------------+----------------------+
      | remote_site_name | remote_server_id | channel_id | remote_signaling_ip | role    | start_epoch  | site_name | server_id | start_ts            | replchannel_group_id |
      +------------------+------------------+------------+---------------------+---------+--------------+-----------+-----------+---------------------+----------------------+
      | cluster1         |             1000 |    2005601 | 10.233.51.94        | ACTIVE  |         NULL | cluster2  |      2000 | 2024-02-19 19:10:45 |                    1 |
      | cluster1         |             1001 |    2005702 | 10.233.10.190       | STANDBY |         NULL | cluster2  |      2001 | 2024-02-19 19:10:44 |                    1 |
      | cluster2         |             2000 |    2005601 | 10.233.8.24         | ACTIVE  |         NULL | cluster1  |      1000 | 2024-02-19 19:10:44 |                    1 |
      | cluster2         |             2001 |    2005702 | 10.233.61.239       | STANDBY |         NULL | cluster1  |      1001 | 2024-02-19 19:10:45 |                    1 |
      +------------------+------------------+------------+---------------------+---------+--------------+-----------+-----------+---------------------+----------------------+
      4 rows in set (0.00 sec)  
    4. Check if the replication is turned on (that is, Replica_IO_Running and Replica_SQL_Running are set to Yes) in the ACTIVE replication channel.
      Example for ndbmysqld-0:
      $ kubectl -n cluster2 exec -it ndbmysqld-0 -- bash
      $ mysql -h 127.0.0.1 -uroot -p
      Password:
      mysql> SHOW REPLICA STATUS\G;
      Sample output:
      *************************** 1. row ***************************
                   Replica_IO_State: Waiting for master to send event
                        Source_Host: 10.233.13.41
                        Source_User: occnerepluser
                        Source_Port: 3306
                      Connect_Retry: 60
                    Source_Log_File: mysql-bin.000007
                Read_Source_Log_Pos: 32792
                     Relay_Log_File: mysql-relay-bin.000002
                      Relay_Log_Pos: 17215
              Relay_Source_Log_File: mysql-bin.000007
                 Replica_IO_Running: Yes
                Replica_SQL_Running: Yes
                          ....
                          ....
                          ....
          Replica_SQL_Running_State: Replica has read all relay log; waiting for more updates
                 Source_Retry_Count: 86400
                          ....
                          ....
       
      Example for ndbmysqld-1:
      $ kubectl -n cluster2 exec -it ndbmysqld-1 -- bash
      $ mysql -h 127.0.0.1 -uroot -p
      Password:
      mysql> SHOW REPLICA STATUS\G;
      
      Sample output:
      *************************** 1. row ***************************
                   Replica_IO_State:
                        Source_Host: 10.233.39.110
                        Source_User: occnerepluser
                        Source_Port: 3306
                      Connect_Retry: 60
                    Source_Log_File: mysql-bin.000007
                Read_Source_Log_Pos: 9785
                     Relay_Log_File: mysql-relay-bin.000002
                      Relay_Log_Pos: 203
              Relay_Source_Log_File: mysql-bin.000007
                 Replica_IO_Running: No
                Replica_SQL_Running: No

      Note:

      As data is replicated to cnDBTier cluster2 and georeplication channels are established between cnDBTier cluster1 and cnDBTier cluster2, the newly added cnDBTier cluster (cluster2) can also be used by the NFs for database operations.

7.5.2 Adding cnDBTier Georedundant Cluster to Two-Site cnDBTier Clusters

This section describes the procedure to add a cnDBTier georedundant cluster (cnDBTier cluster3) to the existing two-site georedundant cnDBTier clusters (cnDBTier cluster1 and cnDBTier cluster2).

  1. If TLS was configured on cnDBTier cluster1 and cluster2, then follow the Modifying cnDBTier Certificates to Establish TLS Between Georeplication Sites procedure to update the certificates in cnDBTier cluster1 and cluster2.

    Note:

    Perform this step on those cnDBTier clusters on which you want to reconfigure the CA certificates for the new cnDBTier cluster.
  2. Check if georeplication is enabled in cnDBTier cluster1 with respect to cnDBTier cluster3 by performing the following steps:

    Note:

    The following commands and examples are applicable for a single replication channel group only. If multiple replication channel groups are enabled, then check if georeplication is enabled in cnDBTier cluster1 for every replication group with respect to cnDBTier cluster3.
    1. Ensure that the DB replication service in cnDBTier cluster1 is enabled and running successfully with respect to cnDBTier cluster3:
      $ kubectl -n cluster1 get pods | grep repl
      Sample output:
      mysql-cluster-cluster1-cluster2-replication-svc-5bdc46crzzhb   1/1     Running   0          3m11s
      mysql-cluster-cluster1-cluster3-replication-svc-7955b6b6fbdc   1/1     Running   2          3m09s
    2. Ensure that at least four georeplication SQL nodes are configured in cluster1:
      $ kubectl -n cluster1 get pods | grep ndbmysqld
      Sample output:
      ndbmysqld-0                                                       3/3     Running   0          4m14s
      ndbmysqld-1                                                       3/3     Running   0          3m53s
      ndbmysqld-2                                                       3/3     Running   0          3m21s
      ndbmysqld-3                                                       3/3     Running   0          3m01s
    3. Check the status of cnDBTier cluster1:
      $ kubectl -n cluster1 exec -it ndbmgmd-0 -- ndb_mgm -e show
      Sample output:
      Connected to Management Server at: localhost:1186
      Cluster Configuration
      ---------------------
      [ndbd(NDB)]     2 node(s)
      id=1    @10.233.85.92  (mysql-8.4.3 ndb-8.4.3, Nodegroup: 0)
      id=2    @10.233.114.33  (mysql-8.4.3 ndb-8.4.3, Nodegroup: 0, *)
         
      [ndb_mgmd(MGM)] 2 node(s)
      id=49   @10.233.65.167  (mysql-8.4.3 ndb-8.4.3)
      id=50   @10.233.127.115  (mysql-8.4.3 ndb-8.4.3)
         
      [mysqld(API)]   10 node(s)
      id=56   @10.233.114.34  (mysql-8.4.3 ndb-8.4.3)
      id=57   @10.233.85.91  (mysql-8.4.3 ndb-8.4.3)
      id=58   @10.233.114.34  (mysql-8.4.3 ndb-8.4.3)
      id=59   @10.233.85.91  (mysql-8.4.3 ndb-8.4.3)
      id=70   @10.233.127.117  (mysql-8.4.3 ndb-8.4.3)
      id=71   @10.233.85.93  (mysql-8.4.3 ndb-8.4.3)
      id=222 (not connected, accepting connect from any host)
      id=223 (not connected, accepting connect from any host)
      id=224 (not connected, accepting connect from any host)
      id=225 (not connected, accepting connect from any host)

    Note:

    • The node IDs 222 to 225 in the sample output are shown as "not connected" as these nodes are added as empty slot IDs used for georeplication recovery. You can ignore these nodes.
    • If cluster1 satisfies all the conditions in Step 2, then georeplication is enabled in cnDBTier cluster1. Otherwise, georeplication is not enabled.
  3. If georeplication is not enabled in cluster1 as per Step 2, then perform the following steps to enable georeplication in cnDBTier cluster1 with respect to cnDBTier cluster3. Otherwise, skip to Step 4.

    Note:

    The following commands and examples are applicable for a single replication channel group only. If multiple replication channel groups are enabled, then configure the remote mate site name (cluster3) in cnDBTier cluster1 by following the Configuring Multiple Replication channel groups procedure in Oracle Communications Cloud Native Core, cnDBTier Installation, Upgrade, and Fault Recovery Guide and skip steps a and b in the following substeps.
    1. Configure the remote mate site name (cluster3) in cnDBTier cluster1 by enabling and configuring the replication service using the custom_values.yaml file:
      $ vi occndbtier/custom_values.yaml
      Sample output:
       dbreplsvcdeployments:
          ....
          # if pod prefix is given then use the unique smaller name for this db replication service.
          - name: cluster1-cluster3-replication-svc
            enabled: true
            mysql:
              dbtierservice: "mysql-connectivity-service"
              dbtierreplservice: "ndbmysqldsvc"
              # if cndbtier, use CLUSTER-IP from the ndbmysqldsvc-2 LoadBalancer service
              primaryhost: "ndbmysqld-2.ndbmysqldsvc.cluster1.svc.occne4-cgbu-cne-dbtier"
              port: "3306"
              # if cndbtier, use EXTERNAL-IP from the ndbmysqldsvc-2 LoadBalancer service
              primarysignalhost: ""
              # serverid is unique; retrieve it for the site being configured
              primaryhostserverid: "1002"
              # if cndbtier, use CLUSTER-IP from the ndbmysqldsvc-3 LoadBalancer service
              secondaryhost: "ndbmysqld-3.ndbmysqldsvc.cluster1.svc.occne4-cgbu-cne-dbtier"
              # if cndbtier, use EXTERNAL-IP from the ndbmysqldsvc-3 LoadBalancer service
              secondarysignalhost: ""
              # serverid is unique; retrieve it for the site being configured
              secondaryhostserverid: "1003"
            replication:
              # Local site replication service LoadBalancer ip can be configured.        
              localsiteip: ""
              matesitename: "cluster3"
              remotesiteip: ""    
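      The comments in the sample above refer to the CLUSTER-IP and EXTERNAL-IP values of the ndbmysqldsvc LoadBalancer services. The following is a minimal sketch for looking up these values, assuming the example namespace cluster1; the exact service names depend on your deployment:
      # List the ndbmysqldsvc LoadBalancer services with their CLUSTER-IP and EXTERNAL-IP columns
      $ kubectl -n cluster1 get svc | grep ndbmysqldsvc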
    2. Configure the number of SQL pods in cnDBTier cluster1 to at least four in the custom_values.yaml file:

      Note:

      apiReplicaCount is set to "4" as you are adding the third site. The first two ndbmysqld pods are used for replication between site 1 and site 2 only. The two newly added ndbmysqld pods are responsible for replication between site 1 and site 3.
      $ vi occndbtier/custom_values.yaml
      Sample output:
      apiReplicaCount: 4
      
    3. Upgrade cnDBTier cluster1 by using the CSAR package. For more information, see Oracle Communications Cloud Native Core, cnDBTier Installation, Upgrade, and Fault Recovery Guide.

      Note:

      The following commands and examples are applicable for a single replication channel group only. If multiple replication channel groups are enabled, then upgrade cnDBTier cluster1 by providing the appropriate multi-group values file.
      $ helm upgrade mysql-cluster occndbtier --namespace cluster1 --no-hooks -f occndbtier/custom_values.yaml
      Sample output:
      Release "mysql-cluster" has been upgraded. Happy Helming!
      NAME: mysql-cluster
      LAST DEPLOYED:  Mon May 20 11:10:32 2025
      NAMESPACE: cluster1
      STATUS: deployed
      REVISION: 2
    4. Wait until the upgrade is complete and all the pods of cnDBTier cluster 1 are restarted. Verify the status of the cluster by performing the following steps:
      1. Check the status of pods running cluster1:
        $ kubectl get pods --namespace=cluster1
        Sample output:
        NAME                                                              READY   STATUS    RESTARTS       AGE
        mysql-cluster-cluster1-cluster2-replication-svc-7676cc7bd62zssl   1/1     Running   0              153m
        mysql-cluster-db-backup-manager-svc-c77df67b7-tqnd2               1/1     Running   0              153m
        mysql-cluster-db-monitor-svc-69ff969477-646tz                     1/1     Running   0              147m
        ndbappmysqld-0                                                    2/2     Running   0              148m
        ndbappmysqld-1                                                    2/2     Running   0              147m
        ndbmgmd-0                                                         1/1     Running   0              152m
        ndbmgmd-1                                                         1/1     Running   0              152m
        ndbmtd-0                                                          2/2     Running   0              150m
        ndbmtd-1                                                          2/2     Running   0              151m
        ndbmysqld-0                                                       2/2     Running   0              147m
        ndbmysqld-1                                                       2/2     Running   0              147m
      2. Check the status of cnDBTier cluster1:
        $ kubectl -n cluster1 exec -it ndbmgmd-0 -- ndb_mgm -e show
        Sample output:
        Connected to Management Server at: localhost:1186
        Cluster Configuration
        ---------------------
        [ndbd(NDB)]     2 node(s)
        id=1    @10.233.85.92  (mysql-8.4.3 ndb-8.4.3, Nodegroup: 0)
        id=2    @10.233.114.33  (mysql-8.4.3 ndb-8.4.3, Nodegroup: 0, *)
          
        [ndb_mgmd(MGM)] 2 node(s)
        id=49   @10.233.65.167  (mysql-8.4.3 ndb-8.4.3)
        id=50   @10.233.127.115  (mysql-8.4.3 ndb-8.4.3)
          
        [mysqld(API)]   8 node(s)
        id=56   @10.233.114.34  (mysql-8.4.3 ndb-8.4.3)
        id=57   @10.233.85.91  (mysql-8.4.3 ndb-8.4.3)
        id=70   @10.233.127.117  (mysql-8.4.3 ndb-8.4.3)
        id=71   @10.233.85.93  (mysql-8.4.3 ndb-8.4.3)
        id=222 (not connected, accepting connect from any host)
        id=223 (not connected, accepting connect from any host)
        id=224 (not connected, accepting connect from any host)
        id=225 (not connected, accepting connect from any host)
        

        Note:

        Node IDs 222 to 225 in the sample output are shown as "not connected" as these are added as empty slot IDs that are used for georeplication recovery. You can ignore these node IDs.
  4. Check if georeplication is enabled in cnDBTier cluster2 with respect to cnDBTier cluster3 by performing the following steps:

    Note:

    The following commands and examples are applicable for a single replication channel group only. If multiple replication channel groups are enabled, then check if georeplication is enabled in cnDBTier cluster2 for every replication group with respect to cnDBTier cluster3.
    1. Ensure that the DB replication service in cnDBTier cluster2 is enabled and running successfully with respect to cnDBTier cluster3:
      $ kubectl -n cluster2 get pods | grep repl
      Sample output:
      mysql-cluster-cluster2-cluster1-replication-svc-5bdc46crzzhb   1/1     Running   0          3m11s
      mysql-cluster-cluster2-cluster3-replication-svc-7955b6b6fbdc   1/1     Running   2          3m39s
    2. Ensure that at least four georeplication SQL nodes are configured in cluster2:
      $ kubectl -n cluster2 get pods | grep ndbmysqld
      Sample output:
      ndbmysqld-0                                                       3/3     Running   0          4m14s
      ndbmysqld-1                                                       3/3     Running   0          3m53s
      ndbmysqld-2                                                       3/3     Running   0          3m21s
      ndbmysqld-3                                                       3/3     Running   0          3m01s
    3. Check the status of cnDBTier cluster2:
      $ kubectl -n cluster2 exec -it ndbmgmd-0 -- ndb_mgm -e show
      Sample output:
      Connected to Management Server at: localhost:1186
      Cluster Configuration
      ---------------------
      [ndbd(NDB)]     2 node(s)
      id=1    @10.233.85.92  (mysql-8.4.3 ndb-8.4.3, Nodegroup: 0)
      id=2    @10.233.114.33  (mysql-8.4.3 ndb-8.4.3, Nodegroup: 0, *)
         
      [ndb_mgmd(MGM)] 2 node(s)
      id=49   @10.233.65.167  (mysql-8.4.3 ndb-8.4.3)
      id=50   @10.233.127.115  (mysql-8.4.3 ndb-8.4.3)
         
      [mysqld(API)]   10 node(s)
      id=56   @10.233.114.34  (mysql-8.4.3 ndb-8.4.3)
      id=57   @10.233.85.91  (mysql-8.4.3 ndb-8.4.3)
      id=58   @10.233.114.34  (mysql-8.4.3 ndb-8.4.3)
      id=59   @10.233.85.91  (mysql-8.4.3 ndb-8.4.3)
      id=70   @10.233.127.117  (mysql-8.4.3 ndb-8.4.3)
      id=71   @10.233.85.93  (mysql-8.4.3 ndb-8.4.3)
      id=222 (not connected, accepting connect from any host)
      id=223 (not connected, accepting connect from any host)
      id=224 (not connected, accepting connect from any host)
      id=225 (not connected, accepting connect from any host)

    Note:

    • If cluster2 satisfies all the conditions in Step 4, then georeplication is enabled in cnDBTier cluster2. Otherwise, georeplication is not enabled in cluster2.
    • The node IDs 222 to 225 in the sample output are shown as "not connected" as these nodes are added as empty slot IDs used for georeplication recovery. You can ignore these nodes.
  5. If georeplication is not enabled in cluster2 as per Step 4, then perform the following steps to enable georeplication in cluster2 with respect to cluster3. Otherwise, skip to Step 6.

    Note:

    The following commands and examples are applicable for a single replication channel group only. If multiple replication channel groups are enabled, then configure remote mate site name (cluster3) in cnDBTier cluster2 by following the Configuring Multiple Replication channel groups procedure in Oracle Communications Cloud Native Core, cnDBTier Installation, Upgrade, and Fault Recovery Guide and skip steps a and b in the following substeps.
    1. Configure the remote mate site name (cluster3) in cnDBTier cluster2 by enabling and configuring the replication service using the custom_values.yaml file:
      $ vi occndbtier/custom_values.yaml
      Sample output:
       dbreplsvcdeployments:
          ....
          # if pod prefix is given then use the unique smaller name for this db replication service.
          - name: cluster2-cluster3-replication-svc
            enabled: true
            mysql:
              dbtierservice: "mysql-connectivity-service"
              dbtierreplservice: "ndbmysqldsvc"
              # if cndbtier, use CLUSTER-IP from the ndbmysqldsvc-2 LoadBalancer service
              primaryhost: "ndbmysqld-2.ndbmysqldsvc.cluster2.svc.occne4-cgbu-cne-dbtier"
              port: "3306"
              # if cndbtier, use EXTERNAL-IP from the ndbmysqldsvc-2 LoadBalancer service
              primarysignalhost: ""
              # serverid is unique; retrieve it for the site being configured
              primaryhostserverid: "2002"
              # if cndbtier, use CLUSTER-IP from the ndbmysqldsvc-3 LoadBalancer service
              secondaryhost: "ndbmysqld-3.ndbmysqldsvc.cluster2.svc.occne4-cgbu-cne-dbtier"
              # if cndbtier, use EXTERNAL-IP from the ndbmysqldsvc-3 LoadBalancer service
              secondarysignalhost: ""
              # serverid is unique; retrieve it for the site being configured
              secondaryhostserverid: "2003"
            replication:
              # Local site replication service LoadBalancer ip can be configured.
              localsiteip: ""
              matesitename: "cluster3"
              remotesiteip: ""
    2. Configure the number of SQL pods in cnDBTier cluster2 to at least four in the custom_values.yaml file:

      Note:

      apiReplicaCount is set to "4" as you are adding the third site. The first two ndbmysqld pods are used for replication between site 1 and site 2 only. The two newly added ndbmysqld pods are responsible for replication between site 2 and site 3.
      $ vi occndbtier/custom_values.yaml
      Sample output:
      apiReplicaCount: 4
    3. Upgrade cnDBTier cluster2 by using the CSAR package. For more information, see Oracle Communications Cloud Native Core, cnDBTier Installation, Upgrade, and Fault Recovery Guide.

      Note:

      The following commands and examples are applicable for a single replication channel group only. If multiple replication channel groups are enabled, then upgrade cnDBTier cluster2 by providing the appropriate multi-group values file.
      $ helm upgrade mysql-cluster occndbtier --namespace cluster2 --no-hooks -f occndbtier/custom_values.yaml
      Sample output:
      Release "mysql-cluster" has been upgraded. Happy Helming!
      NAME: mysql-cluster
      LAST DEPLOYED:  Mon May 20 11:10:32 2025
      NAMESPACE: cluster2
      STATUS: deployed
      REVISION: 2
    4. Wait until the upgrade is complete and all the pods of cnDBTier cluster 2 are restarted. Verify the status of the cluster by performing the following steps:
      1. Check the status of pods running cluster2:
        $ kubectl get pods --namespace=cluster2
        Sample output:
        NAME                                                              READY   STATUS    RESTARTS       AGE
        mysql-cluster-cluster2-cluster1-replication-svc-578c6578c85d2z5   1/1     Running   0              3m
        mysql-cluster-cluster2-cluster3-replication-svc-578c6578c8kpzpv   1/1     Running   0              3m
        mysql-cluster-db-backup-manager-svc-688f7cf97d-dxjdt              1/1     Running   0              148m
        mysql-cluster-db-monitor-svc-7cdb4f4dd4-l6t87                     1/1     Running   0              148m
        ndbappmysqld-0                                                    2/2     Running   0              148m
        ndbappmysqld-1                                                    2/2     Running   0              147m
        ndbmgmd-0                                                         1/1     Running   0              152m
        ndbmgmd-1                                                         1/1     Running   0              152m
        ndbmtd-0                                                          2/2     Running   0              150m
        ndbmtd-1                                                          2/2     Running   0              151m
        ndbmysqld-0                                                       2/2     Running   0              147m
        ndbmysqld-1                                                       2/2     Running   0              147m
        ndbmysqld-2                                                       2/2     Running   0              148m
        ndbmysqld-3                                                       2/2     Running   0              149m
      2. Check the status of cnDBTier cluster2:
        $ kubectl -n cluster2 exec -it ndbmgmd-0 -- ndb_mgm -e show
        Sample output:
        Connected to Management Server at: localhost:1186
        Cluster Configuration
        ---------------------
        [ndbd(NDB)]     2 node(s)
        id=1    @10.233.85.92  (mysql-8.4.3 ndb-8.4.3, Nodegroup: 0)
        id=2    @10.233.114.33  (mysql-8.4.3 ndb-8.4.3, Nodegroup: 0, *)
            
        [ndb_mgmd(MGM)] 2 node(s)
        id=49   @10.233.65.167  (mysql-8.4.3 ndb-8.4.3)
        id=50   @10.233.127.115  (mysql-8.4.3 ndb-8.4.3)
            
        [mysqld(API)]   10 node(s)
        id=56   @10.233.124.92  (mysql-8.4.3 ndb-8.4.3)
        id=57   @10.233.114.135  (mysql-8.4.3 ndb-8.4.3)
        id=58   @10.233.114.34  (mysql-8.4.3 ndb-8.4.3)
        id=59   @10.233.85.91  (mysql-8.4.3 ndb-8.4.3) 
        id=70   @10.233.127.117  (mysql-8.4.3 ndb-8.4.3)
        id=71   @10.233.85.93  (mysql-8.4.3 ndb-8.4.3)
        id=222 (not connected, accepting connect from any host)
        id=223 (not connected, accepting connect from any host)
        id=224 (not connected, accepting connect from any host)
        id=225 (not connected, accepting connect from any host)

        Note:

        Node IDs 222 to 225 in the sample output are shown as "not connected" as these are added as empty slot IDs that are used for georeplication recovery. You can ignore these node IDs.
  6. Install cnDBTier cluster3 by configuring cnDBTier cluster1 as the first mate site and cnDBTier cluster2 as the second mate site with at least four SQL pods. For information on installing the cnDBTier cluster, see Oracle Communications Cloud Native Core, cnDBTier Installation, Upgrade, and Fault Recovery Guide.

    Note:

    • If multiple replication channel groups are enabled, then install cnDBTier cluster3 by configuring cnDBTier cluster1 as first mate site and cnDBTier cluster2 as second mate site with at least eight SQL pods.
    • Do not run a Helm test when performing the conversion procedure.
    • If you are going to enable encryption, ensure that the new site uses the same encryption key.
  7. Check if georeplication channels are established between cnDBTier cluster3 and cnDBTier cluster1, and cnDBTier cluster3 and cnDBTier cluster2 by following the procedures described in the Checking Georeplication Status Between Clusters section.
  8. Restore the data of cnDBTier cluster1 or cnDBTier cluster2 in cnDBTier cluster3 and re-establish the georeplication channels between cnDBTier cluster1 and cnDBTier cluster3, and cnDBTier cluster2 and cnDBTier cluster3. You can perform the georeplication recovery using cnDBTier or the CNC Console. For procedures on restoring the cluster data and re-establishing the georeplication channels, refer to the Georeplication Failure between cnDBTier Clusters in Three Site Replication section of Oracle Communications Cloud Native Core, cnDBTier Installation, Upgrade, and Fault Recovery Guide.
  9. Check if georeplication channels are established between cnDBTier cluster3 and cnDBTier cluster1, and cnDBTier cluster3 and cnDBTier cluster2 by following the procedures described in the Checking Georeplication Status Between Clusters section.

    Note:

    As data is replicated to cnDBTier cluster3 and georeplication channels are established between cnDBTier cluster1 and cnDBTier cluster3, and cnDBTier cluster2 and cnDBTier cluster3, the newly added cnDBTier cluster (cnDBTier cluster3) can also be used by the NFs for database operations.

7.5.3 Adding cnDBTier Georedundant Cluster to Three-Site cnDBTier Clusters

This section describes the procedure to add a georedundant cnDBTier cluster (cnDBTier cluster4) to the existing three-site cnDBTier clusters (cnDBTier cluster1, cnDBTier cluster2, and cnDBTier cluster3).

  1. If TLS was configured on cnDBTier cluster1, cluster2, and cluster3, then follow the Modifying cnDBTier Certificates to Establish TLS Between Georeplication Sites procedure to update the certificates in cnDBTier cluster1, cluster2, and cluster3.

    Note:

    Perform this step on those cnDBTier clusters on which you want to reconfigure the CA certificates for the new cnDBTier cluster.
  2. Check if georeplication is enabled in cnDBTier cluster1 with respect to cnDBTier cluster4 by performing the following steps:

    Note:

    The following commands and examples are applicable for a single replication channel group only. If multiple replication channel groups are enabled, then check if georeplication is enabled in cnDBTier cluster1 for every replication group with respect to cnDBTier cluster4.
    1. Ensure that the DB replication service in cnDBTier cluster1 is enabled and running successfully with respect to cnDBTier cluster4:
      $ kubectl -n cluster1 get pods | grep repl
      Sample output:
      mysql-cluster-cluster1-cluster2-replication-svc-5bdc46crzzhb   1/1     Running   0          3m11s
      mysql-cluster-cluster1-cluster3-replication-svc-7955b6b6fbdc   1/1     Running   2          3m39s
      mysql-cluster-cluster1-cluster4-replication-svc-bd977fc5bnd5   1/1     Running   2          3m11s
    2. Ensure that at least six georeplication SQL nodes are configured in cluster1:
      $ kubectl -n cluster1 get pods | grep ndbmysqld
      Sample output:
      ndbmysqld-0                                                       3/3     Running   0          4m14s
      ndbmysqld-1                                                       3/3     Running   0          3m53s
      ndbmysqld-2                                                       3/3     Running   0          3m21s
      ndbmysqld-3                                                       3/3     Running   0          3m01s
      ndbmysqld-4                                                       3/3     Running   0          2m48s
      ndbmysqld-5                                                       3/3     Running   0          2m31s
    3. Check the status of cnDBTier cluster1:
      $ kubectl -n cluster1 exec -it ndbmgmd-0 -- ndb_mgm -e show
      Sample output:
      Connected to Management Server at: localhost:1186
      Cluster Configuration
      ---------------------
      [ndbd(NDB)]     2 node(s)
      id=1    @10.233.85.92  (mysql-8.4.3 ndb-8.4.3, Nodegroup: 0)
      id=2    @10.233.114.33  (mysql-8.4.3 ndb-8.4.3, Nodegroup: 0, *)
          
      [ndb_mgmd(MGM)] 2 node(s)
      id=49   @10.233.65.167  (mysql-8.4.3 ndb-8.4.3)
      id=50   @10.233.127.115  (mysql-8.4.3 ndb-8.4.3)
          
      [mysqld(API)]   12 node(s)
      id=56   @10.233.114.34   (mysql-8.4.3 ndb-8.4.3)
      id=57   @10.233.85.91    (mysql-8.4.3 ndb-8.4.3)
      id=58   @10.233.114.34   (mysql-8.4.3 ndb-8.4.3)
      id=59   @10.233.85.91    (mysql-8.4.3 ndb-8.4.3)
      id=60   @10.233.62.226   (mysql-8.4.3 ndb-8.4.3)
      id=61   @10.233.14.15    (mysql-8.4.3 ndb-8.4.3)
      id=70   @10.233.127.117  (mysql-8.4.3 ndb-8.4.3)
      id=71   @10.233.85.93    (mysql-8.4.3 ndb-8.4.3)
      id=222 (not connected, accepting connect from any host)
      id=223 (not connected, accepting connect from any host)
      id=224 (not connected, accepting connect from any host)
      id=225 (not connected, accepting connect from any host)

    Note:

    • If cluster1 satisfies all the conditions in Step 2, then georeplication is enabled in cnDBTier cluster1. Otherwise, georeplication is not enabled in cluster1.
    • The node IDs 222 to 225 in the sample output are shown as "not connected" as these nodes are added as empty slot IDs used for georeplication recovery. You can ignore these nodes. This note is applicable to all similar outputs in this procedure.
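
    The checks in Step 2 can also be scripted. The following is a minimal sketch, assuming the ndbmysqld-<n> pod naming shown in the sample output above, that verifies whether at least six georeplication SQL pods are running in cluster1:
      # Count the georeplication SQL pods in Running state (sketch).
      RUNNING=$(kubectl -n cluster1 get pods --no-headers | awk '/^ndbmysqld-/ && $3=="Running"' | wc -l)
      if [ "$RUNNING" -ge 6 ]; then
        echo "georeplication SQL node count OK ($RUNNING running)"
      else
        echo "only $RUNNING ndbmysqld pods running; at least 6 are required"
      fi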
  3. If georeplication is not enabled in cluster1 as per Step 2, then perform the following steps to enable georeplication in cnDBTier cluster1 with respect to cnDBTier cluster4. Else, skip to Step 4.

    Note:

    The following commands and examples are applicable for a single replication channel group only. If multiple replication channel groups are enabled, then configure remote mate site name (cluster4) in cnDBTier cluster1 by following the Configuring Multiple Replication channel groups procedure in Oracle Communications Cloud Native Core, cnDBTier Installation, Upgrade, and Fault Recovery Guide and skip steps a and b in the following substeps.
    1. Configure the remote mate site name (cluster4) in cnDBTier cluster1 by enabling and configuring the replication service in the custom_values.yaml file (one way to retrieve the service IPs and server IDs referenced in the comments is sketched at the end of this step):
      $ vi occndbtier/custom_values.yaml
      Sample output:
      dbreplsvcdeployments:
          ....
          # if pod prefix is given then use the unique smaller name for this db replication service.
          - name: cluster1-cluster4-replication-svc
            # Local site replication service LoadBalancer ip can be configured.
            enabled: true
            mysql:
              dbtierservice: "mysql-connectivity-service"
              dbtierreplservice: "ndbmysqldsvc"
              # if cndbtier, use CLUSTER-IP from the ndbmysqldsvc-4 LoadBalancer service
              primaryhost: "ndbmysqld-4.ndbmysqldsvc.cluster1.svc.occne4-cgbu-cne-dbtier"
              port: "3306"
              # if cndbtier, use EXTERNAL-IP from the ndbmysqldsvc-4 LoadBalancer service
              primarysignalhost: ""
              # serverid is unique; retrieve it for the site being configured
              primaryhostserverid: "1004"
              # if cndbtier, use CLUSTER-IP from the ndbmysqldsvc-5 LoadBalancer service
              secondaryhost: "ndbmysqld-5.ndbmysqldsvc.cluster1.svc.occne4-cgbu-cne-dbtier"
              # if cndbtier, use EXTERNAL-IP from the ndbmysqldsvc-5 LoadBalancer service
              secondarysignalhost: ""
              # serverid is unique; retrieve it for the site being configured
              secondaryhostserverid: "1005"
            replication:
              localsiteip: ""
              matesitename: "cluster4"
              remotesiteip: ""
    2. Configure the number of SQL pods in cnDBTier cluster1 to at least six in the custom_values.yaml:

      Note:

      apiReplicaCount is set to "6" as you are adding the third site. The first four ndbmysqld are used for replication between site one, site two, and site three only. The two ndbmysqld pods that are newly added will be responsible for replication between site1 and site4.
      $ vi occndbtier/custom_values.yaml
      Sample output:
      apiReplicaCount: 6
    3. Upgrade cnDBTier cluster1 by using the CSAR package:

      Note:

      The following command and example are applicable for a single replication channel group only. If multiple replication channel groups are enabled, then upgrade cnDBTier cluster1 by providing the appropriate multi-group values file.
      $ helm upgrade mysql-cluster occndbtier --namespace cluster1 --no-hooks -f occndbtier/custom_values.yaml
      Sample output:
      Release "mysql-cluster" has been upgraded. Happy Helming!
      NAME: mysql-cluster
      LAST DEPLOYED:  Mon May 20 11:10:32 2025
      NAMESPACE: cluster1
      STATUS: deployed
      REVISION: 2
    4. Wait until the upgrade is complete and all the pods of cnDBTier cluster 1 are restarted. Verify the status of the cluster by performing the following steps:
      1. Check the status of pods running cluster1:
        $ kubectl get pods --namespace=cluster1
        Sample output:
        NAME                                                              READY   STATUS    RESTARTS       AGE
        mysql-cluster-cluster1-cluster2-replication-svc-5bdc46crzzhb      1/1     Running   0              153m
        mysql-cluster-cluster1-cluster3-replication-svc-7955b6b6fbdc      1/1     Running   0              153m
        mysql-cluster-cluster1-cluster4-replication-svc-bd977fc5bnd5      1/1     Running   2              3m11s
        mysql-cluster-db-backup-manager-svc-c77df67b7-tqnd2               1/1     Running   0              153m
        mysql-cluster-db-monitor-svc-69ff969477-646tz                     1/1     Running   0              147m
        ndbappmysqld-0                                                    2/2     Running   0              148m
        ndbappmysqld-1                                                    2/2     Running   0              147m
        ndbmgmd-0                                                         1/1     Running   0              152m
        ndbmgmd-1                                                         1/1     Running   0              152m
        ndbmtd-0                                                          2/2     Running   0              150m
        ndbmtd-1                                                          2/2     Running   0              151m
        ndbmysqld-0                                                       2/2     Running   0              147m
        ndbmysqld-1                                                       2/2     Running   0              147m
        ndbmysqld-2                                                       2/2     Running   0              148m
        ndbmysqld-3                                                       2/2     Running   0              149m
        ndbmysqld-4                                                       2/2     Running   0              147m
        ndbmysqld-5                                                       2/2     Running   0              147m
      2. Check the status of cnDBTier cluster1:
        $ kubectl -n cluster1 exec -it ndbmgmd-0 -- ndb_mgm -e show
        Sample output:
        Connected to Management Server at: localhost:1186
        Cluster Configuration
        ---------------------
        [ndbd(NDB)]     2 node(s)
        id=1    @10.233.85.92  (mysql-8.4.3 ndb-8.4.3, Nodegroup: 0)
        id=2    @10.233.114.33  (mysql-8.4.3 ndb-8.4.3, Nodegroup: 0, *)
            
        [ndb_mgmd(MGM)] 2 node(s)
        id=49   @10.233.65.167  (mysql-8.4.3 ndb-8.4.3)
        id=50   @10.233.127.115  (mysql-8.4.3 ndb-8.4.3)
            
        [mysqld(API)]   12 node(s)
        id=56   @10.233.124.92    (mysql-8.4.3 ndb-8.4.3)
        id=57   @10.233.114.135   (mysql-8.4.3 ndb-8.4.3)
        id=58   @10.233.114.34    (mysql-8.4.3 ndb-8.4.3)
        id=59   @10.233.85.91     (mysql-8.4.3 ndb-8.4.3)
        id=60   @10.233.15.24     (mysql-8.4.3 ndb-8.4.3)
        id=61   @10.233.48.74     (mysql-8.4.3 ndb-8.4.3)
        id=70   @10.233.127.117   (mysql-8.4.3 ndb-8.4.3)
        id=71   @10.233.85.93     (mysql-8.4.3 ndb-8.4.3)
        id=222 (not connected, accepting connect from any host)
        id=223 (not connected, accepting connect from any host)
        id=224 (not connected, accepting connect from any host)
        id=225 (not connected, accepting connect from any host)

        Note:

        Node IDs 222 to 225 in the sample output are shown as "not connected" as these are added as empty slot IDs that are used for georeplication recovery. You can ignore these node IDs.
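
    The comments in the sample custom_values.yaml above reference LoadBalancer service IP addresses and unique server IDs. The following is a minimal sketch of how these values might be retrieved, assuming the ndbmysqldsvc LoadBalancer services and a MySQL client inside the SQL pods are available as shown in the samples; the user name is a placeholder:
      # List the per-pod ndbmysqld LoadBalancer services to read the
      # CLUSTER-IP and EXTERNAL-IP values referenced in the comments.
      $ kubectl -n cluster1 get svc | grep ndbmysqldsvc
      # Read the server ID of an SQL pod used by the new channel; credentials
      # depend on your deployment.
      $ kubectl -n cluster1 exec -it ndbmysqld-4 -- mysql -u <user> -p -e "SELECT @@server_id;"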
  4. Check if georeplication is enabled in cnDBTier cluster2 with respect to cnDBTier cluster4 by performing the following steps:

    Note:

    The following commands and examples are applicable for a single replication channel group only. If multiple replication channel groups are enabled, then check if georeplication is enabled in cnDBTier cluster2 for every replication group with respect to cnDBTier cluster4.
    1. Ensure that the DB replication service in cnDBTier cluster2 is enabled and running successfully with respect to cnDBTier cluster4:
      $ kubectl -n cluster2 get pods | grep repl
      Sample output:
      mysql-cluster-cluster2-cluster1-replication-svc-5bdc46crzzhb   1/1     Running   0          3m11s
      mysql-cluster-cluster2-cluster3-replication-svc-7955b6b6fbdc   1/1     Running   2          3m39s
      mysql-cluster-cluster2-cluster4-replication-svc--556c99fcdzh   1/1     Running   2          3m05s
    2. Ensure that at least six georeplication SQL nodes are configured in cluster2:

      Note:

      apiReplicaCount is set to "6" as you are adding the third site. The first four ndbmysqld are used for replication between site one, site two, and site three only. The two ndbmysqld pods that are newly added will be responsible for replication between site1 and site4.
      $ kubectl -n cluster2 get pods | grep ndbmysqld
      Sample output:
      ndbmysqld-0                                                       3/3     Running   0          4m14s
      ndbmysqld-1                                                       3/3     Running   0          3m53s
      ndbmysqld-2                                                       3/3     Running   0          3m21s
      ndbmysqld-3                                                       3/3     Running   0          3m01s
      ndbmysqld-4                                                       3/3     Running   0          2m59s
      ndbmysqld-5                                                       3/3     Running   0          2m43s
      
    3. Check the status of cnDBTier cluster2:
      $ kubectl -n cluster2 exec -it ndbmgmd-0 -- ndb_mgm -e show
      Sample output:
      Connected to Management Server at: localhost:1186
      Cluster Configuration
      ---------------------
      [ndbd(NDB)]     2 node(s)
      id=1    @10.233.85.92  (mysql-8.4.3 ndb-8.4.3, Nodegroup: 0)
      id=2    @10.233.114.33  (mysql-8.4.3 ndb-8.4.3, Nodegroup: 0, *)
          
      [ndb_mgmd(MGM)] 2 node(s)
      id=49   @10.233.65.167  (mysql-8.4.3 ndb-8.4.3)
      id=50   @10.233.127.115  (mysql-8.4.3 ndb-8.4.3)
          
      [mysqld(API)]   12 node(s)
      id=56   @10.233.114.34    (mysql-8.4.3 ndb-8.4.3)
      id=57   @10.233.85.91     (mysql-8.4.3 ndb-8.4.3)
      id=58   @10.233.114.34    (mysql-8.4.3 ndb-8.4.3)
      id=59   @10.233.85.91     (mysql-8.4.3 ndb-8.4.3)
      id=60   @10.233.24.121    (mysql-8.4.3 ndb-8.4.3)
      id=61   @10.233.85.22     (mysql-8.4.3 ndb-8.4.3)
      id=70   @10.233.127.117   (mysql-8.4.3 ndb-8.4.3)
      id=71   @10.233.85.93     (mysql-8.4.3 ndb-8.4.3)
      id=222 (not connected, accepting connect from any host)
      id=223 (not connected, accepting connect from any host)
      id=224 (not connected, accepting connect from any host)
      id=225 (not connected, accepting connect from any host)

    Note:

    • The node IDs 222 to 225 in the sample output are shown as "not connected" as these nodes are added as empty slot IDs used for georeplication recovery. You can ignore these nodes.
    • If cluster2 satisfies all the conditions in Step 4, then georeplication is enabled in cnDBTier cluster2. Otherwise, georeplication is not enabled in cluster2.
  5. If georeplication is not enabled in cluster2 as per Step 4, then perform the following steps to enable georeplication in cluster2 with respect to cluster4. Else, skip to Step 6.

    Note:

    The following commands and examples are applicable for a single replication channel group only. If multiple replication channel groups are enabled, then configure remote mate site name (cluster4) in cnDBTier cluster2 by following the Configuring Multiple Replication channel groups procedure in Oracle Communications Cloud Native Core, cnDBTier Installation, Upgrade, and Fault Recovery Guide and skip steps a and b in the following substeps.
    1. Configure the remote mate site name (cluster4) in cnDBTier cluster2 by enabling and configuring the replication service using the custom_values.yaml file:
      $ vi occndbtier/custom_values.yaml
      Sample output:
      dbreplsvcdeployments:
          ....
          # if pod prefix is given then use the unique smaller name for this db replication service.
          - name: cluster2-cluster4-replication-svc
            enabled: true
            mysql:
              dbtierservice: "mysql-connectivity-service"
              dbtierreplservice: "ndbmysqldsvc"
              # if cndbtier, use CLUSTER-IP from the ndbmysqldsvc-4 LoadBalancer service
              primaryhost: "ndbmysqld-4.ndbmysqldsvc.cluster2.svc.occne4-cgbu-cne-dbtier"
              port: "3306"
              # if cndbtier, use EXTERNAL-IP from the ndbmysqldsvc-4 LoadBalancer service
              primarysignalhost: ""
              # serverid is unique; retrieve it for the site being configured
              primaryhostserverid: "2004"
              # if cndbtier, use CLUSTER-IP from the ndbmysqldsvc-5 LoadBalancer service
              secondaryhost: "ndbmysqld-5.ndbmysqldsvc.cluster2.svc.occne4-cgbu-cne-dbtier"
              # if cndbtier, use EXTERNAL-IP from the ndbmysqldsvc-5 LoadBalancer service
              secondarysignalhost: ""
              # serverid is unique; retrieve it for the site being configured
              secondaryhostserverid: "2005"
            replication:
              # Local site replication service LoadBalancer ip can be configured.
              localsiteip: ""
              matesitename: "cluster4"
              remotesiteip: ""
    2. Configure the number of SQL pods in cnDBTier cluster2 to at least six in the custom_values.yaml file:

      Note:

      apiReplicaCount is set to "6" as you are adding the fourth site. The first four ndbmysqld are used for replication between site one, site two, and site three only. The two ndbmysqld pods that are newly added will be responsible for replication between site1 and site4.
      $ vi occndbtier/custom_values.yaml
      Sample output:
      apiReplicaCount: 6
    3. Upgrade cnDBTier cluster2 by using the CSAR package. For more information, see Oracle Communications Cloud Native Core, cnDBTier Installation, Upgrade, and Fault Recovery Guide.

      Note:

      The following commands and examples are applicable for a single replication channel group only. If multiple replication channel groups are enabled, then upgrade cnDBTier cluster2 by providing the appropriate multi-group values file.
      $ helm upgrade mysql-cluster occndbtier --namespace cluster2 --no-hooks -f occndbtier/custom_values.yaml
      Sample output:
      Release "mysql-cluster" has been upgraded. Happy Helming!
      NAME: mysql-cluster
      LAST DEPLOYED:  Mon May 20 11:10:32 2025
      NAMESPACE: cluster2
      STATUS: deployed
      REVISION: 2
    4. Wait until the upgrade is complete and all the pods of cnDBTier cluster 2 are restarted. Verify the status of the cluster by performing the following steps:
      1. Check the status of pods running cluster2:
        $ kubectl get pods --namespace=cluster2
        Sample output:
        NAME                                                              READY   STATUS    RESTARTS       AGE
        mysql-cluster-cluster2-cluster1-replication-svc-5bdc46cryyaa      1/1     Running   0              153m
        mysql-cluster-cluster2-cluster3-replication-svc-7955b6b6ghht      1/1     Running   0              153m
        mysql-cluster-cluster2-cluster4-replication-svc-c574d6cj6mlp      1/1     Running   2              153m
        mysql-cluster-db-backup-manager-svc-c77df67b7-tqnd2               1/1     Running   0              153m
        mysql-cluster-db-monitor-svc-69ff969477-646tz                     1/1     Running   0              147m
        ndbappmysqld-0                                                    2/2     Running   0              148m
        ndbappmysqld-1                                                    2/2     Running   0              147m
        ndbmgmd-0                                                         1/1     Running   0              152m
        ndbmgmd-1                                                         1/1     Running   0              152m
        ndbmtd-0                                                          2/2     Running   0              150m
        ndbmtd-1                                                          2/2     Running   0              151m
        ndbmysqld-0                                                       2/2     Running   0              147m
        ndbmysqld-1                                                       2/2     Running   0              147m
        ndbmysqld-2                                                       2/2     Running   0              148m
        ndbmysqld-3                                                       2/2     Running   0              149m
        ndbmysqld-4                                                       2/2     Running   0              147m
        ndbmysqld-5                                                       2/2     Running   0              147m
      2. Check the status of cnDBTier cluster2:
        $ kubectl -n cluster2 exec -it ndbmgmd-0 -- ndb_mgm -e show
        Sample output:
        Connected to Management Server at: localhost:1186
        Cluster Configuration
        ---------------------
        [ndbd(NDB)]     2 node(s)
        id=1    @10.233.85.92  (mysql-8.4.3 ndb-8.4.3, Nodegroup: 0)
        id=2    @10.233.114.33  (mysql-8.4.3 ndb-8.4.3, Nodegroup: 0, *)
             
        [ndb_mgmd(MGM)] 2 node(s)
        id=49   @10.233.65.167  (mysql-8.4.3 ndb-8.4.3)
        id=50   @10.233.127.115  (mysql-8.4.3 ndb-8.4.3)
             
        [mysqld(API)]   12 node(s)
        id=56   @10.233.124.92  (mysql-8.4.3 ndb-8.4.3)
        id=57   @10.233.114.135  (mysql-8.4.3 ndb-8.4.3)
        id=58   @10.233.124.111  (mysql-8.4.3 ndb-8.4.3)
        id=59   @10.233.110.81  (mysql-8.4.3 ndb-8.4.3)
        id=60   @10.233.110.41  (mysql-8.4.3 ndb-8.4.3)
        id=61   @10.233.98.102  (mysql-8.4.3 ndb-8.4.3)
        id=70   @10.233.127.117  (mysql-8.4.3 ndb-8.4.3)
        id=71   @10.233.85.93  (mysql-8.4.3 ndb-8.4.3)
        id=222 (not connected, accepting connect from any host)
        id=223 (not connected, accepting connect from any host)
        id=224 (not connected, accepting connect from any host)
        id=225 (not connected, accepting connect from any host)

        Note:

        Node IDs 222 to 225 in the sample output are shown as "not connected" as these are added as empty slot IDs that are used for georeplication recovery. You can ignore these node IDs.
  6. Check if georeplication is enabled in cnDBTier cluster3 with respect to cnDBTier cluster4 by performing the following steps:

    Note:

    The following commands and examples are applicable for a single replication channel group only. If multiple replication channel groups are enabled, then check if georeplication is enabled in cnDBTier cluster3 for every replication group with respect to cnDBTier cluster4.
    1. Ensure that the DB replication service in cnDBTier cluster3 is enabled and running successfully with respect to cnDBTier cluster4:
      $ kubectl -n cluster3 get pods | grep repl
      Sample output:
      mysql-cluster-cluster3-cluster1-replication-svc-5bdc46crzzhb   1/1     Running   0          3m11s
      mysql-cluster-cluster3-cluster2-replication-svc-7955b6b6fbdc   1/1     Running   2          3m39s
      mysql-cluster-cluster3-cluster4-replication-svc--556c99fcdzh   1/1     Running   2          3m05s
    2. Ensure that at least six georeplication SQL nodes are configured in cluster3:
      $ kubectl -n cluster3 get pods | grep ndbmysqld
      Sample output:
      ndbmysqld-0                                                       3/3     Running   0          4m14s
      ndbmysqld-1                                                       3/3     Running   0          3m53s
      ndbmysqld-2                                                       3/3     Running   0          3m21s
      ndbmysqld-3                                                       3/3     Running   0          3m01s
      ndbmysqld-4                                                       3/3     Running   0          2m59s
      ndbmysqld-5                                                       3/3     Running   0          2m43s
    3. Check the status of cnDBTier cluster3:
      $ kubectl -n cluster3 exec -it ndbmgmd-0 -- ndb_mgm -e show
      Sample output:
      Connected to Management Server at: localhost:1186
      Cluster Configuration
      ---------------------
      [ndbd(NDB)]     2 node(s)
      id=1    @10.233.85.92  (mysql-8.4.3 ndb-8.4.3, Nodegroup: 0)
      id=2    @10.233.114.33  (mysql-8.4.3 ndb-8.4.3, Nodegroup: 0, *)
          
      [ndb_mgmd(MGM)] 2 node(s)
      id=49   @10.233.65.167  (mysql-8.4.3 ndb-8.4.3)
      id=50   @10.233.127.115  (mysql-8.4.3 ndb-8.4.3)
          
      [mysqld(API)]   12 node(s)
      id=56   @10.233.71.24     (mysql-8.4.3 ndb-8.4.3)
      id=57   @10.233.119.18    (mysql-8.4.3 ndb-8.4.3)
      id=58   @10.233.69.23     (mysql-8.4.3 ndb-8.4.3)
      id=59   @10.233.116.21    (mysql-8.4.3 ndb-8.4.3)
      id=60   @10.233.70.16     (mysql-8.4.3 ndb-8.4.3)
      id=61   @10.233.78.20     (mysql-8.4.3 ndb-8.4.3)
      id=70   @10.233.127.117   (mysql-8.4.3 ndb-8.4.3)
      id=71   @10.233.85.93     (mysql-8.4.3 ndb-8.4.3)
      id=222 (not connected, accepting connect from any host)
      id=223 (not connected, accepting connect from any host)
      id=224 (not connected, accepting connect from any host)
      id=225 (not connected, accepting connect from any host)

      Note:

      • The node IDs 222 to 225 in the sample output are shown as "not connected" as these nodes are added as empty slot IDs used for georeplication recovery. You can ignore these nodes.
      • If cluster3 satisfies all the conditions in Step 6, then georeplication is enabled in cnDBTier cluster3. Otherwise, georeplication is not enabled in cluster3.
  7. If georeplication is not enabled in cluster3 as per Step 6, then perform the following steps to enable georeplication in cluster3 with respect to cluster4. Else, skip to Step 8.

    Note:

    The following commands and examples are applicable for a single replication channel group only. If multiple replication channel groups are enabled, then configure remote mate site name (cluster4) in cnDBTier cluster3 by following the Configuring Multiple Replication channel groups procedure in Oracle Communications Cloud Native Core, cnDBTier Installation, Upgrade, and Fault Recovery Guide and skip steps a and b in the following substeps.
    1. Configure the remote mate site name (cluster4) in cnDBTier cluster3 by enabling and configuring the replication service using the custom_values.yaml file:
      $ vi occndbtier/custom_values.yaml
      Sample output:
      dbreplsvcdeployments:
          ....
          # if pod prefix is given then use the unique smaller name for this db replication service.
          - name: cluster3-cluster4-replication-svc
            enabled: true
            mysql:
              dbtierservice: "mysql-connectivity-service"
              dbtierreplservice: "ndbmysqldsvc"
              # if cndbtier, use CLUSTER-IP from the ndbmysqldsvc-4 LoadBalancer service
              primaryhost: "ndbmysqld-4.ndbmysqldsvc.cluster3.svc.occne4-cgbu-cne-dbtier"
              port: "3306"
              # if cndbtier, use EXTERNAL-IP from the ndbmysqldsvc-4 LoadBalancer service
              primarysignalhost: ""
              # serverid is unique; retrieve it for the site being configured
              primaryhostserverid: "3004"
              # if cndbtier, use CLUSTER-IP from the ndbmysqldsvc-5 LoadBalancer service
              secondaryhost: "ndbmysqld-5.ndbmysqldsvc.cluster3.svc.occne4-cgbu-cne-dbtier"
              # if cndbtier, use EXTERNAL-IP from the ndbmysqldsvc-5 LoadBalancer service
              secondarysignalhost: ""
              # serverid is unique; retrieve it for the site being configured
              secondaryhostserverid: "3005"
            replication:
              # Local site replication service LoadBalancer ip can be configured.
              localsiteip: ""
              matesitename: "cluster4"
              remotesiteip: ""
    2. Configure the number of SQL pods in cnDBTier cluster3 to at least six in the custom_values.yaml file:

      Note:

      apiReplicaCount is set to "6" as you are adding the third site. The first four ndbmysqld are used for replication between site one, site two, and site three only. The two ndbmysqld pods that are newly added will be responsible for replication between site1 and site4.
      $ vi occndbtier/custom_values.yaml
      Sample output:
      apiReplicaCount: 6
    3. Upgrade cnDBTier cluster3 by using the CSAR package. For more information, see Oracle Communications Cloud Native Core, cnDBTier Installation, Upgrade, and Fault Recovery Guide.

      Note:

      The following commands and examples are applicable for a single replication channel group only. If multiple replication channel groups are enabled, then upgrade cnDBTier cluster3 by providing the appropriate multi-group values file.
      $ helm upgrade mysql-cluster occndbtier --namespace cluster3 --no-hooks -f occndbtier/custom_values.yaml
      Sample output:
      Release "mysql-cluster" has been upgraded. Happy Helming!
      NAME: mysql-cluster
      LAST DEPLOYED:  Mon May 20 11:10:32 2025
      NAMESPACE: cluster3
      STATUS: deployed
      REVISION: 2
    4. Wait until the upgrade is complete and all the pods of cnDBTier cluster 3 are restarted. Verify the status of the cluster by performing the following steps:
      1. Check the status of pods running cluster3:
        $ kubectl get pods --namespace=cluster3
        Sample output:
        mysql-cluster-cluster3-cluster1-replication-svc-5bdc46dsttic      1/1     Running   0              153m
        mysql-cluster-cluster3-cluster2-replication-svc-7955b6c7eced      1/1     Running   0              153m
        mysql-cluster-cluster3-cluster4-replication-svc-556c99fdeyah      1/1     Running   2              153m
        mysql-cluster-db-backup-manager-svc-8bf9448b8-w8pvf               1/1     Running   0              153m
        mysql-cluster-db-monitor-svc-8bf9559b8-v8qwg                      1/1     Running   0              147m
        ndbappmysqld-0                                                    2/2     Running   0              148m
        ndbappmysqld-1                                                    2/2     Running   0              147m
        ndbmgmd-0                                                         1/1     Running   0              152m
        ndbmgmd-1                                                         1/1     Running   0              152m
        ndbmtd-0                                                          2/2     Running   0              150m
        ndbmtd-1                                                          2/2     Running   0              151m
        ndbmysqld-0                                                       2/2     Running   0              147m
        ndbmysqld-1                                                       2/2     Running   0              147m
        ndbmysqld-2                                                       2/2     Running   0              148m
        ndbmysqld-3                                                       2/2     Running   0              149m
        ndbmysqld-4                                                       2/2     Running   0              147m
        ndbmysqld-5                                                       2/2     Running   0              147m
      2. Check the status of cnDBTier cluster3:
        $ kubectl -n cluster3 exec -it ndbmgmd-0 -- ndb_mgm -e show
        Sample output:
        Connected to Management Server at: localhost:1186
        Cluster Configuration
        ---------------------
        [ndbd(NDB)]     2 node(s)
        id=1    @10.233.85.92  (mysql-8.4.3 ndb-8.4.3, Nodegroup: 0)
        id=2    @10.233.114.33  (mysql-8.4.3 ndb-8.4.3, Nodegroup: 0, *)
            
        [ndb_mgmd(MGM)] 2 node(s)
        id=49   @10.233.65.167  (mysql-8.4.3 ndb-8.4.3)
        id=50   @10.233.127.115  (mysql-8.4.3 ndb-8.4.3)
            
        [mysqld(API)]   12 node(s)
        id=56   @10.233.71.24     (mysql-8.4.3 ndb-8.4.3)
        id=57   @10.233.119.18    (mysql-8.4.3 ndb-8.4.3)
        id=58   @10.233.69.23     (mysql-8.4.3 ndb-8.4.3)
        id=59   @10.233.116.21    (mysql-8.4.3 ndb-8.4.3)
        id=60   @10.233.70.16     (mysql-8.4.3 ndb-8.4.3)
        id=61   @10.233.78.20     (mysql-8.4.3 ndb-8.4.3)
        id=70   @10.233.127.117   (mysql-8.4.3 ndb-8.4.3)
        id=71   @10.233.85.93     (mysql-8.4.3 ndb-8.4.3)
        id=222 (not connected, accepting connect from any host)
        id=223 (not connected, accepting connect from any host)
        id=224 (not connected, accepting connect from any host)
        id=225 (not connected, accepting connect from any host)

        Note:

        Node IDs 222 to 225 in the sample output are shown as "not connected" as these are added as empty slot IDs that are used for georeplication recovery. You can ignore these node IDs.
  8. Install cnDBTier cluster4 by configuring cnDBTier cluster1 as the first mate site, cnDBTier cluster2 as the second mate site, and cnDBTier cluster3 as the third mate site, with at least six SQL pods. For information on installing the cnDBTier cluster, see Oracle Communications Cloud Native Core, cnDBTier Installation, Upgrade, and Fault Recovery Guide.

    Note:

    • The following commands and examples are applicable for a single replication channel group only. If multiple replication channel groups are enabled, then install cnDBTier cluster4 by configuring cnDBTier cluster1 as the first mate site, cnDBTier cluster2 as the second mate site, and cnDBTier cluster3 as the third mate site, with at least twelve SQL pods.
    • Do not run a Helm test when you are performing a conversion procedure.
    • If you are going to enable encryption, ensure that the new site uses the same encryption key.
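
    For reference, the following is a minimal sketch of the installation command, assuming the same chart directory and values file layout used by the upgrade commands earlier in this procedure; refer to the installation guide for the complete procedure:
      $ helm install mysql-cluster occndbtier --namespace cluster4 -f occndbtier/custom_values.yaml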
  9. Check if georeplication channels are established between cnDBTier cluster1 and cnDBTier cluster4, cnDBTier cluster2 and cnDBTier cluster4, and cnDBTier cluster3 and cnDBTier cluster4 by following the procedures described in the Checking Georeplication Status Between Clusters section.
  10. Restore the data of cnDBTier cluster1, cnDBTier cluster2, or cnDBTier cluster3 to cnDBTier cluster4 and re-establish the georeplication channels between cnDBTier cluster1 and cnDBTier cluster4, cnDBTier cluster2 and cnDBTier cluster4, and cnDBTier cluster3 and cnDBTier cluster4. You can perform the georeplication recovery using cnDBTier or CNC Console. For procedures on restoring the cluster data and re-establishing the georeplication channels, refer to the Georeplication Failure between cnDBTier Clusters in Four Site Replication section of Oracle Communications Cloud Native Core, cnDBTier Installation, Upgrade, and Fault Recovery Guide.
  11. Check if georeplication channels are established between cnDBTier cluster1 and cnDBTier cluster4, cnDBTier cluster2 and cnDBTier cluster4, and cnDBTier cluster3 and cnDBTier cluster4 by following the procedures described in the Checking Georeplication Status Between Clusters section.

    Note:

    As data is replicated to cnDBTier cluster4 and georeplication channels are established between cnDBTier cluster1 and cnDBTier cluster4, cnDBTier cluster2 and cnDBTier cluster4, and cnDBTier cluster3 and cnDBTier cluster4, the newly added cnDBTier cluster (cnDBTier cluster4) can be used by the NFs for database operations.

7.6 Removing a Georedundant cnDBTier Cluster

This section describes the procedures to remove a georedundant cluster from an existing multisite setup (two-site, three-site, or four-site) using the dbtremovesite script.

The following terms are used in the procedures to refer to the clusters in a multisite setup:
  • cnDBTier Cluster1 (cluster1): The first cloud native cnDBTier cluster in a two-site, three-site, or four-site georeplication setup.
  • cnDBTier Cluster2 (cluster2): The second cloud native cnDBTier cluster in a two-site, three-site, or four-site georeplication setup.
  • cnDBTier Cluster3 (cluster3): The third cloud native cnDBTier cluster in a two-site, three-site, or four-site georeplication setup.
  • cnDBTier Cluster4 (cluster4): The fourth cloud native cnDBTier cluster in a two-site, three-site, or four-site georeplication setup.

Prerequisites

Before performing the following procedures, ensure that you meet the following prerequisites:
  • Multiple cnDBTier clusters must be installed.
  • The cnDBTier cluster that needs to be removed must be a part of the multisite setup.

Note:

  • These procedures can be performed regardless of whether georeplication is correctly established between the cnDBTier clusters.
  • Ensure that there is no traffic in the cnDBTier cluster that is being removed from the multisite setup.

7.6.1 Extracting cnDBTier Cluster Script

This section describes the procedure to extract the cnDBTier cluster script.

  1. Extract the CSAR package:
    $ unzip occndbtier_csar_vzw_24_2_0_0_0.zip
  2. Source the source_me file:
    $ cd Artifacts/Scripts/tools
    $ source ./source_me 
    Sample output:
    NOTE: source_me must be sourced while your current directory is the directory with the source_me file.
     
    Enter cndbtier namespace: cluster1
    DBTIER_NAMESPACE = "cluster1"
     
    DBTIER_LIB=/home/cgbu-cne-dbtier/csar/rc4.23.4.1/Artifacts/Scripts/tools/lib
     
    Adding /home/cgbu-cne-dbtier/csar/24.2.0/Artifacts/Scripts/tools/bin to PATH
    PATH=/home/cgbu-cne-dbtier/csar/24.2.0/Artifacts/Scripts/tools/bin:/home/cgbu-cne-dbtier/.local/bin:/home/cgbu-cne-dbtier/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/var/occne/cluster/thrust7a/artifacts/istio-1.15.3/bin/:/var/occne/cluster/thrust7a/artifacts:/home/cgbu-cne-dbtier/luis/bin:/home/cgbu-cne-dbtier/luis/usr/bin
  3. Check if the dbtremovesite script is present in the lib and bin directories:
    $ cd Artifacts/Scripts/tools/bin
    $ ls -lrth dbtremovesite
    Sample output:
    total 104K
    -rwx------. 1 cloud-user cloud-user 120 Apr 23 05:46 dbtremovesite

Note:

The dbtremovesite script does not take any arguments for removing the cnDBTier cluster details from the other available cnDBTier clusters. You must perform the following procedures to remove the cluster.
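
After sourcing source_me, you can optionally confirm that the script resolves on the PATH. A minimal sketch:
  # Verify that dbtremovesite is available on the current PATH (sketch).
  $ command -v dbtremovesite || echo "dbtremovesite not found; re-run 'source ./source_me' from the tools directory"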

7.6.2 Removing cnDBTier Cluster 4

This section provides the procedure to remove cnDBTier cluster 4 from a georeplication multisite setup.

Note:

The cluster and namespace names used in this procedure may vary depending on your namespace and cluster name. Ensure that you replace the values as per your setup.
  1. Perform the following steps to uninstall cnDBTier cluster 4 from its site. For more information, see the "Uninstalling cnDBTier" section in Oracle Communications Cloud Native Core, cnDBTier Installation, Upgrade, and Fault Recovery Guide.
    1. Run the following command to uninstall cnDBTier cluster 4:
      $ helm uninstall mysql-cluster -n cluster4
    2. Run the following commands to delete the PVCs of cnDBTier cluster 4:
      $ kubectl -n cluster4 get pvc
      $ kubectl -n cluster4 delete pvc <PVC Names>

      where, <PVC Names> are the PVC names of cnDBTier cluster 4.

  2. Perform the following steps to delete the records related to cluster 4 from all the other available sites (cluster 1, cluster 2, and cluster 3):
    1. Log in to cnDBTier cluster 1 and delete the records related to cluster 4 using the dbtremovesite script:

      Note:

      source_me must be sourced while your current directory is the directory with the source_me file.
      $ source ./source_me
      NOTE: source_me must be sourced while your current directory is the directory with the source_me file.
         
      Enter cndbtier namespace: cluster1
      
      $ dbtremovesite

      When the dbtremovesite script prompts you to select the cluster that needs to be removed, select cluster 4 by typing yes against the cluster 4 option. Type no for the rest of the cluster options.

      Sample output:
      Does cluster1 need to be removed (yes/no/exit)? no
      Does cluster2 need to be removed (yes/no/exit)? no
      Does cluster3 need to be removed (yes/no/exit)? no
      Does cluster4 need to be removed (yes/no/exit)? yes
      2024-04-23T05:47:00Z INFO - CURRENT_SITE_NAME = cluster1
      2024-04-23T05:47:00Z INFO - CURRENT_SITE_ID = 1
      2024-04-23T05:47:00Z INFO - Removing cluster4 (Site 4)
      2024-04-23T05:47:00Z INFO - Make sure that cnDBTier cluster(cluster4) is uninstalled in Site 4
      DO YOU WANT TO PROCEED (yes/no/exit)? yes
      ARE YOU SURE (yes/no/exit)? yes
      -------------------------
      -------------------------
      -------------------------
      2024-04-23T05:47:45Z INFO - Ended timer PHASE 6: 1713851265
      2024-04-23T05:47:45Z INFO - PHASE 6 took: 00 hr. 00 min. 02 sec.
      2024-04-23T05:47:45Z INFO - Ended timer dbtremovesite: 1713851265
      2024-04-23T05:47:45Z INFO - dbtremovesite took: 00 hr. 00 min. 57 sec.
      2024-04-23T05:47:45Z INFO - dbtremovesite completed successfully
    2. Repeat step a on cluster 2 to delete the records related to cluster 4.
    3. Repeat step a on cluster 3 to delete the records related to cluster 4.
  3. Remove the remote site IP address configurations related to cluster 4 and restart all the DB replication service deployments on all the other available sites (cluster 1, cluster 2, and cluster 3):
    1. Perform the following steps to remove the remote site IP address configurations related to cluster 4 and restart all the DB replication service deployments on cluster 1:
      1. Log in to the Bastion Host of cluster 1.
      2. If fixed IP addresses are not configured to the LoadBalancer services, configure remotesiteip with "" for cluster 4 remote site in the custom_values.yaml file of cluster 1:
        $ vi cndbtier_cluster1_custom_values.yaml
         
        - name: cluster1-cluster4-replication-svc
          enabled: true
          --------------------------------------
          --------------------------------------
          --------------------------------------
            replication:
              localsiteip: ""
              localsiteport: "80"
              channelgroupid: "1"
              matesitename: "cluster4"
              remotesiteip: ""
              remotesiteport: "80"
      3. If fixed IP addresses are not configured to the LoadBalancer services, upgrade cnDBTier cluster 1:
        $ helm upgrade mysql-cluster occndbtier -n cluster1 -f cndbtier_cluster1_custom_values.yaml
      4. Scale down and scale up all the DB replication service deployments in cluster 1:
        $ kubectl -n cluster1 scale deploy $(kubectl -n cluster1 get deployments | awk '{print $1}' | egrep -i 'repl|monitor|backup-manager') --replicas=0
        $ kubectl -n cluster1 scale deploy $(kubectl -n cluster1 get deployments | awk '{print $1}' | egrep -i 'repl|monitor|backup-manager') --replicas=1
      5. Wait until all the pods are up on cluster 1 and verify (an optional kubectl wait sketch is provided at the end of this procedure):
        $ kubectl -n cluster1 get pods
    2. Repeat step a to remove the remote site IP address configurations related to cluster 4 and restart all the DB replication service deployments on cluster 2:

      Note:

      Replace cluster1 in the commands (in step a) with cluster2. However, the values may vary depending on the cluster and namespace names in your setup.
    3. Repeat step a to remove the remote site IP address configurations related to cluster 4 and restart all the DB replication service deployments on cluster 3:

      Note:

      Replace cluster1 in the commands (in step a) with cluster3. However, the values may vary depending on the cluster and namespace names in your setup.
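
Instead of repeatedly polling kubectl get pods in step 3, you can optionally wait for pod readiness. A minimal sketch (the timeout value is an assumption; adjust it to your environment):
  # Wait for all pods in the namespace to report Ready (sketch).
  $ kubectl -n cluster1 wait --for=condition=Ready pod --all --timeout=600s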

7.6.3 Removing cnDBTier Cluster 3

This section provides the procedure to remove cnDBTier cluster 3 from a georeplication multisite setup.

Note:

The cluster and namespace names used in this procedure may vary depending on your namespace and cluster name. Ensure that you replace the values as per your setup.
  1. Perform the following steps to uninstall cnDBTier cluster 3 from its site. For more information, see the "Uninstalling cnDBTier" section in Oracle Communications Cloud Native Core, cnDBTier Installation, Upgrade, and Fault Recovery Guide.
    1. Run the following command to uninstall cnDBTier cluster 3:
      $ helm uninstall mysql-cluster -n cluster3
    2. Run the following commands to delete the PVCs of cnDBTier cluster 3:
      $ kubectl -n cluster3 get pvc
      $ kubectl -n cluster3 delete pvc <PVC Names>

      where, <PVC Names> are the PVC names of cnDBTier cluster 3.

  2. Perform the following steps to delete the records related to cluster 3 from all the other sites (cluster 1, cluster 2, and cluster 4):
    1. Log in to cnDBTier cluster 1 and delete the records related to cluster 3 using the dbtremovesite script:
      $ source ./source_me
      
      NOTE: source_me must be sourced while your current directory is the directory with the source_me file.
      
      Enter cndbtier namespace: cluster1
      
      $ dbtremovesite

      When the dbtremovesite script prompts you to select the cluster that needs to be removed, select cluster 3 by typing yes against the cluster 3 option. Type no for the rest of the cluster options.

      Sample output:
      Does cluster1 need to be removed (yes/no/exit)? no
      Does cluster2 need to be removed (yes/no/exit)? no
      Does cluster3 need to be removed (yes/no/exit)? yes
      2024-04-23T05:47:00Z INFO - CURRENT_SITE_NAME = cluster1
      2024-04-23T05:47:00Z INFO - CURRENT_SITE_ID = 1
      2024-04-23T05:47:00Z INFO - Removing cluster3 (Site 3)
      2024-04-23T05:47:00Z INFO - Make sure that cnDBTier cluster(cluster3) is uninstalled in Site 3
      DO YOU WANT TO PROCEED (yes/no/exit)? yes
      ARE YOU SURE (yes/no/exit)? yes
      -------------------------
      -------------------------
      -------------------------
      2024-04-23T05:47:45Z INFO - Ended timer PHASE 6: 1713851265
      2024-04-23T05:47:45Z INFO - PHASE 6 took: 00 hr. 00 min. 02 sec.
      2024-04-23T05:47:45Z INFO - Ended timer dbtremovesite: 1713851265
      2024-04-23T05:47:45Z INFO - dbtremovesite took: 00 hr. 00 min. 57 sec.
      2024-04-23T05:47:45Z INFO - dbtremovesite completed successfully
    2. Repeat step a on cluster 2 to delete the records related to cluster 3.
    3. Repeat step a on cluster 4 to delete the records related to cluster 3.
  3. Remove the remote site IP address configurations related to cluster 3 and restart all the DB replication service deployments on all the other sites (cluster 1, cluster 2, and cluster 4):
    1. Perform the following steps to remove the remote site IP address configurations related to cluster 3 and restart all the DB replication service deployments on cluster 1:
      1. Log in to the Bastion Host of cluster 1.
      2. If fixed IP addresses are not configured to the LoadBalancer services, configure remotesiteip with "" for cluster 3 remote site in the custom_values.yaml file of cluster 1:
        $ vi cndbtier_cluster1_custom_values.yaml
          
        - name: cluster1-cluster3-replication-svc
          enabled: true
          --------------------------------------
          --------------------------------------
          --------------------------------------
            replication:
              localsiteip: ""
              localsiteport: "80"
              channelgroupid: "1"
              matesitename: "cluster3"
              remotesiteip: ""
              remotesiteport: "80"
      3. If fixed IP addresses are not configured to the LoadBalancer services, upgrade cnDBTier cluster 1:
        $ helm upgrade mysql-cluster occndbtier -n cluster1 -f cndbtier_cluster1_custom_values.yaml
      4. Scale down and scale up all the DB replication service deployments in cluster 1:
        $ kubectl -n cluster1 scale deploy $(kubectl -n cluster1 get deployments | awk '{print $1}' | egrep -i 'repl|monitor|backup-manager') --replicas=0
        $ kubectl -n cluster1 scale deploy $(kubectl -n cluster1 get deployments | awk '{print $1}' | egrep -i 'repl|monitor|backup-manager') --replicas=1
      5. Wait until all the pods are up on cluster 1 and verify:
        $ kubectl -n cluster1 get pods
    2. Repeat step a to remove the remote site IP address configurations related to cluster 3 and restart all the DB replication service deployments on cluster 2:

      Note:

      Replace cluster1 in the commands with cluster2. However, the values may vary depending on the cluster and namespace names in your setup.
    3. Repeat step a to remove the remote site IP address configurations related to cluster 3 and restart all the DB replication service deployments on cluster 4:

      Note:

      Replace cluster1 in the commands with cluster4. However, the values may vary depending on the cluster and namespace names in your setup.

7.6.4 Removing cnDBTier Cluster 2

This section provides the procedure to remove cnDBTier cluster 2 from a georeplication multisite setup.

Note:

The cluster and namespace names used in this procedure may vary depending on your namespace and cluster name. Ensure that you replace the values as per your setup.
  1. Perform the following steps to uninstall cnDBTier cluster 2 from its site. For more information, see the "Uninstalling cnDBTier" section in Oracle Communications Cloud Native Core, cnDBTier Installation, Upgrade, and Fault Recovery Guide.
    1. Run the following command to uninstall cnDBTier cluster 2:
      $ helm uninstall mysql-cluster -n cluster2
    2. Run the following commands to delete the PVCs of cnDBTier cluster 2:
      $ kubectl -n cluster2 get pvc
      $ kubectl -n cluster2 delete pvc <PVC Names>

      where, <PVC Names> are the PVC names of cnDBTier cluster 2.

  2. Perform the following steps to delete the records related to cluster 2 from all the other available sites (cluster 1, cluster 3, and cluster 4):
    1. Log in to cnDBTier cluster 1 and delete the records related to cluster 2 using the dbtremovesite script:
      $ source ./source_me
      
      NOTE: source_me must be sourced while your current directory is the directory with the source_me file.
      
      Enter cndbtier namespace: cluster1
      
      $ dbtremovesite

      When the dbtremovesite script prompts you to select the cluster that needs to be removed, select cluster 2 by typing yes against the cluster 2 option. Type no for the rest of the cluster options.

      Sample output:
      Does cluster1 need to be removed (yes/no/exit)? no
      Does cluster2 need to be removed (yes/no/exit)? yes
      2024-04-23T05:47:00Z INFO - CURRENT_SITE_NAME = cluster1
      2024-04-23T05:47:00Z INFO - CURRENT_SITE_ID = 1
      2024-04-23T05:47:00Z INFO - Removing cluster2 (Site 2)
      2024-04-23T05:47:00Z INFO - Make sure that cnDBTier cluster(cluster2) is uninstalled in Site 2
      DO YOU WANT TO PROCEED (yes/no/exit)? yes
      ARE YOU SURE (yes/no/exit)? yes
      -------------------------
      -------------------------
      -------------------------
      2024-04-23T05:47:45Z INFO - Ended timer PHASE 6: 1713851265
      2024-04-23T05:47:45Z INFO - PHASE 6 took: 00 hr. 00 min. 02 sec.
      2024-04-23T05:47:45Z INFO - Ended timer dbtremovesite: 1713851265
      2024-04-23T05:47:45Z INFO - dbtremovesite took: 00 hr. 00 min. 57 sec.
      2024-04-23T05:47:45Z INFO - dbtremovesite completed successfully
    2. Repeat step a on cluster 3 to delete the records related to cluster 2.
    3. Repeat step a on cluster 4 to delete the records related to cluster 2.
  3. Remove the remote site IP address configurations related to cluster 2 and restart all the DB replication service deployments on all the other sites (cluster 1, cluster 3, and cluster 4):
    1. Perform the following steps to remove the remote site IP address configurations related to cluster 2 and restart all the DB replication service deployments on cluster 1:
      1. Log in to the Bastion Host of cluster 1.
      2. If fixed IP addresses are not configured to the LoadBalancer services, configure remotesiteip with "" for cluster 2 remote site in the custom_values.yaml file of cluster 1:
        $ vi cndbtier_cluster1_custom_values.yaml
          
        - name: cluster1-cluster2-replication-svc
          enabled: true
          --------------------------------------
          --------------------------------------
          --------------------------------------
            replication:
              localsiteip: ""
              localsiteport: "80"
              channelgroupid: "1"
              matesitename: "cluster2"
              remotesiteip: ""
              remotesiteport: "80"
      3. If fixed IP addresses are not configured to the LoadBalancer services, upgrade cnDBTier cluster 1:
        $ helm upgrade mysql-cluster occndbtier -n cluster1 -f cndbtier_cluster1_custom_values.yaml
      4. Scale down and scale up all the DB replication service deployments in cluster 1:
        $ kubectl -n cluster1 scale deploy $(kubectl -n cluster1 get deployments | awk '{print $1}' | egrep -i 'repl|monitor|backup-manager') --replicas=0
        $ kubectl -n cluster1 scale deploy $(kubectl -n cluster1 get deployments | awk '{print $1}' | egrep -i 'repl|monitor|backup-manager') --replicas=1
      5. Wait until all the pods are up on cluster 1 and verify:
        $ kubectl -n cluster1 get pods
    2. Repeat step a to remove the remote site IP address configurations related to cluster 2 and restart all the DB replication service deployments on cluster 3:

      Note:

      Replace cluster1 in the commands with cluster3. However, the values may vary depending on the cluster and namespace names in your setup.
    3. Repeat step a to remove the remote site IP address configurations related to cluster 2 and restart all the DB replication service deployments on cluster 4:

      Note:

      Replace cluster1 in the commands with cluster4. However, the values may vary depending on the cluster and namespace names in your setup.

7.6.5 Removing cnDBTier Cluster 1

This section provides the procedure to remove cnDBTier cluster 1 from a georeplication multisite setup.

Note:

The cluster and namespace names used in this procedure may vary depending on your namespace and cluster name. Ensure that you replace the values as per your setup.
  1. Perform the following steps to uninstall cnDBTier cluster 1 from its site. For more information, see the "Uninstalling cnDBTier" section in Oracle Communications Cloud Native Core, cnDBTier Installation, Upgrade, and Fault Recovery Guide.
    1. Run the following command to uninstall cnDBTier cluster 1:
      $ helm uninstall mysql-cluster -n cluster1
    2. Run the following commands to delete the PVCs of cnDBTier cluster 1:
      $ kubectl -n cluster1 get pvc
      $ kubectl -n cluster1 delete pvc <PVC Names>

      where, <PVC Names> are the PVC names of cnDBTier cluster 1.

  2. Perform the following steps to delete the records related to cluster 1 from all the other available sites (cluster 2, cluster 3, and cluster 4):
    1. Log in to cnDBTier cluster 2 and delete the records related to cluster 1 using the dbtremovesite script:
      $ source ./source_me
      
      NOTE: source_me must be sourced while your current directory is the directory with the source_me file.
      
      Enter cndbtier namespace: cluster2
      
      $ dbtremovesite

      When the dbtremovesite script prompts you to select the cluster that needs to be removed, select cluster 1 by typing yes against the cluster 1 option. Type no for the rest of the cluster options.

      Sample output:
      Does cluster1 need to be removed (yes/no/exit)? yes
      2024-04-23T05:47:00Z INFO - CURRENT_SITE_NAME = cluster2
      2024-04-23T05:47:00Z INFO - CURRENT_SITE_ID = 2
      2024-04-23T05:47:00Z INFO - Removing cluster1 (Site 1)
      2024-04-23T05:47:00Z INFO - Make sure that cnDBTier cluster(cluster1) is uninstalled in Site 1
      DO YOU WANT TO PROCEED (yes/no/exit)? yes
      ARE YOU SURE (yes/no/exit)? yes
      -------------------------
      -------------------------
      -------------------------
      2024-04-23T05:47:45Z INFO - Ended timer PHASE 6: 1713851265
      2024-04-23T05:47:45Z INFO - PHASE 6 took: 00 hr. 00 min. 02 sec.
      2024-04-23T05:47:45Z INFO - Ended timer dbtremovesite: 1713851265
      2024-04-23T05:47:45Z INFO - dbtremovesite took: 00 hr. 00 min. 57 sec.
      2024-04-23T05:47:45Z INFO - dbtremovesite completed successfully
    2. Repeat step a on cluster 3 to delete the records related to cluster 1.
    3. Repeat step a on cluster 4 to delete the records related to cluster 1.
  3. Remove the remote site IP address configurations related to cluster 1 and restart all the DB replication service deployments on all the other sites (cluster 2, cluster 3, and cluster 4):
    1. Perform the following steps to remove the remote site IP address configurations related to cluster 1 and restart all the DB replication service deployments on cluster 2:
      1. Log in to the Bastion Host of cluster 2.
      2. If fixed IP addresses are not configured to the LoadBalancer services, configure remotesiteip with "" for cluster 1 remote site in the custom_values.yaml file of cluster 2:
        $ vi cndbtier_cluster2_custom_values.yaml
           
        - name: cluster2-cluster1-replication-svc
          enabled: true
          --------------------------------------
          --------------------------------------
          --------------------------------------
            replication:
              localsiteip: ""
              localsiteport: "80"
              channelgroupid: "1"
              matesitename: "cluster1"
              remotesiteip: ""
              remotesiteport: "80"
      3. If fixed IP addresses are not configured to the LoadBalancer services, upgrade cnDBTier cluster 2:
        $ helm upgrade mysql-cluster occndbtier -n cluster2 -f cndbtier_cluster2_custom_values.yaml
      4. Scale down and scale up all the DB replication service deployments in cluster 2:
        $ kubectl -n cluster2 scale deploy $(kubectl -n cluster2 get deployments | awk '{print $1}' | egrep -i 'repl|monitor|backup-manager') --replicas=0
        $ kubectl -n cluster2 scale deploy $(kubectl -n cluster2 get deployments | awk '{print $1}' | egrep -i 'repl|monitor|backup-manager') --replicas=1
      5. Wait until all the pods are up on cluster 2 and verify:
        $ kubectl -n cluster2 get pods
    2. Repeat step a to remove the remote site IP address configurations related to cluster 1 and restart all the DB replication service deployments on cluster 3:

      Note:

      Replace cluster2 in the commands with cluster3. However, the values may vary depending on the cluster and namespace names in your setup.
    3. Repeat step a to remove the remote site IP address configurations related to cluster 1 and restart all the DB replication service deployments on cluster 4:

      Note:

      Replace cluster2 in the commands with cluster4. However, the values may vary depending on the cluster and namespace names in your setup.

7.7 Adding and Removing Replication Channel Group

This section describes the procedures to convert a single replication channel group to multiple replication channel groups and vice versa.

The procedures in this section use the following terms to identify the clusters:
  1. cnDBTier Cluster1 (cluster1): The first cloud native cnDBTier cluster in a two-site, three-site, or four-site georeplication setup.
  2. cnDBTier Cluster2 (cluster2): The second cloud native cnDBTier cluster in a two-site, three-site, or four-site georeplication setup.
  3. cnDBTier Cluster3 (cluster3): The third cloud native cnDBTier cluster in a two-site, three-site, or four-site georeplication setup.
  4. cnDBTier Cluster4 (cluster4): The fourth cloud native cnDBTier cluster in a two-site, three-site, or four-site georeplication setup.
Prerequisites
  1. All the cnDBTier data nodes and SQL nodes that participate in georeplication, and at least one management node in the existing clusters must be up and running.
  2. Georeplication must be established between the existing cnDBTier clusters.
  3. All the cnDBTier clusters must be installed using cnDBTier 22.3.x or above.
  4. All the cnDBTier clusters must have the same number of data nodes and node groups.
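To verify the first prerequisite, you can run the cluster status command used throughout this document on each site and confirm that all data nodes, the SQL nodes participating in georeplication, and at least one management node are connected (adjust the namespace to your environment):
$ kubectl -n <namespace of cnDBTier cluster> exec -it ndbmgmd-0 -- ndb_mgm -e show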

Note:

  • cnDBTier supports two replication channel groups for each cnDBTier mate site. To convert an existing single replication channel group to multiple replication channel groups, freshly install cnDBTier 22.3.x or above, or upgrade your existing cnDBTier to 22.3.x or above.
  • Before running this procedure, take a backup of the data and download the backup for safety purposes.
  • This procedure requires downtime for other cnDBTier sites.
  • This procedure is not supported in cnDBTier setups where TLS is enabled.

7.7.1 Converting Single Replication Channel Group to Multiple Replication Channel Groups

This section describes the procedure to convert a single replication channel group to multiple replication channel groups.

Note:

This procedure is not supported for cnDBTier setups where TLS is enabled.
To convert a single replication channel group to multiple replication channel groups:
  1. Consider any one of the cnDBTier clusters as the leader cluster (cluster1) and move all the NFs' traffic to that cluster. All the pods of the leader cluster must be up and running.
  2. Wait for replication to be updated in every cnDBTier cluster so that the database is consistent across all the cnDBTier clusters.
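    For example, you can confirm that each site has caught up by querying the DB monitor service on every cluster and checking that replicationStatus is "UP" and secondsBehindRemote is 0 for all remote sites (the service name and namespace shown are examples; adjust them to your deployment):
    $ kubectl -n <namespace of cnDBTier cluster> exec -it ndbmysqld-0 -- curl http://mysql-cluster-db-monitor-svc.<namespace of cnDBTier cluster>:8080/db-tier/status/replication/realtime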
  3. Scale down the replication service deployments on each site. Log in to the Bastion Host of each cnDBTier cluster and scale down the DB replication service deployments:
    $ kubectl -n <namespace of cnDBTier cluster> get deployments | egrep 'repl' | awk '{print $1}' | xargs -L1 -r kubectl -n <namespace of cnDBTier cluster> scale deployment --replicas=0
    Example:
    # scaling down the DB replication service deployments of cluster1
    $ kubectl -n cluster1 get deployments | egrep 'repl' | awk '{print $1}' | xargs -L1 -r kubectl -n cluster1 scale deployment --replicas=0
    deployment.apps/mysql-cluster-cluster1-cluster2-repl-svc scaled
    deployment.apps/mysql-cluster-cluster1-cluster3-repl-svc scaled
    deployment.apps/mysql-cluster-cluster1-cluster4-repl-svc scaled
     
    # scaling down the DB replication service deployments of cluster2
    $ kubectl -n cluster2 get deployments | egrep 'repl' | awk '{print $1}' | xargs -L1 -r kubectl -n cluster2 scale deployment --replicas=0
    deployment.apps/mysql-cluster-cluster2-cluster1-repl-svc scaled
    deployment.apps/mysql-cluster-cluster2-cluster3-repl-svc scaled
    deployment.apps/mysql-cluster-cluster2-cluster4-repl-svc scaled 
     
    # scaling down the DB replication service deployments of cluster3
    $ kubectl -n cluster3 get deployments | egrep 'repl' | awk '{print $1}' | xargs -L1 -r kubectl -n cluster3 scale deployment --replicas=0
    deployment.apps/mysql-cluster-cluster3-cluster1-repl-svc scaled
    deployment.apps/mysql-cluster-cluster3-cluster2-repl-svc scaled
    deployment.apps/mysql-cluster-cluster3-cluster4-repl-svc scaled
     
    # scaling down the DB replication service deployments of cluster4
    $ kubectl -n cluster4 get deployments | egrep 'repl' | awk '{print $1}' | xargs -L1 -r kubectl -n cluster4 scale deployment --replicas=0
    deployment.apps/mysql-cluster-cluster4-cluster1-repl-svc scaled
    deployment.apps/mysql-cluster-cluster4-cluster2-repl-svc scaled
    deployment.apps/mysql-cluster-cluster4-cluster3-repl-svc scaled
  4. Log in to the Bastion Host of each cnDBTier cluster, and stop and reset replica of all ndbmysqld pods to gracefully stop the replication channels:
    $ kubectl -n <namespace of cnDBTier cluster> exec -ti <ndbmysqld pod name> -- mysql -h 127.0.0.1 -uroot -pNextGenCne
    mysql> stop replica;
    mysql> reset replica all;
    Example to stop and reset replica of all ndbmysqld pods in cluster1:
    $ kubectl -n cluster1 exec -ti ndbmysqld-0 -- mysql -h 127.0.0.1 -uroot -pNextGenCne
    mysql> stop replica;
    Query OK, 0 rows affected (0.01 sec)
    mysql> reset replica all;
    Query OK, 0 rows affected (0.01 sec)
     
    $ kubectl -n cluster1 exec -ti ndbmysqld-1 -- mysql -h 127.0.0.1 -uroot -pNextGenCne
    mysql> stop replica;
    Query OK, 0 rows affected (0.01 sec)
    mysql> reset replica all;
    Query OK, 0 rows affected (0.01 sec)
     
    $ kubectl -n cluster1 exec -ti ndbmysqld-2 -- mysql -h 127.0.0.1 -uroot -pNextGenCne
    mysql> stop replica;
    Query OK, 0 rows affected (0.01 sec)
    mysql> reset replica all;
    Query OK, 0 rows affected (0.01 sec)  
     
    $ kubectl -n cluster1 exec -ti ndbmysqld-3 -- mysql -h 127.0.0.1 -uroot -pNextGenCne
    mysql> stop replica;
    Query OK, 0 rows affected (0.01 sec)
    mysql> reset replica all;
    Query OK, 0 rows affected (0.01 sec)
     
    $ kubectl -n cluster1 exec -ti ndbmysqld-4 -- mysql -h 127.0.0.1 -uroot -pNextGenCne
    mysql> stop replica;
    Query OK, 0 rows affected (0.01 sec)
    mysql> reset replica all;
    Query OK, 0 rows affected (0.01 sec)
     
    $ kubectl -n cluster1 exec -ti ndbmysqld-5 -- mysql -h 127.0.0.1 -uroot -pNextGenCne
    mysql> stop replica;
    Query OK, 0 rows affected (0.01 sec)
    mysql> reset replica all;
    Query OK, 0 rows affected (0.01 sec)

    Note:

    You can use the same example to stop and reset replica of all ndbmysqld pods in cluster2, cluster3, and cluster4 by replacing the cluster names with cluster2, cluster3, and cluster4 respectively.
  5. On the leader site, delete all the entries from all the tables of the replication_info database:
    $ kubectl -n <namespace of leader cnDBTier cluster> exec -ti ndbmysqld-0 -- mysql -h 127.0.0.1 -uroot -pNextGenCne
    mysql> DELETE from replication_info.DBTIER_REPLICATION_CHANNEL_INFO;
    mysql> DELETE from replication_info.DBTIER_REPL_SITE_INFO;
    mysql> DELETE from replication_info.DBTIER_SITE_INFO;
    mysql> DELETE from replication_info.DBTIER_INITIAL_BINLOG_POSTION;
    mysql> DELETE from replication_info.DBTIER_REPL_ERROR_SKIP_INFO;  
    mysql> DELETE from replication_info.DBTIER_REPL_EVENT_INFO;
    Example with output:
    $ kubectl -n cluster1 exec -ti ndbmysqld-0 -- mysql -h 127.0.0.1 -uroot -pNextGenCne
    mysql> DELETE from replication_info.DBTIER_REPLICATION_CHANNEL_INFO;
    Query OK, 24 rows affected (0.01 sec)
    mysql> DELETE from replication_info.DBTIER_REPL_SITE_INFO;
    Query OK, 12 rows affected (0.01 sec)
    mysql> DELETE from replication_info.DBTIER_SITE_INFO;
    Query OK, 4 rows affected (0.01 sec)
    mysql> DELETE from replication_info.DBTIER_INITIAL_BINLOG_POSTION;
    Query OK, 32 rows affected (0.01 sec)
    mysql> DELETE from replication_info.DBTIER_REPL_ERROR_SKIP_INFO;
    Query OK, 0 rows affected (0.00 sec)
    mysql> DELETE from replication_info.DBTIER_REPL_EVENT_INFO;
    Query OK, 0 rows affected (0.00 sec)
  6. Uninstall cnDBTier Cluster2, if it exists.
  7. Uninstall cnDBTier Cluster3, if it exists.
  8. Uninstall cnDBTier Cluster4, if it exists.

    Note:

    To uninstall cnDBTier clusters in steps 6, 7 and 8, follow the "Uninstalling DBTier" procedure in Oracle Communications Cloud Native Core, cnDBTier Installation, Upgrade, and Fault Recovery Guide.
  9. Make changes specific to multiple replication channel groups in the custom_values.yaml file by following the "Configuring Multiple Replication Channel Groups" procedure in Oracle Communications Cloud Native Core, cnDBTier Installation, Upgrade, and Fault Recovery Guide.
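    The following fragment is only an illustrative sketch of such a change; it reuses the replication keys shown earlier in this document and adds one entry per channel group (channelgroupid "1" and "2"). The entry names, ports, and IP addresses are placeholders; refer to the guide referenced above for the authoritative parameter names and values:
    - name: cluster1-cluster2-repl-grp1
      enabled: true
      replication:
        localsiteip: ""
        localsiteport: "80"
        channelgroupid: "1"
        matesitename: "cluster2"
        remotesiteip: ""
        remotesiteport: "80"
    - name: cluster1-cluster2-repl-grp2
      enabled: true
      replication:
        localsiteip: ""
        localsiteport: "80"
        channelgroupid: "2"
        matesitename: "cluster2"
        remotesiteip: ""
        remotesiteport: "80"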
  10. After updating the custom_values.yaml file, upgrade the cnDBTier leader site (cluster1) by running the following command:
    $ helm upgrade  mysql-cluster --namespace <namespace of leader cnDBTier cluster> occndbtier -f occndbtier/custom_values.yaml --no-hooks
    
    Example:
    $ helm upgrade  mysql-cluster --namespace cluster1 occndbtier -f occndbtier/custom_values.yaml --no-hooks
    
    Sample output:
    Release "mysql-cluster" has been upgraded. Happy Helming!
    NAME: mysql-cluster
    LAST DEPLOYED:  Mon May 20 10:19:42 2025
    NAMESPACE: cluster1
    STATUS: deployed
    REVISION: 2
  11. Restart the management, data, and SQL pods of cnDBTier cluster by performing the following steps:
    1. Log in to the Bastion Host of cnDBTier cluster and scale down the DB replication service deployments of the cnDBTier leader site (cluster1):
      $ kubectl -n <namespace of cnDBTier cluster> get deployments | egrep 'repl' | awk '{print $1}' | xargs -L1 -r kubectl -n <namespace of cnDBTier cluster> scale deployment --replicas=0
      Example:
      # scale down the db replication service deployments of cluster1
      $ kubectl -n cluster1 get deployments | egrep 'repl' | awk '{print $1}' | xargs -L1 -r kubectl -n cluster1 scale deployment --replicas=0
      Sample output:
      deployment.apps/mysql-cluster-cluster1-cluster2-repl-grp1 scaled
      deployment.apps/mysql-cluster-cluster1-cluster2-repl-grp2 scaled
      deployment.apps/mysql-cluster-cluster1-cluster3-repl-grp1 scaled
      deployment.apps/mysql-cluster-cluster1-cluster3-repl-grp2 scaled
      deployment.apps/mysql-cluster-cluster1-cluster4-repl-grp1 scaled
      deployment.apps/mysql-cluster-cluster1-cluster4-repl-grp2 scaled
    2. Restart all the management pods of cnDBTier leader site (cluster1) by performing the following steps:
      1. Identify the list of management pods at cnDBTier cluster1:
        $ kubectl get pods --namespace=cluster1 | grep 'mgmd'
        Sample output:
        ndbmgmd-0                                            2/2     Running             0          9m34s
        ndbmgmd-1                                            2/2     Running             0          9m4s
      2. Delete all the management pods of cnDBTier cluster1 to restart the management pods:
        $ kubectl delete pod ndbmgmd-0 ndbmgmd-1 --namespace=cluster1
        Sample output:
        pod "ndbmgmd-0" deleted
        pod "ndbmgmd-1" deleted
      3. Wait for the management pods to restart and run the following command to check if the management pods are up and running:
        $ kubectl get pods --namespace=cluster1 | grep 'mgmd'
        Sample output:
        ndbmgmd-0                                              2/2     Running   0          4m29s
        ndbmgmd-1                                              2/2     Running   0          4m12s
      4. Check the status of cnDBTier cluster1.
        $ kubectl -n cluster1 exec -it ndbmgmd-0 -- ndb_mgm -e show
        Sample output:
        Connected to Management Server at: localhost:1186
        Cluster Configuration
        ---------------------
        [ndbd(NDB)]     2 node(s)
        id=1    @10.233.85.92  (mysql-8.4.3 ndb-8.4.3, Nodegroup: 0)
        id=2    @10.233.114.33  (mysql-8.4.3 ndb-8.4.3, Nodegroup: 0, *)
            
        [ndb_mgmd(MGM)] 2 node(s)
        id=49   @10.233.65.167  (mysql-8.4.3 ndb-8.4.3)
        id=50   @10.233.127.115  (mysql-8.4.3 ndb-8.4.3)
            
        [mysqld(API)]   18 node(s)
        id=56   @10.233.124.92  (mysql-8.4.3 ndb-8.4.3)
        id=57   @10.233.114.135  (mysql-8.4.3 ndb-8.4.3)
        id=58   @10.233.113.87  (mysql-8.4.3 ndb-8.4.3)
        id=59   @10.233.114.32  (mysql-8.4.3 ndb-8.4.3)
        id=60   @10.233.108.33  (mysql-8.4.3 ndb-8.4.3)
        id=61   @10.233.78.230  (mysql-8.4.3 ndb-8.4.3)
        id=62   (not connected, accepting connect from ndbmysqld-2.ndbmysqldsvc.cluster1.svc.occne4-cgbu-cne-dbtier)
        id=63   (not connected, accepting connect from ndbmysqld-3.ndbmysqldsvc.cluster1.svc.occne4-cgbu-cne-dbtier) 
        id=64   (not connected, accepting connect from ndbmysqld-2.ndbmysqldsvc.cluster1.svc.occne4-cgbu-cne-dbtier)
        id=65   (not connected, accepting connect from ndbmysqld-3.ndbmysqldsvc.cluster1.svc.occne4-cgbu-cne-dbtier)
        id=66   (not connected, accepting connect from ndbmysqld-2.ndbmysqldsvc.cluster1.svc.occne4-cgbu-cne-dbtier)
        id=67   (not connected, accepting connect from ndbmysqld-3.ndbmysqldsvc.cluster1.svc.occne4-cgbu-cne-dbtier)
        id=70   @10.233.127.117  (mysql-8.4.3 ndb-8.4.3)
        id=71   @10.233.85.93  (mysql-8.4.3 ndb-8.4.3)
        id=222 (not connected, accepting connect from any host)
        id=223 (not connected, accepting connect from any host)
        id=224 (not connected, accepting connect from any host)
        id=225 (not connected, accepting connect from any host) 

        Note:

        • The API pods with node IDs 222 to 225 in the sample output are shown as "not connected" as they are added as empty API slots for georeplication recovery. You can ignore these nodes.
        • The SQL pods with node IDs 62 to 67 in the sample output are the newly added pods. These pods remain in the not connected state until all the data nodes are restarted.
    3. Restart the data pods sequentially by performing the following steps:
      1. Identify the list of data pods at cnDBTier cluster1:
        $ kubectl get pods --namespace=cluster1 | grep 'ndbmtd'
        Sample output:
        ndbmtd-0                                               3/3     Running   0          14m
        ndbmtd-1                                               3/3     Running   0          13m
      2. Delete the first data pod of cnDBTier cluster1 (in this example, ndbmtd-0) such that the pod restarts:
        $ kubectl delete pod ndbmtd-0 --namespace=cluster1
        Sample output:
        pod "ndbmtd-0" deleted
      3. Wait for the first data pod to restart and run the following command to check if the first pod is up and running:
        $ kubectl get pods --namespace=cluster1 | grep 'ndbmtd'
        Sample output:
        ndbmtd-0                                               3/3     Running   0          65s
        ndbmtd-1                                               3/3     Running   0          13m
      4. Check the status of cnDBTier cluster1:
        $ kubectl -n cluster1 exec -it ndbmgmd-0 -- ndb_mgm -e show
        Sample output:
        ---------------------
        [ndbd(NDB)]     2 node(s)
        id=1    @10.233.85.92  (mysql-8.4.3 ndb-8.4.3, Nodegroup: 0)
        id=2    @10.233.114.33  (mysql-8.4.3 ndb-8.4.3, Nodegroup: 0, *)
            
        [ndb_mgmd(MGM)] 2 node(s)
        id=49   @10.233.65.167  (mysql-8.4.3 ndb-8.4.3)
        id=50   @10.233.127.115  (mysql-8.4.3 ndb-8.4.3)
            
        [mysqld(API)]   18 node(s) 
        id=56   @10.233.124.92  (mysql-8.4.3 ndb-8.4.3)
        id=57   @10.233.114.135  (mysql-8.4.3 ndb-8.4.3)
        id=58   @10.233.113.87  (mysql-8.4.3 ndb-8.4.3)
        id=59   @10.233.114.32  (mysql-8.4.3 ndb-8.4.3)
        id=60   @10.233.108.33  (mysql-8.4.3 ndb-8.4.3)
        id=61   @10.233.78.230  (mysql-8.4.3 ndb-8.4.3)
        id=62   (not connected, accepting connect from ndbmysqld-2.ndbmysqldsvc.cluster1.svc.occne4-cgbu-cne-dbtier)
        id=63   (not connected, accepting connect from ndbmysqld-3.ndbmysqldsvc.cluster1.svc.occne4-cgbu-cne-dbtier) 
        id=64   (not connected, accepting connect from ndbmysqld-2.ndbmysqldsvc.cluster1.svc.occne4-cgbu-cne-dbtier)
        id=65   (not connected, accepting connect from ndbmysqld-3.ndbmysqldsvc.cluster1.svc.occne4-cgbu-cne-dbtier)
        id=66   (not connected, accepting connect from ndbmysqld-2.ndbmysqldsvc.cluster1.svc.occne4-cgbu-cne-dbtier)
        id=67   (not connected, accepting connect from ndbmysqld-3.ndbmysqldsvc.cluster1.svc.occne4-cgbu-cne-dbtier)
        id=70   @10.233.127.117  (mysql-8.4.3 ndb-8.4.3)
        id=71   @10.233.85.93  (mysql-8.4.3 ndb-8.4.3)
        id=222 (not connected, accepting connect from any host)
        id=223 (not connected, accepting connect from any host)
        id=224 (not connected, accepting connect from any host)
        id=225 (not connected, accepting connect from any host) 

        Note:

        • The API pods with node IDs 222 to 225 in the sample output are shown as "not connected" as they are added as empty API slots for georeplication recovery. You can ignore these nodes.
        • The SQL pods with node IDs 62 to 67 in the sample output are the newly added pods. These pods remain in the not connected state until all the data nodes are restarted.
      5. Follow Step ii through Step iv to delete and restart each of the remaining data pods of cnDBTier cluster1.
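        As a convenience, the following is a minimal bash sketch of the same sequential restart, assuming the data pods are named ndbmtd-0 and ndbmtd-1 in namespace cluster1; it deletes one data pod at a time and waits for it to become Ready before moving to the next. The cluster status check from Step iv still applies after each restart.
        for pod in ndbmtd-0 ndbmtd-1; do
          kubectl delete pod "$pod" --namespace=cluster1
          # Give the StatefulSet controller a moment to recreate the pod before waiting on it.
          sleep 10
          kubectl wait --namespace=cluster1 --for=condition=Ready pod/"$pod" --timeout=600s
        done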
    4. Log in to the Bastion Host of the cnDBTier cluster and scale up the DB replication service deployments of the cnDBTier leader site (cluster1):
      $ kubectl -n <namespace of cnDBTier cluster> get deployments | egrep 'repl' | awk '{print $1}' | xargs -L1 -r kubectl -n <namespace of cnDBTier cluster> scale deployment --replicas=1
      Example:
      # scale up the db replication service deployments of cluster1
      $ kubectl -n cluster1 get deployments | egrep 'repl' | awk '{print $1}' | xargs -L1 -r kubectl -n cluster1 scale deployment --replicas=1
      Sample output:
      deployment.apps/mysql-cluster-cluster1-cluster2-repl-grp1 scaled
      deployment.apps/mysql-cluster-cluster1-cluster2-repl-grp2 scaled
      deployment.apps/mysql-cluster-cluster1-cluster3-repl-grp1 scaled
      deployment.apps/mysql-cluster-cluster1-cluster3-repl-grp2 scaled
      deployment.apps/mysql-cluster-cluster1-cluster4-repl-grp1 scaled
      deployment.apps/mysql-cluster-cluster1-cluster4-repl-grp2 scaled
    5. Perform the following steps to restart the monitor service pod of the leader site (cnDBTier cluster1):
      1. Identify the monitor service pod at cnDBTier cluster1:
        $ kubectl get pods --namespace=cluster1 | grep 'monitor'
        Sample output:
        mysql-cluster-db-monitor-svc-8bf9448b8-4cghv           1/1     Running   0                13h
      2. Delete the monitor service pod of cnDBTier cluster1:
        $ kubectl delete pod mysql-cluster-db-monitor-svc-8bf9448b8-4cghv --namespace=cluster1
        Sample output:
        pod "mysql-cluster-db-monitor-svc-8bf9448b8-4cghv" deleted
      3. Wait until the monitor service pod is up and running. You can check the status of the pod by running the following command:
        $ kubectl get pods --namespace=cluster1 | grep 'monitor'
        Sample output:
        mysql-cluster-db-monitor-svc-8bf9448b8-w8pvf           1/1     Running   0                109s
  12. Wait until all the pods of the leader site are up and running. Verify that the data nodes, API nodes, and management nodes are connected to the cnDBTier cluster by running the following command:
    # Checking the status of MySQL NDB Cluster in cnDBTier cluster
    $ kubectl -n <namespace of leader cnDBTier Cluster> exec -it ndbmgmd-0 -- ndb_mgm -e show
    Example:
    $ kubectl -n cluster1 exec ndbmgmd-0 -- ndb_mgm -e show
    Sample output:
    Connected to Management Server at: localhost:1186
    Cluster Configuration
    ---------------------
    [ndbd(NDB)]     2 node(s)
    id=1    @10.233.92.53  (mysql-8.4.3 ndb-8.4.3, Nodegroup: 0)
    id=2    @10.233.72.66  (mysql-8.4.3 ndb-8.4.3, Nodegroup: 0)
      
    [ndb_mgmd(MGM)] 2 node(s)
    id=49   @10.233.111.62  (mysql-8.4.3 ndb-8.4.3)
    id=50   @10.233.117.59  (mysql-8.4.3 ndb-8.4.3)
      
    [mysqld(API)]  10 node(s)
    id=56   @10.233.72.65  (mysql-8.4.3 ndb-8.4.3)
    id=57   @10.233.92.51  (mysql-8.4.3 ndb-8.4.3)
    id=58   @10.233.81.112  (mysql-8.4.3 ndb-8.4.3)
    id=59   @10.233.64.07  (mysql-8.4.3 ndb-8.4.3)
    id=60   @10.233.71.16  (mysql-8.4.3 ndb-8.4.3)
    id=61   @10.233.114.196  (mysql-8.4.3 ndb-8.4.3)
    id=62   @10.233.84.212  (mysql-8.4.3 ndb-8.4.3)
    id=63   @10.233.108.21  (mysql-8.4.3 ndb-8.4.3)
    id=64   @10.233.121.20  (mysql-8.4.3 ndb-8.4.3)
    id=65   @10.233.109.231  (mysql-8.4.3 ndb-8.4.3)
    id=66   @10.233.121.37  (mysql-8.4.3 ndb-8.4.3)
    id=67   @10.233.84.38  (mysql-8.4.3 ndb-8.4.3)
    id=70   @10.233.124.92  (mysql-8.4.3 ndb-8.4.3)
    id=71   @10.233.113.109  (mysql-8.4.3 ndb-8.4.3)
    id=222 (not connected, accepting connect from any host)
    id=223 (not connected, accepting connect from any host)
    id=224 (not connected, accepting connect from any host)
    id=225 (not connected, accepting connect from any host)

    Note:

    Node IDs 222 to 225 in the sample output are shown as "not connected" as they are added as empty API slots for georeplication recovery. You can ignore these nodes.
  13. Upgrade the leader cnDBTier cluster using the custom_values.yaml file that you updated in Step 9. For more information on upgrading cnDBTier, see Oracle Communications Cloud Native Core, cnDBTier Installation, Upgrade, and Fault Recovery Guide.
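    For example, using the same release name and chart path as in Step 10 (adjust these and any upgrade options to match your deployment and the referenced guide):
    $ helm upgrade mysql-cluster --namespace cluster1 occndbtier -f occndbtier/custom_values.yaml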
  14. Scale down the replication services of cnDBTier leader site (cluster1). Log in to Bastion Host of cnDBTier Cluster and scale down the DB replication service deployments:
    $ kubectl -n <namespace of cnDBTier cluster> get deployments | egrep 'repl' | awk '{print $1}' | xargs -L1 -r kubectl -n <namespace of cnDBTier cluster> scale deployment --replicas=0
    For example, run the following command to scale down the DB replication service deployments of cluster1:
    $ kubectl -n cluster1 get deployments | egrep 'repl' | awk '{print $1}' | xargs -L1 -r kubectl -n cluster1 scale deployment --replicas=0
    Sample output:
    deployment.apps/mysql-cluster-cluster1-cluster2-repl-grp1 scaled
    deployment.apps/mysql-cluster-cluster1-cluster2-repl-grp2 scaled
    deployment.apps/mysql-cluster-cluster1-cluster3-repl-grp1 scaled
    deployment.apps/mysql-cluster-cluster1-cluster3-repl-grp2 scaled
    deployment.apps/mysql-cluster-cluster1-cluster4-repl-grp1 scaled
    deployment.apps/mysql-cluster-cluster1-cluster4-repl-grp2 scaled
  15. Delete all entries from all the tables of replication_info database on the leader site:
    $ kubectl -n <namespace of leader cnDBTier cluster> exec -ti ndbmysqld-0 -- mysql -h 127.0.0.1 -uroot -pNextGenCne
    mysql> DELETE from replication_info.DBTIER_REPLICATION_CHANNEL_INFO;
    mysql> DELETE from replication_info.DBTIER_REPL_SITE_INFO;
    mysql> DELETE from replication_info.DBTIER_SITE_INFO;
    mysql> DELETE from replication_info.DBTIER_INITIAL_BINLOG_POSTION;
    mysql> DELETE from replication_info.DBTIER_REPL_ERROR_SKIP_INFO;  
    mysql> DELETE from replication_info.DBTIER_REPL_EVENT_INFO;
    Example with output:
    $ kubectl -n cluster1 exec -ti ndbmysqld-0 -- mysql -h 127.0.0.1 -uroot -pNextGenCne
    mysql> DELETE from replication_info.DBTIER_REPLICATION_CHANNEL_INFO;
    Query OK, 24 rows affected (0.01 sec)
    mysql> DELETE from replication_info.DBTIER_REPL_SITE_INFO;
    Query OK, 12 rows affected (0.01 sec)
    mysql> DELETE from replication_info.DBTIER_SITE_INFO;
    Query OK, 4 rows affected (0.01 sec)
    mysql> DELETE from replication_info.DBTIER_INITIAL_BINLOG_POSTION;
    Query OK, 32 rows affected (0.01 sec)
    mysql> DELETE from replication_info.DBTIER_REPL_ERROR_SKIP_INFO;
    Query OK, 0 rows affected (0.00 sec)
    mysql> DELETE from replication_info.DBTIER_REPL_EVENT_INFO;
    Query OK, 0 rows affected (0.00 sec)
  16. Scale up the replication services of cnDBTier leader site (cluster1). Log in to Bastion Host of cnDBTier cluster and scale up the DB replication service deployments:
    $ kubectl -n <namespace of cnDBTier cluster> get deployments | egrep 'repl' | awk '{print $1}' | xargs -L1 -r kubectl -n <namespace of cnDBTier cluster> scale deployment --replicas=1
    For example, run the following command to scale up the DB replication service deployments of cluster1:
    $ kubectl -n cluster1 get deployments | egrep 'repl' | awk '{print $1}' | xargs -L1 -r kubectl -n cluster1 scale deployment --replicas=1
    Sample output:
    deployment.apps/mysql-cluster-cluster1-cluster2-repl-grp1 scaled
    deployment.apps/mysql-cluster-cluster1-cluster2-repl-grp2 scaled
    deployment.apps/mysql-cluster-cluster1-cluster3-repl-grp1 scaled
    deployment.apps/mysql-cluster-cluster1-cluster3-repl-grp2 scaled
    deployment.apps/mysql-cluster-cluster1-cluster4-repl-grp1 scaled
    deployment.apps/mysql-cluster-cluster1-cluster4-repl-grp2 scaled
  17. Reinstall cnDBTier Cluster2 by following the Adding cnDBTier Georedundant Cluster to Single Site cnDBTier Cluster procedure.
  18. Reinstall cnDBTier Cluster3 by following the Adding cnDBTier Georedundant Cluster to Two Site cnDBTier Clusters procedure.
  19. Reinstall cnDBTier Cluster4 by following the Adding cnDBTier Georedundant Cluster to Three Site cnDBTier Clusters procedure.

7.7.2 Converting Multiple Replication Channel Groups to Single Replication Channel Group

This section describes the procedure to convert multiple replication channel groups to single replication channel group.

Note:

This procedure is not supported for cnDBTier setups where TLS is enabled.

This procedure can be used if multiple replication channel groups are enabled in the cnDBTier cluster and if you want to convert it to a single replication group due to a rollback to the previous versions or any other reasons.

To convert multiple replication channel groups to single replication channel group:
  1. Consider any one of the cnDBTier clusters as the leader cluster (cluster1) and move all the NFs' traffic to that cluster. All the pods of the leader cluster must be up and running.
  2. Wait for replication to be updated in every cnDBTier cluster so that the database is consistent across all the cnDBTier clusters.
  3. Scale down the replication service deployments on each site. Log in to the Bastion Host of each cnDBTier cluster and scale down the DB replication service deployments:
    $ kubectl -n <namespace of cnDBTier cluster> get deployments | egrep 'repl' | awk '{print $1}' | xargs -L1 -r kubectl -n <namespace of cnDBTier cluster> scale deployment --replicas=0
    Example:
    # scaling down the DB replication service deployments of cluster1
    $ kubectl -n cluster1 get deployments | egrep 'repl' | awk '{print $1}' | xargs -L1 -r kubectl -n cluster1 scale deployment --replicas=0
    deployment.apps/mysql-cluster-cluster1-cluster2-repl-grp1 scaled
    deployment.apps/mysql-cluster-cluster1-cluster2-repl-grp2 scaled
    deployment.apps/mysql-cluster-cluster1-cluster3-repl-grp1 scaled
    deployment.apps/mysql-cluster-cluster1-cluster3-repl-grp2 scaled
    deployment.apps/mysql-cluster-cluster1-cluster4-repl-grp1 scaled
    deployment.apps/mysql-cluster-cluster1-cluster4-repl-grp2 scaled
     
    # scaling down the DB replication service deployments of cluster2, if it exists
    $ kubectl -n cluster2 get deployments | egrep 'repl' | awk '{print $1}' | xargs -L1 -r kubectl -n cluster2 scale deployment --replicas=0
    deployment.apps/mysql-cluster-cluster2-cluster1-repl-grp1 scaled
    deployment.apps/mysql-cluster-cluster2-cluster1-repl-grp2 scaled
    deployment.apps/mysql-cluster-cluster2-cluster3-repl-grp1 scaled
    deployment.apps/mysql-cluster-cluster2-cluster3-repl-grp2 scaled
    deployment.apps/mysql-cluster-cluster2-cluster4-repl-grp1 scaled
    deployment.apps/mysql-cluster-cluster2-cluster4-repl-grp2 scaled
     
    # scaling down the DB replication service deployments of cluster3, if it exists
    $ kubectl -n cluster3 get deployments | egrep 'repl' | awk '{print $1}' | xargs -L1 -r kubectl -n cluster3 scale deployment --replicas=0
    deployment.apps/mysql-cluster-cluster3-cluster1-repl-grp1 scaled
    deployment.apps/mysql-cluster-cluster3-cluster1-repl-grp2 scaled
    deployment.apps/mysql-cluster-cluster3-cluster2-repl-grp1 scaled
    deployment.apps/mysql-cluster-cluster3-cluster2-repl-grp2 scaled
    deployment.apps/mysql-cluster-cluster3-cluster4-repl-grp1 scaled
    deployment.apps/mysql-cluster-cluster3-cluster4-repl-grp2 scaled
     
    # scaling down the DB replication service deployments of cluster4, if it exists
    $ kubectl -n cluster4 get deployments | egrep 'repl' | awk '{print $1}' | xargs -L1 -r kubectl -n cluster4 scale deployment --replicas=0
    deployment.apps/mysql-cluster-cluster4-cluster1-repl-grp1 scaled
    deployment.apps/mysql-cluster-cluster4-cluster1-repl-grp2 scaled
    deployment.apps/mysql-cluster-cluster4-cluster2-repl-grp1 scaled
    deployment.apps/mysql-cluster-cluster4-cluster2-repl-grp2 scaled
    deployment.apps/mysql-cluster-cluster4-cluster3-repl-grp1 scaled
    deployment.apps/mysql-cluster-cluster4-cluster3-repl-grp2 scaled
  4. Log in to the Bastion Host of each cnDBTier cluster, and stop and reset replica of all ndbmysqld pods to gracefully stop the replication channels:
    $ kubectl -n <namespace of cnDBTier cluster> exec -ti <ndbmysqld pod name> -- mysql -h 127.0.0.1 -uroot -pNextGenCne
    mysql> stop replica;
    mysql> reset replica all;
    Example to stop and reset replica of all ndbmysqld pods in cluster1:
    $ kubectl -n cluster1 exec -ti ndbmysqld-0 -- mysql -h 127.0.0.1 -uroot -pNextGenCne
    mysql> stop replica;
    Query OK, 0 rows affected (0.01 sec)
    mysql> reset replica all;
    Query OK, 0 rows affected (0.01 sec)
      
    $ kubectl -n cluster1 exec -ti ndbmysqld-1 -- mysql -h 127.0.0.1 -uroot -pNextGenCne
    mysql> stop replica;
    Query OK, 0 rows affected (0.01 sec)
    mysql> reset replica all;
    Query OK, 0 rows affected (0.01 sec)
     
    $ kubectl -n cluster1 exec -ti ndbmysqld-2 -- mysql -h 127.0.0.1 -uroot -pNextGenCne
    mysql> stop replica;
    Query OK, 0 rows affected (0.01 sec)
    mysql> reset replica all;
    Query OK, 0 rows affected (0.01 sec)
      
    $ kubectl -n cluster1 exec -ti ndbmysqld-3 -- mysql -h 127.0.0.1 -uroot -pNextGenCne
    mysql> stop replica;
    Query OK, 0 rows affected (0.01 sec)
    mysql> reset replica all;
    Query OK, 0 rows affected (0.01 sec)
     
    $ kubectl -n cluster1 exec -ti ndbmysqld-4 -- mysql -h 127.0.0.1 -uroot -pNextGenCne
    mysql> stop replica;
    Query OK, 0 rows affected (0.01 sec)
    mysql> reset replica all;
    Query OK, 0 rows affected (0.01 sec)
      
    $ kubectl -n cluster1 exec -ti ndbmysqld-5 -- mysql -h 127.0.0.1 -uroot -pNextGenCne
    mysql> stop replica;
    Query OK, 0 rows affected (0.01 sec)
    mysql> reset replica all;
    Query OK, 0 rows affected (0.01 sec)
     
    $ kubectl -n cluster1 exec -ti ndbmysqld-6 -- mysql -h 127.0.0.1 -uroot -pNextGenCne
    mysql> stop replica;
    Query OK, 0 rows affected (0.01 sec)
    mysql> reset replica all;
    Query OK, 0 rows affected (0.01 sec)
      
    $ kubectl -n cluster1 exec -ti ndbmysqld-7 -- mysql -h 127.0.0.1 -uroot -pNextGenCne
    mysql> stop replica;
    Query OK, 0 rows affected (0.01 sec)
    mysql> reset replica all;
    Query OK, 0 rows affected (0.01 sec)
     
    $ kubectl -n cluster1 exec -ti ndbmysqld-8 -- mysql -h 127.0.0.1 -uroot -pNextGenCne
    mysql> stop replica;
    Query OK, 0 rows affected (0.01 sec)
    mysql> reset replica all;
    Query OK, 0 rows affected (0.01 sec)
      
    $ kubectl -n cluster1 exec -ti ndbmysqld-9 -- mysql -h 127.0.0.1 -uroot -pNextGenCne
    mysql> stop replica;
    Query OK, 0 rows affected (0.01 sec)
    mysql> reset replica all;
    Query OK, 0 rows affected (0.01 sec)
     
    $ kubectl -n cluster1 exec -ti ndbmysqld-10 -- mysql -h 127.0.0.1 -uroot -pNextGenCne
    mysql> stop replica;
    Query OK, 0 rows affected (0.01 sec)
    mysql> reset replica all;
    Query OK, 0 rows affected (0.01 sec)
      
    $ kubectl -n cluster1 exec -ti ndbmysqld-11 -- mysql -h 127.0.0.1 -uroot -pNextGenCne
    mysql> stop replica;
    Query OK, 0 rows affected (0.01 sec)
    mysql> reset replica all;
    Query OK, 0 rows affected (0.01 sec)

    Note:

    You can use the same example to stop and reset replica of all ndbmysqld pods in cluster2, cluster3, and cluster4 by replacing the cluster names with cluster2, cluster3, and cluster4 respectively.
  5. On the leader site, delete all the entries from all the tables of the replication_info database:
    $ kubectl -n <namespace of leader cnDBTier cluster> exec -ti ndbmysqld-0 -- mysql -h 127.0.0.1 -uroot -pNextGenCne
    mysql> DELETE from replication_info.DBTIER_REPLICATION_CHANNEL_INFO;
    mysql> DELETE from replication_info.DBTIER_REPL_SITE_INFO;
    mysql> DELETE from replication_info.DBTIER_SITE_INFO;
    mysql> DELETE from replication_info.DBTIER_INITIAL_BINLOG_POSTION;
    mysql> DELETE from replication_info.DBTIER_REPL_ERROR_SKIP_INFO;  
    mysql> DELETE from replication_info.DBTIER_REPL_EVENT_INFO;
    Example with output:
    $ kubectl -n cluster1 exec -ti ndbmysqld-0 -- mysql -h 127.0.0.1 -uroot -pNextGenCne
    mysql> DELETE from replication_info.DBTIER_REPLICATION_CHANNEL_INFO;
    Query OK, 4 rows affected (0.01 sec)
    mysql> DELETE from replication_info.DBTIER_REPL_SITE_INFO;
    Query OK, 2 rows affected (0.01 sec)
    mysql> DELETE from replication_info.DBTIER_SITE_INFO;
    Query OK, 2 rows affected (0.01 sec)
    mysql> DELETE from replication_info.DBTIER_INITIAL_BINLOG_POSTION;
    Query OK, 8 rows affected (0.01 sec)
    mysql> DELETE from replication_info.DBTIER_REPL_ERROR_SKIP_INFO;
    Query OK, 0 rows affected (0.00 sec)
    mysql> DELETE from replication_info.DBTIER_REPL_EVENT_INFO;
    Query OK, 0 rows affected (0.00 sec)
  6. Uninstall cnDBTier Cluster2, if it exists.
  7. Uninstall cnDBTier Cluster3, if it exists.
  8. Uninstall cnDBTier Cluster4, if it exists.

    Note:

    To uninstall cnDBTier clusters in steps 6, 7 and 8, follow the "Uninstalling DBTier" procedure in Oracle Communications Cloud Native Core, cnDBTier Installation, Upgrade, and Fault Recovery Guide.
  9. Remove or disable the configurations specific to multiple replication channel groups from the custom_values.yaml file.
  10. After updating the custom_values.yaml file, upgrade the cnDBTier leader site (cluster1) by running the following command:
    $ helm upgrade  mysql-cluster --namespace <namespace of leader cnDBTier cluster> occndbtier -f occndbtier/custom_values.yaml --no-hooks
    
    Example:
    $ helm upgrade  mysql-cluster --namespace cluster1 occndbtier -f occndbtier/custom_values.yaml --no-hooks
    
    Sample output:
    Release "mysql-cluster" has been upgraded. Happy Helming!
    NAME: mysql-cluster
    LAST DEPLOYED:  Mon May 20 10:19:42 2025
    NAMESPACE: cluster1
    STATUS: deployed
    REVISION: 2
  11. Restart the management, data, and SQL pods of cnDBTier cluster by performing the following steps:
    1. Log in to the Bastion Host of cnDBTier cluster and scale down the DB replication service deployments of the cnDBTier leader site (cluster1):
      $ kubectl -n <namespace of cnDBTier cluster> get deployments | egrep 'repl' | awk '{print $1}' | xargs -L1 -r kubectl -n <namespace of cnDBTier cluster> scale deployment --replicas=0
      Example:
      # scale down the db replication service deployments of cluster1
      $ kubectl -n cluster1 get deployments | egrep 'repl' | awk '{print $1}' | xargs -L1 -r kubectl -n cluster1 scale deployment --replicas=0
      Sample output:
      deployment.apps/mysql-cluster-cluster1-cluster2-repl-svc scaled
      deployment.apps/mysql-cluster-cluster1-cluster3-repl-svc scaled
      deployment.apps/mysql-cluster-cluster1-cluster4-repl-svc scaled
      
    2. Restart all the management pods of cnDBTier leader site (cluster1) by performing the following steps:
      1. Identify the list of management pods at cnDBTier cluster1:
        $ kubectl get pods --namespace=cluster1 | grep 'mgmd'
        Sample output:
        ndbmgmd-0                                            2/2     Running             0          9m34s
        ndbmgmd-1                                            2/2     Running             0          9m4s
      2. Delete all the management pods of cnDBTier cluster1 to restart the management pods:
        $ kubectl delete pod ndbmgmd-0 ndbmgmd-1 --namespace=cluster1
        Sample output:
        pod "ndbmgmd-0" deleted
        pod "ndbmgmd-1" deleted
      3. Wait for the management pods to restart and run the following command to check if the management pods are up and running:
        $ kubectl get pods --namespace=cluster1 | grep 'mgmd'
        Sample output:
        ndbmgmd-0                                              2/2     Running   0          4m29s
        ndbmgmd-1                                              2/2     Running   0          4m12s
      4. Check the status of cnDBTier cluster1.
        $ kubectl -n cluster1 exec -it ndbmgmd-0 -- ndb_mgm -e show
        Sample output:
        Connected to Management Server at: localhost:1186
        Cluster Configuration
        ---------------------
        [ndbd(NDB)]     2 node(s)
        id=1    @10.233.85.92  (mysql-8.4.3 ndb-8.4.3, Nodegroup: 0)
        id=2    @10.233.114.33  (mysql-8.4.3 ndb-8.4.3, Nodegroup: 0, *)
            
        [ndb_mgmd(MGM)] 2 node(s)
        id=49   @10.233.65.167  (mysql-8.4.3 ndb-8.4.3)
        id=50   @10.233.127.115  (mysql-8.4.3 ndb-8.4.3)
            
        [mysqld(API)]   18 node(s)
        id=56   @10.233.124.92  (mysql-8.4.3 ndb-8.4.3)
        id=57   @10.233.114.135  (mysql-8.4.3 ndb-8.4.3)
        id=58   @10.233.113.87  (mysql-8.4.3 ndb-8.4.3)
        id=59   @10.233.114.32  (mysql-8.4.3 ndb-8.4.3)
        id=60   @10.233.108.33  (mysql-8.4.3 ndb-8.4.3)
        id=61   @10.233.78.230  (mysql-8.4.3 ndb-8.4.3)
        id=62   (not connected, accepting connect from ndbmysqld-2.ndbmysqldsvc.cluster1.svc.occne4-cgbu-cne-dbtier)
        id=63   (not connected, accepting connect from ndbmysqld-3.ndbmysqldsvc.cluster1.svc.occne4-cgbu-cne-dbtier) 
        id=64   (not connected, accepting connect from ndbmysqld-2.ndbmysqldsvc.cluster1.svc.occne4-cgbu-cne-dbtier)
        id=65   (not connected, accepting connect from ndbmysqld-3.ndbmysqldsvc.cluster1.svc.occne4-cgbu-cne-dbtier)
        id=66   (not connected, accepting connect from ndbmysqld-2.ndbmysqldsvc.cluster1.svc.occne4-cgbu-cne-dbtier)
        id=67   (not connected, accepting connect from ndbmysqld-3.ndbmysqldsvc.cluster1.svc.occne4-cgbu-cne-dbtier)
        id=70   @10.233.127.117  (mysql-8.4.3 ndb-8.4.3)
        id=71   @10.233.85.93  (mysql-8.4.3 ndb-8.4.3)
        id=222 (not connected, accepting connect from any host)
        id=223 (not connected, accepting connect from any host)
        id=224 (not connected, accepting connect from any host)
        id=225 (not connected, accepting connect from any host)

        Note:

        • The API pods with node IDs 222 to 225 in the sample output are shown as "not connected" as they are added as empty API slots for georeplication recovery. You can ignore these nodes.
        • The SQL pods with node IDs 62 to 67 in the sample output are the newly added pods. These pods remain in the not connected state until all the data nodes are restarted.
    3. Restart the data pods sequentially by performing the following steps:
      1. Identify the list of data pods at cnDBTier cluster1:
        $ kubectl get pods --namespace=cluster1 | grep 'ndbmtd'
        Sample output:
        ndbmtd-0                                               3/3     Running   0          14m
        ndbmtd-1                                               3/3     Running   0          13m
      2. Delete the first data pod of cnDBTier cluster1 (ndbmtd-0) such that the pod restarts:
        $ kubectl delete pod ndbmtd-0 --namespace=cluster1
        Sample output:
        pod "ndbmtd-0" deleted
      3. Wait for the first data pod to restart and run the following command to check if the first pod is up and running:
        $ kubectl get pods --namespace=cluster1 | grep 'ndbmtd'
        Sample output:
        ndbmtd-0                                               3/3     Running   0          65s
        ndbmtd-1                                               3/3     Running   0          13m
      4. Check the status of cnDBTier cluster1:
        $ kubectl -n cluster1 exec -it ndbmgmd-0 -- ndb_mgm -e show
        Sample output:
        Connected to Management Server at: localhost:1186
        Cluster Configuration
        ---------------------
        [ndbd(NDB)]     2 node(s)
        id=1    @10.233.85.92  (mysql-8.4.3 ndb-8.4.3, Nodegroup: 0)
        id=2    @10.233.114.33  (mysql-8.4.3 ndb-8.4.3, Nodegroup: 0, *)
            
        [ndb_mgmd(MGM)] 2 node(s)
        id=49   @10.233.65.167  (mysql-8.4.3 ndb-8.4.3)
        id=50   @10.233.127.115  (mysql-8.4.3 ndb-8.4.3)
            
        [mysqld(API)]   18 node(s)
        id=56   @10.233.124.92  (mysql-8.4.3 ndb-8.4.3)
        id=57   @10.233.114.135  (mysql-8.4.3 ndb-8.4.3)
        id=58   @10.233.113.87  (mysql-8.4.3 ndb-8.4.3)
        id=59   @10.233.114.32  (mysql-8.4.3 ndb-8.4.3)
        id=60   @10.233.108.33  (mysql-8.4.3 ndb-8.4.3)
        id=61   @10.233.78.230  (mysql-8.4.3 ndb-8.4.3)
        id=62   (not connected, accepting connect from ndbmysqld-2.ndbmysqldsvc.cluster1.svc.occne4-cgbu-cne-dbtier)
        id=63   (not connected, accepting connect from ndbmysqld-3.ndbmysqldsvc.cluster1.svc.occne4-cgbu-cne-dbtier) 
        id=64   (not connected, accepting connect from ndbmysqld-2.ndbmysqldsvc.cluster1.svc.occne4-cgbu-cne-dbtier)
        id=65   (not connected, accepting connect from ndbmysqld-3.ndbmysqldsvc.cluster1.svc.occne4-cgbu-cne-dbtier)
        id=66   (not connected, accepting connect from ndbmysqld-2.ndbmysqldsvc.cluster1.svc.occne4-cgbu-cne-dbtier)
        id=67   (not connected, accepting connect from ndbmysqld-3.ndbmysqldsvc.cluster1.svc.occne4-cgbu-cne-dbtier)
        id=70   @10.233.127.117  (mysql-8.4.3 ndb-8.4.3)
        id=71   @10.233.85.93  (mysql-8.4.3 ndb-8.4.3)
        id=222 (not connected, accepting connect from any host)
        id=223 (not connected, accepting connect from any host)
        id=224 (not connected, accepting connect from any host)
        id=225 (not connected, accepting connect from any host)

        Note:

        • The API pods with node IDs 222 to 225 in the sample output are shown as "not connected" as they are added as empty API slots for georeplication recovery. You can ignore these nodes.
        • The SQL pods with node IDs 62 to 67 in the sample output are the newly added pods. These pods remain in the not connected state until all the data nodes are restarted.
      5. Follow Step ii through Step iv to delete and restart the second data pod of cnDBTier cluster1 (ndbmtd-1).
    4. Log in to the Bastion Host of the cnDBTier cluster and scale up the DB replication service deployments of the cnDBTier leader site (cluster1):
      $ kubectl -n <namespace of cnDBTier cluster> get deployments | egrep 'repl' | awk '{print $1}' | xargs -L1 -r kubectl -n <namespace of cnDBTier cluster> scale deployment --replicas=1
      Example:
      # scale up the db replication service deployments of cluster1
      $ kubectl -n cluster1 get deployments | egrep 'repl' | awk '{print $1}' | xargs -L1 -r kubectl -n cluster1 scale deployment --replicas=1
      Sample output:
      deployment.apps/mysql-cluster-cluster1-cluster2-repl-svc scaled
      deployment.apps/mysql-cluster-cluster1-cluster3-repl-svc scaled
      deployment.apps/mysql-cluster-cluster1-cluster4-repl-svc scaled
    5. Perform the following steps to restart the monitor service pod of the leader site (cnDBTier cluster1):
      1. Identify the monitor service pod at cnDBTier cluster1:
        $ kubectl get pods --namespace=cluster1 | grep 'monitor'
        Sample output:
        mysql-cluster-db-monitor-svc-8bf9448b8-4cghv           1/1     Running   0                13h
      2. Delete the monitor service pod of cnDBTier cluster1:
        $ kubectl delete pod mysql-cluster-db-monitor-svc-8bf9448b8-4cghv --namespace=cluster1
        Sample output:
        pod "mysql-cluster-db-monitor-svc-8bf9448b8-4cghv" deleted
      3. Wait until the monitor service pod is up and running. You can check the status of the pod by running the following command:
        $ kubectl get pods --namespace=cluster1 | grep 'monitor'
        Sample output:
        mysql-cluster-db-monitor-svc-8bf9448b8-w8pvf           1/1     Running   0                109s
  12. Wait until all the pods of the leader site are up and running. Verify that the data nodes, API nodes, and management nodes are connected to the cnDBTier cluster by running the following command:
    $ kubectl -n <namespace of leader cnDBTier Cluster> exec -it ndbmgmd-0 -- ndb_mgm -e show
    Example:
    $ kubectl -n cluster1 exec -it ndbmgmd-0 -- ndb_mgm -e show
    Sample output:
    Connected to Management Server at: localhost:1186
    Cluster Configuration
    ---------------------
    [ndbd(NDB)]     2 node(s)
    id=1    @10.233.92.53  (mysql-8.4.3 ndb-8.4.3, Nodegroup: 0)
    id=2    @10.233.72.66  (mysql-8.4.3 ndb-8.4.3, Nodegroup: 0)
       
    [ndb_mgmd(MGM)] 2 node(s)
    id=49   @10.233.111.62  (mysql-8.4.3 ndb-8.4.3)
    id=50   @10.233.117.59  (mysql-8.4.3 ndb-8.4.3)
       
    [mysqld(API)]  10 node(s)
    id=56   @10.233.72.65  (mysql-8.4.3 ndb-8.4.3)
    id=57   @10.233.92.51  (mysql-8.4.3 ndb-8.4.3)
    id=58   @10.233.113.87  (mysql-8.4.3 ndb-8.4.3)
    id=59   @10.233.114.32  (mysql-8.4.3 ndb-8.4.3)
    id=60   @10.233.108.33  (mysql-8.4.3 ndb-8.4.3)
    id=61   @10.233.78.230  (mysql-8.4.3 ndb-8.4.3)
    id=70   @10.233.72.64  (mysql-8.4.3 ndb-8.4.3)
    id=71   @10.233.92.52  (mysql-8.4.3 ndb-8.4.3)
    id=222 (not connected, accepting connect from any host)
    id=223 (not connected, accepting connect from any host)
    id=224 (not connected, accepting connect from any host)
    id=225 (not connected, accepting connect from any host)

    Note:

    Node IDs 222 to 225 in the sample output are shown as "not connected" as they are added as empty API slots for georeplication recovery. You can ignore these nodes.
  13. Upgrade the leader cnDBTier cluster using the custom_values.yaml file that you updated in Step 9. For more information on upgrading cnDBTier, see Oracle Communications Cloud Native Core, cnDBTier Installation, Upgrade, and Fault Recovery Guide.
  14. Scale down the replication services of cnDBTier leader site (cluster1). Log in to Bastion Host of cnDBTier Cluster and scale down the DB replication service deployments:
    $ kubectl -n <namespace of cnDBTier cluster> get deployments | egrep 'repl' | awk '{print $1}' | xargs -L1 -r kubectl -n <namespace of cnDBTier cluster> scale deployment --replicas=0
    For example, run the following command to scale down the DB replication service deployments of cluster1:
    $ kubectl -n cluster1 get deployments | egrep 'repl' | awk '{print $1}' | xargs -L1 -r kubectl -n cluster1 scale deployment --replicas=0
    Sample output:
    deployment.apps/mysql-cluster-cluster1-cluster2-repl-svc scaled
    deployment.apps/mysql-cluster-cluster1-cluster3-repl-svc scaled
    deployment.apps/mysql-cluster-cluster1-cluster4-repl-svc scaled
  15. Delete all entries from all the tables of replication_info database on the leader site:
    $ kubectl -n <namespace of leader cnDBTier cluster> exec -ti ndbmysqld-0 -- mysql -h 127.0.0.1 -uroot -pNextGenCne
    mysql> DELETE from replication_info.DBTIER_REPLICATION_CHANNEL_INFO;
    mysql> DELETE from replication_info.DBTIER_REPL_SITE_INFO;
    mysql> DELETE from replication_info.DBTIER_SITE_INFO;
    mysql> DELETE from replication_info.DBTIER_INITIAL_BINLOG_POSTION;
    mysql> DELETE from replication_info.DBTIER_REPL_ERROR_SKIP_INFO;  
    mysql> DELETE from replication_info.DBTIER_REPL_EVENT_INFO;
    Example with output:
    $ kubectl -n cluster1 exec -ti ndbmysqld-0 -- mysql -h 127.0.0.1 -uroot -pNextGenCne
    mysql> DELETE from replication_info.DBTIER_REPLICATION_CHANNEL_INFO;
    Query OK, 4 rows affected (0.01 sec)
    mysql> DELETE from replication_info.DBTIER_REPL_SITE_INFO;
    Query OK, 2 rows affected (0.01 sec)
    mysql> DELETE from replication_info.DBTIER_SITE_INFO;
    Query OK, 2 rows affected (0.01 sec)
    mysql> DELETE from replication_info.DBTIER_INITIAL_BINLOG_POSTION;
    Query OK, 8 rows affected (0.01 sec)
    mysql> DELETE from replication_info.DBTIER_REPL_ERROR_SKIP_INFO;
    Query OK, 0 rows affected (0.00 sec)
    mysql> DELETE from replication_info.DBTIER_REPL_EVENT_INFO;
    Query OK, 0 rows affected (0.00 sec)
    
  16. Scale up the replication services of cnDBTier leader site (cluster1). Log in to Bastion Host of cnDBTier cluster and scale up the DB replication service deployments:
    $ kubectl -n <namespace of cnDBTier cluster> get deployments | egrep 'repl' | awk '{print $1}' | xargs -L1 -r kubectl -n <namespace of cnDBTier cluster> scale deployment --replicas=1
    For example, run the following command to scale up the DB replication service deployments of cluster1:
    $ kubectl -n cluster1 get deployments | egrep 'repl' | awk '{print $1}' | xargs -L1 -r kubectl -n cluster1 scale deployment --replicas=1
    Sample output:
    deployment.apps/mysql-cluster-cluster1-cluster2-repl-svc scaled
    deployment.apps/mysql-cluster-cluster1-cluster3-repl-svc scaled
    deployment.apps/mysql-cluster-cluster1-cluster4-repl-svc scaled
  17. Reinstall cnDBTier Cluster2 by following the Adding cnDBTier Georedundant Cluster to Single Site cnDBTier Cluster procedure.
  18. Reinstall cnDBTier Cluster3 by following the Adding cnDBTier Georedundant Cluster to Two Site cnDBTier Clusters procedure.
  19. Reinstall cnDBTier Cluster4 by following the Adding cnDBTier Georedundant Cluster to Three Site cnDBTier Clusters procedure.

7.8 Managing Passwords and Secrets

This section provides the procedures to manage cnDBTier passwords and secrets.

7.8.1 Modifying cnDBTier Password Encryption Key

Encryption key is used to encrypt the replication username and password stored in the database using the Advanced Encryption Standard (AES) algorithm. This section provides the procedure to modify the encryption key used to encrypt the replication username and password.

Note:

If you update an encryption key on one site, ensure that you use the same encryption key across all the other mate sites.
  1. Run the hooks.sh script with the --update-encryption-key flag to change the encryption secret and encrypt the replication username and password using the new encryption key.

    Note:

    Replace the values of the environment variables in the commands with the values corresponding to your cluster.
    export OCCNE_NAMESPACE="occne-cndbtier"
    export MYSQL_CONNECTIVITY_SERVICE="mysql-connectivity-service"
    export MYSQL_USERNAME="occneuser"
    export MYSQL_PASSWORD="<password for the user occneuser>"
    export MYSQL_CMD="kubectl -n <namespace> exec <ndbmysqld-0/ndbappmysqld-0 pod name> -- mysql --binary-as-hex=0 --show-warnings"
    export NEW_ENCRYPTION_KEY="<new_encryption_password>"
    export DBTIER_REPLICATION_SVC_DATABASE="replication_info"
    export APP_STS_NAME="ndbappmysqld"
    # Optional: Uncomment the line below to decrypt the encrypted credentials with a custom key.
    # export CURRENT_ENCRYPTION_KEY="<current_encryption_password>"
    
    
    occndbtier/files/hooks.sh --update-encryption-key
  2. Perform rollout restart of ndbmysqld on the cnDBTier cluster:
    1. Run the following command to fetch the statefulset name of ndbmysqld:
      $ kubectl get statefulset --namespace=<namespace of cnDBTier cluster>
      For example:
      $ kubectl get statefulset --namespace=occne-cndbtier
      Sample output:
      NAME           READY   AGE
      ndbappmysqld   2/2     27h
      ndbmgmd        2/2     27h
      ndbmtd         2/2     27h
      ndbmysqld      2/2     27h
    2. Perform rollout restart of ndbmsqld:
      $ kubectl rollout restart statefulset <statefulset name of ndbmysqld> --namespace=<namespace of cnDBTier cluster>
      For example:
      $ kubectl rollout restart statefulset ndbmysqld  --namespace=occne-cndbtier
      Sample output:
      statefulset.apps/ndbmysqld restarted
    3. Wait until the rollout restart of ndbmysqld pod completes. Run the following command to check the status:
      $ kubectl rollout status statefulset <statefulset name of ndbmysqld> --namespace=<namespace of cnDBTier cluster>
      For example:
      $ kubectl rollout status statefulset ndbmysqld  --namespace=occne-cndbtier
      Sample output:
      Waiting for 1 pods to be ready...
      waiting for statefulset rolling update to complete 1 pods at revision ndbmysqld-7c6b9c9f84...
      Waiting for 1 pods to be ready...
      Waiting for 1 pods to be ready...
      statefulset rolling update complete 2 pods at revision ndbmysqld-7c6b9c9f84...
  3. After the rollout restart, run the following command to ensure that the replication is UP on all cnDBTier sites:
    $ kubectl -n <namespace of cnDBTier cluster> exec -it ndbmysqld-0 -- curl http://mysql-cluster-db-monitor-svc.<namespace of cnDBTier cluster>:8080/db-tier/status/replication/realtime

    where, <namespace of cnDBTier cluster> is the namespace of the cnDBTier cluster.

    The value of replicationStatus in the output indicates if the local site is able to replicate data from that remote site:
    • "UP": Indicates that the local site is able to replicate data from that remote site.
    • "DOWN": Indicates that the local site is not able to replicate data from the respective remote site.
    For example, run the following command to check the georeplication status of cnDBTier cluster2 configured with other remote sites:
    $ kubectl -n cluster2 exec -it ndbmysqld-0 -- curl http://mysql-cluster-db-monitor-svc.cluster2:8080/db-tier/status/replication/realtime
    Sample output:
    [
      {
        "localSiteName": "cluster2",
        "remoteSiteName": "cluster1",
        "replicationStatus": "UP",
        "secondsBehindRemote": 0,
        "replicationGroupDelay": [
          {
            "replchannel_group_id": "1",
            "secondsBehindRemote": 0
          }
        ]
      },
      {
        "localSiteName": "cluster2",
        "remoteSiteName": "cluster3",
        "replicationStatus": "UP",
        "secondsBehindRemote": 0,
        "replicationGroupDelay": [
          {
            "replchannel_group_id": "1",
            "secondsBehindRemote": 0
          }
        ]
      },
      {
        "localSiteName": "cluster2",
        "remoteSiteName": "cluster4",
        "replicationStatus": "UP",
        "secondsBehindRemote": 0,
        "replicationGroupDelay": [
          {
            "replchannel_group_id": "1",
            "secondsBehindRemote": 0
          }
        ]
      }
    ]

    In the sample output, the value of replicationStatus is "UP" for the remote sites cluster1, cluster3, and cluster4. This indicates that the local site (cluster2) is able to replicate data from the remote sites cluster1, cluster3, and cluster4.

  4. Repeat steps 1, 2, and 3 on the other cnDBTier mate sites.

7.8.2 Changing cnDBTier Passwords

This section provides the procedures to change cnDBTier passwords such as root user, occneuser, and occnerepluser passwords in a stand-alone and multisite georeplication setup using the dbtpasswd bash script.

Note:

  • The password changes are applicable only to the current site and are not replicated in the mate sites.
  • Ensure that your password meets the following password policy requirements:
    • Password must be between 20 and 32 characters in length.
    • Password must contain at least one lower case letter.
    • Password must contain at least one upper case letter.
    • Password must include at least one digit.
    • Password must include at least one of the following special characters: ,%~+.:_/-
  • Special characters other than those listed above are not supported as they may break the functionality of the dbtpasswd script.
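For illustration only, the following minimal bash sketch (not part of the product) checks a candidate password against the above policy before you pass it to dbtpasswd:
PW='<candidate password>'
if [ "${#PW}" -ge 20 ] && [ "${#PW}" -le 32 ] \
   && printf '%s' "$PW" | grep -q '[a-z]' \
   && printf '%s' "$PW" | grep -q '[A-Z]' \
   && printf '%s' "$PW" | grep -q '[0-9]' \
   && printf '%s' "$PW" | grep -q '[,%~+.:_/-]'; then
  echo "password meets the policy"
else
  echo "password does not meet the policy"
fi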
Changing a cnDBTier password involves the following steps:
  1. Adding a new password in MySQL.

    Note:

    The old password is active until the pods are restarted and the transitional operations are complete.
  2. Replacing the current or old password with the new password in Kubernetes secret.
  3. Restarting the configured cnDBTier pods to use the new Kubernetes secret (This step is not applicable while changing an NF password).
  4. Discarding the old password in MySQL.

The dbtpasswd bash script automates these steps and provides a single script to change all cnDBTier passwords at once. The script also provides options to run selective steps as changing passwords often requires running selective transitional operations, such as restarting pods, while both passwords are still valid.

You can use dbtpasswd to:
  • change one or more cnDBTier passwords in a single site or cluster. Changing a cnDBTier password on a site includes changing the password in MySQL and Kubernetes secret, restarting all required cnDBTier pods, and updating passwords on cnDBTier database tables if necessary.
  • change passwords on a live cluster with no service interruption.
  • change NF passwords. However, when changing an NF password, dbtpasswd can change the password in the Kubernetes secret and database only. You have to manually restart NF pods as a separate user action.
To provide flexibility and support for changing non-cnDBTier passwords (that is, NF passwords), dbtpasswd provides a range of options for controlling the phases of execution. For example, you can configure dbtpasswd to only add the new password to the database, change it in the Kubernetes secret, and then stop. After running the other operations, you can configure the script and run it again to discard the old password only.
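For example, a two-phase run using the options described later in this section might look like the following (run any NF-specific transitional operations between the two commands, while both passwords are still valid):
dbtpasswd --no-discard
dbtpasswd --discard-only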

MySQL Dual Password

The dbtpasswd script uses the MySQL Dual Password feature to first add the new password and then discard the old MySQL password after database values and secrets are changed and the pods are restarted. Both the passwords are valid until the old password is discarded.

Note:

MySQL Dual Password is not supported for changing the root password.
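
The following statements are a minimal sketch of the dual password mechanism for a non-root user such as occneuser. They are for illustration only; the dbtpasswd script runs the equivalent operations for you:
mysql> ALTER USER 'occneuser'@'%' IDENTIFIED BY '<new_password>' RETAIN CURRENT PASSWORD;
-- Both the old and the new passwords are valid at this point.
-- After the secrets are updated and the pods are restarted, the old password is discarded:
mysql> ALTER USER 'occneuser'@'%' DISCARD OLD PASSWORD;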

Support for Non-cnDBTier Secrets on Different Namespace

To support changing NF passwords that have their secrets in a namespace different from the cnDBTier namespace, dbtpasswd provides the --nf-namespace option to configure the namespace in which the secrets are stored. For more information about this option, see Configuration Options.

7.8.2.1 Prerequisites
Before changing the cnDBTier passwords, ensure that the following prerequisites are met:
  • The dbtpasswd script must be properly installed. For more information, see Installing and Using dbtpasswd Script.
  • If you are changing the replication password in a multisite setup, then ensure that the DB Replication is up and running between all of the sites in the system.
7.8.2.2 Installing and Using dbtpasswd Script

This section provides details about installing and using the dbtpasswd script to change cnDBTier passwords.

Installing the dbtpasswd Script

Run the following commands to source the source_me script, which adds the bin directory containing the dbtpasswd program to the user's path:
cd Artifacts/Scripts/tools
source ./source_me

The script prompts you to enter the namespace and uses it to set the DBTIER_NAMESPACE environment variable. It also sets the DBTIER_LIB environment variable with the path to the directory containing the libraries needed by dbtpasswd.
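
For example, you can verify that the environment is set up before using the script (the output values depend on your environment):
$ echo "DBTIER_NAMESPACE=${DBTIER_NAMESPACE}"
$ echo "DBTIER_LIB=${DBTIER_LIB}"
$ dbtpasswd --version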

Using the dbtpasswd Script

You can use the dbtpasswd script in the following ways:
dbtpasswd [-h | --help]
dbtpasswd [OPTIONS] [SECRET...]
7.8.2.3 Configuration Options
This section describes the list of options that are available to configure the dbtpasswd script.

Table 7-1 Configuration Options

Option Usage Example
-h | --help This option is used to print the help message and exit.
dbtpasswd --help
-v | --version This option is used to print the script's version.
dbtpasswd --version
--debug This option is used to output DEBUG log messages to standard error (stderr).
dbtpasswd --debug
--skip-namespace-test This option is used to skip testing that the namespace in DBTIER_NAMESPACE exists in the current cluster.
dbtpasswd --skip-namespace-test
--rollout-watch-timeout=0s This option is used to define the time to wait before ending the rollout status watch.

Set this value to zero if you do not want to wait before ending the rollout status watch. Use a corresponding time unit to set other values. For example: 1s, 2m, 3h.

dbtpasswd --rollout-watch-timeout=0s
--no-checks This option is used to skip verifying whether replication to the mate sites is UP when updating a replication password.
dbtpasswd --no-checks
--no-discard This option is used to change a password without discarding the old password. To change all cnDBTier passwords, but retain the old passwords:
dbtpasswd --no-discard
--discard-only This option is used to only discard the old password.
dbtpasswd --discard-only
--secrets-only This option is used to change the password in the secrets only.
dbtpasswd --secrets-only
--mysql-only This option is used to change the passwords in MySQL only.

Note: The current secret password must be the current MySQL password. Therefore, if you are using the --mysql-only and --secrets-only options to change the passwords, you must first change the MySQL password and then the secret password.

dbtpasswd --mysql-only
--secrets-and-mysql-only This option is used to change the passwords in the secrets and MySQL only.
dbtpasswd --secrets-and-mysql-only
--restart-only This option is used to only restart the pods configured to use cnDBTier secrets and doesn't change the passwords.
dbtpasswd --restart-only
--nf-namespace=<namespace_where_secrets_are> Non-cnDBTier secrets may be stored in a namespace different from the cnDBTier namespace. This option is used to provide the namespace where the non-cnDBTier secrets are stored to support changing NF passwords.

Note: All non-cnDBTier secrets must be stored in the NF namespace. All cnDBTier secrets must be stored in DBTIER_NAMESPACE.

  • To change a non-cnDBTier password in its secret, which is in a different namespace, and in MySQL:
    dbtpasswd --secrets-and-mysql-only --nf-namespace=other-namespace-name nf-secret-on-diff-namespace
  • To discard a non-cnDBTier old password whose secret is in a different namespace:
    dbtpasswd --discard-only --nf-namespace=other-namespace-name nf-secret-on-diff-namespace
  • To change a non-cnDBTier password in its secret, which is in the same namespace as cnDBTier, and in MySQL:
    dbtpasswd --secrets-and-mysql-only nf-secret-on-same-namespace
  • To discard a non-cnDBTier old password whose secret is in the same namespace as cnDBTier:
    dbtpasswd --discard-only nf-secret-on-same-namespace
    

Note:

Use the dbtpasswd --help command to view more examples.
7.8.2.4 Changing All cnDBTier Passwords in a Site
Run the following command to change all the cnDBTier passwords in a cnDBTier site:
$ dbtpasswd
Sample output:
2022-12-23T21:54:24Z INFO - Changing password for user root
Current password:
Enter new password:
Enter new password again:
2022-12-23T21:54:39Z INFO - Changing password for user occneuser
Current password:
Enter new password:
Enter new password again:
2022-12-23T21:54:58Z INFO - Changing password for user occnerepluser
Current password:
Enter new password:
Enter new password again:
2022-12-23T21:55:05Z INFO - Getting sts and sts pod info...
2022-12-23T21:55:12Z INFO - MGM_STS="ndbmgmd"
2022-12-23T21:55:12Z INFO - MGM_REPLICAS="2"
2022-12-23T21:55:12Z INFO -
    ndbmgmd-0
    ndbmgmd-1
2022-12-23T21:55:18Z INFO - NDB_STS="ndbmtd"
2022-12-23T21:55:18Z INFO - NDB_REPLICAS="2"
2022-12-23T21:55:18Z INFO -
    ndbmtd-0
    ndbmtd-1
2022-12-23T21:55:25Z INFO - API_STS="ndbmysqld"
2022-12-23T21:55:25Z INFO - API_REPLICAS="2"
2022-12-23T21:55:25Z INFO -
    ndbmysqld-0
    ndbmysqld-1
2022-12-23T21:55:29Z INFO - APP_STS="ndbappmysqld"
2022-12-23T21:55:29Z INFO - APP_REPLICAS="2"
2022-12-23T21:55:29Z INFO -
    ndbappmysqld-0
    ndbappmysqld-1
2022-12-23T21:55:29Z INFO - Getting deployment pod info...
2022-12-23T21:55:29Z INFO - grepping for backup-man (BAK_CHART_NAME)...
2022-12-23T21:55:36Z INFO -
    mysql-cluster-db-backup-manager-svc-7bb947f5f9-nc45s
2022-12-23T21:55:36Z INFO -
    mysql-cluster-db-backup-manager-svc
2022-12-23T21:55:36Z INFO - grepping for db-mon (MON_CHART_NAME)...
2022-12-23T21:55:42Z INFO -
    mysql-cluster-db-monitor-svc-5cff4bf789-g86rr
2022-12-23T21:55:42Z INFO -
    mysql-cluster-db-monitor-svc
2022-12-23T21:55:42Z INFO - grepping for replicat (REP_CHART_NAME)...
2022-12-23T21:55:47Z INFO -
    mysql-cluster-lfg-site-1-lfg-site-2-replication-svc-b4f8d9g6hbc
2022-12-23T21:55:47Z INFO -
    mysql-cluster-lfg-site-1-lfg-site-2-replication-svc
2022-12-23T21:55:47Z INFO - Labeling pods with dbtier-app...
pod/ndbmgmd-0 not labeled
pod/ndbmgmd-1 not labeled
pod/ndbmtd-0 not labeled
pod/ndbmtd-1 not labeled
pod/ndbappmysqld-0 not labeled
pod/ndbappmysqld-1 not labeled
pod/ndbmysqld-0 not labeled
pod/ndbmysqld-1 not labeled
pod/mysql-cluster-db-backup-manager-svc-7bb947f5f9-nc45s not labeled
pod/mysql-cluster-db-monitor-svc-5cff4bf789-g86rr not labeled
pod/mysql-cluster-lfg-site-1-lfg-site-2-replication-svc-b4f8d9g6hbc not labeled
2022-12-23T21:56:08Z INFO - Pods labeled with dbtier-app
2022-12-23T21:56:08Z INFO - Verifying Geo Replication to mates...
2022-12-23T21:56:18Z INFO - Geo Replication to mates is UP
2022-12-23T21:56:18Z INFO - Changing mysql password for 'root'@'localhost'...
mysql: [Warning] Using a password on the command line interface can be insecure.
2022-12-23T21:56:29Z INFO - Mysql password changed
2022-12-23T21:56:33Z INFO - Patching secret, occne-mysqlndb-root-secret, with new password
secret/occne-mysqlndb-root-secret patched
2022-12-23T21:56:36Z INFO - Secret, occne-mysqlndb-root-secret, patched with new password
2022-12-23T21:56:36Z INFO - Adding new mysql password for 'occneuser'@'%'...
mysql: [Warning] Using a password on the command line interface can be insecure.
2022-12-23T21:56:48Z INFO - New mysql password added
2022-12-23T21:56:54Z INFO - Patching secret, occne-secret-db-monitor-secret, with new password
secret/occne-secret-db-monitor-secret patched
2022-12-23T21:56:58Z INFO - Secret, occne-secret-db-monitor-secret, patched with new password
2022-12-23T21:56:58Z INFO - Adding new mysql password for 'occnerepluser'@'%'...
mysql: [Warning] Using a password on the command line interface can be insecure.
2022-12-23T21:57:10Z INFO - New mysql password added
2022-12-23T21:57:10Z INFO - Changing password in replication_info.DBTIER_REPL_SITE_INFO table...
2022-12-23T21:57:16Z INFO - Using replication pod: mysql-cluster-lfg-site-1-lfg-site-2-replication-svc-b4f8d9g6hbc
2022-12-23T21:57:16Z INFO - Using replication pod container: lfg-site-1-lfg-site-2-replication-svc
2022-12-23T21:57:16Z INFO - MYSQL_REPLICATION_SITE_NAME = lfg-site-1
mysql: [Warning] Using a password on the command line interface can be insecure.
2022-12-23T21:57:28Z INFO - Password changed in replication_info.DBTIER_REPL_SITE_INFO table
2022-12-23T21:57:35Z INFO - Patching secret, occne-replication-secret-db-replication-secret, with new password
secret/occne-replication-secret-db-replication-secret patched
2022-12-23T21:57:38Z INFO - Secret, occne-replication-secret-db-replication-secret, patched with new password
2022-12-23T21:57:38Z INFO - Starting rollover restarts at 2022-12-23T21:57:38Z
2022-12-23T21:57:38Z INFO - number of db-replication-svc deployments: 1
2022-12-23T21:57:38Z INFO - Patching deployment 0: mysql-cluster-lfg-site-1-lfg-site-2-replication-svc...
deployment.apps/mysql-cluster-lfg-site-1-lfg-site-2-replication-svc patched (no change)
2022-12-23T21:57:41Z INFO - Rollout restarting mysql-cluster-lfg-site-1-lfg-site-2-replication-svc...
deployment.apps/mysql-cluster-lfg-site-1-lfg-site-2-replication-svc restarted
...
2022-12-23T22:01:53Z INFO - ndbmtd pods rollout restarted
...
2022-12-23T22:02:11Z INFO - ndbappmysqld pods rollout restarted
...
2022-12-23T22:02:27Z INFO - ndbmysqld pods rollout restarted
...
2022-12-23T22:02:48Z INFO - db-backup-manager-svc pods rollout restarted
...
2022-12-23T22:03:08Z INFO - db-monitor-svc pods rollout restarted
2022-12-23T22:03:08Z INFO - Discarding old mysql password for 'occneuser'@'%'...
mysql: [Warning] Using a password on the command line interface can be insecure.
2022-12-23T22:03:20Z INFO - Old mysql password discarded
2022-12-23T22:03:20Z INFO - Discarding old mysql password for 'occnerepluser'@'%'...
mysql: [Warning] Using a password on the command line interface can be insecure.
2022-12-23T22:03:30Z INFO - Old mysql password discarded
2022-12-23T22:03:30Z INFO - Password(s) updated successfully
7.8.2.5 Changing All Passwords in a Stand-Alone or Georeplication Site

This section provides the command to change all cnDBTier passwords in a stand-alone site or a site where Georeplication has been configured but the mate sites are not configured.

$ dbtpasswd
Sample output:
2023-10-12T21:34:05Z INFO - DBTIER_NAMESPACE = occne-cndbtier
2023-10-12T21:34:05Z INFO - Testing namespace, occne-cndbtier, exists...
2023-10-12T21:34:05Z INFO - Should be able to see namespace, occne-cndbtier, with "kubectl get ns -o name occne-cndbtier" - PASSED
2023-10-12T21:34:05Z INFO - Namespace, occne-cndbtier, exists
2023-10-12T21:34:05Z INFO - Changing password for user root
Current password:
Enter new password:
Enter new password again:  
2023-10-12T21:34:06Z INFO - Changing password for user occneuser
Current password:
Enter new password:
Enter new password again:
2023-10-12T21:34:06Z INFO - Changing password for user occnerepluser
Current password:
Enter new password:
Enter new password again:
2023-10-12T21:34:06Z INFO - Getting sts and sts pod info...
2023-10-12T21:34:06Z INFO - Getting MGM sts and sts pod info...
2023-10-12T21:34:07Z INFO - MGM_STS="ndbmgmd"
2023-10-12T21:34:07Z INFO - MGM_REPLICAS="2"
2023-10-12T21:34:07Z INFO - MGM_PODS:
    ndbmgmd-0
    ndbmgmd-1
2023-10-12T21:34:07Z INFO - Getting NDB sts and sts pod info...
2023-10-12T21:34:07Z INFO - NDB_STS="ndbmtd"
2023-10-12T21:34:07Z INFO - NDB_REPLICAS="2"
2023-10-12T21:34:07Z INFO - NDB_PODS:
    ndbmtd-0
    ndbmtd-1
2023-10-12T21:34:07Z INFO - Getting API sts and sts pod info...
2023-10-12T21:34:07Z INFO - API_STS="ndbmysqld"
2023-10-12T21:34:07Z INFO - API_REPLICAS="6"
2023-10-12T21:34:07Z INFO - API_PODS:
    ndbmysqld-0
    ndbmysqld-1
    ndbmysqld-2
    ndbmysqld-3
    ndbmysqld-4
    ndbmysqld-5
2023-10-12T21:34:07Z INFO - Getting APP sts and sts pod info...
2023-10-12T21:34:07Z INFO - APP_STS="ndbappmysqld"
2023-10-12T21:34:07Z INFO - APP_REPLICAS="2"
2023-10-12T21:34:07Z INFO - APP_PODS:
    ndbappmysqld-0
    ndbappmysqld-1
2023-10-12T21:34:07Z INFO - Getting deployment pod info...
2023-10-12T21:34:07Z INFO - grepping for backup-man (BAK_CHART_NAME)...
2023-10-12T21:34:07Z INFO - BAK_PODS:
    mysql-cluster-db-backup-manager-svc-78fdcdfd98-gpml2
2023-10-12T21:34:07Z INFO - BAK_DEPLOY:
    mysql-cluster-db-backup-manager-svc
2023-10-12T21:34:07Z INFO - grepping for db-mon (MON_CHART_NAME)...
2023-10-12T21:34:07Z INFO - MON_PODS:
    mysql-cluster-db-monitor-svc-ccc9bfbfd-5z45b
2023-10-12T21:34:07Z INFO - MON_DEPLOY:
    mysql-cluster-db-monitor-svc
2023-10-12T21:34:07Z INFO - grepping for repl (REP_CHART_NAME)...
2023-10-12T21:34:08Z INFO - REP_PODS:
    mysql-cluster-lfg-site-1-lfg-site-2-replication-svc-67d96bg9z5m
    mysql-cluster-lfg-site-1-lfg-site-3-replication-svc-8f77b9xrrqt
    mysql-cluster-lfg-site-1-lfg-site-4-replication-svc-ddc647dt78q
2023-10-12T21:34:08Z INFO - REP_DEPLOY:
    mysql-cluster-lfg-site-1-lfg-site-2-replication-svc
    mysql-cluster-lfg-site-1-lfg-site-3-replication-svc
    mysql-cluster-lfg-site-1-lfg-site-4-replication-svc
2023-10-12T21:34:08Z INFO - Labeling pods with dbtier-app...
pod/ndbmgmd-0 labeled
pod/ndbmgmd-1 labeled
pod/ndbmtd-0 labeled
pod/ndbmtd-1 labeled
pod/ndbappmysqld-0 labeled
pod/ndbappmysqld-1 labeled
pod/ndbmysqld-0 labeled
pod/ndbmysqld-1 labeled
pod/ndbmysqld-2 labeled
pod/ndbmysqld-3 labeled
pod/ndbmysqld-4 labeled
pod/ndbmysqld-5 labeled
pod/mysql-cluster-db-backup-manager-svc-78fdcdfd98-gpml2 labeled
pod/mysql-cluster-db-monitor-svc-ccc9bfbfd-5z45b labeled
pod/mysql-cluster-lfg-site-1-lfg-site-2-replication-svc-67d96bg9z5m labeled
pod/mysql-cluster-lfg-site-1-lfg-site-3-replication-svc-8f77b9xrrqt labeled
pod/mysql-cluster-lfg-site-1-lfg-site-4-replication-svc-ddc647dt78q labeled
2023-10-12T21:34:41Z INFO - Pods labeled with dbtier-app
2023-10-12T21:34:42Z INFO - DBTIER_REPL_SITE_INFO table is empty (num_of_recs=0); indicating SINGLE SITE SETUP...
2023-10-12T21:34:42Z INFO - Changing mysql password for 'root'@'localhost'...
2023-10-12T21:34:43Z INFO - Mysql password changed
2023-10-12T21:34:43Z INFO - Patching secret, occne-mysqlndb-root-secret, with new password
secret/occne-mysqlndb-root-secret patched
2023-10-12T21:34:43Z INFO - Secret, occne-mysqlndb-root-secret, patched with new password
2023-10-12T21:34:43Z INFO - Adding new mysql password for 'occneuser'@'%'...
2023-10-12T21:34:43Z INFO - New mysql password added
2023-10-12T21:34:43Z INFO - Patching secret, occne-secret-db-monitor-secret, with new password
secret/occne-secret-db-monitor-secret patched
2023-10-12T21:34:44Z INFO - Secret, occne-secret-db-monitor-secret, patched with new password
2023-10-12T21:34:44Z INFO - Adding new mysql password for 'occnerepluser'@'%'...
2023-10-12T21:34:44Z INFO - New mysql password added
2023-10-12T21:34:44Z INFO - Patching secret, occne-replication-secret-db-replication-secret, with new password
secret/occne-replication-secret-db-replication-secret patched
2023-10-12T21:34:44Z INFO - Secret, occne-replication-secret-db-replication-secret, patched with new password
2023-10-12T21:34:44Z INFO - Starting rollover restarts at 2023-10-12T21:34:44Z
2023-10-12T21:34:44Z INFO - number of db-replication-svc deployments: 3
2023-10-12T21:34:44Z INFO - Patching deployment 0: mysql-cluster-lfg-site-1-lfg-site-2-replication-svc...
2023-10-12T21:34:47Z INFO - Patching deployment 1: mysql-cluster-lfg-site-1-lfg-site-3-replication-svc...
2023-10-12T21:34:47Z INFO - Patching deployment 2: mysql-cluster-lfg-site-1-lfg-site-4-replication-svc...
2023-10-12T21:34:48Z INFO - Waiting for deployment mysql-cluster-lfg-site-1-lfg-site-2-replication-svc to rollout restart...
Waiting for deployment "mysql-cluster-lfg-site-1-lfg-site-2-replication-svc" rollout to finish: 0 out of 1 new replicas have been updated...
Waiting for deployment "mysql-cluster-lfg-site-1-lfg-site-2-replication-svc" rollout to finish: 0 out of 1 new replicas have been updated...
Waiting for deployment "mysql-cluster-lfg-site-1-lfg-site-2-replication-svc" rollout to finish: 0 out of 1 new replicas have been updated...
Waiting for deployment "mysql-cluster-lfg-site-1-lfg-site-2-replication-svc" rollout to finish: 0 of 1 updated replicas are available...
deployment "mysql-cluster-lfg-site-1-lfg-site-2-replication-svc" successfully rolled out
2023-10-12T21:36:19Z INFO - Waiting for deployment mysql-cluster-lfg-site-1-lfg-site-3-replication-svc to rollout restart...
deployment "mysql-cluster-lfg-site-1-lfg-site-3-replication-svc" successfully rolled out
2023-10-12T21:36:19Z INFO - Waiting for deployment mysql-cluster-lfg-site-1-lfg-site-4-replication-svc to rollout restart...
deployment "mysql-cluster-lfg-site-1-lfg-site-4-replication-svc" successfully rolled out
2023-10-12T21:36:22Z INFO - number of db-backup-manager-svc deployments: 1
2023-10-12T21:36:22Z INFO - Patching deployment 0: mysql-cluster-db-backup-manager-svc...
2023-10-12T21:36:23Z INFO - number of db-monitor-svc deployments: 1
2023-10-12T21:36:23Z INFO - Patching deployment 0: mysql-cluster-db-monitor-svc...
2023-10-12T21:36:23Z INFO - Waiting for statefulset ndbmtd to rollout restart...
Waiting for 1 pods to be ready...
Waiting for 1 pods to be ready...
waiting for statefulset rolling update to complete 1 pods at revision ndbmtd-d7bf774f6...
Waiting for 1 pods to be ready...
Waiting for 1 pods to be ready...
statefulset rolling update complete 2 pods at revision ndbmtd-d7bf774f6...
2023-10-12T21:37:49Z INFO - Waiting for statefulset ndbappmysqld to rollout restart...
statefulset rolling update complete 2 pods at revision ndbappmysqld-5b8948d47d...
2023-10-12T21:37:49Z INFO - Waiting for statefulset ndbmysqld to rollout restart...
Waiting for 1 pods to be ready...
waiting for statefulset rolling update to complete 4 pods at revision ndbmysqld-94dd7fcbb...
Waiting for 1 pods to be ready...
Waiting for 1 pods to be ready...
waiting for statefulset rolling update to complete 5 pods at revision ndbmysqld-94dd7fcbb...
Waiting for 1 pods to be ready...
Waiting for 1 pods to be ready...
statefulset rolling update complete 6 pods at revision ndbmysqld-94dd7fcbb...
2023-10-12T21:38:32Z INFO - Waiting for deployment mysql-cluster-db-backup-manager-svc to rollout restart...
deployment "mysql-cluster-db-backup-manager-svc" successfully rolled out
2023-10-12T21:38:33Z INFO - Waiting for deployment mysql-cluster-db-monitor-svc to rollout restart...
deployment "mysql-cluster-db-monitor-svc" successfully rolled out
2023-10-12T21:38:33Z INFO - Discarding old mysql password for 'occneuser'@'%'...
2023-10-12T21:38:33Z INFO - Old mysql password discarded
2023-10-12T21:38:33Z INFO - Discarding old mysql password for 'occnerepluser'@'%'...
2023-10-12T21:38:34Z INFO - Old mysql password discarded
2023-10-12T21:38:34Z INFO - Password(s) updated successfully
7.8.2.6 Changing All cnDBTier Passwords in Phases

This section provides the procedure to change all cnDBTier passwords in phases.

  1. Run the following command to add the new passwords to MySQL (retain the old passwords) and change the passwords in the Kubernetes secret:
    $ dbtpasswd --secrets-and-mysql-only
    Sample output:
    2022-12-24T03:41:08Z INFO - Changing password for user root
    Current password:
    Enter new password:
    Enter new password again:
    2022-12-24T03:41:08Z INFO - Changing password for user occneuser
    Current password:
    Enter new password:
    Enter new password again:
    2022-12-24T03:41:09Z INFO - Changing password for user occnerepluser
    Current password:
    Enter new password:
    Enter new password again:
    2022-12-24T03:41:09Z INFO - Getting sts and sts pod info...
    2022-12-24T03:41:09Z INFO - MGM_STS="ndbmgmd"
    2022-12-24T03:41:09Z INFO - MGM_REPLICAS="2"
    2022-12-24T03:41:09Z INFO -
        ndbmgmd-0
        ndbmgmd-1
    2022-12-24T03:41:09Z INFO - NDB_STS="ndbmtd"
    2022-12-24T03:41:09Z INFO - NDB_REPLICAS="2"
    2022-12-24T03:41:09Z INFO -
        ndbmtd-0
        ndbmtd-1
    2022-12-24T03:41:09Z INFO - API_STS="ndbmysqld"
    2022-12-24T03:41:09Z INFO - API_REPLICAS="2"
    2022-12-24T03:41:09Z INFO -
        ndbmysqld-0
        ndbmysqld-1
    2022-12-24T03:41:09Z INFO - APP_STS="ndbappmysqld"
    2022-12-24T03:41:09Z INFO - APP_REPLICAS="2"
    2022-12-24T03:41:10Z INFO -
        ndbappmysqld-0
        ndbappmysqld-1
    2022-12-24T03:41:10Z INFO - Getting deployment pod info...
    2022-12-24T03:41:10Z INFO - grepping for backup-man (BAK_CHART_NAME)...
    2022-12-24T03:41:10Z INFO -
        mysql-cluster-db-backup-manager-svc-c4648f6bc-jpkt9
    2022-12-24T03:41:10Z INFO -
        mysql-cluster-db-backup-manager-svc
    2022-12-24T03:41:10Z INFO - grepping for db-mon (MON_CHART_NAME)...
    2022-12-24T03:41:10Z INFO -
        mysql-cluster-db-monitor-svc-7d684c7c6f-gvv76
    2022-12-24T03:41:10Z INFO -
        mysql-cluster-db-monitor-svc
    2022-12-24T03:41:10Z INFO - grepping for replicat (REP_CHART_NAME)...
    2022-12-24T03:41:10Z INFO -
        mysql-cluster-lfg-site-1-lfg-site-2-replication-svc-7b689frvfr2
    2022-12-24T03:41:10Z INFO -
        mysql-cluster-lfg-site-1-lfg-site-2-replication-svc
    2022-12-24T03:41:10Z INFO - Labeling pods with dbtier-app...
    pod/ndbmgmd-0 not labeled
    pod/ndbmgmd-1 not labeled
    pod/ndbmtd-0 not labeled
    pod/ndbmtd-1 not labeled
    pod/ndbappmysqld-0 not labeled
    pod/ndbappmysqld-1 not labeled
    pod/ndbmysqld-0 not labeled
    pod/ndbmysqld-1 not labeled
    pod/mysql-cluster-db-backup-manager-svc-c4648f6bc-jpkt9 not labeled
    pod/mysql-cluster-db-monitor-svc-7d684c7c6f-gvv76 not labeled
    pod/mysql-cluster-lfg-site-1-lfg-site-2-replication-svc-7b689frvfr2 not labeled
    2022-12-24T03:41:11Z INFO - Pods labeled with dbtier-app
    2022-12-24T03:41:11Z INFO - Verifying Geo Replication to mates...
    2022-12-24T03:41:14Z INFO - Geo Replication to mates is UP
    2022-12-24T03:41:14Z INFO - Changing mysql password for 'root'@'localhost'...
    mysql: [Warning] Using a password on the command line interface can be insecure.
    2022-12-24T03:41:15Z INFO - Mysql password changed
    2022-12-24T03:41:15Z INFO - Patching secret, occne-mysqlndb-root-secret, with new password
    secret/occne-mysqlndb-root-secret patched
    2022-12-24T03:41:15Z INFO - Secret, occne-mysqlndb-root-secret, patched with new password
    2022-12-24T03:41:15Z INFO - Adding new mysql password for 'occneuser'@'%'...
    mysql: [Warning] Using a password on the command line interface can be insecure.
    2022-12-24T03:41:15Z INFO - New mysql password added
    2022-12-24T03:41:16Z INFO - Patching secret, occne-secret-db-monitor-secret, with new password
    secret/occne-secret-db-monitor-secret patched
    2022-12-24T03:41:16Z INFO - Secret, occne-secret-db-monitor-secret, patched with new password
    2022-12-24T03:41:16Z INFO - Adding new mysql password for 'occnerepluser'@'%'...
    mysql: [Warning] Using a password on the command line interface can be insecure.
    2022-12-24T03:41:16Z INFO - New mysql password added
    2022-12-24T03:41:16Z INFO - Changing password in replication_info.DBTIER_REPL_SITE_INFO table...
    2022-12-24T03:41:16Z INFO - Using replication pod: mysql-cluster-lfg-site-1-lfg-site-2-replication-svc-7b689frvfr2
    2022-12-24T03:41:16Z INFO - Using replication pod container: lfg-site-1-lfg-site-2-replication-svc
    2022-12-24T03:41:16Z INFO - MYSQL_REPLICATION_SITE_NAME = lfg-site-1
    mysql: [Warning] Using a password on the command line interface can be insecure.
    2022-12-24T03:41:17Z INFO - Password changed in replication_info.DBTIER_REPL_SITE_INFO table
    2022-12-24T03:41:17Z INFO - Patching secret, occne-replication-secret-db-replication-secret, with new password
    secret/occne-replication-secret-db-replication-secret patched
    2022-12-24T03:41:17Z INFO - Secret, occne-replication-secret-db-replication-secret, patched with new password
    2022-12-24T03:41:17Z INFO - Password(s) updated successfully
  2. Run the following command to restart the appropriate cnDBTier pods:
    $ dbtpasswd --restart-only
    Sample output:
    2022-12-24T03:58:36Z INFO - Changing password for user root
    Current password:
    2022-12-24T03:58:41Z INFO - Changing password for user occneuser
    Current password:
    2022-12-24T03:58:46Z INFO - Changing password for user occnerepluser
    Current password:
    2022-12-24T03:58:49Z INFO - Getting sts and sts pod info...
    ...
    2022-12-24T04:01:39Z INFO - db-monitor-svc pods rollout restarted
    2022-12-24T04:01:39Z INFO - Password(s) updated successfully
  3. After the necessary transitional operations are complete, run the following command to discard the old MySQL cnDBTier passwords:
    $ dbtpasswd --discard-only
    Sample output:
    2022-12-24T03:51:40Z INFO - Changing password for user root
    Current password:
    2022-12-24T03:52:04Z INFO - Changing password for user occneuser
    Current password:
    2022-12-24T03:52:07Z INFO - Changing password for user occnerepluser
    Current password:
    2022-12-24T03:52:09Z INFO - Getting sts and sts pod info...
    ...
    2022-12-24T03:52:12Z INFO - Discarding old mysql password for 'occneuser'@'%'...
    mysql: [Warning] Using a password on the command line interface can be insecure.
    2022-12-24T03:52:12Z INFO - Old mysql password discarded
    2022-12-24T03:52:12Z INFO - Discarding old mysql password for 'occnerepluser'@'%'...
    mysql: [Warning] Using a password on the command line interface can be insecure.
    2022-12-24T03:52:13Z INFO - Old mysql password discarded
    2022-12-24T03:52:13Z INFO - Password(s) updated successfully
7.8.2.7 Changing an NF Password
This section provides a sample procedure to change an NF password when the secret is stored in an NF namespace that is different from the cnDBTier namespace.
  1. Run the following commands to change the password on the secret and add a new password to MySQL.

    Note:

    When the output prompts for the current password, enter the current password in the NF secret.
    $ export DBTIER_NAMESPACE="dbtier_namespace"
    $ dbtpasswd --secrets-and-mysql-only --nf-namespace=name-of-nf-namespace nf-secret-in-nf-namespace
    Sample output:
    2022-12-15T23:27:19Z INFO - Changing password for user luis
    Current password:
    Enter new password:
    Enter new password again:
    ...
    2022-12-15T23:27:37Z INFO - Adding new mysql password for 'luis'@'%'...
    mysql: [Warning] Using a password on the command line interface can be insecure.
    2022-12-15T23:27:37Z INFO - New mysql password added
    2022-12-15T23:27:37Z INFO - Patching secret, nf-secret-in-nf-namespace, with new password
    secret/nf-secret-in-nf-namespace patched
    2022-12-15T23:27:37Z INFO - Secret, nf-secret-in-nf-namespace, patched with new password
    2022-12-15T23:27:37Z INFO - Password(s) updated successfully
  2. After restarting the NF pods, adding the new NF passwords on the mate sites, or both, run the following command to discard the old password in MySQL.

    Note:

    • When the output prompts for the current password, enter the current password in the NF secret.
    • The NF secret must be present on nf-namespace and the corresponding MySQL user must be present with the corresponding password.
    $ dbtpasswd --discard-only --nf-namespace=name-of-nf-namespace nf-secret-in-nf-namespace
    Sample output:
    2022-12-15T23:28:48Z INFO - Changing password for user luis
    Current password:
    ...
    2022-12-15T23:28:53Z INFO - Discarding old mysql password for 'luis'@'%'...
    mysql: [Warning] Using a password on the command line interface can be insecure.
    2022-12-15T23:28:54Z INFO - Old mysql password discarded
    2022-12-15T23:28:54Z INFO - Password(s) updated successfully

7.8.3 Modifying cnDBTier Backup Encryption Password

This section provides the procedure to modify the cnDBTier backup encryption password.

  1. Get the existing backup encryption password (occne-backup-encryption-secret):
    $ kubectl get secret  occne-backup-encryption-secret  -n <cndbtier_namespace>  -o jsonpath="{.data.backup_encryption_password}" | base64 --decode
    
    For example,
    $ kubectl get secret  occne-backup-encryption-secret  -n occne-cndbtier -o jsonpath="{.data.backup_encryption_password}" | base64 --decode

    Note:

    Skip this step if you already know the existing occne-backup-encryption-secret password.
  2. Delete the existing backup encryption secret:
    $ kubectl -n <cndbtier_namespace> delete secret occne-backup-encryption-secret
    For example,
    $ kubectl -n occne-cndbtier delete secret occne-backup-encryption-secret
  3. Run the following command to create a secret with a new password. For information about the password policies, see the "Creating Secrets" section in Oracle Communications Cloud Native Core, cnDBTier Installation, Upgrade, and Fault Recovery Guide.
    $ kubectl -n <cndbtier_namespace>  create secret generic occne-backup-encryption-secret --from-literal="backup_encryption_password=<new_password>"
    For example,
    $ kubectl -n  occne-cndbtier create secret generic occne-backup-encryption-secret --from-literal="backup_encryption_password=NextGenCne"
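    Optionally, verify that the new secret contains the new password by decoding it with the same command as in step 1:
    $ kubectl get secret occne-backup-encryption-secret -n occne-cndbtier -o jsonpath="{.data.backup_encryption_password}" | base64 --decode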
  4. Scale down the replication service and backup manager service deployments. After scaling down, wait until all the replication service and backup manager pods are DOWN:
    $ kubectl -n <cndbtier_namespace> get deployments | egrep 'repl' | awk '{print $1}' | xargs -L1 -r kubectl -n <cndbtier_namespace> scale deployment --replicas=0
    $ kubectl -n <cndbtier_namespace> get deployments | egrep 'db-backup-manager-svc' | awk '{print $1}' | xargs -L1 -r kubectl -n <cndbtier_namespace> scale deployment --replicas=0
    For example,
    $ kubectl -n occne-cndbtier get deployments | egrep 'repl' | awk '{print $1}' | xargs -L1 -r kubectl -n occne-cndbtier scale deployment --replicas=0
    $ kubectl -n occne-cndbtier get deployments | egrep 'db-backup-manager-svc' | awk '{print $1}' | xargs -L1 -r kubectl -n occne-cndbtier scale deployment --replicas=0
  5. Scale up the replication service and backup manager service deployments. After scaling up, wait until all the replication service and backup manager pods are UP:
    $ kubectl -n <cndbtier_namespace> get deployments | egrep 'repl' | awk '{print $1}' | xargs -L1 -r kubectl -n <cndbtier_namespace> scale deployment --replicas=1
    $ kubectl -n <cndbtier_namespace> get deployments | egrep 'db-backup-manager-svc' | awk '{print $1}' | xargs -L1 -r kubectl -n <cndbtier_namespace> scale deployment --replicas=1
    For example,
    $ kubectl -n occne-cndbtier get deployments | egrep 'repl' | awk '{print $1}' | xargs -L1 -r kubectl -n occne-cndbtier scale deployment --replicas=1
    $ kubectl -n occne-cndbtier get deployments | egrep 'db-backup-manager-svc' | awk '{print $1}' | xargs -L1 -r kubectl -n occne-cndbtier scale deployment --replicas=1
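    Optionally, confirm that the replication service and backup manager service pods are running again:
    $ kubectl -n occne-cndbtier get pods | egrep 'repl|db-backup-manager-svc'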

7.8.4 Modifying SSH Keys for Transferring Backups

This section provides the procedure to modify Secure Shell (SSH) keys for securely transferring cnDBTier backups.

  1. Perform the following steps to move the existing SSH keys to the backup directory and delete the existing SSH secrets:
    1. Perform the following steps to move the existing SSH keys to the backup directory:
      1. Identify the location of the existing SSH keys:
        $ ls /var/occne/cluster/${OCCNE_CLUSTER}/cndbtierssh/

        For example:

        $ ls /var/occne/cluster/${OCCNE_CLUSTER}/cndbtierssh/

        Sample output:

        cndbtier_id_rsa cndbtier_id_rsa.pub

      2. Move the existing SSH keys from the location identified in the previous step to the backup directory:
        $ sshBackupDateTime=$(date +"%m-%d-%y-%H-%M-%S")
        $ mkdir -p /var/occne/cluster/${OCCNE_CLUSTER}/cndbtierssh/${sshBackupDateTime}
        $ mv <SSH private key file name> /var/occne/cluster/${OCCNE_CLUSTER}/cndbtierssh/${sshBackupDateTime}/
        $ mv <SSH public key file name> /var/occne/cluster/${OCCNE_CLUSTER}/cndbtierssh/${sshBackupDateTime}/ 
        For example:
        $ sshBackupDateTime=$(date +"%m-%d-%y-%H-%M-%S")
        $ mkdir -p /var/occne/cluster/${OCCNE_CLUSTER}/cndbtierssh/${sshBackupDateTime}
        $ mv /var/occne/cluster/${OCCNE_CLUSTER}/cndbtierssh/cndbtier_id_rsa /var/occne/cluster/${OCCNE_CLUSTER}/cndbtierssh/${sshBackupDateTime}/
        $ mv /var/occne/cluster/${OCCNE_CLUSTER}/cndbtierssh/cndbtier_id_rsa.pub /var/occne/cluster/${OCCNE_CLUSTER}/cndbtierssh/${sshBackupDateTime}/    
    2. Perform the following steps to delete the existing SSH or Secure File Transfer Protocol (SFTP) secrets from the current cnDBTier cluster:
      1. Identify the SSH or SFTP secrets from the current cluster by running the following command:
        $ kubectl get secrets --namespace=<namespace of cnDBTier Cluster> | grep ssh
        For example:
        $ kubectl get secrets --namespace=cluster1 | grep ssh 
        Sample output:
        cndbtier-ssh-private-key                         Opaque               1      7d
        cndbtier-ssh-public-key                          Opaque               1      7d
      2. Delete the SSH or SFTP secrets from the current cluster by running the following commands:
        $ kubectl delete secret cndbtier-ssh-private-key --namespace=<namespace of cnDBTier Cluster>
        $ kubectl delete secret cndbtier-ssh-public-key --namespace=<namespace of cnDBTier Cluster>
        Example to delete a private key:
        $ kubectl delete secret cndbtier-ssh-private-key --namespace=cluster1
        Sample output:
        secret "cndbtier-ssh-private-key" deleted
        Example to delete a public key:
        $ kubectl delete secret cndbtier-ssh-public-key --namespace=cluster1
        Sample output:
        secret "cndbtier-ssh-public-key" deleted
  2. Create the SSH keys by running the following commands:
    $ mkdir -p -m 0700 /var/occne/cluster/${OCCNE_CLUSTER}/cndbtierssh
    $ ssh-keygen -b 4096 -t rsa -C "cndbtier key" -f "/var/occne/cluster/${OCCNE_CLUSTER}/cndbtierssh/cndbtier_id_rsa" -q -N ""

    For more information about creating SSH secrets, see "Creating SSH Keys" in Oracle Communications Cloud Native Core, cnDBTier Installation, Upgrade, and Fault Recovery Guide.

  3. Create the SFTP secrets by running the following commands:
    $ kubectl create secret generic cndbtier-ssh-private-key --from-file=id_rsa=/var/occne/cluster/${OCCNE_CLUSTER}/cndbtierssh/cndbtier_id_rsa -n ${OCCNE_NAMESPACE}
    $ kubectl create secret generic cndbtier-ssh-public-key --from-file=id_rsa.pub=/var/occne/cluster/${OCCNE_CLUSTER}/cndbtierssh/cndbtier_id_rsa.pub -n ${OCCNE_NAMESPACE}

    For more information about creating SFTP secrets, see step 2 of "Creating Secrets" in Oracle Communications Cloud Native Core, cnDBTier Installation, Upgrade, and Fault Recovery Guide.
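
    Optionally, verify that the new secrets are created by listing the SSH secrets again:
    $ kubectl get secrets -n ${OCCNE_NAMESPACE} | grep ssh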

  4. Perform the following steps to restart the data nodes sequentially in an ascending order (ndbmtd-0, ndbmtd-1, and so on):
    1. Identify the list of data pods in the cnDBTier cluster:
      $ kubectl get pods --namespace=<namespace of cnDBTier Cluster> | grep 'ndbmtd'
      For example, use the following command to identify the list of data pods in cnDBTier cluster1:
      $ kubectl get pods --namespace=cluster1 | grep 'ndbmtd'
      Sample output:
      ndbmtd-0                                               3/3     Running   0          14m
      ndbmtd-1                                               3/3     Running   0          13m
    2. Delete the first data pod of the cnDBTier cluster:
      $ kubectl delete pod ndbmtd-0 --namespace=<namespace of cnDBTier Cluster>
      For example, use the following command to delete the first data pod of cnDBTier cluster1:
      $ kubectl delete pod ndbmtd-0 --namespace=cluster1
      Sample output:
      pod "ndbmtd-0" deleted
    3. Wait for the first data pod to come up and verify if the pod is up by running the following command:
      $ kubectl get pods --namespace=<namespace of cnDBTier Cluster> | grep 'ndbmtd'
      For example:
      $ kubectl get pods --namespace=cluster1 | grep 'ndbmtd'
      Sample output:
      ndbmtd-0                                               3/3     Running   0          65s
      ndbmtd-1                                               3/3     Running   0          13m
    4. Check the status of the cnDBTier cluster:
      $ kubectl -n <namespace of cnDBTier Cluster> exec -it ndbmgmd-0 -- ndb_mgm -e show
      For example, use the following command to check the status of cnDBTier cluster1:
      $ kubectl -n cluster1 exec -it ndbmgmd-0 -- ndb_mgm -e show
      Sample output:
      Connected to Management Server at: localhost:1186
      Cluster Configuration
      ---------------------
      [ndbd(NDB)]     2 node(s)
      id=1    @10.233.85.92  (mysql-8.4.3 ndb-8.4.3, Nodegroup: 0)
      id=2    @10.233.114.33  (mysql-8.4.3 ndb-8.4.3, Nodegroup: 0, *)
         
      [ndb_mgmd(MGM)] 2 node(s)
      id=49   @10.233.65.167  (mysql-8.4.3 ndb-8.4.3)
      id=50   @10.233.127.115  (mysql-8.4.3 ndb-8.4.3)
         
      [mysqld(API)]   8 node(s)
      id=56   @10.233.120.210  (mysql-8.4.3 ndb-8.4.3)  
      id=57   @10.233.124.93  (mysql-8.4.3 ndb-8.4.3)
      id=70   @10.233.127.117  (mysql-8.4.3 ndb-8.4.3)
      id=71   @10.233.85.93  (mysql-8.4.3 ndb-8.4.3)
      id=222 (not connected, accepting connect from any host)
      id=223 (not connected, accepting connect from any host)
      id=224 (not connected, accepting connect from any host)
      id=225 (not connected, accepting connect from any host) 

      Note:

      Node IDs 222 to 225 in the sample output are shown as "not connected" as these are added as empty slot IDs that are used for georeplication recovery.
    5. Repeat steps a through d to delete the remaining data pods (ndbmtd-1, ndbmtd-2, and so on).
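      The following is a minimal sketch that automates steps a through d for a cluster with two data pods. The namespace, pod names, and timeout are examples and must be adjusted to your deployment; still check the cluster status with ndb_mgm between restarts as shown in step d:
      $ for pod in ndbmtd-0 ndbmtd-1; do
          kubectl delete pod "$pod" --namespace=cluster1
          kubectl wait --for=condition=Ready pod/"$pod" --namespace=cluster1 --timeout=900s
        done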
  5. Perform the following steps to restart all the replication services of the current cluster:
    1. Identify the replication service running in the current cluster:
      $ kubectl get pods --namespace=<namespace of cnDBTier Cluster>
      For example:
      $ kubectl get pods --namespace=cluster1
      Sample output:
      NAME                                                              READY   STATUS    RESTARTS        AGE
      mysql-cluster-cluster1-cluster2-replication-svc-5d4b8fd685tshzd   1/1     Running   1 (13h ago)     13h
    2. Delete all the replication service pods of the current cluster:
      $ kubectl delete pod <replication service pod name> --namespace=<namespace of cnDBTier Cluster>
      For example,
      $ kubectl delete pod mysql-cluster-cluster1-cluster2-replication-svc-5d4b8fd685tshzd --namespace=cluster1
      Sample output:
      pod "mysql-cluster-cluster1-cluster2-replication-svc-5d4b8fd685tshzd" delete
  6. Follow steps 1 through 5 to replace the SSH or SFTP keys and secrets on the other georeplication cnDBTier clusters.

    Note:

    The SSH or SFTP secrets must be the same across all the georeplication cnDBTier clusters.

7.8.5 Modifying Transparent Data Encryption Password

This section provides the procedure to modify the Transparent Data Encryption (TDE) password.

  1. Get the existing TDE password (occne-tde-encrypted-filesystem-secret):
    $ kubectl -n <cndbtier_namespace> get secret  occne-tde-encrypted-filesystem-secret  -o jsonpath="{.data.filesystem-password}" | base64 --decode
    
    For example,
    $ kubectl -n occne-cndbtier get secret  occne-tde-encrypted-filesystem-secret  -o jsonpath="{.data.filesystem-password}" | base64 --decode
    
  2. Delete the existing TDE secret:
    $ kubectl -n <cndbtier_namespace> delete secret occne-tde-encrypted-filesystem-secret
    For example,
    $ kubectl -n occne-cndbtier delete secret occne-tde-encrypted-filesystem-secret
  3. Run the following command to create a secret with a new password. For information about the password policies, see the "Creating Secrets" section in Oracle Communications Cloud Native Core, cnDBTier Installation, Upgrade, and Fault Recovery Guide.
    $ kubectl -n <cndbtier_namespace> create secret generic occne-tde-encrypted-filesystem-secret --from-literal="filesystem-password=<new_tde-encryption-password>"
    
    For example,
    $ kubectl -n occne-cndbtier create secret generic occne-tde-encrypted-filesystem-secret --from-literal="filesystem-password=NextGenCne"
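
    Optionally, verify that the new secret contains the new password by decoding it with the same command as in step 1:
    $ kubectl -n occne-cndbtier get secret occne-tde-encrypted-filesystem-secret -o jsonpath="{.data.filesystem-password}" | base64 --decode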
    
  4. Perform a cnDBTier upgrade by following the procedure in Oracle Communications Cloud Native Core, cnDBTier Installation, Upgrade, and Fault Recovery Guide.

7.9 Modifying HTTPS Certificates

This section provides the procedure to modify HTTPS certificates.

  1. Create a new certificate by following the sample procedure provided in the "Creating HTTPS or TLS Certificates for Encrypted Connection" section of Oracle Communications Cloud Native Core, cnDBTier Installation, Upgrade, and Fault Recovery Guide.
  2. Run the following commands to take a backup of the existing secrets (keystore and credentials) of all the sites where you want to modify the certificates:
    $ kubectl -n <cndbtier_namespace> get secret cndbtier-https-cert-file -o yaml > cndbtier-https-cert-file-backup.yaml
    $ kubectl -n <cndbtier_namespace> get secret cndbtier-https-cert-cred -o jsonpath="{.data.keystorepassword}" | base64 --decode
    $ kubectl -n <cndbtier_namespace> get secret cndbtier-https-cert-cred -o jsonpath="{.data.keystoretype}" | base64 --decode
    $ kubectl -n <cndbtier_namespace> get secret cndbtier-https-cert-cred -o jsonpath="{.data.keyalias}" | base64 --decode
    $ kubectl -n <cndbtier_namespace> get secret cndbtier-https-cert-cred -o jsonpath="{.data.clientkeyalias}" | base64 --decode
    where, <cndbtier_namespace> is the name of the cnDBTier namespace.
    For example:
    $ kubectl -n cluster1 get secret cndbtier-https-cert-file -o yaml > cndbtier-https-cert-file-backup.yaml
    $ kubectl -n cluster1 get secret cndbtier-https-cert-cred -o jsonpath="{.data.keystorepassword}" | base64 --decode
    $ kubectl -n cluster1 get secret cndbtier-https-cert-cred -o jsonpath="{.data.keystoretype}" | base64 --decode
    $ kubectl -n cluster1 get secret cndbtier-https-cert-cred -o jsonpath="{.data.keyalias}" | base64 --decode
    $ kubectl -n cluster1 get secret cndbtier-https-cert-cred -o jsonpath="{.data.clientkeyalias}" | base64 --decode
  3. Delete the old HTTPS secrets of all the sites where you want to modify the certificates:
    $ kubectl get secrets -n <cndbtier_namespace>
    $ kubectl -n <cndbtier_namespace> delete secrets <cndbtier-https-cert-cred> <cndbtier-https-cert-file>
    For example:
    $ export OCCNE_NAMESPACE="cluster1"
    $ kubectl get secrets -n ${OCCNE_NAMESPACE}
    $ kubectl delete secrets cndbtier-https-cert-cred cndbtier-https-cert-file -n ${OCCNE_NAMESPACE}
  4. Create the new HTTPS secret using the new server certificate created in Step 1, on all the sites where you want to modify the certificates. For more information about creating HTTPs secrets, see the "Creating Secrets" section in Oracle Communications Cloud Native Core, cnDBTier Installation, Upgrade, and Fault Recovery Guide.

    Note:

    • You can update server-cert.pem and client-cert.pem when required.
    • You must use the maintenance window to update ca-cert.pem. After creating new secrets with new ca-cert.pem, restart the replication service and monitor the service of each site where the certificates are modified.
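
    The following is a minimal sketch of restarting the replication and monitor services of a site after updating ca-cert.pem. The namespace and deployment names are examples taken from elsewhere in this document; replace them with the names used in your deployment:
    $ kubectl -n cluster1 rollout restart deployment mysql-cluster-cluster1-cluster2-replication-svc
    $ kubectl -n cluster1 rollout restart deployment mysql-cluster-db-monitor-svc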

7.10 Modifying Remote Server Configurations for Secure Transfer of Backups

cnDBTier requires the following remote server configurations to securely transfer backups:
  • IP address or Fully Qualified Domain Name (FQDN)
  • Secure File Transfer Protocol (SFTP) port
  • The path where the backups are to be securely transferred
  • SSH key and username that are configured in the form of secrets
This section provides the procedure to modify remote server configurations in cnDBTier.
  1. Perform the following steps to update the remote server configurations such as remoteserverip, remoteserverport, and remoteserverpath:
    1. Locate the custom_values.yaml file.
    2. Edit the configurations as per your requirement.

      Note:

      The faultrecoverybackuptransfer parameter is set to false by default in the custom_values.yaml file. Therefore, even if the /global/remotetransfer/enable parameter is set to true, the system will not transfer the backups taken during fault recovery to the remote server. If you want the backups taken for fault recovery to be transferred to the remote server, set the faultrecoverybackuptransfer parameter to true.
      The following code block provides a template to configure the remote transfer parameters:
      global:
      ....
      ....
        remotetransfer:
          enable: true
          faultrecoverybackuptransfer: true
          remoteserverip: "< IP address of the remote server>"
          remoteserverport: "<SFTP Port of the remote server>"
          remoteserverpath: "<path where cndbtier backups will be stored>"
       
      For example:
      global:
      ....
      ....   
        remotetransfer:
           enable: true
           faultrecoverybackuptransfer: true
           remoteserverip: "10.75.216.8"
           remoteserverport: "2022"
           remoteserverpath: "/var/occnedb" 
  2. Perform the following steps to change the username of the remote server:
    1. Delete the occne-remote-server-username-secret secret by running the following command:
      $ kubectl -n <namespace> delete secret occne-remote-server-username-secret

      Where, <namespace> is the namespace of cnDBTier.

      For example,
      $  kubectl  -n occne-cndbtier  delete secret occne-remote-server-username-secret
    2. Recreate the secret with a new username by running the following command:
      $  kubectl -n <namespace> create secret generic occne-remote-server-username-secret --from-literal="remote_server_user_name=<user_name>"
      Where,
      • <namespace> is the namespace of cnDBTier.
      • <user_name> is the new username of the remote server.
      For example,
      $  kubectl  -n occne-cndbtier  create secret generic occne-remote-server-username-secret --from-literal="remote_server_user_name=cndbtierbackupserver"

      As per the example, the username of the remote server is cndbtierbackupserver.

  3. Perform the following steps to update the SSH key of the remote server:
    1. Delete the occne-remoteserver-privatekey-secret secret by running the following command:
      $ kubectl -n <namespace> delete secret occne-remoteserver-privatekey-secret

      Where, <namespace> is the namespace of cnDBTier.

      For example,
      $ kubectl -n occne-cndbtier delete secret occne-remoteserver-privatekey-secret
      
    2. Copy the private SSH key of the remote server and save it in a file.
    3. Create the secret by using the file created in the previous step:
      $ kubectl -n <namespace> create secret generic occne-remoteserver-privatekey-secret --from-file=id_rsa=<private key path>
      Where,
      • <namespace> is the namespace of cnDBTier
      • <private key path> is the path to the private key file created in step b. The file path must include the file name.
      For example,
      $ kubectl -n occne-cndbtier create secret generic occne-remoteserver-privatekey-secret --from-file=id_rsa=/var/occne/cluster/dbtier/remoteserver_id_rsa

      As per the example, the private key of the remote server is saved in the /var/occne/cluster/dbtier/ directory with the file name remoteserver_id_rsa.

  4. Upgrade the cnDBTier cluster to update all the configurations changed in steps 1, 2, and 3:
    $  helm  upgrade mysql-cluster occndbtier -f occndbtier/custom_values.yaml -n <namespace>

    Where, <namespace> is the namespace of cnDBTier.

  5. If the replication service pods are not restarted after helm upgrade in the previous step, run the following command to manually delete the replication service pods:
    $ kubectl -n <cndbtier_namespace> delete pod <replication_pod_name>
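    For example, to find and delete a replication service pod (the pod name shown is an example; use the pod names in your deployment):
    $ kubectl -n occne-cndbtier get pods | grep replication-svc
    $ kubectl -n occne-cndbtier delete pod mysql-cluster-cluster1-cluster2-replication-svc-5d4b8fd685tshzd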

7.11 Checking the Georeplication Status Between Clusters

This section describes the following procedures to check the georeplication status of a cnDBTier cluster in a site with a remote site:
  1. Checking the Georeplication Status of cnDBTier cluster1 with Remote Sites
  2. Checking the Georeplication Status of cnDBTier cluster2 with Remote Sites
  3. Checking the Georeplication Status of cnDBTier cluster3 with Remote Sites
  4. Checking the Georeplication Status of cnDBTier cluster4 with Remote Sites

Note:

Replace the name of the cnDBTier cluster namespaces (cluster1, cluster2, cluster3, and cluster4) with the actual names of your namespaces in each of the commands in this section.

Checking the Georeplication Status of cnDBTier cluster1 with Remote Sites

Perform the following steps to check the georeplication status of cnDBTier cluster1 with remote sites (cnDBTier cluster2, cnDBTier cluster3, and cnDBTier cluster4):

Note:

The following commands and examples are applicable for a single replication channel group only. If multiple replication channel groups are enabled, then check the georeplication status in cnDBTier cluster1 with remote sites (cnDBTier cluster2, cnDBTier cluster3, and cnDBTier cluster4) for every replication group.
  1. Run the following commands to check the georeplication status of cnDBTier cluster1 with respect to cnDBTier cluster2:
    $ kubectl -n cluster1 exec -it ndbmysqld-0 -- bash
    $ mysql -h 127.0.0.1 -uroot -p
    Password:
    mysql> select * from replication_info.DBTIER_REPLICATION_CHANNEL_INFO where remote_server_id in (2000, 2001) AND remote_site_name = "cluster2";
    Sample output:
    +------------------+------------------+------------+---------------------+---------+-------------+-----------+-----------+---------------------+
    | remote_site_name | remote_server_id | channel_id | remote_signaling_ip | role    | start_epoch | site_name | server_id | start_ts            |
    +------------------+------------------+------------+---------------------+---------+-------------+-----------+-----------+---------------------+
    | cluster2         |             2000 |       8970 | 10.233.0.147        | ACTIVE  |        NULL | cluster1  |      1000 | 2022-04-04 19:10:44 |
    | cluster2         |             2001 |       4597 | 10.233.9.35         | STANDBY |        NULL | cluster1  |      1001 | 2022-04-04 19:10:45 |
    +------------------+------------------+------------+---------------------+---------+-------------+-----------+-----------+---------------------+
    2 rows in set (0.00 sec) 
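    Alternatively, you can run the same query non-interactively from outside the pod; this is a sketch, and you are prompted for the MySQL root password:
    $ kubectl -n cluster1 exec -it ndbmysqld-0 -- mysql -h 127.0.0.1 -uroot -p -e 'select * from replication_info.DBTIER_REPLICATION_CHANNEL_INFO where remote_server_id in (2000, 2001) AND remote_site_name = "cluster2"'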
  2. Run the following commands to check if replication is turned on in the ACTIVE Replication channel of cnDBTier cluster1 with respect to cnDBTier cluster2.

    Note:

    When replication is turned on, the Replica_IO_Running and Replica_SQL_Running parameters are set to Yes.
    The following example describes the commands to check the replication status of the ACTIVE replication channel:
    $ kubectl -n cluster1 exec -it ndbmysqld-0 -- bash
    $ mysql -h 127.0.0.1 -uroot -p
    Password:
    mysql> SHOW REPLICA STATUS\G;
    Sample output:
    *************************** 1. row ***************************
                 Replica_IO_State: Waiting for master to send event
                      Source_Host: 10.233.0.147
                      Source_User: occnerepluser
                      Source_Port: 3306
                    Connect_Retry: 60
                  Source_Log_File: mysql-bin.000007
              Read_Source_Log_Pos: 16442
                   Relay_Log_File: mysql-relay-bin.000002
                    Relay_Log_Pos: 11203
            Relay_Source_Log_File: mysql-bin.000007
               Replica_IO_Running: Yes
              Replica_SQL_Running: Yes
                        ....
                        ....
                        ....
        Replica_SQL_Running_State: Replica has read all relay log; waiting for more updates
               Source_Retry_Count: 86400
                        ....
                        ....
  3. Run the following commands to check if replication is turned off in the STANDBY Replication channel of cnDBTier cluster1 with respect to cnDBTier cluster2.

    Note:

    When replication is turned off, the Replica_IO_Running and Replica_SQL_Running parameters are set to No.
    The following example describes the commands to check the replication status of the STANDBY replication channel:
    $ kubectl -n cluster1 exec -it ndbmysqld-1 -- bash
    $ mysql -h 127.0.0.1 -uroot -p
    Password:
    mysql> SHOW REPLICA STATUS\G;
    Sample output:
    *************************** 1. row ***************************
                 Replica_IO_State:
                      Source_Host: 10.233.9.35
                      Source_User: occnerepluser
                      Source_Port: 3306
                    Connect_Retry: 60
                  Source_Log_File: mysql-bin.000005
              Read_Source_Log_Pos: 26970
                   Relay_Log_File: mysql-relay-bin.000002
                    Relay_Log_Pos: 2228
            Relay_Source_Log_File: mysql-bin.000005
               Replica_IO_Running: No
              Replica_SQL_Running: No
  4. Run the following commands to check the georeplication status of cnDBTier cluster1 with respect to cnDBTier cluster3:
    $ kubectl -n cluster1 exec -it ndbmysqld-2 -- bash
    $ mysql -h 127.0.0.1 -uroot -p
    Password:
    mysql> select * from replication_info.DBTIER_REPLICATION_CHANNEL_INFO where remote_server_id in (3000, 3001) AND remote_site_name = "cluster3";
    Sample output:
    +------------------+------------------+------------+---------------------+---------+-------------+-----------+-----------+---------------------+
    | remote_site_name | remote_server_id | channel_id | remote_signaling_ip | role    | start_epoch | site_name | server_id | start_ts            |
    +------------------+------------------+------------+---------------------+---------+-------------+-----------+-----------+---------------------+
    | cluster3         |             3000 |       5533 | 10.233.16.34        | ACTIVE  |        NULL | cluster1  |      1002 | 2022-04-04 19:10:44 |
    | cluster3         |             3001 |       7918 | 10.233.41.15        | STANDBY |        NULL | cluster1  |      1003 | 2022-04-04 19:10:45 |
    +------------------+------------------+------------+---------------------+---------+-------------+-----------+-----------+---------------------+
    2 rows in set (0.00 sec)  
  5. Run the following commands to check if replication is turned on in the ACTIVE Replication channel of cnDBTier cluster1 with respect to cnDBTier cluster3.

    Note:

    When replication is turned on, the Replica_IO_Running and Replica_SQL_Running parameters are set to Yes.
    The following example describes the commands to check the replication status of the ACTIVE replication channel:
    $ kubectl -n cluster1 exec -it ndbmysqld-2 -- bash
    $ mysql -h 127.0.0.1 -uroot -p
    Password:
    mysql> SHOW REPLICA STATUS\G;
    Sample output:
    *************************** 1. row ***************************
                 Replica_IO_State: Waiting for master to send event
                      Source_Host: 10.233.16.34
                      Source_User: occnerepluser
                      Source_Port: 3306
                    Connect_Retry: 60
                  Source_Log_File: mysql-bin.000007
              Read_Source_Log_Pos: 18542
                   Relay_Log_File: mysql-relay-bin.000002
                    Relay_Log_Pos: 11203
            Relay_Source_Log_File: mysql-bin.000007
               Replica_IO_Running: Yes
              Replica_SQL_Running: Yes
                        ....
                        ....
                        ....
        Replica_SQL_Running_State: Replica has read all relay log; waiting for more updates
               Source_Retry_Count: 86400
                        ....
                        ....
  6. Run the following commands to check if replication is turned off in the STANDBY Replication channel of cnDBTier cluster1 with respect to cnDBTier cluster3.

    Note:

    When replication is turned off, the Replica_IO_Running and Replica_SQL_Running parameters are set to No.
    The following example describes the commands to check the replication status of the STANDBY replication channel:
    $ kubectl -n cluster1 exec -it ndbmysqld-3 -- bash
    $ mysql -h 127.0.0.1 -uroot -p
    Password:
    mysql> SHOW REPLICA STATUS\G;
    Sample output:
    *************************** 1. row ***************************
                   Replica_IO_State:
                      Source_Host: 10.233.41.15
                      Source_User: occnerepluser
                      Source_Port: 3306
                    Connect_Retry: 60
                  Source_Log_File: mysql-bin.000006
              Read_Source_Log_Pos: 26421
                   Relay_Log_File: mysql-relay-bin.000002
                    Relay_Log_Pos: 2228
            Relay_Source_Log_File: mysql-bin.000005
               Replica_IO_Running: No
              Replica_SQL_Running: No
  7. Run the following commands to check the georeplication status of cnDBTier cluster1 with respect to cnDBTier cluster4:
    $ kubectl -n cluster1 exec -it ndbmysqld-4 -- bash
    $ mysql -h 127.0.0.1 -uroot -p
    Password:
    mysql> select * from replication_info.DBTIER_REPLICATION_CHANNEL_INFO where remote_server_id in (4000, 4001) AND remote_site_name = "cluster4";
    Sample output:
    +------------------+------------------+------------+---------------------+---------+-------------+-----------+-----------+---------------------+
    | remote_site_name | remote_server_id | channel_id | remote_signaling_ip | role    | start_epoch | site_name | server_id | start_ts            |
    +------------------+------------------+------------+---------------------+---------+-------------+-----------+-----------+---------------------+
    | cluster4         |             4000 |       5909 | 10.233.57.61        | ACTIVE  |        NULL | cluster1  |      1004 | 2022-04-04 19:10:44 |
    | cluster4         |             4001 |       1937 | 10.233.54.145       | STANDBY |        NULL | cluster1  |      1005 | 2022-04-04 19:10:45 |
    +------------------+------------------+------------+---------------------+---------+-------------+-----------+-----------+---------------------+
    2 rows in set (0.00 sec)  
  8. Run the following commands to check if replication is turned on in the ACTIVE Replication channel of cnDBTier cluster1 with respect to cnDBTier cluster4.

    Note:

    When replication is turned on, the Replica_IO_Running and Replica_SQL_Running parameters are set to Yes.
    The following example describes the commands to check the replication status of the ACTIVE replication channel:
    $ kubectl -n cluster1 exec -it ndbmysqld-4 -- bash
    $ mysql -h 127.0.0.1 -uroot -p
    Password:
    mysql> SHOW REPLICA STATUS\G;
    Sample output:
    *************************** 1. row ***************************
                   Replica_IO_State: Waiting for master to send event
                      Source_Host: 10.233.57.61
                      Source_User: occnerepluser
                      Source_Port: 3306
                    Connect_Retry: 60
                  Source_Log_File: mysql-bin.000004
              Read_Source_Log_Pos: 185
                   Relay_Log_File: mysql-relay-bin.000002
                    Relay_Log_Pos: 11203
            Relay_Source_Log_File: mysql-bin.000007
               Replica_IO_Running: Yes
              Replica_SQL_Running: Yes
                        ....
                        ....
                        ....
        Replica_SQL_Running_State: Replica has read all relay log; waiting for more updates
               Source_Retry_Count: 86400
                        ....
                        ....
  9. Run the following commands to check if replication is turned off in the STANDBY Replication channel of cnDBTier cluster1 with respect to cnDBTier cluster4.

    Note:

    When replication is turned off, the Replica_IO_Running and Replica_SQL_Running parameters are set to No.
    The following example describes the commands to check the replication status of the STANDBY replication channel:
    $ kubectl -n cluster1 exec -it ndbmysqld-5 -- bash
    $ mysql -h 127.0.0.1 -uroot -p
    Password:
    mysql> SHOW REPLICA STATUS\G;
    Sample output:
    *************************** 1. row ***************************
                 Replica_IO_State:
                      Source_Host: 10.233.54.145
                      Source_User: occnerepluser
                      Source_Port: 3306
                    Connect_Retry: 60
                  Source_Log_File: mysql-bin.000005
              Read_Source_Log_Pos: 26420
                   Relay_Log_File: mysql-relay-bin.000002
                    Relay_Log_Pos: 2228
            Relay_Source_Log_File: mysql-bin.000005
               Replica_IO_Running: No
              Replica_SQL_Running: No
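
If you only need a quick summary of the channel states on a site, the preceding checks can also be run non-interactively. The following is a minimal sketch for cluster1: it assumes six replication SQL pods (ndbmysqld-0 through ndbmysqld-5) and that the MySQL root password is exported in the MYSQL_ROOT_PASSWORD environment variable; adjust the namespace, pod count, and credential handling to match your deployment.
    $ export MYSQL_ROOT_PASSWORD=<MySQL root password>
    $ for i in 0 1 2 3 4 5; do
        echo "=== ndbmysqld-$i ==="
        kubectl -n cluster1 exec ndbmysqld-$i -- \
          mysql -h 127.0.0.1 -uroot -p"${MYSQL_ROOT_PASSWORD}" -e "SHOW REPLICA STATUS\G" \
          | grep -E "Source_Host|Replica_IO_Running|Replica_SQL_Running"
      done
Pods that host an ACTIVE channel are expected to report Replica_IO_Running: Yes and Replica_SQL_Running: Yes, while pods that host a STANDBY channel report No for both. The same loop can be reused for cluster2, cluster3, and cluster4 by changing the namespace.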

Checking the Georeplication Status of cnDBTier cluster2 with Remote Sites

Perform the following steps to check the georeplication status of cnDBTier cluster2 with remote sites (cnDBTier cluster1, cnDBTier cluster3, and cnDBTier cluster4):

Note:

The following commands and examples are applicable for a single replication channel group only. If multiple replication channel groups are enabled, then check the georeplication status in cnDBTier cluster2 with remote sites (cnDBTier cluster1, cnDBTier cluster3, and cnDBTier cluster4) for every replication group.
  1. Run the following commands to check the georeplication status of cnDBTier cluster2 with respect to cnDBTier cluster1:
    $ kubectl -n cluster2 exec -it ndbmysqld-0 -- bash
    $ mysql -h 127.0.0.1 -uroot -p
    Password:
    mysql> select * from replication_info.DBTIER_REPLICATION_CHANNEL_INFO where remote_server_id in (1000, 1001) AND remote_site_name = "cluster1";
    Sample output:
    +------------------+------------------+------------+---------------------+---------+-------------+-----------+-----------+---------------------+
    | remote_site_name | remote_server_id | channel_id | remote_signaling_ip | role    | start_epoch | site_name | server_id | start_ts            |
    +------------------+------------------+------------+---------------------+---------+-------------+-----------+-----------+---------------------+
    | cluster1         |             1000 |       8970 | 10.233.13.41        | ACTIVE  |        NULL | cluster2  |      2000 | 2022-04-04 19:10:44 |
    | cluster1         |             1001 |       4597 | 10.233.39.110       | STANDBY |        NULL | cluster2  |      2001 | 2022-04-04 19:10:45 |
    +------------------+------------------+------------+---------------------+---------+-------------+-----------+-----------+---------------------+
    2 rows in set (0.00 sec)
  2. Run the following commands to check if replication is turned on in the ACTIVE Replication channel of cnDBTier cluster2 with respect to cnDBTier cluster1.

    Note:

    When replication is turned on, the Replica_IO_Running and Replica_SQL_Running parameters are set to Yes.
    The following example describes the commands to check the replication status of the ACTIVE replication channel:
    $ kubectl -n cluster2 exec -it ndbmysqld-0 -- bash
    $ mysql -h 127.0.0.1 -uroot -p
    Password:
    mysql> SHOW REPLICA STATUS\G;
    Sample output:
    *************************** 1. row ***************************
                 Replica_IO_State: Waiting for master to send event
                      Source_Host: 10.233.13.41
                      Source_User: occnerepluser
                      Source_Port: 3306
                    Connect_Retry: 60
                  Source_Log_File: mysql-bin.000009
              Read_Source_Log_Pos: 16480
                   Relay_Log_File: mysql-relay-bin.000002
                    Relay_Log_Pos: 11203
            Relay_Source_Log_File: mysql-bin.000007
               Replica_IO_Running: Yes
              Replica_SQL_Running: Yes
                        ....
                        ....
                        ....
        Replica_SQL_Running_State: Replica has read all relay log; waiting for more updates
               Source_Retry_Count: 86400
                        ....
                        ....
  3. Run the following commands to check if replication is turned off in the STANDBY Replication channel of cnDBTier cluster2 with respect to cnDBTier cluster1.

    Note:

    When replication is turned off, the Replica_IO_Running and Replica_SQL_Running parameters are set to No.
    The following example describes the commands to check the replication status of the STANDBY replication channel:
    $ kubectl -n cluster2 exec -it ndbmysqld-1 -- bash
    $ mysql -h 127.0.0.1 -uroot -p
    Password:  
    mysql> SHOW REPLICA STATUS\G;
    Sample output:
    *************************** 1. row ***************************
                 Replica_IO_State:
                      Source_Host: 10.233.39.110
                      Source_User: occnerepluser
                      Source_Port: 3306
                    Connect_Retry: 60
                  Source_Log_File: mysql-bin.000007
              Read_Source_Log_Pos: 13270
                   Relay_Log_File: mysql-relay-bin.000002
                    Relay_Log_Pos: 2228
            Relay_Source_Log_File: mysql-bin.000005
               Replica_IO_Running: No
              Replica_SQL_Running: No
  4. Run the following commands to check the georeplication status of cnDBTier cluster2 with respect to cnDBTier cluster3:
    $ kubectl -n cluster2 exec -it ndbmysqld-2 -- bash
    $ mysql -h 127.0.0.1 -uroot -p
    Password:
    mysql> select * from replication_info.DBTIER_REPLICATION_CHANNEL_INFO where remote_server_id in (3002, 3003) AND remote_site_name = "cluster3";
    Sample output:
    +------------------+------------------+------------+---------------------+---------+-------------+-----------+-----------+---------------------+
    | remote_site_name | remote_server_id | channel_id | remote_signaling_ip | role    | start_epoch | site_name | server_id | start_ts            |
    +------------------+------------------+------------+---------------------+---------+-------------+-----------+-----------+---------------------+
    | cluster3         |             3002 |       9981 | 10.233.62.226       | ACTIVE  |        NULL | cluster2  |      2002 | 2022-04-04 19:10:44 |
    | cluster3         |             3003 |       9319 | 10.233.14.15        | STANDBY |        NULL | cluster2  |      2003 | 2022-04-04 19:10:45 |
    +------------------+------------------+------------+---------------------+---------+-------------+-----------+-----------+---------------------+
    2 rows in set (0.00 sec) 
  5. Run the following commands to check if replication is turned on in the ACTIVE Replication channel of cnDBTier cluster2 with respect to cnDBTier cluster3.

    Note:

    When replication is turned on, the Replica_IO_Running and Replica_SQL_Running parameters are set to Yes.
    The following example describes the commands to check the replication status of the ACTIVE Replication channel:
    $ kubectl -n cluster2 exec -it ndbmysqld-2 -- bash
    $ mysql -h 127.0.0.1 -uroot -p
    Password:
    mysql> SHOW REPLICA STATUS\G;
    Sample output:
    *************************** 1. row ***************************
                 Replica_IO_State: Waiting for master to send event
                      Source_Host: 10.233.62.226
                      Source_User: occnerepluser
                      Source_Port: 3306
                    Connect_Retry: 60
                  Source_Log_File: mysql-bin.000008
              Read_Source_Log_Pos: 16485
                   Relay_Log_File: mysql-relay-bin.000002
                    Relay_Log_Pos: 11203
            Relay_Source_Log_File: mysql-bin.000007
               Replica_IO_Running: Yes
              Replica_SQL_Running: Yes
                        ....
                        ....
                        ....
        Replica_SQL_Running_State: Replica has read all relay log; waiting for more updates
               Source_Retry_Count: 86400
                        ....
                        ....
  6. Run the following commands to check if replication is turned off in the STANDBY Replication channel of cnDBTier cluster2 with respect to cnDBTier cluster3.

    Note:

    When replication is turned off, the Replica_IO_Running and Replica_SQL_Running parameters are set to No.
    The following example describes the commands to check the replication status of the STANDBY Replication channel:
    $ kubectl -n cluster2 exec -it ndbmysqld-3 -- bash
    $ mysql -h 127.0.0.1 -uroot -p
    Password:
    mysql> SHOW REPLICA STATUS\G;
    Sample output:
    *************************** 1. row ***************************
                 Replica_IO_State:
                      Source_Host: 10.233.14.15
                      Source_User: occnerepluser
                      Source_Port: 3306
                    Connect_Retry: 60
                  Source_Log_File: mysql-bin.000008
              Read_Source_Log_Pos: 13670
                   Relay_Log_File: mysql-relay-bin.000002
                    Relay_Log_Pos: 2228
            Relay_Source_Log_File: mysql-bin.000005
               Replica_IO_Running: No
              Replica_SQL_Running: No
  7. Run the following commands to check the georeplication status of cnDBTier cluster2 with respect to cnDBTier cluster4:
    $ kubectl -n cluster2 exec -it ndbmysqld-4 -- bash
    $ mysql -h 127.0.0.1 -uroot -p
    Password:
    mysql> select * from replication_info.DBTIER_REPLICATION_CHANNEL_INFO where remote_server_id in (4002, 4003) AND remote_site_name = "cluster4";
    Sample output:
    +------------------+------------------+------------+---------------------+---------+-------------+-----------+-----------+---------------------+
    | remote_site_name | remote_server_id | channel_id | remote_signaling_ip | role    | start_epoch | site_name | server_id | start_ts            |
    +------------------+------------------+------------+---------------------+---------+-------------+-----------+-----------+---------------------+
    | cluster4         |             4002 |       4500 | 10.233.42.33        | ACTIVE  |        NULL | cluster2  |      2004 | 2022-04-04 19:10:44 |
    | cluster4         |             4003 |       9921 | 10.233.48.56        | STANDBY |        NULL | cluster2  |      2005 | 2022-04-04 19:10:45 |
    +------------------+------------------+------------+---------------------+---------+-------------+-----------+-----------+---------------------+
    2 rows in set (0.00 sec)
  8. Run the following commands to check if replication is turned on in the ACTIVE Replication channel of cnDBTier cluster2 with respect to cnDBTier cluster4.

    Note:

    When replication is turned on, the Replica_IO_Running and Replica_SQL_Running parameters are set to Yes.
    The following example describes the commands to check the replication status of the ACTIVE Replication channel:
    $ kubectl -n cluster2 exec -it ndbmysqld-4 -- bash
    $ mysql -h 127.0.0.1 -uroot -p
    Password:
    mysql> SHOW REPLICA STATUS\G;
    Sample output:
    *************************** 1. row ***************************
                 Replica_IO_State: Waiting for master to send event
                      Source_Host: 10.233.42.33
                      Source_User: occnerepluser
                      Source_Port: 3306
                    Connect_Retry: 60
                  Source_Log_File: mysql-bin.000005
              Read_Source_Log_Pos: 34765
                   Relay_Log_File: mysql-relay-bin.000002
                    Relay_Log_Pos: 11203
            Relay_Source_Log_File: mysql-bin.000007
               Replica_IO_Running: Yes
              Replica_SQL_Running: Yes
                        ....
                        ....
                        ....
        Replica_SQL_Running_State: Replica has read all relay log; waiting for more updates
               Source_Retry_Count: 86400
                        ....
                        ....
  9. Run the following commands to check if replication is turned off in the STANDBY Replication channel of cnDBTier cluster2 with respect to cnDBTier cluster4.

    Note:

    When replication is turned off, the Replica_IO_Running and Replica_SQL_Running parameters are set to No.
    The following example describes the commands to check the replication status of the STANDBY Replication channel:
    $ kubectl -n cluster2 exec -it ndbmysqld-5 -- bash
    $ mysql -h 127.0.0.1 -uroot -p
    Password:
    mysql> SHOW REPLICA STATUS\G;
    Sample output:
    *************************** 1. row ***************************
                 Replica_IO_State:
                      Source_Host: 10.233.48.56
                      Source_User: occnerepluser
                      Source_Port: 3306
                    Connect_Retry: 60
                  Source_Log_File: mysql-bin.000005
              Read_Source_Log_Pos: 24672
                   Relay_Log_File: mysql-relay-bin.000002
                    Relay_Log_Pos: 2228
            Relay_Source_Log_File: mysql-bin.000005
               Replica_IO_Running: No
              Replica_SQL_Running: No

Checking the Georeplication Status of cnDBTier cluster3 with Remote Sites

Perform the following steps to check the georeplication status of cnDBTier cluster3 with remote sites (cnDBTier cluster1, cnDBTier cluster2, and cnDBTier cluster4):

Note:

The following commands and examples are applicable for a single replication channel group only. If multiple replication channel groups are enabled, then check the georeplication status in cnDBTier cluster3 with remote sites (cnDBTier cluster1, cnDBTier cluster2, and cnDBTier cluster4) for every replication group.
  1. Run the following commands to check the georeplication status of cnDBTier cluster3 with respect to cnDBTier cluster1:
    $ kubectl -n cluster3 exec -it ndbmysqld-0 -- bash
    $ mysql -h 127.0.0.1 -uroot -p
    Password:
    mysql> select * from replication_info.DBTIER_REPLICATION_CHANNEL_INFO where remote_server_id in (1002, 1003) AND remote_site_name = "cluster1";
    Sample output:
    +------------------+------------------+------------+---------------------+---------+-------------+-----------+-----------+---------------------+
    | remote_site_name | remote_server_id | channel_id | remote_signaling_ip | role    | start_epoch | site_name | server_id | start_ts            |
    +------------------+------------------+------------+---------------------+---------+-------------+-----------+-----------+---------------------+
    | cluster1         |             1002 |       5533 | 10.233.15.214       | ACTIVE  |        NULL | cluster3  |      3000 | 2022-04-04 19:10:44 |
    | cluster1         |             1003 |       7918 | 10.233.24.249       | STANDBY |        NULL | cluster3  |      3001 | 2022-04-04 19:10:45 |
    +------------------+------------------+------------+---------------------+---------+-------------+-----------+-----------+---------------------+
    2 rows in set (0.00 sec)
  2. Run the following commands to check if replication is turned on in the ACTIVE Replication channel of cnDBTier cluster3 with respect to cnDBTier cluster1.

    Note:

    When replication is turned on, the Replica_IO_Running and Replica_SQL_Running parameters are set to Yes.
    The following example describes the commands to check the replication status of the ACTIVE Replication channel:
    $ kubectl -n cluster3 exec -it ndbmysqld-0 -- bash
    $ mysql -h 127.0.0.1 -uroot -p
    Password:
    mysql> SHOW REPLICA STATUS\G;
    Sample output:
    *************************** 1. row ***************************
                 Replica_IO_State: Waiting for master to send event
                      Source_Host: 10.233.15.214
                      Source_User: occnerepluser
                      Source_Port: 3306
                    Connect_Retry: 60
                  Source_Log_File: mysql-bin.000005
              Read_Source_Log_Pos: 34765
                   Relay_Log_File: mysql-relay-bin.000002
                    Relay_Log_Pos: 11203
            Relay_Source_Log_File: mysql-bin.000007
               Replica_IO_Running: Yes
              Replica_SQL_Running: Yes
                        ....
                        ....
                        ....
        Replica_SQL_Running_State: Replica has read all relay log; waiting for more updates
               Source_Retry_Count: 86400
                        ....
                        ....
  3. Run the following commands to check if replication is turned off in the STANDBY Replication channel of cnDBTier cluster3 with respect to cnDBTier cluster1.

    Note:

    When replication is turned off, the Replica_IO_Running and Replica_SQL_Running parameters are set to No.
    The following example describes the commands to check the replication status of the STANDBY Replication channel:
    $ kubectl -n cluster3 exec -it ndbmysqld-1 -- bash
    $ mysql -h 127.0.0.1 -uroot -p
    Password:
    mysql> SHOW REPLICA STATUS\G;
    Sample output:
    *************************** 1. row ***************************
                 Replica_IO_State:
                      Source_Host: 10.233.24.249
                      Source_User: occnerepluser
                      Source_Port: 3306
                    Connect_Retry: 60
                  Source_Log_File: mysql-bin.000005
              Read_Source_Log_Pos: 24672
                   Relay_Log_File: mysql-relay-bin.000002
                    Relay_Log_Pos: 2228
            Relay_Source_Log_File: mysql-bin.000005
               Replica_IO_Running: No
              Replica_SQL_Running: No
  4. Run the following commands to check the georeplication status of cnDBTier cluster3 with respect to cnDBTier cluster2:
    $ kubectl -n cluster3 exec -it ndbmysqld-2 -- bash
    $ mysql -h 127.0.0.1 -uroot -p
    Password:
    mysql> select * from replication_info.DBTIER_REPLICATION_CHANNEL_INFO where remote_server_id in (2002, 2003) AND remote_site_name = "cluster2";
    Sample output:
    +------------------+------------------+------------+---------------------+---------+-------------+-----------+-----------+---------------------+
    | remote_site_name | remote_server_id | channel_id | remote_signaling_ip | role    | start_epoch | site_name | server_id | start_ts            |
    +------------------+------------------+------------+---------------------+---------+-------------+-----------+-----------+---------------------+
    | cluster2         |             2002 |       9981 | 10.233.34.252       | ACTIVE  |        NULL | cluster3  |      3002 | 2022-04-04 19:10:44 |
    | cluster2         |             2003 |       9319 | 10.233.15.228       | STANDBY |        NULL | cluster3  |      3003 | 2022-04-04 19:10:45 |
    +------------------+------------------+------------+---------------------+---------+-------------+-----------+-----------+---------------------+
    2 rows in set (0.00 sec)
  5. Run the following commands to check if replication is turned on in the ACTIVE Replication channel of cnDBTier cluster3 with respect to cnDBTier cluster2.

    Note:

    When replication is turned on, the Replica_IO_Running and Replica_SQL_Running parameters are set to Yes.
    The following example describes the commands to check the replication status of the ACTIVE Replication channel:
    $ kubectl -n cluster3 exec -it ndbmysqld-2 -- bash
    $ mysql -h 127.0.0.1 -uroot -p
    Password:
    mysql> SHOW REPLICA STATUS\G;
    Sample output:
    *************************** 1. row ***************************
                 Replica_IO_State: Waiting for master to send event
                      Source_Host: 10.233.34.252
                      Source_User: occnerepluser
                      Source_Port: 3306
                    Connect_Retry: 60
                  Source_Log_File: mysql-bin.000005
              Read_Source_Log_Pos: 34765
                   Relay_Log_File: mysql-relay-bin.000002
                    Relay_Log_Pos: 11203
            Relay_Source_Log_File: mysql-bin.000007
               Replica_IO_Running: Yes
              Replica_SQL_Running: Yes
                        ....
                        ....
                        ....
        Replica_SQL_Running_State: Replica has read all relay log; waiting for more updates
               Source_Retry_Count: 86400
                        ....
                        ....
  6. Run the following commands to check if replication is turned off in the STANDBY Replication channel of cnDBTier cluster3 with respect to cnDBTier cluster2.

    Note:

    When replication is turned off, the Replica_IO_Running and Replica_SQL_Running parameters are set to No.
    The following example describes the commands to check the replication status of the STANDBY Replication channel:
    $ kubectl -n cluster3 exec -it ndbmysqld-3 -- bash
    $ mysql -h 127.0.0.1 -uroot -p
    Password:
    mysql> SHOW REPLICA STATUS\G;
    Sample output:
    *************************** 1. row ***************************
                   Replica_IO_State:
                      Source_Host: 10.233.15.228
                      Source_User: occnerepluser
                      Source_Port: 3306
                    Connect_Retry: 60
                  Source_Log_File: mysql-bin.000005
              Read_Source_Log_Pos: 24672
                   Relay_Log_File: mysql-relay-bin.000002
                    Relay_Log_Pos: 2228
            Relay_Source_Log_File: mysql-bin.000005
               Replica_IO_Running: No
              Replica_SQL_Running: No
  7. Run the following commands to check the georeplication status of cnDBTier cluster3 with respect to cnDBTier cluster4:
    $ kubectl -n cluster3 exec -it ndbmysqld-4 -- bash
    $ mysql -h 127.0.0.1 -uroot -p
    Password:
    mysql> select * from replication_info.DBTIER_REPLICATION_CHANNEL_INFO where remote_server_id in (4004, 4005) AND remote_site_name = "cluster4";
    Sample output:
    +------------------+------------------+------------+---------------------+---------+-------------+-----------+-----------+---------------------+
    | remote_site_name | remote_server_id | channel_id | remote_signaling_ip | role    | start_epoch | site_name | server_id | start_ts            |
    +------------------+------------------+------------+---------------------+---------+-------------+-----------+-----------+---------------------+
    | cluster4         |             4004 |       3625 | 10.233.63.121       | ACTIVE  |        NULL | cluster3  |      3004 | 2022-04-04 19:10:44 |
    | cluster4         |             4005 |       3837 | 10.233.40.173       | STANDBY |        NULL | cluster3  |      3005 | 2022-04-04 19:10:45 |
    +------------------+------------------+------------+---------------------+---------+-------------+-----------+-----------+---------------------+
    2 rows in set (0.00 sec)
  8. Run the following commands to check if replication is turned on in the ACTIVE Replication channel of cnDBTier cluster3 with respect to cnDBTier cluster4.

    Note:

    When replication is turned on, the Replica_IO_Running and Replica_SQL_Running parameters are set to Yes.
    The following example describes the commands to check the replication status of the ACTIVE Replication channel:
    $ kubectl -n cluster3 exec -it ndbmysqld-4 -- bash
    $ mysql -h 127.0.0.1 -uroot -p
    Password:
    mysql> SHOW REPLICA STATUS\G;
    Sample output:
    *************************** 1. row ***************************
                 Replica_IO_State: Waiting for master to send event
                      Source_Host: 10.233.63.121
                      Source_User: occnerepluser
                      Source_Port: 3306
                    Connect_Retry: 60
                  Source_Log_File: mysql-bin.000005
              Read_Source_Log_Pos: 34765
                   Relay_Log_File: mysql-relay-bin.000002
                    Relay_Log_Pos: 11203
            Relay_Source_Log_File: mysql-bin.000007
               Replica_IO_Running: Yes
              Replica_SQL_Running: Yes
                        ....
                        ....
                        ....
        Replica_SQL_Running_State: Replica has read all relay log; waiting for more updates
               Source_Retry_Count: 86400
                        ....
                        ....
  9. Run the following commands to check if replication is turned off in the STANDBY Replication channel of cnDBTier cluster3 with respect to cnDBTier cluster4.

    Note:

    When replication is turned off, the Replica_IO_Running and Replica_SQL_Running parameters are set to No.
    The following example describes the commands to check the replication status of the STANDBY Replication channel:
    $ kubectl -n cluster3 exec -it ndbmysqld-5 -- bash
    $ mysql -h 127.0.0.1 -uroot -p
    Password:
    mysql> SHOW REPLICA STATUS\G;
    Sample output:
    *************************** 1. row ***************************
                 Replica_IO_State:
                      Source_Host: 10.233.40.173
                      Source_User: occnerepluser
                      Source_Port: 3306
                    Connect_Retry: 60
                  Source_Log_File: mysql-bin.000005
              Read_Source_Log_Pos: 24672
                   Relay_Log_File: mysql-relay-bin.000002
                    Relay_Log_Pos: 2228
            Relay_Source_Log_File: mysql-bin.000005
               Replica_IO_Running: No
              Replica_SQL_Running: No

Checking the Georeplication Status of cnDBTier cluster4 with Remote Sites

Perform the following steps to check the georeplication status of cnDBTier cluster4 with remote sites (cnDBTier cluster1, cnDBTier cluster2, and cnDBTier cluster3):

Note:

The following commands and examples are applicable for a single replication channel group only. If multiple replication channel groups are enabled, then check the georeplication status in cnDBTier cluster4 with remote sites (cnDBTier cluster1, cnDBTier cluster2, and cnDBTier cluster3) for every replication group.
  1. Run the following commands to check the georeplication status of cnDBTier cluster4 with respect to cnDBTier cluster1:
    $ kubectl -n cluster4 exec -it ndbmysqld-0 -- bash
    $ mysql -h 127.0.0.1 -uroot -p
    Password:
    mysql> select * from replication_info.DBTIER_REPLICATION_CHANNEL_INFO where remote_server_id in (1004, 1005) AND remote_site_name = "cluster1";
    Sample output:
    +------------------+------------------+------------+---------------------+---------+-------------+-----------+-----------+---------------------+
    | remote_site_name | remote_server_id | channel_id | remote_signaling_ip | role    | start_epoch | site_name | server_id | start_ts            |
    +------------------+------------------+------------+---------------------+---------+-------------+-----------+-----------+---------------------+
    | cluster1         |             1004 |       5909 | 10.233.37.108       | ACTIVE  |        NULL | cluster4  |      4000 | 2022-04-04 19:10:44 |
    | cluster1         |             1005 |       1937 | 10.233.11.31        | STANDBY |        NULL | cluster4  |      4001 | 2022-04-04 19:10:45 |
    +------------------+------------------+------------+---------------------+---------+-------------+-----------+-----------+---------------------+
    2 rows in set (0.00 sec)
  2. Run the following commands to check if replication is turned on in the ACTIVE Replication channel of cnDBTier cluster4 with respect to cnDBTier cluster1.

    Note:

    When replication is turned on, the Replica_IO_Running and Replica_SQL_Running parameters are set to Yes.
    The following example describes the commands to check the replication status of the ACTIVE Replication channel:
    $ kubectl -n cluster4 exec -it ndbmysqld-0 -- bash
    $ mysql -h 127.0.0.1 -uroot -p
    Password:
    mysql> SHOW REPLICA STATUS\G;
    Sample output:
    *************************** 1. row ***************************
                 Replica_IO_State: Waiting for master to send event
                      Source_Host: 10.233.37.108
                      Source_User: occnerepluser
                      Source_Port: 3306
                    Connect_Retry: 60
                  Source_Log_File: mysql-bin.000005
              Read_Source_Log_Pos: 34765
                   Relay_Log_File: mysql-relay-bin.000002
                    Relay_Log_Pos: 11203
            Relay_Source_Log_File: mysql-bin.000007
               Replica_IO_Running: Yes
              Replica_SQL_Running: Yes
                        ....
                        ....
                        ....
        Replica_SQL_Running_State: Replica has read all relay log; waiting for more updates
               Source_Retry_Count: 86400
                        ....
                        ....
  3. Run the following commands to check if replication is turned off in the STANDBY Replication channel of cnDBTier cluster4 with respect to cnDBTier cluster1.

    Note:

    When replication is turned off, the Replica_IO_Running and Replica_SQL_Running parameters are set to No.
    The following example describes the commands to check the replication status of the STANDBY Replication channel:
    $ kubectl -n cluster4 exec -it ndbmysqld-1 -- bash
    $ mysql -h 127.0.0.1 -uroot -p
    Password:
    mysql> SHOW REPLICA STATUS\G;
    Sample output:
    *************************** 1. row ***************************
                 Replica_IO_State:
                      Source_Host: 10.233.11.31
                      Source_User: occnerepluser
                      Source_Port: 3306
                    Connect_Retry: 60
                  Source_Log_File: mysql-bin.000005
              Read_Source_Log_Pos: 24672
                   Relay_Log_File: mysql-relay-bin.000002
                    Relay_Log_Pos: 2228
            Relay_Source_Log_File: mysql-bin.000005
               Replica_IO_Running: No
              Replica_SQL_Running: No
  4. Run the following commands to check the georeplication status of cnDBTier cluster4 with respect to cnDBTier cluster2:
    $ kubectl -n cluster4 exec -it ndbmysqld-2 -- bash
    $ mysql -h 127.0.0.1 -uroot -p
    Password:
    mysql> select * from replication_info.DBTIER_REPLICATION_CHANNEL_INFO where remote_server_id in (2004, 2005) AND remote_site_name = "cluster2";
    Sample output:
    +------------------+------------------+------------+---------------------+---------+-------------+-----------+-----------+---------------------+
    | remote_site_name | remote_server_id | channel_id | remote_signaling_ip | role    | start_epoch | site_name | server_id | start_ts            |
    +------------------+------------------+------------+---------------------+---------+-------------+-----------+-----------+---------------------+
    | cluster2         |             2004 |       4500 | 10.233.15.245       | ACTIVE  |        NULL | cluster4  |      4002 | 2022-04-04 19:10:44 |
    | cluster2         |             2005 |       9921 | 10.233.48.74        | STANDBY |        NULL | cluster4  |      4003 | 2022-04-04 19:10:45 |
    +------------------+------------------+------------+---------------------+---------+-------------+-----------+-----------+---------------------+
    2 rows in set (0.00 sec)
  5. Run the following commands to check if replication is turned on in the ACTIVE Replication channel of cnDBTier cluster4 with respect to cnDBTier cluster2.

    Note:

    When replication is turned on, the Replica_IO_Running and Replica_SQL_Running parameters are set to Yes.
    The following example describes the commands to check the replication status of the ACTIVE Replication channel:
    $ kubectl -n cluster4 exec -it ndbmysqld-2 -- bash
    $ mysql -h 127.0.0.1 -uroot -p
    Password:
    mysql> SHOW REPLICA STATUS\G;
    Sample output:
    *************************** 1. row ***************************
                 Replica_IO_State: Waiting for master to send event
                      Source_Host: 10.233.15.245
                      Source_User: occnerepluser
                      Source_Port: 3306
                    Connect_Retry: 60
                  Source_Log_File: mysql-bin.000005
              Read_Source_Log_Pos: 34765
                   Relay_Log_File: mysql-relay-bin.000002
                    Relay_Log_Pos: 11203
            Relay_Source_Log_File: mysql-bin.000007
               Replica_IO_Running: Yes
              Replica_SQL_Running: Yes
                        ....
                        ....
                        ....
        Replica_SQL_Running_State: Replica has read all relay log; waiting for more updates
               Source_Retry_Count: 86400
                        ....
                        ....
  6. Run the following commands to check if replication is turned off in the STANDBY Replication channel of cnDBTier cluster4 with respect to cnDBTier cluster2.

    Note:

    When replication is turned off, the Replica_IO_Running and Replica_SQL_Running parameters are set to No.
    The following example describes the commands to check the replication status of the STANDBY Replication channel:
    $ kubectl -n cluster4 exec -it ndbmysqld-3 -- bash
    $ mysql -h 127.0.0.1 -uroot -p
    Password:
    mysql> SHOW REPLICA STATUS\G;
    Sample output:
    *************************** 1. row ***************************
                 Replica_IO_State:
                      Source_Host: 10.233.48.74
                      Source_User: occnerepluser
                      Source_Port: 3306
                    Connect_Retry: 60
                  Source_Log_File: mysql-bin.000005
              Read_Source_Log_Pos: 24672
                   Relay_Log_File: mysql-relay-bin.000002
                    Relay_Log_Pos: 2228
            Relay_Source_Log_File: mysql-bin.000005
               Replica_IO_Running: No
              Replica_SQL_Running: No
  7. Run the following commands to check the georeplication status of cnDBTier cluster4 with respect to cnDBTier cluster3:
    $ kubectl -n cluster4 exec -it ndbmysqld-4 -- bash
    $ mysql -h 127.0.0.1 -uroot -p
    Password:
    mysql> select * from replication_info.DBTIER_REPLICATION_CHANNEL_INFO where remote_server_id in (3004, 3005) AND remote_site_name = "cluster3";
    Sample output:
    +------------------+------------------+------------+---------------------+---------+-------------+-----------+-----------+---------------------+
    | remote_site_name | remote_server_id | channel_id | remote_signaling_ip | role    | start_epoch | site_name | server_id | start_ts            |
    +------------------+------------------+------------+---------------------+---------+-------------+-----------+-----------+---------------------+
    | cluster3         |             3004 |       3625 | 10.233.3.32         | ACTIVE  |        NULL | cluster4  |      4004 | 2022-04-04 19:10:44 |
    | cluster3         |             3005 |       3837 | 10.233.14.89        | STANDBY |        NULL | cluster4  |      4005 | 2022-04-04 19:10:45 |
    +------------------+------------------+------------+---------------------+---------+-------------+-----------+-----------+---------------------+
    2 rows in set (0.00 sec)
  8. Run the following commands to check if replication is turned on in the ACTIVE Replication channel of cnDBTier cluster4 with respect to cnDBTier cluster3.

    Note:

    When replication is turned on, the Replica_IO_Running and Replica_SQL_Running parameters are set to Yes.
    The following example describes the commands to check the replication status of the ACTIVE Replication channel:
    $ kubectl -n cluster4 exec -it ndbmysqld-4 -- bash
    $ mysql -h 127.0.0.1 -uroot -p
    Password:
    mysql> SHOW REPLICA STATUS\G;
    Sample output:
    *************************** 1. row ***************************
                 Replica_IO_State: Waiting for master to send event
                      Source_Host: 10.233.3.32
                      Source_User: occnerepluser
                      Source_Port: 3306
                    Connect_Retry: 60
                  Source_Log_File: mysql-bin.000005
              Read_Source_Log_Pos: 34765
                   Relay_Log_File: mysql-relay-bin.000002
                    Relay_Log_Pos: 11203
            Relay_Source_Log_File: mysql-bin.000007
               Replica_IO_Running: Yes
              Replica_SQL_Running: Yes
                        ....
                        ....
                        ....
        Replica_SQL_Running_State: Replica has read all relay log; waiting for more updates
               Source_Retry_Count: 86400
                        ....
                        ....
  9. Run the following commands to check if replication is turned off in the STANDBY Replication channel of cnDBTier cluster4 with respect to cnDBTier cluster3.

    Note:

    When replication is turned off, the Replica_IO_Running and Replica_SQL_Running parameters are set to No.
    The following example describes the commands to check the replication status of the STANDBY Replication channel:
    $ kubectl -n cluster4 exec -it ndbmysqld-5 -- bash
    $ mysql -h 127.0.0.1 -uroot -p
    Password:
    mysql> SHOW REPLICA STATUS\G;
    Sample output:
    *************************** 1. row ***************************
                 Replica_IO_State:
                      Source_Host: 10.233.14.89
                      Source_User: occnerepluser
                      Source_Port: 3306
                    Connect_Retry: 60
                  Source_Log_File: mysql-bin.000005
              Read_Source_Log_Pos: 24672
                   Relay_Log_File: mysql-relay-bin.000002
                    Relay_Log_Pos: 2228
            Relay_Source_Log_File: mysql-bin.000005
               Replica_IO_Running: No
              Replica_SQL_Running: No
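
As an alternative to filtering DBTIER_REPLICATION_CHANNEL_INFO by remote_server_id for each remote site, you can read the complete channel table of a site with a single non-interactive command. The following sketch uses cluster1 and assumes that the MySQL root password is exported in the MYSQL_ROOT_PASSWORD environment variable; adapt the namespace to check any other site.
    $ kubectl -n cluster1 exec ndbmysqld-0 -- \
        mysql -h 127.0.0.1 -uroot -p"${MYSQL_ROOT_PASSWORD}" \
        -e "SELECT remote_site_name, remote_server_id, channel_id, remote_signaling_ip, role FROM replication_info.DBTIER_REPLICATION_CHANNEL_INFO ORDER BY remote_site_name, role;"
The output lists one ACTIVE and one STANDBY channel for each remote site (for each replication channel group, if multiple groups are configured).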

7.12 Changing Authentication Plugin on cnDBTier Sites

Users created on cnDBTier setups older than 23.4.x (that is, 23.2.x and 23.3.0) use the mysql_native_password plugin for authentication. As this plugin is deprecated as of MySQL version 8.0.34 (cnDBTier 23.3.0), you must use the caching_sha2_password plugin for user authentication. This section provides the steps to change the authentication plugin for users created on cnDBTier setups older than 23.3.1.

Note:

  • Perform this procedure on the ndbappmysqld pod of one site only.
  • If you have upgraded to 24.1.x from a cnDBTier version older than 23.3.1, you must perform this procedure to change the authentication plugin after all the sites are upgraded.

Perform the following steps to alter a user to use the caching_sha2_password authentication plugin:

  1. Log in to the ndbappmysqld pod:
    $ kubectl -n <namespace> exec -it ndbappmysqld-0 -- mysql -h::1 -uroot -p<password>
    where,
    • <namespace> is the namespace name
    • <password> is the password to access the ndbappmysqld pod
    For example:
    $ kubectl -n occne-cndbtier exec -it ndbappmysqld-0 -- mysql -h::1 -uroot -p<password>
  2. Run the following MySQL command to change the authentication plugin:
    mysql> ALTER USER IF EXISTS <USER_NAME> IDENTIFIED WITH 'caching_sha2_password' BY '<password>';
    where,
    • <USER_NAME> is the name of the user.
    • <password> is the password of the user.
    For example:
    mysql> ALTER USER IF EXISTS occneuser IDENTIFIED WITH 'caching_sha2_password' BY 'NextGenCne';
    mysql> ALTER USER IF EXISTS occnerepluser IDENTIFIED WITH 'caching_sha2_password' BY 'NextGenCne';
    mysql> ALTER USER IF EXISTS root@localhost IDENTIFIED WITH 'caching_sha2_password' BY 'NextGenCne'; 
    mysql> ALTER USER IF EXISTS root IDENTIFIED WITH 'caching_sha2_password' BY 'NextGenCne';
    mysql> ALTER USER IF EXISTS <NF_USER> IDENTIFIED WITH 'caching_sha2_password' BY 'NextGenCne';
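To confirm which authentication plugin each account now uses, you can, for example, list the plugin column of the mysql.user table from the same MySQL session:
    mysql> SELECT user, host, plugin FROM mysql.user;
Accounts that were altered successfully show caching_sha2_password in the plugin column.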
If you want to roll back to cnDBTier 23.2.x, 23.3.x, 23.4.x, or 24.1.x after performing the above procedure, you must change the authentication plugin back to mysql_native_password before performing the rollback:
  1. Log in to the ndbappmysqld pod:
    $ kubectl -n <namespace> exec -it ndbappmysqld-0 -- mysql -h::1 -uroot -p<password>
    where,
    • <namespace> is the namespace name.
    • <password> is the password to access the ndbappmysqld pod.
    For example:
    $ kubectl -n occne-cndbtier exec -it ndbappmysqld-0 -- mysql -h::1 -uroot -p<password>
  2. Run the following MySQL command to change the authentication plugin to mysql_native_password:
    mysql> ALTER USER IF EXISTS <USER_NAME> IDENTIFIED WITH 'mysql_native_password' BY '<password>';
    where,
    • <USER_NAME> is the name of the user.
    • <password> is the password of the user.
    For example:
    mysql> ALTER USER IF EXISTS occneuser IDENTIFIED WITH 'mysql_native_password' BY 'NextGenCne';
    mysql> ALTER USER IF EXISTS occnerepluser IDENTIFIED WITH 'mysql_native_password' BY 'NextGenCne';
    mysql> ALTER USER IF EXISTS root@localhost IDENTIFIED WITH 'mysql_native_password' BY 'NextGenCne';
    mysql> ALTER USER IF EXISTS <NF_USER> IDENTIFIED WITH 'mysql_native_password' BY 'NextGenCne';
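As with the forward procedure, you can verify the change before starting the rollback by checking, for example, the accounts used in the examples above:
    mysql> SELECT user, host, plugin FROM mysql.user WHERE user IN ('occneuser', 'occnerepluser', 'root');
Each listed account should show mysql_native_password in the plugin column before you proceed with the rollback.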