4 Upgrading cnDBTier
This chapter describes the procedure to upgrade an existing cnDBTier deployment.
Note:
The OCCNE_NAMESPACE variable in the upgrade procedures must be set to the cnDBTier namespace. Before running any command that contains the OCCNE_NAMESPACE variable, ensure that you have set this variable to the cnDBTier namespace as shown in the following code block:
export OCCNE_NAMESPACE=<namespace>
where, <namespace> is the cnDBTier namespace.
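For example, a minimal sketch that sets the variable and confirms the namespace exists before you proceed (the namespace name occne-cndbtier is only an example; use your actual cnDBTier namespace):
# Set the cnDBTier namespace used by the commands in this chapter.
export OCCNE_NAMESPACE=occne-cndbtier
# Confirm that the namespace exists before running any upgrade command.
kubectl get namespace "${OCCNE_NAMESPACE}"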
4.1 Supported Upgrade Paths
The following table provides the upgrade paths that are supported by cnDBTier Release 23.4.7.
Table 4-1 Supported Upgrade Paths
Source Release | Target Release |
---|---|
23.4.x | 23.4.7 |
23.3.x | 23.4.7 |
23.2.x | 23.4.7 |
4.2 Upgrading cnDBTier Clusters
This section describes the procedure to upgrade the cnDBTier clusters.
Note:
- If you are upgrading cnDBTier using CDCS:
- note that CDCS supports upgrading cnDBTier clusters with an upgrade service account only.
- perform all the steps in this section except the Helm upgrade command (Step 4 in Upgrading cnDBTier Clusters with an Upgrade Service Account) and then start the CDCS upgrade pipeline. For more information on upgrading cnDBTier using CDCS, see Oracle Communications CD Control Server User Guide.
- If your cnDBTier is configured with a single replication channel, then perform upgrade using a single replication channel group. If your cnDBTier is configured with multiple replication channel groups, then perform the upgrade using multiple replication channel groups.
- Upgrading from 23.2.x or 23.3.x to 23.4.7 reduces the number of MGM pods from three to two. The PVC associated with the removed MGM pod, usually ndb_mgmd-2, is left untouched. cnDBTier no longer uses this MGM pod after the upgrade, so make sure to delete this PVC when its data is no longer needed (see the cleanup sketch after this list).
- As of 23.4.7, the upgrade service account requires persistentvolumeclaims in its rules.resources. This rule is necessary for the postrollback hook to delete mysqld PVCs when rolling back to an earlier MySQL release.
- The recommended value of HeartbeatIntervalDbDb is 500. Check the value of HeartbeatIntervalDbDb in the running cnDBTier instance. If the value is not set to 500, then perform the following steps to update the value to 500:
- Modify the value of HeartbeatIntervalDbDb under /global/additionalndbconfigurations/ndb/ in the custom values file to 2500.
- Perform an upgrade by following the procedure given in this section.
- When the upgrade completes successfully, modify the value of HeartbeatIntervalDbDb under /global/additionalndbconfigurations/ndb/ in the custom values file to 1000.
- Perform a cnDBTier upgrade by following the procedure given in this section.
- When the upgrade completes successfully, modify the value of HeartbeatIntervalDbDb under /global/additionalndbconfigurations/ndb/ in the custom values file to 500.
- Perform a cnDBTier upgrade by following the procedure given in this section.
- db-backup-manager-svc is designed to automatically restart in case of errors. Therefore, if db-backup-manager-svc encounters a temporary error during the upgrade process, it may restart several times. When cnDBTier reaches a stable state, the db-backup-manager-svc pod operates normally without any further restarts.
- If you want to enable secure transfer of backups to a remote server, then:
- configure the remote server or storage with SFTP.
- provide the path where the files must be stored and the necessary permissions for cnDBTier to copy the backups to the remote server.
- configure the private and public keys to access the remote server where SFTP is installed.
- provide the details configured in the previous points, except the public key, to cnDBTier either during a fresh installation or an upgrade so that cnDBTier can store the backups remotely.
- note that the password used for encryption of the backups isn't stored on the remote server if backup encryption is enabled.
For more information about the configuration parameters, see Global Parameters.
- If you have enabled secure transfer of backups to a remote server, then make note of the following:
- cnDBTier doesn't purge the backups that are pushed to the remote server. Therefore, when necessary, make sure you manually purge the old backups on the remote server.
- cnDBTier doesn't transfer any existing backups taken using the old cnDBTier version to a remote server or storage.
- cnDBTier supports secure transfer of backups to only one remote server.
- You can upgrade only one georedundant cnDBTier site at a time. If you want to upgrade multiple georedundant sites, then complete the upgrade of one georedundant site and then move to the next one.
- If you are upgrading from a release older than 23.4.x, perform the following steps:
- Deactivate the network policy feature in the custom_values.yaml file before performing the upgrade:
global:
  networkpolicy:
    enabled: false
- [Optional]: Once you successfully upgrade to 23.4.x, you can reenable the network policy in the custom_values.yaml file by following the upgrade procedure again:
global:
  networkpolicy:
    enabled: true
- Upgrade is not supported on cnDBTier setups if TLS is enabled in the custom_values.yaml file.
- Upgrade is not supported from a cnDBTier release where password encryption is enabled to a cnDBTier release where password encryption is disabled, and vice versa.
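The following is a minimal cleanup sketch for the PVC left behind by the removed MGM pod, as described in the note above. The PVC name pattern is an assumption; list the PVCs first, confirm the exact name, and make sure its data is no longer needed before deleting it:
# List the PVCs that belong to the management pods and identify the one
# associated with the removed MGM pod (usually ndb_mgmd-2).
kubectl -n "${OCCNE_NAMESPACE}" get pvc | grep mgmd
# Delete the orphaned PVC only after confirming that its data is no longer needed.
# Replace <pvc-name-of-removed-mgm-pod> with the exact name from the previous command.
kubectl -n "${OCCNE_NAMESPACE}" delete pvc <pvc-name-of-removed-mgm-pod>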
Assumption
- All NDB pods of the cnDBTier cluster are up and running (a verification sketch is provided after this list).
- The start node ID must be the same as the existing start node ID. Get the start node ID from the existing cluster using the following command:
$ kubectl -n ${OCCNE_NAMESPACE} exec -it ndbmgmd-0 -- ndb_mgm -e show
As per the following example, the start node ID must be 49 for management, 56 for georeplication SQL, and 70 for nongeoreplication SQL pods.
Connected to Management Server at: localhost:1186
Cluster Configuration
---------------------
[ndbd(NDB)]     2 node(s)
id=1    @10.233.94.13  (mysql-8.0.32 ndb-8.0.32, Nodegroup: 0, *)
id=2    @10.233.124.12  (mysql-8.0.32 ndb-8.0.32, Nodegroup: 0)

[ndb_mgmd(MGM)] 2 node(s)
id=49   @10.233.124.11  (mysql-8.0.32 ndb-8.0.32)
id=50   @10.233.93.14  (mysql-8.0.32 ndb-8.0.32)

[mysqld(API)]   8 node(s)
id=56   @10.233.123.15  (mysql-8.0.32 ndb-8.0.32)
id=57   @10.233.94.14  (mysql-8.0.32 ndb-8.0.32)
id=70   @10.233.120.20  (mysql-8.0.32 ndb-8.0.32)
id=71   @10.233.95.22  (mysql-8.0.32 ndb-8.0.32)
id=222 (not connected, accepting connect from any host)
id=223 (not connected, accepting connect from any host)
id=224 (not connected, accepting connect from any host)
id=225 (not connected, accepting connect from any host)
Note:
Node IDs 222 to 225 in the sample output are shown as "not connected" as they are added as empty slot IDs used for georeplication recovery.
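A minimal sketch to verify the first assumption before starting the upgrade (pod states only; node IDs are verified with the ndb_mgm command shown above):
# Confirm that all cnDBTier pods in the namespace are up and running.
kubectl -n "${OCCNE_NAMESPACE}" get pods
# Optionally, list any pods that are not in the Running or Succeeded phase.
kubectl -n "${OCCNE_NAMESPACE}" get pods --field-selector=status.phase!=Running,status.phase!=Succeeded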
Procedure
To upgrade the cnDBTier clusters, perform the following:
- Download the latest cnDBTier package for upgrade.
For more information about the download procedure, see Downloading cnDBTier Package.
- Create the SSH keys and the secrets for these keys. For creating the keys and secrets, see Creating SSH Keys and Creating Secrets.
- Before performing the upgrade, disable the https mode and DB encryption by setting them to false in the custom_values.yaml file, as follows:
https:
  enable: false
encryption:
  enable: false
- Before performing the upgrade, run the Helm test on the current cnDBTier deployment at all sites. Proceed with the upgrade only if the Helm test is successful on all sites.
$ helm test mysql-cluster --namespace ${OCCNE_NAMESPACE}
Sample output:
NAME: mysql-cluster
LAST DEPLOYED: Tue Nov 07 10:22:58 2023
NAMESPACE: occne-cndbtier
STATUS: deployed
REVISION: 1
TEST SUITE:     mysql-cluster-node-connection-test
Last Started:   Tue Nov 07 10:22:58 2023
Last Completed: Tue Nov 07 10:22:58 2023
Phase:          Succeeded
- If you are performing an upgrade with fixed LoadBalancer IP
for external services, then find the IP addresses of the current
cnDBTier cluster by running the following
command:
$ kubectl get svc -n ${OCCNE_NAMESPACE}
- Configure the LoadBalancer IP addresses that you obtained in the previous step in the custom_values.yaml file by following the cnDBTier configurations provided in the Customizing cnDBTier section.
- If you want to enable backup encryption, perform the following:
- Perform Step 5 of Creating Secrets.
- Set the "/global/backupencryption/enable" parameter in the custom_values.yaml file to true.
- If the Kubernetes version is 1.25 or later and Kyverno is supported or installed on Kubernetes, then run the following commands as applicable (a quick version and Kyverno check is sketched after this step). Otherwise, skip this step.
- If ASM or Istio is installed or running on
Kubernetes, then run the following
command:
$ kubectl apply -f namespace/occndbtier_kyvernopolicy_asm_${OCCNE_VERSION}.yaml -n ${OCCNE_NAMESPACE}
- If ASM or Istio is not installed or running
on Kubernetes, then run the following
command:
$ kubectl apply -f namespace/occndbtier_kyvernopolicy_nonasm_${OCCNE_VERSION}.yaml -n ${OCCNE_NAMESPACE}
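Before applying the Kyverno policy in this step, you can check the Kubernetes server version and whether Kyverno is present; a minimal sketch (the CRD name assumes a standard Kyverno installation):
# Check the Kubernetes client and server versions (this step applies to 1.25 or later).
kubectl version
# Check whether Kyverno is installed by looking for its ClusterPolicy CRD.
kubectl get crd clusterpolicies.kyverno.io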
- If you want to enable secure transfer of backups to a remote server, then perform the following steps:
- Configure the following global parameters in the custom_values.yaml file to enable secure transfer of backups to the remote server (a verification sketch is provided after the note at the end of this procedure):
- "/global/remotetransfer/enable"
- "/global/remotetransfer/faultrecoverybackuptransfer"
- "/global/remotetransfer/remoteserverip"
- "/global/remotetransfer/remoteserverport"
- "/global/remotetransfer/remoteserverpath"
- Create the remote server user name and private key secrets by following step 6 of the Creating Secrets procedure.
Note:
Automated cnDBTier upgrade needs a service account for pod rolling restart and patch. If you want to perform an automated cnDBTier upgrade with a service account, then follow the steps given in the Upgrading cnDBTier Clusters with an Upgrade Service Account section. If you want to upgrade cnDBTier manually without using a service account, then follow the steps given in the Upgrading cnDBTier Clusters without an Upgrade Service Account section.
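As a quick check for step 9, the following sketch displays the remote transfer settings after you edit the custom values file. It assumes that the parameters listed above map to a remotetransfer block under global in occndbtier/custom_values.yaml and that you run it from the cluster directory:
# Display the remote transfer parameters currently set in custom_values.yaml.
grep -A 6 "remotetransfer:" occndbtier/custom_values.yaml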
Upgrading cnDBTier Clusters with an Upgrade Service Account
- Create an upgrade service account if you don't have
one already. Skip this step if you have a service account
with the right role to use for an upgrade. You can check the
details of your service account and role in the
namespace/rbac.yaml
file:
- Run the following Helm command and
note the RELEASE_NAME that is displayed under the
NAME
column:
helm -n ${OCCNE_NAMESPACE} list
- Set the ${OCCNE_RELEASE_NAME}
environment variable with the Helm value of
RELEASE_NAME that you obtained from Step
a.
export OCCNE_RELEASE_NAME="mysql-cluster"
- Update the service account, role,
and rolebinding for upgrade in the
namespace/occndbtier_rbac_${OCCNE_VERSION}.yaml
file:
Note:
You can find the namespace directory at either the /Artifacts/Scripts/ or /Scripts/ relative path depending on the CSAR package type.
sed -i "s/occne-cndbtier/${OCCNE_NAMESPACE}/" namespace/occndbtier_rbac_${OCCNE_VERSION}.yaml
sed -i "s/mysql-cluster/${OCCNE_RELEASE_NAME}/" namespace/occndbtier_rbac_${OCCNE_VERSION}.yaml
sed -i "s/cndbtier-upgrade/${OCCNE_RELEASE_NAME}-upgrade/" namespace/occndbtier_rbac_${OCCNE_VERSION}.yaml
sed -i "s/rolebinding/role/" namespace/occndbtier_rbac_${OCCNE_VERSION}.yaml
- Create upgrade service account,
upgrade role, and upgrade
rolebinding:
kubectl -n ${OCCNE_NAMESPACE} apply -f namespace/occndbtier_rbac_${OCCNE_VERSION}.yaml
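After applying the RBAC file, you can optionally confirm that the upgrade service account, role, and rolebinding exist; a minimal sketch (the "-upgrade" naming follows the sed substitutions applied above):
# Confirm that the upgrade service account, role, and rolebinding were created.
kubectl -n "${OCCNE_NAMESPACE}" get serviceaccount,role,rolebinding | grep -i upgrade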
- Configure the upgrade service account in your
custom_values.yaml
file by using one of the following options:
- If you have
created the upgrade service account as mentioned
in Step 1, run the following commands to configure
the account:
- Run the following command and
note the RELEASE_NAME mentioned in the NAME
column:
helm -n ${OCCNE_NAMESPACE} list
- Set the ${OCCNE_RELEASE_NAME}
environment variable with the Helm value of
RELEASE_NAME that you obtained in Step
i:
export OCCNE_RELEASE_NAME="mysql-cluster"
- Run the following command to
navigate to the cluster
directory:
cd /var/occne/cluster/${OCCNE_CLUSTER}
- Update the service account
information in your custom_values.yaml
file:
sed -i "/ serviceAccountForUpgrade:/,/^$/ { /name:/ s/cndbtier-upgrade-serviceaccount/${OCCNE_RELEASE_NAME}-upgrade-serviceaccount/ }" occndbtier/custom_values.yaml
- If you have a previously created
service account, edit your custom_values.yaml
file, and set
global.serviceAccountForUpgrade.create
to false and global.serviceAccountForUpgrade.name
to the name of your service account.
- If you are upgrading to cnDBTier 23.4.x from a fresh installation of 23.3.x, a fresh installation of 23.2.x, or from 23.3.x that was upgraded from a fresh installation of 23.2.x, then you must already have a service account for upgrade.
You can keep the
custom_values.yaml
file with the same values used for the previous installation or upgrade of cnDBTier.
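Whichever option you use, you can confirm the resulting configuration; a minimal sketch, assuming you run it from the cluster directory and that the block is named serviceAccountForUpgrade as referenced above:
# Display the upgrade service account settings currently in custom_values.yaml.
grep -A 3 "serviceAccountForUpgrade:" occndbtier/custom_values.yaml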
- Upgrade the cnDBTier by running
the commands below:
- Run the following command and make a note of the
release name from the NAME column in the
output:
helm -n ${OCCNE_NAMESPACE} list
export OCCNE_RELEASE_NAME="mysql-cluster"
- Navigate to the cluster
directory:
cd /var/occne/cluster/${OCCNE_CLUSTER}
- Run the following command to start
the upgrade:
Note:
Replace the ${OCCNE_RELEASE_NAME}
environment variable in the command with the Helm value of RELEASE_NAME that you obtained in Step a.
helm -n ${OCCNE_NAMESPACE} upgrade ${OCCNE_RELEASE_NAME} occndbtier -f occndbtier/custom_values.yaml
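While waiting for the MGM and NDB pods to restart in the next step, you can monitor the rollout; a minimal sketch, assuming the release and StatefulSet names used elsewhere in this document:
# Check the Helm release status after starting the upgrade.
helm -n "${OCCNE_NAMESPACE}" status "${OCCNE_RELEASE_NAME}"
# Watch the cnDBTier pods restart; press Ctrl+C to stop watching.
kubectl -n "${OCCNE_NAMESPACE}" get pods -w
# Alternatively, wait for individual StatefulSets to finish rolling out.
kubectl -n "${OCCNE_NAMESPACE}" rollout status statefulset/ndbmgmd
kubectl -n "${OCCNE_NAMESPACE}" rollout status statefulset/ndbmtd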
- Wait for all the MGM and NDB pods to restart.
- Run the following command to perform a rollout restart of the NDB pods. This restart of the NDB pods is required for the updated HeartbeatIntervalDbDb configuration to take effect. This step is required only if you are upgrading from a release that doesn't have this feature:
kubectl -n ${OCCNE_NAMESPACE} rollout restart statefulset ndbmtd
- Wait for all the NDB pods to restart.
- Verify the cluster status from the management pod by
running the following
command:
$ kubectl -n ${OCCNE_NAMESPACE} exec -it ndbmgmd-0 -- ndb_mgm -e show
Connected to Management Server at: localhost:1186
Cluster Configuration
---------------------
[ndbd(NDB)]     2 node(s)
id=1    @10.233.104.176  (mysql-8.0.35 ndb-8.0.35, Nodegroup: 0)
id=2    @10.233.121.175  (mysql-8.0.35 ndb-8.0.35, Nodegroup: 0, *)

[ndb_mgmd(MGM)] 2 node(s)
id=49   @10.233.101.154  (mysql-8.0.35 ndb-8.0.35)
id=50   @10.233.104.174  (mysql-8.0.35 ndb-8.0.35)

[mysqld(API)]   8 node(s)
id=56   @10.233.92.169  (mysql-8.0.35 ndb-8.0.35)
id=57   @10.233.101.155  (mysql-8.0.35 ndb-8.0.35)
id=70   @10.233.92.170  (mysql-8.0.35 ndb-8.0.35)
id=71   @10.233.121.176  (mysql-8.0.35 ndb-8.0.35)
id=222 (not connected, accepting connect from any host)
id=223 (not connected, accepting connect from any host)
id=224 (not connected, accepting connect from any host)
id=225 (not connected, accepting connect from any host)
Note:
Node IDs 222 to 225 in the sample output are shown as "not connected" as these are added as empty slot IDs that are used for georeplication recovery.
- Run the following Helm test command to verify if the
cnDBTier services are upgraded
successfully:
$ helm test mysql-cluster --namespace ${OCCNE_NAMESPACE}
Sample output:
NAME: mysql-cluster
LAST DEPLOYED: Tue Nov 07 10:22:58 2023
NAMESPACE: occne-cndbtier
STATUS: deployed
REVISION: 1
TEST SUITE:     mysql-cluster-node-connection-test
Last Started:   Tue Nov 07 10:22:58 2023
Last Completed: Tue Nov 07 10:22:58 2023
Phase:          Succeeded
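In addition to the Helm test, you can optionally confirm that the pods are running the upgraded images; a minimal sketch that lists only the first container image of each pod and therefore may not cover sidecar containers:
# List each pod with the image of its first container.
kubectl -n "${OCCNE_NAMESPACE}" get pods -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.containers[0].image}{"\n"}{end}'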
Upgrading cnDBTier Clusters Without an Upgrade Service Account
- Configure the custom_values.yaml file to disable
upgrade service
account:
cd /var/occne/cluster/${OCCNE_CLUSTER}
# Update the service account information in your custom_values.yaml file
sed -i "/ serviceAccountForUpgrade:/,/^$/ { /create:/ s/true/false/; /name:/ s/cndbtier-upgrade-serviceaccount// }" occndbtier/custom_values.yaml
Alternatively, edit the custom_values.yaml file, and manually set
global.serviceAccountForUpgrade.create
to false and global.serviceAccountForUpgrade.name
to "" (empty).
- If you are upgrading from a previous cnDBTier
release, perform the following steps to apply the schema
changes and run the preupgrade script:
- Run the following command on the
Bastion Host to apply the schema changes.
Note:
Replace the values of the environment variables in the following commands with the values corresponding to your cluster.
export OCCNE_NAMESPACE="occne-cndbtier"
export MYSQL_CONNECTIVITY_SERVICE="mysql-connectivity-service"
export MYSQL_USERNAME="occneuser"
export MYSQL_PASSWORD="<password for the user occneuser>"
export DBTIER_REPLICATION_SVC_DATABASE="replication_info"
export DBTIER_BACKUP_SVC_DATABASE="backup_info"
export DBTIER_HBREPLICAGROUP_DATABASE="hbreplica_info"
export REPLCHANNEL_GROUP_COUNT=<configured number of replication channel groups, that is, 1, 2, or 3>
export MYSQL_CMD="kubectl -n <namespace> exec <ndbmysqld-0/ndbappmysqld-0 pod name> -- mysql --binary-as-hex=0 --show-warnings"

occndbtier/files/hooks.sh --schema-upgrade
- Run the following commands on
Bastion Host to run the preupgrade procedure.
Note:
Replace the values of the environment variables in the following commands with the values corresponding to your cluster.
export OCCNE_NAMESPACE="occne-cndbtier"
export NDB_MGMD_PODS="ndbmgmd-0 ndbmgmd-1"

occndbtier/files/hooks.sh --pre-upgrade
- Run the following commands to upgrade the
cnDBTier:
- Run the following command to obtain
the Helm value of
RELEASE_NAME.
helm -n ${OCCNE_NAMESPACE} list
export OCCNE_RELEASE_NAME="mysql-cluster"
The RELEASE_NAME can be found under the NAME column in the output.
- Run the following command to
upgrade
cnDBTier:
cd /var/occne/cluster/${OCCNE_CLUSTER}
helm -n ${OCCNE_NAMESPACE} upgrade ${OCCNE_RELEASE_NAME} occndbtier -f occndbtier/custom_values.yaml --no-hooks
where,
${OCCNE_RELEASE_NAME}
is the Helm value of RELEASE_NAME obtained in Step a.
- Perform the following steps to
run the postupgrade script. This deletes all MGM pods, waits
for them to come up, and patches upgradeStrategy.
Note:
Replace the values of the environment variables in the following commands with the values corresponding to your cluster.- Define the following environment
variables:
export OCCNE_NAMESPACE="occne-cndbtier"
#export API_EMP_TRY_SLOTS_NODE_IDS="id=222"
export API_EMP_TRY_SLOTS_NODE_IDS="id=222\|id=223\|id=224\|id=225"
export MGM_NODE_IDS="id=49\|id=50"
- Export all the
ndbmtd
node IDs in the following environment variable:
export NDB_NODE_IDS="id=1\|id=2"
- Export all the
ndbmysqld
node IDs in the following environment variables:
Note:
ndbmysqld node IDs start at global.api.startNodeId and end at (global.api.startNodeId + global.apiReplicaCount - 1).
export API_NODE_IDS="id=56\|id=57"
export NDB_MGMD_PODS="ndbmgmd-0 ndbmgmd-1"
export NDB_MTD_PODS="ndbmtd-0 ndbmtd-1"
export NDB_STS_NAME="ndbmtd"
export API_STS_NAME="ndbmysqld"
export APP_STS_NAME="ndbappmysqld"
- If auto scaling is enabled for
ndbapp sts, then additionally define the following
environment
variables:
#export NDBAPP_START_NODE_ID="<as configured in values.yaml: global.ndbapp.startNodeId>"
#export NDBAPP_REPLICA_MAX_COUNT="<as configured in values.yaml: global.ndbappReplicaMaxCount>"
- If
values.global.ndbapp.ndb_cluster_connection_pool
is greater than one, then declare the following environment variable:
export APP_CON_POOL_INGORE_NODE_IDS="id=100\|id=101\|id=102 ... \|id=(n-1)\|id=n"
where, n is calculated using the following formula (a worked sketch is provided at the end of this step):
n = 100 + (((values.global.ndbapp.ndb_cluster_connection_pool - 1) * values.global.ndbappReplicaMaxCount) - 1)
- Run the postupgrade
script:
occndbtier/files/hooks.sh --post-upgrade
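The following is a worked sketch of the connection pool formula referenced above and of building the APP_CON_POOL_INGORE_NODE_IDS value. The values for ndb_cluster_connection_pool and ndbappReplicaMaxCount are assumptions for illustration only; substitute the values from your values.yaml:
# Example values (assumptions for illustration):
#   values.global.ndbapp.ndb_cluster_connection_pool = 2
#   values.global.ndbappReplicaMaxCount              = 4
CONNECTION_POOL=2
NDBAPP_REPLICA_MAX_COUNT=4
# n = 100 + (((ndb_cluster_connection_pool - 1) * ndbappReplicaMaxCount) - 1)
#   = 100 + (((2 - 1) * 4) - 1) = 103
n=$((100 + (((CONNECTION_POOL - 1) * NDBAPP_REPLICA_MAX_COUNT) - 1)))
# Build "id=100\|id=101\|...\|id=n" for the node ID range 100 to n.
export APP_CON_POOL_INGORE_NODE_IDS=$(seq -s '\|' -f 'id=%g' 100 "${n}")
echo "${APP_CON_POOL_INGORE_NODE_IDS}"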
- Wait for all the MGM and NDB pods to restart.
- Run the following command to perform a rollout restart of the NDB pods. This restart of the NDB pods is required for the updated HeartbeatIntervalDbDb configuration to take effect. This step is required only if you are upgrading from a release that doesn't have this feature:
kubectl -n ${OCCNE_NAMESPACE} rollout restart statefulset ndbmtd
- Wait for all the NDB pods to restart.
- Run the following command to verify the cluster
status from the management
pod:
$ kubectl -n ${OCCNE_NAMESPACE} exec -it ndbmgmd-0 -- ndb_mgm -e show
Sample output:
Connected to Management Server at: localhost:1186
Cluster Configuration
---------------------
[ndbd(NDB)]     2 node(s)
id=1    @10.233.104.176  (mysql-8.0.35 ndb-8.0.35, Nodegroup: 0)
id=2    @10.233.121.175  (mysql-8.0.35 ndb-8.0.35, Nodegroup: 0, *)

[ndb_mgmd(MGM)] 2 node(s)
id=49   @10.233.101.154  (mysql-8.0.35 ndb-8.0.35)
id=50   @10.233.104.174  (mysql-8.0.35 ndb-8.0.35)

[mysqld(API)]   8 node(s)
id=56   @10.233.92.169  (mysql-8.0.35 ndb-8.0.35)
id=57   @10.233.101.155  (mysql-8.0.35 ndb-8.0.35)
id=70   @10.233.92.170  (mysql-8.0.35 ndb-8.0.35)
id=71   @10.233.121.176  (mysql-8.0.35 ndb-8.0.35)
id=222 (not connected, accepting connect from any host)
id=223 (not connected, accepting connect from any host)
id=224 (not connected, accepting connect from any host)
id=225 (not connected, accepting connect from any host)
Note:
Node IDs 222 to 225 in the sample output are shown as "not connected" as these are added as empty slot IDs that are used for georeplication recovery.
- Run the following Helm test command to verify if
the cnDBTier services are upgraded
successfully:
$ helm test mysql-cluster --namespace ${OCCNE_NAMESPACE}
Sample output:
NAME: mysql-cluster
LAST DEPLOYED: Tue Nov 07 10:22:58 2023
NAMESPACE: occne-cndbtier
STATUS: deployed
REVISION: 1
TEST SUITE:     mysql-cluster-node-connection-test
Last Started:   Tue Nov 07 10:22:58 2023
Last Completed: Tue Nov 07 10:22:58 2023
Phase:          Succeeded
- After successful upgrade, update the alerts and Grafana dashboards using Postinstallation Tasks.
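As an optional final check before the postinstallation tasks, you can confirm the upgraded release revision and chart version; a minimal sketch, assuming the release name mysql-cluster used in the examples above:
# Confirm the new revision and chart version of the upgraded cnDBTier release.
helm -n "${OCCNE_NAMESPACE}" list
helm -n "${OCCNE_NAMESPACE}" history mysql-cluster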