2 Installing cnDBTier
This chapter provides information about installing cnDBTier in a cloud native environment.
Note:
- cnDBTier 24.3.1 supports both fresh installation and upgrade from 24.1.x or 24.2.x. For more information on how to upgrade cnDBTier, see Upgrading cnDBTier.
- The OCCNE_NAMESPACE variable in the installation procedures must be set to the cnDBTier namespace. Before running any command that contains the OCCNE_NAMESPACE variable, ensure that you have set this variable to the cnDBTier namespace as stated in the following code block:
  export OCCNE_NAMESPACE=<namespace>
  where, <namespace> is the cnDBTier namespace.
- The namespace name "occne-cndbtier" given in the installation procedures is only an example. Ensure that you configure the namespace name according to your environment.
The following terms are used in this chapter to refer to the CNE clusters:
- CNE Cluster 1: First CNE Cluster on which cnDBTier is installed to establish georeplication
- CNE Cluster 2: Second CNE Cluster on which cnDBTier is installed to establish georeplication
- CNE Cluster 3: Third CNE Cluster on which cnDBTier is installed to establish georeplication
- CNE Cluster 4: Fourth CNE Cluster on which cnDBTier is installed to establish georeplication
Note:
When you enable secure transfer of backups to a remote server:
- cnDBTier doesn't purge the backups that are pushed to the remote server. Therefore, when necessary, make sure you manually purge the old backups on the remote server.
- cnDBTier doesn't transfer any existing backups taken using the old cnDBTier version to the remote server or storage.
- cnDBTier supports secure transfer of backups to only one remote server.
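Because cnDBTier does not purge backups pushed to the remote server, you may want to schedule a periodic cleanup on that server. The following is a minimal sketch, assuming a hypothetical backup directory /backup/cndbtier on the SFTP server and a 30-day retention period; adjust both to your environment:
  # Run on the remote backup server (for example, from cron); the path and retention period are illustrative
  $ find /backup/cndbtier -type f -mtime +30 -delete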
2.1 Prerequisites
Before installing and configuring cnDBTier, ensure that the following prerequisites are met.
Note:
- cnDBTier can be installed on any Kubernetes environment that provides dynamic Kubernetes storage.
- All the replication service pods of cnDBTier cluster must be able to communicate with all replication service pods of the mate cnDBTier clusters.
- If you want to enable secure transfer of backups to a remote server, then:
  - configure the remote server or storage with SFTP.
  - provide the path where the files must be stored and the necessary permissions for cnDBTier to copy the backups to the remote server.
  - configure the private and public keys to access the remote server where SFTP is installed.
  - provide the details configured in the previous points, except the public key, to cnDBTier either during a fresh installation or an upgrade so that cnDBTier can store the backups remotely.
  - note that the password used for encryption of the backups isn't stored on the remote server if backup encryption is enabled.
  For more information about the configuration parameters, see Global Parameters.
- cnDBTier 24.3.1 supports Helm 3.12.3 and 3.13.2. Ensure that you upgrade Helm to a supported version.
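You can confirm the installed Helm version on the bastion host before proceeding:
  $ helm version --short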
2.1.1 Setting Up the Environment Variables
This section describes the environment setup requirements for installing cnDBTier.
- Log in to the bastion host and export the following environment variables before installation:
  export OCCNE_CLUSTER=<cluster_name>
  export OCCNE_VERSION=<OCCNE_VERSION>          # OCCNE_VERSION is the cnDBTier version, for example: 24.3.1
  export OCCNE_REPOSITORY=<docker_repository>   # Export it to the repository depending on your cloud environment. If you are using CNE, export this variable to the CNE repository.
  export OCCNE_DOMAIN=<domain_name>
  export OCCNE_NAMESPACE=<namespace>
  export OCCNE_SITE_ID=<siteid>                 # For example: 1 for cnDBTier cluster 1, 2 for cnDBTier cluster 2, 3 for cnDBTier cluster 3, 4 for cnDBTier cluster 4
  export OCCNE_SITE_NAME=<sitename>             # For example: chicago
  export OCCNE_MATE_SITE_NAME=<matesitename>    # For example: atlantic, pacific, or redsea
  For configuring the second mate site in the case of 3-site replication, export the following:
  export OCCNE_ENABLE_SECOND_MATE_SITE=<enablesecondmatesite>   # true or false
  export OCCNE_SECOND_MATE_SITE_NAME=<secondmatesitename>       # For example: pacific, atlantic, or redsea
  For configuring the third mate site in the case of 4-site replication, export the following:
  export OCCNE_ENABLE_THIRD_MATE_SITE=<enablethirdmatesite>     # true or false
  export OCCNE_THIRD_MATE_SITE_NAME=<thirdmatesitename>         # For example: pacific, atlantic, or redsea
Example:
- If you are deploying site 1, then export the environment variables as follows:
  export OCCNE_CLUSTER=occne_cndbtierone
  export OCCNE_VERSION=24.3.1
  export OCCNE_REPOSITORY=occne1-cgbu-cne-dbtier-bastion-1:5000/occne
  export OCCNE_DOMAIN=occne1-cgbu-cne-dbtier
  export OCCNE_NAMESPACE=occne-cndbtierone
  export OCCNE_SITE_ID=1
  export OCCNE_SITE_NAME=chicago
  export OCCNE_MATE_SITE_NAME=atlantic
  export OCCNE_ENABLE_SECOND_MATE_SITE=true
  export OCCNE_SECOND_MATE_SITE_NAME=pacific
  export OCCNE_ENABLE_THIRD_MATE_SITE=true
  export OCCNE_THIRD_MATE_SITE_NAME=redsea
- If you are deploying site 2, then export the environment variables as follows:
  export OCCNE_CLUSTER=occne_cndbtiertwo
  export OCCNE_VERSION=24.3.1
  export OCCNE_REPOSITORY=occne2-cgbu-cne-dbtier-bastion-1:5000/occne
  export OCCNE_DOMAIN=occne2-cgbu-cne-dbtier
  export OCCNE_NAMESPACE=occne-cndbtiertwo
  export OCCNE_SITE_ID=2
  export OCCNE_SITE_NAME=atlantic
  export OCCNE_MATE_SITE_NAME=chicago
  export OCCNE_ENABLE_SECOND_MATE_SITE=true
  export OCCNE_SECOND_MATE_SITE_NAME=pacific
  export OCCNE_ENABLE_THIRD_MATE_SITE=true
  export OCCNE_THIRD_MATE_SITE_NAME=redsea
- If you are deploying site 3, then export the environment variables as follows:
  export OCCNE_CLUSTER=occne_cndbtierthree
  export OCCNE_VERSION=24.3.1
  export OCCNE_REPOSITORY=occne3-cgbu-cne-dbtier-bastion-1:5000/occne
  export OCCNE_DOMAIN=occne3-cgbu-cne-dbtier
  export OCCNE_NAMESPACE=occne-cndbtierthree
  export OCCNE_SITE_ID=3
  export OCCNE_SITE_NAME=pacific
  export OCCNE_MATE_SITE_NAME=chicago
  export OCCNE_ENABLE_SECOND_MATE_SITE=true
  export OCCNE_SECOND_MATE_SITE_NAME=atlantic
  export OCCNE_ENABLE_THIRD_MATE_SITE=true
  export OCCNE_THIRD_MATE_SITE_NAME=redsea
- If you are deploying site 4, then export the environment variables as follows:
  export OCCNE_CLUSTER=occne_cndbtierfour
  export OCCNE_VERSION=24.3.1
  export OCCNE_REPOSITORY=occne4-cgbu-cne-dbtier-bastion-1:5000/occne
  export OCCNE_DOMAIN=occne4-cgbu-cne-dbtier
  export OCCNE_NAMESPACE=occne-cndbtierfour
  export OCCNE_SITE_ID=4
  export OCCNE_SITE_NAME=redsea
  export OCCNE_MATE_SITE_NAME=chicago
  export OCCNE_ENABLE_SECOND_MATE_SITE=true
  export OCCNE_SECOND_MATE_SITE_NAME=atlantic
  export OCCNE_ENABLE_THIRD_MATE_SITE=true
  export OCCNE_THIRD_MATE_SITE_NAME=pacific
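After exporting the variables for your site, you can verify them in one step (a simple sanity check, not part of the official procedure):
  $ env | grep ^OCCNE_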
- Deploy mate site (site 2): While installing MySQL Network Database (NDB) on the second site, provide the mate site DB replication service Load Balancer IP as the configuration parameter for the georeplication process to start.
  Note:
  Skip this step if this is a single-site deployment or if this is the first site of a mated-site deployment.
  To obtain the mate site DB replication service Load Balancer external-IP from site 1:
  - Log in to the bastion host of the first site.
  - Run the following command to retrieve the mate site DB replication service Load Balancer external-IP from site 1:
    kubectl -n ${OCCNE_NAMESPACE} get svc | grep ${OCCNE_SITE_NAME}-${OCCNE_MATE_SITE_NAME}-replication-svc | awk '{ print $4 }'
    <DB replication service Load Balancer IP>
    Example:
    kubectl -n ${OCCNE_NAMESPACE} get svc | grep ${OCCNE_SITE_NAME}-${OCCNE_MATE_SITE_NAME}-replication-svc | awk '{ print $4 }'
    10.75.224.168
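Optionally, you can capture the retrieved EXTERNAL-IP directly into a shell variable for later use; MATE_REPL_SVC_IP is an illustrative name, not part of the official procedure:
  $ MATE_REPL_SVC_IP=$(kubectl -n ${OCCNE_NAMESPACE} get svc | grep ${OCCNE_SITE_NAME}-${OCCNE_MATE_SITE_NAME}-replication-svc | awk '{ print $4 }')
  $ echo ${MATE_REPL_SVC_IP}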
- Deploy mate site (site 3): While installing MySQL NDB on the third site, provide the mate site DB replication service Load Balancer IP as the configuration parameter for the georeplication process to start.
  Note:
  Skip this step if this is a single-site deployment or if this is the first site of a mated-site deployment.
  To obtain the mate site DB replication service Load Balancer external-IP from site 1 and site 2:
  - Log in to the bastion host of the first site.
  - Run the following command to retrieve the mate site DB replication service Load Balancer external-IP from site 1:
    kubectl -n ${OCCNE_NAMESPACE} get svc | grep ${OCCNE_SITE_NAME}-${OCCNE_SECOND_MATE_SITE_NAME}-replication-svc | awk '{ print $4 }'
    <DB replication service Load Balancer IP>
    Example:
    kubectl -n ${OCCNE_NAMESPACE} get svc | grep ${OCCNE_SITE_NAME}-${OCCNE_SECOND_MATE_SITE_NAME}-replication-svc | awk '{ print $4 }'
    10.75.224.169
- Log in to the bastion host of the second site.
- Run the following command to retrieve the mate site DB replication service Load Balancer external-IP from site 2:
    kubectl -n ${OCCNE_NAMESPACE} get svc | grep ${OCCNE_SITE_NAME}-${OCCNE_SECOND_MATE_SITE_NAME}-replication-svc | awk '{ print $4 }'
    <DB replication service Load Balancer IP>
    Example:
    kubectl -n ${OCCNE_NAMESPACE} get svc | grep ${OCCNE_SITE_NAME}-${OCCNE_SECOND_MATE_SITE_NAME}-replication-svc | awk '{ print $4 }'
    10.75.224.170
- Deploy mate site (site 4): While installing MySQL NDB on the fourth site, provide the mate site DB replication service Load Balancer IP as the configuration parameter for the georeplication process to start.
  Note:
  Skip this step if this is a single-site deployment or if this is the first site of a mated-site deployment.
  To obtain the mate site DB replication service Load Balancer external-IP from site 1, site 2, and site 3:
  - Log in to the bastion host of the first site.
  - Run the following command to retrieve the mate site DB replication service Load Balancer external-IP from site 1:
    kubectl -n ${OCCNE_NAMESPACE} get svc | grep ${OCCNE_SITE_NAME}-${OCCNE_THIRD_MATE_SITE_NAME}-replication-svc | awk '{ print $4 }'
    <DB replication service Load Balancer IP>
    Example:
    kubectl -n ${OCCNE_NAMESPACE} get svc | grep ${OCCNE_SITE_NAME}-${OCCNE_THIRD_MATE_SITE_NAME}-replication-svc | awk '{ print $4 }'
    10.75.226.121
- Log in to the bastion host of the second site.
- Run the following command to retrieve the mate site DB replication service Load Balancer external-IP from site 2:
    kubectl -n ${OCCNE_NAMESPACE} get svc | grep ${OCCNE_SITE_NAME}-${OCCNE_THIRD_MATE_SITE_NAME}-replication-svc | awk '{ print $4 }'
    <DB replication service Load Balancer IP>
    Example:
    kubectl -n ${OCCNE_NAMESPACE} get svc | grep ${OCCNE_SITE_NAME}-${OCCNE_THIRD_MATE_SITE_NAME}-replication-svc | awk '{ print $4 }'
    10.75.226.122
- Log in to the bastion host of the third site.
- Run the following command to retrieve the mate site DB replication service Load Balancer external-IP from site 3:
    kubectl -n ${OCCNE_NAMESPACE} get svc | grep ${OCCNE_SITE_NAME}-${OCCNE_THIRD_MATE_SITE_NAME}-replication-svc | awk '{ print $4 }'
    <DB replication service Load Balancer IP>
    Example:
    kubectl -n ${OCCNE_NAMESPACE} get svc | grep ${OCCNE_SITE_NAME}-${OCCNE_THIRD_MATE_SITE_NAME}-replication-svc | awk '{ print $4 }'
    10.75.226.123
- Export the OCCNE_MATE_REPLICATION_SVC, OCCNE_SECOND_MATE_REPLICATION_SVC, and OCCNE_THIRD_MATE_REPLICATION_SVC environment variables:
  - If you are deploying site 1, set the environment variables to "" (empty):
    export OCCNE_MATE_REPLICATION_SVC=""
    export OCCNE_SECOND_MATE_REPLICATION_SVC=""
    export OCCNE_THIRD_MATE_REPLICATION_SVC=""
  - If you are deploying mate site (site 2), set the environment variables to the EXTERNAL-IP from site 1 (refer to step 4.b):
    export OCCNE_MATE_REPLICATION_SVC="10.75.224.168"
    export OCCNE_SECOND_MATE_REPLICATION_SVC=""
    export OCCNE_THIRD_MATE_REPLICATION_SVC=""
  - If you are deploying mate site (site 3), export the following variables on the site 3 bastion host, setting them to the EXTERNAL-IPs from site 1 and site 2 (refer to steps 4.b and 4.d):
    export OCCNE_MATE_REPLICATION_SVC="10.75.224.169"
    export OCCNE_SECOND_MATE_REPLICATION_SVC="10.75.224.170"
    export OCCNE_THIRD_MATE_REPLICATION_SVC=""
  - If you are deploying mate site (site 4), export the following variables on the site 4 bastion host, setting them to the EXTERNAL-IPs from site 1, site 2, and site 3 (refer to steps 4.b, 4.d, and 4.f):
    export OCCNE_MATE_REPLICATION_SVC="10.75.226.121"
    export OCCNE_SECOND_MATE_REPLICATION_SVC="10.75.226.122"
    export OCCNE_THIRD_MATE_REPLICATION_SVC="10.75.226.123"
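Before proceeding, you can quickly confirm that the replication service variables are set as intended (a simple sanity check, not part of the official procedure):
  $ env | grep _REPLICATION_SVC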
2.1.2 Downloading cnDBTier Package
The following procedure explains the steps to download the cnDBTier package from OSDC or MOS and load it to the directory.
- On the bastion host, run the following command to create the root directory for the cnDBTier installation files:
  $ mkdir -p /var/occne/cluster/${OCCNE_CLUSTER}
- Download the cnDBTier CSAR package from OSDC or MOS to the directory created in the previous step. After a successful download, run the following commands to extract the cnDBTier CSAR package to the local disk:
  Note:
  If the CSAR package name downloaded from MOS has the format p<mos_patch_no>_*_Tekelec.zip, then unzip the package as it contains both the cnDBTier CSAR package and the cnDBTier CSAR sha256 package. Use the unzip command to extract the package and get the cnDBTier CSAR package:
    $ unzip <mos_package_file_name>
    For example:
    $ unzip p33890522_111000_Tekelec.zip
  $ mkdir -p /var/occne/cluster/${OCCNE_CLUSTER}/csar_occndbtier_${OCCNE_VERSION}
  # Unzip the CSAR package into the "csar_occndbtier_${OCCNE_VERSION}" directory
  $ unzip <cnDBTier CSAR package> -d /var/occne/cluster/${OCCNE_CLUSTER}/csar_occndbtier_${OCCNE_VERSION}
  $ cd /var/occne/cluster/${OCCNE_CLUSTER}
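You can also verify the integrity of the downloaded CSAR package against the sha256 file included in the MOS zip (a quick check; the checksum file name depends on the package you downloaded):
  $ sha256sum -c <cnDBTier CSAR sha256 file>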
- Depending on the type of CSAR package, the Helm chart package is located at either the /Artifacts/Scripts/occndbtier-${OCCNE_VERSION}.tgz or the /Files/Helm/occndbtier-${OCCNE_VERSION}.tgz relative path. Untar the cnDBTier Helm package from that location into the /var/occne/cluster/${OCCNE_CLUSTER}/ directory:
  $ tar -xzvf occndbtier-${OCCNE_VERSION}.tgz -C /var/occne/cluster/${OCCNE_CLUSTER}/
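You can optionally inspect the extracted chart metadata to confirm that the chart version matches ${OCCNE_VERSION}; this assumes the package untars into an occndbtier directory:
  $ helm show chart /var/occne/cluster/${OCCNE_CLUSTER}/occndbtier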
- If Docker is installed on the bastion host, load all the Docker images into the Docker engine on the bastion host, then tag and push them to the repository. Depending on the type of CSAR package, the Docker images for cnDBTier can be found at either the /Artifacts/Images or the /Files/ relative path.
  $ docker load -i occne-mysql-cluster-${OCCNE_VERSION}.tar
  $ docker tag occne/mysql-cluster:${OCCNE_VERSION} ${OCCNE_REPOSITORY}/mysql-cluster:${OCCNE_VERSION}
  $ docker push ${OCCNE_REPOSITORY}/mysql-cluster:${OCCNE_VERSION}
  $ docker load -i occne-cndbtier-mysqlndb-client-${OCCNE_VERSION}.tar
  $ docker tag occne/cndbtier-mysqlndb-client:${OCCNE_VERSION} ${OCCNE_REPOSITORY}/cndbtier-mysqlndb-client:${OCCNE_VERSION}
  $ docker push ${OCCNE_REPOSITORY}/cndbtier-mysqlndb-client:${OCCNE_VERSION}
  $ docker load -i occne-db-monitor-svc-${OCCNE_VERSION}.tar
  $ docker tag occne/db_monitor_svc:${OCCNE_VERSION} ${OCCNE_REPOSITORY}/db_monitor_svc:${OCCNE_VERSION}
  $ docker push ${OCCNE_REPOSITORY}/db_monitor_svc:${OCCNE_VERSION}
  $ docker load -i occne-db-replication-svc-${OCCNE_VERSION}.tar
  $ docker tag occne/db_replication_svc:${OCCNE_VERSION} ${OCCNE_REPOSITORY}/db_replication_svc:${OCCNE_VERSION}
  $ docker push ${OCCNE_REPOSITORY}/db_replication_svc:${OCCNE_VERSION}
  $ docker load -i occne-db-backup-manager-svc-${OCCNE_VERSION}.tar
  $ docker tag occne/db_backup_manager_svc:${OCCNE_VERSION} ${OCCNE_REPOSITORY}/db_backup_manager_svc:${OCCNE_VERSION}
  $ docker push ${OCCNE_REPOSITORY}/db_backup_manager_svc:${OCCNE_VERSION}
  $ docker load -i occne-db-backup-executor-svc-${OCCNE_VERSION}.tar
  $ docker tag occne/db_backup_executor_svc:${OCCNE_VERSION} ${OCCNE_REPOSITORY}/db_backup_executor_svc:${OCCNE_VERSION}
  $ docker push ${OCCNE_REPOSITORY}/db_backup_executor_svc:${OCCNE_VERSION}
  $ docker load -i occne-db-infra-monitor-svc-${OCCNE_VERSION}.tar
  $ docker tag occne/db_infra_monitor_svc:${OCCNE_VERSION} ${OCCNE_REPOSITORY}/db_infra_monitor_svc:${OCCNE_VERSION}
  $ docker push ${OCCNE_REPOSITORY}/db_infra_monitor_svc:${OCCNE_VERSION}
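The load, tag, and push sequence above can also be expressed as a loop. The following is a convenience sketch with the tar-file-to-image-name pairs taken from the commands above; verify the names against your package before using it:
  # Sketch: iterate over "tar-name:image-name" pairs from the commands above
  for pair in \
      mysql-cluster:mysql-cluster \
      cndbtier-mysqlndb-client:cndbtier-mysqlndb-client \
      db-monitor-svc:db_monitor_svc \
      db-replication-svc:db_replication_svc \
      db-backup-manager-svc:db_backup_manager_svc \
      db-backup-executor-svc:db_backup_executor_svc \
      db-infra-monitor-svc:db_infra_monitor_svc; do
    tarname=${pair%%:*}; image=${pair##*:}
    docker load -i occne-${tarname}-${OCCNE_VERSION}.tar
    docker tag occne/${image}:${OCCNE_VERSION} ${OCCNE_REPOSITORY}/${image}:${OCCNE_VERSION}
    docker push ${OCCNE_REPOSITORY}/${image}:${OCCNE_VERSION}
  done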
- If podman is installed on the bastion host, then load all the container images using podman on the bastion host, and tag and push them to the repository:
  $ sudo podman load -i occne-mysql-cluster-$( echo ${OCCNE_VERSION} | tr '_' '-' ).tar
  $ sudo podman tag occne/mysql-cluster:${OCCNE_VERSION} ${OCCNE_REPOSITORY}/mysql-cluster:${OCCNE_VERSION}
  $ sudo podman push ${OCCNE_REPOSITORY}/mysql-cluster:${OCCNE_VERSION}
  $ sudo podman image rm ${OCCNE_REPOSITORY}/mysql-cluster:${OCCNE_VERSION} occne/mysql-cluster:${OCCNE_VERSION}
  $ sudo podman load -i occne-cndbtier-mysqlndb-client-$( echo ${OCCNE_VERSION} | tr '_' '-' ).tar
  $ sudo podman tag occne/cndbtier-mysqlndb-client:${OCCNE_VERSION} ${OCCNE_REPOSITORY}/cndbtier-mysqlndb-client:${OCCNE_VERSION}
  $ sudo podman push ${OCCNE_REPOSITORY}/cndbtier-mysqlndb-client:${OCCNE_VERSION}
  $ sudo podman image rm ${OCCNE_REPOSITORY}/cndbtier-mysqlndb-client:${OCCNE_VERSION} occne/cndbtier-mysqlndb-client:${OCCNE_VERSION}
  $ sudo podman load -i occne-db-monitor-svc-$( echo ${OCCNE_VERSION} | tr '_' '-' ).tar
  $ sudo podman tag occne/db_monitor_svc:${OCCNE_VERSION} ${OCCNE_REPOSITORY}/db_monitor_svc:${OCCNE_VERSION}
  $ sudo podman push ${OCCNE_REPOSITORY}/db_monitor_svc:${OCCNE_VERSION}
  $ sudo podman image rm ${OCCNE_REPOSITORY}/db_monitor_svc:${OCCNE_VERSION} occne/db_monitor_svc:${OCCNE_VERSION}
  $ sudo podman load -i occne-db-replication-svc-$( echo ${OCCNE_VERSION} | tr '_' '-' ).tar
  $ sudo podman tag occne/db_replication_svc:${OCCNE_VERSION} ${OCCNE_REPOSITORY}/db_replication_svc:${OCCNE_VERSION}
  $ sudo podman push ${OCCNE_REPOSITORY}/db_replication_svc:${OCCNE_VERSION}
  $ sudo podman image rm ${OCCNE_REPOSITORY}/db_replication_svc:${OCCNE_VERSION} occne/db_replication_svc:${OCCNE_VERSION}
  $ sudo podman load -i occne-db-backup-manager-svc-$( echo ${OCCNE_VERSION} | tr '_' '-' ).tar
  $ sudo podman tag occne/db_backup_manager_svc:${OCCNE_VERSION} ${OCCNE_REPOSITORY}/db_backup_manager_svc:${OCCNE_VERSION}
  $ sudo podman push ${OCCNE_REPOSITORY}/db_backup_manager_svc:${OCCNE_VERSION}
  $ sudo podman image rm ${OCCNE_REPOSITORY}/db_backup_manager_svc:${OCCNE_VERSION} occne/db_backup_manager_svc:${OCCNE_VERSION}
  $ sudo podman load -i occne-db-backup-executor-svc-$( echo ${OCCNE_VERSION} | tr '_' '-' ).tar
  $ sudo podman tag occne/db_backup_executor_svc:${OCCNE_VERSION} ${OCCNE_REPOSITORY}/db_backup_executor_svc:${OCCNE_VERSION}
  $ sudo podman push ${OCCNE_REPOSITORY}/db_backup_executor_svc:${OCCNE_VERSION}
  $ sudo podman image rm ${OCCNE_REPOSITORY}/db_backup_executor_svc:${OCCNE_VERSION} occne/db_backup_executor_svc:${OCCNE_VERSION}
  $ sudo podman load -i occne-db-infra-monitor-svc-$( echo ${OCCNE_VERSION} | tr '_' '-' ).tar
  $ sudo podman tag occne/db_infra_monitor_svc:${OCCNE_VERSION} ${OCCNE_REPOSITORY}/db_infra_monitor_svc:${OCCNE_VERSION}
  $ sudo podman push ${OCCNE_REPOSITORY}/db_infra_monitor_svc:${OCCNE_VERSION}
  $ sudo podman image rm ${OCCNE_REPOSITORY}/db_infra_monitor_svc:${OCCNE_VERSION} occne/db_infra_monitor_svc:${OCCNE_VERSION}
2.2 Resource Requirement
Note:
- The resource requirement model provided in this section is only the bare minimum requirement to install cnDBTier. The resource requirements for each Network Function (NF) vary based on their features, capacity, and performance requirements. Each NF publishes NF specific cnDBTier resource profiles. Before installing cnDBTier, refer to the NF specific cnDBTier resource profiles in the respective NF documents to know the actual resource requirements.
- All the entries in the following table are given per pod.
- cnDBTier Side Car column in the following table represents the resource requirement of the init-sidecar container running in the respective cnDBTier pod (SQL, DATA, or Replication SVC).
Table 2-1 Default Resource Requirement Model

| cnDBTier Pod | Replica | vCPU (Min) | vCPU (Max) | RAM (Min) | RAM (Max) | Sidecar vCPU | Sidecar RAM | Sidecar Ephemeral Storage | PVC | PVC Count | Ephemeral Storage (Min) | Ephemeral Storage (Max) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| SQL (ndbmysqld); k8 Resource Type: StatefulSet; Sidecar name: init-sidecar | - | 8 | 8 | 10Gi | 10Gi | 0.1 | 256Mi | Min: 90Mi, Max: 1Gi | 100Gi | 1 | 90Mi | 1Gi |
| SQL (ndbappmysqld); k8 Resource Type: StatefulSet; Sidecar name: init-sidecar | 2 | 8 | 8 | 10Gi | 10Gi | 0.1 | 256Mi | Min: 90Mi, Max: 1Gi | 20Gi | 1 | 90Mi | 1Gi |
| MGMT (ndbmgmd); k8 Resource Type: StatefulSet; Sidecar name: NA | 2 | 4 | 4 | 8Gi | 10Gi | NA | NA | NA | 15Gi | 1 | 90Mi | 1Gi |
| DB (ndbmtd); k8 Resource Type: StatefulSet; Sidecar name: db-executor-svc | 4 | 10 | 10 | 16Gi | 18Gi | 0.1 | 256Mi | Min: 90Mi, Max: 1Gi | 60Gi | 2 | 90Mi | 1Gi |
| Backup Manager Service (db-backup-manager-svc); k8 Resource Type: Deployment; Sidecar name: NA | 1 | 0.1 | 0.1 | 128Mi | 128Mi | NA | NA | NA | NA | NA | 90Mi | 1Gi |
| Replication Service (db-replication-svc); k8 Resource Type: Deployment; Sidecar name: init-discover-sql-ips | - | 2 for the leader DB replication service, 0.6 for the other DB replication services | 2 for the leader DB replication service, 1 for the other DB replication services | 12Gi for the leader DB replication service, 1024Mi for the other DB replication services | 12Gi for the leader DB replication service, 2048Mi for the other DB replication services | 0.2 | 500Mi | Min: 90Mi, Max: 1Gi | 60Gi (only the leader db-replication-svc pod is assigned a PVC; the other db-replication-svc pods are not assigned any PVC) | 1 | 90Mi | 1Gi |
| Monitor Service (db-monitor-svc); k8 Resource Type: Deployment; Sidecar name: NA | 1 | 4 | 4 | 4Gi | 4Gi | NA | NA | NA | NA | NA | 90Mi | 1Gi |
| cnDBTier helm test (node-connection-test); k8 Resource Type: Job; Sidecar name: NA; Note: This is created only when a Helm test is performed | 1 | 0.1 | 0.1 | 256Mi | 256Mi | NA | NA | NA | NA | NA | 90Mi | 1Gi |
| cnDBTier Connectivity Service (mysql-connectivity-service); k8 Resource Type: Service; Sidecar name: NA | 1 | NA | NA | NA | NA | NA | NA | NA | NA | NA | NA | NA |
| cnDBTier Infra-monitor Service (db-infra-monitor-svc); Sidecar for MGMT, DB (ndbmtd), and SQL (ndbmysqld) | 1 | 0.1 | 0.1 | 256Mi | 256Mi | NA | NA | Min: 90Mi, Max: 1Gi | NA | NA | 90Mi | 1Gi |
2.3 Installation Sequence
This section describes the preinstallation, installation, and postinstallation tasks for cnDBTier.
2.3.1 Preinstallation Tasks
This section explains the preinstallation configuration for cnDBTier.
2.3.1.1 Verifying and Creating Namespace
This section provides information about verifying and creating a namespace in the system.
Note:
- This is a mandatory procedure; run it before proceeding further with the installation. The namespace created or verified in this procedure is an input for the next procedures.
- This procedure is applicable to all cnDBTier environments except OpenShift. If you are using an OpenShift environment, then use the Verifying and Creating Namespace in an OpenShift Environment procedure to create namespace, service accounts, roles, and role bindings.
- Depending upon the CSAR package type, the namespace directory can be found at either the /Artifacts/Scripts/ or the /Scripts/ relative path.
- Update the OCCNE_NAMESPACE namespace in the namespace/occndbtier_namespace_${OCCNE_VERSION}.yaml file:
  $ sed -i "s/occne-cndbtier/${OCCNE_NAMESPACE}/g" namespace/occndbtier_namespace_${OCCNE_VERSION}.yaml
- Update the OCCNE_NAMESPACE namespace in the namespace/occndbtier_rbac_${OCCNE_VERSION}.yaml file:
  $ sed -i "s/occne-cndbtier/${OCCNE_NAMESPACE}/g" namespace/occndbtier_rbac_${OCCNE_VERSION}.yaml
- If the Kubernetes version is below 1.25, then run the following command to update the OCCNE_NAMESPACE namespace in namespace/occndbtier_psp_${OCCNE_VERSION}.yaml. Otherwise, move to the next step.
  $ sed -i "s/occne-cndbtier/${OCCNE_NAMESPACE}/g" namespace/occndbtier_psp_${OCCNE_VERSION}.yaml
- Run the following command to create a namespace:
  $ kubectl create namespace ${OCCNE_NAMESPACE}
- Run the following command to create the required service accounts:
  $ kubectl apply -f namespace/occndbtier_namespace_${OCCNE_VERSION}.yaml -n ${OCCNE_NAMESPACE}
- If the Kubernetes version is below 1.25, run the following command to create the PSP. Otherwise, move to the next step.
  $ kubectl apply -f namespace/occndbtier_psp_${OCCNE_VERSION}.yaml -n ${OCCNE_NAMESPACE}
- Run the following command to create Kyverno policies if the
following conditions are applicable, otherwise move to the next step:
- Kubernetes version is above or equal to 1.25
- Kyverno is supported or installed on Kubernetes
- ASM or Istio is installed or running on Kubernetes
$ kubectl apply -f namespace/occndbtier_kyvernopolicy_asm_${OCCNE_VERSION}.yaml -n ${OCCNE_NAMESPACE}
- Run the following command to create Kyverno policies if the
following conditions are applicable, otherwise move to the next step:
- Kubernetes version is above or equal to 1.25
- Kyverno is supported or installed on Kubernetes
- ASM or Istio is not installed or running on Kubernetes
$ kubectl apply -f namespace/occndbtier_kyvernopolicy_nonasm_${OCCNE_VERSION}.yaml -n ${OCCNE_NAMESPACE}
- Run the following command to upgrade service accounts, roles, and role bindings (used for install, upgrade, and rollback):
  $ kubectl apply -f namespace/occndbtier_rbac_${OCCNE_VERSION}.yaml -n ${OCCNE_NAMESPACE}
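Optionally, you can confirm that the namespace objects were created (a quick check, not part of the official procedure):
  $ kubectl get serviceaccounts,roles,rolebindings -n ${OCCNE_NAMESPACE}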
- The database monitor service exposes metrics using the /prometheus endpoint. If Prometheus Operator is installed in the Kubernetes cluster and there is no existing ServiceMonitor for the /prometheus endpoint, then run the following command to create a ServiceMonitor (Prometheus Operator CRD) with the cnDBTier configuration. This enables Prometheus Operator to fetch metrics from the /prometheus REST endpoint for db-monitor-svc.
  $ kubectl apply -f namespace/occndbtier_service_monitor_${OCCNE_VERSION}.yaml -n ${OCCNE_INFRA_NAMESPACE}
  where, ${OCCNE_INFRA_NAMESPACE} is the namespace where Prometheus Operator is installed.
Verifying and Creating Namespace in an OpenShift Environment
- Run the following command to create a namespace:
  $ oc create namespace ${OCCNE_NAMESPACE}
- Run the following command to create security context constraints:
  $ oc apply -f namespace/occndbtier_scc_openshift_${OCCNE_VERSION}.yaml -n ${OCCNE_NAMESPACE}
- Run the following commands to create the required service accounts, roles, and role bindings:
  $ sed -i "s/occne-cndbtier/${OCCNE_NAMESPACE}/g" namespace/occndbtier_rbac_openshift_${OCCNE_VERSION}.yaml
  $ oc apply -f namespace/occndbtier_rbac_openshift_${OCCNE_VERSION}.yaml -n ${OCCNE_NAMESPACE}
Naming Convention for Namespace
A namespace name must:
- start and end with an alphanumeric character
- contain 63 characters or less
- contain only lowercase alphanumeric characters or '-'
Note:
Avoid using the prefix kube- when creating a namespace as this prefix is reserved for Kubernetes system namespaces.
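These rules correspond to the Kubernetes RFC 1123 label format. As an illustrative aid (not part of the official procedure), the following shell sketch checks a candidate name against them:
  # Sketch: validate a candidate namespace name (63 characters or less,
  # lowercase alphanumerics or '-', starting and ending with an alphanumeric)
  ns="occne-cndbtier"
  if [[ ${#ns} -le 63 && "$ns" =~ ^[a-z0-9]([a-z0-9-]*[a-z0-9])?$ ]]; then
    echo "valid namespace name"
  else
    echo "invalid namespace name"
  fi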
2.3.1.2 Creating Storage Class
Note:
Depending upon the CSAR package type, the storageclasses directory can be found at either the /Artifacts/Scripts/ or the /Scripts/ relative path.
- If cnDBTier is being deployed on Baremetal CNE, then run the following command to create the storage class:
  $ kubectl create -f storageclasses/occndbtier_storage_class_${OCCNE_VERSION}.yaml
- If cnDBTier is being deployed on vCNE with OpenStack, then run the following command to create the storage class:
  $ kubectl create -f storageclasses/occndbtier_storage_class_openstack_${OCCNE_VERSION}.yaml
- If cnDBTier is being deployed with VMware CNE, then run the following command to create the storage class:
  $ kubectl create -f storageclasses/occndbtier_storage_class_vsphere_${OCCNE_VERSION}.yaml
- If cnDBTier is being deployed on OCI, then run the following command to create the storage class:
  $ kubectl create -f storageclasses/occndbtier_storage_class_oci_${OCCNE_VERSION}.yaml
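You can verify that the storage class was created (a quick check, not part of the official procedure):
  $ kubectl get storageclass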
2.3.1.3 Creating SSH Keys
Note:
Do not supply a passphrase when prompted; just press Enter.
$ mkdir -p -m 0700 /var/occne/cluster/${OCCNE_CLUSTER}/cndbtierssh
$ ssh-keygen -b 4096 -t rsa -C "cndbtier key" -f "/var/occne/cluster/${OCCNE_CLUSTER}/cndbtierssh/cndbtier_id_rsa" -q -N ""
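To confirm that the key pair was created with the expected permissions, you can list the directory and print the key fingerprint (a quick check, not part of the official procedure):
  $ ls -l /var/occne/cluster/${OCCNE_CLUSTER}/cndbtierssh/
  $ ssh-keygen -l -f /var/occne/cluster/${OCCNE_CLUSTER}/cndbtierssh/cndbtier_id_rsa.pub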
Note:
Create SSH keys only once, for the first site. Use the same SSH keys for all cnDBTier sites to create the SSH secrets as specified in the Creating Secrets section.

2.3.1.4 Creating Secrets
This section describes the procedure to create secrets.
Note:
Kubernetes secrets are created to provide password inputs during cnDBTier installation. While creating passwords, you must select passwords that meet the following password complexity requirements:
- Password length must be between 20 and 32 characters.
- Password must include at least one uppercase and one lowercase letter.
- Password must include at least one digit.
- All passwords, except the backup encryption password and TDE filesystem encryption password, must include at least one special character from ,-/%~+.:_
- The backup encryption password and TDE filesystem encryption password must include at least one special character from ,-/~+.:_ Note that the % symbol is not supported in these passwords.
- Run the occndbtier_mksecrets_${OCCNE_VERSION}.sh script to create secrets for DB users.
  Note:
  - You can find the occndbtier_mksecrets_${OCCNE_VERSION}.sh script file at either the /Artifacts/Scripts/ or the /Scripts/ relative path depending on the CSAR package type. Replace <relative path> in the following command with your corresponding relative path.
  - For the replication to work properly, ensure that the usernames, passwords, and encryption key provided in this procedure remain the same across all the georeplication cnDBTier sites.
  $ cd /var/occne/cluster/${OCCNE_CLUSTER}/csar_occndbtier_${OCCNE_VERSION}/<relative path>
  $ ./occndbtier_mksecrets_${OCCNE_VERSION}.sh
  Sample output:
  Using the following values:
    OCCNE_NAMESPACE=occne-cndbtier
  Enter new mysql root password:
  Enter new mysql root password again:
  Enter mysql user for monitor service (default: occneuser):
  Enter occneuser password:
  Enter occneuser password again:
  Enter mysql user for geo replication service (default: occnerepluser):
  Enter occnerepluser password:
  Enter occnerepluser password again:
  Enter backup encryption key or not (options: yes or no) (default: yes): yes
  Enter backup encryption password:
  Enter backup encryption password again:
  Enter yes if remote server transfer should be enabled or not (options: yes or no) (default: no): yes
  Enter remote server user name for secure transfer of backup to remote server remoteserveruser
  Enter if you want to enable encryption of username and password stored in db (options: yes or no) (default: no): yes
  Enter encryption key for encryption of password while saving in db:
  Enter encryption key for encryption of password while saving in db again:
  Enter if TDE filesystem encryption should be enabled or not (options: yes or no) (default: no): yes
  Enter TDE filesystem encryption password:
  Enter TDE filesystem encryption password again:
  creating secret for mysql root user...
  secret/occne-mysqlndb-root-secret created
  creating secret for mysql db monitor service user...
  secret/occne-secret-db-monitor-secret created
  creating secret for mysql db replication service user...
  secret/occne-replication-secret-db-replication-secret created
  creating secret for backup encryption key password...
  secret/occne-backup-encryption-secret created
  creating secret for remote server username...
  secret/occne-remote-server-username-secret created
  creating secret for encryption key...
  secret/cndbtier-mysql-encrypt-key created
  creating secret for TDE filesystem encryption key password...
  secret/occne-tde-encrypted-filesystem-secret created
  done
- Run the following commands on the bastion host to create secrets for the SSH private and public keys generated in Creating SSH Keys:
  $ kubectl create secret generic cndbtier-ssh-private-key --from-file=id_rsa=/var/occne/cluster/${OCCNE_CLUSTER}/cndbtierssh/cndbtier_id_rsa -n ${OCCNE_NAMESPACE}
  $ kubectl create secret generic cndbtier-ssh-public-key --from-file=id_rsa.pub=/var/occne/cluster/${OCCNE_CLUSTER}/cndbtierssh/cndbtier_id_rsa.pub -n ${OCCNE_NAMESPACE}
- If you want to enable HTTPS mode, then create the following HTTPS secrets. For more information about the HTTPS mode, see the "/global/https/enable" configuration in the custom_values.yaml file described in Global Parameters.
  Note:
  cnDBTier supports using a p12 certificate for HTTPS. If the files are in Privacy Enhanced Mail (PEM) format, then convert them into p12 format to enable HTTPS in this step.
  # Change the working directory to /var/occne/cluster/${OCCNE_CLUSTER}/cndbtiercerts where the certificates are kept,
  # or create the cndbtiercerts directory and keep the certificates in it.
  $ cd /var/occne/cluster/${OCCNE_CLUSTER}/cndbtiercerts
  $ kubectl -n ${OCCNE_NAMESPACE} create secret generic cndbtier-https-cert-file --from-file=keystore=<certificate-name>
  $ kubectl -n ${OCCNE_NAMESPACE} create secret generic cndbtier-https-cert-cred --from-literal="keyalias=replicationcertificate" --from-literal="keystoretype=<certificate-type>" --from-literal="keystorepassword=<password>"
  For example:
  $ kubectl -n ${OCCNE_NAMESPACE} create secret generic cndbtier-https-cert-file --from-file=keystore=replicationcertificate.p12
  $ kubectl -n ${OCCNE_NAMESPACE} create secret generic cndbtier-https-cert-cred --from-literal="keyalias=replicationcertificate" --from-literal="keystoretype=PKCS12" --from-literal="keystorepassword=NextGenCne"
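If your certificate and key are in PEM format, they can typically be converted into a PKCS12 keystore with openssl before creating the secrets above; the file names here are illustrative:
  # The export password you choose here becomes the keystorepassword value
  $ openssl pkcs12 -export -in replication-cert.pem -inkey replication-key.pem -name replicationcertificate -out replicationcertificate.p12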
- If you want to enable password encryption, then create the following secret, which is used to configure the key that encrypts the replication username and password. For more information about enabling password encryption, see the "/global/encryption/enable" configuration in the values.yaml file described in Global Parameters.
  Note:
  cnDBTier supports either disabling encryption across all the clusters or enabling encryption by using the same encryption key across all the clusters. If encryption is enabled and you are creating a new cluster, ensure that the new cluster uses the same encryption key.
  $ kubectl -n ${OCCNE_NAMESPACE} create secret generic cndbtier-mysql-encrypt-key --from-literal="dbencryptkey=<encryption-key>"
  For example:
  $ kubectl -n ${OCCNE_NAMESPACE} create secret generic cndbtier-mysql-encrypt-key --from-literal="dbencryptkey=NextGenCne"
- Run the following command to create a secret to encrypt the data backups stored in the data nodes. For more information about enabling backup encryption, see the "/global/backupencryption/enable" configuration in the values.yaml file described in Global Parameters.
  Note:
  Use the following command to create the secret only if you are going to enable backup encryption in an existing cnDBTier cluster or if the occne-backup-encryption-secret secret was not created in step 1.
  $ kubectl -n ${OCCNE_NAMESPACE} create secret generic occne-backup-encryption-secret --from-literal="backup_encryption_password=<backup-encryption-password>"
  For example:
  $ kubectl -n ${OCCNE_NAMESPACE} create secret generic occne-backup-encryption-secret --from-literal="backup_encryption_password=NextGenCne"
- If secure backup transfer is enabled, perform the following steps to create a secret for backup transfer to the remote server:
  Note:
  For more information about enabling secure backup transfer, see the "/global/remotetransfer/enable" configuration described in Global Parameters.
  - Run the following command on the bastion host to create the private key secret for the remote server:
    $ kubectl create secret generic occne-remoteserver-privatekey-secret --from-file=id_rsa=<private key path with file name> -n <namespace of cnDBTier cluster>
    For example:
    $ kubectl create secret generic occne-remoteserver-privatekey-secret --from-file=id_rsa=./remoteserver_id_rsa -n ${OCCNE_NAMESPACE}
  - If you haven't created the user secret for the remote server in step 1, then run the following command to create a secret for the remote server user:
    $ kubectl -n ${OCCNE_NAMESPACE} create secret generic occne-remote-server-username-secret --from-literal="remote_server_user_name=<remote_server_user_name>"
    For example:
    $ kubectl -n ${OCCNE_NAMESPACE} create secret generic occne-remote-server-username-secret --from-literal="remote_server_user_name=backupserver"
- Perform the following steps to create secrets for TLS certificates. When TLS is enabled, the system uses these secrets to configure the TLS certificates that are used by the ndbmysqld pod for replication. For information about TLS certificate creation, see Creating Certificates to Support TLS for Replication.
  Note:
  For information about the TLS configuration parameters, see the TLS (/global/tls) parameters in the Global Parameters section.
  - Create the cndbtier-trust-store-secret secret, which stores the certificate authority (CA) certificate:
    Note:
    If you have multiple CA certificates, then create a single file with all the CA certificates and use that file for creating the cndbtier-trust-store-secret secret using the following command.
    $ kubectl -n ${OCCNE_NAMESPACE} create secret generic cndbtier-trust-store-secret --from-file=<ca_certificate>
    where, <ca_certificate> is the name of the CA certificate file.
    For example:
    $ kubectl -n ${OCCNE_NAMESPACE} create secret generic cndbtier-trust-store-secret --from-file=combine-ca.pem
  - Create the cndbtier-server-secret secret, which stores the server certificate and server certificate key:
    $ kubectl -n ${OCCNE_NAMESPACE} create secret generic cndbtier-server-secret --from-file=<server_certificate> --from-file=<server_certificate_key>
    where,
    <server_certificate> is the name of the server certificate file
    <server_certificate_key> is the name of the server certificate key file
    For example:
    $ kubectl -n ${OCCNE_NAMESPACE} create secret generic cndbtier-server-secret --from-file=server-cert.pem --from-file=server-key.pem
  - Create the cndbtier-client-secret secret, which stores the client certificate and client certificate key:
    $ kubectl -n ${OCCNE_NAMESPACE} create secret generic cndbtier-client-secret --from-file=<client_certificate> --from-file=<client_certificate_key>
    where,
    <client_certificate> is the name of the client certificate file
    <client_certificate_key> is the name of the client certificate key file
    For example:
    $ kubectl -n ${OCCNE_NAMESPACE} create secret generic cndbtier-client-secret --from-file=client-cert.pem --from-file=client-key.pem
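After creating the three TLS secrets, you can confirm that they exist (a quick check, not part of the official procedure):
  $ kubectl -n ${OCCNE_NAMESPACE} get secrets | grep cndbtier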
- If you want to enable Transparent Data Encryption (TDE), then create the following secret to configure the password that is used to encrypt the data files stored in the data nodes. For more information about enabling TDE, see the "/global/ndb/EncryptedFileSystem" configuration in the custom_values.yaml file described in the Global Parameters section.
  $ kubectl -n ${OCCNE_NAMESPACE} create secret generic occne-tde-encrypted-filesystem-secret --from-literal="filesystem-password=<tde-encryption-password>"
  For example:
  $ kubectl -n ${OCCNE_NAMESPACE} create secret generic occne-tde-encrypted-filesystem-secret --from-literal="filesystem-password=SamplePassword"
2.3.1.5 Configuring Cloud Native Load Balancer (CNLB)
This section provides information about configuring CNLB network segregation annotations for communication between replication channels and between replication service pods.
Prerequisites
Before configuring the CNLB network segregation annotations, ensure that you meet the following prerequisites:
- cnDBTier must be deployed on CNE where CNLB is enabled.
- All CNLB annotations or network attachments required for cnDBTier must be generated before installing cnDBTier as they are used to support traffic segregation. For more information about generating CNLB annotations or network attachments, see Oracle Communications Cloud Native Core, Cloud Native Environment Installation, Upgrade, and Fault Recovery Guide.
- The CNE CNLB manager (cnlb.ini) must be preconfigured with the appropriate cnDBTier source and destination subnet addresses. For more information about ingress and egress external communication routes, see the "Ingress and Egress Communication Over External IPs" section in Oracle Communications Cloud Native Core, cnDBTier User Guide.
2.3.1.5.1 Configuring CNLB for Primary and Secondary Replication Channels
This section provides the steps to configure traffic segregation for primary and secondary replication channels.
- If cnDBTier is deployed with two or more clusters and traffic segregation is required for primary and secondary (active and standby) replication channels, configure the following annotations for the ndbmysqld pods in the annotation section of the custom_values.yaml file. These annotations attach different CNLB load balancer IPs to the various ndbmysqld statefulset replica pods, which are used for setting up replication channels across clusters.
  api:
    annotations:
      - k8s.v1.cni.cncf.io/networks: [<cnlb ingress/egress annotation>,...]
      - oracle.com.cnc/InstanceSvcIpSupport : "true"
      - oracle.com.cnc/cnlb: '[{"backendPortName":"ndbmysqld","cnlbIp":"[<cnlb ingress/egress annotation>/<cnlb loadbalancer ip>,...]","cnlbPort":"3306"}]'
      - oracle.com.cnc/cnlbSetName: serviceIpSet0
Note:
- The number of CNLB network attachments provided in k8s.v1.cni.cncf.io/networks must be less than or equal to the replica count of the ndbmysqld statefulset.
- You can use the following names for oracle.com.cnc/cnlbSetName:
  - cluster 1: serviceIpSet0
  - cluster 2: serviceIpSet1
The following examples provide sample annotation configurations for two-cluster and three-cluster cnDBTier setups:
- Sample CNLB annotations for a two-cluster cnDBTier setup where traffic segregation is configured for active and standby replication channels:
  api:
    annotations:
      - k8s.v1.cni.cncf.io/networks: default/nf-sig1-ie15@nf-sig1-ie15,default/nf-sig2-ie15@nf-sig2-ie15
      - oracle.com.cnc/InstanceSvcIpSupport : "true"
      - oracle.com.cnc/cnlb: '[{"backendPortName":"ndbmysqld","cnlbIp":"nf-sig1-ie15/10.123.155.27,nf-sig2-ie15/10.123.155.44","cnlbPort":"3306"}]'
      - oracle.com.cnc/cnlbSetName: serviceIpSet0
  In this example, the CNLB IP "10.123.155.27" is associated with ndbmysqld-0 and IP "10.123.155.44" is associated with ndbmysqld-1.
- Sample CNLB annotations for a three-cluster cnDBTier setup where traffic segregation is configured for active and standby replication channels:
  api:
    annotations:
      - k8s.v1.cni.cncf.io/networks: default/nf-sig1-ie15@nf-sig1-ie15,default/nf-sig2-ie15@nf-sig2-ie15
      - oracle.com.cnc/InstanceSvcIpSupport : "true"
      - oracle.com.cnc/cnlb: '[{"backendPortName":"ndbmysqld","cnlbIp":"nf-sig1-ie15/10.123.155.27,nf-sig2-ie15/10.123.155.44,nf-sig1-ie15/10.123.155.28,nf-sig2-ie15/10.123.155.45","cnlbPort":"3306"}]'
      - oracle.com.cnc/cnlbSetName: serviceIpSet0
  In this example, the CNLB IP "10.123.155.27" is associated with ndbmysqld-0, IP "10.123.155.44" is associated with ndbmysqld-1, and IP "10.123.155.28" is associated with ndbmysqld-2.
- The number of CNLB network attachments provided in
- When the CNLB annotations are in place for the ndbmysqld statefulset, configure the db-replication-svc to use the CNLB IPs to set up replication channels:
  db-replication-svc:
    dbreplsvcdeployments:
      - name: <${OCCNE_SITE_NAME}>-<${OCCNE_MATE_SITE_NAME}>-replication-svc
        mysql:
          primarysignalhost: "<cnlb_loadbalancer_ip for primary_sql_signal_host_ip>"
          secondarysignalhost: "<cnlb_loadbalancer_ip for secondary_sql_signal_host_ip>"
        enableInitContainerForIpDiscovery: false
  Note:
  enableInitContainerForIpDiscovery must be set to false. Otherwise, the db-replication-svc init-sidecar replaces the configured IP addresses for primarysignalhost or secondarysignalhost during pod startup.
  For example:
  db-replication-svc:
    dbreplsvcdeployments:
      - name: <${OCCNE_SITE_NAME}>-<${OCCNE_MATE_SITE_NAME}>-replication-svc
        mysql:
          primarysignalhost: "10.123.155.27"
          secondarysignalhost: "10.123.155.44"
        enableInitContainerForIpDiscovery: false
This example uses the configured CNLB IPs. The CNLB IP associated with ndbmysqld-0 is configured in primarysignalhost, and the CNLB IP associated with ndbmysqld-1 is configured in secondarysignalhost.
2.3.1.5.2 Configuring CNLB for Communication Between Local and Remote Cluster Replication Pods
This section provides details about configuring CNLB for communication between local and remote replication pods over a separate network.
- Add CNLB ingress or egress annotations to the db-replication-svc pods:
  db-replication-svc:
    dbreplsvcdeployments:
      - name: <${OCCNE_SITE_NAME}>-<${OCCNE_MATE_SITE_NAME}>-replication-svc
        podAnnotations:
          k8s.v1.cni.cncf.io/networks: <cnlb ingress/egress annotation>
          oracle.com.cnc/cnlbSetName: serviceIpSet0
          oracle.com.cnc/cnlb: '[{"backendPortName":"http","cnlbIp":"<cnlb loadbalancer ip>","cnlbPort":"<cnlb loadbalancer port>"},{"backendPortName":"sftp","cnlbIp":"<cnlb loadbalancer ip>","cnlbPort":"2022"}]'
  For example:
  db-replication-svc:
    dbreplsvcdeployments:
      - name: cluster1-cluster2-replication-svc
        podAnnotations:
          k8s.v1.cni.cncf.io/networks: default/nf-sig1-ie13@nf-sig1-ie13
          oracle.com.cnc/cnlbSetName: serviceIpSet0
          oracle.com.cnc/cnlb: '[{"backendPortName":"http","cnlbIp":"10.123.155.25","cnlbPort":"8080"},{"backendPortName":"sftp","cnlbIp":"10.123.155.25","cnlbPort":"2022"}]'
- Use the CNLB IPs (<cnlb load balancer IP>) configured in the previous step to configure localsiteip for the current cluster and remotesiteip for the remote cluster in their respective custom_values.yaml files.
  Note:
  You can use the following names for oracle.com.cnc/cnlbSetName:
  - cluster 1: serviceIpSet0
  - cluster 2: serviceIpSet1
  Configuration for the current cluster:
  db-replication-svc:
    dbreplsvcdeployments:
      - name: <${OCCNE_SITE_NAME}>-<${OCCNE_MATE_SITE_NAME}>-replication-svc
        replication:
          localsiteip: "<cnlb loadbalancer ip>"
  For example:
  db-replication-svc:
    dbreplsvcdeployments:
      - name: cluster1-cluster2-replication-svc
        replication:
          localsiteip: "10.123.155.25"
  Configuration for the remote cluster (mate cluster):
  db-replication-svc:
    dbreplsvcdeployments:
      - name: <${OCCNE_SITE_NAME}>-<${OCCNE_MATE_SITE_NAME}>-replication-svc
        replication:
          remotesiteip: "<cnlb loadbalancer ip>"
          remotesiteport: "<cnlb loadbalancer port>"
  For example:
  db-replication-svc:
    dbreplsvcdeployments:
      - name: cluster2-cluster1-replication-svc
        replication:
          remotesiteip: "10.123.155.25"
          remotesiteport: "8080"
2.3.1.6 Multus Support
This section details the procedure to use Multus CNI as the container network interface plugin for Kubernetes and attach multiple network interfaces to the pods.
In Kubernetes, other than the local loopback, each pod has only one network interface (eth0) by default. Therefore, all the traffic goes only through the eth0 interface. You can attach multiple network interfaces to the pods by using Multus CNI as a container network interface plugin for Kubernetes by performing the following tasks.
Prerequisites:
- The Multus CNI and whereabouts plugins must be installed in the Kubernetes cluster.
- The Kubernetes cluster must have multiple network interfaces attached on the worker and master nodes.
2.3.1.6.1 Configuring Network Attachment Definition
This section describes the steps to create network attachment definitions.
- Create appropriate network attachment definitions for the namespace where cnDBTier is deployed by referring to the following sample Network Attachment Definition template:
  # Network Attachment Definition Template
  apiVersion: "k8s.cni.cncf.io/v1"
  kind: NetworkAttachmentDefinition
  metadata:
    name: <name>
  spec:
    config: '{
      "cniVersion": "0.3.1",
      "type": "macvlan",
      "master": "<network interface>",
      "mode": "<mode>",
      "ipam": {
        "type": "whereabouts",
        "range": "<range>",
        "range_start": "<start_range>",
        "range_end": "<end_range>"
      }
    }'

  # Example
  apiVersion: "k8s.cni.cncf.io/v1"
  kind: NetworkAttachmentDefinition
  metadata:
    name: macvlan-conf-sitea-eth2
  spec:
    config: '{
      "cniVersion": "0.3.1",
      "type": "macvlan",
      "master": "eth2",
      "mode": "bridge",
      "ipam": {
        "type": "whereabouts",
        "range": "192.168.4.0/24",
        "range_start": "192.168.4.10",
        "range_end": "192.168.4.20"
      }
    }'
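After saving the definition to a file, you can apply it in the cnDBTier namespace and confirm that it exists; the file name here is illustrative, and the command assumes the Multus NetworkAttachmentDefinition CRD is installed:
  $ kubectl apply -f macvlan-conf-sitea-eth2.yaml -n ${OCCNE_NAMESPACE}
  $ kubectl get network-attachment-definitions -n ${OCCNE_NAMESPACE}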
2.3.1.6.2 Enabling Global Configuration for Multus
This section provides details about enabling global configuration for Multus.
- Enable the Multus global configuration and create the required service account for Multus by referring to the following sample configuration.
  Note:
  The Multus service account must be created for Multus support, as cnDBTier does not work without the Multus service account when the Multus feature is enabled.
  global:
    multus:
      enable: true
      serviceAccount:
        create: true
        name: "cndbtier-multus-serviceaccount"
2.3.1.6.3 Configuring SQL Pods
This section describes the procedure to configure SQL pods for attaching multiple network interfaces.
To configure SQL pods for attaching multiple network interfaces:
- Add appropriate annotations for the cnDBTier ndbmysqld and ndbappmysqld pods in the custom_values.yaml file by referring to the following sample. Skip this step if you do not want to configure the MySQL pods to support multiple network attachments.
  Note:
  If you want to add "n" networks to the SQL pods, provide all the "n" annotations together in a comma-separated format as seen in the following example.
  # Adding NAD annotation for both ndbmysqld and ndbappmysqld pods
  api:
    annotations:
      - k8s.v1.cni.cncf.io/networks: [<network_attachment_definition_name_1>, <network_attachment_definition_name_2> ....]
  api:
    ndbapp:
      annotations:
        - k8s.v1.cni.cncf.io/networks: [<network_attachment_definition_name_1>, <network_attachment_definition_name_2> ....]

  # Example:
  api:
    annotations:
      - k8s.v1.cni.cncf.io/networks: macvlan-conf-sitea-eth1, macvlan-conf-sitea-eth2, macvlan-conf-sitea-eth3
  api:
    ndbapp:
      annotations:
        - k8s.v1.cni.cncf.io/networks: macvlan-conf-sitea-eth1, macvlan-conf-sitea-eth2, macvlan-conf-sitea-eth3
2.3.1.6.4 Configuring Replication Services
This section describes the procedure to configure the replication services for enabling communication between the local and remote site replication service pods.
Note:
You can skip these steps if you want to use the load balancer IP to establish the communication.
To enable communication between the local and remote replication service over Multus IP:
- Provide the annotations for the replication service (replication svc) deployments by referring to the following sample.
  Note:
  You must provide pod annotations for each replication service deployment individually. There is no global pod annotation that is applicable to all the replication service deployments.
  db-replication-svc:
    dbreplsvcdeployments:
      - name: sitea-siteb-replication-svc
        podAnnotations:
          k8s.v1.cni.cncf.io/networks: "<network_attachment_definition_name>"
- After adding pod annotations to a replication deployment, configure the same deployment object for Multus support using the following configuration:
  db-replication-svc:
    dbreplsvcdeployments:
      - name: sitea-siteb-replication-svc
        enabled: true
        multus:
          enable: true
          networkAttachmentDefinationApiName: "k8s.v1.cni.cncf.io"
          # Provide the same name that you gave in the pod annotation in the previous code block.
          networkAttachmentDefinationTagName: "<network_attachment_definition_name>"
- Local and remote site port configuration for Multus: Change the localsiteport and remotesiteport ports to 8080 in the same replication deployment where you configured Multus support in the previous step:
  db-replication-svc:
    dbreplsvcdeployments:
      - name: sitea-siteb-replication-svc
        replication:
          localsiteport: "8080"
          remotesiteport: "8080"
2.3.1.6.5 Configuring Connectivity Services
This section describes the procedure to configure the connectivity services.
Note:
You can skip this task if you skipped any of the following tasks previously: Configuring SQL Pods, Configuring Replication Services.
- Configure the connectivity service by referring to the following sample:
  Note:
  The value of networkAttachmentDefinationTagName must be one of the annotation values given while Configuring SQL Pods. The networkAttachmentDefinationTagName that is provided to the connectivity service (connectivity svc) is used as the network interface for NF traffic towards the MySQL pods.
  api:
    connectivityService:
      name: mysql-connectivity-service
      multus:
        enable: true
        networkAttachmentDefinationApiName: "k8s.v1.cni.cncf.io"
        networkAttachmentDefinationTagName: "<network_attachment_definition_name>"
2.3.1.6.6 Configuring Primary and Secondary Replication Channels
This section describes the procedure to configure the primary and secondary replication channels.
Note:
Skip this step if you skipped the Configuring SQL Pods task or if you do not wish to enable Multus for the primary and secondary replication channels.
- Configure the primary and secondary signalling IPs for georeplication in the replication svc values.yaml file. It is not mandatory to configure both the primary and secondary replication channels with Multus. You can configure only the active replication channel with Multus and leave the standby replication channel configured with the load balancer IP, or vice versa.
  Note:
  The networkAttachmentDefinationTagName name must be one of the network attachment definitions that you provided to the ndbmysqld or ndbappmysqld pods while Configuring SQL Pods.
  db-replication-svc:
    dbreplsvcdeployments:
      # If a pod prefix is given, then use a unique smaller name for this DB replication service.
      - name: sitea-siteb-replication-svc
        mysql:
          primarysignalhostmultusconfig:
            multusEnabled: true
            networkAttachmentDefinationApiName: "k8s.v1.cni.cncf.io"
            networkAttachmentDefinationTagName: "<network_attachment_definition_name>"
          secondarysignalhostmultusconfig:
            multusEnabled: true
            networkAttachmentDefinationApiName: "k8s.v1.cni.cncf.io"
            networkAttachmentDefinationTagName: "<network_attachment_definition_name>"
2.3.1.7 Configuring Multiple Replication Channel Groups
This section describes the procedure to enable and configure multiple replication channel groups where each replication channel group is used for replicating databases and tables to its remote site.
- Use the values.yaml file to configure multiple replication channel groups, enabling and configuring the databases and tables of each replication channel group to replicate the data with remote sites:
  Note:
  Each configuration in this step provides examples to configure two and three replication channel groups. If there are more replication channel groups, extend the configurations accordingly.
  - Configure the number of replication SQL pods (that is, apiReplicaCount) in the values.yaml file depending on the number of remote sites configured.
    Note:
    - For two replication channel groups:
      - set apiReplicaCount to 4 for a two site georeplication setup with two replication channel groups.
      - set apiReplicaCount to 8 for a three site georeplication setup with two replication channel groups.
      - set apiReplicaCount to 12 for a four site georeplication setup with two replication channel groups.
    - For three replication channel groups:
      - set apiReplicaCount to 6 for a two site georeplication setup with three replication channel groups.
      - set apiReplicaCount to 12 for a three site georeplication setup with three replication channel groups.
      - set apiReplicaCount to 18 for a four site georeplication setup with three replication channel groups.
    Example to configure apiReplicaCount for a two site georeplication setup with two replication channel groups:
    $ vi values.yaml
    mgmReplicaCount: 2
    ndbReplicaCount: 4
    apiReplicaCount: 4
    ndbappReplicaCount: 2
    Example to configure apiReplicaCount for a two site georeplication setup with three replication channel groups:
    $ vi values.yaml
    mgmReplicaCount: 2
    ndbReplicaCount: 4
    apiReplicaCount: 6
    ndbappReplicaCount: 2
  - Configure startNodeId for the ndbappmysqld pods. By default, startNodeId is set to 70. If more replication SQL nodes (apiReplicaCount) are required for configuring multiple replication channel groups with multiple remote sites, then provide an appropriate value such that the node IDs for replication SQL pods (ndbmysqld) and non-replication SQL pods (ndbappmysqld) do not overlap.
    Example to configure startNodeId for ndbappmysqld pods with two replication channel groups in a two, three, or four site GR setup:
    ndbapp:
      ndbdisksize: 20Gi
      port: 3306
      ndb_cluster_connection_pool: 1
      ndb_cluster_connection_pool_base_nodeid: 100
      startNodeId: 70
    Example to configure startNodeId for ndbappmysqld pods with three replication channel groups in a two, three, or four site GR setup:
    ndbapp:
      ndbdisksize: 20Gi
      port: 3306
      ndb_cluster_connection_pool: 1
      ndb_cluster_connection_pool_base_nodeid: 100
      startNodeId: 80
    Note:
    The node IDs of ndbappmysqld pods must not overlap with the node IDs of ndbmysqld pods. Therefore, configure the startNodeId values accordingly.
values accordingly. - If fixed LoadBalancer IP is used along with annotation and labels, then
configure the LoadBalancer in the following manner.
Example to configure fixed LoadBalancer IP along with annotation and labels for two replication channel groups:
sqlgeorepsvclabels: - name: ndbmysqldsvc-0 loadBalancerIP: "" annotations: {} labels: - app: occne_infra - cis.f5.com/as3-tenant: occne_infra - cis.f5.com/as3-app: svc_occne_infra_sqlnode0 - cis.f5.com/as3-pool: svc_occne_infra_pool0 - name: ndbmysqldsvc-1 loadBalancerIP: "" annotations: {} labels: - app: occne_infra - cis.f5.com/as3-tenant: occne_infra - cis.f5.com/as3-app: svc_occne_infra_sqlnode1 - cis.f5.com/as3-pool: svc_occne_infra_pool1 - name: ndbmysqldsvc-2 loadBalancerIP: "" annotations: {} labels: - app: occne_infra - cis.f5.com/as3-tenant: occne_infra - cis.f5.com/as3-app: svc_occne_infra_sqlnode2 - cis.f5.com/as3-pool: svc_occne_infra_pool2 - name: ndbmysqldsvc-3 loadBalancerIP: "" annotations: {} labels: - app: occne_infra - cis.f5.com/as3-tenant: occne_infra - cis.f5.com/as3-app: svc_occne_infra_sqlnode3 - cis.f5.com/as3-pool: svc_occne_infra_pool3
Example to configure fixed LoadBalancer IP along with annotations and labels for three replication channel groups:
sqlgeorepsvclabels:
  - name: ndbmysqldsvc-0
    loadBalancerIP: ""
    annotations: {}
    labels:
      - app: occne_infra
      - cis.f5.com/as3-tenant: occne_infra
      - cis.f5.com/as3-app: svc_occne_infra_sqlnode0
      - cis.f5.com/as3-pool: svc_occne_infra_pool0
  - name: ndbmysqldsvc-1
    loadBalancerIP: ""
    annotations: {}
    labels:
      - app: occne_infra
      - cis.f5.com/as3-tenant: occne_infra
      - cis.f5.com/as3-app: svc_occne_infra_sqlnode1
      - cis.f5.com/as3-pool: svc_occne_infra_pool1
  - name: ndbmysqldsvc-2
    loadBalancerIP: ""
    annotations: {}
    labels:
      - app: occne_infra
      - cis.f5.com/as3-tenant: occne_infra
      - cis.f5.com/as3-app: svc_occne_infra_sqlnode2
      - cis.f5.com/as3-pool: svc_occne_infra_pool2
  - name: ndbmysqldsvc-3
    loadBalancerIP: ""
    annotations: {}
    labels:
      - app: occne_infra
      - cis.f5.com/as3-tenant: occne_infra
      - cis.f5.com/as3-app: svc_occne_infra_sqlnode3
      - cis.f5.com/as3-pool: svc_occne_infra_pool3
  - name: ndbmysqldsvc-4
    loadBalancerIP: ""
    annotations: {}
    labels:
      - app: occne_infra
      - cis.f5.com/as3-tenant: occne_infra
      - cis.f5.com/as3-app: svc_occne_infra_sqlnode4
      - cis.f5.com/as3-pool: svc_occne_infra_pool4
  - name: ndbmysqldsvc-5
    loadBalancerIP: ""
    annotations: {}
    labels:
      - app: occne_infra
      - cis.f5.com/as3-tenant: occne_infra
      - cis.f5.com/as3-app: svc_occne_infra_sqlnode5
      - cis.f5.com/as3-pool: svc_occne_infra_pool5
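After deployment, you can confirm that each ndbmysqldsvc-<n> LoadBalancer service received the expected external address. This is a generic Kubernetes check, not a cnDBTier-specific tool:
# List the per-SQL-node LoadBalancer services and their EXTERNAL-IP values.
$ kubectl get svc -n ${OCCNE_NAMESPACE} | grep ndbmysqldsvc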
- Enable the multiple replication channel groups flag:
Example for two replication channel groups:
multiplereplicationgroups:
  enabled: true
  replicationchannelgroups:
    - channelgroupid: 1
      binlogdodb: {}
      binlogignoredb: {}
      binlogignoretables: {}
      sqllist: {}
    - channelgroupid: 2
      binlogdodb: {}
      binlogignoredb: {}
      binlogignoretables: {}
      sqllist: {}
Example for three replication channel groups:
multiplereplicationgroups:
  enabled: true
  replicationchannelgroups:
    - channelgroupid: 1
      binlogdodb: {}
      binlogignoredb: {}
      binlogignoretables: {}
      sqllist: {}
    - channelgroupid: 2
      binlogdodb: {}
      binlogignoredb: {}
      binlogignoretables: {}
      sqllist: {}
    - channelgroupid: 3
      binlogdodb: {}
      binlogignoredb: {}
      binlogignoretables: {}
      sqllist: {}
- For each replication channel group, configure the databases and tables to replicate a set of databases to the remote site.
  Example for two replication channel groups
  Note:
  - Configure the databases db1 and db2 in the first replication channel group and the databases db3 and db4 in the second replication channel group.
  - Configure the database db5 as binlogdodb for both the replication channel groups and then split the tables across both the replication channel groups in binlogignoretables.
  - Ensure that the same database name is not configured in the binlogdodb and binlogignoredb configurations in both the replication channel groups.
multiplereplicationgroups:
  enabled: true
  replicationchannelgroups:
    - channelgroupid: 1
      binlogdodb:
        - db1
        - db2
      binlogignoredb:
        - db3
        - db4
      sqllist: {}
    - channelgroupid: 2
      binlogdodb:
        - db3
        - db4
      binlogignoredb:
        - db1
        - db2
      sqllist: {}
Example for three replication channel groups
Note:
- Configure the databases db1 and db2 in the first replication channel group, the databases db3 and db4 in the second replication channel group, and the databases db6 and db7 in the third replication channel group.
- Configure the database db5 as binlogdodb for the replication channel groups that replicate it and then split the tables across those replication channel groups in binlogignoretables.
- Ensure that the same database name is not configured in the binlogdodb and binlogignoredb configurations for any of the replication channel groups.
multiplereplicationgroups:
  enabled: true
  replicationchannelgroups:
    - channelgroupid: 1
      binlogdodb:
        - db1
        - db2
        - db5
      binlogignoredb:
        - db3
        - db4
        - db6
        - db7
      binlogignoretables:
        - db5.table3
        - db5.table4
      sqllist: {}
    - channelgroupid: 2
      binlogdodb:
        - db3
        - db4
        - db5
      binlogignoredb:
        - db1
        - db2
        - db6
        - db7
      binlogignoretables:
        - db5.table1
        - db5.table2
      sqllist: {}
    - channelgroupid: 3
      binlogdodb:
        - db6
        - db7
      binlogignoredb:
        - db1
        - db2
        - db3
        - db4
        - db5
      binlogignoretables: {}
      sqllist: {}
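To confirm that a replication SQL node picked up the intended binlog filters, you can query it directly. On MySQL 8.4, SHOW BINARY LOG STATUS (formerly SHOW MASTER STATUS) reports the Binlog_Do_DB and Binlog_Ignore_DB columns; the credentials and pod name below are illustrative:
# Illustrative check of the binlog filters on one replication SQL pod.
$ kubectl -n ${OCCNE_NAMESPACE} exec -it ndbmysqld-0 -- mysql -uroot -p -e "SHOW BINARY LOG STATUS\G"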
- By default, sqllist for each replication channel group is configured with the default SQL nodes used for replication between remote sites. If you configure different SQL nodes for the replication service pods, then sqllist must be updated in the multiplereplicationgroups configuration. The default assignment pattern is sketched after this step.
  Example for two replication channel groups:
  multiplereplicationgroups:
    enabled: true
    replicationchannelgroups:
      - channelgroupid: 1
        binlogdodb:
          - db1
          - db2
          - db5
        binlogignoredb:
          - db3
          - db4
        binlogignoretables:
          - db5.table3
          - db5.table4
        sqllist:
          - ndbmysqld-0.ndbmysqldsvc.occne-cndbtier.svc.cluster.local
          - ndbmysqld-1.ndbmysqldsvc.occne-cndbtier.svc.cluster.local
          - ndbmysqld-4.ndbmysqldsvc.occne-cndbtier.svc.cluster.local
          - ndbmysqld-5.ndbmysqldsvc.occne-cndbtier.svc.cluster.local
          - ndbmysqld-8.ndbmysqldsvc.occne-cndbtier.svc.cluster.local
          - ndbmysqld-9.ndbmysqldsvc.occne-cndbtier.svc.cluster.local
      - channelgroupid: 2
        binlogdodb:
          - db3
          - db4
          - db5
        binlogignoredb:
          - db1
          - db2
        binlogignoretables:
          - db5.table1
          - db5.table2
        sqllist:
          - ndbmysqld-2.ndbmysqldsvc.occne-cndbtier.svc.cluster.local
          - ndbmysqld-3.ndbmysqldsvc.occne-cndbtier.svc.cluster.local
          - ndbmysqld-6.ndbmysqldsvc.occne-cndbtier.svc.cluster.local
          - ndbmysqld-7.ndbmysqldsvc.occne-cndbtier.svc.cluster.local
          - ndbmysqld-10.ndbmysqldsvc.occne-cndbtier.svc.cluster.local
          - ndbmysqld-11.ndbmysqldsvc.occne-cndbtier.svc.cluster.local
Example for three replication channel groups:
multiplereplicationgroups:
  enabled: true
  replicationchannelgroups:
    - channelgroupid: 1
      binlogdodb:
        - db1
        - db2
        - db5
      binlogignoredb:
        - db3
        - db4
        - db6
        - db7
      binlogignoretables:
        - db5.table3
        - db5.table4
      sqllist:
        - ndbmysqld-0.ndbmysqldsvc.occne-cndbtier.svc.cluster.local
        - ndbmysqld-1.ndbmysqldsvc.occne-cndbtier.svc.cluster.local
        - ndbmysqld-6.ndbmysqldsvc.occne-cndbtier.svc.cluster.local
        - ndbmysqld-7.ndbmysqldsvc.occne-cndbtier.svc.cluster.local
        - ndbmysqld-12.ndbmysqldsvc.occne-cndbtier.svc.cluster.local
        - ndbmysqld-13.ndbmysqldsvc.occne-cndbtier.svc.cluster.local
    - channelgroupid: 2
      binlogdodb:
        - db3
        - db4
        - db5
      binlogignoredb:
        - db1
        - db2
        - db6
        - db7
      binlogignoretables:
        - db5.table1
        - db5.table2
      sqllist:
        - ndbmysqld-2.ndbmysqldsvc.occne-cndbtier.svc.cluster.local
        - ndbmysqld-3.ndbmysqldsvc.occne-cndbtier.svc.cluster.local
        - ndbmysqld-8.ndbmysqldsvc.occne-cndbtier.svc.cluster.local
        - ndbmysqld-9.ndbmysqldsvc.occne-cndbtier.svc.cluster.local
        - ndbmysqld-14.ndbmysqldsvc.occne-cndbtier.svc.cluster.local
        - ndbmysqld-15.ndbmysqldsvc.occne-cndbtier.svc.cluster.local
    - channelgroupid: 3
      binlogdodb:
        - db6
        - db7
      binlogignoredb:
        - db1
        - db2
        - db3
        - db4
        - db5
      binlogignoretables: {}
      sqllist:
        - ndbmysqld-4.ndbmysqldsvc.occne-cndbtier.svc.cluster.local
        - ndbmysqld-5.ndbmysqldsvc.occne-cndbtier.svc.cluster.local
        - ndbmysqld-10.ndbmysqldsvc.occne-cndbtier.svc.cluster.local
        - ndbmysqld-11.ndbmysqldsvc.occne-cndbtier.svc.cluster.local
        - ndbmysqld-16.ndbmysqldsvc.occne-cndbtier.svc.cluster.local
        - ndbmysqld-17.ndbmysqldsvc.occne-cndbtier.svc.cluster.local
Note:
Ensure that the same database name is not configured in the binlogdodb and binlogignoredb configurations for any of the replication channel groups.
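The default sqllist values shown above follow a pattern: the ndbmysqld pods are allocated in blocks of two per channel group per remote site. The following sketch reproduces the pod FQDNs for one channel group; the namespace and counts are illustrative:
# Illustrative sketch: generate the default sqllist entries for one channel group.
namespace=occne-cndbtier
channel_groups=3     # total replication channel groups
remote_sites=3       # total sites minus one
group=1              # channel group to print (1-based)
for (( site=0; site<remote_sites; site++ )); do
  for (( k=0; k<2; k++ )); do
    pod=$(( site * 2 * channel_groups + 2 * (group - 1) + k ))
    echo "ndbmysqld-${pod}.ndbmysqldsvc.${namespace}.svc.cluster.local"
  done
done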
- Use the following sample code to configure the replication services for each mate site for multiple replication channel groups.
  Each replication channel group is configured and monitored using a separate replication service pod. Therefore, two replication service pods are required to configure and monitor two replication channel groups, where each replication channel group has one ACTIVE and one STANDBY replication channel.
  Note:
  This step provides examples to configure two and three replication channel groups. If there are more replication channel groups, then extend the configurations accordingly.
  Example for two replication channel groups:
db-replication-svc:
  enabled: true
  dbreplsvcdeployments:
    # if pod prefix is given then use the unique smaller name for this db replication service.
    - name: cluster1-cluster2-repl-grp1
      enabled: true
      mysql:
        dbtierservice: "mysql-connectivity-service"
        dbtierreplservice: "ndbmysqldsvc"
        # if cndbtier, use CLUSTER-IP from the ndbmysqldsvc-0 LoadBalancer service
        primaryhost: "ndbmysqld-0.ndbmysqldsvc.occne-cndbtier.svc.cluster.local"
        port: "3306"
        # if cndbtier, use EXTERNAL-IP from the ndbmysqldsvc-0 LoadBalancer service
        primarysignalhost: "<primary_sql_signal_host_ip_grp1>"
        primarysignalhostmultusconfig:
          multusEnabled: <multus_enable>
          networkAttachmentDefinationApiName: "k8s.v1.cni.cncf.io"
          networkAttachmentDefinationTagName: ""
        # serverid is unique; retrieve it for the site being configured
        primaryhostserverid: "<primary_host_server_id_grp1>"
        # if cndbtier, use CLUSTER-IP from the ndbmysqldsvc-1 LoadBalancer service
        secondaryhost: "ndbmysqld-1.ndbmysqldsvc.occne-cndbtier.svc.cluster.local"
        # if cndbtier, use EXTERNAL-IP from the ndbmysqldsvc-1 LoadBalancer service
        secondarysignalhost: "<secondary_sql_signal_host_ip_grp1>"
        secondarysignalhostmultusconfig:
          multusEnabled: <multus_enable>
          networkAttachmentDefinationApiName: "k8s.v1.cni.cncf.io"
          networkAttachmentDefinationTagName: ""
        # serverid is unique; retrieve it for the site being configured
        secondaryhostserverid: "<secondary_host_server_id_grp1>"
      replication:
        # Local site replication service LoadBalancer ip can be configured.
        localsiteip: ""
        localsiteport: "80"
        channelgroupid: "1"
        matesitename: "cluster2"
        # if cndbtier site1 is installing, use ""
        # else if cndbtier site2 is installing, use EXTERNAL-IP from site1 ${OCCNE_MATE_SITE_NAME}-${OCCNE_SITE_NAME}-replication-svc LoadBalancer service
        # else if cndbtier site3 is installing, use EXTERNAL-IP from site1 ${OCCNE_SECOND_MATE_SITE_NAME}-${OCCNE_SITE_NAME}-replication-svc LoadBalancer service
        # else if cndbtier site4 is installing, use EXTERNAL-IP from site1 ${OCCNE_THIRD_MATE_SITE_NAME}-${OCCNE_SITE_NAME}-replication-svc LoadBalancer service
        # NOTE: if using an IPv6 address, enclose it in square brackets, like this: "[2606:b400:605:b819:4631:92ff:fe73:9dc1]"
        remotesiteip: "<${OCCNE_MATE_REPLICATION_SVC_GRP_1}>"
        remotesiteport: "80"
    - name: cluster1-cluster2-repl-grp2
      enabled: true
      mysql:
        dbtierservice: "mysql-connectivity-service"
        dbtierreplservice: "ndbmysqldsvc"
        # if cndbtier, use CLUSTER-IP from the ndbmysqldsvc-2 LoadBalancer service
        primaryhost: "ndbmysqld-2.ndbmysqldsvc.occne-cndbtier.svc.cluster.local"
        port: "3306"
        # if cndbtier, use EXTERNAL-IP from the ndbmysqldsvc-2 LoadBalancer service
        primarysignalhost: "<primary_sql_signal_host_ip_grp2>"
        primarysignalhostmultusconfig:
          multusEnabled: <multus_enable>
          networkAttachmentDefinationApiName: "k8s.v1.cni.cncf.io"
          networkAttachmentDefinationTagName: ""
        # serverid is unique; retrieve it for the site being configured
        primaryhostserverid: "<primary_host_server_id_grp2>"
        # if cndbtier, use CLUSTER-IP from the ndbmysqldsvc-3 LoadBalancer service
        secondaryhost: "ndbmysqld-3.ndbmysqldsvc.occne-cndbtier.svc.cluster.local"
        # if cndbtier, use EXTERNAL-IP from the ndbmysqldsvc-3 LoadBalancer service
        secondarysignalhost: "<secondary_sql_signal_host_ip_grp2>"
        secondarysignalhostmultusconfig:
          multusEnabled: <multus_enable>
          networkAttachmentDefinationApiName: "k8s.v1.cni.cncf.io"
          networkAttachmentDefinationTagName: ""
        # serverid is unique; retrieve it for the site being configured
        secondaryhostserverid: "<secondary_host_server_id_grp2>"
      replication:
        # Local site replication service LoadBalancer ip can be configured.
        localsiteip: ""
        localsiteport: "80"
        channelgroupid: "2"
        matesitename: "cluster2"
        # if cndbtier site1 is installing, use ""
        # else if cndbtier site2 is installing, use EXTERNAL-IP from site1 ${OCCNE_MATE_SITE_NAME}-${OCCNE_SITE_NAME}-replication-svc LoadBalancer service
        # else if cndbtier site3 is installing, use EXTERNAL-IP from site1 ${OCCNE_SECOND_MATE_SITE_NAME}-${OCCNE_SITE_NAME}-replication-svc LoadBalancer service
        # else if cndbtier site4 is installing, use EXTERNAL-IP from site1 ${OCCNE_THIRD_MATE_SITE_NAME}-${OCCNE_SITE_NAME}-replication-svc LoadBalancer service
        # NOTE: if using an IPv6 address, enclose it in square brackets, like this: "[2606:b400:605:b819:4631:92ff:fe73:9dc1]"
        remotesiteip: "<${OCCNE_MATE_REPLICATION_SVC_GRP_2}>"
        remotesiteport: "80"
Similarly, two replication service pods are used for the replication of data using multiple replication channel groups with each remote site.
For example:
- Two replication service pods are required for replicating the data with one remote site in a two site georeplication setup.
- Four replication service pods are required for replicating the data with two remote sites in a three site georeplication setup.
- Six replication service pods are required for replicating the data with three remote sites in a four site georeplication setup.
Example for three replication channel groups:
db-replication-svc:
  enabled: true
  dbreplsvcdeployments:
    # if pod prefix is given then use the unique smaller name for this db replication service.
    - name: cluster1-cluster2-repl-grp1
      enabled: true
      mysql:
        dbtierservice: "mysql-connectivity-service"
        dbtierreplservice: "ndbmysqldsvc"
        # if cndbtier, use CLUSTER-IP from the ndbmysqldsvc-0 LoadBalancer service
        primaryhost: "ndbmysqld-0.ndbmysqldsvc.occne-cndbtier.svc.cluster.local"
        port: "3306"
        # if cndbtier, use EXTERNAL-IP from the ndbmysqldsvc-0 LoadBalancer service
        primarysignalhost: "<primary_sql_signal_host_ip_grp1>"
        primarysignalhostmultusconfig:
          multusEnabled: <multus_enable>
          networkAttachmentDefinationApiName: "k8s.v1.cni.cncf.io"
          networkAttachmentDefinationTagName: ""
        # serverid is unique; retrieve it for the site being configured
        primaryhostserverid: "<primary_host_server_id_grp1>"
        # if cndbtier, use CLUSTER-IP from the ndbmysqldsvc-1 LoadBalancer service
        secondaryhost: "ndbmysqld-1.ndbmysqldsvc.occne-cndbtier.svc.cluster.local"
        # if cndbtier, use EXTERNAL-IP from the ndbmysqldsvc-1 LoadBalancer service
        secondarysignalhost: "<secondary_sql_signal_host_ip_grp1>"
        secondarysignalhostmultusconfig:
          multusEnabled: <multus_enable>
          networkAttachmentDefinationApiName: "k8s.v1.cni.cncf.io"
          networkAttachmentDefinationTagName: ""
        # serverid is unique; retrieve it for the site being configured
        secondaryhostserverid: "<secondary_host_server_id_grp1>"
      replication:
        # Local site replication service LoadBalancer ip can be configured.
        localsiteip: ""
        localsiteport: "80"
        channelgroupid: "1"
        matesitename: "cluster2"
        # if cndbtier site1 is installing, use ""
        # else if cndbtier site2 is installing, use EXTERNAL-IP from site1 ${OCCNE_MATE_SITE_NAME}-${OCCNE_SITE_NAME}-replication-svc LoadBalancer service
        # else if cndbtier site3 is installing, use EXTERNAL-IP from site1 ${OCCNE_SECOND_MATE_SITE_NAME}-${OCCNE_SITE_NAME}-replication-svc LoadBalancer service
        # else if cndbtier site4 is installing, use EXTERNAL-IP from site1 ${OCCNE_THIRD_MATE_SITE_NAME}-${OCCNE_SITE_NAME}-replication-svc LoadBalancer service
        # NOTE: if using an IPv6 address, enclose it in square brackets, like this: "[2606:b400:605:b819:4631:92ff:fe73:9dc1]"
        remotesiteip: "<${OCCNE_MATE_REPLICATION_SVC_GRP_1}>"
        remotesiteport: "80"
    - name: cluster1-cluster2-repl-grp2
      enabled: true
      mysql:
        dbtierservice: "mysql-connectivity-service"
        dbtierreplservice: "ndbmysqldsvc"
        # if cndbtier, use CLUSTER-IP from the ndbmysqldsvc-2 LoadBalancer service
        primaryhost: "ndbmysqld-2.ndbmysqldsvc.occne-cndbtier.svc.cluster.local"
        port: "3306"
        # if cndbtier, use EXTERNAL-IP from the ndbmysqldsvc-2 LoadBalancer service
        primarysignalhost: "<primary_sql_signal_host_ip_grp2>"
        primarysignalhostmultusconfig:
          multusEnabled: <multus_enable>
          networkAttachmentDefinationApiName: "k8s.v1.cni.cncf.io"
          networkAttachmentDefinationTagName: ""
        # serverid is unique; retrieve it for the site being configured
        primaryhostserverid: "<primary_host_server_id_grp2>"
        # if cndbtier, use CLUSTER-IP from the ndbmysqldsvc-3 LoadBalancer service
        secondaryhost: "ndbmysqld-3.ndbmysqldsvc.occne-cndbtier.svc.cluster.local"
        # if cndbtier, use EXTERNAL-IP from the ndbmysqldsvc-3 LoadBalancer service
        secondarysignalhost: "<secondary_sql_signal_host_ip_grp2>"
        secondarysignalhostmultusconfig:
          multusEnabled: <multus_enable>
          networkAttachmentDefinationApiName: "k8s.v1.cni.cncf.io"
          networkAttachmentDefinationTagName: ""
        # serverid is unique; retrieve it for the site being configured
        secondaryhostserverid: "<secondary_host_server_id_grp2>"
      replication:
        # Local site replication service LoadBalancer ip can be configured.
        localsiteip: ""
        localsiteport: "80"
        channelgroupid: "2"
        matesitename: "cluster2"
        # if cndbtier site1 is installing, use ""
        # else if cndbtier site2 is installing, use EXTERNAL-IP from site1 ${OCCNE_MATE_SITE_NAME}-${OCCNE_SITE_NAME}-replication-svc LoadBalancer service
        # else if cndbtier site3 is installing, use EXTERNAL-IP from site1 ${OCCNE_SECOND_MATE_SITE_NAME}-${OCCNE_SITE_NAME}-replication-svc LoadBalancer service
        # else if cndbtier site4 is installing, use EXTERNAL-IP from site1 ${OCCNE_THIRD_MATE_SITE_NAME}-${OCCNE_SITE_NAME}-replication-svc LoadBalancer service
        # NOTE: if using an IPv6 address, enclose it in square brackets, like this: "[2606:b400:605:b819:4631:92ff:fe73:9dc1]"
        remotesiteip: "<${OCCNE_MATE_REPLICATION_SVC_GRP_2}>"
        remotesiteport: "80"
    - name: cluster1-cluster2-repl-grp3
      enabled: true
      mysql:
        dbtierservice: "mysql-connectivity-service"
        dbtierreplservice: "ndbmysqldsvc"
        # if cndbtier, use CLUSTER-IP from the ndbmysqldsvc-4 LoadBalancer service
        primaryhost: "ndbmysqld-4.ndbmysqldsvc.occne-cndbtier.svc.cluster.local"
        port: "3306"
        # if cndbtier, use EXTERNAL-IP from the ndbmysqldsvc-4 LoadBalancer service
        primarysignalhost: "<primary_sql_signal_host_ip_grp3>"
        primarysignalhostmultusconfig:
          multusEnabled: <multus_enable>
          networkAttachmentDefinationApiName: "k8s.v1.cni.cncf.io"
          networkAttachmentDefinationTagName: ""
        # serverid is unique; retrieve it for the site being configured
        primaryhostserverid: "<primary_host_server_id_grp3>"
        # if cndbtier, use CLUSTER-IP from the ndbmysqldsvc-5 LoadBalancer service
        secondaryhost: "ndbmysqld-5.ndbmysqldsvc.occne-cndbtier.svc.cluster.local"
        # if cndbtier, use EXTERNAL-IP from the ndbmysqldsvc-5 LoadBalancer service
        secondarysignalhost: "<secondary_sql_signal_host_ip_grp3>"
        secondarysignalhostmultusconfig:
          multusEnabled: <multus_enable>
          networkAttachmentDefinationApiName: "k8s.v1.cni.cncf.io"
          networkAttachmentDefinationTagName: ""
        # serverid is unique; retrieve it for the site being configured
        secondaryhostserverid: "<secondary_host_server_id_grp3>"
      replication:
        # Local site replication service LoadBalancer ip can be configured.
        localsiteip: ""
        localsiteport: "80"
        channelgroupid: "3"
        matesitename: "cluster2"
        # if cndbtier site1 is installing, use ""
        # else if cndbtier site2 is installing, use EXTERNAL-IP from site1 ${OCCNE_MATE_SITE_NAME}-${OCCNE_SITE_NAME}-replication-svc LoadBalancer service
        # else if cndbtier site3 is installing, use EXTERNAL-IP from site1 ${OCCNE_SECOND_MATE_SITE_NAME}-${OCCNE_SITE_NAME}-replication-svc LoadBalancer service
        # else if cndbtier site4 is installing, use EXTERNAL-IP from site1 ${OCCNE_THIRD_MATE_SITE_NAME}-${OCCNE_SITE_NAME}-replication-svc LoadBalancer service
        # NOTE: if using an IPv6 address, enclose it in square brackets, like this: "[2606:b400:605:b819:4631:92ff:fe73:9dc1]"
        remotesiteip: "<${OCCNE_MATE_REPLICATION_SVC_GRP_3}>"
        remotesiteport: "80"
Similarly, three replication service pods are used for the replication of data using multiple replication channel groups with each remote site.
For example:
- Three replication service pods are required for replicating the data with one remote site in a two site georeplication setup.
- Six replication service pods are required for replicating the data with two remote sites in a three site georeplication setup.
- Nine replication service pods are required for replicating the data with three remote sites in a four site georeplication setup.
Note:
- cnDBTier multisite deployments do not support enabling multiple replication channel groups on one site and disabling them on another.
- When multiple replication channel groups are enabled and configured, you must create each NF user in each cnDBTier cluster of the multi-cluster deployment.
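As a quick sanity check after the site is deployed, you can count the replication service pods and compare the result against the expected number (replication channel groups multiplied by remote sites). The grep pattern assumes deployment names such as cluster1-cluster2-repl-grp1, as in the examples above:
# Count the replication service pods; expect channel groups x remote sites.
$ kubectl get pods -n ${OCCNE_NAMESPACE} | grep repl-grp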
2.3.2 Installation Task
This section describes the procedure to install cnDBTier.
Note:
- Before installing cnDBTier, ensure that you complete the Prerequisites and Preinstallation Tasks.
- cnDBTier provides a default custom values file (occndbtier_custom_values_${OCCNE_VERSION}.yaml) as part of the cnDBTier CSAR package. This file has the default configurations that can be changed as per customer requirements. Depending on the CSAR package type, the default occndbtier_custom_values_${OCCNE_VERSION}.yaml file is available in one of the following locations:
  - "/Artifacts/Scripts/"
  - "/Scripts/"
- Each NF customizes the occndbtier_custom_values_${OCCNE_VERSION}.yaml file as per its traffic and performance requirements and provides the customized cnDBTier occndbtier_custom_values_${OCCNE_VERSION}.yaml file as part of its CSAR package. Ensure that you use the NF-specific occndbtier_custom_values_${OCCNE_VERSION}.yaml file to install cnDBTier.
- Retain the value of the replication service ports as "80" in the custom values file and ensure that you don't change it.
- Disable the LockPagesInMainMemory parameter in the custom values file as the cnDBTier pods run in non-root user mode.
- Run the following command to copy the NF-specific occndbtier_custom_values_${OCCNE_VERSION}.yaml file to /var/occne/cluster/${OCCNE_CLUSTER}/occndbtier/custom_values.yaml:
  $ cp occndbtier_custom_values_${OCCNE_VERSION}.yaml /var/occne/cluster/${OCCNE_CLUSTER}/occndbtier/custom_values.yaml
- Update the custom_values.yaml file if required:
  $ vi /var/occne/cluster/${OCCNE_CLUSTER}/occndbtier/custom_values.yaml
For information about the frequently used configuration parameters, see Customizing cnDBTier. For a detailed list of MySQL configuration parameters, see MySQL Documentation.
- Navigate to the root directory of cnDBTier:
  $ cd /var/occne/cluster/${OCCNE_CLUSTER}/
- Run the Helm install command to install cnDBTier:
  $ helm install mysql-cluster --namespace ${OCCNE_NAMESPACE} occndbtier -f occndbtier/custom_values.yaml
Sample output after successful installation:
NAME: mysql-cluster
LAST DEPLOYED: Mon May 20 10:22:58 2024
NAMESPACE: occne-cndbtier
STATUS: deployed
REVISION: 1
If the Helm installation fails, raise a service request. See MOS.
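Before raising a service request, it can help to capture the release status and pod states for the support ticket; these are standard Helm and kubectl checks:
# Capture the Helm release status and the cnDBTier pod states.
$ helm status mysql-cluster --namespace ${OCCNE_NAMESPACE}
$ kubectl get pods -n ${OCCNE_NAMESPACE}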
- Run the postinstallation script only if there is no Upgrade Service Account configured for cnDBTier. The postinstallation script patches upgradeStrategy.
  Note:
  Replace the values of the environment variables in the following commands with the values of your cluster.
  export OCCNE_NAMESPACE="occne-cndbtier"
  export NDB_STS_NAME="ndbmtd"
  export API_STS_NAME="ndbmysqld"
  export APP_STS_NAME="ndbappmysqld"
  occndbtier/files/hooks.sh --post-install
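To confirm that the script patched the StatefulSets, you can inspect the update strategy afterward. This is a generic kubectl check; the expected value depends on what hooks.sh configures in your release:
# Inspect the update strategy of a patched StatefulSet.
$ kubectl get statefulset ${NDB_STS_NAME} -n ${OCCNE_NAMESPACE} -o jsonpath='{.spec.updateStrategy.type}{"\n"}'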
Note:
The db-backup-manager-svc is designed to automatically restart in case of errors. Therefore, when the db-backup-manager-svc encounters a temporary error during the installation process, it may undergo several restarts. When cnDBTier reaches a stable state, the db-backup-manager-svc pod operates normally without any further restarts.
- Run the following command to verify if cnDBTier is installed successfully:
$ kubectl -n ${OCCNE_NAMESPACE} exec ndbmgmd-0 -- ndb_mgm -e show
Sample output:
Connected to Management Server at: localhost:1186
Cluster Configuration
---------------------
[ndbd(NDB)]     2 node(s)
id=1    @10.233.73.51  (mysql-8.4.2 ndb-8.4.2, Nodegroup: 0)
id=2    @10.233.74.56  (mysql-8.4.2 ndb-8.4.2, Nodegroup: 0, *)
[ndb_mgmd(MGM)] 2 node(s)
id=49   @10.233.74.55  (mysql-8.4.2 ndb-8.4.2)
id=50   @10.233.84.60  (mysql-8.4.2 ndb-8.4.2)
[mysqld(API)]   10 node(s)
id=56   @10.233.84.59  (mysql-8.4.2 ndb-8.4.2)
id=57   @10.233.78.63  (mysql-8.4.2 ndb-8.4.2)
id=70   @10.233.78.62  (mysql-8.4.2 ndb-8.4.2)
id=71   @10.233.73.53  (mysql-8.4.2 ndb-8.4.2)
id=72 (not connected, accepting connect from ndbappmysqld-2.ndbappmysqldsvc.samar1.svc.occne1-arjun-sreenivasalu)
id=73 (not connected, accepting connect from ndbappmysqld-3.ndbappmysqldsvc.samar1.svc.occne1-arjun-sreenivasalu)
id=222 (not connected, accepting connect from any host)
id=223 (not connected, accepting connect from any host)
id=224 (not connected, accepting connect from any host)
id=225 (not connected, accepting connect from any host)
Note:
Node IDs 222 to 225 in the sample output are shown as "not connected" because they are added as empty slot IDs that are used for georeplication recovery.
- Alternatively, you can run the following Helm test command to verify if the cnDBTier services are installed successfully:
$ helm test mysql-cluster --namespace ${OCCNE_NAMESPACE}
Sample output:
NAME: mysql-cluster
LAST DEPLOYED: Mon May 20 10:22:58 2024
NAMESPACE: occne-cndbtier
STATUS: deployed
REVISION: 1
TEST SUITE: mysql-cluster-node-connection-test
Last Started: Mon May 20 14:15:18 2024
Last Completed: Mon May 20 14:17:58 2024
Phase: Succeeded
2.3.3 Postinstallation Tasks
This section explains the postinstallation tasks for cnDBTier.
After CNE and cnDBTier installation, use these procedures to configure cnDBTier passwords, load cnDBTier alerts to Prometheus and Prometheus Operator, and import Grafana dashboards.
Prerequisites
Before performing the postinstallation tasks, ensure that the following prerequisites are met:
- All the Common Services are installed.
- cnDBTier is installed.
- Commands are run on the Bastion Host.
2.3.3.1 Changing cnDBTier Passwords
This section provides information about changing cnDBTier passwords.
After installing cnDBTier, it is recommended to change all the cnDBTier passwords, including the DB root password, which is randomly generated at the time of installation. Refer to the "Changing cnDBTier Passwords" section in the Oracle Communications Cloud Native Core, cnDBTier User Guide to change cnDBTier passwords.
2.3.3.2 Updating cnDBTier Alerts in Prometheus
This section provides the procedure to update cnDBTier alert rules configuration in Prometheus.
If you do not want the Prometheus Operator to manage the cnDBTier alert rules (for example, when the rules are loaded manually as described in the following sections), disable the Prometheus Operator alerts by setting the following parameters in the values.yaml file:
prometheusOperator:
  alerts:
    enable: false
2.3.3.3 Loading cnDBTier Alerts to OSO Prometheus
This section describes the procedure to load cnDBTier alert rules to OSO Prometheus.
- Run the following command to take a backup of the current Prometheus configuration map:
$ kubectl get configmaps <OSO-prometheus-configmap-name> -o yaml -n <namespace> > /tmp/tempPrometheusConfig.yaml
- Check and add the cnDBTier alert file name in the Prometheus configuration map:
  $ sed -i '/etc\/config\/alertscndbtier/d' /tmp/tempPrometheusConfig.yaml
  $ sed -i '/rule_files:/a\ \- /etc/config/alertscndbtier' /tmp/tempPrometheusConfig.yaml
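Before replacing the configuration map, you can confirm that the alert file was added under rule_files; this check is not part of the official procedure:
# Confirm the alertscndbtier entry under rule_files.
$ grep -A2 'rule_files:' /tmp/tempPrometheusConfig.yaml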
- Update the configuration map with the updated file name of the alertscndbtier alert file:
  $ kubectl -n <namespace> replace configmap <OSO-prometheus-configmap-name> -f /tmp/tempPrometheusConfig.yaml
- Add the cnDBTier alert rules to the configuration map, under the file name of the cnDBTier alert file:
  $ export OCCNE_VERSION=<OCCNE_VERSION>
  $ kubectl patch configmap <OSO-prometheus-configmap-name> -n <namespace> --type merge --patch "$(cat occndbtier_alertrules_${OCCNE_VERSION}.yaml)"
  where, OCCNE_VERSION is the cnDBTier version. For example: 24.3.1
  Note:
  You can find the occndbtier_alertrules_${OCCNE_VERSION}.yaml file in the Artifacts/Scripts directory.
2.3.3.4 Loading cnDBTier Alerts to Prometheus Operator
This section describes the procedure to load cnDBTier alert rules to Prometheus Operator.
- Run the following commands to add the cnDBTier alert rules in prometheusrules using the occndbtier_alertrules_promha_${OCCNE_VERSION}.yaml file:
  $ export OCCNE_VERSION=<OCCNE_VERSION>
  $ kubectl apply -f /var/occne/cluster/${OCCNE_CLUSTER}/csar_occndbtier_${OCCNE_VERSION}/Artifacts/Scripts/occndbtier_alertrules_promha_${OCCNE_VERSION}.yaml --namespace <namespace>
  where, OCCNE_VERSION is the cnDBTier version. For example: 24.3.1
  For example:
  $ export OCCNE_VERSION=24.3.1
  $ kubectl apply -f /var/occne/cluster/${OCCNE_CLUSTER}/csar_occndbtier_${OCCNE_VERSION}/Artifacts/Scripts/occndbtier_alertrules_promha_${OCCNE_VERSION}.yaml --namespace occne-cndbtier
Sample output:
prometheusrule.monitoring.coreos.com/occndbtier-alertrules created
Note:
The custom role (role: cnc-alerting-rules) in occndbtier_alertrules_promha_${OCCNE_VERSION}.yaml must be configured in the kube-prometheus-custom-values.yaml file as well.
- Run the following command to check the addition of the cnDBTier alert file to the prometheus rules:
$ kubectl get prometheusrules --namespace <namespace>
For example:
$ kubectl get prometheusrules --namespace occne-cndbtier
Sample output:
NAME                        AGE
occndbtier-alerting-rules   1m
2.3.3.5 Configuring cnDBTier Alerts in OCI
The following procedure describes how to configure the cnDBTier alerts for OCI. OCI supports metric expressions written in MQL (Metric Query Language) and therefore requires a separate cnDBTier alert file for configuring alerts in the OCI observability platform.
Note:
The occndbtier_oci_alertrules_${OCCNE_VERSION}.zip file is available in the Scripts folder of the CSAR package.
- Run the following command to extract the occndbtier_oci_alertrules_${OCCNE_VERSION}.zip file:
  $ unzip occndbtier_oci_alertrules_${OCCNE_VERSION}.zip
  You will find the occndbtier_oci folder within the extracted occndbtier_oci_alertrules_${OCCNE_VERSION}.zip file.
- Open the occndbtier_oci folder and look for the notifications.tf file.
- Open the notifications.tf file and update the endpoint parameter with the email ID of the user.
- Log in to the OCI Console.
Note:
For more information about logging in to the OCI, see Signing In to the OCI Console.
- Open the navigation menu and select Developer Services.
The Developer Services window appears in the right pane.
- Under the Developer Services, select Resource Manager.
- Under Resource Manager, select Stacks.
The Stacks window appears.
- Click Create Stack.
- Select the default My Configuration radio button.
- Under Stack configuration, select folder and upload the occndbtier_oci folder.
- Select the latest Terraform version from the Terraform version drop-down.
- Click Next.
The Edit Stack screen appears.
- Enter the required inputs to create the cnDBTier alerts or alarms.
- Click Save and Run Apply.
- Verify that the alarms are created in the Alarm Definitions screen (OCI Console > Observability & Management > Monitoring > Alarm Definitions).
The required inputs are:
- Alarms Configuration
  - Compartment Name: Choose the name of the compartment from the drop-down list.
  - Metric namespace: The metric namespace that the user provided while deploying the OCI Adaptors.
  - Topic Name: Any user-configurable name. It must contain fewer than 256 characters. Only alphanumeric characters plus hyphens (-) and underscores (_) are allowed.
  - Message Format: Retain this as ONS_OPTIMIZED (this is pre-populated).
  - Alarm is_enabled: Retain this as True (this is pre-populated).
For more information, see Oracle Communications Cloud Native Core OCI Adaptor, NF Deployment in OCI Guide.
2.3.3.6 Importing Grafana Dashboards into Grafana GUI
This section provides information about importing Grafana dashboards into Grafana GUI.
$ export OCCNE_VERSION=<OCCNE_VERSION>
where, OCCNE_VERSION is the cnDBTier version. For example: 24.3.1
- For Prometheus, use the occndbtier_grafana1.8_dashboard_${OCCNE_VERSION}.json file to import the Grafana dashboards into the Grafana GUI.
- For Prometheus Operator, use the occndbtier_grafana_promha_dashboard_${OCCNE_VERSION}.json file to import the Grafana dashboards into the Grafana GUI.
2.3.3.7 Configuring cnDBTier Metrics Dashboard in OCI
This section describes the steps to upload the occndbtier_oci_dashboard_${OCCNE_VERSION}.json file to the OCI Logging Analytics Dashboard Service. As OCI doesn't support Grafana, it uses the Logging Analytics Dashboard Service for visualizing the metrics and logs.
Note:
The occndbtier_oci_dashboard_${OCCNE_VERSION}.json file is available in the Scripts folder of the CSAR package.
- Log in to OCI Console.
Note:
For more information about logging in to the OCI, see Signing In to the OCI Console.
- Open the navigation menu and click Observability & Management.
- Under Logging Analytics, click Dashboards.
The Dashboards page appears.
- Choose Compartment on the left pane.
- Click Import dashboards.
- Select and upload the occndbtier_oci_dashboard_${OCCNE_VERSION}.json file. Customize the following parameters of the JSON file before uploading it (see the sketch after this list):
  - ##COMPARTMENT_ID: The OCID of the compartment.
  - ##METRIC_NAMESPACE: The metrics namespace that the user provided while deploying the OCI adaptor.
  - ##K8_NAMESPACE: The Kubernetes namespace where cnDBTier is deployed.
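A minimal sketch of the substitution, assuming the placeholders appear verbatim in the JSON file; the compartment OCID and namespace values below are illustrative:
# Replace the placeholders in the dashboard JSON before uploading it.
$ sed -i -e 's/##COMPARTMENT_ID/ocid1.compartment.oc1..exampleuniqueID/' \
    -e 's/##METRIC_NAMESPACE/cndbtier_metrics/' \
    -e 's/##K8_NAMESPACE/occne-cndbtier/' \
    occndbtier_oci_dashboard_${OCCNE_VERSION}.json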
- On the Import dashboard page that appears, click Import.
You can view the imported dashboard and the metrics displayed on it.
Note:
cnDBTier organizes the panels or widgets into different dashboards to support the cnDBTier metrics, and all the dashboards are bundled into a single JSON file.
For more information, see Oracle Communications Cloud Native Core OCI Adaptor, NF Deployment in OCI Guide.