3 cnDBTier Features
This chapter describes the key features of cnDBTier.
3.1 Monitoring Cluster Events to Determine Data Loss in Clusters
cnDBTier provides the following REST APIs to monitor cluster events and to reset the cluster_restart_disconnect parameter:
- http://<base-uri>/db-tier/reset/parameter/cluster_restart_disconnect
- http://<base-uri>/db-tier/reset/cluster/{cluster-name}/parameter/cluster_restart_disconnect
- http://<base-uri>/db-tier/cluster/status
- http://<base-uri>/db-tier/cluster/status/events/{numberOfLastEvents}
- http://<base-uri>/db-tier/all/cluster/status/
- http://<base-uri>/db-tier/all/cluster/status/events/{numberOfLastEvents}
These APIs report the following cluster events, which can be used to determine data loss in clusters:
- Cluster install start
- Cluster restart
- Cluster checkpoint
- Cluster restore
- Cluster re-sync
- Cluster failure
Metrics
cnDBTier uses the existing db_tier_cluster_disconnect metric to record the number of times a cluster disconnects. For more information about this metric, see cnDBTier Node Status Metrics.
Alerts
cnDBTier uses the existing MYSQL_NDB_CLUSTER_DISCONNECT alert to inform about the data loss in the cluster. For more information about this alert, see cnDBTier Cluster Status Alerts.
3.2 Storing NDB Logs in PVC
cnDBTier stores all NDB pod (ndbmgmd, ndbmtd, ndbmysqld, and ndbappmysqld) logs in PVC. These logs remain persistent even when the pods restart and can be used to debug any data node related issues. This feature is enabled in cnDBTier by default and doesn't require any configuration.
ndbmtd pod logs: ndbmtd pod logs are stored in the data_node_<node_id>.log file in the /var/occnedb/dbdata/data directory. When the size of a log file exceeds 200 MB, the log file is archived and stored in the same directory in the following file name format: data_node_<node_id>_<file_creation_timestamp>.log.gz. The system archives a maximum of five such log files and deletes the older ones.
ndbmysqld or ndbappmysqld server logs: mysqld server logs are stored in the mysqld.log file in the /var/occnedb/mysql directory. When the size of a log file exceeds 200 MB, the log file is archived and stored in the same directory in the following file name format: mysqld.log.old.<serial_number> (for example, mysqld.log.old.1). The system archives a maximum of five such log files and deletes the older ones.
ndbmysqld general logs: If /global/api/general_log is set to true, ndbmysqld pod logs are stored in the ndbmysqld_general_log.log file. When the size of a log file exceeds 200 MB, the log file is archived and stored in the same directory in the following file name format: ndbmysqld_general_log.log.old.<serial_number> (for example, ndbmysqld_general_log.log.old.1). The system archives a maximum of five such log files and deletes the older ones.
ndbmgmd cluster logs: ndbmgmd pod logs are stored in the ndbmgmd_cluster.log file in the /var/occnedb/mysqlndbcluster directory. When the size of a log file exceeds 10 MB, the log file is archived and stored in the same directory in the following file name format: cluster.log.<serial_number> (for example, cluster.log.1). The system archives a maximum of ten such log files and deletes the older ones.
3.3 Enhanced Georeplication Recovery using Parallel Backup Transfer and Restore
This feature enhances georeplication recovery as follows:
- Avoids additional backup compression.
- Supports parallel backup transfer from the healthy cluster to the georeplication recovery cluster.
- Restores data node backups in parallel, as and when the backups are transferred from the healthy cluster to the georeplication recovery cluster.
The following tables show the sample cluster configuration used for testing this feature and the georeplication recovery times observed with parallel backup transfer and restore enabled and disabled:
Table 3-1 Cluster Configuration
Cluster Detail | Configuration |
---|---|
Number of data nodes in each cluster | 8 |
CPU count of backup manager service | 1 |
RAM size of backup manager service | 1Gi |
CPU count of each node in backup executor service | 1 |
RAM size of each node in backup executor service | 1Gi |
CPU count of replication service | 2 |
RAM size of replication service | 2Gi |
Table 3-2 Georeplication Recovery Time when Parallel Backup is Enabled or Disabled
Data Size Per Node (GB) | Parallel Backup Transfer and Restore (Enabled or Disabled) | Time Taken for Georeplication Recovery (Seconds) | Improvement in Performance (%) |
---|---|---|---|
5 | Disabled | 1255 | NA |
5 | Enabled | 1125 | 10.92 |
10 | Disabled | 1858 | NA |
10 | Enabled | 1565 | 17.12 |
15 | Disabled | 2503 | NA |
15 | Enabled | 1997 | 22.49 |
20 | Disabled | 3074 | NA |
20 | Enabled | 2439 | 23.04 |
25 | Disabled | 3723 | NA |
25 | Enabled | 2859 | 26.25 |
Note:
The performance measures provided in the table are as per the sample data used for testing the feature on a sample cnDBTier environment, and are not the promised values. The actual performance varies from system to system depending on various factors including cluster configuration and data load.
Managing Parallel Backup Transfer and Restore
The following sections provide details about managing parallel backup transfer and restore:
You can enable or disable parallel backup transfer and restore by setting the /global/parallelbackuptransferandrestore/enable parameter to true or false respectively in the custom_values.yaml file. For more information about enabling and configuring this feature, see the "Customizing cnDBTier" section in Oracle Communications Cloud Native Core, cnDBTier Installation, Upgrade, and Fault Recovery Guide.
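For reference, the following is a minimal sketch of this setting in the custom_values.yaml file, assuming the parameter path maps to the YAML hierarchy in the same way as the other cnDBTier global parameters:
global:
  parallelbackuptransferandrestore:
    # Set to true to enable parallel backup transfer and restore,
    # or false to disable it.
    enable: true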
Note:
If you want to enable or disable parallel backup transfer and restore after installing cnDBTier, update the parameter in the custom_values.yaml file and perform an upgrade with the updated custom_values.yaml file. For the procedure to upgrade cnDBTier, see Oracle Communications Cloud Native Core, cnDBTier Installation, Upgrade, and Fault Recovery Guide.
cnDBTier provides the following metrics to track the progress of backup transfers:
- db_tier_local_backup_transfer_progress
- db_tier_remote_backup_transfer_progress
3.4 Transparent Data Encryption (TDE)
Transparent Data Encryption (TDE) encrypts data at the storage layer, that is, the files stored on the disk or PVC of the data nodes. TDE encrypts and decrypts data dynamically as it is written to or read from the storage, without requiring any modifications to the application's code. This guarantees that the sensitive data stored in the database files on disk remains encrypted while at rest, offering a crucial security layer against unauthorized access, particularly in situations where physical security controls fail.
Table 3-3 Data Node Restart Time When TDE is Enabled or Disabled
Data Size Per Node (in GB) | TDE Enabled or Disabled | Average Restart Time (in seconds) | Increase in Restart Time when TDE is Enabled (in percentage) |
---|---|---|---|
10 | Disabled | 166.5 | NA |
10 | Enabled | 181.5 | 9% |
20.7 | Disabled | 216 | NA |
20.7 | Enabled | 242.5 | 12.27% |
30 | Disabled | 329 | NA |
30 | Enabled | 349.75 | 6.32% |
37 | Disabled | 376 | NA |
37 | Enabled | 397.5 | 5.72% |
Note:
Analysis and testing of this feature with sample data reveal that enabling TDE increases the restart time of data nodes by an average of 8.33%. However, the performance varies depending on your cnDBTier setup and data size. Ensure that you test and understand the performance impact while using TDE for data encryption.
Managing Transparent Data Encryption (TDE)
The following sections provide details about managing Transparent Data Encryption (TDE):
You can enable or disable the TDE feature using the /global/ndb/EncryptedFileSystem parameter in the custom_values.yaml file. For more information about enabling and disabling this feature, see the "Customizing cnDBTier" section in Oracle Communications Cloud Native Core, cnDBTier Installation, Upgrade, and Fault Recovery Guide.
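The following is a minimal sketch of this setting in the custom_values.yaml file, assuming the parameter path maps directly to the YAML hierarchy; confirm the expected value type in the "Customizing cnDBTier" section:
global:
  ndb:
    # Enables file system encryption (TDE) for the data nodes.
    # The boolean value shown here is an assumption; the parameter may
    # instead expect the NDB-style 0/1 setting.
    EncryptedFileSystem: true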
Before enabling TDE, you must create the necessary secrets that are used to encrypt the data in data nodes. For information about creating secrets for TDE, see the "Creating Secret" section in Oracle Communications Cloud Native Core, cnDBTier Installation, Upgrade, and Fault Recovery Guide.
3.5 Support for CNE Cloud Native Load Balancer (CNLB)
To use CNLB with cnDBTier, perform the following configurations:
- Configure the ingress and egress destination subnet addresses in the cnlb.ini file before installing CNE. For more information about ingress and egress destination subnet addresses, see the Ingress and Egress Communication Over External IPs section.
- Configure the required traffic segregation details in the custom_values.yaml file before installing cnDBTier.
Managing Cloud Native Load Balancer (CNLB)
The following sections provide details on managing the CNLB feature:
As CNLB is powered by CNE, you can enable or disable CNLB while installing CNE. For more information about enabling and configuring CNLB, see Oracle Communications Cloud Native Core, Cloud Native Environment Installation, Upgrade, and Fault Recovery Guide.
- To use network segregation for active and standby replication channels, configure the required CNLB annotations for ndbmysqld pods in the custom_values.yaml file.
- To allow communication between local and remote site replication pods over a separate network, configure CNLB ingress or egress annotations for the db-replication-svc pods in the custom_values.yaml file.
3.6 Multithreaded Applier (MTA)
The Multithreaded Applier (MTA) feature allows independent binlog transactions to be applied in parallel on a replica, thereby increasing the peak replication throughput. NDB replication is modified to support the generic MySQL server MTA mechanism.
Managing Multithreaded Applier (MTA)
The following sections provide details on managing the MTA feature:
You can enable or disable the MTA feature using the replica_parallel_workers parameter in the custom_values.yaml file. This feature is disabled when replica_parallel_workers is set to 1 and enabled when the parameter is set to a value greater than 1. This feature is disabled by default. For more information on this parameter, see the "Customizing cnDBTier" section in Oracle Communications Cloud Native Core, cnDBTier Installation, Upgrade, and Fault Recovery Guide.
You can refer to the following sample configurations to enable or disable MTA in the custom_values.yaml file:
Sample configuration to enable MTA:
global:
  additionalndbconfigurations:
    mysqld:
      replica_parallel_workers: 4
      binlog_transaction_dependency_tracking: "WRITESET"
      ndb_log_transaction_dependency: 'ON'
Sample configuration to disable MTA:
global:
  additionalndbconfigurations:
    mysqld:
      replica_parallel_workers: 1
      binlog_transaction_dependency_tracking: "COMMIT_ORDER"
      ndb_log_transaction_dependency: 'OFF'
For more information on the MTA feature, see MySQL documentation.
Additional configurations are not applicable to this feature.
3.7 Support for TLS
Transport Layer Security (TLS) is a cryptographic protocol designed to safeguard the communication between a client and a server. This protocol ensures that private and sensitive information such as passwords are transferred over encrypted communication channels. cnDBTier supports both TLS 1.2 and TLS 1.3.
- Georeplication between cnDBTier Sites: Earlier, cnDBTier required you to configure the replication SQL pods manually to establish Transport Layer Security (TLS) for georeplication between cnDBTier sites. With this feature, cnDBTier automates the process of configuring TLS for georeplication between cnDBTier sites. When the TLS feature is enabled for replication, cnDBTier performs or supports the following operations during replication:
  - The replication SQL pod uses the certificates provided or configured to establish an encrypted connection for georeplication. This encrypted connection remains throughout the life cycle of the replication. When the replication breaks and a switchover occurs, the standby replication SQL pod establishes a new replication channel using the given certificates and provides the encrypted connection.
  - cnDBTier reestablishes the TLS connection after a georeplication recovery. That is, when a georeplication recovery completes successfully, the system reestablishes the encrypted connection between the replication channels. This ensures that the TLS is kept intact even after a georeplication recovery.
  - cnDBTier supports maintaining separate certificates for each replication channel group and for each site replicating to another site.
  - cnDBTier supports maintaining a list of ciphers used for replication.
- Communication between cnDBTier and Network Functions: cnDBTier supports TLS for application SQL pods to establish secure connections for communication with Network Functions (NFs). When the TLS feature is enabled, cnDBTier performs or supports the following operations during communication with NFs:
  - The application SQL pod uses the certificates provided or configured to establish an encrypted connection for communication with NFs. This encrypted connection remains throughout the life cycle of the connection between the NF and cnDBTier.
  - cnDBTier reestablishes the TLS connection between NFs and cnDBTier after a georeplication recovery. That is, when a georeplication recovery completes successfully, the system reestablishes the encrypted connection between the NFs and cnDBTier. This ensures that the TLS is kept intact even after a georeplication recovery.
Managing TLS
The following sections provide details about managing TLS:
You can enable or disable TLS using the parameters in the custom_values.yaml file as follows (a configuration sketch follows this list):
- To enable TLS for secure georeplication, use the /global/tls/enable parameter in the custom_values.yaml file. You must also configure the other TLS parameters to provide the necessary certificates for the ndbmysqld pods in the custom_values.yaml file. For the procedure to enable TLS for georeplication, see Enabling TLS for Georeplication. For more information about the TLS parameters, see the "Customizing cnDBTier" section in Oracle Communications Cloud Native Core, cnDBTier Installation, Upgrade, and Fault Recovery Guide.
- To enable TLS for a secure connection between cnDBTier and NFs, use the /global/ndbappTLS/enable parameter in the custom_values.yaml file. You must also configure the other TLS parameters to provide the necessary certificates for the ndbappmysqld pods in the custom_values.yaml file. For the procedure to enable TLS for communication between cnDBTier and NFs, see Enabling TLS for Communication with NFs. For more information about the TLS parameters, see the "Customizing cnDBTier" section in Oracle Communications Cloud Native Core, cnDBTier Installation, Upgrade, and Fault Recovery Guide.
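For reference, the following is a minimal sketch of these two switches in the custom_values.yaml file, assuming the parameter paths map to the YAML hierarchy shown; the certificate-related parameters are omitted here:
global:
  tls:
    # Enables TLS for georeplication between cnDBTier sites (ndbmysqld pods).
    enable: true
  ndbappTLS:
    # Enables TLS for communication between cnDBTier and NFs (ndbappmysqld pods).
    enable: true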
You can modify the cnDBTier certificates that the system uses to establish encrypted connections for georeplication and communication with NFs. For procedures, see Modifying cnDBTier Certificates to Establish TLS Between Georeplication Sites and Modifying cnDBTier Certificates to Establish TLS for Communication with NFs.
3.8 Network Policies
Network Policies are an application-centric construct that allows cnDBTier pods to control or restrict incoming and outgoing network traffic. This ensures security and isolation of services within a Kubernetes cluster.
Network Policies create pod-level rules that specify how pods (containers) in a Kubernetes cluster communicate with each other and with external resources. cnDBTier applies Network Policies on the pods to enhance security and control both incoming and outgoing traffic effectively. For more information about Network Policies, see Kubernetes Network Policies documentation.
Enabling and Configuring Network Policies
You can enable or disable network policies for different pods using the network policy parameters in the custom_values.yaml file. For more information about enabling or disabling this feature for cnDBTier pods, see the "Customizing cnDBTier" section in Oracle Communications Cloud Native Core, cnDBTier Installation, Upgrade, and Fault Recovery Guide.
3.9 REST APIs for CNC Console
cnDBTier provides REST APIs that can be integrated with CNC Console to perform the following operations:
- Checking the status of cnDBTier cluster or georeplication.
- Checking the list of completed backups.
- Checking cnDBTier version.
- Checking the health status of cnDBTier services such as replication service, monitor service, data service, and backup manager service.
- Initiating on-demand backup.
- Performing georeplication recovery.
To integrate the cnDBTier REST APIs with CNC Console, configure the menuCncc.json file as shown in the following example:
{
"menu": {
"routeConfig": {
"home": {
"label": "Home",
"value": "home",
"isDefault": true
},
"commonServices": {
"label": "Common Services",
"value": "commonServices",
"isDefault": true
},
"services/{nf}/{service}": {
"label": "Services",
"value": "services"
},
"configurations/{nf}/{configType}": {
"label": "Configurations",
"value": "configurations"
}
},
"menuItems": [
{
"attr": {
"id": "cnDBTier",
"name": "cnDBTier",
"sequence": 10
},
"children": [
{
"attr": {
"id": "services/<NF Name>/dbconfig-backupList",
"name": "Backup List",
"sequence": 10
}
},
{
"attr": {
"id": "cnDBTierHealth",
"name": "cnDBTier Health",
"sequence": 20
},
"children": [
{
"attr": {
"id": "services/<NF Name>/dbconfig-backupHealthStatus",
"name": "Backup Manager Health Status",
"sequence": 100
}
},
{
"attr": {
"id": "services/<NF Name>/dbconfig-monitorHealthStatus",
"name": "Monitor Health Status",
"sequence": 110
}
},
{
"attr": {
"id": "services/<NF Name>/dbconfig-ndbHealthStatus",
"name": "NDB Health Status",
"sequence": 120
}
},
{
"attr": {
"id": "services/<NF Name>/dbconfig-replicationHealthStatus",
"name": "Replication Health Status",
"sequence": 130
}
}
]
},
{
"attr": {
"id": "services/<NF Name>/dbconfig-version",
"name": "cnDBTier Version",
"sequence": 30
}
},
{
"attr": {
"id": "services/<NF Name>/dbconfig-databaseStatisticsReport",
"name": "Database Statistics Report",
"sequence": 40
}
},
{
"attr": {
"id": "GeoreplicationRecovery",
"name": "Georeplication Recovery",
"sequence": 50
},
"children": [
{
"attr": {
"id": "services/<NF Name>/dbconfig-updateClusterAsFailed",
"name": "Update Cluster As Failed",
"sequence": 100
}
},
{
"attr": {
"id": "services/<NF Name>/dbconfig-startGeoreplicationRecovery",
"name": "Start Georeplication Recovery",
"sequence": 110
}
},
{
"attr": {
"id": "services/<NF Name>/dbconfig-georeplicationRecoveryStatus",
"name": "Georeplication Recovery Status",
"sequence": 120
}
}
]
},
{
"attr": {
"id": "configurations/<NF Name>/dbconfig-geoReplicationStatus",
"name": "Georeplication Status",
"sequence": 60
}
},
{
"attr": {
"id": "services/<NF Name>/dbconfig-clusterStatus",
"name": "Local Cluster Status",
"sequence": 70
}
},
{
"attr": {
"id": "services/<NF Name>/dbconfig-onDemandBackup",
"name": "On Demand Backup",
"sequence": 80
}
},
{
"attr": {
"id": "services/<NF Name>/dbconfig-heartBeatStatus",
"name": "Replication HeartBeat Status",
"sequence": 90
}
}
]
}
]
}
}
- For procedures to perform georeplication recovery using CNC Console, see the "Restoring Georeplication (GR) Failure" section in Oracle Communications Cloud Native Core, cnDBTier Installation, Upgrade, and Fault Recovery Guide.
- For information about CNC Console GUI, see Oracle Communications Cloud Native Configuration Console User Guide.
- For more information about integrating cnDBTier APIs on CNC Console for a specific NF, see the respective NF documents.
3.10 PVC Health Monitoring
Platform-level issues and hardware failures impact PVC health, which in turn impacts cnDBTier health. Before this feature, to identify any PVC issues, you had to wait for the MySQL cluster processes to log an error. With this feature, cnDBTier provides options to monitor the health of the PVCs that are mounted on cnDBTier pods.
Managing PVC Health Monitoring
The following sections provide details about managing PVC Health Monitoring:
You can enable or disable PVC Health Monitoring using the PVC health global parameters available in the custom_values.yaml file. For more information about enabling or disabling this feature, see the "Customizing cnDBTier" section in Oracle Communications Cloud Native Core, cnDBTier Installation, Upgrade, and Fault Recovery Guide.
PVC Health Monitoring facilitates each pod to monitor its PVC and pass the PVC health condition to the monitor service. The monitor service combines this data and produces metrics for each PVC's health. For information about PVC Health Monitoring metrics, see cnDBTier Metrics.
3.11 cnDBTier Automated Backup
The cnDBTier backup service creates and safely stores database backups periodically so that a relatively recent copy of the data is always available to recover from cnDBTier fault scenarios. The cnDBTier backup service works on each MySQL cluster data node. Each data node creates a backup of the hosted data and maintains a copy of the backup.
The automatic backup runs when all the database nodes are running. The retention period and frequency of the backup are configurable. The status of the database backups is stored in the database nodes and is available to the database monitor service to include in metrics. The following table describes these parameters and their default values.
Table 3-4 DB Backup Service
Parameter | Type | Default | Description |
---|---|---|---|
backup_period | integer | 1 | The number of days between each backup. |
num_backups_to_retain | integer | 3 | The number of backups to retain. Recommended number of backups is at least 3. This ensures at least one backup remains, in case the automated backup system force purges additional backups, due to backup size growth. |
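As a hedged illustration only, the sketch below shows how these two parameters might be set in the custom_values.yaml file; the exact YAML location of the backup manager settings is not documented here, so the nesting shown is an assumption:
# Hypothetical nesting for illustration only; see the "Customizing cnDBTier"
# section for the actual parameter locations.
db-backup-manager-svc:
  backup_period: 1           # days between automated backups
  num_backups_to_retain: 3   # retain at least 3 backups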
Monitoring Automated Backups
The status of the database backups is stored in the database nodes and is accessible to the database monitor service. The system maintains metrics and alerts to monitor and track automatic backups. Additionally, it triggers an alert when the current NDB cluster performs a backup. For information about automatic backup metrics, see cnDBTier Automated Backup Metrics. For information about automatic backup alerts, see cnDBTier Automated Backup Alerts.
REST APIs
cnDBTier provides the http://<base-uri>/db-tier/status/cluster/local/backup REST API to fetch the details of the data node backups that are in progress in the NDB cluster. For more information about this REST API, see cnDBTier Backup APIs.
Note:
MySQL doesn't allow schema changes in a cluster when a backup is in progress. Therefore, ensure that you don't make any schema changes in a cluster when a data node backup is in progress. You can use the http://<base-uri>/db-tier/status/cluster/local/backup API to:
- check if the current cluster is performing a database backup.
- retrieve the timestamp of the next scheduled routine backup.
- determine the time window for NF upgrades.
3.12 cnDBTier Password Encryption
cnDBTier provides the password encryption feature to encrypt the replication username and password stored in the database, ensuring that the passwords are secure and are not exposed. When the password encryption feature is enabled, the replication username and password are encrypted throughout the life cycle of cnDBTier unless the feature is disabled.
Managing cnDBTier Password Encryption
The following sections provide details about managing cnDBTier password encryption:
You can enable or disable the cnDBTier password encryption feature using the /global/encryption/enable parameter in the custom_values.yaml file. For more information about enabling and disabling this feature, see the "Customizing cnDBTier" section in Oracle Communications Cloud Native Core, cnDBTier Installation, Upgrade, and Fault Recovery Guide.
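The following is a minimal sketch of this setting in the custom_values.yaml file, assuming the parameter path maps directly to the YAML hierarchy:
global:
  encryption:
    # Enables encryption of the replication username and password.
    enable: true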
Before enabling cnDBTier password encryption, you must create the necessary secrets that are used to encrypt the passwords. For information about creating secrets for password encryption, see the "Creating Secret" section in Oracle Communications Cloud Native Core, cnDBTier Installation, Upgrade, and Fault Recovery Guide.
You can modify the encryption key used to encrypt the replication username and password. For the procedure to modify the password encryption key, see the Modifying cnDBTier Password Encryption Key section.
3.13 Database Backup Encryption and Secure Transfer
cnDBTier supports creating database backups at configured intervals, which are used to restore the database during a georeplication recovery. These backups contain sensitive subscriber data that is transferred to remote servers and sites for performing fault recovery. Therefore, it's important to encrypt the backups to protect the sensitive data and avoid any misuse. cnDBTier uses the encrypt and decrypt options provided by MySQL NDB cluster to encrypt the on-demand or periodic backups at the cluster level.
The backups are transferred to the <remote server path>/<cnDBTier site name> path, where:
- <remote server path> is the path of the remote server configured in the /global/remotetransfer/remoteserverpath parameter in the custom_values.yaml file.
- <cnDBTier site name> is the name of the cnDBTier site.
The transferred backup files are named in the following formats:
- backup_<backup_id>_Encrypted.zip, if backup encryption is enabled.
- backup_<backup_id>_Unencrypted.zip, if backup encryption is disabled.
Managing Database Backup Encryption and Secure Transfer
The following sections provide details about managing database backup encryption and secure backup transfer:
Each cnDBTier cluster has an option to enable or disable the backup encryption independently. You can enable or disable the backup encryption using the /global/backupencryption/enable parameter in the custom_values.yaml file. For more information about enabling and disabling this feature, see the "Customizing cnDBTier" section in Oracle Communications Cloud Native Core, cnDBTier Installation, Upgrade, and Fault Recovery Guide.
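A minimal sketch of this switch in the custom_values.yaml file, assuming the parameter path maps directly to the YAML hierarchy:
global:
  backupencryption:
    # Enables encryption of on-demand and periodic backups for this cluster.
    enable: true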
When this feature is enabled, the backup data record and log files written by each data node are encrypted using the password present in the "occne-backup-encryption-secret" secret and a randomly generated salt. A key derivation function (KDF) that employs the PBKDF2-SHA256 algorithm generates a symmetric encryption key for each file.
You must provide the required credentials to encrypt the backups when the backup is initiated (on-demand backups, periodic backups, or backups initiated during georeplication restore). cnDBTier uses secret keys to encrypt database backups. You can configure these keys using a Kubernetes secret during the installation. When the START BACKUP command is initiated, the DB backup manager service reads these keys and initiates the backup using the encrypt password (ENCRYPT PASSWORD=password). For information about creating secrets for encrypting backups, see the "Creating Secret" section in Oracle Communications Cloud Native Core, cnDBTier Installation, Upgrade, and Fault Recovery Guide.
Note:
The password used for encryption must follow the cnDBTier password requirement or standard.
You can enable or disable secure backup transfer using the /global/remotetransfer/enable parameter in the custom_values.yaml file. For more information about enabling and disabling secure backup transfer, see the "Customizing cnDBTier" section in Oracle Communications Cloud Native Core, cnDBTier Installation, Upgrade, and Fault Recovery Guide.
cnDBTier requires the remote server configurations to securely transfer backups. You can use the custom_values.yaml file to configure the details about remote servers such as remote server IP, port, and path. These configurations can be done during an installation or upgrade. For more information about configuring the remote server details, see the "Customizing cnDBTier" section in Oracle Communications Cloud Native Core, cnDBTier Installation, Upgrade, and Fault Recovery Guide.
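For reference, a minimal sketch of the secure transfer settings in the custom_values.yaml file, assuming the parameter paths map to the YAML hierarchy shown; the path value is illustrative:
global:
  remotetransfer:
    # Enables secure transfer of backups to the remote server.
    enable: true
    # Illustrative remote server path; backups land under
    # <remote server path>/<cnDBTier site name>.
    remoteserverpath: /var/backups/cndbtier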
cnDBTier uses SSH key and username to securely transfer backups to remote servers. You can configure these keys and username using Kubernetes secret during the installation or upgrade. For information about creating secrets for secure backup transfer, see the "Creating Secret" section in Oracle Communications Cloud Native Core, cnDBTier Installation, Upgrade, and Fault Recovery Guide.
cnDBTier also provides an option to modify the remote server configurations, such as IP, port, path, username, and SSH key in cnDBTier. For more information about modifying remote server configurations in cnDBTier, see Modifying Remote Server Configurations for Secure Transfer of Backups.
Note:
Currently, cnDBTier doesn't support using the remote server backups for performing any restore or recovery procedures.
3.14 Node Selector and Node Affinity
The Node Selector and Node Affinity feature allows the Kubernetes scheduler to determine the type of nodes in which the cnDBTier pods are to be scheduled, depending on the predefined node labels or constraints. cnDBTier uses nodeSelector and nodeAffinity options to schedule cnDBTier pods to specific worker nodes.
nodeSelector is the basic form of cluster node selection constraint. It allows you to define the node labels (constraints) in the form of key-value pairs. When the nodeSelector feature is used, Kubernetes schedules the pods to only the nodes that match each of the node labels you specify.
nodeAffinity provides greater control over node selection and supports the following two types of rules:
- requiredDuringSchedulingIgnoredDuringExecution (Hard type): In this type, the scheduler doesn't schedule the pods unless the defined rules are met.
- preferredDuringSchedulingIgnoredDuringExecution (Soft type): In this type, the scheduler tries to find a node that meets the defined rules. However, even if a matching node is not available, the scheduler still schedules the pod.
Managing Node Selector and Node Affinity Feature
The following sections provide details on managing the Node Selector and Node Affinity feature:
You can enable or disable the nodeSelector and nodeAffinity options for the pods using the custom_values.yaml file. For more information on the enabling and disabling parameters, see the "Customizing cnDBTier" section in Oracle Communications Cloud Native Core, cnDBTier Installation, Upgrade, and Fault Recovery Guide.
Note:
- You can enable either nodeSelector or nodeAffinity option to schedule cnDBTier pods to specific worker nodes.
- By default, both nodeSelector and nodeAffinity options are disabled.
You can configure the node selector and node affinity parameters in the custom_values.yaml file. For more information on the node selector and node affinity configuration parameters, see the "Customizing cnDBTier" section in Oracle Communications Cloud Native Core, cnDBTier Installation, Upgrade, and Fault Recovery Guide.
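As a hedged illustration, the sketch below shows the Kubernetes-style nodeSelector form such a configuration typically takes; the pod key, label key, and label value are assumptions, not the actual cnDBTier parameter names:
# Hypothetical illustration of a nodeSelector constraint; the actual
# cnDBTier parameter names may differ. Pods are scheduled only to
# worker nodes carrying the matching label.
ndbmtd:
  nodeSelector:
    nodetype: cndbtier-data   # assumed label key and value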
Note:
Configure the node selector or node affinity parameters only when either of the options is enabled.
3.15 cnDBTier Scaling
- Horizontal Scaling: Horizontal scaling refers to adding additional nodes to the pool of resources to share the load. cnDBTier supports horizontal scaling for the following pods:
  - ndbappmysqld: cnDBTier supports both manual and auto-scaling of ndbappmysqld pods. When horizontal auto-scaling is enabled, ndbappmysqld scales automatically depending on the CPU and RAM usage. In this case, the higher and lower limits for scaling are defined in the custom_values.yaml file. When auto-scaling is disabled, cnDBTier provides an option to manually scale the ndbappmysqld pods as per your requirement.
  - ndbmtd: cnDBTier supports only manual scaling up of ndbmtd data pods, and doesn't support scaling down.
- Vertical Scaling: Vertical scaling refers to increasing the processing power (CPU, memory, and PVC) of a cluster. cnDBTier supports manual vertical scaling for the following pods:
  - ndbmtd
  - ndbappmysqld
  - ndbmysqld
  - db-replication-svc
  - ndbmgmd
cnDBTier provides the dbtscale_vertical_pvc script to automatically update the PVC values of the pods.
For horizontal and vertical scaling procedures, see Scaling cnDBTier Pods.
Managing cnDBTier Scaling
The following sections provide details on managing the cnDBTier scaling feature:
You can enable or disable horizontal auto-scaling of ndbappmysqld pods using the /global/autoscaling/ndbapp/enabled parameter in the custom_values.yaml file. You can enable or disable horizontal auto-scaling on the basis of memory consumption, CPU consumption, or both using the following parameters:
- /api/ndbapp/horizontalPodAutoscaler/memory/enabled
- /api/ndbapp/horizontalPodAutoscaler/cpu/enabled
You can configure the average utilization limits that trigger auto-scaling using the following parameters (see the sketch after this list):
- /api/ndbapp/horizontalPodAutoscaler/memory/averageUtilization
- /api/ndbapp/horizontalPodAutoscaler/cpu/averageUtilization
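A minimal sketch of these settings in the custom_values.yaml file, assuming the parameter paths map to the YAML hierarchy shown; the utilization values are illustrative:
global:
  autoscaling:
    ndbapp:
      # Enables horizontal auto-scaling of ndbappmysqld pods.
      enabled: true
api:
  ndbapp:
    horizontalPodAutoscaler:
      cpu:
        enabled: true
        averageUtilization: 80    # illustrative CPU threshold (%)
      memory:
        enabled: true
        averageUtilization: 80    # illustrative memory threshold (%)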
3.16 Application Service Mesh (ASM) for External Communication
cnDBTier supports secure external communication through Application Service Mesh (ASM) by selectively applying Istio sidecar injection only to the pods that are involved in external communication. By selectively enforcing ASM policies, cnDBTier ensures the security and compliance of its external interfaces, while preserving high performance and simplifying manageability within the cnDBTier ecosystem.
To enable ASM for both internal and external communication, configure the global.istioSidecarInject.mode parameter in the custom_values.yaml file. For more information on how to configure the parameter, see the "Global Parameters" section in Oracle Communications Cloud Native Core, cnDBTier Installation, Upgrade, and Fault Recovery Guide.
Following is a sample configuration:
global:
  istioSidecarInject:
    mode: <none|external|all>
Where:
- none - Do not inject the sidecar.
- external - Inject the sidecar only for external communication.
- all - Inject the sidecar for all communication (both internal and external).
3.17 Support for Automated Certificate Lifecycle Management
In cnDBTier 25.1.2xx and earlier, the Transport Layer Security (TLS) certificates were managed manually. When multiple instances of cnDBTier were deployed in a 5G network, certificate management, such as certificate creation, renewal, removal, and so on, became tedious and error-prone.
Starting with cnDBTier 25.2.1xx, you can integrate cnDBTier with Oracle Communications Cloud Native Core, Certificate Management (OCCM) to support automation of certificate lifecycle management. OCCM manages TLS certificates stored in Kubernetes secrets by integrating with a Certificate Authority (CA) using the Certificate Management Protocol Version 2 (CMPv2). OCCM obtains and signs TLS certificates within the cnDBTier namespace. For more information about OCCM, see Oracle Communications Cloud Native Core, Certificate Management User Guide.
This feature enables automated TLS certificate lifecycle management for HTTPS, MySQL replication SQL pods, and MySQL application SQL pods within the cnDBTier environment, ensuring secure and uninterrupted communication during certificate updates.
Certificate Monitoring for HTTPS and TLS Certificates
A certificate monitoring service is responsible for ensuring that TLS/HTTPS certificates are continuously observed for changes and updated in a controlled, zero-downtime manner.
- Continuous Monitoring: The monitoring service continuously watches TLS/HTTPS certificates for any updates.
- Change Detection: The monitoring service detects the change whenever a TLS/HTTPS certificate or secret is updated, and initiates the appropriate reload actions based on the pod type and connection behavior.
- No Service Restart: The monitoring service does not force an immediate restart or a full service disruption when only a certificate changes.
Replication SQL Pods for Monitoring the TLS Certificate
The Replication SQL pods are responsible for MySQL replication between the cnDBTier sites. TLS is used for securing MySQL replication traffic.
When a certificate change is detected, the workflow is as follows:
- Monitoring for Changes: A monitoring service watches the certificates for any changes. When a certificate is updated, the change is detected and validated.
- Reloading Certificates on Detection: The updated certificates are reloaded into the replication SQL pod. However, no restart or switchover is triggered immediately.
- Deferred Usage of New Certificates: The new certificates are loaded but not actively used in ongoing replication sessions. The active replication sessions continue using the old TLS session until a natural replication session switchover event occurs (for example, a new TLS handshake or a replication session restart), at which point the new certificate is used. This avoids disruption to ongoing replication sessions.
APP SQL Pods for Monitoring the TLS Certificate
The APP SQL pods handle application-related SQL traffic, primarily servicing Network Function (NF) workloads. These workloads often involve high-throughput, low-latency, and long-lived connections, making seamless TLS handling crucial for availability and stability.
On certificate update, the workflow is as follows:
- The pod reloads the TLS configuration so that the APP SQL pod is loaded with the new certificate.
- Existing NF connections remain active, and active sessions continue to use the old certificate to prevent disruption.
- New connections are automatically established using the new certificate.
Scheduled Switchover via Cron Job for Replication SQL and APP SQL Pods
A cron job is scheduled to run at predefined intervals to enforce a switchover or reload when a certificate change is detected. This ensures that updated certificates are applied promptly, avoiding the risks associated with outdated or expired certificates.
In the case of replication SQL pods, if a certificate change has occurred but no natural switchover has happened, the cron job triggers a forced reload or a switchover, applying the new certificate.
In the case of APP SQL pods:
- The pod reloads the TLS configuration so that the APP SQL pod is loaded with the new certificate.
- Existing NF connections remain active, and active sessions continue to use the old certificate to prevent disruption.
- New connections are automatically established using the new certificate.