3 Customizing cnDBTier

This section describes the configuration parameters required during the installation of cnDBTier.

Note:

This section contains only the frequently used cnDBTier parameters that are available in the custom_values.yaml file. The custom_values.yaml file contains additional MySQL parameters that can be configured as per your requirements. For a detailed list of MySQL configurations and their usage, refer to the MySQL Documentation.

3.1 Global Parameters

The following table provides a list of global parameters.

Table 3-1 Global Parameters

Parameter Description Default Value Notes
/global/repository Repository URL for cnDBTier images. docker_repo:5000/occne Change it to the path of your docker registry. For example, occne-repo-host:5000/occne.
/global/siteid The ID of the CNE cluster site you are going to install. 1

Each site must be assigned a unique site identifier. Site IDs must be assigned as 1, 2, 3, and 4 for site 1, site 2, site 3, and site 4 respectively.

For example:
  • For a single site, the site ID must be given as 1.
  • For a two site replication setup, the site IDs must be given as 1 and 2 for site 1 and site 2 respectively.
  • For a three site replication setup, the site IDs must be given as 1, 2, and 3 for site 1, site 2, and site 3 respectively.
  • For a four site replication setup, the site IDs must be given as 1, 2, 3, and 4 for site 1, site 2, site 3, and site 4 respectively.
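For example, a minimal sketch of the global section in custom_values.yaml for site 2 of a two-site setup, assuming the parameters described in this table (the sitename value is illustrative):

global:
  siteid: 2
  sitename: cndbtier-site2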
/global/sitename The name of the CNE cluster site you are going to install. cndbtiersitename This parameter must be set to the name of the current cluster.
/global/image/tag Indicates the docker image version of mysqlndbcluster used for launching MySQL NDB cluster. 23.4.7 Change this parameter to the version of the docker image. For example, 23.4.7.
/global/image/imagePullPolicy The image pull policy for the cnDBTier helm chart. IfNotPresent NA
/global/mgmReplicaCount Count of the MySQL management nodes created in the MySQL NDB cluster. 2 This parameter defines the number of management nodes in the cluster.
/global/ndbReplicaCount Count of the MySQL data nodes created in the MySQL NDB cluster. 4 This parameter must be set to an even value. For example, 2, 4, 6.
/global/ndb/retainbackupno The maximum number of backups that are retained in the cnDBTier cluster at any point in time. 3 NA
/global/apiReplicaCount Count of the MySQL nodes participating in georeplication that are created in the MySQL NDB cluster. 2

This parameter defines the number of SQL nodes that are participating in georeplication.

In case of a standalone site (one site without replication), no georeplication SQL nodes are required. In this case, set this parameter to 0.

In case of two-site replication, a minimum of 2 SQL nodes is required.

In case of three-site replication, a minimum of 4 SQL nodes is required.

/global/ndbappReplicaCount Count of the MySQL nodes not participating in georeplication that are created in the MySQL NDB cluster. 2 This parameter defines the number of SQL nodes in the cluster that are used by the NFs and do not participate in georeplication.
/global/ndbappReplicaMaxCount Maximum count of the MySQL non-georeplication SQL nodes that can be automatically scaled by the horizontal pod autoscaler. 4 This value must be greater than '/global/ndbappReplicaCount'.
/global/autoscaling/ndbapp/enabled Indicates if autoscaling is enabled or disabled for the non-georeplication SQL pods. false Set this parameter to true if you want to enable autoscaling for the non-georeplication SQL pods, otherwise set it to false. If set to true, cnDBTier requires a service account for autoscaling. Either enable the "/global/serviceAccount/create" configuration or provide the name of an existing service account in the "/global/serviceAccount/name" configuration.
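For example, a minimal sketch that enables autoscaling for the non-georeplication SQL pods along with the service account it requires, assuming the parameters described in this table:

global:
  autoscaling:
    ndbapp:
      enabled: true
  serviceAccount:
    create: true
    name: ""    # or set create to false and provide the name of an existing service account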
/global/domain The cluster name of the Kubernetes cluster on which cnDBTier is deployed. cluster.local Set this parameter to the name of the Kubernetes cluster on which cnDBTier is installed. For example, occne1-cgbu-cne-dbtier.
/global/namespace The Kubernetes namespace in which the cnDBTier is deployed. occne-cndbtier NA
/global/storageClassName The storage class used for allocating the PV. occne-dbtier-sc

By default, occne-dbtier-sc is the storage class. It can be changed to any storage class name that is currently configured in the cluster.

/global/inframonitor/pvchealth/enable/all Indicates if PVC health monitoring is enabled for all the cnDBTier components. true When this parameter is set to true, the system enables PVC health monitoring for all the cnDBTier pods that are attached with a PVC.
/global/inframonitor/pvchealth/enable/mgm Indicates if PVC health monitoring is enabled for the mgm pods. true When this parameter is set to true, the system enables PVC health monitoring for the mgm pods.
/global/inframonitor/pvchealth/enable/ndb Indicates if PVC health monitoring is enabled for the ndb pods. true When this parameter is set to true, the system enables PVC health monitoring for the ndb pods.
/global/inframonitor/pvchealth/enable/api Indicates if PVC health monitoring is enabled for the api pods. true When this parameter is set to true, the system enables PVC health monitoring for the api pods.
/global/useasm Indicates if the Aspen mesh service is enabled in the namespace. false Set this parameter to true if the Aspen mesh service is enabled in the namespace.
/global/tls/enable Indicates if Transport Layer Security (TLS) is enabled for replication. false Set this parameter to true if you want to enable TLS for replication.
/global/tls/caCertificate When TLS is enabled, this parameter provides the name of the CA certificate that is configured in the "cndbtier-trust-store-secret" secret. "" Use this parameter to configure the CA certificate when TLS is enabled.
/global/tls/tlsversion When TLS is enabled, this parameter defines the TLS version that must be used. TLSv1.3 Set this parameter to a valid TLS version that must be used for encrypting the connection between replication channels.
/global/tls/tlsMode When TLS is enabled, this parameter defines the TLS mode that must be used. VERIFY_CA Set this parameter to a valid TLS mode that must be used for replication.

For example: VERIFY_CA, VERIFY_IDENTITY, or NONE.

/global/tls/ciphers When TLS is enabled, this parameter defines the ciphers that are used for replication. - TLS_AES_128_GCM_SHA256 - TLS_AES_256_GCM_SHA384 - TLS_CHACHA20_POLY1305_SHA256 - TLS_AES_128_CCM_SHA256 List the valid TLS ciphers that can be used for replication. From the list of ciphers, the server selects the first one that it supports.
/global/tls/certificates[0]/serverCertificate When TLS is enabled, this parameter defines the name of the server certificate that is configured in the "cndbtier-server-secret" secret for ndbmysqld-0. "" When TLS is enabled, use this parameter to configure the server certificate for ndbmysqld-0.
/global/tls/certificates[0]/serverCertificateKey When TLS is enabled, this parameter defines the name of the server certificate key that is configured in the "cndbtier-server-secret" secret for ndbmysqld-0. "" When TLS is enabled, use this parameter to configure the server certificate key for ndbmysqld-0.
/global/tls/certificates[0]/clientCertificate When TLS is enabled, this parameter defines the name of the client certificate that is configured in the "cndbtier-client-secret" secret for ndbmysqld-0. "" When TLS is enabled, use this parameter to configure the client certificate for ndbmysqld-0.
/global/tls/certificates[0]/clientCertificateKey When TLS is enabled, this parameter defines the name of the client certificate key that is configured in the "cndbtier-client-secret" secret for ndbmysqld-0. "" When TLS is enabled, use this parameter to configure the client certificate key for ndbmysqld-0.
/global/tls/certificates[1]/serverCertificate When TLS is enabled, this parameter defines the name of the server certificate that is configured in the "cndbtier-server-secret" secret for ndbmysqld-1. "" When TLS is enabled, use this parameter to configure the server certificate for ndbmysqld-1.
/global/tls/certificates[1]/serverCertificateKey When TLS is enabled, this parameter defines the name of the server certificate key that is configured in the "cndbtier-server-secret" secret for ndbmysqld-1. "" When TLS is enabled, use this parameter to configure the server certificate key for ndbmysqld-1.
/global/tls/certificates[1]/clientCertificate When TLS is enabled, this parameter defines the name of the client certificate that is configured in the "cndbtier-client-secret" secret for ndbmysqld-1. "" When TLS is enabled, use this parameter to configure the client certificate for ndbmysqld-1.
/global/tls/certificates[1]/clientCertificateKey When TLS is enabled, this parameter defines the name of the client certificate key that is configured in the "cndbtier-client-secret" secret for ndbmysqld-1. "" When TLS is enabled, use this parameter to configure the client certificate key for ndbmysqld-1.
/global/tls/certificates[2]/serverCertificate When TLS is enabled, this parameter defines the name of the server certificate that is configured in the "cndbtier-server-secret" secret for ndbmysqld-2. "" When TLS is enabled, use this parameter to configure the server certificate for ndbmysqld-2.
/global/tls/certificates[2]/serverCertificateKey When TLS is enabled, this parameter defines the name of the server certificate key that is configured in the "cndbtier-server-secret" secret for ndbmysqld-2. "" When TLS is enabled, use this parameter to configure the server certificate key for ndbmysqld-2.
/global/tls/certificates[2]/clientCertificate When TLS is enabled, this parameter defines the name of the client certificate that is configured in the "cndbtier-client-secret" secret for ndbmysqld-2. "" When TLS is enabled, use this parameter to configure the client certificate for ndbmysqld-2.
/global/tls/certificates[2]/clientCertificateKey When TLS is enabled, this parameter defines the name of the client certificate key that is configured in the "cndbtier-client-secret" secret for ndbmysqld-2. "" When TLS is enabled, use this parameter to configure the client certificate key for ndbmysqld-2.
/global/tls/certificates[3]/serverCertificate When TLS is enabled, this parameter defines the name of the server certificate that is configured in the "cndbtier-server-secret" secret for ndbmysqld-3. "" When TLS is enabled, use this parameter to configure the server certificate for ndbmysqld-3.
/global/tls/certificates[3]/serverCertificateKey When TLS is enabled, this parameter defines the name of the server certificate key that is configured in the "cndbtier-server-secret" secret for ndbmysqld-3. "" When TLS is enabled, use this parameter to configure the server certificate key for ndbmysqld-3.
/global/tls/certificates[3]/clientCertificate When TLS is enabled, this parameter defines the name of the client certificate that is configured in the "cndbtier-client-secret" secret for ndbmysqld-3. "" When TLS is enabled, use this parameter to configure the client certificate for ndbmysqld-3.
/global/tls/certificates[3]/clientCertificateKey When TLS is enabled, this parameter defines the name of the client certificate key that is configured in the "cndbtier-client-secret" secret for ndbmysqld-3. "" When TLS is enabled, use this parameter to configure the client certificate key for ndbmysqld-3.
/global/tls/certificates[4]/serverCertificate When TLS is enabled, this parameter defines the name of the server certificate that is configured in the "cndbtier-server-secret" secret for ndbmysqld-4. "" When TLS is enabled, use this parameter to configure the server certificate for ndbmysqld-4.
/global/tls/certificates[4]/serverCertificateKey When TLS is enabled, this parameter defines the name of the server certificate key that is configured in the "cndbtier-server-secret" secret for ndbmysqld-4. "" When TLS is enabled, use this parameter to configure the server certificate key for ndbmysqld-4.
/global/tls/certificates[4]/clientCertificate When TLS is enabled, this parameter defines the name of the client certificate that is configured in the "cndbtier-client-secret" secret for ndbmysqld-4. "" When TLS is enabled, use this parameter to configure the client certificate for ndbmysqld-4.
/global/tls/certificates[4]/clientCertificateKey When TLS is enabled, this parameter defines the name of the client certificate key that is configured in the "cndbtier-client-secret" secret for ndbmysqld-4. "" When TLS is enabled, use this parameter to configure the client certificate key for ndbmysqld-4.
/global/tls/certificates[5]/serverCertificate When TLS is enabled, this parameter defines the name of the server certificate that is configured in the "cndbtier-server-secret" secret for ndbmysqld-5. "" When TLS is enabled, use this parameter to configure the server certificate for ndbmysqld-5.
/global/tls/certificates[5]/serverCertificateKey When TLS is enabled, this parameter defines the name of the server certificate key that is configured in the "cndbtier-server-secret" secret for ndbmysqld-5. "" When TLS is enabled, use this parameter to configure the server certificate key for ndbmysqld-5.
/global/tls/certificates[5]/clientCertificate When TLS is enabled, this parameter defines the name of the client certificate that is configured in the "cndbtier-client-secret" secret for ndbmysqld-5. "" When TLS is enabled, use this parameter to configure the client certificate for ndbmysqld-5.
/global/tls/certificates[5]/clientCertificateKey When TLS is enabled, this parameter defines the name of the client certificate key that is configured in the "cndbtier-client-secret" secret for ndbmysqld-5. "" When TLS is enabled, use this parameter to configure the client certificate key for ndbmysqld-5.
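For example, a sketch of a TLS configuration for replication, assuming the parameters described in this table. The certificate and key names are hypothetical and must match the entries configured in the cndbtier-trust-store-secret, cndbtier-server-secret, and cndbtier-client-secret secrets:

global:
  tls:
    enable: true
    caCertificate: ca.pem                # hypothetical name from cndbtier-trust-store-secret
    tlsversion: TLSv1.3
    tlsMode: VERIFY_CA
    certificates:
      - serverCertificate: server0.pem   # hypothetical entries for ndbmysqld-0
        serverCertificateKey: server0.key
        clientCertificate: client0.pem
        clientCertificateKey: client0.key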
/global/useIPv6 Indicates if Kubernetes is used in IPv6 only or in dual-stack mode. false Set this parameter to true if the Kubernetes cluster uses IPv6 only or dual-stack networking.
/global/useVCNEEgress Indicates if Kubernetes supports Egress and requires adding specific Egress annotations to cnDBTier. false Set this parameter to true if Kubernetes supports Egress and requires specific Egress annotations to be added to the cnDBTier helm chart.
/global/version The version of the cnDBTier helm chart. 23.4.7 This parameter is set to the current version of the cnDBTier helm chart.
/global/serviceAccount/create If set to true, then cnDBTier creates a service account. true NA
/global/serviceAccount/name If set to any value other than an empty string, cnDBTier either uses the same name while creating the service account or uses the existing service account with the given name. ""
  • If global.serviceAccount.create=true and global.serviceAccount.name="mysql-cluster-service-reader", then cnDBTier creates a service account with the name "mysql-cluster-service-reader". (If a service account with the name "mysql-cluster-service-reader" already exists, then it throws an error.)
  • If global.serviceAccount.create=false and global.serviceAccount.name="mysql-cluster-service-reader", then cnDBTier uses the existing service account with the name "mysql-cluster-service-reader". (If a service account with the name "mysql-cluster-service-reader" does not exist, then it throws an error.)
/global/serviceAccountForUpgrade/create If the value is set to true, then the system creates a service account that is used by the post-upgrade and pre-upgrade hook during the cnDBTier upgrade. true Set this parameter to true if you want to create a service account for upgrade.
/global/serviceAccountForUpgrade/name Indicates the name of the service account. "cndbtier-upgrade-serviceaccount"
  • If global.serviceAccountForUpgrade.create=true and global.serviceAccountForUpgrade.name="", then cnDBTier creates a service account with the default name.
  • If global.serviceAccountForUpgrade.create=true and global.serviceAccountForUpgrade.name="<name>", then cnDBTier creates a service account with the given name.
  • If global.serviceAccountForUpgrade.create=false and global.serviceAccountForUpgrade.name="<name>", then cnDBTier does not create a service account and assumes there is an existing service account with <name>, which it uses for the upgrade operation.
  • If global.serviceAccountForUpgrade.create=false and global.serviceAccountForUpgrade.name="", then cnDBTier does not create a service account and does not use any service account for the upgrade.
/global/automountServiceAccountToken If set to true, then the system mounts the token of the default service account to the pods. false If this parameter is set to false, the token of the default service account is not mounted on the pods.

If this parameter is set to true, the token of the default service account is mounted on the pods.

Note: This can cause a security issue because the token of the default service account is mounted to the pod.

/global/prometheusOperator/alerts/enable Indicates if the Prometheus alert rules are loaded to the prometheusOperator environment automatically or manually. false
  • Set the value to true if you want to load the Prometheus alert rules automatically.
  • Set the value to false if you want to load the Prometheus alert rules manually or use the Prometheus configmap.
/global/multus/enable Indicates if multus support for cnDBTier is enabled. false Set this parameter to true if you want to support multus with cnDBTier.
/global/multus/serviceAccount/create If this parameter is set to true, the system creates the service account needed for multus. true Set this parameter to false if you do not want to create the service account needed for multus.
/global/multus/serviceAccount/name If a service account name is defined using this parameter, then the system creates the service account using the given name. If the service account is already created, then the system uses the same service account. "cndbtier-multus-serviceaccount" If serviceAccount.create is true and a name is given, then the service account is created with the given name. If serviceAccount.create is false and a name is given, then cnDBTier assumes that the service account is already created and uses it instead of creating a new one.
/global/networkpolicy/enabled Indicates if network policy is enabled for cnDBTier. false Set this parameter to true if you want to enable network policy.
/global/backupencryption/enable Indicates if backup encryption is enabled. If this parameter is set to true, the system encrypts all the backups using the encryption password stored in the occne-backup-encryption-secret secret. true Set this parameter to true if you want to encrypt the backups.

Before setting this parameter to true, follow either step 1 or step 5 of Creating Secrets to create the backup encryption secret.

/global/remotetransfer/enable Indicates if secure transfer of backup is enabled. If this parameter is set to true, then the system transfers all the backups including the routine and on-demand backups to a remote server. However, the system doesn't transfer the georeplication recovery backups. false Set this parameter to true if you want to secure the transfer of backups to a remote server.
/global/remotetransfer/faultrecoverybackuptransfer Indicates if georeplication recovery backups are transferred to a remote server. true This parameter has no effect if /global/remotetransfer/enable is set to false.

If this parameter is set to true along with /global/remotetransfer/enable, then the system transfers all the georeplication recovery backups to a remote server.

If /global/remotetransfer/enable is set to true and this parameter is set to false, the system doesn't transfer any of the georeplication recovery backups to a remote server.

/global/remotetransfer/remoteserverip The IP address of the remote server where the backups are transferred. "" Configure this parameter with the IP address of the remote SFTP server where backups are to be transferred and stored in a ZIP format.
/global/remotetransfer/remoteserverport The port number of the remote SFTP server where the backups are transferred. "" Configure this parameter with the port number of the remote SFTP server where backups are to be transferred and stored in a ZIP format.
/global/remotetransfer/remoteserverpath The path in the remote SFTP server where the backups are stored. "" Configure this parameter with the path (location) in the remote SFTP server where backups are to be transferred and stored in a ZIP format.
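For example, a sketch that enables secure transfer of backups to a remote SFTP server, assuming the parameters described in this table. The IP address, port, and path are hypothetical:

global:
  remotetransfer:
    enable: true
    faultrecoverybackuptransfer: true
    remoteserverip: "10.75.0.25"            # hypothetical SFTP server IP
    remoteserverport: "22"                  # hypothetical SFTP port
    remoteserverpath: "/backups/cndbtier"   # hypothetical path on the remote server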
/global/k8sResource Prefix or suffix added to Kubernetes resources. ""

By default, no prefix or suffix is added.

You can use this parameter to add a prefix, a suffix, or both to the container or to the pod.

/global/https/enable Indicates whether HTTPS mode is enabled or disabled. false Set this parameter to true if you want to use HTTPS instead of HTTP. Follow the steps mentioned in Creating Secrets before setting this parameter to true.
/global/encryption/enable Indicates whether the passwords stored in the database are encrypted. false Set this parameter to true if you want to encrypt the passwords stored in the database. Follow the steps mentioned in Creating Secrets before setting this parameter to true.
/global/commonlabels Indicates the common labels for all containers. "" Set this parameter if you want to add common labels to all containers.
/global/use_affinity_rules Indicates whether or not to use affinity rules. true Turn the affinity rules off when the MySQL cluster runs on small Kubernetes clusters with fewer than the required number of Kubernetes nodes, for example, when testing on small systems.
/global/ndbconfigurations/mgm/startNodeId Starting node ID used for the mgm pods. 49 For example, if the startNodeId is 49, then the first mgm pod node ID is 49 and the second mgm pod node ID is 50.
/global/ndbconfigurations/mgm/HeartbeatIntervalMgmdMgmd Specifies the heartbeat interval time between mgm nodes (in milliseconds). 2000 Specify the interval between heartbeat messages that is used to determine whether another management node is in contact with the current one.
/global/ndbconfigurations/mgm/TotalSendBufferMemory Specifies the total amount of memory allocated on the node for shared send buffer memory among all configured transporters. 16M NA
/global/ndbconfigurations/ndb/MaxNoOfAttributes Specifies the recommended maximum number of attributes that can be defined in the cluster. 5000 Set the value of this parameter as per your requirement. For example: 5000.
/global/ndbconfigurations/ndb/MaxNoOfOrderedIndexes Specifies the total number of ordered indexes that can be in use in the system at a time. 1024 NA
/global/ndbconfigurations/ndb/NoOfFragmentLogParts Specifies the number of log file groups for redo logs. 4 Set this parameter to the required number of log file groups for redo logs. For example: 4
/global/ndbconfigurations/ndb/MaxNoOfExecutionThreads Specifies the number of execution threads used by ndbmtd. 8 The value of this parameter can range between 2 and 72.
/global/ndbconfigurations/ndb/StopOnError Specifies whether a data node process must exit or perform an automatic restart when an error condition is encountered. 0 By default, the data node process is configured to perform an automatic restart on encountering an error condition.

Set this parameter to 1 if you want the data node process to halt and exit.

/global/ndbconfigurations/ndb/MaxNoOfTables The recommended maximum number of table objects for a cluster. 1024 NA
/global/ndbconfigurations/ndb/NoOfFragmentLogFiles The number of REDO log files for the node. 128 Set this parameter to the required number of REDO log files for the node. For example: 128.
/global/ndbconfigurations/api/max_connections The maximum number of simultaneous client connections allowed. 4096 NA
/global/ndbconfigurations/api/wait_timeout The number of seconds the server waits for an activity on a non-interactive connection before closing it. 600 NA
/global/ndbconfigurations/api/interactive_timeout The number of seconds the server waits for an activity on an interactive connection before closing it. 600 NA
/global/ndbconfigurations/api/all_row_changes_to_bin_log Enables or disables the binlogs in the ndbmysqld pods: 1 enables the binlogs and 0 disables them. 1 Use this parameter to enable or disable the binlogs. If a single site is deployed, the binlogs can be disabled.
/global/ndbconfigurations/api/binlog_expire_logs_seconds Binlog expiry time in seconds. 86400 Expiry time in seconds for the binlogs in the ndbmysqld pods.
/global/ndbconfigurations/api/auto_increment_increment This parameter controls the operation of the AUTO_INCREMENT columns to avoid auto-increment key collisions. 4 The value of this parameter must be equal to the number of replication sites. If you are installing two-site replication, set it to 2 and update the other cnDBTier cluster. If you are installing three-site replication, set it to 3 and update the other two cnDBTier clusters. If you are installing four-site replication, set it to 4 and update the other three cnDBTier clusters.
/global/ndbconfigurations/api/auto_increment_offset This parameter controls the operation of the AUTO_INCREMENT columns to avoid auto-increment key collisions. 1 Each site must be assigned a unique auto-increment offset value. If you are installing cnDBTier Cluster1, set it to 1. If you are installing cnDBTier Cluster2, set it to 2. If you are installing cnDBTier Cluster3, set it to 3. If you are installing cnDBTier Cluster4, set it to 4.
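For example, in a two-site replication setup, both sites set auto_increment_increment to 2 and each site is assigned a unique offset, so site 1 generates AUTO_INCREMENT values 1, 3, 5, and so on, while site 2 generates 2, 4, 6, and so on, avoiding key collisions. A sketch for site 1, assuming the parameters described in this table:

global:
  ndbconfigurations:
    api:
      auto_increment_increment: 2
      auto_increment_offset: 1    # set to 2 on site 2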
/global/additionalndbconfigurations/mysqld/binlog_cache_size This parameter is used to define the size of the memory buffer that holds the changes made to the binary log during a transaction. 10485760 NA
/global/additionalndbconfigurations/ndb/CompressedBackup This parameter is used to enable the backup compression in each of the data nodes. true If this parameter is set to true, the backups in each of the data nodes are compressed.

If this parameter is set to false, the backups in each of the data nodes are not compressed.

/global/additionalndbconfigurations/mysqld/ndb_batch_size This parameter is used to set the size (in bytes) for NDB transaction batches. 2000000 Set the size in bytes that is used for NDB transaction batches.
/global/additionalndbconfigurations/mysqld/ndb_blob_write_batch_bytes This parameter is used to set the size (in bytes) for batching of BLOB data writes. 2000000 Set the size in bytes for batching of BLOB data writes.
/global/additionalndbconfigurations/mysqld/slave_allow_batching Indicates whether batched updates are enabled on NDB Cluster replicas: ON allows batched updates and OFF does not. ON NA
/global/additionalndbconfigurations/mysqld/replica_parallel_workers Enables Multi Threaded Applier (MTA) on the replica and sets the number of applier threads for running the replication transactions in parallel. 0 This value must be set greater than zero to enable MTA.
/global/additionalndbconfigurations/ndb/TimeBetweenWatchDogCheck This parameter specifies the time interval (in milliseconds) between the watchdog checks. 800 If a process remains in the same state after three watchdog checks, the watchdog thread terminates the process.
/global/api/startNodeId Starting node ID used for the SQL georeplication pods. 56 For example, if the startNodeId is 56, then the first georeplication SQL pod node ID is 56, and the second georeplication SQL pod node ID is 57.
/global/api/general_log Indicates if general query log is enabled or disabled. ON Set this parameter to OFF to disable general query log.
/global/ndbapp/ndbdisksize Allocated disk size for ndbapp pods. 20Gi Disk allocation size for the ndbapp pods.
/global/ndbapp/startNodeId Starting node ID used for the SQL non georeplication pods. 70 For example, if the startNodeId is 70, then the first non georeplication SQL pod node ID is 70 and the second non georeplication SQL pod node ID is 71.
/global/ndb/datamemory Data memory size. 12G The size of each data node data memory.
/global/ndb/use_separate_backup_disk Indicates whether to use the default backup URI for storing the DB backup files. true Use this parameter in conjunction with the separateBackupDataPath variable when set to true, if there is a need to specify a separate disk path to store DB backup files.
/global/replicationskiperrors/enable Indicates if the configured replication errors (replicationerrornumbers) are skipped. true Set this parameter to true if you want to skip the configured replication errors when the replicas in all the replication channels with the remote site encounter the configured errors in their replica status.
/global/replicationskiperrors/numberofskiperrorsallowed Indicates the number of times the errors can be skipped in the configured time window. 5 Set this parameter to the desired number of times you want to skip the configured replication error.
/global/replicationskiperrors/skiperrorsallowedintimewindow The time interval within which the configured number of allowed skip errors can be skipped. 3600 Set this value to the desired time window (in seconds) within which you want the replication errors to be skipped for the configured number of times (numberofskiperrorsallowed).

If replication skip error (replicationskiperrors) is enabled, then replication errors are skipped for the configured number of times (numberofskiperrorsallowed) within 3600 seconds.

/global/replicationskiperrors/epochTimeIntervalLowerThreshold The lower epoch time interval threshold value. 10000

Set this parameter to the lowest value from which the replication errors are skipped when the calculated epoch interval is greater than this value.

If the calculated epoch interval that needs to be skipped is more than the configured threshold, a minor alert is raised. However, this does not decide whether the replication errors can be skipped or not.
/global/replicationskiperrors/epochTimeIntervalHigherThreshold The higher epoch time interval threshold value. 80000

Set this parameter to the value beyond which the replication errors must not be skipped, that is, replication errors are not skipped if the calculated epoch interval is greater than this value.

/global/replicationskiperrors/replicationerrornumbers The list of replication errors that must be skipped when all the replication channels with the remote site encounter errors in their replica status. - errornumber: 13119

- errornumber: 1296

- errornumber: 1007

- errornumber: 1008

- errornumber: 1050

- errornumber: 1051

If you want to add more error numbers, add the elements in the following manner:
  • - errornumber: 13119
  • - errornumber: 1296
  • - errornumber: XYZ

Note: Replace XYZ in the sample with the error number.
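For example, a sketch that skips two configured replication errors up to five times within a one-hour window, assuming the parameters described in this table:

global:
  replicationskiperrors:
    enable: true
    numberofskiperrorsallowed: 5
    skiperrorsallowedintimewindow: 3600
    replicationerrornumbers:
      - errornumber: 13119
      - errornumber: 1296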

/global/ndb/KeepAliveSendIntervalMs Time between the keep-alive signals on the links between the data nodes (in milliseconds). 60000 The default is 60000 milliseconds (one minute).

Setting this value to 0 disables the keep-alive signals.

Values from 1 to 10 are treated as 10.

/global/mgm/ndbdisksize Allocated disk size for the management node. 15Gi Size of the PVC attached to the management pods.
/global/services/ipFamilyPolicy Kubernetes ipFamilyPolicy for all the services. SingleStack This value must always be set to SingleStack.

It is recommended to not change the default value.

/global/services/primaryDualStackIpFamily Sets the Primary IP Family for all of the services. The Primary IP Family indicates the first IP Family value in the ipFamilies array. IPv6 This value must always be set to IPv6.

It is recommended to not change the default value.

/global/ndb/ndbdisksize Allocated disk size for the data node. 60Gi Size of the PVC attached to each data pod for storing the ndb data.
/global/ndb/ndbbackupdisksize Allocated backup disk size for the DB backup service. 60Gi Size of the PVC attached to each data pod for storing the backup data.
/global/ndb/restoreparallelism Indicates the number of parallel transactions to use while restoring data. 128 NA
/global/multiplereplicationgroups/enabled Indicates if multiple replication channel groups are disabled or enabled. false Set this value to true to enable multiple replication channel groups.
/global/multiplereplicationgroups/replicationchannelgroups Defines the list of replication channel groups. List of replication channel groups. NA
/global/multiplereplicationgroups/replicationchannelgroups/channelgroupid Replication channel group identifier for each replication channel. 1 Channel group identifier for replication channel group.

Valid values: 1,2

/global/multiplereplicationgroups/replicationchannelgroups/binlogdodb List of databases that are logged in the binary logs of the replication SQL nodes for replicating the data to the remote site using these replication channels. {} Replication SQL nodes belonging to this replication channel group record the writes on these databases in their binary logs.
/global/multiplereplicationgroups/replicationchannelgroups/binlogignoredb List of databases that are not logged in the binary logs of the replication SQL nodes for this replication channel group. {} Replication SQL nodes belonging to this replication channel group do not record the writes on these databases in their binary logs.
/global/multiplereplicationgroups/replicationchannelgroups/binlogignoretables List of tables that are not logged in the binary logs of the replication SQL nodes for the replication channel group. {} Replication SQL nodes that belong to the replication channel group do not record the writes on the tables listed in this parameter in their binary logs.
/global/multiplereplicationgroups/replicationchannelgroups/sqllist List of SQL nodes that belong to this replication channel group identifier. {} By default, the SQL nodes are configured to each replication channel group. If the SQL nodes of each replication channel group are configured differently in replication service deployments, then this list must be specified.
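For example, a sketch of two replication channel groups that split databases across the channel groups, assuming the parameters described in this table. The database names are hypothetical:

global:
  multiplereplicationgroups:
    enabled: true
    replicationchannelgroups:
      - channelgroupid: 1
        binlogdodb:
          - nfdb1    # hypothetical database replicated through channel group 1
      - channelgroupid: 2
        binlogdodb:
          - nfdb2    # hypothetical database replicated through channel group 2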
/global/api/ndbdisksize Allocated disk size for the api node. 100Gi Size of the PVC attached to each SQL or API pod for storing the SQL data and the binlog data.
/global/api/useRamForRelaylog Indicates if RAM is used for storing the relay logs. false When this parameter is set to true, the system creates a disk using the RAM where relay logs are stored.
/global/api/relayLogDiskSize The size of the disk created for storing the relay logs using the RAM in replication SQL pods (ndbmysqld pods). 4Gi If /global/api/useRamForRelaylog is set to true, the memory resources for the replication SQL pods (ndbmysqld) must be increased as per disk size configured in this parameter.
For example, if /global/api/useRamForRelaylog is set to true and the disk size is set to 4Gi, the following memory resources for the replication SQL pods must be increased by relayLogDiskSize (that is, 4Gi), as shown in the sketch after this list:
  • .Values.api.resources.limits.memory
  • .Values.api.resources.requests.memory
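A sketch of this adjustment, assuming the default 10Gi memory values from Table 3-5 and a relayLogDiskSize of 4Gi:

global:
  api:
    useRamForRelaylog: true
    relayLogDiskSize: 4Gi
api:
  resources:
    limits:
      memory: 14Gi    # 10Gi default + 4Gi relayLogDiskSize
    requests:
      memory: 14Gi    # 10Gi default + 4Gi relayLogDiskSize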
/global/api/startEmptyApiSlotNodeId Starting node ID to be used by the empty API slot pods while performing the auto disaster recovery procedure. 222 NA
/global/api/numOfEmptyApiSlots Number of empty API slots added to the cnDBTier cluster that are used while restoring the cnDBTier cluster. 4 NA

3.2 Management Parameters

The following table provides a list of management parameters.

Table 3-2 Management Parameters

Parameter Description Default Value Notes
/mgm/resources/limits/cpu Maximum CPU count allocated for the management node. 4 Maximum amount of CPU that Kubernetes allows the management pod to use.
/mgm/resources/limits/memory Maximum memory size allocated for the management node. 10Gi Memory limits for each management pod.
/mgm/resources/limits/ephemeral-storage Maximum ephemeral storage size allocated for the management node. 1Gi Ephemeral storage limits for each management pod.
/mgm/resources/requests/cpu Indicates the required CPU count allocated for the management node. 4 CPU allotment for each management pod.
/mgm/resources/requests/memory Indicates the required memory size allocated for the management node. 8Gi Memory allotment for each management pod.
/mgm/resources/requests/ephemeral-storage Indicates the required ephemeral storage size allocated for the management node. 90Mi Ephemeral storage allotment for each management pod.
/mgm/nodeSelector List of node selector labels that correspond to the Kubernetes nodes where the mgm pods must be scheduled. {} Use this parameter to configure the worker node labels if you want to use the node selector labels for scheduling the pods. For example, if you want the mgm pods to be scheduled on worker node with label nodetype=mgm, then nodeSelector must be configured as follows:
nodeSelector: 
- nodetype: mgm

nodeSelector is disabled if this parameter is passed empty.

/mgm/nodeAffinity/enable Indicates if node affinity is enabled. false Set this parameter to true if you want to enable and use node affinity rules for scheduling the pods.
/mgm/nodeAffinity/requiredDuringScheduling/enable Indicates if hard node affinity rules are enabled. true When this parameter is set to true, pods are scheduled only when the node affinity rules are met; otherwise, the pods are not scheduled.
/mgm/nodeAffinity/requiredDuringScheduling/affinitykeyvalues List of node affinity rules.
- keyname: custom_key
   keyvalues: 
     - customvalue1
     - customvalue2
Configure keyname and keyvalues with the key and value of the label of the worker node where the mgm pods must be scheduled.
For example, if you want the mgm pods to be scheduled on the worker node with label topology.kubernetes.io/zone=antarctica-east1 or topology.kubernetes.io/zone=antarctica-east2, then affinitykeyvalues must be configured as follows:
affinitykeyvalues:
      - keyname: topology.kubernetes.io/zone
        keyvalues: 
        - antarctica-east1
        - antarctica-east2
/mgm/nodeAffinity/preferredDuringScheduling/enable Indicates if soft node affinity rules are enabled. false When this parameter is set to true, the scheduler tries to schedule the pods on the worker nodes that meet the affinity rules. However, if none of the worker nodes meet the affinity rules, the pods get scheduled on other worker nodes.

Note: You can enable either preferredDuringScheduling or requiredDuringScheduling at a time.

/mgm/nodeAffinity/preferredDuringScheduling/expressions List of node affinity rules.
- weight: 1
  affinitykeyvalues:
  - keyname: custom_key
    keyvalues: 
    - customvalue1
    - customvalue2
Configure keyname and keyvalues with the key and value of the label of the worker node where mgm pods are preferred to be scheduled.

The value of weight can range between 1 and 100. The higher the value of weight, the higher the preference of the rule while scheduling the pods.

For example, if you want the mgm pods to be scheduled on worker node with label topology.kubernetes.io/zone=antarctica-east1 or topology.kubernetes.io/zone=antarctica-east2, then configure the affinitykeyvalues as follows:
- weight: 1
  affinitykeyvalues:
  - keyname: topology.kubernetes.io/zone
    keyvalues: 
    - antarctica-east1
    - antarctica-east2
Refer to the following example if you want to configure multiple soft node affinity rules with different weights:
- weight: 30
  affinitykeyvalues:
  - keyname: topology.kubernetes.io/zone
    keyvalues: 
    - antarctica-east1
    - antarctica-east2
- weight: 80
  affinitykeyvalues:
  - keyname: nodetype
    keyvalues: 
    - mgm
    - mgmd

In this case, more preference is given to the worker nodes with the label matching node type mgm or mgmd, as this rule has a greater weight.

3.3 Helm Test Parameters

The following table provides a list of Helm test parameters.

Table 3-3 Helm Test Parameters

Parameter Description Default Value Notes
/test/image/repository Docker image name of the MySQL NDB client. cndbtier-mysqlndb-client Change it to the actual docker image name in your docker registry. For example, cndbtier-mysqlndb-client.
/test/image/tag Docker image tag of the MySQL NDB client. 23.4.7 Change it to the actual version of the docker image. For example, 23.4.7.
/test/resources/limits/ephemeral-storage Maximum ephemeral storage size allocated for the management test node. 1Gi Ephemeral storage limits for each management test node.
/test/resources/requests/ephemeral-storage Indicates the required ephemeral storage size allocated for the management test node. 90Mi Ephemeral storage allotment for each management test node.
/test/statusCheck/replication/enable Indicates if the helm test for db-replication-svc is enabled. true Set this parameter to true if you want to perform a helm test on db-replication-svc to check whether the SQL database is accessible.
/test/statusCheck/monitor/enable Indicates if the helm test for db-monitor-svc is enabled. true Set this parameter to true if you want to perform a helm test on db-monitor-svc to check whether db-monitor is healthy.

3.4 NDB Parameters

The following table provides a list of network database (NDB) parameters.

Table 3-4 NDB Parameters

Parameter Description Default Value Notes
/ndb/sidecar/image/tag Version of the docker image of the db backup service installed as a sidecar for managing automatic DB backups. 23.4.7 Change it to the version of the docker image. For example, 23.4.7.
/ndb/sidecar/resources/limits/ephemeral-storage Maximum limit ephemeral-storage size allocated for the db backup service, installed as a sidecar for managing automatic DB backups. 1Gi NA
/ndb/sidecar/resources/requests/ephemeral-storage Required ephemeral-storage size allocated for the DB backup service, installed as a sidecar for managing automatic DB backups. 90Mi NA
/ndb/ndbWaitTimeout Indicates the time the ndbmtd pods wait for the ndb_mgmd pods to come online. 600 The maximum time the ndbmtd pods wait for the ndb_mgmd pods to come online.
/ndb/resources/limits/cpu Maximum limit on CPU count allocated for the data node. 10 Maximum amount of CPU that Kubernetes allows the data pod to use.
/ndb/resources/limits/memory Maximum limit on memory size allocated for the data node. 18Gi Memory limits for each data pod.
/ndb/resources/limits/ephemeral-storage Indicates the maximum limit of the ephemeral storage that can be allocated for the data node. 1Gi NA
/ndb/resources/requests/cpu Indicates the required CPU count allocated for the data node. 10 CPU allotment for each data pod.
/ndb/resources/requests/memory Indicates the required memory size allocated for the data node. 16Gi Memory allotment for each data pod.
/ndb/resources/requests/ephemeral-storage Indicates the required ephemeral storage size allocated for the data node. 90Mi Ephemeral storage allotment for each data pod.
/ndb/nodeSelector List of node selector labels that correspond to the Kubernetes nodes where the ndb pods must be scheduled. {} Use this parameter to configure the worker node labels if you want to use the node selector labels for scheduling the pods. For example, if you want the ndb pods to be scheduled on worker node with label nodetype=ndb, then nodeSelector must be configured as follows:
nodeSelector: 
- nodetype: ndb

nodeSelector is disabled if this parameter is passed empty.

/ndb/nodeAffinity/enable Indicates if node affinity is enabled. false Set this parameter to true if you want to enable and use node affinity rules for scheduling the pods.
/ndb/nodeAffinity/requiredDuringScheduling/enable Indicates if hard node affinity rules are enabled. true When this parameter is set to true, pods are scheduled only when the node affinity rules are met; otherwise, the pods are not scheduled.
/ndb/nodeAffinity/requiredDuringScheduling/affinitykeyvalues List of node affinity rules.
- keyname: custom_key
   keyvalues: 
     - customvalue1
     - customvalue2
Configure keyname and keyvalues with the key and value of the label of the worker node where the ndb pods must be scheduled.
For example, if you want the ndb pods to be scheduled on the worker node with label topology.kubernetes.io/zone=antarctica-east1 or topology.kubernetes.io/zone=antarctica-east2, then affinitykeyvalues must be configured as follows:
affinitykeyvalues:
      - keyname: topology.kubernetes.io/zone
        keyvalues: 
        - antarctica-east1
        - antarctica-east2
/ndb/nodeAffinity/preferredDuringScheduling/enable Indicates if soft node affinity rules are enabled. false When this parameter is set to true, the scheduler tries to schedule the pods on the worker nodes that meet the affinity rules. However, if none of the worker nodes meet the affinity rules, the pods get scheduled on other worker nodes.

Note: You can enable either preferredDuringScheduling or requiredDuringScheduling at a time.

/ndb/nodeAffinity/preferredDuringScheduling/expressions List of node affinity rules.
- weight: 1
  affinitykeyvalues:
  - keyname: custom_key
    keyvalues: 
    - customvalue1
    - customvalue2
Configure keyname and keyvalues with the key and value of the label of the worker node where ndb pods are preferred to be scheduled.

The value of weight can range between 1 and 100. The higher the value of weight, the higher the preference of the rule while scheduling the pods.

For example, if you want the ndb pods to be scheduled on worker node with label topology.kubernetes.io/zone=antarctica-east1 or topology.kubernetes.io/zone=antarctica-east2, then configure the affinitykeyvalues as follows:
- weight: 1
  affinitykeyvalues:
  - keyname: topology.kubernetes.io/zone
    keyvalues: 
    - antarctica-east1
    - antarctica-east2
Refer to the following example if you want to configure multiple soft node affinity rules with different weights:
- weight: 30
  affinitykeyvalues:
  - keyname: topology.kubernetes.io/zone
    keyvalues: 
    - antarctica-east1
    - antarctica-east2
- weight: 80
  affinitykeyvalues:
  - keyname: nodetype
    keyvalues: 
    - ndb
    - ndbmtd

In this case, more preference is given to the worker nodes with the label matching node type ndb or ndbmtd, as this rule has a greater weight.

3.5 API Parameters

The following table provides a list of Application Programming Interface (API) parameters.

Table 3-5 API Parameters

Parameter Description Default Value Notes
/api/resources/limits/cpu Maximum limit on the CPU count allocated for the api node. 8 Maximum amount of CPU that Kubernetes allows the SQL or API pod to use.
/api/resources/limits/memory Maximum limit on the memory size allocated for the api node. 10Gi Memory limits for each SQL or API pod.
/api/resources/limits/ephemeral-storage Maximum limit ephemeral-storage size allocated for the api node. 1Gi Ephemeral Storage Limits for each SQL or API pod.
/api/resources/requests/cpu Required CPU count allocated for the api node. 8 CPU allotment for each SQL or API pod.
/api/resources/requests/memory Required memory size allocated for the api node. 10Gi Memory allotment for each of the SQL or API pod.
/api/resources/requests/ephemeral-storage Required ephemeral-storage size allocated for the api node. 90Mi Ephemeral Storage allotment for each of the SQL or API pod.
/api/nodeSelector List of node selector labels that correspond to the Kubernetes nodes where the api pods must be scheduled. {} Use this parameter to configure the worker node labels if you want to use the node selector labels for scheduling the pods. For example, if you want the api pods to be scheduled on worker node with label nodetype=api, then nodeSelector must be configured as follows:
nodeSelector: 
- nodetype: api

nodeSelector is disabled if this parameter is passed empty.

/api/nodeAffinity/enable Indicates if node affinity is enabled. false Set this parameter to true if you want to enable and use node affinity rules for scheduling the pods.
/api/nodeAffinity/requiredDuringScheduling/enable Indicates if hard node affinity rules are enabled. true When this parameter is set to true, pods are scheduled only when the node affinity rules are met; otherwise, the pods are not scheduled.
/api/nodeAffinity/requiredDuringScheduling/affinitykeyvalues List of node affinity rules.
- keyname: custom_key 
 keyvalues: 
   - customvalue1   
   - customvalue2
Configure keyname and keyvalues with the key and value of the label of the worker node where the ndbmysqld API pods must be scheduled.
For example, if you want the ndbmysqld API pods to be scheduled on the worker node with label topology.kubernetes.io/zone=antarctica-east1 or topology.kubernetes.io/zone=antarctica-east2, then affinitykeyvalues must be configured as follows:
affinitykeyvalues:
      - keyname: topology.kubernetes.io/zone
        keyvalues: 
        - antarctica-east1
        - antarctica-east2
/api/nodeAffinity/preferredDuringScheduling/enable Indicates if soft node affinity rules are enabled. false When this parameter is set to true, the scheduler tries to schedule the pods on the worker nodes that meet the affinity rules. However, if none of the worker nodes meet the affinity rules, the pods get scheduled on other worker nodes.

Note: You can enable either preferredDuringScheduling or requiredDuringScheduling at a time.

/api/nodeAffinity/preferredDuringScheduling/expressions List of node affinity rules.
- weight: 1
  affinitykeyvalues:
  - keyname: custom_key
    keyvalues: 
    - customvalue1
    - customvalue2
Configure keyname and keyvalues with the key and value of the label of the worker node where ndbmysqld API pods are preferred to be scheduled.

The value of weight can range between 1 and 100. The higher the value of weight, the higher the preference of the rule while scheduling the pods.

For example, if you want the ndbmysqld pods to be scheduled on worker node with label topology.kubernetes.io/zone=antarctica-east1 or topology.kubernetes.io/zone=antarctica-east2, then configure the affinitykeyvalues as follows:
- weight: 1
  affinitykeyvalues:
  - keyname: topology.kubernetes.io/zone
    keyvalues: 
    - antarctica-east1
    - antarctica-east2
Refer to the following example if you want to configure multiple soft node affinity rules with different weights:
- weight: 30
  affinitykeyvalues:
  - keyname: topology.kubernetes.io/zone
    keyvalues: 
    - antarctica-east1
    - antarctica-east2
- weight: 80
  affinitykeyvalues:
  - keyname: nodetype
    keyvalues: 
    - api
    - ndbmysqld

In this case, more preference is given to the worker nodes with the label matching node type api or ndbmysqld, as this rule has a greater weight.

/api/egressannotations Parameter to modify the existing Egress annotation to match the Egress annotation supported by Kubernetes. oracle.com.cnc/egress-network: "oam" Set this parameter to the appropriate annotation that is supported by Kubernetes.
/api/externalService/sqlgeorepsvclabels[0]/loadBalancerIP Fixed LoadBalancer IP for ndbmysqldsvc-0. "" Configure the LoadBalancer IP for ndbmysqldsvc-0.
/api/externalService/sqlgeorepsvclabels[0]/annotations Annotations for the ndbmysqldsvc-0 LoadBalancer service. {} Configure the different annotations for the ndbmysqldsvc-0 LoadBalancer service.
/api/externalService/sqlgeorepsvclabels[1]/loadBalancerIP Fixed LoadBalancer IP for ndbmysqldsvc-1. "" Configure the LoadBalancer IP for ndbmysqldsvc-1.
/api/externalService/sqlgeorepsvclabels[1]/annotations Annotations for the ndbmysqldsvc-1 LoadBalancer service. {} Configure the different annotations for the ndbmysqldsvc-1 LoadBalancer service.
/api/externalService/sqlgeorepsvclabels[2]/loadBalancerIP Fixed LoadBalancer IP for ndbmysqldsvc-2. "" Configure the LoadBalancer IP for ndbmysqldsvc-2.
/api/externalService/sqlgeorepsvclabels[2]/annotations Annotations for the ndbmysqldsvc-2 LoadBalancer service. {} Configure the different annotations for the ndbmysqldsvc-2 LoadBalancer service.
/api/externalService/sqlgeorepsvclabels[3]/loadBalancerIP Fixed LoadBalancer IP for ndbmysqldsvc-3. "" Configure the LoadBalancer IP for ndbmysqldsvc-3.
/api/externalService/sqlgeorepsvclabels[3]/annotations Annotations for the ndbmysqldsvc-3 LoadBalancer service. {} Configure the different annotations for the ndbmysqldsvc-3 LoadBalancer service.
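For example, a sketch that assigns fixed LoadBalancer IPs to the first two replication services, assuming the parameters described in this table. The IP addresses and the annotation are hypothetical and depend on the load balancer used in your environment:

api:
  externalService:
    sqlgeorepsvclabels:
      - loadBalancerIP: "10.75.0.31"    # hypothetical IP for ndbmysqldsvc-0
        annotations:
          metallb.universe.tf/address-pool: oam    # hypothetical annotation
      - loadBalancerIP: "10.75.0.32"    # hypothetical IP for ndbmysqldsvc-1
        annotations: {}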
/api/connectivityService/multus/enable If this parameter is enabled, the connectivity service endpoints are generated from the multus IPs. false Set this parameter to true if:
  • multus is enabled globally
  • the SQL pods are configured with the multus annotation
  • the connectivity service needs to be created from the multus IPs
/api/connectivityService/multus/networkAttachmentDefinationTagName Provides the NAD file name for the connectivity service. "" Give the name of the NAD that the connectivity service uses to get the multus IP from the SQL pods.
/api/ndbWaitTimeout Indicates the time the ndbmtd pods wait for the ndb_mgmd pods to come online. 600 The maximum time the ndbmtd pods wait for the ndb_mgmd pods to come online.
/api/waitforndbmtd Indicates whether the ndbmtd pod waits for the mgm pods to come online before starting its process. true Boolean value representing whether the ndbmtd pod waits for the mgm pods to come online before starting its process.
/api/initsidecar/image/repository Name of the docker image of MySQL NDB client. cndbtier-mysqlndb-client Change it to the docker image name on your docker registry, for example, cndbtier-mysqlndb-client.
/api/initsidecar/image/tag Version for the docker image of MySQL NDB client. 23.4.7 Change it to the version of the docker image. For example, 23.4.7.
/api/initSidecarResources/limits/ephemeral-storage Maximum limit ephemeral-storage size allocated for the mysqlndbclient. 1Gi Ephemeral Storage Limits for mysqlndbclient.
/api/initSidecarResources/requests/ephemeral-storage Required ephemeral-storage size allocated for the mysqlndbclient. 90Mi Ephemeral Storage allotment for mysqlndbclient.
/api/ndbapp/connectivityService/usendbappselector This selector determines if the connectivity service points only to the non-georeplication pods or to all SQL pods. true Set this parameter to true if you want the connectivity service to point only to the non-georeplication pods. The value false indicates that the connectivity service points to all SQL pods.
/api/ndbapp/resources/limits/cpu Maximum CPU count limit allocated for the SQL or API node not participating in georeplication. 8 Maximum amount of CPU that Kubernetes allows the SQL or API node, that is not participating in georeplication, to use.
/api/ndbapp/resources/limits/memory Maximum memory size limit allocated for the SQL or API node not participating in georeplication. 10Gi Memory limit for each SQL or API node not participating in georeplication.
/api/ndbapp/resources/limits/ephemeral-storage Maximum limit ephemeral-storage size allocated for the SQL or API node not participating in georeplication. 1Gi Ephemeral storage limits for each SQL or API node not participating in georeplication.
/api/ndbapp/resources/requests/cpu Required CPU count allocated for the API or SQL node not participating in georeplication. 8 CPU allotment for each SQL or API node not participating in georeplication.
/api/ndbapp/resources/requests/memory Required memory size allocated for the API or SQL node not participating in georeplication. 10Gi Memory allotment for each of the SQL or API node not participating in georeplication.
/api/ndbapp/resources/requests/ephemeral-storage Required Ephemeral storage size allocated for the SQL or API node not participating in georeplication. 90Mi Ephemeral storage allotment for each of the SQL or API node not participating in georeplication.
/api/ndbapp/horizontalPodAutoscaler/memory/enabled Enable horizontal pod autoscaling on the basis of memory consumption. true If enabled, then the horizontal pod autoscaling is done on the basis of memory consumption of the ndbappmysqld pods.
/api/ndbapp/horizontalPodAutoscaler/memory/averageUtilization Defines the percentage of average memory utilization of the ndbappmysqld pods which triggers the horizontal pod autoscaling. 80 Defines the percentage of average memory utilization of the ndbappmysqld pods which triggers the horizontal pod autoscaling.
/api/ndbapp/horizontalPodAutoscaler/cpu/enabled Enable horizontal pod autoscaling on the basis of CPU consumption. false If enabled, then the horizontal pod autoscaling is done on the basis of CPU consumption of the ndbappmysqld pods.
/api/ndbapp/horizontalPodAutoscaler/cpu/averageUtilization States the percentage of average CPU utilization of the ndbappmysqld pods which triggers the horizontal pod autoscaling. 80 Defines the percentage of average CPU utilization of the ndbappmysqld pods which triggers the horizontal pod autoscaling.
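For example, a sketch that scales the ndbappmysqld pods on memory utilization up to the configured maximum replica count, assuming the parameters described in this table and in Table 3-1:

global:
  ndbappReplicaCount: 2
  ndbappReplicaMaxCount: 4
  autoscaling:
    ndbapp:
      enabled: true
api:
  ndbapp:
    horizontalPodAutoscaler:
      memory:
        enabled: true
        averageUtilization: 80
      cpu:
        enabled: false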
/api/ndbapp/externalconnectivityService/loadBalancerIP Fixed LoadBalancer IP for mysql-external-connectivity-service. "" Configure the LoadBalancer IP for mysql-external-connectivity-service.
/api/ndbapp/nodeSelector List of node selector labels that correspond to the Kubernetes nodes where the ndbapp pods must be scheduled. {} Use this parameter to configure the worker node labels if you want to use the node selector labels for scheduling the pods. For example, if you want the ndbapp pods to be scheduled on worker node with label nodetype=ndbapp, then nodeSelector must be configured as follows:
nodeSelector: 
- nodetype: ndbapp

nodeSelector is disabled if this parameter is passed empty.

/api/ndbapp/nodeAffinity/enable Indicates if node affinity is enabled. false Set this parameter to true if you want to enable and use node affinity rules for scheduling the pods.
/api/ndbapp/nodeAffinity/requiredDuringScheduling/enable Indicates if hard node affinity rules are enabled. true When this parameter is set to true, pods are scheduled only when the node affinity rules are met and are not scheduled otherwise.
/api/ndbapp/nodeAffinity/requiredDuringScheduling/affinitykeyvalues List of node affinity rules.
- keyname: custom_key
   keyvalues: 
     - customvalue1
     - customvalue2
Configure keyname and keyvalues with the key and value of the label of the worker node where the ndbappmysqld API pods must be scheduled.
For example, if you want the ndbappmysqld API pods to be scheduled on the worker node with label topology.kubernetes.io/zone=antarctica-east1 or topology.kubernetes.io/zone=antarctica-east2, then affinitykeyvalues must be configured as follows:
affinitykeyvalues:
      - keyname: topology.kubernetes.io/zone
        keyvalues: 
        - antarctica-east1
        - antarctica-east2
/api/ndbapp/nodeAffinity/preferredDuringScheduling/enable Indicates if soft node affinity rules are enabled. false When this parameter is set to true, the scheduler tries to schedule the pods on the worker nodes that meet the affinity rules. However, if none of the worker nodes meets the affinity rules, the pods get scheduled on other worker nodes.

Note: You can enable only one of preferredDuringScheduling or requiredDuringScheduling at a time.

/api/ndbapp/nodeAffinity/preferredDuringScheduling/expressions List of node affinity rules.
- weight: 1
  affinitykeyvalues:
  - keyname: custom_key
    keyvalues: 
    - customvalue1
    - customvalue2
Configure keyname and keyvalues with the key and value of the label of the worker node where ndbappmysqld pods are preferred to be scheduled.

The value of weight can range between 1 and 100. The higher the value of weight, the higher the preference of the rule while scheduling the pods.

For example, if you want the ndbappmysqld pods to be scheduled on worker node with label topology.kubernetes.io/zone=antarctica-east1 or topology.kubernetes.io/zone=antarctica-east2, then configure the affinitykeyvalues as follows:
- weight: 1
  affinitykeyvalues:
  - keyname: topology.kubernetes.io/zone
    keyvalues: 
    - antarctica-east1
    - antarctica-east2
Refer to the following example if you want to configure multiple soft node affinity rules with different weights:
- weight: 30
  affinitykeyvalues:
  - keyname: topology.kubernetes.io/zone
    keyvalues: 
    - antarctica-east1
    - antarctica-east2
- weight: 80
  affinitykeyvalues:
  - keyname: nodetype
    keyvalues: 
    - api
    - ndbappmysqld

In this case, more preference is given to the worker nodes with label matching to node type api or ndbappmysqld as this rule has a greater value for weight.

3.6 DB Replication Service Parameters

The following table provides a list of database replication service parameters.

Table 3-6 DB Replication Service Parameters

Parameter Description Default Value Notes
/db-replication-svc/nodeSelector List of node selector labels that correspond to the Kubernetes nodes where the db-replication-svc pods must be scheduled. {} Use this parameter to configure the worker node labels if you want to use the node selector labels for scheduling the pods. For example, if you want the db-replication-svc pods to be scheduled on worker node with label nodetype=replsvc, then nodeSelector must be configured as follows:
nodeSelector: 
- nodetype: replsvc

nodeSelector is disabled if this parameter is passed empty.

/db-replication-svc/nodeAffinity/enable Indicates if node affinity is enabled. false Set this parameter to true if you want to enable and use node affinity rules for scheduling the pods.
/db-replication-svc/nodeAffinity/requiredDuringScheduling/enable Indicates if hard node affinity rules are enabled. true When this parameter is set to true, pods are scheduled only when the node affinity rules are met and are not scheduled otherwise.
/db-replication-svc/nodeAffinity/requiredDuringScheduling/affinitykeyvalues List of node affinity rules.
- keyname: custom_key
   keyvalues: 
     - customvalue1
     - customvalue2
Configure keyname and keyvalues with the key and value of the label of the worker node where the replication service pods must be scheduled.
For example, if you want the replication service pods to be scheduled on the worker node with label topology.kubernetes.io/zone=antarctica-east1 or topology.kubernetes.io/zone=antarctica-east2, then affinitykeyvalues must be configured as follows:
affinitykeyvalues:
      - keyname: topology.kubernetes.io/zone
        keyvalues: 
        - antarctica-east1
        - antarctica-east2
/db-replication-svc/nodeAffinity/preferredDuringScheduling/enable Indicates if soft node affinity rules are enabled. false When this parameter is set to true, the scheduler tries to schedule the pods on the worker nodes that meet the affinity rules. However, if none of the worker nodes meets the affinity rules, the pods get scheduled on other worker nodes.

Note: You can enable only one of preferredDuringScheduling or requiredDuringScheduling at a time.

/db-replication-svc/nodeAffinity/preferredDuringScheduling/expressions List of node affinity rules.
- weight: 1
  affinitykeyvalues:
  - keyname: custom_key
    keyvalues: 
    - customvalue1
    - customvalue2
Configure keyname and keyvalues with the key and value of the label of the worker node where replication service pods are preferred to be scheduled.

The value of weight can range between 1 and 100. The higher the value of weight, the higher the preference of the rule while scheduling the pods.

For example, if you want the replication service pods to be scheduled on worker node with label topology.kubernetes.io/zone=antarctica-east1 or topology.kubernetes.io/zone=antarctica-east2, then configure the affinitykeyvalues as follows:
- weight: 1
  affinitykeyvalues:
  - keyname: topology.kubernetes.io/zone
    keyvalues: 
    - antarctica-east1
    - antarctica-east2
Refer to the following example if you want to configure multiple soft node affinity rules with different weights:
- weight: 30
  affinitykeyvalues:
  - keyname: topology.kubernetes.io/zone
    keyvalues: 
    - antarctica-east1
    - antarctica-east2
- weight: 80
  affinitykeyvalues:
  - keyname: nodetype
    keyvalues: 
    - dbtierservice
    - replication

In this case, more preference is given to the worker nodes with label matching to node type dbtierservice or replication as this rule has a greater value for weight.

/db-replication-svc/dbreplsvcdeployments[0]/name Name of the replication service, which is a combination of the site name and mate site name. cndbtiersitename-cndbtierfirstmatesitename-replication-svc

Replace <${OCCNE_SITE_NAME}>-<${OCCNE_MATE_SITE_NAME}>-replication-svc with the OCCNE_SITE_NAME and OCCNE_MATE_SITE_NAME you used.

For example: cndbtiersitename-cndbtierfirstmatesitename-replication-svc

/db-replication-svc/dbreplsvcdeployments[0]/enabled Set this parameter to true if you want the leader replication service to be enabled. false Set this parameter to true if you want the replication service pod to be deployed.

Note: This parameter must be set to true if secure transfer of backups to a remote server is enabled or if you want to enable replication across multiple sites.

/db-replication-svc/dbreplsvcdeployments[0]/multus/enable Set it to true if you want to use the multus IP, rather than the LoadBalancer IP, to communicate with the remote site. false If set to true, the replication service on the local site communicates with the remote site using the multus IP rather than the LoadBalancer IP.
/db-replication-svc/dbreplsvcdeployments[0]/multus/networkAttachmentDefinationTagName Name of the Network Attachment Definition (NAD) file that is given as a pod annotation to the replication pod. "" Provide the same Network Attachment Definition file name that is given as a pod annotation to the replication deployment.
/db-replication-svc/dbreplsvcdeployments[0]/mysql/primaryhost The CLUSTER-IP from the ndbmysqldsvc-0 LoadBalancer service. ndbmysqld-0.ndbmysqldsvc.occne-cndbtier.svc.cluster.local Replace ndbmysqld-0.ndbmysqldsvc.<${OCCNE_NAMESPACE}>.svc.<${OCCNE_DOMAIN}> with the OCCNE_NAMESPACE and OCCNE_DOMAIN you used. For example, ndbmysqld-0.ndbmysqldsvc.occne-cndbtier.svc.occne1-cgbu-cne-dbtier.

If you are deploying cnDBTier with a pod prefix, then ensure that ndbmysqld pod has the prefix. That is, replace ndbmysqld-0 with actual pod name including the prefix. For example, if the value of the /global/k8sResource/container/prefix configuration is "cnc-db-prod", then replace ndbmysqld-0 with cnc-db-prod-ndbmysqld-0.

/db-replication-svc/dbreplsvcdeployments[0]/mysql/primarysignalhost The EXTERNAL-IP from the ndbmysqldsvc-0 LoadBalancer service. "" EXTERNAL-IP address assigned to the ndbmysqldsvc-0 service, which is used for establishing the primary georeplication channel with the remote site.
/db-replication-svc/dbreplsvcdeployments[0]/mysql/primaryhostserverid The unique SQL server ID for the primary SQL node. 1000 For the primary SQL node of site 1, set it to 1000. For the primary SQL node of site 2, set it to 2000. For the primary SQL node of site 3, set it to 3000.
Calculate the server ID using the following formula:
server_id = siteid * 1000 + ndbmysqld_pod_index
For example, if site ID = 1, then:
  • server ID for ndbmysqld-0 = siteid * 1000 + ndbmysqld_pod_index = 1 * 1000 + 0 = 1000
  • server ID for ndbmysqld-1 = siteid * 1000 + ndbmysqld_pod_index = 1 * 1000 + 1 = 1001
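Applying the same formula to site 2 (site ID = 2):
  • server ID for ndbmysqld-0 = 2 * 1000 + 0 = 2000
  • server ID for ndbmysqld-1 = 2 * 1000 + 1 = 2001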
/db-replication-svc/dbreplsvcdeployments[0]/mysql/secondaryhost The CLUSTER-IP from the ndbmysqldsvc-1 LoadBalancer service. ndbmysqld-1.ndbmysqldsvc.occne-cndbtier.svc.cluster.local Replace ndbmysqld-1.ndbmysqldsvc.<${OCCNE_NAMESPACE}>.svc.<${OCCNE_DOMAIN}> with the OCCNE_NAMESPACE and OCCNE_DOMAIN you used. For example, ndbmysqld-1.ndbmysqldsvc.occne-cndbtier.svc.occne1-cgbu-cne-dbtier.

If you are deploying cnDBTier with a pod prefix, then ensure that ndbmysqld pod has the prefix. That is, replace ndbmysqld-1 with actual pod name including the prefix. For example, if the value of the /global/k8sResource/container/prefix configuration is "cnc-db-prod", then replace ndbmysqld-1 with cnc-db-prod-ndbmysqld-1.

/db-replication-svc/dbreplsvcdeployments[0]/mysql/secondarysignalhost The EXTERNAL-IP from the ndbmysqldsvc-1 LoadBalancer service. "" EXTERNAL-IP address assigned to the ndbmysqldsvc-1 service, which is used for establishing the secondary georeplication channel with the remote site.
/db-replication-svc/dbreplsvcdeployments[0]/mysql/secondaryhostserverid The unique SQL server ID for the secondary SQL node. 1001 For the secondary SQL node of site 1, set it to 1001. For the secondary SQL node of site 2, set it to 2001. For the secondary SQL node of site 3, set it to 3001.
Calculate the server ID using the following formula:
server_id = siteid * 1000 + ndbmysqld_pod_index
For example, if site ID = 1, then:
  • server ID for ndbmysqld-0 = siteid * 1000 + ndbmysqld_pod_index = 1 * 1000 + 0 = 1000
  • server ID for ndbmysqld-1 = siteid * 1000 + ndbmysqld_pod_index = 1 * 1000 + 1 = 1001
/db-replication-svc/dbreplsvcdeployments[0]/replication/localsiteip Local site replication service external IP assigned to the replication service. "" EXTERNAL-IP address assigned to the <${OCCNE_SITE_NAME}>-<${OCCNE_MATE_SITE_NAME}>-replication-svc in the current site.
/db-replication-svc/dbreplsvcdeployments[0]/replication/localsiteport Local site port for the current site that is being installed. "80" NA
/db-replication-svc/dbreplsvcdeployments[0]/replication/channelgroupid Replication channel group ID of the replication service pod that handles the configuration and monitoring of the replication channels of the primary and secondary SQL nodes which belong to this group ID. 1 Channel group identifier of the replication channel group.

Valid values: 1, 2

/db-replication-svc/dbreplsvcdeployments[0]/mysql/primarysignalhostmultusconfig/networkAttachmentDefinationTagName Name of the NAD file that is used to identify the multus IP from the ndbmysqld pods for setting up the primary replication channel. "" If set, cnDBTier uses this name to identify the multus IP from the ndbmysqld pods and uses that IP to set up the primary replication channel.
/db-replication-svc/dbreplsvcdeployments[0]/mysql/primarysignalhostmultusconfig/multusEnabled If set to true, the primary replication channel is set up using the multus IP. false Set it to true if you want the primary replication channel to be set up with the multus IP address.
/db-replication-svc/dbreplsvcdeployments[0]/mysql/secondarysignalhostmultusconfig/networkAttachmentDefinationTagName Name of the NAD file that is used to identify the multus IP from the ndbmysqld pods for setting up the secondary replication channel. "" If set, cnDBTier uses this name to identify the multus IP from the ndbmysqld pods and uses that IP to set up the secondary replication channel.
/db-replication-svc/dbreplsvcdeployments[0]/mysql/secondarysignalhostmultusconfig/multusEnabled If set to true, the secondary replication channel is set up using the multus IP. false Set it to true if you want the secondary replication channel to be set up with the multus IP address.
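
For example, the following is a minimal sketch that sets up both replication channels over multus; the NAD name nad-replication is a hypothetical placeholder and must match the pod annotation on the replication deployment:

mysql:
  primarysignalhostmultusconfig:
    multusEnabled: true                                      # primary channel over multus
    networkAttachmentDefinationTagName: "nad-replication"    # hypothetical NAD name
  secondarysignalhostmultusconfig:
    multusEnabled: true                                      # secondary channel over multus
    networkAttachmentDefinationTagName: "nad-replication"    # hypothetical NAD name
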
/db-replication-svc/dbreplsvcdeployments[0]/replication/matesitename The mate site for the current site that is being installed. cndbtierfirstmatesitename

Replace <${OCCNE_MATE_SITE_NAME}> with the OCCNE_MATE_SITE_NAME you used.

For example, cndbtierfirstmatesitename.

/db-replication-svc/dbreplsvcdeployments[0]/replication/remotesiteip The mate site replication service external IP for establishing georeplication. ""

For deploying cndbtier site1, use "".

For deploying cndbtier site2 or site3, use the EXTERNAL-IP from the site1 occne-db-replication-svc LoadBalancer service.

Use the value from the OCCNE_MATE_REPLICATION_SVC environment variable.

/db-replication-svc/dbreplsvcdeployments[0]/replication/remotesiteport Mate site port for the current site that is being installed. "80" NA
/db-replication-svc/dbreplsvcdeployments[0]/pvc/name Name of the pvc which replication service uses for fault recovery. pvc-cndbtiersitename-cndbtierfirstmatesitename-replication-svc Replace pvc-<${OCCNE_SITE_NAME}>-<${OCCNE_MATE_SITE_NAME}>-replication-svc with the OCCNE_SITE_NAME and OCCNE_MATE_SITE_NAME. For example: pvc-cndbtiersitename-cndbtierfirstmatesitename-replication-svc
/db-replication-svc/dbreplsvcdeployments[0]/pvc/disksize Size of the disk used to store the backup retrieved from the remote site and data nodes. 8Gi Size of the PVC used to store the backup retrieved from the remote site and data nodes.
/db-replication-svc/dbreplsvcdeployments[0]/labels Provide specific pod labels apart from commonlabels. {} Set the labels for db-replication, apart from commonlabels, in this format. For example: app-home: cndbtier
/db-replication-svc/dbreplsvcdeployments[0]/service/loadBalancerIP Fixed LoadBalancer IP for <${OCCNE_SITE_NAME}>-<${OCCNE_MATE_SITE_NAME}>-replication-svc. "" Configure the LoadBalancer IP that must be assigned to <${OCCNE_SITE_NAME}>-<${OCCNE_MATE_SITE_NAME}>-replication-svc.
/db-replication-svc/dbreplsvcdeployments[0]/egressannotations Parameter to modify the existing Egress annotation to match the Egress annotation supported by Kubernetes. oracle.com.cnc/egress-network: "oam" Set this parameter to the appropriate annotation that is supported by Kubernetes.
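
Putting the preceding dbreplsvcdeployments[0] parameters together, the following is a minimal sketch for site 1 of a two-site setup; the IP addresses are illustrative placeholders, and the names and server IDs follow the defaults documented above:

db-replication-svc:
  dbreplsvcdeployments:
    - name: cndbtiersitename-cndbtierfirstmatesitename-replication-svc
      enabled: true
      mysql:
        primaryhost: ndbmysqld-0.ndbmysqldsvc.occne-cndbtier.svc.cluster.local
        primarysignalhost: "10.0.0.10"     # illustrative EXTERNAL-IP of ndbmysqldsvc-0
        primaryhostserverid: 1000          # site 1: 1 * 1000 + 0
        secondaryhost: ndbmysqld-1.ndbmysqldsvc.occne-cndbtier.svc.cluster.local
        secondarysignalhost: "10.0.0.11"   # illustrative EXTERNAL-IP of ndbmysqldsvc-1
        secondaryhostserverid: 1001        # site 1: 1 * 1000 + 1
      replication:
        localsiteip: "10.0.0.20"           # illustrative EXTERNAL-IP of this replication service
        localsiteport: "80"
        channelgroupid: 1
        matesitename: cndbtierfirstmatesitename
        remotesiteip: ""                   # empty for site 1; for site 2, the EXTERNAL-IP of the site 1 replication service
        remotesiteport: "80"
      pvc:
        name: pvc-cndbtiersitename-cndbtierfirstmatesitename-replication-svc
        disksize: 8Gi
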
/db-replication-svc/dbreplsvcdeployments[1]/name Name of the replication service, which is a combination of the site name and second mate site name. cndbtiersitename-cndbtiersecondmatesitename-replication-svc

Replace <${OCCNE_SITE_NAME}>-<${OCCNE_SECOND_MATE_SITE_NAME}>-replication-svc with the OCCNE_SITE_NAME and OCCNE_SECOND_MATE_SITE_NAME you used.

For example, chicago-pacific-replication-svc.

/db-replication-svc/dbreplsvcdeployments[1]/enabled Indicates if the second replication service deployment is enabled. true

In case of three-site replication, a second mate site exists for each site, so set this parameter to true.

In case of two-site replication, only one mate site exists, so set this parameter to false.

/db-replication-svc/dbreplsvcdeployments[1]/mysql/primaryhost The CLUSTER-IP from the ndbmysqldsvc-2 LoadBalancer service. ndbmysqld-2.ndbmysqldsvc.occne-cndbtier.svc.cluster.local Replace ndbmysqld-2.ndbmysqldsvc.<${OCCNE_NAMESPACE}>.svc.<${OCCNE_DOMAIN}> with the OCCNE_NAMESPACE and OCCNE_DOMAIN you used. For example, ndbmysqld-2.ndbmysqldsvc.occne-cndbtier.svc.occne1-cgbu-cne-dbtier.

If you are deploying cnDBTier with a pod prefix, then ensure that ndbmysqld pod has the prefix. That is, replace ndbmysqld-2 with actual pod name including the prefix. For example, if the value of the /global/k8sResource/container/prefix configuration is "cnc-db-prod", then replace ndbmysqld-2 with cnc-db-prod-ndbmysqld-2.

/db-replication-svc/dbreplsvcdeployments[1]/mysql/primarysignalhost The EXTERNAL-IP from the ndbmysqldsvc-2 LoadBalancer service. "" EXTERNAL-IP address assigned to the ndbmysqldsvc-2 service, which is used for establishing the primary georeplication channel with the remote site.
/db-replication-svc/dbreplsvcdeployments[1]/mysql/primaryhostserverid The unique SQL server ID for the primary SQL node. 1002 For the site 1 primary SQL node, set it to 1002; for the site 2 primary SQL node, set it to 2002; for the site 3 primary SQL node, set it to 3002.
/db-replication-svc/dbreplsvcdeployments[1]/mysql/secondaryhost The CLUSTER-IP from the ndbmysqldsvc-3 LoadBalancer service. ndbmysqld-3.ndbmysqldsvc.occne-cndbtier.svc.cluster.local Replace ndbmysqld-3.ndbmysqldsvc.<${OCCNE_NAMESPACE}>.svc.<${OCCNE_DOMAIN}> with the OCCNE_NAMESPACE and OCCNE_DOMAIN you used. For example, ndbmysqld-3.ndbmysqldsvc.occne-cndbtier.svc.occne1-cgbu-cne-dbtier.

If you are deploying cnDBTier with a pod prefix, then ensure that ndbmysqld pod has the prefix. That is, replace ndbmysqld-3 with actual pod name including the prefix. For example, if the value of the /global/k8sResource/container/prefix configuration is "cnc-db-prod", then replace ndbmysqld-3 with cnc-db-prod-ndbmysqld-3.

/db-replication-svc/dbreplsvcdeployments[1]/mysql/secondarysignalhost The EXTERNAL-IP from the ndbmysqldsvc-3 LoadBalancer service. "" EXTERNAL-IP address assigned to the ndbmysqldsvc-3 service, which is used for establishing the secondary georeplication channel with the remote site.
/db-replication-svc/dbreplsvcdeployments[1]/mysql/secondaryhostserverid The unique SQL server ID for the secondary SQL node. 1003 For the site 1 secondary SQL node, set it to 1003; for the site 2 secondary SQL node, set it to 2003; for the site 3 secondary SQL node, set it to 3003.
/db-replication-svc/dbreplsvcdeployments[1]/replication/localsiteip Local site replication service external IP assigned to the replication service. "" EXTERNAL-IP address assigned to the <${OCCNE_SITE_NAME}>-<${OCCNE_SECOND_MATE_SITE_NAME}>-replication-svc in the current site.
/db-replication-svc/dbreplsvcdeployments[1]/replication/localsiteport Local site port for the current site that is being installed. "80" NA
/db-replication-svc/dbreplsvcdeployments[1]/replication/channelgroupid Replication channel group ID of the replication service pod that handles the configuration and monitoring of the replication channels of the primary and secondary SQL nodes which belong to this group ID. 1 Channel group identifier of the replication channel group.

Valid values: 1, 2

/db-replication-svc/dbreplsvcdeployments[1]/replication/matesitename Second mate site for the current site that is being installed. cndbtiersecondmatesitename

Replace <${OCCNE_SECOND_MATE_SITE_NAME}> with the OCCNE_SECOND_MATE_SITE_NAME you used.

For example: pacific

/db-replication-svc/dbreplsvcdeployments[1]/replication/remotesiteip Mate site replication service external IP for establishing geo-replication. ""

If deploying cndbtier site1, use ""; if deploying cndbtier site2, use "".

If deploying cndbtier site3, use the EXTERNAL-IP from the site2 occne-db-replication-svc LoadBalancer service.

Use the value from the SECOND_MATE_REPLICATION_SVC environment variable.

/db-replication-svc/dbreplsvcdeployments[1]/replication/remotesiteport Mate site port for the current site that is being installed. "80" NA
/db-replication-svc/dbreplsvcdeployments[1]/labels Provide specific pod labels apart from commonlabels. {} Set the labels for db-replication, apart from commonlabels, in this format.

For example: app-home: cndbtier

/db-replication-svc/dbreplsvcdeployments[1]/service/loadBalancerIP Fixed LoadBalancer IP for <${OCCNE_SITE_NAME}>-<${OCCNE_SECOND_MATE_SITE_NAME}>-replication-svc. "" Configure the LoadBalancer IP that must be assigned to <${OCCNE_SITE_NAME}>-<${OCCNE_SECOND_MATE_SITE_NAME}>-replication-svc.
/db-replication-svc/dbreplsvcdeployments[1]/egressannotations Parameter to modify the existing Egress annotation to match the Egress annotation supported by Kubernetes. oracle.com.cnc/egress-network: "oam" Set this parameter to the appropriate annotation that is supported by Kubernetes.
/db-replication-svc/dbreplsvcdeployments[2]/name Name of the replication service, that is, a combination of site name and third mate site name. cndbtiersitename-<${OCCNE_THIRD_MATE_SITE_NAME}>-replication-svc

Replace <${OCCNE_SITE_NAME}>-<${OCCNE_THIRD_MATE_SITE_NAME}>-replication-svc with the OCCNE_SITE_NAME and OCCNE_THIRD_MATE_SITE_NAME that you used.

For example: chicago-redsea-replication-svc

/db-replication-svc/dbreplsvcdeployments[2]/enabled Indicates if the third replication service deployment is enabled. false

In case of four-site replication, a third mate site exists for each site, so set this parameter to true.

In case of three-site replication, only the first and second mate sites exist, so set this parameter to false.

In case of two-site replication, only one mate site exists, so set this parameter to false.
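
For example, in a three-site replication setup, the enabled flags of the three deployments would look like the following sketch (the deployment names are illustrative placeholders following the naming convention above):

dbreplsvcdeployments:
  - name: cndbtiersitename-cndbtierfirstmatesitename-replication-svc
    enabled: true    # first mate site always exists when replication is used
  - name: cndbtiersitename-cndbtiersecondmatesitename-replication-svc
    enabled: true    # second mate site exists in a three-site setup
  - name: cndbtiersitename-cndbtierthirdmatesitename-replication-svc
    enabled: false   # third mate site is needed only for four-site setups
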

/db-replication-svc/dbreplsvcdeployments[2]/mysql/primaryhost The CLUSTER-IP from the ndbmysqldsvc-4 LoadBalancer service. ndbmysqld-4.ndbmysqldsvc.occne-cndbtier.svc.cluster.local Replace ndbmysqld-4.ndbmysqldsvc.<${OCCNE_NAMESPACE}>.svc.<${OCCNE_DOMAIN}> with the OCCNE_NAMESPACE and OCCNE_DOMAIN you used. For example, ndbmysqld-4.ndbmysqldsvc.occne-cndbtier.svc.occne1-cgbu-cne-dbtier.

If you are deploying cnDBTier with a pod prefix, then ensure that ndbmysqld pod has the prefix. That is, replace ndbmysqld-4 with actual pod name including the prefix. For example, if the value of the /global/k8sResource/container/prefix configuration is "cnc-db-prod", then replace ndbmysqld-4 with cnc-db-prod-ndbmysqld-4.

/db-replication-svc/dbreplsvcdeployments[2]/mysql/primarysignalhost The EXTERNAL-IP from the ndbmysqldsvc-4 LoadBalancer service. "" EXTERNAL-IP address assigned to the ndbmysqldsvc-4 service, which is used for establishing the primary georeplication channel with the remote site.
/db-replication-svc/dbreplsvcdeployments[2]/mysql/primaryhostserverid The unique SQL server ID for the primary SQL node. 1004 For the site 1 primary SQL node, set it to 1004; for the site 2 primary SQL node, set it to 2004; for the site 3 primary SQL node, set it to 3004; for the site 4 primary SQL node, set it to 4004.
/db-replication-svc/dbreplsvcdeployments[2]/mysql/secondaryhost The CLUSTER-IP from the ndbmysqldsvc-5 LoadBalancer service. ndbmysqld-5.ndbmysqldsvc.occne-cndbtier.svc.cluster.local Replace ndbmysqld-5.ndbmysqldsvc.<${OCCNE_NAMESPACE}>.svc.<${OCCNE_DOMAIN}> with the OCCNE_NAMESPACE and OCCNE_DOMAIN. For example, ndbmysqld-5.ndbmysqldsvc.occne-cndbtier.svc.occne1-cgbu-cne-dbtier.

If you are deploying cnDBTier with a pod prefix, then ensure that ndbmysqld pod has the prefix. That is, replace ndbmysqld-5 with actual pod name including the prefix. For example, if the value of the /global/k8sResource/container/prefix configuration is "cnc-db-prod", then replace ndbmysqld-5 with cnc-db-prod-ndbmysqld-5.

/db-replication-svc/dbreplsvcdeployments[2]/mysql/secondarysignalhost The EXTERNAL-IP from the ndbmysqldsvc-5 LoadBalancer service. "" EXTERNAL-IP address assigned to the ndbmysqldsvc-5 service, which is used for establishing the secondary georeplication channel with the remote site.
/db-replication-svc/dbreplsvcdeployments[2]/mysql/secondaryhostserverid The unique SQL server ID for the secondary SQL node. 1005 For the site 1 secondary SQL node, set it to 1005; for the site 2 secondary SQL node, set it to 2005; for the site 3 secondary SQL node, set it to 3005; for the site 4 secondary SQL node, set it to 4005.
/db-replication-svc/dbreplsvcdeployments[2]/replication/localsiteip Local site replication service external IP assigned to the replication service. "" EXTERNAL-IP address assigned to the <${OCCNE_SITE_NAME}>-<${OCCNE_THIRD_MATE_SITE_NAME}>-replication-svc in the current site.
/db-replication-svc/dbreplsvcdeployments[2]/replication/localsiteport Local site port for the current site that is being installed. "80" NA
/db-replication-svc/dbreplsvcdeployments[2]/replication/matesitename Third mate site for the current site that is being installed. <${OCCNE_THIRD_MATE_SITE_NAME}>

Replace <${OCCNE_THIRD_MATE_SITE_NAME}> with the OCCNE_THIRD_MATE_SITE_NAME you used.

For example: redsea

/db-replication-svc/dbreplsvcdeployments[2]/replication/remotesiteip Mate site replication service external IP for establishing georeplication. ""

If deploying cndbtier site1, use "";

If deploying cndbtier site2, use "";

If deploying cndbtier site3, use "";

If deploying cndbtier site4, use the EXTERNAL-IP from the site3 occne-db-replication-svc LoadBalancer service.

Use the value from the OCCNE_THIRD_MATE_REPLICATION_SVC environment variable.

/db-replication-svc/dbreplsvcdeployments[2]/replication/remotesiteport Mate site port for the current site that is being installed. "80" NA
/db-replication-svc/dbreplsvcdeployments[2]/labels Provide specific pod labels apart from commonlabels. {} Set the labels for db-replication, apart from commonlabels, in this format. For example: app-home: cndbtier
/db-replication-svc/dbreplsvcdeployments[2]/service/loadBalancerIP Fixed LoadBalancer IP for <${OCCNE_SITE_NAME}>-<${OCCNE_THIRD_MATE_SITE_NAME}>-replication-svc. "" Configure the LoadBalancer IP that must be assigned to <${OCCNE_SITE_NAME}>-<${OCCNE_THIRD_MATE_SITE_NAME}>-replication-svc.
/db-replication-svc/dbreplsvcdeployments[2]/egressannotations Parameter to modify the existing Egress annotation to match the Egress annotation supported by Kubernetes. oracle.com.cnc/egress-network: "oam" Set this parameter to the appropriate annotation that is supported by Kubernetes.
/db-replication-svc/startupProbe/initialDelaySeconds Kubernetes pod configuration that specifies the number of seconds to wait before initiating the first health check for a container. 60 NA
/db-replication-svc/startupProbe/successThreshold Kubernetes pod configuration that specifies the minimum number of consecutive successful health checks required for a probe to be considered successful. 1 NA
/db-replication-svc/startupProbe/failureThreshold Kubernetes pod configuration that specifies the maximum number of consecutive failed health checks before a container is considered to be failed. 30 If the container fails, the pod will be restarted.
/db-replication-svc/startupProbe/periodSeconds Kubernetes pod configuration that determines the interval (in seconds) between consecutive health checks performed on a container. 10 NA
/db-replication-svc/startupProbe/timeoutSeconds Kubernetes pod configuration that specifies the maximum amount of time (in seconds) to wait for a response from a container during a health check before considering it a failure. 1 NA
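
Collecting the startup probe parameters above into a single sketch with the documented defaults:

db-replication-svc:
  startupProbe:
    initialDelaySeconds: 60   # wait 60 seconds before the first health check
    successThreshold: 1       # one successful check marks the probe as successful
    failureThreshold: 30      # 30 consecutive failures mark the container as failed
    periodSeconds: 10         # run a health check every 10 seconds
    timeoutSeconds: 1         # each check times out after 1 second
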
/db-replication-svc/numberofparallelbackuptransfer Number of threads created for transferring backups of multiple data nodes in parallel. 4 Each thread transfers the backup of one data node.
/db-replication-svc/grrecoveryresources/limits/cpu The maximum limit of CPU count allocated for the replication service deployment that restores the cluster using the backup. 2 The maximum amount of CPU that Kubernetes allocates for the replication service deployment that restores the cluster using the backup.
/db-replication-svc/grrecoveryresources/limits/memory The maximum limit of memory size allocated for the replication service deployment that restores the cluster using the backup. 12Gi The maximum amount of memory size that Kubernetes allocates for the replication service deployment that restores the cluster using the backup.
/db-replication-svc/grrecoveryresources/limits/ephemeral-storage The maximum limit of ephemeral storage size allocated for the db-replication-svc pod. 1Gi Ephemeral storage Limits for each of the db-replication-svc pod.
/db-replication-svc/grrecoveryresources/requests/cpu The required CPU count allocated for the replication service deployment that restores the cluster using the backup. 2 The CPU allotment for each replication service deployment that restores the cluster using the backup.
/db-replication-svc/grrecoveryresources/requests/memory The required memory size allocated for the replication service deployment that restores the cluster using the backup. 12Gi The memory size allotment for each replication service deployment that restores the cluster using the backup.
/db-replication-svc/grrecoveryresources/requests/ephemeral-storage Required ephemeral storage size allocated for the db-replication-svc pod. 90Mi Ephemeral storage allotment for each db-replication-svc pod.
/db-replication-svc/resources/limits/cpu The maximum limit of CPU count allocated for the DB replication service pods. 1 The maximum amount of CPU that Kubernetes allocates for each db-replication-svc pod to use.
/db-replication-svc/resources/limits/memory The maximum memory size allocated for the DB replication service pods. 2048Mi The maximum amount of memory size that Kubernetes allocates for each db-replication-svc pod.
/db-replication-svc/resources/limits/ephemeral-storage The maximum limit of ephemeral storage size allocated for the DB replication service pods. 1Gi The ephemeral storage limit for each db-replication-svc pod.
/db-replication-svc/resources/requests/cpu The required CPU count allocated for the DB replication service pods. 1 The CPU allotment for each db-replication-svc pod.
/db-replication-svc/resources/requests/memory The required memory size allocated for the DB replication service pods. 2048Mi The memory allotment for each db-replication-svc pod.
/db-replication-svc/resources/requests/ephemeral-storage The required ephemeral storage size allocated for the DB replication service pods. 90Mi The ephemeral storage allotment for each db-replication-svc pod.
/db-replication-svc/initcontainer/image/repository The name of the docker image of the MySQL NDB client. cndbtier-mysqlndb-client Change it to the actual docker image name on your docker registry. For example: cndbtier-mysqlndb-client.
/db-replication-svc/initcontainer/image/tag Version for the docker image of the MySQL NDB client. 23.4.7 Change it to the actual version of the docker image. For example: 23.4.7.
/db-replication-svc/InitContainersResources/limits/ephemeral-storage The maximum limit of ephemeral storage size allocated for the mysqlndbclient. 1Gi Ephemeral storage limit for the mysqlndbclient.
/db-replication-svc/InitContainersResources/requests/ephemeral-storage Required ephemeral storage size allocated for the mysqlndbclient. 90Mi Ephemeral storage allotment for the mysqlndbclient.
/db-replication-svc/enableInitContainerForIpDiscovery Enable discovering the LoadBalancer IP addresses for the ndbmysqldsvc-0, ndbmysqldsvc-1, .., ndbmysqldsvc-n LoadBalancer services. true

Enables the db_replication_svc pod to discover the LoadBalancer IP addresses of the ndbmysqldsvc-0, ndbmysqldsvc-1, .., ndbmysqldsvc-n LoadBalancer services.

Set this value to true if the Kubernetes service type of ndbmysqldsvc-0, ndbmysqldsvc-1, .., ndbmysqldsvc-n is "LoadBalancer"; the db_replication_svc pods then obtain the external IPs from these services.

Set this value to false if the Kubernetes service type of ndbmysqldsvc-0, ndbmysqldsvc-1, .., ndbmysqldsvc-n is "LoadBalancer" and the external IPs are assigned to these services by external load balancers (for example, F5 load balancer).
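
For example, when an external load balancer such as F5 assigns the external IPs to the ndbmysqldsvc services, IP discovery can be turned off with a sketch like:

db-replication-svc:
  enableInitContainerForIpDiscovery: false   # external IPs are assigned by an external load balancer
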

3.7 DB Monitor Service Parameters

The following table provides a list of database monitor service parameters.

Table 3-7 DB Monitor Service Parameters

Parameter Description Default Value Notes
/db-monitor-svc/schedulertimer The frequency (in milliseconds) at which the monitor service must check whether the binlog injector thread in every replication SQL node is stalled. 5000 The default value is 5000 milliseconds (5 seconds). This means that, every five seconds, the DB monitor service checks every replication SQL node to see if the binlog injector thread is stalled.
/db-monitor-svc/onDemandFetchApproach Indicates if the on-demand metrics fetch approach is enabled or disabled in the monitor service. true When this parameter is set to true, the system fetches the metrics on demand. When set to false, the system fetches metrics using a cached approach with the help of a scheduler.
/db-monitor-svc/binlogthreadstore/capacity The capacity up to which you want to store and track the binlog position changes with respect to the binlog injector tracker. 5 The default value is 5. This means that the previous five binlog position changes with respect to the binlog injector are stored. These values are compared to determine whether the position is changing. If the position is not changing, the binlog injector is stalled.
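
For example, the following is a sketch of the monitor service polling configuration using the documented defaults:

db-monitor-svc:
  schedulertimer: 5000          # check the binlog injector threads every 5000 ms (5 s)
  onDemandFetchApproach: true   # fetch metrics on demand instead of through the cached scheduler
  binlogthreadstore:
    capacity: 5                 # compare the last 5 binlog positions to detect a stalled injector
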
/db-monitor-svc/image/repository The name of the docker image of the DB monitor service. db_monitor_svc Change this value to the actual docker image path on your docker registry. For example: db_monitor_svc.
/db-monitor-svc/image/tag Version for the docker image of the DB monitor service. 23.4.7 Change it to the version of the docker image. For example, 23.4.7.
/db-monitor-svc/nodeSelector List of node selector labels that correspond to the Kubernetes nodes where the db-monitor-svc pods must be scheduled. {} Use this parameter to configure the worker node labels if you want to use the node selector labels for scheduling the pods. For example, if you want the monitor service pods to be scheduled on worker node with label nodetype=monitorsvc, then nodeSelector must be configured as follows:
nodeSelector: 
- nodetype: monitorsvc

nodeSelector is disabled if this parameter is passed empty.

/db-monitor-svc/nodeAffinity/enable Indicates if node affinity is enabled. false Set this parameter to true if you want to enable and use node affinity rules for scheduling the pods.
/db-monitor-svc/nodeAffinity/requiredDuringScheduling/enable Indicates if hard node affinity rules are enabled. true When this parameter is set to true, pods are scheduled only when the node affinity rules are met and are not scheduled otherwise.
/db-monitor-svc/nodeAffinity/requiredDuringScheduling/affinitykeyvalues List of node affinity rules.
- keyname: custom_key
   keyvalues: 
     - customvalue1
     - customvalue2
Configure keyname and keyvalues with the key and value of the label of the worker node where the monitor service pods must be scheduled.
For example, if you want the monitor service pods to be scheduled on the worker node with label topology.kubernetes.io/zone=antarctica-east1 or topology.kubernetes.io/zone=antarctica-east2, then affinitykeyvalues must be configured as follows:
affinitykeyvalues:
      - keyname: topology.kubernetes.io/zone
        keyvalues: 
        - antarctica-east1
        - antarctica-east2
/db-monitor-svc/nodeAffinity/preferredDuringScheduling/enable Indicates if soft node affinity rules are enabled. false When this parameter is set to true, the scheduler tries to schedule the pods on the worker nodes that meet the affinity rules. However, if none of the worker nodes meets the affinity rules, the pods get scheduled on other worker nodes.

Note: You can enable only one of preferredDuringScheduling or requiredDuringScheduling at a time.

/db-monitor-svc/nodeAffinity/preferredDuringScheduling/expressions List of node affinity rules.
- weight: 1
  affinitykeyvalues:
  - keyname: custom_key
    keyvalues: 
    - customvalue1
    - customvalue2
Configure keyname and keyvalues with the key and value of the label of the worker node where monitor service pods are preferred to be scheduled.

The value of weight can range between 1 and 100. The higher the value of weight, the higher the preference of the rule while scheduling the monitor service pods.

For example, if you want the monitor service pods to be scheduled on worker node with label topology.kubernetes.io/zone=antarctica-east1 or topology.kubernetes.io/zone=antarctica-east2, then configure the affinitykeyvalues as follows:
- weight: 1
  affinitykeyvalues:
  - keyname: topology.kubernetes.io/zone
    keyvalues: 
    - antarctica-east1
    - antarctica-east2
Refer to the following example if you want to configure multiple soft node affinity rules with different weights:
- weight: 30
  affinitykeyvalues:
  - keyname: topology.kubernetes.io/zone
    keyvalues: 
    - antarctica-east1
    - antarctica-east2
- weight: 80
  affinitykeyvalues:
  - keyname: nodetype
    keyvalues: 
    - dbtierservice
    - monitor

In this case, more preference is given to the worker nodes with label matching to node type dbtierservice or monitor as this rule has a greater value for weight.

/db-monitor-svc/resources/limits/ephemeral-storage Maximum ephemeral storage limit allocated for the db-monitor-svc pod. 1Gi Ephemeral storage limit for each db-monitor-svc pod.
/db-monitor-svc/resources/requests/ephemeral-storage Required ephemeral storage size allocated for the db-monitor-svc pod. 90Mi Ephemeral storage allotment for each db-monitor-svc pod.
/db-monitor-svc/restartSQLNodesIfBinlogThreadStalled Indicates if the SQL node is restarted when the binlog threads stall. true If this parameter is set to true, the monitor service checks if any ndbapp or ndbmysqld pod binlog thread is stalled and restarts that pod.

3.8 DB Backup Manager Service Parameters

The following table provides a list of database backup manager service parameters.

Table 3-8 DB Backup Manager Service Parameters

Parameter Description Default Value Notes
/db-backup-manager-svc/scheduler/cronjobExpression The scheduled time at which the backup service must be run. 0 0 */7 * * By default, the backup service is run once in every seven days. Configure this parameter as per your requirement. For example, if you want to run the backup service once in every two days, then the value must be set to "0 0 */2 * *".
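
For example, to run the backup service once every two days instead of the default seven, configure the scheduler as follows (a sketch using the cron expression from the note above):

db-backup-manager-svc:
  scheduler:
    cronjobExpression: "0 0 */2 * *"   # run at midnight every second day
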
/db-backup-manager-svc/deletePurgedRecords/enabled Indicates if old purged backup record entries are deleted from backup_info.DBTIER_BACKUP_INFO. true Set this parameter to false if you don't want to delete the database entries of purged backups.

Set this parameter to true if you want to delete the database entries of purged backups older than the number of days specified in /db-backup-manager-svc/deletePurgedRecords/retainPurgedBackupForDays.

/db-backup-manager-svc/deletePurgedRecords/schedulerInterval Defines the scheduler interval (in days) in which the scheduler checks if there are purged entries to be deleted from backup_info.DBTIER_BACKUP_INFO table. 1 Set this parameter to the interval (in days) in which you want to run the scheduler to check if there are purged entries to be deleted.

For example, setting this parameter to 2 indicates that the scheduler is run every two days to check if there are purged entries to be deleted.

/db-backup-manager-svc/deletePurgedRecords/retainPurgedBackupForDays Indicates the number of days for which the purged backup records are retained. 30 Set this parameter to the number of days for which you want to retain the purged backup entries in database tables.

By default, the database tables retain the purged backup entries for 30 days. The entries that are older than 30 days are deleted if /db-backup-manager-svc/deletePurgedRecords/enabled is set to true.
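
Combining the three purge-record parameters, the following sketch checks daily and deletes purged backup entries that are older than 30 days (the documented defaults):

db-backup-manager-svc:
  deletePurgedRecords:
    enabled: true                   # delete database entries of purged backups
    schedulerInterval: 1            # run the purge check once a day
    retainPurgedBackupForDays: 30   # retain purged entries for 30 days before deletion
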

/db-backup-manager-svc/nodeSelector List of node selector labels that correspond to the Kubernetes nodes where the db-backup-manager-svc pods must be scheduled. {} Use this parameter to configure the worker node labels if you want to use the node selector labels for scheduling the pods. For example, if you want the db-backup-manager-svc pods to be scheduled on worker node with label nodetype=backupmgrsvc, then nodeSelector must be configured as follows:
nodeSelector: 
- nodetype: backupmgrsvc

nodeSelector is disabled if this parameter is passed empty.

/db-backup-manager-svc/nodeAffinity/enable Indicates if node affinity is enabled. false Set this parameter to true if you want to enable and use node affinity rules for scheduling the pods.
/db-backup-manager-svc/nodeAffinity/requiredDuringScheduling/enable Indicates if hard node affinity rules are enabled. true When this parameter is set to true, pods are scheduled only when the node affinity rules are met and are not scheduled otherwise.
/db-backup-manager-svc/nodeAffinity/requiredDuringScheduling/affinitykeyvalues List of node affinity rules.
- keyname: custom_key
   keyvalues: 
     - customvalue1
     - customvalue2
Configure keyname and keyvalues with the key and value of the label of the worker node where the backup manager service pods must be scheduled.
For example, if you want the backup manager service pods to be scheduled on the worker node with label topology.kubernetes.io/zone=antarctica-east1 or topology.kubernetes.io/zone=antarctica-east2, then affinitykeyvalues must be configured as follows:
affinitykeyvalues:
      - keyname: topology.kubernetes.io/zone
        keyvalues: 
        - antarctica-east1
        - antarctica-east2
/db-backup-manager-svc/nodeAffinity/preferredDuringScheduling/enable Indicates if soft node affinity rules are enabled. false When this parameter is set to true, the scheduler tries to schedule the pods on the worker nodes that meet the affinity rules. However, if none of the worker nodes meets the affinity rules, the pods get scheduled on other worker nodes.

Note: You can enable only one of preferredDuringScheduling or requiredDuringScheduling at a time.

/db-backup-manager-svc/nodeAffinity/preferredDuringScheduling/expressions List of node affinity rules.
- weight: 1
  affinitykeyvalues:
  - keyname: custom_key
    keyvalues: 
    - customvalue1
    - customvalue2
Configure keyname and keyvalues with the key and value of the label of the worker node where backup manager service pods are preferred to be scheduled.

The value of weight can range between 1 and 100. The higher the value of weight, the higher the preference of the rule while scheduling the pods.

For example, if you want the backup manager service pods to be scheduled on worker node with label topology.kubernetes.io/zone=antarctica-east1 or topology.kubernetes.io/zone=antarctica-east2, then configure the affinitykeyvalues as follows:
- weight: 1
  affinitykeyvalues:
  - keyname: topology.kubernetes.io/zone
    keyvalues: 
    - antarctica-east1
    - antarctica-east2
Refer to the following example if you want to configure multiple soft node affinity rules with different weights:
- weight: 30
  affinitykeyvalues:
  - keyname: topology.kubernetes.io/zone
    keyvalues: 
    - antarctica-east1
    - antarctica-east2
- weight: 80
  affinitykeyvalues:
  - keyname: nodetype
    keyvalues: 
    - dbtierservice
    - backup

In this case, more preference is given to the worker nodes with label matching to node type dbtierservice or backup as this rule has a greater value for weight.

/db-backup-manager-svc/image/repository Name of the docker image of db backup manager service. db_backup_manager_svc Change it to the docker image path on your docker registry. For example, db_backup_manager_svc.
/db-backup-manager-svc/image/tag Version for the docker image of the db backup manager service. 23.4.7 Change it to the version of the docker image. For example, 23.4.7.
/db-backup-manager-svc/resources/limits/ephemeral-storage Maximum ephemeral storage limit allocated for the db-backup-manager-svc pod. 1Gi Ephemeral storage limit for each db-backup-manager-svc pod.
/db-backup-manager-svc/resources/requests/ephemeral-storage Required ephemeral storage size allocated for the db-backup-manager-svc pod. 90Mi Ephemeral storage allotment for each db-backup-manager-svc pod.

3.9 Post Upgrade Job Parameters

The following table provides a list of parameters to be configured for post upgrade.

Table 3-9 Post Upgrade Job Parameters

Parameter Description Default Value Notes
/postUpgradeJob/image/repository Name of the docker image of the postUpgradeJob service. cndbtier-mysqlndb-client Change it to the actual docker image name on your docker registry.

For example: cndbtier-mysqlndb-client

/postUpgradeJob/image/tag Version for the docker image of the postUpgradeJob service. 23.4.7 Change it to the actual version of the docker image.

For example: 23.4.7

/postUpgradeJob/resources/limits/ephemeral-storage Maximum ephemeral storage limit allocated for the postUpgradeJob pod. 1Gi Ephemeral storage limit for each postUpgradeJob pod.
/postUpgradeJob/resources/requests/ephemeral-storage Required ephemeral storage size allocated for the postUpgradeJob pod. 90Mi Ephemeral storage allotment for each postUpgradeJob pod.

3.10 Preupgrade Job Parameters

The following table provides a list of parameters to be configured for preupgrade.

Table 3-10 Preupgrade Job Parameters

Parameter Description Default Value Notes
/preUpgradeJob/image/repository Name of the docker image of the preUpgradeJob service. cndbtier-mysqlndb-client Change it to the actual docker image name on your docker registry.

For example: cndbtier-mysqlndb-client

/preUpgradeJob/image/tag Version for the docker image of the preUpgradeJob service. 23.4.7 Change it to the actual version of the docker image.

For example: 23.4.7

/preUpgradeJob/resources/limits/ephemeral-storage Maximum ephemeral storage limit allocated for the preUpgradeJob pod. 1Gi Ephemeral storage limit for each preUpgradeJob pod.
/preUpgradeJob/resources/requests/ephemeral-storage Required ephemeral storage size allocated for the preUpgradeJob pod. 90Mi Ephemeral storage allotment for each preUpgradeJob pod.