3 Customizing cnDBTier

This section describes the configuration parameters required during the installation of cnDBTier.

Note:

This section contains only the frequently used cnDBTier parameters that are available in the custom_values.yaml file. The custom_values.yaml file contains additional MySQL parameters that can be configured as per your requirements. For a detailed list of MySQL configurations and their usage, refer to the MySQL Documentation.

3.1 Global Parameters

The following table provides a list of global parameters.

Table 3-1 Global Parameters

Parameter Description Default Value Notes
/global/repository Repository URL for cnDBTier images. docker_repo:5000/occne Change it to the path of your docker registry. For example, occne-repo-host:5000/occne.
/global/siteid The ID of the CNE cluster site you are going to install. 1

Each site must be assigned a unique site identifier. Site IDs must be assigned as 1, 2, 3, and 4 for site 1, site 2, site 3, and site 4 respectively (see the sample configuration after this list).

For example:
  • For a single site, the site ID must be given as 1.
  • For a two site replication setup, the site IDs must be given as 1 and 2 for site 1 and site 2 respectively.
  • For a three site replication setup, the site IDs must be given as 1, 2, and 3 for site 1, site 2, and site 3 respectively.
  • For a four site replication setup, the site IDs must be given as 1, 2, 3, and 4 for site 1, site 2, site 3, and site 4 respectively.
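For instance, a minimal sketch of the site identity settings in custom_values.yaml for site 2 of a two-site deployment (the registry path and site name are illustrative and must match your environment):

    global:
      repository: occne-repo-host:5000/occne
      siteid: 2
      sitename: cndbtier-site2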
/global/sitename The name of the CNE cluster site you are going to install. cndbtiersitename This parameter must be set to the name of the current cluster.
/global/image/tag Indicates the docker image version of mysqlndbcluster used for launching MySQL NDB cluster. 24.3.1 Change this parameter to the version of the docker image. For example, 24.3.1.
/global/image/imagePullPolicy The image pull policy for the cnDBTier helm chart. IfNotPresent NA
/global/mgmReplicaCount Count of the MySQL management nodes created in the MySQL NDB cluster. 2 This parameter defines the number of management nodes in the cluster.
/global/ndbReplicaCount Count of the MySQL data nodes created in the MySQL NDB cluster. 4 This parameter must be set to an even value. For example, 2, 4, 6.
/global/apiReplicaCount Count of the MySQL nodes that are participating in georeplication created in the MySQL NDB cluster. 2

This parameter defines the number of SQL nodes that are participating in georeplication.

In case of a standalone site (one site without replication), no georeplication SQL nodes are required. In this case, set this parameter to 0.

In case of two-site replication, the minimum SQL nodes required is 2.

In case of three-site replication, the minimum SQL nodes required is 4.

/global/ndbappReplicaCount Count of the MySQL nodes that are not participating in georeplication created in the MySQL NDB cluster. 2 This parameter defines the number of SQL nodes in the cluster that are used by the NFs and do not participate in georeplication.
/global/ndbappReplicaMaxCount Maximum count of the MySQL non-georeplication SQL nodes that can be automatically scaled by the horizontal pod autoscaler. 4 This value should be greater than '/global/ndbappReplicaCount'. A sample of the replica count settings is shown below.
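For example, a sketch of the replica counts for a two-site replication setup in custom_values.yaml (values are illustrative and must match your dimensioning):

    global:
      mgmReplicaCount: 2
      ndbReplicaCount: 4
      apiReplicaCount: 2
      ndbappReplicaCount: 2
      ndbappReplicaMaxCount: 4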
/global/domain The cluster name of the Kubernetes cluster on which cnDBTier is deployed. cluster.local Set this parameter to the name of the Kubernetes cluster on which cnDBTier is installed. For example, occne1-cgbu-cne-dbtier.
/global/namespace The Kubernetes namespace in which the cnDBTier is deployed. occne-cndbtier NA
/global/storageClassName The storage class used for allocating the PV. occne-dbtier-sc By default, occne-dbtier-sc is the storage class. It can be changed to any storage class name that is currently configured in the cluster.

/global/useasm Indicates if the Aspen Mesh service is enabled in the namespace or not. false Set this parameter to true if the Aspen Mesh service is enabled in the namespace.
/global/tls/enable Indicates if Transport Layer Security (TLS) is enabled for replication. false Set this parameter to true if you want to enable TLS for replication.
/global/tls/caCertificate When TLS is enabled, this parameter provides the name of the CA certificate that is configured in the "cndbtier-trust-store-secret" secret. "" Use this parameter to configure the CA certificate when TLS is enabled.
/global/tls/tlsversion When TLS is enabled, this parameter defines the TLS version that must be used. TLSv1.3 Set this parameter to a valid TLS version that must be used for encrypting the connection between replication channels.
/global/tls/tlsMode When TLS is enabled, this parameter defines the TLS mode that must be used. VERIFY_CA Set this parameter to a valid TLS mode that must be used for replication.

For example: VERIFY_CA, VERIFY_IDENTITY, or NONE.

/global/tls/ciphers When TLS is enabled, this parameter defines the ciphers that are used for replication.
- TLS_AES_128_GCM_SHA256
- TLS_AES_256_GCM_SHA384
- TLS_CHACHA20_POLY1305_SHA256
- TLS_AES_128_CCM_SHA256
List the valid TLS ciphers that can be used for replication. The server selects the first cipher that it supports from the list of ciphers provided.
/global/tls/certificates[0]/serverCertificate When TLS is enabled, this parameter defines the name of the server certificate that is configured in the "cndbtier-server-secret" secret for ndbmysqld-0. "" When TLS is enabled, use this parameter to configure the server certificate for ndbmysqld-0.
/global/tls/certificates[0]/serverCertificateKey When TLS is enabled, this parameter defines the name of the server certificate key that is configured in the "cndbtier-server-secret" secret for ndbmysqld-0. "" When TLS is enabled, use this parameter to configure the server certificate key for ndbmysqld-0.
/global/tls/certificates[0]/clientCertificate When TLS is enabled, this parameter defines the name of the client certificate that is configured in the "cndbtier-client-secret" secret for ndbmysqld-0. "" When TLS is enabled, use this parameter to configure the client certificate for ndbmysqld-0.
/global/tls/certificates[0]/clientCertificateKey When TLS is enabled, this parameter defines the name of the client certificate key that is configured in the "cndbtier-client-secret" secret for ndbmysqld-0. "" When TLS is enabled, use this parameter to configure the client certificate key for ndbmysqld-0.
/global/tls/certificates[1]/serverCertificate When TLS is enabled, this parameter defines the name of the server certificate that is configured in the "cndbtier-server-secret" secret for ndbmysqld-1. "" When TLS is enabled, use this parameter to configure the server certificate for ndbmysqld-1.
/global/tls/certificates[1]/serverCertificateKey When TLS is enabled, this parameter defines the name of the server certificate key that is configured in the "cndbtier-server-secret" secret for ndbmysqld-1. "" When TLS is enabled, use this parameter to configure the server certificate key for ndbmysqld-1.
/global/tls/certificates[1]/clientCertificate When TLS is enabled, this parameter defines the name of the client certificate that is configured in the "cndbtier-client-secret" secret for ndbmysqld-1. "" When TLS is enabled, use this parameter to configure the client certificate for ndbmysqld-1.
/global/tls/certificates[1]/clientCertificateKey When TLS is enabled, this parameter defines the name of the client certificate key that is configured in the "cndbtier-client-secret" secret for ndbmysqld-1. "" When TLS is enabled, use this parameter to configure the client certificate key for ndbmysqld-1.
/global/tls/certificates[2]/serverCertificate When TLS is enabled, this parameter defines the name of the server certificate that is configured in the "cndbtier-server-secret" secret for ndbmysqld-2. "" When TLS is enabled, use this parameter to configure the server certificate for ndbmysqld-2.
/global/tls/certificates[2]/serverCertificateKey When TLS is enabled, this parameter defines the name of the server certificate key that is configured in the "cndbtier-server-secret" secret for ndbmysqld-2. "" When TLS is enabled, use this parameter to configure the server certificate key for ndbmysqld-2.
/global/tls/certificates[2]/clientCertificate When TLS is enabled, this parameter defines the name of the client certificate that is configured in the "cndbtier-client-secret" secret for ndbmysqld-2. "" When TLS is enabled, use this parameter to configure the client certificate for ndbmysqld-2.
/global/tls/certificates[2]/clientCertificateKey When TLS is enabled, this parameter defines the name of the client certificate key that is configured in the "cndbtier-client-secret" secret for ndbmysqld-2. "" When TLS is enabled, use this parameter to configure the client certificate key for ndbmysqld-2.
/global/tls/certificates[3]/serverCertificate When TLS is enabled, this parameter defines the name of the server certificate that is configured in the "cndbtier-server-secret" secret for ndbmysqld-3. "" When TLS is enabled, use this parameter to configure the server certificate for ndbmysqld-3.
/global/tls/certificates[3]/serverCertificateKey When TLS is enabled, this parameter defines the name of the server certificate key that is configured in the "cndbtier-server-secret" secret for ndbmysqld-3. "" When TLS is enabled, use this parameter to configure the server certificate key for ndbmysqld-3.
/global/tls/certificates[3]/clientCertificate When TLS is enabled, this parameter defines the name of the client certificate that is configured in the "cndbtier-client-secret" secret for ndbmysqld-3. "" When TLS is enabled, use this parameter to configure the client certificate for ndbmysqld-3.
/global/tls/certificates[3]/clientCertificateKey When TLS is enabled, this parameter defines the name of the client certificate key that is configured in the "cndbtier-client-secret" secret for ndbmysqld-3. "" When TLS is enabled, use this parameter to configure the client certificate key for ndbmysqld-3.
/global/tls/certificates[4]/serverCertificate When TLS is enabled, this parameter defines the name of the server certificate that is configured in the "cndbtier-server-secret" secret for ndbmysqld-4. "" When TLS is enabled, use this parameter to configure the server certificate for ndbmysqld-4.
/global/tls/certificates[4]/serverCertificateKey When TLS is enabled, this parameter defines the name of the server certificate key that is configured in the "cndbtier-server-secret" secret for ndbmysqld-4. "" When TLS is enabled, use this parameter to configure the server certificate key for ndbmysqld-4.
/global/tls/certificates[4]/clientCertificate When TLS is enabled, this parameter defines the name of the client certificate that is configured in the "cndbtier-client-secret" secret for ndbmysqld-4. "" When TLS is enabled, use this parameter to configure the client certificate for ndbmysqld-4.
/global/tls/certificates[4]/clientCertificateKey When TLS is enabled, this parameter defines the name of the client certificate key that is configured in the "cndbtier-client-secret" secret for ndbmysqld-4. "" When TLS is enabled, use this parameter to configure the client certificate key for ndbmysqld-4.
/global/tls/certificates[5]/serverCertificate When TLS is enabled, this parameter defines the name of the server certificate that is configured in the "cndbtier-server-secret" secret for ndbmysqld-5. "" When TLS is enabled, use this parameter to configure the server certificate for ndbmysqld-5.
/global/tls/certificates[5]/serverCertificateKey When TLS is enabled, this parameter defines the name of the server certificate key that is configured in the "cndbtier-server-secret" secret for ndbmysqld-5. "" When TLS is enabled, use this parameter to configure the server certificate key for ndbmysqld-5.
/global/tls/certificates[5]/clientCertificate When TLS is enabled, this parameter defines the name of the client certificate that is configured in the "cndbtier-client-secret" secret for ndbmysqld-5. "" When TLS is enabled, use this parameter to configure the client certificate for ndbmysqld-5.
/global/tls/certificates[5]/clientCertificateKey When TLS is enabled, this parameter defines the name of the client certificate key that is configured in the "cndbtier-client-secret" secret for ndbmysqld-5. "" When TLS is enabled, use this parameter to configure the client certificate key for ndbmysqld-5.

Note: If you have a multichannel replication group setup with more ndbmysqld pods, configure the certificates similarly by appending entries to the certificates list for each pod, as in the sketch below.
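For reference, a minimal sketch of the TLS block in custom_values.yaml with one certificate entry (all file names are illustrative and must match the names configured in the corresponding secrets):

    global:
      tls:
        enable: true
        caCertificate: "ca-cert.pem"
        tlsversion: "TLSv1.3"
        tlsMode: "VERIFY_CA"
        certificates:
          - serverCertificate: "ndbmysqld0-server-cert.pem"
            serverCertificateKey: "ndbmysqld0-server-key.pem"
            clientCertificate: "ndbmysqld0-client-cert.pem"
            clientCertificateKey: "ndbmysqld0-client-key.pem"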

/global/useIPv6 Indicates if Kubernetes is used in IPv6-only or in dual-stack mode. false Set this parameter to true if the Kubernetes cluster runs in IPv6-only or dual-stack mode.
/global/useVCNEEgress Indicates if Kubernetes supports Egress and requires adding specific Egress annotations to cnDBTier. false Set this parameter to true if Kubernetes supports Egress and specific Egress annotations must be added to the cnDBTier Helm chart.
/global/version The version of the cnDBTier helm chart. 24.3.1 This parameter is set to the current version of the cnDBTier helm chart.
/global/autoscaling/ndbapp/enabled Indicates if autoscaling is enabled or disabled for the non-georeplication SQL pods. false Set this parameter to true if you want to enable autoscaling for the non-georeplication SQL pods; otherwise, set it to false. If set to true, cnDBTier needs a service account for autoscaling. Either enable the "/global/serviceAccount/create" configuration or provide the name of an existing service account in the "/global/serviceAccount/name" configuration.
/global/inframonitor/enable Indicates if PVC health monitoring is enabled for all the cnDBTier components. true When this parameter is set to true, the system enables PVC health monitoring for all the cnDBTier pods that are attached with pvc.
/global/inframonitor/pvchealth/mgm Indicates if PVC health monitoring is enabled for the mgm pods. true When this parameter is set to true, the system enables PVC health monitoring for the mgm pods.
/global/inframonitor/pvchealth/ndb Indicates if PVC health monitoring is enabled for the ndb pods. true When this parameter is set to true, the system enables PVC health monitoring for the ndb pods.
/global/inframonitor/pvchealth/api Indicates if PVC health monitoring is enabled for the API pods. true When this parameter is set to true, the system enables PVC health monitoring for the API pods.
/global/multus/enable Indicates if multus support for cnDBTier is enabled. false Set this parameter to true if you want to support multus with cnDBTier.
/global/multus/serviceAccount/create If this parameter is set to true, the system creates the service account needed for multus. true Set this parameter to false if you do not want to create the service account needed for multus.
/global/multus/serviceAccount/name If a service account name is defined using this parameter, then the system creates the service account using the given name. If service account is already created, then the system uses the same service account. "cndbtier-multus-serviceaccount" If the serviceAccount.create is true and the name is given, then the service account is created with the given name. If the serviceAccount.create is false and the name is given, then the cnDBTier assumes that the service account is already created and it uses the same service account instead of creating a new one.
/global/networkpolicy/enabled Indicates if network policy is enabled for cnDBTier. false Set this parameter to true if you want to enable network policy.
/global/backupencryption/enable Indicates if backup encryption is enabled. If this parameter is set to true, the system encrypts all the backups using the encryption password stored in the occne-backup-encryption-secret secret. true Set this parameter to true if you want to encrypt the backups.

Before setting this parameter to true, follow either step 1 or step 5 of Creating Secrets to create the backup encryption secret.

/global/backupencryption/backupencryptionsecret Specifies the name of the backup encryption secret that holds the encryption password for backups. occne-backup-encryption-secret NA
/global/remotetransfer/enable Indicates if secure transfer of backup is enabled. If this parameter is set to true, then the system transfers all the backups including the routine and on-demand backups to a remote server. However, the system doesn't transfer the georeplication recovery backups. false Set this parameter to true if you want to secure the transfer of backups to a remote server.
/global/remotetransfer/faultrecoverybackuptransfer Indicates if georeplication recovery backups are transferred to a remote server. true This parameter has no effect if /global/remotetransfer/enable is set to false.

If this parameter is set to true along with /global/remotetransfer/enable, then the system transfers all the georeplication recovery backups to a remote server.

If /global/remotetransfer/enable is set to true and this parameter is set to false, the system doesn't transfer any of the georeplication recovery backups to a remote server.

/global/remotetransfer/remoteserverip The IP address of the remote server where the backups are transferred. "" Configure this parameter with the IP address of the remote SFTP server where backups are to be transferred and stored in a ZIP format.
/global/remotetransfer/remoteserverport The port number of the remote SFTP server where the backups are transferred. "" Configure this parameter with the port number of the remote SFTP server where backups are to be transferred and stored in a ZIP format.
/global/remotetransfer/remoteserverpath The path in the remote SFTP server where the backups are stored. "" Configure this parameter with the path (location) in the remote SFTP server where backups are to be transferred and stored in a ZIP format.
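For example, a sketch of the remote transfer settings in custom_values.yaml (the server IP address, port, and path are illustrative placeholders for your SFTP server):

    global:
      remotetransfer:
        enable: true
        faultrecoverybackuptransfer: true
        remoteserverip: "10.0.0.50"
        remoteserverport: "22"
        remoteserverpath: "/backups/cndbtier"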
/global/serviceAccount/create If set to true, then cnDBTier creates a service account. true NA
/global/serviceAccount/name If set to any value other than an empty string, cnDBTier either uses the same name while creating the service account or uses the existing service account with the given name. ""
  • If global.serviceAccount.create=true and global.serviceAccount.name="mysql-cluster-service-reader", then cnDBTier creates a service account with the name "mysql-cluster-service-reader". (If a service account with the name "mysql-cluster-service-reader" already exists, then it throws an error.)
  • If global.serviceAccount.create=false and global.serviceAccount.name="mysql-cluster-service-reader", then cnDBTier uses the existing service account with the name "mysql-cluster-service-reader". (If a service account with the name "mysql-cluster-service-reader" does not exist, then it throws an error.)
/global/serviceAccountForUpgrade/create If the value is set to true, then the system creates a service account that is used by the post-upgrade and pre-upgrade hook during the cnDBTier upgrade. true Set this parameter to true if you want to create a service account for upgrade.
/global/serviceAccountForUpgrade/name Indicates the name of the service account. "cndbtier-upgrade-serviceaccount"
  • If global.serviceAccountForUpgrade.create=true and global.serviceAccountForUpgrade.name="", then cnDBTier creates a service account with the default name.
  • If global.serviceAccountForUpgrade.create=true and global.serviceAccountForUpgrade.name="<name>", then cnDBTier creates a service account with the given name.
  • If global.serviceAccountForUpgrade.create=false and global.serviceAccountForUpgrade.name="<name>", then cnDBTier does not create a service account and assumes there is an existing service account with <name>, which it uses for the upgrade operation.
  • If global.serviceAccountForUpgrade.create=false and global.serviceAccountForUpgrade.name="", then cnDBTier does not create a service account and does not use any service account for the upgrade.
/global/automountServiceAccountToken If set to true, then the system mounts the token of default service account to pods. false If this parameter is set to false, the token of the default service account will not be mounted on the pods.

If this parameter is set to true, the token of the default service account will be mounted on the pods.

Note: Setting this parameter to true can cause a security issue because the token of the default service account is mounted to the pod.

/global/prometheusOperator/alerts/enable Indicates if the Prometheus alert rules are loaded to the prometheusOperator environment automatically or manually. false
  • Set the value to true, if you want to load Prometheus alert rules automatically.
  • Set the value to false, if you want to load the Prometheus alert rules manually or use Prometheus configmap.
/global/k8sResource/container/prefix Kubernetes resource added as a prefix to container. ""

By default, no prefix or suffix is added.

/global/k8sResource/pod/prefix Kubernetes resource added as a prefix to pod. ""

By default, no prefix or suffix is added.

/global/https/enable Indicates whether HTTPS mode is enabled or disabled. false Set this parameter to true if you want to use HTTPS instead of HTTP. Follow the steps mentioned in Creating Secrets before setting this parameter to true.
/global/encryption/enable Indicates whether the passwords stored in the database can be encrypted. false Set this parameter to true if you want to encrypt the passwords stored in the database. Follow the steps mentioned in Creating Secrets before setting this parameter to true.
/global/commonlabels Indicates the common labels for all containers. "" Set this parameter if you want to add common labels to all containers.
/global/use_affinity_rules Indicates whether or not to use affinity rules. true Turn off the affinity rules when the MySQL cluster runs on small Kubernetes clusters with fewer than the required number of Kubernetes nodes, for example, when testing on small systems.
/global/additionalndbconfigurations/mgm/HeartbeatIntervalMgmdMgmd Specifies the heartbeat interval time between mgm nodes (in milliseconds). 2000 Specify the interval between heartbeat messages that is used to determine whether another management node is in contact with the current one.
/global/additionalndbconfigurations/mgm/TotalSendBufferMemory Specifies the total amount of memory allocated on the node for shared send buffer memory among all configured transporters. 16M NA
/global/additionalndbconfigurations/ndb/__TransactionErrorLogLevel This parameter controls the logging behavior for transaction-related errors. It determines the level of detail and mode of the logs associated with transaction timeouts and other related errors at the data node level. 0x0000 This parameter uses a composite value with several optional bits to specify when and what to log.
The last bit indicates the frequency at which errors are to be logged:
  • 0x0000: Do not log at all.
  • 0x0001: Always log.
  • 0x0002: Log only for deferred constraints.
The third last bit indicates the type of errors to be logged:
  • 0x0100: Log transaction aborts.
  • 0x0200: Log transaction timeouts (TC).
  • 0x0400: Log transaction timeouts (TC+LDM).
For example (see also the configuration sketch after this list):
  • 0x0000: Indicates that the system doesn't log any errors.
  • 0x101: Indicates that the system always logs transaction aborts.
  • 0x301: Indicates that the system always logs transaction aborts and timeouts at TC.
  • 0x701: Indicates that the system always logs transaction aborts, timeouts at TC, and timeouts at TC+LDM.
  • 0x702: Indicates that the system logs transaction aborts, timeouts at TC, and timeouts at TC+LDM only when a transaction uses deferred constraints.
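As a sketch, assuming you want the system to always log transaction aborts and TC timeouts: combining the bits 0x0001 (always log), 0x0100 (log aborts), and 0x0200 (log TC timeouts) gives 0x0301, which you would set in custom_values.yaml as:

    global:
      additionalndbconfigurations:
        ndb:
          __TransactionErrorLogLevel: 0x0301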
/global/additionalndbconfigurations/ndb/TotalSendBufferMemory Specifies the total amount of memory allocated on the node on which it is set for use among all configured transporters. 32M If this parameter is set, then its minimum value must be 256KB and the maximum value must be 4294967039 bytes.
/global/additionalndbconfigurations/ndb/CompressedLCP Indicates if compression is enabled for local checkpoint files. true NA
/global/additionalndbconfigurations/ndb/TransactionDeadlockDetectionTimeout This parameter sets the amount of time a transaction can spend running within a data node. 1200 The value of this parameter must be maintained at the default value of 1200 and must not be set to a value greater than 2750. Setting a value greater than 2750 can hamper the troubleshooting activity with misleading log entries, and cause connection abort.
/global/additionalndbconfigurations/ndb/HeartbeatIntervalDbDb Specifies how often the heartbeat signals are sent and how often the system can expect to receive them. 1250 A node is declared dead after missing four heartbeat intervals in a row. Therefore, the maximum time for discovering a failure through the heartbeat mechanism is five times the heartbeat interval.
/global/additionalndbconfigurations/ndb/ConnectCheckIntervalDelay Enables connection checking between data nodes after one of the data nodes fails heartbeat checks for 5 intervals of up to HeartbeatIntervalDbDb milliseconds. Such a data node that further fails to respond within an interval of ConnectCheckIntervalDelay milliseconds is considered as a suspect and is considered dead after two such intervals. 0 The default value 0 indicates that the connection checking is disabled.
/global/additionalndbconfigurations/ndb/LockPagesInMainMemory This flag locks a process into memory, preventing any swapping to disk. This helps to ensure the cluster's real-time performance. 0 The possible values are 0, 1, and 2, where:
  • 0, indicates that the locking is disabled.
  • 1, indicates that the locking is done after allocating memory for the process.
  • 2, indicates that the locking is done before allocating memory for the process.
/global/additionalndbconfigurations/ndb/MaxNoOfConcurrentOperations Specifies the maximum number of concurrent operations that the data node can handle. This includes all types of operations such as read, write, update, and delete operations that require the data node's resources. 128K This parameter must be set at least to the number of records to be updated simultaneously in transactions, divided by the number of cluster data nodes.

Each transaction involves at least one operation, therefore the value of MaxNoOfConcurrentOperations must always be greater than or equal to the value of MaxNoOfConcurrentTransactions.

/global/additionalndbconfigurations/ndb/MaxNoOfConcurrentTransactions Specifies the maximum number of concurrent transactions that the data node can handle. 65536 This parameter must be set to at least the maximum number of tables accessed in any single transaction plus one, multiplied by the number of SQL nodes, and divided by the total number of data nodes.
/global/additionalndbconfigurations/ndb/MaxNoOfUniqueHashIndexes Specifies the maximum number of unique hash indexes that can be created. 16K NA
/global/additionalndbconfigurations/ndb/MaxNoOfAttributes Specifies the recommended maximum number of attributes that can be defined in the cluster. 5000 Set the value of this parameter as per your requirement. For example: 5000.
/global/additionalndbconfigurations/ndb/MaxNoOfOrderedIndexes Specifies the total number of ordered indexes that can be in use in the system at a time. 1024 NA
/global/additionalndbconfigurations/ndb/NoOfFragmentLogParts Specifies the number of log file groups for redo logs. 4 Set this parameter to the required number of log file groups for redo logs. For example: 4
/global/additionalndbconfigurations/ndb/MaxNoOfExecutionThreads Specifies the number of execution threads used by ndbmtd. 8 The value of this parameter can range between 2 and 72.
/global/additionalndbconfigurations/ndb/StopOnError Specifies whether a data node process must exit or perform an automatic restart when an error condition is encountered. 0 By default, the data node process is configured to perform an automatic restart on encountering an error condition.

Set this parameter to 1 if you want the data node process to halt and exit.

/global/additionalndbconfigurations/ndb/MaxNoOfTables The recommended maximum number of table objects for a cluster. 1024 NA
/global/additionalndbconfigurations/ndb/NoOfFragmentLogFiles The number of REDO log files for the node. 128 Set this parameter to the required number of REDO log files for the node. For example: 128.
/global/additionalndbconfigurations/ndb/FragmentLogFileSize Specifies the size of each individual REDO log file. 16M NA
/global/additionalndbconfigurations/ndb/ODirect Indicates if NDB uses O_DIRECT writes for Local Check Point (LCP), backups, and redo logs, which reduces the kswapd and CPU usage. false NA
/global/additionalndbconfigurations/ndb/RedoBuffer Specifies the size of the buffer in which the REDO log is written. 32M The minimum value of this parameter must be 1MB.
/global/additionalndbconfigurations/ndb/SchedulerExecutionTimer Specifies the time (microseconds) for which the threads run in the scheduler before being sent. 50 Setting the parameter to 0 minimizes the response time. To achieve higher throughput, increase the value at the expense of longer response times.
/global/additionalndbconfigurations/ndb/SchedulerSpinTimer Specifies the time (microseconds) for which the threads run in the scheduler before sleeping. 0 NA
/global/additionalndbconfigurations/ndb/TimeBetweenEpochs Defines the interval between the synchronization epochs for NDB cluster replication. 100 NA
/global/additionalndbconfigurations/ndb/TimeBetweenGlobalCheckpoints Specifies the time interval that elapses between the completion of one global checkpoint and the initiation of the next global checkpoint in a distributed or transactional system. 2000 A global checkpoint is a point in time at which a snapshot of all committed transactions is taken and persisted to stable storage.
/global/additionalndbconfigurations/ndb/TimeBetweenLocalCheckpoints Specifies the time interval between two successive local checkpoints in a distributed or transactional system. 20 Unlike global checkpoints, which capture the state of the entire system, local checkpoints are specific to individual nodes or segments of the system.
/global/additionalndbconfigurations/ndb/TimeBetweenEpochsTimeout Defines the timeout for synchronization epochs for NDB cluster replication. If a node fails to participate in a global checkpoint within the time determined by this parameter, the node is shut down. 4000 NA
/global/additionalndbconfigurations/ndb/TimeBetweenGlobalCheckpointsTimeout Defines the minimum timeout between global checkpoints. 60000 NA
/global/additionalndbconfigurations/ndb/RedoOverCommitLimit Defines the upper limit (in seconds) for trying to write a given redo log to disk before timing out. 60 NA
/global/additionalndbconfigurations/ndb/RedoOverCommitCounter Defines the number of times the RedoOverCommitLimit can be exceeded. The number of times the data node tries to flush this redo log, but takes longer than the limit set in the RedoOverCommitLimit parameter, is recorded and compared with the limit set in this parameter. When this limit exceeds, any transactions that were not committed as a result of the flush timeout are aborted. 3 NA
/global/additionalndbconfigurations/ndb/StartPartitionedTimeout Specifies how long the cluster waits when it is ready to start after waiting for the StartPartialTimeout (milliseconds) limit but is still in a partitioned state. 1800000 StartPartialTimeout specifies the time until which the cluster waits for all data nodes to come up before the cluster initialization routine is invoked. The StartPartialTimeout value is calculated automatically depending on the number of data nodes as follows:

(delayPerDataPod * 1000) * Number of data nodes

where,
  • delayPerDataPod is configured to 60 seconds, by default.
  • Number of data nodes indicates the number of data nodes in the cluster.
For example, with four data nodes, StartPartialTimeout = (60 * 1000) * 4 = 240000 milliseconds (four minutes).
/global/additionalndbconfigurations/ndb/CompressedBackup This parameter is used to enable the backup compression in each of the data nodes. true If this parameter is set to true, the backups in each of the data nodes are compressed.

If this parameter is set to false, the backups in each of the data nodes are not compressed.

/global/additionalndbconfigurations/ndb/MaxBufferedEpochBytes Specifies the total number of bytes allocated for buffering epochs by the node. 26214400 NA
/global/additionalndbconfigurations/ndb/MaxBufferedEpochs Specifies the number of unprocessed epochs by which a subscribing node can lag behind. 100 On exceeding the set value, the lagging subscriber is disconnected.
/global/additionalndbconfigurations/ndb/TimeBetweenWatchDogCheck This parameter specifies the number of milliseconds between watchdog checks. A watchdog thread checks the main thread to prevent the main thread from getting stuck in an endless loop at some point. 800 If a process remains in the same state after three watchdog checks, the watchdog thread terminates the process. This parameter is used in ndbmtd process.
/global/additionalndbconfigurations/api/TotalSendBufferMemory Specifies the total amount of memory allocated on the node for shared send buffer memory among all configured transporters. 32M NA
/global/additionalndbconfigurations/api/DefaultOperationRedoProblemAction Specifies how the data node handles operations when more time is taken to flush redo logs to disk. This occurs when a given redo log flush takes longer than RedoOverCommitLimit for more than RedoOverCommitCounter times, causing any pending transactions to be aborted. ABORT The possible values are as follows:
  • ABORT: Any pending operations from aborted transactions are also aborted.
  • QUEUE: Pending operations from transactions that were aborted are queued up to be re-tried. Pending operations are still aborted when the redo log runs out of space.
/global/additionalndbconfigurations/mysqld/max_connect_errors Specifies the error limit after which the successive connection requests from a host are interrupted without a successful connection and the server blocks that host from further connections. 4294967295 If a connection from a host is established successfully within the limit specified in this parameter, after a previous connection was interrupted, then the error count for the host is cleared to zero.
/global/additionalndbconfigurations/mysqld/ndb_applier_allow_skip_epoch Indicates whether the replication applier is allowed to skip epochs during replication. 0 The possible values are as follows:
  • 0: The replication applier is not allowed to skip epochs.
  • 1: The replication applier is allowed to skip epochs.
/global/additionalndbconfigurations/mysqld/ndb_batch_size This parameter is used to set the size (in bytes) for NDB transaction batches. 2000000 Set the size in bytes that is used for NDB transaction batches.
/global/additionalndbconfigurations/mysqld/ndb_blob_write_batch_bytes This parameter is used to set the size (in bytes) for batching of BLOB data writes. 2000000 Set the size in bytes for batching of BLOB data writes.
/global/additionalndbconfigurations/mysqld/replica_allow_batching Indicates whether or not batched updates are enabled on NDB Cluster replicas.

Batched updates refer to the process of grouping multiple update operations together and sending them to the replica in a single batch, rather than one at a time. This can improve performance by reducing the overhead associated with each individual update operation.

ON The possible values are as follows:
  • ON: Batched updates are enabled on the NDB Cluster replicas.
  • OFF: Batched updates are not enabled on the NDB Cluster replicas.
/global/additionalndbconfigurations/mysqld/max_allowed_packet Specifies the maximum size of one packet in bytes. 134217728 NA
/global/additionalndbconfigurations/mysqld/ndb_log_update_minimal Determines how the updates are logged by minimizing the amount of data written to the binary log. When enabled, it optimizes the logging of updates by only recording the essential information, thereby reducing the size of the log and improving performance. 1 The possible values are as follows:
  • 0: Full row logging is used for updates. This means that the entire row before the update (before image) and the entire row after the update (after image) are logged.
  • 1: Log is updated in a minimal fashion by writing only the primary key values in the before image and only the changed columns in the after image.
/global/additionalndbconfigurations/mysqld/replica_parallel_workers Enables Multithreaded Applier (MTA) on the replica and sets the number of applier threads for running the replication transactions in parallel. 0 The default value "0" indicates that the MTA feature is disabled. To enable MTA, set this parameter to a value greater than 0.
/global/additionalndbconfigurations/mysqld/ndb_log_transaction_dependency This parameter allows the NDB binary logging thread to calculate the transaction dependencies for each transaction that it writes to the binary log. OFF Set this value to ON if replica_parallel_workers is set to 4 and set this value to OFF if replica_parallel_workers is set to 1.
/global/additionalndbconfigurations/mysqld/binlog_transaction_compression Enables compression for transactions that are written to binary log files. ON NA
/global/additionalndbconfigurations/mysqld/binlog_transaction_compression_level_zstd Specifies the level for binary log transaction compression, which is enabled by the binlog_transaction_compression system variable. 3 The value must be an integer that determines the compression effort, where 1 indicates the lowest effort and 22 indicates the highest effort.
/global/additionalndbconfigurations/mysqld/binlog_cache_size This parameter is used to define the size of the memory buffer that holds the changes made to the binary log during a transaction. 10485760 NA
/global/additionalndbconfigurations/mysqld/ndb_report_thresh_binlog_epoch_slip This parameter represents the threshold for the number of epochs completely buffered in the event buffer, but not yet consumed by the binlog injector thread. 50 NA
/global/additionalndbconfigurations/mysqld/ndb_allow_copying_alter_table This parameter controls whether ALTER TABLE and other Data Definition Language (DDL) statements are allowed to use copy operations on NDB tables. OFF NA
/global/additionalndbconfigurations/mysqld/ndb_clear_apply_status This parameter controls the behavior of the RESET SLAVE command with respect to the ndb_apply_status table on NDB cluster replicas. The ndb_apply_status table is used to track the replication status of transactions applied to the replica. OFF The possible values are as follows:
  • ON: Running the RESET SLAVE command causes the NDB cluster replica to purge (delete) all rows from the ndb_apply_status table.
  • OFF: Running the RESET SLAVE command does not purge the ndb_apply_status table.
/global/additionalndbconfigurations/mysqld/replica_net_timeout This parameter specifies the number of seconds to wait for more data or a heartbeat signal from the source before the replica considers the connection broken, aborts the read, and tries to reconnect. 20 NA
/global/additionalndbconfigurations/replmysqld/ndb_eventbuffer_max_alloc This parameter sets the maximum amount of memory (in bytes) that can be allocated for buffering events by the NDB replication API. 0 When set to 0, there is no limit imposed on the amount of memory that can be allocated for the event buffer. This means the buffer can grow as needed based on the volume of events being processed.
/global/additionalndbconfigurations/replmysqld/relay_log_space_limit Specifies the maximum size of relay logs in bytes. 0 When set to 0, there is no limit imposed on the amount of memory that can be used by relay logs on a replica.
/global/additionalndbconfigurations/replmysqld/max_relay_log_size Specifies the maximum size of a relay log file in any replica. 0

When set to 0, the server uses max_binlog_size for both the binary log and the relay log. Therefore, the relay log file size is equal to the value specified in the max_binlog_size parameter.

If a write by a replica to its relay log causes the current log file size to exceed the value of this variable, the replica rotates the relay logs (that is, the replica closes the current file and opens the next one).
/global/additionalndbconfigurations/appmysqld/ndb_eventbuffer_max_alloc This parameter sets the maximum amount of memory (in bytes) that can be allocated for buffering events by the NDB application API. 0 When set to 0, there is no limit imposed on the amount of memory that can be allocated for the event buffer. This means the buffer can grow as needed based on the volume of events being processed.
/global/additionalndbconfigurations/tcp/SendBufferMemory Specifies the memory allocation for the buffer used by TCP transporters to store messages before they are sent to the operating system. 2M When this buffer reaches 64KB, its contents are sent. The contents are also sent when a round of messages has been executed.
/global/additionalndbconfigurations/tcp/ReceiveBufferMemory Specifies the size of the buffer used when receiving data from the TCP/IP socket. 2M NA
/global/additionalndbconfigurations/tcp/TCP_SND_BUF_SIZE Specifies the size of the buffer set during the initialization of TCP transporters to send data. This buffer size affects the size of data that can be queued for sending over the network at one time. 0 When set to 0, the buffer size is determined by the operating system or platform, which typically chooses an appropriate default size based on system resources and network conditions.
/global/additionalndbconfigurations/tcp/TCP_RCV_BUF_SIZE Specifies the size of the receive buffer set during TCP transporter initialization. 0 When set to 0, the buffer size is determined by the operating system or platform, which typically chooses an appropriate default size based on system resources and network conditions.
/global/additionalndbconfigurations/tcpemptyapi/SendBufferMemory Specifies the memory allocated for the buffer that is used by TCP transporters for the ndb_restore process during the georeplication recovery to store the content before sending it to the operating system. 2M When this buffer reaches 64KB, its contents are sent. The contents are also sent when a round of messages has been executed.
/global/additionalndbconfigurations/tcpemptyapi/ReceiveBufferMemory Specifies the memory allocated for the buffer used when receiving the data from the TCP/IP socket for the ndb_restore process during the georeplication recovery. 2M NA
/global/additionalndbconfigurations/tcpemptyapi/TCP_SND_BUF_SIZE Specifies the memory allocated for the buffer that is set while initializing TCP transporters for ndb_restore process during the georeplication recovery to store data before sending it to the operating system. This buffer size affects the size of data that can be queued for sending over the network at one time. 0 When set to 0, the buffer size is determined by the operating system or platform, which typically chooses an appropriate default size based on system resources and network conditions.
/global/additionalndbconfigurations/tcpemptyapi/TCP_RCV_BUF_SIZE Specifies the memory allocated for the buffer that is set while initializing TCP transporter to receive data for ndb_restore process during the georeplication recovery. 0 When set to 0, the buffer size is determined by the operating system or platform, which typically chooses an appropriate default size based on system resources and network conditions.
/global/mgm/ndbdisksize Allocated disk size for the management node. 15Gi Size of the PVC attached to the management pods.
/global/mgm/startNodeId Starting node ID used for the MGM pods. 49 For example, if the startNodeId is 49, then the first MGM pod node ID is 49 and the second MGM pod node ID is 50, and so on.
/global/ndb/ndbdisksize Allocated disk size for the data node. 60Gi Size of the PVC attached to each data pod for storing the ndb data.
/global/ndb/ndbbackupdisksize Allocated backup disk size for the DB backup service. 60Gi Size of the PVC attached to each data pod for storing the backup data.
/global/ndb/datamemory Data memory size. 12G The size of each data node data memory.
/global/ndb/KeepAliveSendIntervalMs Time between the keep-alive signals on the links between the data nodes (in milliseconds). 60000 The default is 60000 milliseconds (one minute).

Setting this value to 0, disables the keep-alive signals.

Values from 1 to 10 are treated as 10.

/global/ndb/use_separate_backup_disk Indicates whether to use the default backup URI for storing the DB backup files. true When set to true, use this parameter in conjunction with the separateBackupDataPath variable if you need to specify a separate disk path for storing the DB backup files.
/global/ndb/restoreparallelism Indicates the number of parallel transactions to use while restoring data. 128 NA
/global/ndb/retainbackupno The maximum number of backups that is retained in the cnDBTier cluster at any point in time. 3 For example, if this parameter is set to 3, then the system stores only the latest three backups, and purges any older backups to manage storage efficiently.
/global/ndb/EncryptedFileSystem Indicates if the data files in the data nodes are encrypted using TDE. 0 Set this parameter to 1 if you want to enable encrypting data files in the data nodes using TDE.
/global/api/ndbdisksize Allocated disk size for the api node. 100Gi Size of the PVC attached to each SQL or API pod for storing the SQL data and the binlog data.
/global/api/useRamForRelaylog Indicates if RAM is used for storing the relay logs. false When this parameter is set to true, the system creates a disk using the RAM where relay logs are stored.
/global/api/relayLogDiskSize The size of the disk created for storing the relay logs using the RAM in replication SQL pods (ndbmysqld pods). 4Gi If /global/api/useRamForRelaylog is set to true, the memory resources for the replication SQL pods (ndbmysqld) must be increased as per disk size configured in this parameter.
For example, if /global/api/useRamForRelaylog is set to true and the disk size is set to 4Gi, the following memory resources for the replication SQL pods must be increased by relayLogDiskSize (that is 4Gi):
  • .Values.api.resources.limits.memory
  • .Values.api.resources.requests.memory
/global/api/binlogpurgetimer The frequency (in seconds) at which the database replication service checks the binary log size occupied by the replication SQL pods and continues to purge the binary logs in the replication SQL pods. 600 The default value is 600 seconds (10 minutes). This means that, every ten minutes, the replication service checks the binary logs size occupied by the replication SQL pods and continues to purge the binary logs in replication SQL pods.

Note: If the PVC size is small, then reduce the value of this parameter to an appropriate value such that the disk is not filled by binary logs.

/global/api/startNodeId Starting node ID used for the SQL georeplication pods. 56 For example, if the startNodeId is 56, then the first georeplication SQL pod node ID is 56, and the second georeplication SQL pod node ID is 57.
/global/api/startEmptyApiSlotNodeId Starting node ID to be used by the pods while performing the automated disaster recovery procedure. 222 NA
/global/api/numOfEmptyApiSlots Number of empty API slots added to the cnDBTier cluster that are used while restoring the cnDBTier cluster. 4 NA
/global/api/general_log Indicates if general query log is enabled or disabled. ON Set this parameter to OFF to disable general query log.
/global/api/user This parameter refers to an account that is used to connect to the MySQL server. mysql NA
/global/api/max_connections The maximum number of simultaneous client connections allowed. 4096 NA
/global/api/all_row_changes_to_bin_log Indicates whether the binlogs are enabled or disabled in the ndbmysqld pods. If a single site is deployed, the binlogs can be disabled.
  • 1: Enables the binlogs.
  • 0: Disables the binlogs.
1 Use this parameter to enable or disable the binlogs. In single site deployments, it can be used for disabling the binlogs.
/global/api/binlog_expire_logs_seconds Binlog expiry time in seconds. 86400 Expiry time in seconds for the binlogs in the ndbmysqld pods.
/global/api/auto_increment_increment This parameter controls the operation of the AUTO_INCREMENT columns to avoid auto-increment key collisions. 4 The value of this parameter must be equal to the number of replication sites. If you are installing two-site replication, set it to 2 and update the other cnDBTier cluster. If you are installing three-site replication, set it to 3 and update the other two cnDBTier clusters. If you are installing four-site replication, set it to 4 and update the other three cnDBTier clusters.
/global/api/auto_increment_offset This parameter controls the operation of the AUTO_INCREMENT columns to avoid auto-increment key collisions. 1 Each site must be assigned a unique auto-increment offset value, as in the sketch below. If you are installing cnDBTier Cluster1, set it to 1. If you are installing cnDBTier Cluster2, set it to 2. If you are installing cnDBTier Cluster3, set it to 3. If you are installing cnDBTier Cluster4, set it to 4.
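For example, a sketch of these settings for cnDBTier Cluster2 in a three-site replication setup (illustrative values following the rules above):

    global:
      api:
        auto_increment_increment: 3
        auto_increment_offset: 2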
/global/api/wait_timeout The number of seconds the server waits for an activity on a non-interactive connection before closing it. 600 NA
/global/api/interactive_timeout The number of seconds the server waits for an activity on an interactive connection before closing it. 600 NA
/global/api/source_connect_retry This parameter defines the time interval (in seconds) for connection retries between the source and replica. 20 NA
/global/api/source_heartbeat_period This parameter controls the heartbeat interval, which stops the connection timeout that occurs during the absence of data, if the connection is good. 5 A heartbeat signal is sent to the replica after the configured time and the waiting period is reset whenever the source's binary log is updated with an event.
/global/api/source_retry_count This parameter is used to set the maximum number of reconnection attempts the replica makes after the connection to the source times out. 5 NA
/global/ndbapp/ndbdisksize Allocated disk size for ndbapp pods. 20Gi Disk allocation size for the ndbapp pods.
/global/ndbapp/ndb_cluster_connection_pool This parameter allows a mysqld process to establish multiple connections to the NDB cluster, effectively acting as multiple SQL nodes. 1 This improves performance on multi-CPU or multi-core host machines by distributing the load across several connections, which are allocated to threads in a round-robin manner.
/global/ndbapp/ndb_cluster_connection_pool_base_nodeid This parameter specifies the base node ID for a set of connection pool nodes. 100 By setting this base node ID, the subsequent connections increments from this base value, ensuring unique identification of each connection within the cluster.
/global/ndbapp/startNodeId Starting node ID used for the non-georeplication SQL pods. 70 For example, if the startNodeId is 70, then the first non-georeplication SQL pod node ID is 70 and the second non-georeplication SQL pod node ID is 71.
/global/services/ipFamilyPolicy This parameter controls how a service is assigned IP families (IPv4 and/or IPv6). SingleStack The SingleStack policy indicates that the service is assigned to only one IP family. The IP family can be either IPv4 or IPv6.
/global/services/primaryDualStackIpFamily This parameter specifies the preferred IP family (IPv4 or IPv6) for a service in a dual-stack network configuration. IPv6 NA
/global/multiplereplicationgroups/enabled Indicates if multiple replication channel groups are disabled or enabled. false Set this value to true to enable multiple replication channel groups.
/global/multiplereplicationgroups/replicationchannelgroups[0]/channelgroupid Replication channel group identifier for each replication channel. 1 Channel group identifier for replication channel group.
/global/multiplereplicationgroups/replicationchannelgroups[0]/binlogdodb List of databases that are logged to the binary logs of the replication SQL nodes for replicating the data to the remote site using these replication channels. {} Replication SQL nodes belonging to this replication channel group record the writes on these databases in their binary logs.
/global/multiplereplicationgroups/replicationchannelgroups[0]/binlogignoredb List of databases that are not logged to the binary logs of the replication SQL nodes for this replication channel group. {} Replication SQL nodes belonging to this replication channel group do not record the writes on these databases in their binary logs.
/global/multiplereplicationgroups/replicationchannelgroups[0]/binlogignoretables List of the tables that are not logged to the binary logs of the replication SQL nodes for the replication channel group. {} Replication SQL nodes that belong to the replication channel group do not record the writes on the tables listed in this parameter in their binary logs.
/global/multiplereplicationgroups/replicationchannelgroups[0]/sqllist List of SQL nodes that belong to this replication channel group identifier. {} By default, the SQL nodes are configured to each replication channel group. If the SQL nodes of each replication channel group are configured differently in replication service deployments, then this list must be specified.
/global/multiplereplicationgroups/replicationchannelgroups[1]/channelgroupid Replication channel group identifier for each replication channel. 1 Channel group identifier for replication channel group.
/global/multiplereplicationgroups/replicationchannelgroups[1]/binlogdodb List of databases that are logged to the binary logs of the replication SQL nodes for replicating the data to the remote site using these replication channels. {} Replication SQL nodes belonging to this replication channel group record the writes on these databases in their binary logs.
/global/multiplereplicationgroups/replicationchannelgroups[1]/binlogignoredb List of databases that are not logged to the binary logs of the replication SQL nodes for this replication channel group. {} Replication SQL nodes belonging to this replication channel group do not record the writes on these databases in their binary logs.
/global/multiplereplicationgroups/replicationchannelgroups[1]/binlogignoretables List of the tables that are not logged to the binary logs of the replication SQL nodes for the replication channel group. {} Replication SQL nodes that belong to the replication channel group do not record the writes on the tables listed in this parameter in their binary logs.
/global/multiplereplicationgroups/replicationchannelgroups[1]/sqllist List of SQL nodes that belong to this replication channel group identifier. {} By default, the SQL nodes are configured to each replication channel group. If the SQL nodes of each replication channel group are configured differently in replication service deployments, then this list must be specified. A sample replication channel group configuration is shown below.
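For reference, a minimal sketch of one replication channel group in custom_values.yaml (the database name and SQL node names are illustrative; use the exact list syntax from your custom_values.yaml):

    global:
      multiplereplicationgroups:
        enabled: true
        replicationchannelgroups:
          - channelgroupid: 1
            binlogdodb:
              - nfdb
            binlogignoredb: {}
            binlogignoretables: {}
            sqllist:
              - ndbmysqld-0
              - ndbmysqld-1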
/global/replicationskiperrors/enable Indicates if the configured replication errors (replicationerrornumbers) are skipped. true Set this parameter to true if you want to skip the configured replication errors when the replica in all the replication channels with the remote site encounters a configured error in its replica status.
/global/replicationskiperrors/numberofskiperrorsallowed Indicates the number of times the errors can be skipped in the configured time window. 5 Set this parameter to the desired number of times you want to skip the configured replication error.
/global/replicationskiperrors/skiperrorsallowedintimewindow The time interval within which the configured number of allowed skip errors can be skipped. 3600 Set this parameter to the desired time window (in seconds) within which you want the replication errors to be skipped for the configured number of times (numberofskiperrorsallowed).

If replication skip errors (replicationskiperrors) are enabled, then replication errors are skipped for the configured number of times (numberofskiperrorsallowed) within 3600 seconds.

/global/replicationskiperrors/epochTimeIntervalLowerThreshold The lower epoch time interval threshold value. 10000 Set this parameter to the lower threshold for the epoch interval. If the calculated epoch interval that needs to be skipped is greater than this threshold, a minor alert is raised. However, this alert does not decide whether the replication errors are skipped.
/global/replicationskiperrors/epochTimeIntervalHigherThreshold The higher epoch time interval threshold value. 80000 Set this parameter to the higher threshold for the epoch interval. If the calculated epoch interval is greater than this threshold, the replication errors are not skipped.

/global/replicationskiperrors/replicationerrornumbers The list of replication errors that must be skipped when all the replication channels with the remote site encounter an error in their replica status.
- errornumber: 13119
- errornumber: 1296
- errornumber: 1007
- errornumber: 1008
- errornumber: 1050
- errornumber: 1051
- errornumber: 1054
- errornumber: 1060
- errornumber: 1061
- errornumber: 1068
- errornumber: 1094
- errornumber: 1146

If you want to add more error numbers, append elements in the following manner:
- errornumber: 13119
- errornumber: 1296
- errornumber: XYZ

Note: Replace XYZ in the sample with the error number. A consolidated sketch of the replication skip-error settings appears at the end of this table.
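The following is a minimal sketch of how these replication settings could look in the custom_values.yaml file. It assumes the YAML nesting mirrors the parameter paths in this table and uses the default values listed above; verify the exact structure against the custom_values.yaml file shipped with your cnDBTier release.

global:
  multiplereplicationgroups:
    replicationchannelgroups:
      - channelgroupid: 1            # identifier of this replication channel group
        binlogdodb: {}               # databases recorded in the binary logs
        binlogignoredb: {}           # databases excluded from the binary logs
        binlogignoretables: {}       # tables excluded from the binary logs
        sqllist: {}                  # SQL nodes that belong to this channel group
  replicationskiperrors:
    enable: true
    numberofskiperrorsallowed: 5
    skiperrorsallowedintimewindow: 3600    # seconds
    epochTimeIntervalLowerThreshold: 10000
    epochTimeIntervalHigherThreshold: 80000
    replicationerrornumbers:
      - errornumber: 13119
      - errornumber: 1296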

3.2 Management Parameters

The following table provides a list of management parameters.

Table 3-2 Management Parameters

Parameter Description Default Value Notes
/mgm/inframonitor/image/name Specifies the docker image name of the infra monitor service of the management node. db-infra-monitor-svc NA
/mgm/inframonitor/image/repository Specifies the name of the repository where the infra monitor service image is stored. db_infra_monitor_svc NA
/mgm/inframonitor/image/tag Specifies the docker image version of the infra monitor service. 24.3.1 NA
/mgm/inframonitor/image/imagePullPolicy Specifies the image pull policy for the infra monitor service docker image. IfNotPresent NA
/mgm/resources/limits/cpu Maximum limit on the CPU count allocated for the management node. 4 Maximum amount of CPU that Kubernetes allows the management pod to use.
/mgm/resources/limits/memory Maximum limit on the memory size allocated for the management node. 10Gi Memory limit for each management pod.
/mgm/resources/limits/ephemeral-storage Maximum limit on the ephemeral storage size allocated for the management node. 1Gi Ephemeral storage limit for each management pod.
/mgm/resources/requests/cpu Indicates the required CPU count allocated for the management node. 4 CPU allotment for each management pod.
/mgm/resources/requests/memory Indicates the required memory size allocated for the management node. 8Gi Memory allotment for each management pod.
/mgm/resources/requests/ephemeral-storage Indicates the required ephemeral storage size allocated for the management node. 90Mi Ephemeral storage allotment for each management pod.
/mgm/annotations Annotations used for the management node. - sidecar.istio.io/inject: "true" Annotations in Kubernetes are key-value pairs attached to objects such as pods, deployments, or services. They are used to store additional information about objects for tools and integrations to utilize.

If necessary, change it to appropriate values supported by Kubernetes.

/mgm/commonlabels Specifies the common labels used for the management node. - nf-vendor: oracle

- nf-instance: oc-cndbtier

- nf-type: database

- component: database

Common labels are key-value pairs used to categorize and identify Kubernetes objects, helping to manage and organize resources more effectively.

If necessary, change it to appropriate values supported by Kubernetes.

/mgm/anti_pod_affinity Specifies the rules that prevent certain pods from being scheduled on the same nodes as other specified pods. This helps to spread the workload and improve availability.

anti_affinity_topologyKey: kubernetes.io/hostname

anti_affinity_key: dbtierndbnode

anti_affinity_values:
- dbtierndbmgmnode

If necessary, change it to appropriate values supported by Kubernetes.
/mgm/nodeSelector List of node selector labels that correspond to the Kubernetes nodes where the mgm pods must be scheduled. {} Use this parameter to configure the worker node labels if you want to use the node selector labels for scheduling the pods. For example, if you want the mgm pods to be scheduled on worker node with label nodetype=mgm, then nodeSelector must be configured as follows:
nodeSelector: 
- nodetype: mgm

nodeSelector is disabled if this parameter is passed empty.

/mgm/use_pod_affinity_rules This parameter determines whether pod placement in a Kubernetes cluster must consider pod affinity rules. This dictates how pods must be co-located based on certain criteria to optimize resource utilization and workload distribution. false NA
/mgm/pod_affinity Specifies the rules for scheduling pods to be co-located with other pods based on shared attributes, improving service communication and efficiency.

pod_affinity_topologyKey: failure-domain.beta.kubernetes.io/zone

pod_affinity_key: ndbnode

pod_affinity_values: - ndbmgmnode

If necessary, change it to appropriate values supported by Kubernetes.
/mgm/nodeAffinity/enable Indicates if node affinity is enabled. false Set this parameter to true if you want to enable and use node affinity rules for scheduling the pods.
/mgm/nodeAffinity/requiredDuringScheduling/enable Indicates if hard node affinity rules are enabled. true When this parameter is set to true, pods are scheduled only when the node affinity rules are met and are not scheduled if otherwise.
/mgm/nodeAffinity/requiredDuringScheduling/affinitykeyvalues List of node affinity rules.
- keyname: custom_key
   keyvalues: 
     - customvalue1
     - customvalue2
Configure keyname and keyvalues with the key and value of the label of the worker node where the mgm pods must be scheduled.
For example, if you want the mgm pods to be scheduled on the worker node with label topology.kubernetes.io/zone=antarctica-east1 or topology.kubernetes.io/zone=antarctica-east2, then affinitykeyvalues must be configured as follows:
affinitykeyvalues:
      - keyname: topology.kubernetes.io/zone
        keyvalues: 
        - antarctica-east1
        - antarctica-east2
/mgm/nodeAffinity/preferredDuringScheduling/enable Indicates if soft node affinity rules are enabled. false When this parameter is set to true, the scheduler tries to schedule the pods on the worker nodes that meet the affinity rules. However, if none of the worker nodes meet the affinity rules, the pods get scheduled on other worker nodes.

Note: You can enable only one of preferredDuringScheduling and requiredDuringScheduling at a time.

/mgm/nodeAffinity/preferredDuringScheduling/expressions List of node affinity rules.
- weight: 1
  affinitykeyvalues:
  - keyname: custom_key
    keyvalues: 
    - customvalue1
    - customvalue2
Configure keyname and keyvalues with the key and value of the label of the worker node where mgm pods are preferred to be scheduled.

The value of weight can range between 1 and 100. The higher the value of weight, the higher the preference of the rule while scheduling the pods.

For example, if you want the mgm pods to be scheduled on worker node with label topology.kubernetes.io/zone=antarctica-east1 or topology.kubernetes.io/zone=antarctica-east2, then configure the affinitykeyvalues as follows:
- weight: 1
  affinitykeyvalues:
  - keyname: topology.kubernetes.io/zone
    keyvalues: 
    - antarctica-east1
    - antarctica-east2
Refer to the following example if you want to configure multiple soft node affinity rules with different weights:
- weight: 30
  affinitykeyvalues:
  - keyname: topology.kubernetes.io/zone
    keyvalues: 
    - antarctica-east1
    - antarctica-east2
- weight: 80
  affinitykeyvalues:
  - keyname: nodetype
    keyvalues: 
    - mgm
    - mgmd

In this case, more preference is given to the worker nodes with labels matching node type mgm or mgmd, as this rule has the greater weight. A consolidated sketch of the mgm scheduling parameters appears at the end of this table.

/mgm/service/labels Kubernetes resource which uses labels to organize and manage network access for management pods. The labels attached to the service help identify and categorize it for integration with other systems.

- app: occne_infra

- cis.f5.com/as3-tenant: occne_infra

- cis.f5.com/as3-app: svc_occne_infra_ndbmgmnode

- cis.f5.com/as3-pool: svc_occne_infra_pool

If necessary, change it to appropriate values supported by Kubernetes.
/mgm/selector This parameter is used to specify a set of criteria for selecting objects based on their labels. - ndbcontroller.mysql.com/v1alpha1: ndb-ndbmgmd If necessary, change it to appropriate values supported by Kubernetes.
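The following is a minimal sketch of the mgm scheduling parameters in the custom_values.yaml file. It assumes the YAML nesting mirrors the parameter paths in this table; the zone label and its values are illustrative and must be replaced with labels that exist on your worker nodes.

mgm:
  nodeSelector: {}                   # leave empty to disable nodeSelector
  nodeAffinity:
    enable: true                     # enable node affinity rules
    requiredDuringScheduling:
      enable: true                   # hard rules: schedule only on matching nodes
      affinitykeyvalues:
        - keyname: topology.kubernetes.io/zone
          keyvalues:
            - antarctica-east1
            - antarctica-east2
    preferredDuringScheduling:
      enable: false                  # only one of the two modes can be enabled at a time

The same structure applies to the ndb, api, and db-replication-svc scheduling parameters described in the following tables.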

3.3 NDB Parameters

The following table provides a list of network database (NDB) parameters.

Table 3-3 NDB Parameters

Parameter Description Default Value Notes
/ndb/sidecar/image/repository Specifies the repository name where DB backup executor service image is stored. db_backup_executor_svc NA
/ndb/sidecar/image/tag Specifies the docker image version of DB backup service installed as a sidecar for managing automatic DB backups. 24.3.1 Change it to the version of the docker registry. For example, 24.3.1.
/ndb/sidecar/image/imagePullPolicy Specifies the image pull policy for the DB backup executor service docker image. IfNotPresent NA
/ndb/sidecar/log/level Specifies the logging level for the DB backup executor service. WARN NA
/ndb/sidecar/resources/limits/cpu Specifies the maximum CPU count allocated for the DB backup executor service. 1 NA
/ndb/sidecar/resources/limits/memory Specifies the maximum memory size allocated for the DB backup executor service. 1Gi NA
/ndb/sidecar/resources/limits/ephemeral-storage Specifies the maximum limit on the ephemeral-storage size allocated for the db backup service, installed as a sidecar for managing automatic DB backups. 1Gi NA
/ndb/sidecar/resources/requests/cpu Specifies the required CPU count allocated for the DB backup executor service. 1 NA
/ndb/sidecar/resources/requests/memory Specifies the required memory size allocated for the DB backup executor service. 1Gi NA
/ndb/sidecar/resources/requests/ephemeral-storage Specifies the required ephemeral-storage size allocated for the DB backup service, installed as a sidecar for managing automatic DB backups. 90Mi NA
/ndb/inframonitor/image/name Specifies the docker image name of the infra monitor service of the data node. db-infra-monitor-svc NA
/ndb/inframonitor/image/repository Specifies the name of the repository where the infra monitor service image is stored. db_infra_monitor_svc NA
/ndb/inframonitor/image/tag Specifies the docker image version of the infra monitor service. 24.3.1 NA
/ndb/inframonitor/image/imagePullPolicy Specifies the image pull policy for the infra monitor service docker image. IfNotPresent NA
/ndb/resources/limits/cpu Specifies the maximum limit on CPU count allocated for the data node. 10 Maximum amount of CPU that Kubernetes allows the data pod to use.
/ndb/resources/limits/memory Specifies the maximum limit on memory size allocated for the data node. 18Gi Memory limits for each data pod.
/ndb/resources/limits/ephemeral-storage Indicates the maximum limit of the ephemeral storage that can be allocated for the data node. 1Gi NA
/ndb/resources/requests/cpu Indicates the required CPU count allocated for the data node. 10 CPU allotment for each data pod.
/ndb/resources/requests/memory Indicates the required memory size allocated for the data node. 16Gi Memory allotment for each data pod.
/ndb/resources/requests/ephemeral-storage Indicates the required ephemeral storage size allocated for the data node. 90Mi Ephemeral storage allotment for each data pod.
/ndb/annotations Specifies the annotations used for the data nodes. - sidecar.istio.io/inject: "true" Annotations in Kubernetes are key-value pairs attached to objects such as pods, deployments, or services. They are used to store additional information about objects for tools and integrations to utilize.

If necessary, change it to appropriate values supported by Kubernetes.

/ndb/commonlabels Specifies the common labels used for the data nodes.

- nf-vendor: oracle

- nf-instance: oc-cndbtier

- nf-type: database

- component: database

Common labels are the key-value pairs used to categorize and identify Kubernetes objects. This helps to manage and organize resources more effectively.

If necessary, change it to appropriate values supported by Kubernetes.

/ndb/anti_pod_affinity Specifies the rules that prevent certain pods from being scheduled on the same nodes as other specified pods. This helps to spread the workload and improve availability.

anti_affinity_topologyKey: kubernetes.io/hostname

anti_affinity_key: dbtierndbnode

anti_affinity_values:
- dbtierndbdatanode

If necessary, change it to appropriate values supported by Kubernetes. A sketch of these anti-affinity settings appears at the end of this table.
/ndb/nodeSelector List of node selector labels that correspond to the Kubernetes nodes where the ndb pods must be scheduled. {} Use this parameter to configure the worker node labels if you want to use the node selector labels for scheduling the pods. For example, if you want the ndb pods to be scheduled on worker node with label nodetype=ndb, then nodeSelector must be configured as follows:
nodeSelector: 
- nodetype: ndb

nodeSelector is disabled if this parameter is passed empty.

/ndb/use_pod_affinity_rules Determines whether pod placement in a Kubernetes cluster must consider pod affinity rules. These rules dictate how pods must be co-located based on certain criteria to optimize resource utilization and workload distribution. false NA
/ndb/pod_affinity Specifies the rules for scheduling pods to be co-located with other pods based on shared attributes, improving service communication and efficiency.

pod_affinity_topologyKey: failure-domain.beta.kubernetes.io/zone

pod_affinity_key: ndbnode

pod_affinity_values: - ndbdatanode

If necessary, change it to appropriate values supported by Kubernetes.
/ndb/nodeAffinity/enable Indicates if node affinity is enabled. false Set this parameter to true if you want to enable and use node affinity rules for scheduling the pods.
/ndb/nodeAffinity/requiredDuringScheduling/enable Indicates if hard node affinity rules are enabled. true When this parameter is set to true, pods are scheduled only when the node affinity rules are met and are not scheduled if otherwise.
/ndb/nodeAffinity/requiredDuringScheduling/affinitykeyvalues List of node affinity rules.
- keyname: custom_key
   keyvalues: 
     - customvalue1
     - customvalue2
Configure keyname and keyvalues with the key and value of the label of the worker node where the ndb pods must be scheduled.
For example, if you want the ndb pods to be scheduled on the worker node with label topology.kubernetes.io/zone=antarctica-east1 or topology.kubernetes.io/zone=antarctica-east2, then affinitykeyvalues must be configured as follows:
affinitykeyvalues:
      - keyname: topology.kubernetes.io/zone
        keyvalues: 
        - antarctica-east1
        - antarctica-east2
/ndb/nodeAffinity/preferredDuringScheduling/enable Indicates if soft node affinity rules are enabled. false When this parameter is set to true, the scheduler tries to schedule the pods on the worker nodes that meet the affinity rules. However, if none of the worker nodes meet the affinity rules, the pods get scheduled on other worker nodes.

Note: You can enable only one of preferredDuringScheduling and requiredDuringScheduling at a time.

/ndb/nodeAffinity/preferredDuringScheduling/expressions List of node affinity rules.
- weight: 1
  affinitykeyvalues:
  - keyname: custom_key
    keyvalues: 
    - customvalue1
    - customvalue2
Configure keyname and keyvalues with the key and value of the label of the worker node where ndb pods are preferred to be scheduled.

The value of weight can range between 1 and 100. The higher the value of weight, the higher the preference of the rule while scheduling the pods.

For example, if you want the ndb pods to be scheduled on worker node with label topology.kubernetes.io/zone=antarctica-east1 or topology.kubernetes.io/zone=antarctica-east2, then configure the affinitykeyvalues as follows:
- weight: 1
  affinitykeyvalues:
  - keyname: topology.kubernetes.io/zone
    keyvalues: 
    - antarctica-east1
    - antarctica-east2
Refer to the following example if you want to configure multiple soft node affinity rules with different weights:
- weight: 30
  affinitykeyvalues:
  - keyname: topology.kubernetes.io/zone
    keyvalues: 
    - antarctica-east1
    - antarctica-east2
- weight: 80
  affinitykeyvalues:
  - keyname: nodetype
    keyvalues: 
    - ndb
    - ndbmtd

In this case, more preference is given to the worker nodes with labels matching node type ndb or ndbmtd, as this rule has the greater weight.

/ndb/service/labels Kubernetes resource which uses labels to organize and manage network access for data pods. The labels attached to the service help identify and categorize it for integration with other systems.

- app: occne_infra

- cis.f5.com/as3-tenant: occne_infra

- cis.f5.com/as3-app: svc_occne_infra_sqlnode

- cis.f5.com/as3-pool: svc_occne_infra_pool

If necessary, change it to appropriate values supported by Kubernetes.
/ndb/selector This parameter is used to specify a set of criteria for selecting objects based on their labels. - ndbcontroller.mysql.com/v1alpha1: ndb-ndbmtd If necessary, change it to appropriate values supported by Kubernetes.
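The following is a minimal sketch of the ndb anti-affinity settings in the custom_values.yaml file, assuming the YAML nesting mirrors the /ndb/anti_pod_affinity parameter path; the values shown are the defaults from this table.

ndb:
  anti_pod_affinity:
    anti_affinity_topologyKey: kubernetes.io/hostname   # spread data pods across hosts
    anti_affinity_key: dbtierndbnode                    # label key matched by the rule
    anti_affinity_values:
      - dbtierndbdatanode                               # label value of the data pods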

3.4 API Parameters

The following table provides a list of Application Programming Interface (API) parameters.

Table 3-4 API Parameters

Parameter Description Default Value Notes
/api/inframonitor/image/name Specifies the docker image name of the infra monitor service of the replication SQL node. db-infra-monitor-svc NA
/api/inframonitor/image/repository Specifies the name of the repository where the infra monitor service image is stored. db_infra_monitor_svc NA
/api/inframonitor/image/tag Specifies the docker image version of the infra monitor service. 24.3.1 NA
/api/inframonitor/image/imagePullPolicy Specifies the image pull policy for the infra monitor service docker image. IfNotPresent NA
/api/resources/limits/cpu Maximum limit on the CPU count allocated for the API node. 8 Maximum amount of CPU that Kubernetes allows the SQL or API pod to use.
/api/resources/limits/memory Maximum limit on the memory size allocated for the API node. 10Gi Memory limit for each SQL or API pod.
/api/resources/limits/ephemeral-storage Maximum limit on the ephemeral-storage size allocated for the API node. 1Gi Ephemeral storage limit for each SQL or API pod.
/api/resources/requests/cpu Required CPU count allocated for the API node. 8 CPU allotment for each SQL or API pod.
/api/resources/requests/memory Required memory size allocated for the API node. 10Gi Memory allotment for each SQL or API pod.
/api/resources/requests/ephemeral-storage Required ephemeral-storage size allocated for the API node. 90Mi Ephemeral storage allotment for each SQL or API pod.
/api/egressannotations Parameter to modify the existing Egress annotation to match the Egress annotation supported by Kubernetes. oracle.com.cnc/egress-network: "oam" Set this parameter to the appropriate annotation that is supported by Kubernetes.
/api/annotations Specifies the annotations used for the replication SQL node. - sidecar.istio.io/inject: "true" Annotations in Kubernetes are key-value pairs attached to objects such as pods, deployments, or services. They are used to store additional information about objects for tools and integrations to utilize.

If necessary, change it to appropriate values supported by Kubernetes.

/api/commonlabels Specifies the common labels used for the replication SQL node.

- nf-vendor: oracle

- nf-instance: oc-cndbtier

- nf-type: database

- component: database

Common labels are the key-value pairs used to categorize and identify Kubernetes objects. This helps to manage and organize resources more effectively.

If necessary, change it to appropriate values supported by Kubernetes.

/api/ndbWaitTimeout Indicates the time the ndbmtd pods wait for the ndb_mgmd pods to come online. 600 The maximum time the ndbmtd pods wait for the ndb_mgmd pods to come online.
/api/waitforndbmtd Indicates whether the ndbmtd pod waits for the mgm pods to come online before starting its process. true Boolean value representing whether the ndbmtd pod waits for the mgm pods to come online before starting its process.
/api/initsidecar/name Specifies the name of the initsidecar container of the replication SQL node. init-sidecar NA
/api/initsidecar/image/repository Name of the docker image of MySQL NDB client. cndbtier-mysqlndb-client Change it to the docker image name on your docker registry, for example, cndbtier-mysqlndb-client.
/api/initsidecar/image/tag Version for the docker image of MySQL NDB client. 24.3.1 Change it to the version of the docker image. For example, 24.3.1.
/api/initsidecar/image/imagePullPolicy Specifies the image pull policy for the MySQL ndb client docker image. IfNotPresent NA
/api/initSidecarResources/limits/cpu The maximum limit on the CPU count allocated for mysqlndbclient. 0.1 NA
/api/initSidecarResources/limits/memory The maximum limit on the memory size allocated for the mysqlndbclient. 256Mi NA
/api/initSidecarResources/limits/ephemeral-storage The maximum limit on the ephemeral-storage size allocated for the mysqlndbclient. 1Gi NA
/api/initSidecarResources/requests/cpu Specifies the required CPU count allocated for mysqlndbclient. 0.1 NA
/api/initSidecarResources/requests/memory Specifies the required memory size allocated for mysqlndbclient. 256Mi NA
/api/initSidecarResources/requests/ephemeral-storage Specifies the required ephemeral-storage size allocated for the mysqlndbclient. 1Gi NA
/api/anti_pod_affinity Specifies rules that prevent certain pods from being scheduled on the same nodes as other specified pods. This helps to spread the workload and improve availability.

anti_affinity_topologyKey: kubernetes.io/hostname

anti_affinity_key: dbtierndbnode

anti_affinity_values:
- dbtierndbsqlnode

If necessary, change it to appropriate values supported by Kubernetes.
/api/nodeSelector List of node selector labels that correspond to the Kubernetes nodes where the api pods must be scheduled. {} Use this parameter to configure the worker node labels if you want to use the node selector labels for scheduling the pods. For example, if you want the api pods to be scheduled on worker node with label nodetype=api, then nodeSelector must be configured as follows:
nodeSelector: 
- nodetype: api

nodeSelector is disabled if this parameter is passed empty.

/api/use_pod_affinity_rules Determines whether pod placement in a Kubernetes cluster must consider pod affinity rules. These rules dictate how pods must be co-located based on certain criteria to optimize resource utilization and workload distribution. false NA
/api/pod_affinity Specifies the rules for scheduling pods to be co-located with other pods based on shared attributes, improving service communication and efficiency.

pod_affinity_topologyKey: failure-domain.beta.kubernetes.io/zone

pod_affinity_key: ndbnode

pod_affinity_values: - ndbsqlnode

If necessary, change it to appropriate values supported by Kubernetes.
/api/nodeAffinity/enable Indicates if node affinity is enabled. false Set this parameter to true if you want to enable and use node affinity rules for scheduling the pods.
/api/nodeAffinity/requiredDuringScheduling/enable Indicates if hard node affinity rules are enabled. true When this parameter is set to true, pods are scheduled only when the node affinity rules are met and are not scheduled if otherwise.
/api/nodeAffinity/requiredDuringScheduling/affinitykeyvalues List of node affinity rules.
- keyname: custom_key 
 keyvalues: 
   - customvalue1   
   - customvalue2
Configure keyname and keyvalues with the key and value of the label of the worker node where the ndbmysqld API pods must be scheduled.
For example, if you want the ndbmysqld API pods to be scheduled on the worker node with label topology.kubernetes.io/zone=antarctica-east1 or topology.kubernetes.io/zone=antarctica-east2, then affinitykeyvalues must be configured as follows:
affinitykeyvalues:
      - keyname: topology.kubernetes.io/zone
        keyvalues: 
        - antarctica-east1
        - antarctica-east2
/api/nodeAffinity/preferredDuringScheduling/enable Indicates if soft node affinity rules are enabled. false When this parameter is set to true, the scheduler tries to schedule the pods on the worker nodes that meet the affinity rules. However, if none of the worker nodes meet the affinity rules, the pods get scheduled on other worker nodes.

Note: You can enable only one of preferredDuringScheduling and requiredDuringScheduling at a time.

/api/nodeAffinity/preferredDuringScheduling/expressions List of node affinity rules.
- weight: 1
  affinitykeyvalues:
  - keyname: custom_key
    keyvalues: 
    - customvalue1
    - customvalue2
Configure keyname and keyvalues with the key and value of the label of the worker node where ndbmysqld API pods are preferred to be scheduled.

The value of weight can range between 1 and 100. The higher the value of weight, the higher the preference of the rule while scheduling the pods.

For example, if you want the ndbmysqld pods to be scheduled on worker node with label topology.kubernetes.io/zone=antarctica-east1 or topology.kubernetes.io/zone=antarctica-east2, then configure the affinitykeyvalues as follows:
- weight: 1
  affinitykeyvalues:
  - keyname: topology.kubernetes.io/zone
    keyvalues: 
    - antarctica-east1
    - antarctica-east2
Refer to the following example if you want to configure multiple soft node affinity rules with different weights:
- weight: 30
  affinitykeyvalues:
  - keyname: topology.kubernetes.io/zone
    keyvalues: 
    - antarctica-east1
    - antarctica-east2
- weight: 80
  affinitykeyvalues:
  - keyname: nodetype
    keyvalues: 
    - api
    - ndbmysqld

In this case, more preference is given to the worker nodes with labels matching node type api or ndbmysqld, as this rule has the greater weight.

/api/service/labels Kubernetes resource which uses labels to organize and manage network access for SQL pods. The labels attached to the service help identify and categorize it for integration with other systems.

- app: occne_infra

- cis.f5.com/as3-tenant: occne_infra

- cis.f5.com/as3-app: svc_occne_infra_sqlnode

- cis.f5.com/as3-pool: svc_occne_infra_pool

If necessary, change it to appropriate values supported by Kubernetes.
/api/externalService/type Specifies the type of external service. LoadBalancer NA
/api/externalService/annotations Specifies the annotations that are used for the external service.
metallb.universe.tf/address-pool: oam
oracle.com.cnc/app-protocols: '{"tcp":"TCP"}'
Annotations in Kubernetes are key-value pairs attached to objects such as pods, deployments, or services. They are used to store additional information about objects for tools and integrations to utilize.

If necessary, change it to appropriate values supported by Kubernetes.

/api/externalService/sqlgeorepsvclabels[0]/loadBalancerIP Fixed LoadBalancer IP for ndbmysqldsvc-0. "" Configure the LoadBalancer IP for ndbmysqldsvc-0.
/api/externalService/sqlgeorepsvclabels[0]/annotations Annotations for ndbmysqldsvc-0 LoadBalancer service. {} Configure the different annotations for ndbmysqldsvc-0 LoadBalancer service.
/api/externalService/sqlgeorepsvclabels[0]/labels Kubernetes resource that uses labels to organize and manage network access for ndbmysqldsvc-0. The labels attached to the service help identify and categorize it for integration with other systems.

- app: occne_infra

- cis.f5.com/as3-tenant: occne_infra

- cis.f5.com/as3-app: svc_occne_infra_sqlnode0

- cis.f5.com/as3-pool: svc_occne_infra_pool0

If necessary, change it to appropriate values supported by Kubernetes.
/api/externalService/sqlgeorepsvclabels[1]/loadBalancerIP Fixed LoadBalancer IP for ndbmysqldsvc-1. "" Configure the LoadBalancer IP for ndbmysqldsvc-1.
/api/externalService/sqlgeorepsvclabels[1]/annotations Annotations for ndbmysqldsvc-1 LoadBalancer service. {} Configure the different annotations for ndbmysqldsvc-1 LoadBalancer service.
/api/externalService/sqlgeorepsvclabels[1]/labels Kubernetes resource that uses labels to organize and manage network access for ndbmysqldsvc-1. The labels attached to the service help identify and categorize it for integration with other systems.

- app: occne_infra

- cis.f5.com/as3-tenant: occne_infra

- cis.f5.com/as3-app: svc_occne_infra_sqlnode1

- cis.f5.com/as3-pool: svc_occne_infra_pool1

If necessary, change it to appropriate values supported by Kubernetes.
/api/externalService/sqlgeorepsvclabels[2]/loadBalancerIP Fixed LoadBalancer IP for ndbmysqldsvc-2. "" Configure the LoadBalancer IP for ndbmysqldsvc-2.
/api/externalService/sqlgeorepsvclabels[2]/annotations Annotations for ndbmysqldsvc-2 LoadBalancer service. {} Configure the different annotations for ndbmysqldsvc-2 LoadBalancer service.
/api/externalService/sqlgeorepsvclabels[2]/labels Kubernetes resource that uses labels to organize and manage network access for ndbmysqldsvc-2. The labels attached to the service help identify and categorize it for integration with other systems.

- app: occne_infra

- cis.f5.com/as3-tenant: occne_infra

- cis.f5.com/as3-app: svc_occne_infra_sqlnode2

- cis.f5.com/as3-pool: svc_occne_infra_pool2

If necessary, change it to appropriate values supported by Kubernetes.
/api/externalService/sqlgeorepsvclabels[3]/loadBalancerIP Fixed LoadBalancer IP for ndbmysqldsvc-3. "" Configure the LoadBalancer IP for ndbmysqldsvc-3.
/api/externalService/sqlgeorepsvclabels[3]/annotations Annotations for ndbmysqldsvc-3 LoadBalancer service. {} Configure the different annotations for ndbmysqldsvc-3 LoadBalancer service.
/api/externalService/sqlgeorepsvclabels[3]/labels Kubernetes resource that uses labels to organize and manage network access for ndbmysqldsvc-3. The labels attached to the service help identify and categorize it for integration with other systems.

- app: occne_infra

- cis.f5.com/as3-tenant: occne_infra

- cis.f5.com/as3-app: svc_occne_infra_sqlnode3

- cis.f5.com/as3-pool: svc_occne_infra_pool3

If necessary, change it to appropriate values supported by Kubernetes.
/api/externalService/labels Kubernetes resource that uses labels to organize and manage network access for external services. The labels attached to the service help identify and categorize it for integration with other systems.

- app1: occne_infra

- cis.f5.com/as3-tenant1: occne_infra

- cis.f5.com/as3-app1: svc_occne_infra_sqlnode

- cis.f5.com/as3-pool1: svc_occne_infra_pool

If necessary, change it to appropriate values supported by Kubernetes.
/api/connectivityService/name Specifies the name of the connectivity service. mysql-connectivity-service NA
/api/connectivityService/multus/enable If this parameter is enabled, the connectivity service endpoints are generated from the multus IPs. false Set this to true if:
  • multus is enabled globally
  • the SQL pods are configured with the multus annotation
  • the connectivity service needs to be created from the multus IPs
/api/connectivityService/multus/networkAttachmentDefinationApiName The API version for NetworkAttachmentDefinition resources used in Kubernetes. These resources define additional network interfaces for pods, allowing them to connect to multiple networks. k8s.v1.cni.cncf.io If necessary, change it to appropriate values supported by Kubernetes.
/api/connectivityService/multus/networkAttachmentDefinationTagName Provide the NAD file name for the connectivity service. "" Give the name of the NAD that the connectivity service uses to get the multus IP from the SQL pods.
/api/connectivityService/labels Kubernetes resource that uses labels to organize and manage network access for connectivity services. The labels attached to the service help identify and categorize it for integration with other systems.

- app: occne_infra

- cis.f5.com/as3-tenant: occne_infra

- cis.f5.com/as3-app: svc_occne_infra_sqlnode

- cis.f5.com/as3-pool: svc_occne_infra_pool

If necessary, change it to appropriate values supported by Kubernetes.
/api/connectivityService/selector This parameter for connectivity service is used to specify a set of criteria for selecting objects based on their labels. - isNodeForConnectivity: "true" NA
/api/externalconnectivityService/enable Indicates if external connectivity service is enabled. false NA
/api/externalconnectivityService/selector This parameter for external connectivity service is used to specify a set of criteria for selecting objects based on their labels. - isNodeForExternalConnectivity: "false" NA
/api/ndbapp/anti_pod_affinity Specifies the rules that prevent certain pods from being scheduled on the same nodes as other specified pods, helping to spread the workload and improve availability.

anti_affinity_topologyKey: kubernetes.io/hostname

anti_affinity_key: dbtierndbnode

anti_affinity_values:
- dbtierndbappnode

If necessary, change it to appropriate values supported by Kubernetes.
/api/ndbapp/nodeSelector List of node selector labels that correspond to the Kubernetes nodes where the ndbapp pods must be scheduled. {} Use this parameter to configure the worker node labels if you want to use the node selector labels for scheduling the pods. For example, if you want the ndbapp pods to be scheduled on worker node with label nodetype=ndbapp, then nodeSelector must be configured as follows:
nodeSelector: 
- nodetype: ndbapp

nodeSelector is disabled if this parameter is passed empty.

/api/ndbapp/use_pod_affinity_rules Determines whether pod placement in a Kubernetes cluster must consider pod affinity rules. These rules dictate how pods must be co-located based on certain criteria to optimize resource utilization and workload distribution. false NA
/api/ndbapp/pod_affinity Specifies the rules for scheduling pods to be co-located with other pods based on shared attributes, improving service communication and efficiency.

pod_affinity_topologyKey: failure-domain.beta.kubernetes.io/zone

pod_affinity_key: ndbnode

pod_affinity_values: - ndbappnode

If necessary, change it to appropriate values supported by Kubernetes.
/api/ndbapp/nodeAffinity/enable Indicates if node affinity is enabled. false Set this parameter to true if you want to enable and use node affinity rules for scheduling the pods.
/api/ndbapp/nodeAffinity/requiredDuringScheduling/enable Indicates if hard node affinity rules are enabled. true When this parameter is set to true, pods are scheduled only when the node affinity rules are met and are not scheduled if otherwise.
/api/ndbapp/nodeAffinity/requiredDuringScheduling/affinitykeyvalues List of node affinity rules.
- keyname: custom_key
   keyvalues: 
     - customvalue1
     - customvalue2
Configure keyname and keyvalues with the key and value of the label of the worker node where the ndbappmysqld API pods must be scheduled.
For example, if you want the ndbappmysqld API pods to be scheduled on the worker node with label topology.kubernetes.io/zone=antarctica-east1 or topology.kubernetes.io/zone=antarctica-east2, then affinitykeyvalues must be configured as follows:
affinitykeyvalues:
      - keyname: topology.kubernetes.io/zone
        keyvalues: 
        - antarctica-east1
        - antarctica-east2
/api/ndbapp/nodeAffinity/preferredDuringScheduling/enable Indicates if soft node affinity rules are enabled. false When this parameter is set to true, the scheduler tries to schedule the pods on the worker nodes that meet the affinity rules. However, if none of the worker nodes meet the affinity rules, the pods get scheduled on other worker nodes.

Note: You can enable only one of preferredDuringScheduling and requiredDuringScheduling at a time.

/api/ndbapp/nodeAffinity/preferredDuringScheduling/expressions List of node affinity rules.
- weight: 1
  affinitykeyvalues:
  - keyname: custom_key
    keyvalues: 
    - customvalue1
    - customvalue2
Configure keyname and keyvalues with the key and value of the label of the worker node where ndbappmysqld pods are preferred to be scheduled.

The value of weight can range between 1 and 100. The higher the value of weight, the higher the preference of the rule while scheduling the pods.

For example, if you want the ndbappmysqld pods to be scheduled on worker node with label topology.kubernetes.io/zone=antarctica-east1 or topology.kubernetes.io/zone=antarctica-east2, then configure the affinitykeyvalues as follows:
- weight: 1
  affinitykeyvalues:
  - keyname: topology.kubernetes.io/zone
    keyvalues: 
    - antarctica-east1
    - antarctica-east2
Refer to the following example if you want to configure multiple soft node affinity rules with different weights:
- weight: 30
  affinitykeyvalues:
  - keyname: topology.kubernetes.io/zone
    keyvalues: 
    - antarctica-east1
    - antarctica-east2
- weight: 80
  affinitykeyvalues:
  - keyname: nodetype
    keyvalues: 
    - api
    - ndbappmysqld

In this case, more preference is given to the worker nodes with labels matching node type api or ndbappmysqld, as this rule has the greater weight.

/api/ndbapp/horizontalPodAutoscaler/memory/enabled Enables horizontal pod autoscaling on the basis of memory consumption. true If enabled, horizontal pod autoscaling is performed on the basis of the memory consumption of the ndbappmysqld pods. A sketch of the autoscaler settings appears at the end of this table.
/api/ndbapp/horizontalPodAutoscaler/memory/averageUtilization Defines the percentage of average memory utilization of the ndbappmysqld pods that triggers the horizontal pod autoscaling. 80 Defines the percentage of average memory utilization of the ndbappmysqld pods that triggers the horizontal pod autoscaling.
/api/ndbapp/horizontalPodAutoscaler/cpu/enabled Enables horizontal pod autoscaling on the basis of CPU consumption. false If enabled, horizontal pod autoscaling is performed on the basis of the CPU consumption of the ndbappmysqld pods.
/api/ndbapp/horizontalPodAutoscaler/cpu/averageUtilization Defines the percentage of average CPU utilization of the ndbappmysqld pods that triggers the horizontal pod autoscaling. 80 Defines the percentage of average CPU utilization of the ndbappmysqld pods that triggers the horizontal pod autoscaling.
/api/ndbapp/resources/limits/cpu Maximum CPU count limit allocated for the SQL or API node not participating in georeplication. 8 Maximum amount of CPU that Kubernetes allows the SQL or API node, that is not participating in georeplication, to use.
/api/ndbapp/resources/limits/memory Maximum memory size limit allocated for the SQL or API node not participating in georeplication. 10Gi Memory limit for each SQL or API node not participating in georeplication.
/api/ndbapp/resources/limits/ephemeral-storage Maximum limit on the ephemeral-storage size allocated for the SQL or API node not participating in georeplication. 1Gi Ephemeral storage limit for each SQL or API node not participating in georeplication.
/api/ndbapp/resources/requests/cpu Required CPU count allocated for the SQL or API node not participating in georeplication. 8 CPU allotment for each SQL or API node not participating in georeplication.
/api/ndbapp/resources/requests/memory Required memory size allocated for the SQL or API node not participating in georeplication. 10Gi Memory allotment for each SQL or API node not participating in georeplication.
/api/ndbapp/resources/requests/ephemeral-storage Required ephemeral storage size allocated for the SQL or API node not participating in georeplication. 90Mi Ephemeral storage allotment for each SQL or API node not participating in georeplication.
/api/ndbapp/annotations Specifies the annotations used for the application SQL node. - sidecar.istio.io/inject: "true" Annotations in Kubernetes are key-value pairs attached to objects such as pods, deployments, or services. They are used to store additional information about objects for tools and integrations to utilize.

If necessary, change it to appropriate values supported by Kubernetes.

/api/ndbapp/commonlabels Specifies the common labels used for the application SQL node.

- nf-vendor: oracle

- nf-instance: oc-cndbtier

- nf-type: database

- component: database

Common labels are the key-value pairs used to categorize and identify Kubernetes objects. This helps to manage and organize resources more effectively.

If necessary, change it to appropriate values supported by Kubernetes.

/api/ndbapp/service/labels Kubernetes resource that uses labels to organize and manage network access for application SQL pods. The labels attached to the service help identify and categorize it for integration with other systems.

- app: occne_infra

- cis.f5.com/as3-tenant: occne_infra

- cis.f5.com/as3-app: svc_occne_infra_sqlnode

- cis.f5.com/as3-pool: svc_occne_infra_pool

If necessary, change it to appropriate values supported by Kubernetes.
/api/ndbapp/connectivityService/name Specifies the name of the connectivity service. mysql-connectivity-service NA
/api/ndbapp/connectivityService/labels Kubernetes resource that uses labels to organize and manage network access for connectivity services. The labels attached to the service help identify and categorize it for integration with other systems.

- app: occne_infra

- cis.f5.com/as3-tenant: occne_infra

- cis.f5.com/as3-app: svc_occne_infra_sqlnode

- cis.f5.com/as3-pool: svc_occne_infra_pool

If necessary, change it to appropriate values supported by Kubernetes.
/api/ndbapp/connectivityService/selector This parameter for connectivity service is used to specify a set of criteria for selecting objects based on their labels. - isNodeForConnectivity: "true" NA
/api/ndbapp/connectivityService/usendbappselector This selector determines whether the connectivity service points to the non-georeplication pods only or to all SQL pods. true Set this to true if you want the connectivity service to point only to the non-georeplication pods. If set to false, the connectivity service points to all SQL pods.
/api/ndbapp/connectivityService/ndbappconnetselector This parameter is used to specify a set of criteria for selecting objects based on their labels. - ConnectNodeForConnectivity: "ndbapp" NA
/api/ndbapp/externalconnectivityService/enable Indicates if external connectivity service is enabled. false NA
/api/ndbapp/externalconnectivityService/name Specifies the name of the external service. mysql-external-connectivity-service NA
/api/ndbapp/externalconnectivityService/type Specifies the type of external service. LoadBalancer NA
/api/ndbapp/externalconnectivityService/loadBalancerIP Fixed LoadBalancer IP for mysql-external-connectivity-service. "" Configure the LoadBalancer IP for mysql-external-connectivity-service.
/api/ndbapp/externalconnectivityService/annotations Specifies the annotations used for external services.
metallb.universe.tf/address-pool: oam
oracle.com.cnc/app-protocols: '{"tcp":"TCP"}'
Annotations in Kubernetes are key-value pairs attached to objects such as pods, deployments, or services. They are used to store additional information about objects for tools and integrations to utilize.

If necessary, change it to appropriate values supported by Kubernetes.

/api/ndbapp/externalconnectivityService/labels Kubernetes resource that uses labels to organize and manage network access for connectivity services. The labels attached to the service help identify and categorize it for integration with other systems.

- app: occne_infra

- cis.f5.com/as3-tenant: occne_infra

- cis.f5.com/as3-app: svc_occne_infra_external_connect_svc

- cis.f5.com/as3-pool: svc_occne_infra_external_connect_pool

If necessary, change it to appropriate values supported by Kubernetes.
/api/ndbapp/externalconnectivityService/selector This parameter for external connectivity service is used to specify a set of criteria for selecting objects based on their labels. - isNodeForExternalConnectivity: "true" NA
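The following is a minimal sketch of the API external service and ndbapp autoscaler settings in the custom_values.yaml file. It assumes the YAML nesting mirrors the parameter paths in this table and uses the default values listed above; only a subset of the parameters is shown.

api:
  externalService:
    type: LoadBalancer
    sqlgeorepsvclabels:
      - loadBalancerIP: ""           # fixed IP for ndbmysqldsvc-0; empty to auto-assign
        annotations: {}
  ndbapp:
    horizontalPodAutoscaler:
      memory:
        enabled: true
        averageUtilization: 80       # percent average memory utilization that triggers scaling
      cpu:
        enabled: false
        averageUtilization: 80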

3.5 DB Replication Service Parameters

The following table provides a list of database replication service parameters.

Table 3-5 DB Replication Service Parameters

Parameter Description Default Value Notes
/db-replication-svc/enabled This parameter enables the replication service. true NA
/db-replication-svc/nodeSelector List of node selector labels that correspond to the Kubernetes nodes where the db-replication-svc pods must be scheduled. {} Use this parameter to configure the worker node labels if you want to use the node selector labels for scheduling the pods. For example, if you want the db-replication-svc pods to be scheduled on worker node with label nodetype=replsvc, then nodeSelector must be configured as follows:
nodeSelector: 
- nodetype: replsvc

nodeSelector is disabled if this parameter is passed empty.

/db-replication-svc/nodeAffinity/enable Indicates if node affinity is enabled. false Set this parameter to true if you want to enable and use node affinity rules for scheduling the pods.
/db-replication-svc/nodeAffinity/requiredDuringScheduling/enable Indicates if hard node affinity rules are enabled. true When this parameter is set to true, pods are scheduled only when the node affinity rules are met and are not scheduled if otherwise.
/db-replication-svc/nodeAffinity/requiredDuringScheduling/affinitykeyvalues List of node affinity rules.
- keyname: custom_key
   keyvalues: 
     - customvalue1
     - customvalue2
Configure keyname and keyvalues with the key and value of the label of the worker node where the replication service pods must be scheduled.
For example, if you want the replication service pods to be scheduled on the worker node with label topology.kubernetes.io/zone=antarctica-east1 or topology.kubernetes.io/zone=antarctica-east2, then affinitykeyvalues must be configured as follows:
affinitykeyvalues:
      - keyname: topology.kubernetes.io/zone
        keyvalues: 
        - antarctica-east1
        - antarctica-east2
/db-replication-svc/nodeAffinity/preferredDuringScheduling/enable Indicates if soft node affinity rules are enabled. false When this parameter is set to true, the scheduler tries to schedule the pods on the worker nodes that meet the affinity rules. However, if none of the worker nodes meet the affinity rules, the pods get scheduled on other worker nodes.

Note: You can enable only one of preferredDuringScheduling and requiredDuringScheduling at a time.

/db-replication-svc/nodeAffinity/preferredDuringScheduling/expressions List of node affinity rules.
- weight: 1
  affinitykeyvalues:
  - keyname: custom_key
    keyvalues: 
    - customvalue1
    - customvalue2
Configure keyname and keyvalues with the key and value of the label of the worker node where replication service pods are preferred to be scheduled.

The value of weight can range between 1 and 100. The higher the value of weight, the higher the preference of the rule while scheduling the pods.

For example, if you want the replication service pods to be scheduled on worker node with label topology.kubernetes.io/zone=antarctica-east1 or topology.kubernetes.io/zone=antarctica-east2, then configure the affinitykeyvalues as follows:
- weight: 1
  affinitykeyvalues:
  - keyname: topology.kubernetes.io/zone
    keyvalues: 
    - antarctica-east1
    - antarctica-east2
Refer to the following example if you want to configure multiple soft node affinity rules with different weights:
- weight: 30
  affinitykeyvalues:
  - keyname: topology.kubernetes.io/zone
    keyvalues: 
    - antarctica-east1
    - antarctica-east2
- weight: 80
  affinitykeyvalues:
  - keyname: nodetype
    keyvalues: 
    - dbtierservice
    - replication

In this case, more preference is given to the worker nodes with labels matching node type dbtierservice or replication, as this rule has the greater weight.

/db-replication-svc/useClusterIpForReplication This parameter controls the setup mode for db_replication_svc. This includes deciding if the system uses LoadBalancer's CLUSTER-IP or EXTERNAL-IP for replication. false NA
/db-replication-svc/image/repository Specifies the name of the repository where replication service image is stored. db_replication_svc NA
/db-replication-svc/image/tag Specifies the docker image version of the replication service. 24.3.1 NA
/db-replication-svc/image/pullPolicy Specifies the image pull policy for the replication service docker image. IfNotPresent NA
/db-replication-svc/dbreplsvcdeployments[0]/name Name of the replication service, which is a combination of the site name and the mate site name. cndbtiersitename-cndbtierfirstmatesitename-replication-svc

Replace <${OCCNE_SITE_NAME}>-<${OCCNE_MATE_SITE_NAME}>-replication-svc with the OCCNE_SITE_NAME and OCCNE_MATE_SITE_NAME you used.

For example: cndbtiersitename-cndbtierfirstmatesitename-replication-svc

/db-replication-svc/dbreplsvcdeployments[0]/enabled Indicates whether the leader replication service is enabled. false Set this parameter to true if you want the replication service pod to exist.

Note: This parameter must be set to true if secure transfer of backup to remote server is enabled or if you want to enable replication across multiple sites.

/db-replication-svc/dbreplsvcdeployments[0]/multus/enable Set this to true if you want to use the multus IP, rather than the LoadBalancer IP, to communicate with the remote site. false If set to true, the replication service on the local site communicates with the remote site using the multus IP rather than the LoadBalancer IP.
/db-replication-svc/dbreplsvcdeployments[0]/multus/networkAttachmentDefinationApiName Specifies the API version for the NetworkAttachmentDefinition resources used in Kubernetes. These resources define additional network interfaces for pods, allowing them to connect to multiple networks. k8s.v1.cni.cncf.io If necessary, change it to a suitable value supported by Kubernetes.
/db-replication-svc/dbreplsvcdeployments[0]/multus/networkAttachmentDefinationTagName Provide the name of the NAD file that is given as a pod annotation to the replication pod. "" Give the same network attachment definition file name that is given as a pod annotation to the replication deployment.
/db-replication-svc/dbreplsvcdeployments[0]/podDisruptionBudget/enabled Determines if PodDisruptionBudget is enabled. true Pod Disruption Budget (PDB) in Kubernetes is a policy that limits the number of concurrent disruptions that can affect a collection of pods. This ensures that the application maintains a certain level of availability even during voluntary disruptions, such as upgrades or maintenance.
/db-replication-svc/dbreplsvcdeployments[0]/podDisruptionBudget/maxUnavailable Specifies the maximum number of pods that can be unavailable during a disruption. 1 The default value "1" indicates that, at most one pod can be unavailable at any time.
/db-replication-svc/dbreplsvcdeployments[0]/podDisruptionBudget/labels Optional section to specify labels to select the pods that the PDB applies to. {} An empty dictionary {} means that PDB applies to all pods in the deployment or replica set.
/db-replication-svc/dbreplsvcdeployments[0]/mysql/dbtierservice Specifies the name of the MySQL connectivity service. mysql-connectivity-service NA
/db-replication-svc/dbreplsvcdeployments[0]/mysql/dbtierreplservice Specifies the name of the MySQL replication service. ndbmysqldsvc NA
/db-replication-svc/dbreplsvcdeployments[0]/mysql/primaryhost The CLUSTER-IP from the ndbmysqldsvc-0 LoadBalancer service. ndbmysqld-0.ndbmysqldsvc.occne-cndbtier.svc.cluster.local Replace ndbmysqld-0.ndbmysqldsvc.<${OCCNE_NAMESPACE}>.svc.<${OCCNE_DOMAIN}> with the OCCNE_NAMESPACE and OCCNE_DOMAIN you used. For example, ndbmysqld-0.ndbmysqldsvc.occne-cndbtier.svc.occne1-cgbu-cne-dbtier.

If you are deploying cnDBTier with a pod prefix, then ensure that the ndbmysqld pod has the prefix. That is, replace ndbmysqld-0 with the actual pod name, including the prefix. For example, if the value of the /global/k8sResource/container/prefix configuration is "cnc-db-prod", then replace ndbmysqld-0 with cnc-db-prod-ndbmysqld-0.

/db-replication-svc/dbreplsvcdeployments[0]/mysql/port Specifies the port number used by the service to listen for network connections. 3306 NA
/db-replication-svc/dbreplsvcdeployments[0]/mysql/primarysignalhost The EXTERNAL-IP from the ndbmysqldsvc-0 LoadBalancer service. "" EXTERNAL-IP address assigned to the ndbmysqldsvc-0 service, which is used for establishing the primary georeplication channel with the remote site.
/db-replication-svc/dbreplsvcdeployments[0]/mysql/primarysignalhostmultusconfig/multusEnabled If set to true, the primary replication channel is set up using the multus IP. false Set this to true if you want the primary replication channel to be set up with the multus IP address.
/db-replication-svc/dbreplsvcdeployments[0]/mysql/primarysignalhostmultusconfig/networkAttachmentDefinationApiName Specifies the API version for NetworkAttachmentDefinition resources used in Kubernetes. These resources define additional network interfaces for pods, allowing them to connect to multiple networks. k8s.v1.cni.cncf.io If necessary, change it to a suitable value supported by Kubernetes.
/db-replication-svc/dbreplsvcdeployments[0]/mysql/primarysignalhostmultusconfig/networkAttachmentDefinationTagName Specifies the NAD file name that is used to identify the multus IP from the ndbmysqld pods and set up the primary replication channel. "" If set, cnDBTier uses this name to identify the multus IP from the ndbmysqld pods and uses that IP to set up the primary replication channel.
/db-replication-svc/dbreplsvcdeployments[0]/mysql/primaryhostserverid The unique SQL server ID for the primary SQL node. 1000 For the primary SQL node of site 1, set it to 1000. For the primary SQL node of site 2, set it to 2000. For the primary SQL node of site 3, set it to 3000.
Calculate the server ID using the following formula (see the consolidated deployment sketch at the end of this section):
server_id = siteid * 1000 + ndbmysqld_pod_index
For example, if site ID = 1, then:
  • server ID for ndbmysqld-0 = 1 * 1000 + 0 = 1000
  • server ID for ndbmysqld-1 = 1 * 1000 + 1 = 1001
/db-replication-svc/dbreplsvcdeployments[0]/mysql/secondaryhost The CLUSTER-IP from the ndbmysqldsvc-1 LoadBalancer service. ndbmysqld-1.ndbmysqldsvc.occne-cndbtier.svc.cluster.local Replace ndbmysqld-1.ndbmysqldsvc.<${OCCNE_NAMESPACE}>.svc.<${OCCNE_DOMAIN}> with the OCCNE_NAMESPACE and OCCNE_DOMAIN you used. For example, ndbmysqld-1.ndbmysqldsvc.occne-cndbtier.svc.occne1-cgbu-cne-dbtier.

If you are deploying cnDBTier with a pod prefix, then ensure that the ndbmysqld pod has the prefix. That is, replace ndbmysqld-1 with the actual pod name, including the prefix. For example, if the value of the /global/k8sResource/container/prefix configuration is "cnc-db-prod", then replace ndbmysqld-1 with cnc-db-prod-ndbmysqld-1.

/db-replication-svc/dbreplsvcdeployments[0]/mysql/secondarysignalhost The EXTERNAL-IP from the ndbmysqldsvc-1 LoadBalancer service. "" EXTERNAL-IP address assigned to the ndbmysqldsvc-1 service, which is used for establishing the secondary georeplication channel with the remote site.
/db-replication-svc/dbreplsvcdeployments[0]/mysql/secondarysignalhostmultusconfig/multusEnabled If set to true, the secondary replication channel is set up using the multus IP. false Set this to true if you want the secondary replication channel to be set up with the multus IP address.
/db-replication-svc/dbreplsvcdeployments[0]/mysql/secondarysignalhostmultusconfig/networkAttachmentDefinationApiName Specifies the API version for NetworkAttachmentDefinition resources used in Kubernetes. These resources define additional network interfaces for pods, allowing them to connect to multiple networks. k8s.v1.cni.cncf.io If necessary, change it to a suitable value supported by Kubernetes.
/db-replication-svc/dbreplsvcdeployments[0]/mysql/secondarysignalhostmultusconfig/networkAttachmentDefinationTagName Specifies the NAD file name that is used to identify the multus IP from the ndbmysqld pods and set up the secondary replication channel. "" If set, cnDBTier uses this name to identify the multus IP from the ndbmysqld pods and uses that IP to set up the secondary replication channel.
/db-replication-svc/dbreplsvcdeployments[0]/mysql/secondaryhostserverid The unique SQL server ID for the secondary SQL node. 1001 For the secondary SQL node of site 1, set it to 1001. For the secondary SQL node of site 2, set it to 2001. For the secondary SQL node of site 3, set it to 3001.
Calculate the server ID using the following formula:
server_id = siteid * 1000 + ndbmysqld_pod_index
For example, if site ID = 1, then:
  • server ID for ndbmysqld-0 = 1 * 1000 + 0 = 1000
  • server ID for ndbmysqld-1 = 1 * 1000 + 1 = 1001
/db-replication-svc/dbreplsvcdeployments[0]/replication/localsiteip Local site replication service external IP assigned to the replication service. "" EXTERNAL-IP address assigned to the <${OCCNE_SITE_NAME}>-<${OCCNE_MATE_SITE_NAME}>-replication-svc in the current site.
/db-replication-svc/dbreplsvcdeployments[0]/replication/localsiteport Local site port for the current site that is being installed. "80" NA
/db-replication-svc/dbreplsvcdeployments[0]/replication/channelgroupid Replication channel group ID of the replication service pod that handles the configuration and monitoring of the replication channels of the primary and secondary SQL nodes which belong to this group ID. 1 Channel group identifier of the replication channel group.

Valid values: 1, 2

/db-replication-svc/dbreplsvcdeployments[0]/replication/matesitename The mate site for the current site that is being installed. cndbtierfirstmatesitename

Replace <${OCCNE_MATE_SITE_NAME}> with the OCCNE_MATE_SITE_NAME you used.

For example, cndbtierfirstmatesitename.

/db-replication-svc/dbreplsvcdeployments[0]/replication/remotesiteip The mate site replication service external IP for establishing georeplication. ""

For deploying cnDBTier site1, use "". For deploying cnDBTier site2, use the EXTERNAL-IP from the site1 occne-db-replication-svc LoadBalancer service.

For deploying cnDBTier site3, use the EXTERNAL-IP from the site1 occne-db-replication-svc LoadBalancer service.

Use the value from the OCCNE_MATE_REPLICATION_SVC environment variable.

/db-replication-svc/dbreplsvcdeployments[0]/replication/remotesiteport Mate site port for the current site that is being installed. "80" NA
/db-replication-svc/dbreplsvcdeployments[0]/pvc/name Name of the PVC that the replication service uses for fault recovery. pvc-cndbtiersitename-cndbtierfirstmatesitename-replication-svc Replace pvc-<${OCCNE_SITE_NAME}>-<${OCCNE_MATE_SITE_NAME}>-replication-svc with the OCCNE_SITE_NAME and OCCNE_MATE_SITE_NAME. For example: pvc-cndbtiersitename-cndbtierfirstmatesitename-replication-svc
/db-replication-svc/dbreplsvcdeployments[0]/pvc/disksize Size of the disk used to store the backup retrieved from the remote site and data nodes. 60Gi Size of the PVC to store the backup retrieved from the remote site and data nodes.
/db-replication-svc/dbreplsvcdeployments[0]/labels Provide specific pod labels apart from commonlabels. {} Set the labels for db-replication, apart from commonlabels, in the following format. For example: app-home: cndbtier
/db-replication-svc/dbreplsvcdeployments[0]/egressannotations Parameter to modify the existing Egress annotation to match the Egress annotation supported by Kubernetes. oracle.com.cnc/egress-network: "oam" Set this parameter to the appropriate annotation that is supported by Kubernetes.
/db-replication-svc/dbreplsvcdeployments[0]/podAnnotations This parameter allows you to add metadata annotations to the pods in a deployment. {} NA
/db-replication-svc/dbreplsvcdeployments[0]/schedulertimer The timer (in seconds) used for managing replication channels and heartbeat exchanges with remote sites. 5s This timer performs the following functions:
  • Setup Replication Channels: Periodically initiates and configures replication channels.
  • Monitor Replication Channels: Continuously checks the status and health of active replication channels.
  • Exchange Heartbeats: Sends and receives heartbeat signals with remote sites to maintain connectivity and detect failures.

Note: Consult with engineering before making any changes to this timer.

/db-replication-svc/dbreplsvcdeployments[0]/log/level Specifies the logging level for the replication service deployment. INFO NA
/db-replication-svc/dbreplsvcdeployments[0]/service/type Specifies the type of the replication service deployment. LoadBalancer NA
/db-replication-svc/dbreplsvcdeployments[0]/service/loadBalancerIP Fixed LoadBalancer IP for <${OCCNE_SITE_NAME}>-<${OCCNE_MATE_SITE_NAME}>-replication-svc "" Configure the LoadBalancer IP that must be assigned to <${OCCNE_SITE_NAME}>-<${OCCNE_MATE_SITE_NAME}>-replication-svc.
/db-replication-svc/dbreplsvcdeployments[0]/service/port This parameter specifies the port number used by the replication service deployment to listen for network connections. 80 NA
/db-replication-svc/dbreplsvcdeployments[0]/service/labels Labels to organize and manage network access for replication service deployment. The labels attached to the service help identify and categorize it for integration with other systems.

app: occne_infra

cis.f5.com/as3-tenant: occne_infra

cis.f5.com/as3-app: svc_occne_infra_db_repl_svc_1

cis.f5.com/as3-pool: svc_occne_infra_db_repl_svc_pool1

If necessary, change it to appropriate values supported by Kubernetes.
/db-replication-svc/dbreplsvcdeployments[0]/service/annotations Annotations used for replication service deployment. metallb.universe.tf/address-pool: oam oracle.com.cnc/app-protocols: '{"http":"TCP"},{"sftp":"TCP"}' Annotations in Kubernetes are the key-value pairs attached to objects such as pods, deployments, or services. They are used to store additional information about objects for tools and integrations to utilize.

If necessary, change it to appropriate values supported by Kubernetes.
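Taken together, the service parameters above form a block like the following sketch for this deployment; the fixed LoadBalancer IP 10.0.0.10 is a hypothetical placeholder, and the labels and annotations shown are the documented defaults:

service:
  type: LoadBalancer
  loadBalancerIP: "10.0.0.10"   # hypothetical fixed IP; leave "" to let the load balancer assign one
  port: 80
  labels:
    app: occne_infra
    cis.f5.com/as3-tenant: occne_infra
    cis.f5.com/as3-app: svc_occne_infra_db_repl_svc_1
    cis.f5.com/as3-pool: svc_occne_infra_db_repl_svc_pool1
  annotations:
    metallb.universe.tf/address-pool: oam
    oracle.com.cnc/app-protocols: '{"http":"TCP"},{"sftp":"TCP"}'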

/db-replication-svc/dbreplsvcdeployments[1]/name Name of the replication service, that is, a combination of the site name and the second mate site name. cndbtiersitename-cndbtiersecondmatesitename-replication-svc

Replace <${OCCNE_SITE_NAME}>-<${OCCNE_SECOND_MATE_SITE_NAME}>-replication-svc with the OCCNE_SITE_NAME and OCCNE_SECOND_MATE_SITE_NAME you used.

For example, chicago-pacific-replication-svc.

/db-replication-svc/dbreplsvcdeployments[1]/enabled Indicates whether the second replication service deployment is enabled. true

In case of three-site replication, a second mate site exists for each site, so set enabled to true.

In case of two-site replication, only one mate site exists, so set enabled to false.

/db-replication-svc/dbreplsvcdeployments[1]/multus/enable Determines if Multus IP is used to communicate to remote sites. false Set it to true if you want to use the Multus IP to communicate with the remote site rather than the LoadBalancer IP.
/db-replication-svc/dbreplsvcdeployments[1]/multus/networkAttachmentDefinationApiName This parameter specifies the API version for the NetworkAttachmentDefinition resources used in Kubernetes. These resources define additional network interfaces for pods, allowing them to connect to multiple networks. k8s.v1.cni.cncf.io If necessary, change it to a suitable value supported by Kubernetes.
/db-replication-svc/dbreplsvcdeployments[1]/multus/networkAttachmentDefinationTagName The name of the NAD file that is given as a pod annotation to the replication pod. "" You must provide the same NAD file name that is given as a pod annotation to the replication deployment.
/db-replication-svc/dbreplsvcdeployments[1]/podDisruptionBudget/enabled Determines if PodDisruptionBudget is enabled. true Pod Disruption Budget (PDB) in Kubernetes is a policy that limits the number of concurrent disruptions that can affect a collection of pods. This ensures that the application maintains a certain level of availability even during voluntary disruptions, such as upgrades or maintenance.
/db-replication-svc/dbreplsvcdeployments[1]/podDisruptionBudget/maxUnavailable Specifies the maximum number of pods that can be unavailable during a disruption. 1 The default value "1" indicates that at most one pod can be unavailable at any time.
/db-replication-svc/dbreplsvcdeployments[1]/podDisruptionBudget/labels Optional section to specify labels to select the pods that the PDB applies to. {} An empty dictionary {} means that PDB applies to all pods in the deployment or replica set.
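For reference, the three PodDisruptionBudget parameters above map to a block like the following sketch (the values shown are the documented defaults):

podDisruptionBudget:
  enabled: true
  maxUnavailable: 1   # at most one pod unavailable at a time
  labels: {}          # empty: the PDB applies to all pods in the deployment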
/db-replication-svc/dbreplsvcdeployments[1]/mysql/dbtierservice Specifies the name of the MySQL connectivity service. mysql-connectivity-service NA
/db-replication-svc/dbreplsvcdeployments[1]/mysql/dbtierreplservice Specifies the name of the MySQL replication service. ndbmysqldsvc NA
/db-replication-svc/dbreplsvcdeployments[1]/mysql/primaryhost The CLUSTER-IP from the ndbmysqldsvc-2 LoadBalancer service. ndbmysqld-2.ndbmysqldsvc.occne-cndbtier.svc.cluster.local Replace ndbmysqld-2.ndbmysqldsvc.<${OCCNE_NAMESPACE}>.svc.<${OCCNE_DOMAIN}> with the OCCNE_NAMESPACE and OCCNE_DOMAIN you used. For example, ndbmysqld-2.ndbmysqldsvc.occne-cndbtier.svc.occne1-cgbu-cne-dbtier.

If you are deploying cnDBTier with a pod prefix, then ensure that the ndbmysqld pod name includes the prefix. That is, replace ndbmysqld-2 with the actual pod name including the prefix. For example, if the value of the /global/k8sResource/container/prefix configuration is "cnc-db-prod", then replace ndbmysqld-2 with cnc-db-prod-ndbmysqld-2.

/db-replication-svc/dbreplsvcdeployments[1]/mysql/port Specifies the port number used by the service to listen for network connections. 3306 NA
/db-replication-svc/dbreplsvcdeployments[1]/mysql/primarysignalhost The EXTERNAL-IP from the ndbmysqldsvc-2 LoadBalancer service. "" EXTERNAL-IP address assigned to the ndbmysqldsvc-2 service, which is used for establishing the primary georeplication channel with the remote site.
/db-replication-svc/dbreplsvcdeployments[1]/mysql/primarysignalhostmultusconfig/multusEnabled If set to true, then the primary replication channel is set up using the Multus IP. false Set it to true if you want the primary replication channel to be set up with the Multus IP address.
/db-replication-svc/dbreplsvcdeployments[1]/mysql/primarysignalhostmultusconfig/networkAttachmentDefinationApiName Specifies the API version for NetworkAttachmentDefinition resources used in Kubernetes. These resources define additional network interfaces for pods, allowing them to connect to multiple networks. k8s.v1.cni.cncf.io If necessary, change it to a suitable value supported by Kubernetes.
/db-replication-svc/dbreplsvcdeployments[1]/mysql/primarysignalhostmultusconfig/networkAttachmentDefinationTagName Specifies the NAD file name that is used to identify the Multus IP from the ndbmysqld pods. This file name is then used to set up the primary replication channel. "" If set, then cnDBTier uses the same name to identify the Multus IP from the ndbmysqld pods and uses the same IP to set up the primary replication channel.
/db-replication-svc/dbreplsvcdeployments[1]/mysql/primaryhostserverid The unique SQL server ID for the primary SQL node. 1002 For the site 1 primary SQL node, set it to 1002. For the site 2 primary SQL node, set it to 2002. For the site 3 primary SQL node, set it to 3002.
/db-replication-svc/dbreplsvcdeployments[1]/mysql/secondaryhost The CLUSTER-IP from the ndbmysqldsvc-3 LoadBalancer service. ndbmysqld-3.ndbmysqldsvc.occne-cndbtier.svc.cluster.local Replace ndbmysqld-3.ndbmysqldsvc.<${OCCNE_NAMESPACE}>.svc.<${OCCNE_DOMAIN}> with the OCCNE_NAMESPACE and OCCNE_DOMAIN you used. For example, ndbmysqld-3.ndbmysqldsvc.occne-cndbtier.svc.occne1-cgbu-cne-dbtier.

If you are deploying cnDBTier with a pod prefix, then ensure that the ndbmysqld pod name includes the prefix. That is, replace ndbmysqld-3 with the actual pod name including the prefix. For example, if the value of the /global/k8sResource/container/prefix configuration is "cnc-db-prod", then replace ndbmysqld-3 with cnc-db-prod-ndbmysqld-3.

/db-replication-svc/dbreplsvcdeployments[1]/mysql/secondarysignalhost The EXTERNAL-IP from the ndbmysqldsvc-3 LoadBalancer service. "" EXTERNAL-IP address assigned to the ndbmysqldsvc-3 service, which is used for establishing the secondary georeplication channel with the remote site.
/db-replication-svc/dbreplsvcdeployments[1]/mysql/secondarysignalhostmultusconfig/multusEnabled If set to true, then the secondary replication channel is set up using the Multus IP. false Set it to true if you want the secondary replication channel to be set up with the Multus IP address.
/db-replication-svc/dbreplsvcdeployments[1]/mysql/secondarysignalhostmultusconfig/networkAttachmentDefinationApiName Specifies the API version for NetworkAttachmentDefinition resources used in Kubernetes. These resources define additional network interfaces for pods, allowing them to connect to multiple networks. k8s.v1.cni.cncf.io If necessary, change it to a suitable value supported by Kubernetes.
/db-replication-svc/dbreplsvcdeployments[1]/mysql/secondarysignalhostmultusconfig/networkAttachmentDefinationTagName Specifies the NAD file name that is used to identify the Multus IP from the ndbmysqld pods. This file name is then used to set up the secondary replication channel. "" If set, then cnDBTier uses the same name to identify the Multus IP from the ndbmysqld pods and uses the same IP to set up the secondary replication channel.
/db-replication-svc/dbreplsvcdeployments[1]/mysql/secondaryhostserverid The unique SQL server ID for the secondary SQL node. 1003 For the site 1 secondary SQL node, set it to 1003. For the site 2 secondary SQL node, set it to 2003. For the site 3 secondary SQL node, set it to 3003.
/db-replication-svc/dbreplsvcdeployments[1]/replication/localsiteip Local site replication service external IP assigned to the replication service. "" EXTERNAL-IP address assigned to the <${OCCNE_SITE_NAME}>-<${OCCNE_SECOND_MATE_SITE_NAME}>-replication-svc in the current site.
/db-replication-svc/dbreplsvcdeployments[1]/replication/localsiteport Local site port for the current site that is being installed. "80" NA
/db-replication-svc/dbreplsvcdeployments[1]/replication/channelgroupid Replication channel group ID of the replication service pod that handles the configuration and monitoring of the replication channels of the primary and secondary SQL nodes which belong to this group ID. 1 Channel group identifier of the replication channel group.

Valid values: 1, 2

/db-replication-svc/dbreplsvcdeployments[1]/replication/matesitename The second mate site for the current site that is being installed. cndbtiersecondmatesitename

Replace <${OCCNE_SECOND_MATE_SITE_NAME}> with the OCCNE_SECOND_MATE_SITE_NAME you used.

For example: pacific

/db-replication-svc/dbreplsvcdeployments[1]/replication/remotesiteip Mate site replication service external IP for establishing geo-replication. ""

If deploying cnDBTier site1, use "". If deploying cnDBTier site2, use "".

If deploying cnDBTier site3, use the EXTERNAL-IP from the site2 occne-db-replication-svc LoadBalancer service.

Use the value from the SECOND_MATE_REPLICATION_SVC environment variable.

/db-replication-svc/dbreplsvcdeployments[1]/replication/remotesiteport Mate site port for the current site that is being installed. "80" NA
/db-replication-svc/dbreplsvcdeployments[1]/labels Provide specific pod labels apart from commonlabels. {} Set the labels for db-replication, apart from commonlabels, in the following format.

For example: app-home: cndbtier

/db-replication-svc/dbreplsvcdeployments[1]/egressannotations Parameter to modify the existing Egress annotation to match the Egress annotation supported by Kubernetes. oracle.com.cnc/egress-network: "oam" Set this parameter to the appropriate annotation that is supported by Kubernetes.
/db-replication-svc/dbreplsvcdeployments[1]/podAnnotations This parameter allows you to add metadata annotations to the pods in a deployment. {} NA
/db-replication-svc/dbreplsvcdeployments[1]/schedulertimer The timer (in seconds) used for managing replication channels and heartbeat exchanges with remote sites. 5s This timer performs the following functions:
  • Setup Replication Channels: Periodically initiates and configures replication channels.
  • Monitor Replication Channels: Continuously checks the status and health of active replication channels.
  • Exchange Heartbeats: Sends and receives heartbeat signals with remote sites to maintain connectivity and detect failures.

Note: Consult with engineering before making any changes to this timer.

/db-replication-svc/dbreplsvcdeployments[1]/log/level Specifies the logging level for the replication service deployment. INFO NA
/db-replication-svc/dbreplsvcdeployments[1]/service/type Specifies the type of the replication service deployment. LoadBalancer NA
/db-replication-svc/dbreplsvcdeployments[1]/service/loadBalancerIP Fixed LoadBalancer IP for <${OCCNE_SITE_NAME}>-<${OCCNE_SECOND_MATE_SITE_NAME}>-replication-svc "" Configure the LoadBalancer IP that must be assigned to <${OCCNE_SITE_NAME}>-<${OCCNE_SECOND_MATE_SITE_NAME}>-replication-svc.
/db-replication-svc/dbreplsvcdeployments[1]/service/port This parameter specifies the port number used by the replication service deployment to listen for network connections. 80 NA
/db-replication-svc/dbreplsvcdeployments[1]/service/labels Labels to organize and manage network access for replication service deployment. The labels attached to the service help identify and categorize it for integration with other systems.

app: occne_infra

cis.f5.com/as3-tenant: occne_infra

cis.f5.com/as3-app: svc_occne_infra_db_repl_svc_2

cis.f5.com/as3-pool: svc_occne_infra_db_repl_svc_pool2

If necessary, change it to appropriate values supported by Kubernetes.
/db-replication-svc/dbreplsvcdeployments[1]/service/annotations Annotations used for replication service deployment. metallb.universe.tf/address-pool: oam oracle.com.cnc/app-protocols: '{"http":"TCP"},{"sftp":"TCP"}' Annotations in Kubernetes are the key-value pairs attached to objects such as pods, deployments, or services. They are used to store additional information about objects for tools and integrations to utilize.

If necessary, change it to appropriate values supported by Kubernetes.

/db-replication-svc/dbreplsvcdeployments[2]/name Name of the replication service, that is, a combination of the site name and the third mate site name. cndbtiersitename-<${OCCNE_THIRD_MATE_SITE_NAME}>-replication-svc

Replace <${OCCNE_SITE_NAME}>-<${OCCNE_THIRD_MATE_SITE_NAME}>-replication-svc with the OCCNE_SITE_NAME and OCCNE_THIRD_MATE_SITE_NAME that you used.

For example: chicago-redsea-replication-svc

/db-replication-svc/dbreplsvcdeployments[2]/enabled Indicates whether the third replication service deployment is enabled. false

In case of four-site replication, a third mate site exists for each site, so set enabled to true.

In case of three-site replication, only the first and second mate sites exist, so set enabled to false.

In case of two-site replication, only one mate site exists, so set enabled to false.

/db-replication-svc/dbreplsvcdeployments[2]/multus/enable Determines if Multus IP is used to communicate to remote sites. false Set it to true if you want to use the Multus IP to communicate with the remote site rather than the LoadBalancer IP.
/db-replication-svc/dbreplsvcdeployments[2]/multus/networkAttachmentDefinationApiName This parameter specifies the API version for the NetworkAttachmentDefinition resources used in Kubernetes. These resources define additional network interfaces for pods, allowing them to connect to multiple networks. k8s.v1.cni.cncf.io If necessary, change it to a suitable value supported by Kubernetes.
/db-replication-svc/dbreplsvcdeployments[2]/multus/networkAttachmentDefinationTagName The name of the NAD file that is given as a pod annotation to the replication pod. "" You must provide the same NAD file name that is given as a pod annotation to the replication deployment.
/db-replication-svc/dbreplsvcdeployments[2]/podDisruptionBudget/enabled Determines if PodDisruptionBudget is enabled. true Pod Disruption Budget (PDB) in Kubernetes is a policy that limits the number of concurrent disruptions that can affect a collection of pods. This ensures that the application maintains a certain level of availability even during voluntary disruptions, such as upgrades or maintenance.
/db-replication-svc/dbreplsvcdeployments[2]/podDisruptionBudget/maxUnavailable Specifies the maximum number of pods that can be unavailable during a disruption. 1 The default value "1" indicates that at most one pod can be unavailable at any time.
/db-replication-svc/dbreplsvcdeployments[2]/podDisruptionBudget/labels Optional section to specify labels to select the pods that the PDB applies to. {} An empty dictionary {} means that PDB applies to all pods in the deployment or replica set.
/db-replication-svc/dbreplsvcdeployments[2]/mysql/dbtierservice Specifies the name of the MySQL connectivity service. mysql-connectivity-service NA
/db-replication-svc/dbreplsvcdeployments[2]/mysql/dbtierreplservice Specifies the name of the MySQL replication service. ndbmysqldsvc NA
/db-replication-svc/dbreplsvcdeployments[2]/mysql/primaryhost The CLUSTER-IP from the ndbmysqldsvc-4 LoadBalancer service. ndbmysqld-4.ndbmysqldsvc.occne-cndbtier.svc.cluster.local Replace ndbmysqld-4.ndbmysqldsvc.<${OCCNE_NAMESPACE}>.svc.<${OCCNE_DOMAIN}> with the OCCNE_NAMESPACE and OCCNE_DOMAIN you used. For example, ndbmysqld-4.ndbmysqldsvc.occne-cndbtier.svc.occne1-cgbu-cne-dbtier.

If you are deploying cnDBTier with a pod prefix, then ensure that the ndbmysqld pod name includes the prefix. That is, replace ndbmysqld-4 with the actual pod name including the prefix. For example, if the value of the /global/k8sResource/container/prefix configuration is "cnc-db-prod", then replace ndbmysqld-4 with cnc-db-prod-ndbmysqld-4.

/db-replication-svc/dbreplsvcdeployments[2]/mysql/port This parameter specifies the port number used by the service to listen for network connections. 3306 NA
/db-replication-svc/dbreplsvcdeployments[2]/mysql/primarysignalhost The EXTERNAL-IP from the ndbmysqldsvc-4 LoadBalancer service. "" EXTERNAL-IP address assigned to the ndbmysqldsvc-4 service, which is used for establishing the primary georeplication channel with the remote site.
/db-replication-svc/dbreplsvcdeployments[2]/mysql/primarysignalhostmultusconfig/multusEnabled If set to true, then the primary replication channel is set up using the Multus IP. false Set it to true if you want the primary replication channel to be set up with the Multus IP address.
/db-replication-svc/dbreplsvcdeployments[2]/mysql/primarysignalhostmultusconfig/networkAttachmentDefinationApiName Specifies the API version for NetworkAttachmentDefinition resources used in Kubernetes. These resources define additional network interfaces for pods, allowing them to connect to multiple networks. k8s.v1.cni.cncf.io If necessary, change it to a suitable value supported by Kubernetes.
/db-replication-svc/dbreplsvcdeployments[2]/mysql/primarysignalhostmultusconfig/networkAttachmentDefinationTagName Specifies the NAD file name that is used to identify the Multus IP from the ndbmysqld pods. This file name is then used to set up the primary replication channel. "" If set, then cnDBTier uses the same name to identify the Multus IP from the ndbmysqld pods and uses the same IP to set up the primary replication channel.
/db-replication-svc/dbreplsvcdeployments[2]/mysql/primaryhostserverid The unique SQL server ID for the primary SQL node. 1004 For the site 1 primary SQL node, set it to 1004. For the site 2 primary SQL node, set it to 2004. For the site 3 primary SQL node, set it to 3004. For the site 4 primary SQL node, set it to 4004.
/db-replication-svc/dbreplsvcdeployments[2]/mysql/secondaryhost The CLUSTER-IP from the ndbmysqldsvc-5 LoadBalancer service. ndbmysqld-5.ndbmysqldsvc.occne-cndbtier.svc.cluster.local Replace ndbmysqld-5.ndbmysqldsvc.<${OCCNE_NAMESPACE}>.svc.<${OCCNE_DOMAIN}> with the OCCNE_NAMESPACE and OCCNE_DOMAIN. For example, ndbmysqld-5.ndbmysqldsvc.occne-cndbtier.svc.occne1-cgbu-cne-dbtier.

If you are deploying cnDBTier with a pod prefix, then ensure that the ndbmysqld pod name includes the prefix. That is, replace ndbmysqld-5 with the actual pod name including the prefix. For example, if the value of the /global/k8sResource/container/prefix configuration is "cnc-db-prod", then replace ndbmysqld-5 with cnc-db-prod-ndbmysqld-5.

/db-replication-svc/dbreplsvcdeployments[2]/mysql/secondarysignalhost The EXTERNAL-IP from the ndbmysqldsvc-5 LoadBalancer service. "" EXTERNAL-IP address assigned to the ndbmysqldsvc-5 service, which is used for establishing the secondary georeplication channel with the remote site.
/db-replication-svc/dbreplsvcdeployments[2]/mysql/secondarysignalhostmultusconfig/multusEnabled If set to true, then the secondary replication channel is set up using the Multus IP. false Set it to true if you want the secondary replication channel to be set up with the Multus IP address.
/db-replication-svc/dbreplsvcdeployments[2]/mysql/secondarysignalhostmultusconfig/networkAttachmentDefinationApiName Specifies the API version for NetworkAttachmentDefinition resources used in Kubernetes. These resources define additional network interfaces for pods, allowing them to connect to multiple networks. k8s.v1.cni.cncf.io If necessary, change it to a suitable value supported by Kubernetes.
/db-replication-svc/dbreplsvcdeployments[2]/mysql/secondarysignalhostmultusconfig/networkAttachmentDefinationTagName Specifies the NAD file name that is used to identify the Multus IP from the ndbmysqld pods. This file name is then used to set up the secondary replication channel. "" If set, then cnDBTier uses the same name to identify the Multus IP from the ndbmysqld pods and uses the same IP to set up the secondary replication channel.
/db-replication-svc/dbreplsvcdeployments[2]/mysql/secondaryhostserverid The unique SQL server ID for the secondary SQL node. 1005 For the site 1 secondary SQL node, set it to 1005. For the site 2 secondary SQL node, set it to 2005. For the site 3 secondary SQL node, set it to 3005. For the site 4 secondary SQL node, set it to 4005.
/db-replication-svc/dbreplsvcdeployments[2]/replication/localsiteip Local site replication service external IP assigned to the replication service. "" EXTERNAL-IP address assigned to the <${OCCNE_SITE_NAME}>-<${OCCNE_THIRD_MATE_SITE_NAME}>-replication-svc in the current site.
/db-replication-svc/dbreplsvcdeployments[2]/replication/localsiteport Local site port for the current site that is being installed. "80" NA
/db-replication-svc/dbreplsvcdeployments[2]/replication/matesitename The third mate site for the current site that is being installed. <${OCCNE_THIRD_MATE_SITE_NAME}>

Replace <${OCCNE_THIRD_MATE_SITE_NAME}> with the OCCNE_THIRD_MATE_SITE_NAME you used.

For example: redsea

/db-replication-svc/dbreplsvcdeployments[2]/replication/remotesiteip Mate site replication service external IP for establishing georeplication. ""

If deploying cnDBTier site1, use "".

If deploying cnDBTier site2, use "".

If deploying cnDBTier site3, use "".

If deploying cnDBTier site4, use the EXTERNAL-IP from the site3 occne-db-replication-svc LoadBalancer service.

Use the value from the OCCNE_THIRD_MATE_REPLICATION_SVC environment variable.

/db-replication-svc/dbreplsvcdeployments[2]/replication/remotesiteport Mate site port for the current site that is being installed. "80" NA
/db-replication-svc/dbreplsvcdeployments[2]/labels Provide specific pod labels apart from commonlabels. {} Set the labels for db-replication, apart from commonlabels, in the following format. For example: app-home: cndbtier
/db-replication-svc/dbreplsvcdeployments[2]/egressannotations Parameter to modify the existing Egress annotation to match the Egress annotation supported by Kubernetes. oracle.com.cnc/egress-network: "oam" Set this parameter to the appropriate annotation that is supported by Kubernetes.
/db-replication-svc/dbreplsvcdeployments[2]/podAnnotations This parameter allows you to add metadata annotations to the pods in a deployment. {} NA
/db-replication-svc/dbreplsvcdeployments[2]/schedulertimer The timer (in seconds) used for managing replication channels and heartbeat exchanges with remote sites. 5s This timer performs the following functions:
  • Setup Replication Channels: Periodically initiates and configures replication channels.
  • Monitor Replication Channels: Continuously checks the status and health of active replication channels.
  • Exchange Heartbeats: Sends and receives heartbeat signals with remote sites to maintain connectivity and detect failures.

Note: Consult with engineering before making any changes to this timer.

/db-replication-svc/dbreplsvcdeployments[2]/log/level Specifies the logging level for the replication service deployment. INFO NA
/db-replication-svc/dbreplsvcdeployments[2]/service/type Specifies the type of the replication service deployment. LoadBalancer NA
/db-replication-svc/dbreplsvcdeployments[2]/service/loadBalancerIP Fixed LoadBalancer IP for <${OCCNE_SITE_NAME}>-<${OCCNE_THIRD_MATE_SITE_NAME}>-replication-svc "" Configure the LoadBalancer IP that must be assigned to <${OCCNE_SITE_NAME}>-<${OCCNE_THIRD_MATE_SITE_NAME}>-replication-svc.
/db-replication-svc/dbreplsvcdeployments[2]/service/port This parameter specifies the port number used by the replication service deployment to listen for network connections. 80 NA
/db-replication-svc/dbreplsvcdeployments[2]/service/labels Labels to organize and manage network access for replication service deployment. The labels attached to the service help identify and categorize it for integration with other systems.

app: occne_infra

cis.f5.com/as3-tenant: occne_infra

cis.f5.com/as3-app: svc_occne_infra_db_repl_svc_3

cis.f5.com/as3-pool: svc_occne_infra_db_repl_svc_pool3

If necessary, change it to appropriate values supported by Kubernetes.
/db-replication-svc/dbreplsvcdeployments[2]/service/annotations Annotations used for replication service deployment. metallb.universe.tf/address-pool: oam oracle.com.cnc/app-protocols: '{"http":"TCP"},{"sftp":"TCP"}' Annotations in Kubernetes are the key-value pairs attached to objects such as pods, deployments, or services. They are used to store additional information about objects for tools and integrations to utilize.

If necessary, change it to appropriate values supported by Kubernetes.

/db-replication-svc/startupProbe/initialDelaySeconds Kubernetes pod configuration that specifies the number of seconds to wait before initiating the first health check for a container. 60 NA
/db-replication-svc/startupProbe/successThreshold Kubernetes pod configuration that specifies the minimum number of consecutive successful health checks required for a probe to be considered successful. 1 NA
/db-replication-svc/startupProbe/failureThreshold Kubernetes pod configuration that specifies the maximum number of consecutive failed health checks before a container is considered to have failed. 30 If the container fails, the pod is restarted.
/db-replication-svc/startupProbe/periodSeconds Kubernetes pod configuration that determines the interval (in seconds) between consecutive health checks performed on a container. 10 NA
/db-replication-svc/startupProbe/timeoutSeconds Kubernetes pod configuration that specifies the maximum amount of time (in seconds) to wait for a response from a container during a health check before considering it a failure. 1 NA
/db-replication-svc/numberofparallelbackuptransfer Number of threads created for transferring the backups of multiple data nodes in parallel. 4 Each thread transfers the backup of one data node.
/db-replication-svc/validateresourcesingeorecovery This parameter enables or disables the validation of georeplication recovery resources (that is, the PVC size and CPU of the leader replication service) in the FAILED site where georeplication recovery is started. true If set to true, it triggers the validation of georeplication recovery resources in the FAILED site, where georeplication recovery is started, to ensure that they are configured correctly for georeplication recovery.
/db-replication-svc/grrecoveryresources/limits/cpu The maximum limit of CPU count allocated for the replication service deployment that restores the cluster using the backup. 2 The maximum amount of CPU that Kubernetes allocates for the replication service deployment that restores the cluster using the backup.
/db-replication-svc/grrecoveryresources/limits/memory The maximum limit of memory size allocated for the replication service deployment that restores the cluster using the backup. 12Gi The maximum memory size that Kubernetes allocates for the replication service deployment that restores the cluster using the backup.
/db-replication-svc/grrecoveryresources/limits/ephemeral-storage The maximum limit of ephemeral storage size allocated for the db-replication-svc pod. 1Gi Ephemeral storage limit for each db-replication-svc pod.
/db-replication-svc/grrecoveryresources/requests/cpu The required CPU count allocated for the replication service deployment that restores the cluster using the backup. 2 The CPU allotment for each replication service deployment that restores the cluster using the backup.
/db-replication-svc/grrecoveryresources/requests/memory The required memory size allocated for the replication service deployment that restores the cluster using the backup. 12Gi The memory size allotment for each replication service deployment that restores the cluster using the backup.
/db-replication-svc/grrecoveryresources/requests/ephemeral-storage Required ephemeral storage size allocated for the db-replication-svc pod. 90Mi Ephemeral storage allotment for each db-replication-svc pod.
/db-replication-svc/resources/limits/cpu The maximum limit of CPU count allocated for the DB replication service pods. 1 The maximum amount of CPU that Kubernetes allocates for each db-replication-svc pod to use.
/db-replication-svc/resources/limits/memory The maximum memory size allocated for the DB replication service pods. 2048Mi The maximum memory size that Kubernetes allocates for each db-replication-svc pod.
/db-replication-svc/resources/limits/ephemeral-storage The maximum limit of ephemeral storage size allocated for the DB replication service pods. 1Gi The ephemeral storage limit for each db-replication-svc pod.
/db-replication-svc/resources/requests/cpu The required CPU count allocated for the DB replication service pods. 1 The CPU allotment for each db-replication-svc pod.
/db-replication-svc/resources/requests/memory The required memory size allocated for the DB replication service pods. 2048Mi The memory allotment for each db-replication-svc pod.
/db-replication-svc/resources/requests/ephemeral-storage The required ephemeral storage size allocated for the DB replication service pods. 90Mi The ephemeral storage allotment for each db-replication-svc pod.
/db-replication-svc/proxy/host The hostname or IP address of the proxy server. "" NA
/db-replication-svc/proxy/port The port number on which the proxy server listens for incoming connections. "" NA
/db-replication-svc/initcontainer/image/repository The name of the docker image of the mysql ndb client. cndbtier-mysqlndb-client Change it to the actual docker image name on your docker registry. For example: cndbtier-mysqlndb-client.
/db-replication-svc/initcontainer/image/tag Version for the docker image of the mysql ndb client. 24.3.1 Change it to the actual version of the docker image. For example: 24.3.1.
/db-replication-svc/initcontainer/image/pullPolicy Specifies the image pull policy for the MySQL NDB client docker image. IfNotPresent NA
/db-replication-svc/InitContainersResources/limits/cpu The maximum CPU limit allocated for mysqlndbclient. 0.2 NA
/db-replication-svc/InitContainersResources/limits/memory The maximum memory size allocated for mysqlndbclient. 500Mi NA
/db-replication-svc/InitContainersResources/limits/ephemeral-storage The maximum limit of ephemeral storage size allocated for mysqlndbclient. 1Gi Ephemeral storage limit for mysqlndbclient.
/db-replication-svc/InitContainersResources/requests/cpu The required CPU count allocated for mysqlndbclient. 0.2 NA
/db-replication-svc/InitContainersResources/requests/memory The required memory size allocated for mysqlndbclient. 500Mi NA
/db-replication-svc/InitContainersResources/requests/ephemeral-storage Required ephemeral storage size allocated for mysqlndbclient. 90Mi Ephemeral storage allotment for mysqlndbclient.
/db-replication-svc/enableInitContainerForIpDiscovery Enables discovery of the LoadBalancer IP addresses of the ndbmysqldsvc-0, ndbmysqldsvc-1, .., ndbmysqldsvc-n LoadBalancer services. true

Enables the db-replication-svc pod to discover the LoadBalancer IP addresses of the ndbmysqldsvc-0, ndbmysqldsvc-1, .., ndbmysqldsvc-n LoadBalancer services.

Set this value to true if the Kubernetes service type of ndbmysqldsvc-0, ndbmysqldsvc-1, .., ndbmysqldsvc-n is "LoadBalancer". In this case, the db-replication-svc pods get the external IPs from the ndbmysqldsvc-0, ndbmysqldsvc-1, .., ndbmysqldsvc-n services.

Set this value to false if the Kubernetes service type of ndbmysqldsvc-0, ndbmysqldsvc-1, .., ndbmysqldsvc-n is "LoadBalancer" and the external IPs are assigned to these services using external load balancers (for example, an F5 load balancer).
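For example, a sketch for a site where an external load balancer (such as F5) assigns the external IPs to the ndbmysqldsvc services, so the discovery init container is disabled:

db-replication-svc:
  enableInitContainerForIpDiscovery: false  # external load balancer assigns the ndbmysqldsvc IPs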

3.6 DB Monitor Service Parameters

The following table provides a list of database monitor service parameters.

Table 3-6 DB Monitor Service Parameters

Parameter Description Default Value Notes
/db-monitor-svc/nodeSelector Specifies the list of node selector labels that correspond to the Kubernetes nodes where the db-monitor-svc pods must be scheduled. {} Use this parameter to configure the worker node labels if you want to use the node selector labels for scheduling the pods. For example, if you want the monitor service pods to be scheduled on worker node with label nodetype=monitorsvc, then nodeSelector must be configured as follows:
nodeSelector:
  nodetype: monitorsvc

nodeSelector is disabled if this parameter is passed empty.

/db-monitor-svc/schedulertimer The frequency (in seconds) at which the monitor service must check whether the binlog injector thread in every replication SQL node is stalled. 5s The default value is 5000 milliseconds (5 seconds). This means that, every five seconds, the DB monitor service checks each replication SQL node to see whether the binlog injector thread is stalled.
/db-monitor-svc/binlogthreadstore/capacity The number of binlog position changes that are stored and tracked by the binlog injector tracker. 5 The default value is 5. This means that the previous five binlog position changes of the binlog injector are stored. These values are compared to determine whether the binlog position is changing. If the position is not changing, the binlog injector is stalled.
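As a sketch, the two monitor timers above would appear in custom_values.yaml as follows (assumed nesting; it mirrors the parameter paths in this table):

db-monitor-svc:
  schedulertimer: 5s    # check the binlog injector threads every 5 seconds
  binlogthreadstore:
    capacity: 5         # keep the last 5 binlog positions for stall detection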
/db-monitor-svc/metricsFetchSchedulerTimer Specifies the interval (in seconds) at which a scheduled task or job fetches metrics. 55s NA
/db-monitor-svc/onDemandFetchApproach Indicates if on demand metrics fetch approach is enabled or disabled in monitor service. true When this parameter is set to true, the system fetches the metrics on demand. When set to false, the system fetches metrics using a cached approach with the help of a scheduler.
/db-monitor-svc/restartSQLNodesIfBinlogThreadStalled Indicates if the SQL node is restarted when the binlog threads stall. true If this parameter is set to true, the monitor service checks if any ndbapp or ndbmysqld pod binlog thread is stalled and restarts that pod.
/db-monitor-svc/image/repository Specifies the name of the docker image of the DB monitor service. db_monitor_svc Change this value to the actual docker image path on your docker registry. For example: db_monitor_svc.
/db-monitor-svc/image/tag Specifies the docker image version of the DB monitor service. 24.3.1 Change it to the version of the docker image. For example, 24.3.1.
/db-monitor-svc/image/pullPolicy Specifies the image pull policy for DB monitor service. IfNotPresent NA
/db-monitor-svc/nodeAffinity/enable Indicates if node affinity is enabled. false Set this parameter to true if you want to enable and use node affinity rules for scheduling the pods.
/db-monitor-svc/nodeAffinity/requiredDuringScheduling/enable Indicates if hard node affinity rules are enabled. true When this parameter is set to true, pods are scheduled only when the node affinity rules are met and are not scheduled otherwise.
/db-monitor-svc/nodeAffinity/requiredDuringScheduling/affinitykeyvalues Specifies the list of node affinity rules.
- keyname: custom_key
  keyvalues:
  - customvalue1
  - customvalue2
Configure keyname and keyvalues with the key and value of the label of the worker node where the monitor service pods must be scheduled.
For example, if you want the monitor service pods to be scheduled on the worker node with label topology.kubernetes.io/zone=antarctica-east1 or topology.kubernetes.io/zone=antarctica-east2, then affinitykeyvalues must be configured as follows:
affinitykeyvalues:
      - keyname: topology.kubernetes.io/zone
        keyvalues: 
        - antarctica-east1
        - antarctica-east2
/db-monitor-svc/nodeAffinity/preferredDuringScheduling/enable Indicates if soft node affinity rules are enabled. false When this parameter is set to true, the scheduler tries to schedule the pods on the worker nodes that meet the affinity rules. However, if none of the worker nodes meets the affinity rules, the pods get scheduled on other worker nodes.

Note: You can enable only one of preferredDuringScheduling or requiredDuringScheduling at a time.

/db-monitor-svc/nodeAffinity/preferredDuringScheduling/expressions Specifies the list of node affinity rules.
- weight: 1
  affinitykeyvalues:
  - keyname: custom_key
    keyvalues: 
    - customvalue1
    - customvalue2
Configure keyname and keyvalues with the key and value of the label of the worker node where monitor service pods are preferred to be scheduled.

The value of weight can range between 1 and 100. The higher the value of weight, the higher the preference of the rule while scheduling the monitor service pods.

For example, if you want the monitor service pods to be scheduled on worker node with label topology.kubernetes.io/zone=antarctica-east1 or topology.kubernetes.io/zone=antarctica-east2, then configure the affinitykeyvalues as follows:
- weight: 1
  affinitykeyvalues:
  - keyname: topology.kubernetes.io/zone
    keyvalues: 
    - antarctica-east1
    - antarctica-east2
Refer to the following example if you want to configure multiple soft node affinity rules with different weights:
- weight: 30
  affinitykeyvalues:
  - keyname: topology.kubernetes.io/zone
    keyvalues: 
    - antarctica-east1
    - antarctica-east2
- weight: 80
  affinitykeyvalues:
  - keyname: nodetype
    keyvalues: 
    - dbtierservice
    - monitor

In this case, more preference is given to the worker nodes with labels matching node type dbtierservice or monitor, as this rule has a greater weight.

/db-monitor-svc/labels Specifies specific pod labels apart from the common labels. {} Set the labels for db-monitor-svc in the following format: app-home: cndbtier
/db-monitor-svc/podAnnotations This parameter allows you to add metadata annotations to the pods in a deployment. oracle.com/cnc: "true" NA
/db-monitor-svc/service/labels Specifies the labels to organize and manage network access for monitor service deployment. The labels attached to the service help identify and categorize it for integration with other systems. {} If necessary, change it to appropriate values supported by Kubernetes.
/db-monitor-svc/service/annotations Specifies the annotations used for the db-monitor-svc deployment. {} Annotations in Kubernetes are the key-value pairs attached to objects such as pods, deployments, or services. They are used to store additional information about objects for tools and integrations to utilize.

If necessary, change it to appropriate values supported by Kubernetes.

/db-monitor-svc/resources/limits/cpu The maximum CPU limit allocated for a db-monitor-svc pod. 4 NA
/db-monitor-svc/resources/limits/memory The maximum memory allocated for a db-monitor-svc pod. 4Gi NA
/db-monitor-svc/resources/limits/ephemeral-storage The maximum ephemeral storage size allocated for a db-monitor-svc pod. 1Gi Ephemeral storage limit for each db-monitor-svc pod.
/db-monitor-svc/resources/requests/cpu The required CPU count allocated for a db-monitor-svc pod. 4 NA
/db-monitor-svc/resources/requests/memory The required memory size allocated for a db-monitor-svc pod. 4Gi NA
/db-monitor-svc/resources/requests/ephemeral-storage Specifies the required ephemeral storage size allocated for a db-monitor-svc pod. 90Mi Ephemeral storage allotment for each db-monitor-svc pod.
/db-monitor-svc/log/level Specifies the level of logging for the db-monitor-svc deployment. WARN NA

3.7 DB Backup Manager Service Parameters

The following table provides a list of database backup manager service parameters.

Table 3-7 DB Backup Manager Service Parameters

Parameter Description Default Value Notes
/db-backup-manager-svc/nodeSelector Specifies the list of node selector labels that correspond to the Kubernetes nodes where the db-backup-manager-svc pods must be scheduled. {} Use this parameter to configure the worker node labels if you want to use the node selector labels for scheduling the pods. For example, if you want the db-backup-manager-svc pods to be scheduled on worker node with label nodetype=backupmgrsvc, then nodeSelector must be configured as follows:
nodeSelector:
  nodetype: backupmgrsvc

nodeSelector is disabled if this parameter is passed empty.

/db-backup-manager-svc/scheduler/cronjobExpression The scheduled time at which the backup service must be run. 0 0 */7 * * By default, the backup service runs once every seven days. Configure this parameter as per your requirement. For example, if you want to run the backup service once every two days, then the value must be set to "0 0 */2 * *".
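For example, a sketch that changes the backup schedule to once every two days (standard five-field cron: minute, hour, day-of-month, month, day-of-week):

db-backup-manager-svc:
  scheduler:
    cronjobExpression: "0 0 */2 * *"   # midnight, every second day of the month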
/db-backup-manager-svc/securityContext Determines the security settings for a pod or a container, such as user and group IDs, capabilities, and other security-related configurations. {} NA
/db-backup-manager-svc/deletePurgedRecords/enabled Indicates if old purged backup record entries are deleted from backup_info.DBTIER_BACKUP_INFO. true Set this parameter to false if you do not want to delete the database entries of purged backups.

Set this parameter to true if you want to delete the database entries of purged backups that are older than the number of days specified in /db-backup-manager-svc/deletePurgedRecords/retainPurgedBackupForDays.

/db-backup-manager-svc/deletePurgedRecords/schedulerInterval Defines the scheduler interval (in days) at which the scheduler checks whether there are purged entries to be deleted from the backup_info.DBTIER_BACKUP_INFO table. 1 Set this parameter to the interval (in days) at which you want to run the scheduler to check whether there are purged entries to be deleted.

For example, setting this parameter to 2 indicates that the scheduler is run every two days to check if there are purged entries to be deleted.

/db-backup-manager-svc/deletePurgedRecords/retainPurgedBackupForDays Indicates the number of days for which the purged backup records are retained. 30 Set this parameter to the number of days for which you want to retain the purged backup entries in database tables.

By default, the database tables retain the purged backup entries for 30 days. The entries that are older than 30 days are deleted if /db-backup-manager-svc/deletePurgedRecords/enabled is set to true.
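The three purge-record parameters above combine as in the following sketch (the values shown are the documented defaults):

db-backup-manager-svc:
  deletePurgedRecords:
    enabled: true                   # delete database entries of purged backups
    schedulerInterval: 1            # check for deletable entries every day
    retainPurgedBackupForDays: 30   # retain purged-backup entries for 30 days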

/db-backup-manager-svc/executor_status_verify_retry/count Specifies the total number of retry attempts the backup manager service makes to verify the status of the executor service. 900 NA
/db-backup-manager-svc/executor_status_verify_retry/gap Specifies the time interval, in seconds, between each retry attempt. 10 NA
/db-backup-manager-svc/nodeAffinity/enable Indicates if node affinity is enabled. false Set this parameter to true if you want to enable and use node affinity rules for scheduling the pods.
/db-backup-manager-svc/nodeAffinity/requiredDuringScheduling/enable Indicates if hard node affinity rules are enabled. true When this parameter is set to true, pods are scheduled only when the node affinity rules are met and are not scheduled otherwise.
/db-backup-manager-svc/nodeAffinity/requiredDuringScheduling/affinitykeyvalues Specifies the list of node affinity rules.
- keyname: custom_key
  keyvalues:
  - customvalue1
  - customvalue2
Configure keyname and keyvalues with the key and value of the label of the worker node where the backup manager service pods must be scheduled.
For example, if you want the backup manager service pods to be scheduled on the worker node with label topology.kubernetes.io/zone=antarctica-east1 or topology.kubernetes.io/zone=antarctica-east2, then affinitykeyvalues must be configured as follows:
affinitykeyvalues:
      - keyname: topology.kubernetes.io/zone
        keyvalues: 
        - antarctica-east1
        - antarctica-east2
/db-backup-manager-svc/nodeAffinity/preferredDuringScheduling/enable Indicates if soft node affinity rules are enabled. false When this parameter is set to true, the scheduler tries to schedule the pods on the worker nodes that meet the affinity rules. However, if none of the worker nodes meets the affinity rules, the pods get scheduled on other worker nodes.

Note: You can enable only one of preferredDuringScheduling or requiredDuringScheduling at a time.

/db-backup-manager-svc/nodeAffinity/preferredDuringScheduling/expressions Specifies the list of node affinity rules.
- weight: 1
  affinitykeyvalues:
  - keyname: custom_key
    keyvalues: 
    - customvalue1
    - customvalue2
Configure keyname and keyvalues with the key and value of the label of the worker node where backup manager service pods are preferred to be scheduled.

The value of weight can range between 1 and 100. The higher the value of weight, the higher the preference of the rule while scheduling the pods.

For example, if you want the backup manager service pods to be scheduled on worker node with label topology.kubernetes.io/zone=antarctica-east1 or topology.kubernetes.io/zone=antarctica-east2, then configure the affinitykeyvalues as follows:
- weight: 1
  affinitykeyvalues:
  - keyname: topology.kubernetes.io/zone
    keyvalues: 
    - antarctica-east1
    - antarctica-east2
Refer to the following example if you want to configure multiple soft node affinity rules with different weights:
- weight: 30
  affinitykeyvalues:
  - keyname: topology.kubernetes.io/zone
    keyvalues: 
    - antarctica-east1
    - antarctica-east2
- weight: 80
  affinitykeyvalues:
  - keyname: nodetype
    keyvalues: 
    - dbtierservice
    - backup

In this case, more preference is given to the worker nodes with labels matching node type dbtierservice or backup, as this rule has a greater weight.

/db-backup-manager-svc/pod/annotations Specifies the annotations used for DB backup manager service pods. {} Annotations in Kubernetes are the key-value pairs attached to objects such as pods, deployments, or services. They are used to store additional information about objects for tools and integrations to utilize.

If necessary, change it to appropriate values supported by Kubernetes.

/db-backup-manager-svc/pod/labels Specifies specific pod labels apart from the common labels. {} Set the labels for db-backup-manager-svc in the following format: app-home: cndbtier
/db-backup-manager-svc/image/repository Specifies the docker image name of the DB backup manager service. db_backup_manager_svc Change it to the docker image path on your docker registry. For example, db_backup_manager_svc.
/db-backup-manager-svc/image/tag Specifies the docker image version of the DB backup manager service. 24.3.1 Change it to the version of the docker image. For example, 24.3.1.
/db-backup-manager-svc/image/pullPolicy Specifies the image pull policy for DB backup manager service. IfNotPresent NA
/db-backup-manager-svc/log/level Specifies the level of logging for the backup manager service deployment. INFO NA
/db-backup-manager-svc/resources/limits/cpu The maximum CPU limit allocated for the DB backup manager service pod. 100m NA
/db-backup-manager-svc/resources/limits/memory The maximum memory allocated for the DB backup manager service pod. 128Mi NA
/db-backup-manager-svc/resources/limits/ephemeral-storage The maximum limit of the ephemeral storage size allocated for the DB backup manager service pod. 1Gi Ephemeral storage limit for each db-backup-manager-svc pod.
/db-backup-manager-svc/resources/requests/cpu The required CPU count allocated for the DB backup manager service pod. 100m NA
/db-backup-manager-svc/resources/requests/memory The required memory size allocated for the DB backup manager service pod. 128Mi NA
/db-backup-manager-svc/resources/requests/ephemeral-storage The required ephemeral storage size allocated for the DB backup manager service pod. 90Mi Ephemeral storage allotment for each db-backup-manager-svc pod.
/db-backup-manager-svc/priorityClassName Assigns a priority class to a Pod. This priority class determines the scheduling priority of the Pod, helping the Kubernetes scheduler to decide the order in which Pods must be scheduled when resources are limited. "" NA
/db-backup-manager-svc/service/annotations Specifies the annotations used for the DB backup manager service deployment. {} Annotations in Kubernetes are the key-value pairs attached to objects such as pods, deployments, or services. They are used to store additional information about objects for tools and integrations to utilize.

If necessary, change it to appropriate values supported by Kubernetes.

/db-backup-manager-svc/service/labels Specifies the labels to organize and manage network access for the backup manager service deployment. The labels attached to the service help identify and categorize it for integration with other systems. {} If necessary, change it to appropriate values supported by Kubernetes.

3.8 Post Install Job Parameters

The following table provides a list of parameters to be configured post installation.

Table 3-8 Post Install Job Parameters

Parameter Description Default Value Notes
/postInstallJob/image/repository Specifies the docker image name of the postInstallJob service. cndbtier-mysqlndb-client Change it to the actual docker image name on your docker registry.

For example: cndbtier-mysqlndb-client

/postInstallJob/image/tag Specifies the docker image version of the postInstallJob service. 24.3.1 Change it to the actual version of the docker image.

For example: 24.3.1

/postInstallJob/image/pullPolicy Specifies the pull policy for the MySQL NDB client docker image of the postInstallJob pod. IfNotPresent NA
/postInstallJob/resources/limits/cpu The maximum CPU limit allocated for the MySQL NDB client of the postInstallJob pod. 0.1 NA
/postInstallJob/resources/limits/memory The maximum memory limit allocated for the MySQL NDB client of the postInstallJob pod. 256Mi NA
/postInstallJob/resources/limits/ephemeral-storage The maximum limit of ephemeral storage size allocated for the postInstallJob pod. 1Gi Ephemeral storage limit for each postInstallJob pod.
/postInstallJob/resources/requests/cpu The required CPU count allocated for the postInstallJob pod. 0.1 NA
/postInstallJob/resources/requests/memory The required memory size allocated for the postInstallJob pod. 256Mi NA
/postInstallJob/resources/requests/ephemeral-storage The required ephemeral storage size allocated for the postInstallJob pod. 90Mi Ephemeral storage allotment for each postInstallJob pod.

3.9 Preupgrade Job Parameters

The following table provides a list of parameters to be configured for preupgrade.

Table 3-9 Preupgrade Job Parameters

Parameter Description Default Value Notes
/preUpgradeJob/image/repository Name of the docker image of the preUpgradeJob service. cndbtier-mysqlndb-client Change it to the actual docker image name on your docker registry.

For example: cndbtier-mysqlndb-client

/preUpgradeJob/image/tag Version for the docker image of the preUpgradeJob service. 24.3.1 Change it to the actual version of the docker image.

For example: 24.3.1

/preUpgradeJob/image/pullPolicy Specifies the image pull policy for the MySQL NDB client docker image of the preUpgradeJob pod. IfNotPresent NA
/preUpgradeJob/resources/limits/cpu The maximum CPU limit allocated for the MySQL NDB client of the preUpgradeJob pod. 0.1 NA
/preUpgradeJob/resources/limits/memory The maximum memory limit allocated for the MySQL NDB client of the preUpgradeJob pod. 256Mi NA
/preUpgradeJob/resources/limits/ephemeral-storage The maximum limit of ephemeral storage size allocated for the preUpgradeJob pod. 1Gi Ephemeral storage limit for each preUpgradeJob pod.
/preUpgradeJob/resources/requests/cpu The required CPU count allocated for the preUpgradeJob pod. 0.1 NA
/preUpgradeJob/resources/requests/memory The required memory allocated for the preUpgradeJob pod. 256Mi NA
/preUpgradeJob/resources/requests/ephemeral-storage Required ephemeral storage size allocated for the preUpgradeJob pod. 90Mi Ephemeral storage allotment for each preUpgradeJob pod.

3.10 Prerollback Job Parameters

The following table provides the list of parameters to be configured before performing a rollback.

Table 3-10 Prerollback Job Parameters

Parameter Description Default Value Notes
/preRollbackJob/image/repository Name of the docker image of the preRollbackJob service. cndbtier-mysqlndb-client Change this parameter to the actual docker image name in your docker registry.

For example: cndbtier-mysqlndb-client

/preRollbackJob/image/tag Version of the docker image of the preRollbackJob service. 24.3.1 Change it to the actual version of the docker image.

For example: 24.3.1

/preRollbackJob/image/pullPolicy Image pull policy for MySQL NDB client docker image of preRollbackJob pod. IfNotPresent NA
/preRollbackJob/resources/limits/cpu Maximum amount of CPU that Kubernetes allocates for the preRollbackJob pod. 0.1 NA
/preRollbackJob/resources/limits/memory Maximum memory size that Kubernetes allocates for the preRollbackJob pod. 256Mi NA
/preRollbackJob/resources/limits/ephemeral-storage Maximum ephemeral storage size allocated for the preRollbackJob pod. 1Gi Ephemeral storage limit for each preRollbackJob pod.
/preRollbackJob/resources/requests/cpu Required CPU count allocated for the preRollbackJob pod. 0.1 NA
/preRollbackJob/resources/requests/memory Required memory size allocated for the preRollbackJob pod. 256Mi NA
/preRollbackJob/resources/requests/ephemeral-storage Required ephemeral storage size allocated for the preRollbackJob pod. 90Mi Ephemeral storage allotment for each preRollbackJob pod.

3.11 Post Upgrade Job Parameters

The following table provides a list of parameters to be configured after performing an upgrade.

Table 3-11 Post Upgrade Job Parameters

Parameter Description Default Value Notes
/postUpgradeJob/image/repository Specifies the docker image name of the postUpgradeJob service. cndbtier-mysqlndb-client Change it to the actual docker image name in your docker registry.

For example: cndbtier-mysqlndb-client

/postUpgradeJob/image/tag Specifies the docker image version of the postUpgradeJob service. 24.3.1 Change it to the actual version of the docker image.

For example: 24.3.1

/postUpgradeJob/image/pullPolicy Specifies the image pull policy for the postUpgradeJob pod. IfNotPresent NA
/postUpgradeJob/resources/limits/cpu The maximum amount of CPU that Kubernetes allocates for the postUpgradeJob pod. 0.1 NA
/postUpgradeJob/resources/limits/memory The maximum memory size that Kubernetes allocates for the postUpgradeJob pod. 256Mi NA
/postUpgradeJob/resources/limits/ephemeral-storage The maximum ephemeral storage size allocated for the postUpgradeJob pod. 1Gi Ephemeral storage limit for each postUpgradeJob pod.
/postUpgradeJob/resources/requests/cpu The required CPU count allocated for the postUpgradeJob pod. 0.1 NA
/postUpgradeJob/resources/requests/memory The required memory size allocated for the postUpgradeJob pod. 256Mi NA
/postUpgradeJob/resources/requests/ephemeral-storage The required ephemeral storage size allocated for the postUpgradeJob pod. 90Mi Ephemeral storage allotment for each postUpgradeJob pod.

3.12 Post Rollback Job Parameters

The following table provides a list of parameters to be configured after performing a rollback.

Table 3-12 Post Rollback Job Parameters

Parameter Description Default Value Notes
/postRollbackJob/image/repository Specifies the docker image name of the postRollbackJob service. cndbtier-mysqlndb-client Change it to the actual docker image name in your docker registry.

For example: cndbtier-mysqlndb-client

/postRollbackJob/image/tag Specifies the docker image version of the postRollbackJob service. 24.3.1 Change it to the actual version of the docker image.

For example: 24.3.1

/postRollbackJob/image/pullPolicy Specifies the image pull policy for the postRollbackJob pod. IfNotPresent NA
/postRollbackJob/resources/limits/cpu The maximum amount of CPU that Kubernetes allocates for the postRollbackJob pod. 0.1 NA
/postRollbackJob/resources/limits/memory The maximum memory size that Kubernetes allocates for the postRollbackJob pod. 256Mi NA
/postRollbackJob/resources/limits/ephemeral-storage The maximum ephemeral storage size allocated for the postRollbackJob pod. 1Gi Ephemeral storage limit for each postRollbackJob pod.
/postRollbackJob/resources/requests/cpu The required CPU count allocated for the postRollbackJob pod. 0.1 NA
/postRollbackJob/resources/requests/memory The required memory size allocated for the postRollbackJob pod. 256Mi NA
/postRollbackJob/resources/requests/ephemeral-storage The required ephemeral storage size allocated for the postRollbackJob pod. 90Mi Ephemeral storage allotment for each postRollbackJob pod.
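
The preUpgradeJob, preRollbackJob, postUpgradeJob, and postRollbackJob blocks in custom_values.yaml share the same layout and default values; only the top-level key differs. A minimal sketch for the preUpgradeJob block, under the same assumption that the /-separated parameter paths map directly to nested YAML keys:

  # Sketch of the preUpgradeJob block; the preRollbackJob, postUpgradeJob, and
  # postRollbackJob blocks follow the same structure under their own top-level keys.
  preUpgradeJob:
    image:
      repository: cndbtier-mysqlndb-client
      tag: 24.3.1
      pullPolicy: IfNotPresent
    resources:
      limits:
        cpu: 0.1
        memory: 256Mi
        ephemeral-storage: 1Gi
      requests:
        cpu: 0.1
        memory: 256Mi
        ephemeral-storage: 90Mi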

3.13 Helm Test Parameters

The following table provides a list of Helm test parameters.

Table 3-13 Helm Test Parameters

Parameter Description Default Value Notes
/test/image/repository Specifies the docker image name of the management test node connection pod. cndbtier-mysqlndb-client Change it to the actual docker image name in your docker registry.

For example: cndbtier-mysqlndb-client

/test/image/tag Specifies the docker image version of the management test node connection pod. 24.3.1 Change it to the actual version of the docker image.

For example: 24.3.1.

/test/image/pullPolicy Specifies the image pull policy for the management test node connection pod. IfNotPresent NA
/test/annotations Specifies the annotations used for the management test node connection pod. - sidecar.istio.io/inject: "true"

Annotations in Kubernetes are key-value pairs attached to objects such as pods, deployments, or services. They store additional information about objects for tools and integrations to use.

If necessary, change this parameter to appropriate values supported by Kubernetes.

/test/statusCheck/replication/enable Indicates whether the Helm test for the DB replication service is enabled. true Set this parameter to true to run the Helm test on db-replication-svc and verify that the SQL database is accessible.
/test/statusCheck/monitor/enable Indicates whether the Helm test for the DB monitor service is enabled. true Set this parameter to true to run the Helm test on db-monitor-svc and verify that db-monitor is healthy.
/test/resources/limits/cpu The maximum amount of CPU that Kubernetes allocates for the management test node connection pod. 0.1 NA
/test/resources/limits/memory The maximum memory size that Kubernetes allocates for the management test node connection pod. 256Mi NA
/test/resources/limits/ephemeral-storage The maximum ephemeral storage size allocated for the management test node connection pod. 1Gi Ephemeral storage limit for each management test node connection pod.
/test/resources/requests/cpu The required CPU count allocated for the management test node connection pod. 0.1 NA
/test/resources/requests/memory The required memory size allocated for the management test node connection pod. 256Mi NA
/test/resources/requests/ephemeral-storage The required ephemeral storage size allocated for the management test node connection pod. 90Mi Ephemeral storage allotment for each management test node connection pod.
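
Under the same assumption that the /-separated parameter paths map directly to nested YAML keys, the following is a minimal sketch of the test block in custom_values.yaml, including the statusCheck toggles and the annotation shown above:

  # Sketch of the Helm test block in custom_values.yaml (default values shown).
  test:
    image:
      repository: cndbtier-mysqlndb-client
      tag: 24.3.1
      pullPolicy: IfNotPresent
    annotations:
      - sidecar.istio.io/inject: "true"
    statusCheck:
      replication:
        enable: true    # run the Helm test on db-replication-svc
      monitor:
        enable: true    # run the Helm test on db-monitor-svc
    resources:
      limits:
        cpu: 0.1
        memory: 256Mi
        ephemeral-storage: 1Gi
      requests:
        cpu: 0.1
        memory: 256Mi
        ephemeral-storage: 90Mi

After installation, the tests configured here run through the standard Helm test command, for example, helm test <helm-release-name> -n <namespace>, where <helm-release-name> and <namespace> are the Helm release name and namespace of your cnDBTier deployment.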