4 CAPIF Installation and Upgrade
This chapter describes how to install, customize, upgrade, and uninstall Common API Framework (CAPIF) on Oracle Communications Cloud Native Environment (CNE).
4.1 Installing CAPIF
Note:
CAPIF supports independent deployment of its microservices. For information about the prerequisites to install CAPIF, see the Prerequisites chapter. To know how to upgrade CAPIF, see the Upgrading CAPIF section.
4.1.1 Preinstallation
Note:
CAPIF supports fresh installation, and it can also be upgraded from 22.3.x. For more information on how to upgrade CAPIF, see the Upgrading CAPIF section.
4.1.1.1 Verifying and Creating CAPIF Namespace
Note:
This is a mandatory procedure. Run it before proceeding further with the installation procedure. The namespace created or verified in this procedure is an input for the subsequent procedures.
- Run the following command to verify whether the required namespace already exists in the system:
kubectl get namespaces
If the namespace appears in the output of the above command, continue with Configuring Database, Creating Users, and Granting Permissions.
- If the required namespace is unavailable, create the namespace using the following command:
kubectl create namespace <required namespace>
For example:
$ kubectl create namespace occapif-namespace
Note:
This is an optional step. Skip this step if the required namespace already exists.
Naming Convention for Namespaces
A namespace name must:
- start and end with an alphanumeric character.
- contain 63 characters or less.
- contain only alphanumeric characters or '-'.
Note:
It is recommended to avoid using the prefix kube- when creating a namespace, as this prefix is reserved for Kubernetes system namespaces.
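The naming rules above can be checked with a small shell function before running kubectl create namespace. This is an illustrative helper, not part of the CAPIF package; it also enforces lowercase, which Kubernetes requires:

```shell
# Illustrative validator for the namespace naming rules above
# (not part of the CAPIF package).
valid_namespace() {
  ns="$1"
  # 63 characters or less, lowercase alphanumerics and '-',
  # starting and ending with an alphanumeric character
  printf '%s' "$ns" | grep -Eq '^[a-z0-9]([a-z0-9-]{0,61}[a-z0-9])?$' || return 1
  # avoid the reserved kube- prefix
  case "$ns" in kube-*) return 1;; esac
  return 0
}

valid_namespace occapif-namespace && echo "occapif-namespace: valid"
valid_namespace kube-capif || echo "kube-capif: invalid"
```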
4.1.1.2 Creating Service Account, Role, and RoleBinding
This section describes the procedure to create service account, role, and role binding resources. The secrets can be in the same namespace where CAPIF is deployed (recommended), or you can choose to use different namespaces for different secrets. If all the secrets are in the same namespace as CAPIF, you can bind the Kubernetes Role to the given ServiceAccount. Otherwise, you must bind a ClusterRole to the given ServiceAccount.
- Create a CAPIF resource file:
vi <occapif-resource-file>
For example:
vi occapif-resource-template.yaml
- Update the occapif-resource-template.yaml file with release-specific information:
Note:
Update <helm-release> and <namespace> with the respective CAPIF Helm release name and CAPIF namespace.
## Sample template start
apiVersion: v1
kind: ServiceAccount
metadata:
  name: <helm-release>-occapif-serviceaccount
  namespace: <namespace>
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: <helm-release>-occapif-role
  namespace: <namespace>
rules:
- apiGroups:
  - "" # "" indicates the core API group
  resources:
  - services
  - configmaps
  - pods
  - secrets
  - endpoints
  verbs:
  - get
  - watch
  - list
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: <helm-release>-occapif-rolebinding
  namespace: <namespace>
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: <helm-release>-occapif-role
subjects:
- kind: ServiceAccount
  name: default
  namespace: <namespace>
## Sample template end
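As a convenience, the two placeholders can be filled in with sed before applying the template. The release and namespace values below are examples, and the two-line excerpt stands in for the full template so the sketch is self-contained:

```shell
# Hypothetical substitution of the <helm-release> and <namespace>
# placeholders used in the sample template (values below are examples).
HELM_RELEASE=occapif
NAMESPACE=occapif-namespace
# A two-line excerpt stands in for the full template here:
printf '%s\n' 'name: <helm-release>-occapif-role' 'namespace: <namespace>' |
sed -e "s/<helm-release>/$HELM_RELEASE/g" \
    -e "s/<namespace>/$NAMESPACE/g" > occapif-resource.yaml
cat occapif-resource.yaml
```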
- Run the following command to create the service account, role, and role binding:
kubectl -n <occapif-namespace> create -f occapif-resource-template.yaml
For example:
kubectl -n occapif create -f occapif-resource-template.yaml
- Update the serviceAccountName parameter in the oc-capif-24.2.2-custom-values.yaml file with the value of the name field under kind: ServiceAccount. For more information about the serviceAccountName parameter, see the Global Parameters section.
Note:
The service account name configured in this section must be used as the value of the serviceAccountName parameter during customization using the CAPIF Custom Values YAML file. For more information about the parameter, see Global Parameters.
4.1.1.3 Configuring Database, Creating Users, and Granting Permissions
This section explains how database administrators can create the users and databases in single-site and multisite deployments.
CAPIF microservices use the MySQL database to store configuration and runtime data.
CAPIF requires the database administrator to create a user in the MySQL database and grant the necessary permissions to access the databases. Before installing CAPIF, create the MySQL users and databases.
Note:
Before running the procedure for georedundant sites, ensure that cnDBTier for the georedundant sites is already up and the replication channels are enabled. While performing a fresh installation, if a CAPIF release is already deployed, purge the deployment and remove the database and users that were used for the previous deployment. For the uninstallation procedure, see Uninstalling CAPIF.
CAPIF Users
There are two types of CAPIF database users with a different set of permissions:
- CAPIF privileged user: This user has a complete set of permissions. This user can create or delete the database and perform create, alter, or drop operations on the database tables for performing installation, upgrade, rollback, and delete operations.
- CAPIF application user: This user has a limited set of permissions and is used by the CAPIF application while handling service operations. This user can insert, update, get, and remove records. This user cannot create, alter, or drop the databases or tables.
4.1.1.3.1 Single Site
Note:
New MySQL users along with their privileges must be added manually on each SQL node of the cnDBTier namespace for the CAPIF site.
- Log in to the machine where the SSH keys are stored. The machine must have permission to access the SQL nodes of the NDB cluster.
- Connect to the SQL nodes.
- Log in to the database as a root user.
- Create the CAPIF release database:
CREATE DATABASE <capif_release_database_name>;
Example:
CREATE DATABASE occapif_releaseDb;
Note:
In case of georedundant deployment, each CAPIF site must have a different release database name.
- Create the CAPIF service database:
CREATE DATABASE <capif_service_database_name>;
Example:
CREATE DATABASE occapif_db;
Note:
In case of georedundant deployment, each CAPIF site must have a different service database name.
- Create the CAPIF privileged user and grant permissions.
- Run the following command to create privileged
user:
CREATE USER '<capif privileged username>'@'%' IDENTIFIED BY '<capif privileged user password>';
Note:
If the MySQL version is 8.0, run the following command to create the user:
CREATE USER IF NOT EXISTS '<capif privileged username>'@'%' IDENTIFIED WITH mysql_native_password BY '<capif privileged user password>';
where:
<capif privileged username> is the username and <capif privileged user password> is the password for the MySQL privileged user.
- Run the following command to grant the necessary
permissions to the privileged
user:
GRANT SELECT, INSERT, CREATE, ALTER, DROP, LOCK TABLES, CREATE TEMPORARY TABLES, DELETE, UPDATE, EXECUTE, NDB_STORED_USER, REFERENCES ON *.* TO '<capif privileged username>'@'%';
FLUSH PRIVILEGES;
Example:
In the following example, "occapifprivilegedusr" is used as the username and "occapifprivilegedpasswd" as the password. All the permissions are granted to the privileged user, that is, occapifprivilegedusr.
CREATE USER 'occapifprivilegedusr'@'%' IDENTIFIED BY 'occapifprivilegedpasswd';
GRANT SELECT, INSERT, CREATE, ALTER, DROP, LOCK TABLES, CREATE TEMPORARY TABLES, DELETE, UPDATE, EXECUTE, NDB_STORED_USER, REFERENCES ON *.* TO 'occapifprivilegedusr'@'%';
FLUSH PRIVILEGES;
- Create the CAPIF application user and grant permissions.
- Run the following command to create application
user:
CREATE USER '<capif application username>'@'%' IDENTIFIED BY '<capif application user password>';
where:
<capif application username> is the username and <capif application user password> is the password for the MySQL application user.
- Run the following command to grant the necessary permissions to the application user:
GRANT SELECT, INSERT, LOCK TABLES, DELETE, UPDATE, EXECUTE, REFERENCES, NDB_STORED_USER ON *.* TO '<capif application username>'@'%';
Example:
In the following example, "occapifusr" is used as the username and "occapifpasswd" as its password. All the necessary permissions are granted to the application user, that is, occapifusr. Here, the default database names of the microservices are used.
CREATE USER 'occapifusr'@'%' IDENTIFIED BY 'occapifpasswd';
GRANT SELECT, INSERT, LOCK TABLES, DELETE, UPDATE, EXECUTE, REFERENCES, NDB_STORED_USER ON *.* TO 'occapifusr'@'%';
Note:
The database name is specified in the dbName parameter for CAPIF services in the oc-capif-24.2.2-custom-values.yaml file.
- To confirm that the privileged or application user has all the required permissions, run the following command:
show grants for <username>;
where <username> is the privileged or application user's username.
Example
show grants for occapifprivilegedusr;
show grants for occapifusr;
- Exit from the database and log out from the MySQL node.
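As a convenience, the statements from this procedure can be assembled from shell variables and reviewed before pasting them into the mysql client. The helper below is an illustrative sketch, not part of the CAPIF package, and reuses the example usernames and passwords from above:

```shell
# Illustrative helper: print the user-creation and grant statements
# from variables (example credentials from the procedure above).
PRIV_USER=occapifprivilegedusr; PRIV_PASS=occapifprivilegedpasswd
APP_USER=occapifusr;            APP_PASS=occapifpasswd

SQL=$(cat <<EOF
CREATE USER '$PRIV_USER'@'%' IDENTIFIED BY '$PRIV_PASS';
GRANT SELECT, INSERT, CREATE, ALTER, DROP, LOCK TABLES, CREATE TEMPORARY TABLES, DELETE, UPDATE, EXECUTE, NDB_STORED_USER, REFERENCES ON *.* TO '$PRIV_USER'@'%';
CREATE USER '$APP_USER'@'%' IDENTIFIED BY '$APP_PASS';
GRANT SELECT, INSERT, LOCK TABLES, DELETE, UPDATE, EXECUTE, REFERENCES, NDB_STORED_USER ON *.* TO '$APP_USER'@'%';
FLUSH PRIVILEGES;
EOF
)
printf '%s\n' "$SQL"
# Review the output, then run it against an SQL node, for example:
#   printf '%s\n' "$SQL" | mysql -u root -p
```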
4.1.1.3.2 Multisite
Note:
- Perform the steps in Single Site to create the databases on each site.
- NEF supports only a two-site deployment.
- For further information, see Oracle Communications Cloud Native Core, cnDBTier Installation, Upgrade, and Fault Recovery Guide.
For information on the database configuration parameters, see the Global Parameters section.
4.1.1.4 Configuring Kubernetes Secret for Accessing CAPIF Database
This section explains how to configure Kubernetes secrets for accessing CAPIF database.
4.1.1.4.1 Creating and Updating Secret for Accessing CAPIF Privileged Database User
This section explains how to create and update Kubernetes secret for Privileged User to access the database.
- Run the following command to create the Kubernetes secret:
$ kubectl create secret generic <privileged user secret name> --from-literal=dbUsername=<CAPIF Privileged User Name> --from-literal=dbPassword=<Password for CAPIF Privileged User> --from-literal=mysql-username=<CAPIF Privileged User Name> --from-literal=mysql-password=<Password for CAPIF Privileged User> -n <Namespace of CAPIF deployment>
Note:
Note down the command used during the creation of the Kubernetes secret. This command is used for updating the secret in the future.
For example:
$ kubectl create secret generic privilegeduser-secret --from-literal=dbUsername=occapifprivilegedusr --from-literal=dbPassword=occapifprivilegedpasswd --from-literal=mysql-username=occapifprivilegedusr --from-literal=mysql-password=occapifprivilegedpasswd -n occapif-namespace
- Verify the secret creation with the following
command:
$ kubectl describe secret <privileged user secret name> -n <Namespace of occapif deployment>
For example:
$ kubectl describe secret privilegeduser-secret -n occapif-namespace
4.1.1.4.2 Creating and Updating Secret for Application Database User
This section explains how to create and update Kubernetes secret for application user to access the database.
- Run the following command to create Kubernetes
secret:
$ kubectl create secret generic <appuser-secret name> --from-literal=dbUsername=<CAPIF APPLICATION User Name> --from-literal=dbPassword=<Password for CAPIF APPLICATION User> --from-literal=mysql-username=<CAPIF APPLICATION User Name> --from-literal=mysql-password=<Password for CAPIF APPLICATION User> -n <Namespace of CAPIF deployment>
Note:
Note down the command used during the creation of the Kubernetes secret. This command is used for updating the secret in the future.
For example:
$ kubectl create secret generic appuser-secret --from-literal=dbUsername=occapifusr --from-literal=dbPassword=occapifpasswd --from-literal=mysql-username=occapifusr --from-literal=mysql-password=occapifpasswd -n occapif-namespace
- Verify the secret creation with the following
command:
$ kubectl describe secret <appuser-secret name> -n <Namespace of CAPIF deployment>
For example:
$ kubectl describe secret appuser-secret -n occapif-namespace
4.1.1.4.3 Creating and Updating Secret for Storing Security Certificates for CAPIF
- Run the following command to create the Kubernetes secret:
$ kubectl create secret generic <capif secret name> --from-file=rsa_private_key_pkcs1.pem --from-file=trust.txt --from-file=key.txt --from-file=cert.cer --from-file=caroot.cer -n <namespace>
where:
- <capif secret name> can be ext-capif-secret or network-service-secret, as required.
- rsa_private_key_pkcs1.pem is the RSA private key.
- trust.txt contains the trust store password.
- key.txt contains the key store password.
- caroot.cer is the certificate chain for the trust store.
- cert.cer is the signed server certificate.
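The trust.txt and key.txt password files can be generated, for instance, from random bytes. The 16-character length here is an assumption for the sketch, not a CAPIF requirement:

```shell
# Illustrative generation of the key store and trust store password files
# (file names match the secret above; the password length is an assumption).
tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 16 > key.txt
tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 16 > trust.txt
wc -c key.txt trust.txt
```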
4.1.1.5 Configuring Secrets for Enabling HTTPS
Note:
The passwords for TrustStore and KeyStore are stored in the respective password files as mentioned in this section. The following inputs are required:
- ECDSA private key and CA-signed certificate of CAPIF, if initialAlgorithm is ES256
- RSA private key and CA-signed certificate of CAPIF, if initialAlgorithm is RS256
- TrustStore password file
- KeyStore password file
Note:
Creating the private keys and certificates is at the discretion of the user and is not in the scope of NEF. This section lists only samples to create key pairs and certificates.
Update Secrets
This section explains how to update the secret with updated details.
- Copy the exact command used during the creation of the secret in the above section.
- Update the same command by appending "--dry-run -o yaml" and piping the output to "kubectl replace -f - -n <Namespace of CAPIF deployment>".
Example of the command syntax:
$ kubectl create secret generic <secret-name> --from-file=<ssl_ecdsa_private_key.pem> --from-file=<rsa_private_key_pkcs1.pem> --from-file=<ssl_truststore.txt> --from-file=<ssl_keystore.txt> --from-file=<caroot.cer> --from-file=<ssl_rsa_certificate.crt> --from-file=<ssl_ecdsa_certificate.crt> --dry-run -o yaml -n <Namespace of CAPIF deployment> | kubectl replace -f - -n <Namespace of CAPIF deployment>
Example:
The names used below are the same as provided in the oc-capif-24.2.2-custom-values.yaml file in the CAPIF deployment:
$ kubectl create secret generic ocegress-secret --from-file=ssl_ecdsa_private_key.pem --from-file=rsa_private_key_pkcs1.pem --from-file=ssl_truststore.txt --from-file=ssl_keystore.txt --from-file=caroot.cer --from-file=ssl_rsa_certificate.crt --from-file=ssl_ecdsa_certificate.crt --dry-run -o yaml -n occapif | kubectl replace -f - -n occapif
- Run the updated command.
After successful secret update, the following message is displayed:
secret/ocegress-secret replaced
4.1.1.6 Configuring Secrets to Enable Access Token
This section explains how to configure a secret for enabling access token.
4.1.1.6.1 Generating Private Keys and Certificates
Create Private Keys and Certificates
- Generate the RSA private key by running the following command:
openssl req -x509 -nodes -sha256 -days 365 -newkey rsa:2048 -keyout rsa_private_key -out rsa_certificate.crt
- Convert the private key to .pem format with the following command:
openssl rsa -in rsa_private_key -outform PEM -out rsa_private_key_pkcs1.pem
- Generate a certificate signing request (CSR) from the private key by running the following command:
openssl req -new -key rsa_private_key -out tmp.csr -config ssl.conf
Note:
The ssl.conf file can be used to configure default entries along with the SAN details for your certificate. The following snippet shows a sample of the ssl.conf syntax:
#ssl.conf
[ req ]
default_bits = 4096
distinguished_name = req_distinguished_name
req_extensions = req_ext
[ req_distinguished_name ]
countryName = Country Name (2 letter code)
countryName_default = IN
stateOrProvinceName = State or Province Name (full name)
stateOrProvinceName_default = Karnataka
localityName = Locality Name (eg, city)
localityName_default = Bangalore
organizationName = Organization Name (eg, company)
organizationName_default = Oracle
commonName = Common Name (e.g. server FQDN or YOUR name)
commonName_max = 64
commonName_default = localhost
[ req_ext ]
subjectAltName = @alt_names
[alt_names]
IP = 127.0.0.1
DNS.1 = localhost
- Create the Root CA by running the following commands:
openssl req -new -keyout cakey.pem -out careq.pem
openssl x509 -signkey cakey.pem -req -days 3650 -in careq.pem -out caroot.cer -extensions v3_ca
echo 1234 > serial.txt
- Sign the server certificate with the root CA private key by running the following command:
openssl x509 -CA caroot.cer -CAkey cakey.pem -CAserial serial.txt -req -in tmp.csr -out tmp.cer -days 365 -extfile ssl.conf -extensions req_ext
Note:
The ssl.conf file must be reused, as the SAN contents are not carried over from the CSR when signing.
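Condensed, the key, CSR, CA, and signing steps above can be exercised end to end. This sketch uses -subj with throwaway subject names to avoid the interactive prompts and omits the SAN config for brevity, so it is a smoke test of the flow rather than the production procedure:

```shell
# Illustrative end-to-end run of the signing flow (throwaway subject names).
set -e
# Server key and CSR (non-interactive)
openssl req -new -nodes -newkey rsa:2048 -keyout rsa_private_key \
  -out tmp.csr -subj "/CN=localhost"
# Self-signed root CA
openssl req -x509 -nodes -newkey rsa:2048 -keyout cakey.pem \
  -out caroot.cer -days 3650 -subj "/CN=TestRootCA"
# Sign the server CSR with the root CA
openssl x509 -req -in tmp.csr -CA caroot.cer -CAkey cakey.pem \
  -CAcreateserial -out tmp.cer -days 365
# Confirm the signed certificate chains to the root CA
openssl verify -CAfile caroot.cer tmp.cer
```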
4.1.1.7 Configuring Network Policies for CAPIF
Perform the following installation and upgrade procedures to deploy network policies for CAPIF.
Note:
- Ports mentioned in policies are container ports and not the exposed service ports.
- Connections that were created before installing network policy and still persist are not impacted by the new network policy. Only the new connections would be impacted.
- If you are using ATS suite along with network policies, it is required to install the NEF, CAPIF, and ATS in the same namespace.
- If the traffic is blocked or unblocked between the pods even after applying network policies, check if any existing policy is impacting the same pod or set of pods that might alter the overall cumulative behavior.
- If the default ports of services such as Prometheus, Database, or Jaeger are changed, or if the Ingress or Egress Gateway names are overridden, update them in the corresponding network policies.
- While using the debug container, it is recommended to uninstall the network policies or update them as required to establish the connections.
Configuring Network Policies
Following are the various operations that can be performed for network policies:
4.1.1.7.1 Installing Network Policy
Prerequisite
Note:
For a fresh installation, it is recommended to install the network policies before installing NEF. However, if NEF is already installed, you can still install the network policies.
To install the network policy:
- Open the occapif-network-policy-custom-values-24.2.2.yaml file provided in the release package zip file. For downloading the file, see Installation Package Download.
- The custom values file is provided with the default security policies. If required, update the occapif-network-policy-custom-values-24.2.2.yaml file as described in the Configurable Parameters of CAPIF section.
- Run the following command to install the network policy:
helm install <helm-release-name> occapif-network-policy/ -n <namespace> -f <custom-value-file>
Sample command:
helm install occapif-network-policy occapif-network-policy/ -n occapif -f occapif-network-policy-custom-values-24.2.2.yaml
where:
- <helm-release-name> is the occapif-network-policy Helm release name.
- <custom-value-file> is the occapif-network-policy custom values file.
- <namespace> must be CAPIF's namespace.
4.1.1.7.2 Upgrading Network Policy
- Modify the occapif-network-policy-custom-values-24.2.2.yaml file to add new network policies or update the existing ones.
- Run the following command to upgrade the network policy:
helm upgrade <helm-release-name> occapif-network-policy/ -n <namespace> -f <custom-value-file>
Sample command:
helm upgrade occapif-network-policy occapif-network-policy/ -n occapif -f occapif-network-policy-custom-values-24.2.2.yaml
where:
- <helm-release-name> is the occapif-network-policy Helm release name.
- <custom-value-file> is the occapif-network-policy custom values file.
- <namespace> must be CAPIF's namespace.
4.1.1.7.3 Verifying Network Policies
Run the following command to verify that the network policies are deployed:
kubectl get networkpolicies -n <namespace>
Sample command:
kubectl get networkpolicies -n occapif
where:
- <namespace> is the CAPIF namespace.
4.1.1.7.4 Uninstalling Network Policy
Run the following command to uninstall the network policy:
helm uninstall <helm-release-name> -n <namespace>
Sample command:
helm uninstall occapif-network-policy -n occapif
4.1.1.7.5 Configuration Parameters for Network Policies
Table 4-1 Configuration Parameters for Network Policy
Parameter | Description | Details |
---|---|---|
networkPolicies | The networkPolicies parameter is of array type. Each element of this array must have a standard object's metadata and the specification of the desired behavior for that NetworkPolicy. The network policy Helm chart creates a policy for each entry. Note: Specify the policies compatible with apiVersion networking.k8s.io/v1. | This is an optional parameter. |
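For illustration, a networkPolicies entry might look like the following. The pod labels and port are assumptions for this sketch, not defaults shipped with the chart:

```yaml
networkPolicies:
  - metadata:
      name: allow-ingress-to-capif
    spec:
      podSelector:
        matchLabels:
          app.kubernetes.io/name: occapif   # assumed pod label
      policyTypes:
        - Ingress
      ingress:
        - ports:
            - protocol: TCP
              port: 8080                    # assumed container port
```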
For more information about this functionality, see Network Policies in the Oracle Communications Cloud Native Core, Network Exposure Function User Guide.
4.1.2 Installation Tasks
Note:
Before installing CAPIF, you must complete the Prerequisites and Preinstallation Tasks. In a georedundant deployment, perform the steps explained in this section on all georedundant sites.
4.1.2.1 Pushing the Images to Customer Docker Registry
CAPIF deployment package includes ready-to-use images and Helm charts to orchestrate containers in Kubernetes.
Table 4-2 Docker Images for CAPIF
Service Name | Docker Image Name | Image Tag |
---|---|---|
CAPIF AF Manager | oc_capif_afmgr | 24.2.2 |
CAPIF API Manager | oc_capif_apimgr | 24.2.2 |
CAPIF Event Manager | oc_capif_eventmgr | 24.2.2 |
Configuration Update Service | configurationupdate | 24.2.15 |
Configuration INIT Service | configurationinit | 24.2.15 |
Ingress Gateway | ocingress_gateway | 24.2.15 |
Egress Gateway | ocegress_gateway | 24.2.15 |
Debug Tools Service | ocdebug-tools | 24.2.6 |
NF Test Service | nf_test | 24.2.5 |
Console Data Service | oc_nef_console_data_service | 24.2.2 |
Pushing Docker Images
Prerequisite: Download and untar the NEF Package ZIP file. For more information about downloading the package, see Installation Package Download.
To push the images to the registry:
- Unzip the release package to the location where you want to install NEF. The package is as follows:
ocnef-pkg-24.2.2.0.0.tgz
- Untar the NEF package zip file to get the NEF image tar file:
tar -xvzf ocnef-pkg-24.2.2.0.0.tgz
The directory consists of the following:
- occapif-24.2.2.tgz: CAPIF Helm chart
- occapif-24.2.2.tgz.sha256: Checksum for the Helm chart tgz file
- occapif-images-24.2.2.tar: CAPIF images file
- occapif-images-24.2.2.tar.sha256: Checksum for the images tar file
- occapif-network-policy-24.2.2.tgz: CAPIF Helm chart for network policy
- occapif-network-policy-24.2.2.tgz.sha256: Checksum for the network policy Helm chart tgz file
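Before loading the images, the package integrity can be checked against the provided .sha256 files with sha256sum -c. The snippet below demonstrates the mechanism on a stand-in file, since the real tar file is not present here:

```shell
# Illustrative checksum verification; with the real package you would run,
# for example: sha256sum -c occapif-images-24.2.2.tar.sha256
echo "stand-in for occapif-images-24.2.2.tar" > sample.tar
sha256sum sample.tar > sample.tar.sha256
sha256sum -c sample.tar.sha256
```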
- Run one of the following commands to load the occapif-images-24.2.2.tar file:
docker load --input /IMAGE_PATH/occapif-images-24.2.2.tar
podman load --input /IMAGE_PATH/occapif-images-24.2.2.tar
- Run one of the following commands to verify the images are
loaded:
docker images
podman images
Note:
Verify that the list of images shown in the output matches the list of images shown in the above table. If the lists do not match, reload the image tar files.
- Run one of the following commands to tag the images to the registry:
docker tag <image-name>:<image-tag> <docker-repo>/<image-name>:<image-tag>
podman tag <image-name>:<image-tag> <docker-repo>/<image-name>:<image-tag>
- Run one of the following commands to push the images to the
registry:
docker push <docker-repo>/<image-name>:<image-tag>
podman push <docker-repo>/<image-name>:<image-tag>
Note:
It is recommended to configure the Docker certificate before running the push command to access the customer registry through HTTPS; otherwise, the docker push command may fail.
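The tag and push steps can be scripted over the full image list. The loop below only prints the commands for review, and the registry name and the abbreviated image list are assumptions for this sketch:

```shell
# Hypothetical loop printing the docker tag/push commands for each image
# (registry and image list are placeholders; extend the list as needed).
DOCKER_REPO="registry.example.com/occapif"
IMAGES="oc_capif_afmgr:24.2.2 oc_capif_apimgr:24.2.2 oc_capif_eventmgr:24.2.2"
for img in $IMAGES; do
  echo "docker tag $img $DOCKER_REPO/$img"
  echo "docker push $DOCKER_REPO/$img"
done
```

Piping the printed lines to sh (after review) would execute them against a real registry.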
4.1.2.2 Pushing the CAPIF Images to OCI Registry
CAPIF deployment package includes ready-to-use images and Helm charts to orchestrate containers in Kubernetes.
Table 4-3 Docker Images for CAPIF
Service Name | Docker Image Name | Image Tag |
---|---|---|
CAPIF AF Manager | oc_capif_afmgr | 24.2.2 |
CAPIF API Manager | oc_capif_apimgr | 24.2.2 |
CAPIF Event Manager | oc_capif_eventmgr | 24.2.2 |
Configuration Update Service | configurationupdate | 24.2.15 |
Configuration INIT Service | configurationinit | 24.2.15 |
Ingress Gateway | ocingress_gateway | 24.2.15 |
Egress Gateway | ocegress_gateway | 24.2.15 |
Debug Tools Service | ocdebug-tools | 24.2.6 |
NF Test Service | nf_test | 24.2.5 |
Pushing Docker Images
Prerequisite: Download and untar the NEF Package ZIP file. For more information about downloading the package, see Installation Package Download.
To push the images to the registry:
- Unzip the release package to the location where you want to install NEF. The package is as follows:
ocnef-pkg-24.2.2.0.0.tgz
- Untar the NEF package zip file to get the NEF image tar file:
tar -xvzf ocnef-pkg-24.2.2.0.0.tgz
The directory consists of the following:
- occapif-24.2.2.tgz: CAPIF Helm chart
- occapif-24.2.2.tgz.sha256: Checksum for the Helm chart tgz file
- occapif-images-24.2.2.tar: CAPIF images file
- occapif-images-24.2.2.tar.sha256: Checksum for the images tar file
- occapif-network-policy-24.2.2.tgz: CAPIF Helm chart for network policy
- occapif-network-policy-24.2.2.tgz.sha256: Checksum for the network policy Helm chart tgz file
- Run one of the following commands to load the occapif-images-24.2.2.tar file:
docker load --input /IMAGE_PATH/occapif-images-24.2.2.tar
podman load --input /IMAGE_PATH/occapif-images-24.2.2.tar
- Run one of the following commands to verify the images are
loaded:
docker images
podman images
Note:
Verify that the list of images shown in the output matches the list of images shown in the above table. If the lists do not match, reload the image tar files.
- Run one of the following commands to tag the images to the registry:
docker tag <image-name>:<image-tag> <docker-repo>/<image-name>:<image-tag>
podman tag <image-name>:<image-tag> <docker-repo>/<image-name>:<image-tag>
- Run one of the following commands to push the images to the
registry:
docker push <docker-repo>/<image-name>:<image-tag>
podman push <docker-repo>/<image-name>:<image-tag>
Note:
It is recommended to configure the Docker certificate before running the push command to access the customer registry through HTTPS; otherwise, the docker push command may fail.
4.1.2.3 Installing CAPIF Package
- Run the following command to access the extracted package:
cd occapif-<release_number>
For example:
cd occapif-24.2.2.0.0
- Customize the oc-capif-24.2.2-custom-values.yaml file with the required deployment parameters. See the Customizing CAPIF section to customize the file. For more information about predeployment parameter configurations, see Preinstallation.
- Run the following command to install CAPIF:
helm install -f <custom-file> <release_name> <helm-chart> --namespace <release_namespace> --timeout 10m
For example:
helm install -f occapif-24.2.2-custom-values-occapif.yaml occapif /home/cloud-user/occapif-24.2.2.tgz --namespace occapif
where:
- helm-chart is the location where the Helm charts are stored; occapif-24.2.2.tgz is the Helm chart.
- release_name is the release name used by the helm command.
Note:
- The release_name should not exceed 63 characters.
- In case of a georedundant setup, it is mandatory to use a unique release_name for each CAPIF instance.
- release_namespace is the deployment namespace used by the helm command.
- custom-file is the name of the custom values yaml file (including its location).
Note:
- You can verify the installation while running the install command by entering this command on a separate terminal:
watch kubectl get jobs,pods -n release_namespace
- The DB hooks start creating the CAPIF database tables once the helm install command is run.
The following optional parameters can be used in the helm install command:
- atomic: If this parameter is set, the installation process purges the chart on failure. The --wait flag is set automatically.
- wait: If this parameter is set, the installation process waits until all pods, PVCs, Services, and the minimum number of pods of a Deployment, StatefulSet, or ReplicaSet are in a ready state before marking the release as successful. It waits for as long as --timeout.
- timeout duration: If not specified, the default value is 300 seconds in Helm. It specifies the time to wait for any individual Kubernetes operation (like Jobs for hooks). If the helm install command fails at any point to create a Kubernetes object, it internally calls the purge to delete after the timeout value. Here, the timeout value is not for the overall installation, but for the automatic purge on installation failure.
Caution:
Do not exit from the helm install command manually. After running the helm install command, it takes some time to install all the services. In the meantime, you must not press Ctrl+C to come out of the command. It may lead to anomalous behavior.
- Press Ctrl+C to exit watch mode. Run the watch command on another terminal. Run the following command to check the status:
helm status release_name -n release_namespace
4.1.3 Postinstallation Tasks
This section explains the postinstallation tasks for CAPIF.
4.1.3.1 Verifying CAPIF Installation
- Run the following command:
helm status <helm-release> -n <namespace>
where:
- <helm-release> is the Helm release name of CAPIF.
- <namespace> is the namespace of the CAPIF deployment.
For example:
helm status occapif -n occapif
In the output, if STATUS is showing as deployed, then the installation is successful.
Sample output:
NAME: occapif
LAST DEPLOYED: Fri Sep 18 10:08:03 2020
NAMESPACE: occapif
STATUS: deployed
REVISION: 1
- Run the following command to verify if the pods are up and active:
kubectl get jobs,pods -n <namespace>
where:
- <namespace> is the namespace where CAPIF is deployed.
For example:
kubectl get pod -n occapif
In the output, the STATUS column of all the pods must be Running and the READY column of all the pods must be n/n, where n is the number of containers in the pod.
- Run the following command to verify if the services are deployed and active:
kubectl get services -n <namespace>
For example:
kubectl get services -n occapif-namespace
Note:
If the installation is unsuccessful or the STATUS of all the pods is not in the Running state, perform the troubleshooting steps provided in the Oracle Communications Cloud Native Core, Network Exposure Function Troubleshooting Guide.
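The pod readiness check in step 2 can be automated with a short awk filter over the kubectl output. The sample below runs against canned output so it is self-contained, and the pod names are invented for illustration:

```shell
# Illustrative pod-status check on canned "kubectl get pods" output.
# In a live cluster: kubectl get pods -n <namespace> | awk 'NR>1 && $3!="Running"'
cat > pods.txt <<'EOF'
NAME                      READY   STATUS    RESTARTS   AGE
occapif-afmgr-6d5f9       1/1     Running   0          5m
occapif-apimgr-7c4b2      2/2     Running   0          5m
EOF
if [ -z "$(awk 'NR>1 && $3 != "Running"' pods.txt)" ]; then
  echo "all pods Running"
else
  echo "some pods not Running"
fi
```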
4.1.3.2 Performing Helm Test
This section describes how to perform a sanity check of the CAPIF installation through Helm test. The pods to be checked are based on the namespace and label selector configured for the Helm test.
Note:
Helm test expects all the pods of a given microservice to be in the READY state for a successful result. However, the NRF Client Management microservice comes with Active/Standby mode for multi-pod support in the current release. When multi-pod support for the NRF Client Management service is enabled, you may ignore a Helm test failure for the NRF-Client-Management pod.
- Complete the Helm test configurations under the "Helm Test Global Parameters" section of the oc-capif-24.2.2-custom-values.yaml file. For more information on the Helm test parameters, see Global Parameters.
nfName: occapif
image:
  name: nf_test
  tag: 24.2.2
  registry: cgbu-cnc-comsvc-release-docker.dockerhub-phx.oci.oraclecorp.com/cgbu-ocudr-nftest
config:
  logLevel: WARN
  timeout: 120 #Beyond this duration helm test will be considered failure
resources:
  - horizontalpodautoscalers/v1
  - deployments/v1
  - configmaps/v1
  - prometheusrules/v1
  - serviceaccounts/v1
  - poddisruptionbudgets/v1
  - roles/v1
  - statefulsets/v1
  - persistentvolumeclaims/v1
  - services/v1
  - rolebindings/v1
complianceEnable: true
- Run the following command to perform the Helm test:
helm test <release_name> -n <namespace>
where:
- <release_name> is the release name.
- <namespace> is the deployment namespace where CAPIF is installed.
For example:
helm test occapif -n occapif
Sample output:
NAME: occapif
LAST DEPLOYED: Fri Nov 12 10:08:03 2020
NAMESPACE: occapif
STATUS: deployed
REVISION: 1
TEST SUITE: occapif-test
Last Started: Fri Nov 12 10:41:25 2020
Last Completed: Fri Nov 12 10:41:34 2020
Phase: Succeeded
If the Helm test fails, see Oracle Communications Cloud Native Core, Network Exposure Function Troubleshooting Guide.
4.2 Customizing CAPIF
This section provides information about customizing CAPIF deployment in a cloud native environment.
The CAPIF deployment is customized by overriding the default values of various configurable parameters in the oc-capif-24.2.2-custom-values.yaml file.
Basic Configurations
- Once the Docker platform configurations are done, proceed as per Configurable Parameters of CAPIF.
- Check that the registry is in place and contains the latest Helm charts and JAR files for the CAPIF node, as per the release.
- Unzip the Custom_Templates file available in the extracted documentation release package to get the following files, which are used to customize the deployment parameters during installation:
- oc-capif-24.2.2-custom-values.yaml: This file is used to customize the deployment parameters during installation.
- oc-capif-24.2.2-custom-values-ats.yaml: This file is used to customize the CAPIF deployment with ATS.
- CapifAlertrules-24.2.2.yaml: This file is used for Prometheus alert rules.
- ocnef_dbtier_24.2.2_custom_values_24.2.2.yaml: This file is used to install NEF with the recommended resource requirements for cnDBTier.
For more information on how to download the package from My Oracle Support, see Installation Package Download section.
- Customize the oc-capif-24.2.2-custom-values.yaml file.
- Save the updated oc-capif-24.2.2-custom-values.yaml file in the helm chart directory.
Note:
- All parameters mentioned as mandatory must be present in Custom Values YAML file and configured before the deployment.
- All fixed value parameters listed must be present in the Custom Values YAML file with the exact values as specified in this section.
For more information on the configurable parameters, see Configurable Parameters of CAPIF.
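As a quick illustration, the following is a minimal, hypothetical excerpt of the oc-capif-24.2.2-custom-values.yaml file covering a few of the global and database parameters described in the next section. The registry host, namespace, instance ID, and database hosts are placeholder values, and the exact key nesting must follow the template shipped in the Custom_Templates package:

```yaml
# Illustrative excerpt only; all values below are placeholders.
global:
  dockerRegistry: registry.example.com:5000   # registry hosting the CAPIF images
  app_name: occapif
  capifK8sNameSpace: &capifNameSpace occapif
  capifInstanceId: 6faf1bbc-0000-0000-0000-000000000001   # must be unique per CAPIF instance
  capifApiPrefix: ''
  jaegerTracingEnabled: false
  database:
    dbName: occapifdb_site1                   # must differ per georedundant instance
    releaseDbName: capifcore
    dbPrimaryHost: mysql-connectivity-service.site1
    dbSecondaryHost: mysql-connectivity-service.site2
    dbPort: 3306
    appUserSecretName: appuser-secret
    privilegedUserSecretName: privilege-user-secret
    engine: NDBCluster                        # only InnoDB or NDBCluster are accepted
```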
4.2.1 Configurable Parameters of CAPIF
This section includes information about the configuration parameters of CAPIF.
Note:
- Mandatory parameters must be configured before the CAPIF deployment.
- During installation, all the configurations would be read from Helm. Any configurations that support Update operation through REST API can only be updated using REST API or Console and any further updates to these configurations using Helm would be ignored. For further information, refer to Oracle Communications Cloud Native Core, Network Exposure Function REST Specification Guide and Configuring Network Exposure Function using the CNC Console chapter in Oracle Communications Cloud Native Core, Network Exposure Function User Guide.
4.2.1.1 Global Parameters
Table 4-4 Global Parameters
Parameter | Description | Details |
---|---|---|
dockerRegistry |
Specifies the name of the Docker registry that hosts the CAPIF Docker images. | This is the Docker registry running on the CNE bastion server where all CAPIF Docker
images are loaded.
This is a mandatory parameter. |
vendor |
The vendor name. |
This is a mandatory parameter. Default Value: Oracle |
serviceAccountName |
Name of the service account for CAPIF. |
This is an optional parameter. |
app_name |
Name of the application. |
This is a mandatory parameter. Default Value: occapif |
capifK8sNameSpace |
The Kubernetes namespace. | Default Value: &capifNameSpace occapif |
capifInstanceId |
The unique instance identifier of CAPIF. | This is a mandatory parameter.
The parameter value must be unique for each CAPIF instance in a georedundant deployment. |
capifApiPrefix |
This is the prefix set for all CAPIF APIs. | This is a mandatory parameter.
Default Value: ' ' |
Jaeger Tracing Configurations | ||
jaegerTracingEnabled |
Specifies whether to enable or disable Jaeger Tracing | Default Value: false |
openTelemetry.jaeger.probabilisticSampler |
Specifies the Jaeger message sampler | Default Value: 0.5 |
openTelemetry.jaeger.httpExporter.host |
Specifies the host of Jaeger collector service | Default Value: jaeger-collector.cne-infra |
openTelemetry.jaeger.httpExporter.port |
Specifies the port of Jaeger collector service | Default Value: 4318 |
mTLS Configurations - External Gateways | ||
externalGWConfig.initSSL |
Enables mTLS on the external gateways of CAPIF, which includes both the Ingress Gateway (incoming) and the Egress Gateway (outgoing). |
This value must be set to true only if the value for the externalGWConfig.igw.enableIncomingHttp parameter is false.
Default Value: true This is a mandatory parameter. Note: It is mandatory to configure all the
parameters under externalGWConfig.tls when this parameter is set to true. |
externalGWConfig.igw.enableIncomingHttp |
Enables HTTP requests on the northbound side (extEnableIncomingHttp). | This value can be set to true only if the value for the
externalGWConfig.initSSL parameter is false.
Default Value: false This is a mandatory parameter. Note: Either HTTP or HTTPS must be enabled on the Ingress Gateway. |
externalGWConfig.igw.publicHttpSignallingPort |
The public HTTP signaling port of the Ingress Gateway. |
Default Value: 80 This is a mandatory parameter. |
externalGWConfig.igw.publicHttpsSignallingPort |
The public HTTPS signaling port of the Ingress Gateway. |
Default Value: 443 This is a mandatory parameter. |
externalGWConfig.egw.publicHttpsSignallingPort |
The public HTTPS signaling port of the Egress Gateway. |
Default Value: 8080 This is a mandatory parameter. |
externalGWConfig.tls.privateKey.k8SecretName |
Name of the Kubernetes secret object containing the ext-capif-secret private key. |
This is an optional parameter. Default Value: ext-capif-secret |
externalGWConfig.tls.privateKey.k8NameSpace |
The namespace of the Kubernetes secret object containing ext-capif-secret. | This is an optional parameter. |
externalGWConfig.tls.privateKey.rsa.filename |
The filename containing the RSA private key details of the ext-capif-secret. | This is an optional parameter.
Default Value: rsa_private_key_pkcs1.pem |
externalGWConfig.tls.privateKey.ecdsa.filename |
The filename containing the ECDSA private key details of the ext-capif-secret. | This is an optional parameter.
Default Value: ssl_ecdsa_private_key.pem |
externalGWConfig.tls.certificate.k8SecretName |
Name of the Kubernetes secret object containing ext-capif-secret certificate. | This is an optional parameter.
Default Value: ext-capif-secret |
externalGWConfig.tls.certificate.k8NameSpace |
The namespace of the Kubernetes secret object containing ext-capif-secret certificate. |
This is a mandatory parameter. |
externalGWConfig.tls.certificate.rsa.filename |
The filename containing the RSA certificate details of the ext-capif-secret. |
This is a mandatory parameter. Default Value: tmp.cer |
externalGWConfig.tls.certificate.ecdsa.filename |
The filename containing the ECDSA certificate details of the ext-capif-secret. |
This is a mandatory parameter. Default Value: ssl_ecdsa_certificate.crt |
externalGWConfig.tls.caBundle.k8SecretName |
Name of the Kubernetes secret object containing ext-capif CA details for truststore. |
This is a mandatory parameter. Default Value: ext-capif-secret |
externalGWConfig.tls.caBundle.k8NameSpace |
The namespace of the Kubernetes secret object containing ext-capif CA details for truststore. |
This is a mandatory parameter. |
externalGWConfig.tls.caBundle.filename |
The filename containing ext-capif CA details for truststore. |
This is a mandatory parameter. Default Value: caroot.cer |
externalGWConfig.tls.keyStorePassword.k8SecretName |
Name of the Kubernetes secret object containing ext-capif KeyStore password |
This is a mandatory parameter. Default Value: ext-capif-secret |
externalGWConfig.tls.keyStorePassword.k8NameSpace |
The namespace of the Kubernetes secret object containing ext-capif KeyStore password |
This is a mandatory parameter. |
externalGWConfig.tls.keyStorePassword.filename |
The filename containing ext-capif KeyStore password |
This is a mandatory parameter. Default Value: key.txt |
externalGWConfig.tls.trustStorePassword.k8SecretName |
Name of the Kubernetes secret object containing ext-capif truststore password |
This is a mandatory parameter. Default Value: ext-capif-secret |
externalGWConfig.tls.trustStorePassword.k8NameSpace |
The namespace of the Kubernetes secret object containing ext-capif truststore password |
This is a mandatory parameter. |
externalGWConfig.tls.trustStorePassword.filename |
The filename containing ext-capif truststore password |
This is a mandatory parameter. Default Value: trust.txt |
externalGWConfig.tls.initialAlgorithm |
The initial algorithm selected by ext-capif. | Possible values are: RS256 and ES256.
Default Value: RS256 This is a mandatory parameter. |
externalGWConfig.tls.keyType |
The selected key type. | Possible values are: rsakey and ecdsakey.
Default Value: rsakey This is a mandatory parameter. |
mTLS Configurations - Network Gateways | ||
networkGWConfig.initSSL |
Enables mTLS on the 5GC gateways, which includes both the Ingress Gateway (incoming) and the Egress Gateway (outgoing) of CAPIF. |
This value must be set to true only if the value for the networkGWConfig.igw.enableIncomingHttp parameter is false.
Default Value: true This is a mandatory parameter. |
networkGWConfig.igw.enableIncomingHttp |
Enables HTTP requests on the northbound side (extEnableIncomingHttp). | This value can be set to true only if the value for the
networkGWConfig.initSSL parameter is false.
Default Value: false This is a mandatory parameter. Note: Either HTTP or HTTPS must be enabled on the Ingress Gateway. |
networkGWConfig.igw.publicHttpSignallingPort |
The public HTTP signaling port of the Ingress Gateway. |
Default Value: 80 This is a mandatory parameter. |
networkGWConfig.igw.publicHttpsSignallingPort |
The public HTTPS signaling port of the Ingress Gateway. |
Default Value: 443 This is a mandatory parameter. |
networkGWConfig.egw.publicHttpsSignallingPort |
The public HTTPS signaling port of the Egress Gateway. |
Default Value: 8080 This is a mandatory parameter. |
networkGWConfig.tls.privateKey.k8SecretName |
Name of the Kubernetes secret object containing the network-service-secret private key. |
Default Value: network-service-secret This is a mandatory parameter when the value for the networkGWConfig.initSSL parameter is set to true; otherwise, it is optional. |
networkGWConfig.tls.privateKey.k8NameSpace |
The namespace of the Kubernetes secret object containing network-service-secret. |
This is a mandatory parameter when the value for the networkGWConfig.initSSL parameter is set to true. |
networkGWConfig.tls.privateKey.rsa.filename |
The filename containing the RSA private key details of the network-service-secret. |
This is a mandatory parameter when the value for the networkGWConfig.initSSL parameter is set to true.
Default Value: rsa_private_key_pkcs1.pem |
networkGWConfig.tls.privateKey.ecdsa.filename |
The filename containing the ECDSA private key details of the network-service-secret. |
This is a mandatory parameter when the value for the networkGWConfig.initSSL parameter is set to true.
Default Value: ssl_ecdsa_private_key.pem |
networkGWConfig.tls.certificate.k8SecretName |
Name of the Kubernetes secret object containing the network-service-secret certificate. |
This is a mandatory parameter when the value for the networkGWConfig.initSSL parameter is set to true.
Default Value: network-service-secret |
networkGWConfig.tls.certificate.k8NameSpace |
The namespace of the Kubernetes secret object containing the network-service-secret certificate. |
This is a mandatory parameter when the value for the networkGWConfig.initSSL parameter is set to true. |
networkGWConfig.tls.certificate.rsa.filename |
The filename containing the RSA certificate details of the network-service-secret. |
This is a mandatory parameter when the value for the networkGWConfig.initSSL parameter is set to true.
Default Value: tmp.cer |
networkGWConfig.tls.certificate.ecdsa.filename |
The filename containing the ECDSA certificate details of the network-service-secret. |
This is a mandatory parameter when the value for the networkGWConfig.initSSL parameter is set to true.
Default Value: ssl_ecdsa_certificate.crt |
networkGWConfig.tls.caBundle.k8SecretName |
Name of the Kubernetes secret object containing the network-service-secret CA details for the truststore. |
This is a mandatory parameter when the value for the networkGWConfig.initSSL parameter is set to true.
Default Value: network-service-secret |
networkGWConfig.tls.caBundle.k8NameSpace |
The namespace of the Kubernetes secret object containing the network-service-secret CA details for the truststore. |
This is a mandatory parameter when the value for the networkGWConfig.initSSL parameter is set to true. |
networkGWConfig.tls.caBundle.filename |
The filename containing the network-service-secret CA details for the truststore. |
This is a mandatory parameter when the value for the networkGWConfig.initSSL parameter is set to true.
Default Value: caroot.cer |
networkGWConfig.tls.keyStorePassword.k8SecretName |
Name of the Kubernetes secret object containing the network-service-secret keystore password. |
This is a mandatory parameter when the value for the networkGWConfig.initSSL parameter is set to true.
Default Value: network-service-secret |
networkGWConfig.tls.keyStorePassword.k8NameSpace |
The namespace of the Kubernetes secret object containing the network-service-secret keystore password. |
This is a mandatory parameter when the value for the networkGWConfig.initSSL parameter is set to true. |
networkGWConfig.tls.keyStorePassword.filename |
The filename containing the network-service-secret keystore password. |
This is a mandatory parameter when the value for the networkGWConfig.initSSL parameter is set to true.
Default Value: key.txt |
networkGWConfig.tls.trustStorePassword.k8SecretName |
Name of the Kubernetes secret object containing the network-service-secret truststore password. |
This is a mandatory parameter when the value for the networkGWConfig.initSSL parameter is set to true. |
networkGWConfig.tls.trustStorePassword.k8NameSpace |
The namespace of the Kubernetes secret object containing the network-service-secret truststore password. |
This is a mandatory parameter when the value for the networkGWConfig.initSSL parameter is set to true.
Default Value: &trustStorePasswdSecretNameSpace default |
networkGWConfig.tls.trustStorePassword.filename |
The filename containing the network-service-secret truststore password. |
This is a mandatory parameter when the value for the networkGWConfig.initSSL parameter is set to true.
Default Value: &trustStorePasswdFileName trust.txt |
networkGWConfig.tls.initialAlgorithm |
The initial algorithm selected by network-service-secret. | Possible values are: RS256 and ES256.
Default Value: RS256 This is a mandatory parameter when the value for the networkGWConfig.initSSL parameter is set to true. |
networkGWConfig.tls.keyType |
The selected key type. | Possible values are: rsakey and ecdsakey.
Default Value: rsakey This is a mandatory parameter when the value for the networkGWConfig.initSSL parameter is set to true. |
Database Configurations | ||
database.dbName |
Name of the CAPIF service database |
This is a mandatory parameter. Note: The parameter value must be different for all the CAPIF instances in a georedundant deployment. |
database.releaseDbName |
Name of the release database containing release version details |
This is a mandatory parameter. Default Value: capifcore Note: The parameter value must be different for all the CAPIF instances in a georedundant deployment. |
database.dbPrimaryHost |
The primary host details for the database |
This is a mandatory parameter. |
database.dbSecondaryHost
|
The secondary host details for the database |
This is a mandatory parameter. |
database.dbPort |
The database port details. | |
database.appUserSecretName |
Name of the Kubernetes secret object containing Database username and password for an application user |
This is a mandatory parameter. Default Value: appuser-secret |
database.privilegedUserSecretName |
Name of the Kubernetes secret object containing Database username and password for a privileged user |
This is a mandatory parameter. Default Value: privilege-user-secret |
database.dbUNameLiteral |
Name of the key configured for "DB Username" in appuser-secret. |
This is a mandatory parameter. |
database.dbPwdLiteral |
Name of the key configured for "DB Password" in appuser-secret. |
This is a mandatory parameter. |
database.engine |
The database engine. | Any value other than InnoDB or NDBCluster leads to failure of the database table
creation process.
This is a mandatory parameter. Default Value: NDBCluster |
extraContainers |
The flag to enable or disable injecting an extra container. |
This is an optional parameter. Default Value: DISABLED |
debugToolContainerMemoryLimit |
Indicates the memory assigned for the debug tool container. | |
ingressCommonSvcName |
The common service name used for the Ingress Gateway. |
Default Value: igw |
egressCommonSvcName |
The common service name used for the Egress Gateway. |
Default Value: egw |
Prometheus Scraping Configuration | ||
prometheusScrapingConfig.enabled |
Flag to enable or disable the Prometheus Scraping Configuration. |
This is a mandatory parameter. Default Value: true |
prometheusScrapingConfig.path |
The Prometheus scrape path. |
This is a mandatory parameter. Default Value: "/actuator/prometheus" |
type |
The type of service. | Possible values are:
This is an optional parameter. Default Value: LoadBalancer |
Custom Extension Global Configuration | ||
customExtension.allResources.labels |
Custom labels that need to be added to all the CAPIF Kubernetes resources. | This can be used to add custom label(s) to all Kubernetes resources that are
created by the CAPIF Helm chart.
This is an optional parameter. |
customExtension.allResources.annotations |
Custom annotations that need to be added to all the CAPIF Kubernetes resources. | This can be used to add custom annotation(s) to all Kubernetes resources that are
created by the CAPIF Helm chart.
This is an optional parameter. |
customExtension.lbServices.labels |
Custom labels that need to be added to CAPIF services of Load Balancer type. | This can be used to add custom label(s) to all Load Balancer type services that
are created by the CAPIF Helm chart.
This is an optional parameter. |
customExtension.lbServices.annotations |
Custom annotations that need to be added to CAPIF services of Load Balancer type. | This can be used to add custom annotation(s) to all Load Balancer type services
that are created by the CAPIF Helm chart.
This is an optional parameter. |
customExtension.lbDeployments.labels |
Custom labels that need to be added to CAPIF deployments that are associated with a service of Load Balancer type. | This can be used to add custom label(s) to the deployments created by the
CAPIF Helm chart that are associated with a Load Balancer type service.
This is an optional parameter. |
customExtension.lbDeployments.annotations |
Custom annotations that need to be added to CAPIF deployments that are associated with a service of Load Balancer type. | This can be used to add custom annotation(s) to the deployments created by the
CAPIF Helm chart that are associated with a Load Balancer type service.
This is an optional parameter. |
customExtension.nonlbServices.labels |
Custom labels that need to be added to CAPIF services that are not of Load Balancer type. | This can be used to add custom label(s) to all non-Load Balancer type services
that are created by the CAPIF Helm chart.
This is an optional parameter. |
customExtension.nonlbServices.annotations |
Custom annotations that need to be added to CAPIF services that are not of Load Balancer type. | This can be used to add custom annotation(s) to all non-Load Balancer type
services that are created by the CAPIF Helm chart.
This is an optional parameter. |
customExtension.nonlbDeployments.labels |
Custom labels that need to be added to CAPIF deployments that are associated with a service that is not of Load Balancer type. | This can be used to add custom label(s) to all deployments created by the
CAPIF Helm chart that are associated with a service that is not of Load Balancer type.
This is an optional parameter. |
k8sResource.container.prefix |
Value that is prefixed to all the container names of CAPIF. | |
k8sResource.container.suffix |
Value that is suffixed to all the container names of CAPIF. | |
Helm Test Hook Configurations | ||
test.nfName |
Name of the NF |
This is a mandatory parameter. Default Value: occapif |
test.image.name |
Image name for the test container |
This is a mandatory parameter. Default Value: nf_test |
test.image.tag |
Tag for the test container image |
This is a mandatory parameter. |
test.config.logLevel |
Set the logging level |
This is a mandatory parameter. Default Value: WARN |
test.config.timeout |
Specifies the timeout until which the test container checks for the pod's health. |
This is a mandatory parameter. Default Value: 20 |
test.resources |
Specifies the helm test resources. |
Example:
horizontalpodautoscalers/v1 deployments/v1 configmaps/v1 prometheusrules/v1 serviceaccounts/v1 poddisruptionbudgets/v1 roles/v1 statefulsets/v1 persistentvolumeclaims/v1 services/v1 rolebindings/v1 |
test.complianceEnable |
Enables or disables the helm test logging feature. |
This is a mandatory parameter. Possible values are:
Default value: false |
Configurable Error Codes Configuration | ||
configurableErrorCodes.enabled |
Specifies if the configurable error codes must be enabled. |
This is a mandatory parameter. Possible values are:
Default value: false |
configurableErrorCodes.errorScenarios |
Contains a list of exceptionType, errorCode, errorDescription, errorCause, and errorTitle values for which a failover must be performed. |
Example:
configurableErrorCodes:
  enabled: false
  errorScenarios:
    - exceptionType: "ConnectException"
      errorCode: "503"
      errorDescription: "Connection failure"
      errorCause: "Connection Refused"
      errorTitle: "ConnectException" |
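The mTLS tables above reference a set of key, certificate, and password files packaged inside Kubernetes secrets such as ext-capif-secret. The following is only a hedged sketch of staging files with the default names using OpenSSL: the subject names and passwords are placeholders, and self-signed material stands in for your real PKI. The kubectl step is shown as a comment because it requires cluster access.

```shell
# Sketch: stage the files referenced by the externalGWConfig.tls parameters.
# All subject names and passwords below are placeholders.
openssl genrsa -out rsa_private_key_pkcs1.pem 2048
openssl req -new -x509 -key rsa_private_key_pkcs1.pem \
  -subj "/CN=capif.example.com" -days 365 -out tmp.cer
# A self-signed stand-in for the CA bundle (use your real CA chain in practice):
openssl req -new -x509 -newkey rsa:2048 -nodes -keyout ca.key \
  -subj "/CN=example-root-ca" -days 365 -out caroot.cer
echo 'changeit' > key.txt     # keystore password (placeholder)
echo 'changeit' > trust.txt   # truststore password (placeholder)
# Then package everything into the ext-capif-secret secret (requires cluster access):
# kubectl create secret generic ext-capif-secret -n <namespace> \
#   --from-file=rsa_private_key_pkcs1.pem --from-file=tmp.cer \
#   --from-file=caroot.cer --from-file=key.txt --from-file=trust.txt
```

The file names match the table defaults so that the corresponding filename parameters can be left at their default values.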
4.2.1.2 API Manager Parameters
Table 4-5 API Manager Parameters
Parameter | Description | Details |
---|---|---|
image.name |
The occapif-apimgr image name. |
This is an optional parameter. Default Value: oc_occapif_apimgr |
image.tag |
Tag name of image |
This is an optional parameter. Default Value: CAPIF Images |
image.pullPolicy |
Indicates if the image needs to be pulled. | Possible Values are:
This is an optional parameter. Default Value: IfNotPresent |
resources.limits.cpu |
Maximum amount of CPU that Kubernetes allows the job resource to use. |
This is an optional parameter. Default Value: 4 |
resources.limits.memory |
Maximum amount of memory that Kubernetes allows the job resource to use. |
This is an optional parameter. Default Value: 4Gi |
resources.requests.cpu |
The amount of CPU that the system guarantees for the resource, and Kubernetes will use this value to decide on which node to place the pod. |
This is an optional parameter. Default Value: 4 |
resources.requests.memory |
The amount of memory that the system guarantees for the resource, and Kubernetes will use this value to decide on which node to place the pod. |
This is an optional parameter. Default Value: 4Gi |
resources.target.averageCpuUtil |
Target CPU utilization after which Horizontal Pod Autoscaler will be triggered. |
This is an optional parameter. Default Value: 60 |
minReplicas |
Minimum replicas to scale to maintain an average CPU utilization |
This is an optional parameter. Default Value: 2 |
maxReplicas |
Maximum replicas to scale to maintain an average CPU utilization |
This is an optional parameter. Default Value: 3 |
maxUnavailable |
Maximum number of pods that can be unavailable during a rolling update. |
Default Value: 25% |
maxSurge |
Maximum number of pods that can be created above the desired replica count during a rolling update. |
Default Value: 25% |
readinessProbe.initialDelaySeconds |
Informs the kubelet that it should wait xx seconds before performing the first probe. |
This is an optional parameter. Default Value: 25 |
readinessProbe.periodSeconds |
Specifies that the kubelet should perform a readiness probe every xx seconds. |
This is an optional parameter. Default Value: 10 |
readinessProbe.timeoutSeconds |
Number of seconds after which the probe times out |
This is an optional parameter. Default Value: 3 |
readinessProbe.successThreshold |
Minimum consecutive successes for the probe to be considered successful after having failed |
This is an optional parameter. Default Value: 1 |
readinessProbe.failureThreshold |
When a Pod starts and the probe fails, Kubernetes will try failureThreshold times before giving up |
This is an optional parameter. Default Value: 3 |
livenessProbe.initialDelaySeconds
|
Tells the kubelet that it should wait xx seconds before performing the first probe. |
This is an optional parameter. Default Value: 25 |
livenessProbe.periodSeconds
|
Specifies that the kubelet should perform a liveness probe every xx seconds. |
This is an optional parameter. Default Value: 10 |
livenessProbe.timeoutSeconds |
Number of seconds after which the probe times out |
This is an optional parameter. Default Value: 5 |
livenessProbe.successThreshold
|
Minimum consecutive successes for the probe to be considered successful after having failed |
This is an optional parameter. Default Value: 1 |
livenessProbe.failureThreshold
|
When a Pod starts and the probe fails, Kubernetes will try failureThreshold times before giving up |
This is an optional parameter. Default Value: 5 |
service.type
|
Type of the service. |
This is an optional parameter. Default Value: ClusterIP |
deployment.customExtension.labels |
Custom labels that need to be added to the API Manager deployment. | |
deployment.customExtension.annotations |
Custom annotations that need to be added to the API Manager deployment. | |
database.name |
The database name for the occapif-apimgr service | |
log.level.root |
Log level for root level logs | This is an optional parameter.
Default Value: WARN |
log.level.apimgr |
Log level for apimgr level logs | This is an optional parameter.
Default Value: INFO |
extraContainers |
Specifies if an extra container must be used for the DEBUG tool. | Possible Values are:
This is an optional parameter. Default Value: USE_GLOBAL_VALUE |
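To make the parameter-to-YAML mapping concrete, the following is a hypothetical excerpt overriding a few API Manager parameters from the table above. The top-level service key name (apimgr here) and the exact nesting are assumptions that must be checked against the shipped custom-values template:

```yaml
# Illustrative excerpt only; the service key name is an assumption.
apimgr:
  image:
    name: oc_occapif_apimgr
    tag: 24.2.2
    pullPolicy: IfNotPresent
  resources:
    limits:
      cpu: 4
      memory: 4Gi
    requests:
      cpu: 4
      memory: 4Gi
    target:
      averageCpuUtil: 60
  minReplicas: 2
  maxReplicas: 3
  log:
    level:
      root: WARN
      apimgr: INFO
```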
4.2.1.3 AF Manager Parameters
Table 4-6 AF Manager Parameters
Parameter | Description | Details |
---|---|---|
image.name |
Name of image. |
This is an optional parameter. Default Value: oc_capif_afmgr |
image.tag |
Tag name of image. |
This is an optional parameter. Default Value: CAPIF Images |
image.pullPolicy |
Image pull policy | Possible Values are:
This is an optional parameter. Default Value: IfNotPresent |
initContainersImage.name |
Image Name for AF Manager init container |
This is an optional parameter. Default Value: configurationinit |
initContainersImage.tag |
Tag Name for AF Manager init container |
This is an optional parameter. Default Value: CAPIF Images |
initContainersImage.pullPolicy |
Image pull policy | Possible Values are:
This is an optional parameter. Default Value: IfNotPresent |
updateContainersImage.name |
Image Name for update container |
This is an optional parameter. Default Value: configurationupdate |
updateContainersImage.tag |
Tag Name for update container |
This is an optional parameter. Default Value: CAPIF Images |
updateContainersImage.pullPolicy |
Image pull policy | Possible Values are:
This is an optional parameter. Default Value: IfNotPresent |
resources.limits.cpu |
Maximum amount of CPU that Kubernetes allows the job resource to use. |
This is an optional parameter. Default Value: 4 |
resources.limits.memory |
Maximum amount of memory that Kubernetes allows the job resource to use. |
This is an optional parameter. Default Value: 4Gi |
resources.requests.cpu |
The amount of CPU that the system guarantees for the resource, and Kubernetes will use this value to decide on which node to place the pod. |
This is an optional parameter. Default Value: 4 |
resources.requests.memory |
The amount of memory that the system guarantees for the resource, and Kubernetes will use this value to decide on which node to place the pod. |
This is an optional parameter. Default Value: 4Gi |
resources.target.averageCpuUtil |
Target CPU utilization after which Horizontal Pod Autoscaler will be triggered. |
This is an optional parameter. Default Value: 60 |
minReplicas |
Minimum replicas to scale to maintain an average CPU utilization |
This is an optional parameter. Default Value: 2 |
maxReplicas |
Maximum replicas to scale to maintain an average CPU utilization |
This is an optional parameter. Default Value: 3 |
maxUnavailable |
Maximum number of pods that can be unavailable during a rolling update. |
Default Value: 25% |
maxSurge |
Maximum number of pods that can be created above the desired replica count during a rolling update. |
Default Value: 25% |
readinessProbe.initialDelaySeconds |
Tells the kubelet that it should wait xx seconds before performing the first probe. |
This is an optional parameter. Default Value: 25 |
readinessProbe.periodSeconds |
Specifies that the kubelet should perform a readiness probe every xx seconds. |
This is an optional parameter. Default Value: 10 |
readinessProbe.timeoutSeconds |
Number of seconds after which the probe times out |
This is an optional parameter. Default Value: 3 |
readinessProbe.successThreshold |
Minimum consecutive successes for the probe to be considered successful after having failed |
This is an optional parameter. Default Value: 1 |
readinessProbe.failureThreshold |
When a Pod starts and the probe fails, Kubernetes will try failureThreshold times before giving up |
This is an optional parameter. Default Value: 3 |
livenessProbe.initialDelaySeconds |
Tells the kubelet that it should wait xx seconds before performing the first probe. |
This is an optional parameter. Default Value: 25 |
livenessProbe.periodSeconds
|
Specifies that the kubelet should perform a liveness probe every xx seconds. |
This is an optional parameter. Default Value: 10 |
livenessProbe.timeoutSeconds |
Number of seconds after which the probe times out |
This is an optional parameter. Default Value: 5 |
livenessProbe.successThreshold
|
Minimum consecutive successes for the probe to be considered successful after having failed |
This is an optional parameter. Default Value: 1 |
livenessProbe.failureThreshold
|
When a Pod starts and the probe fails, Kubernetes will try failureThreshold times before giving up |
This is an optional parameter. Default Value: 5 |
deployment.customExtension.labels |
Custom labels that need to be added to the AF Manager deployment. | |
deployment.customExtension.annotations |
Custom annotations that need to be added to the AF Manager deployment. | |
initssl |
Indicates whether SSL is initialized for the service. | This value must always be set to true.
Default Value: true |
service.type |
The type of the service. | |
service.customExtension.labels |
Custom labels that need to be added to the AF Manager service. | |
service.customExtension.annotations |
Custom annotations that need to be added to the AF Manager service. | |
ssl.keyType |
The selected key type. | |
ssl.tlsVersion |
The TLS version. | |
ssl.privateKey.k8SecretName |
Secret name that contains CAPIF AF Manager Private Key | |
ssl.privateKey.k8NameSpace |
Namespace in which k8SecretName is present | |
ssl.privateKey.rsa.filename |
CAPIF's Private Key (RSA type) file name | |
ssl.privateKey.ecdsa.filename |
CAPIF's Private Key (ECDSA type) file name | |
certificate.k8SecretName |
Secret name that contains CAPIF's certificate for HTTPS | |
certificate.k8NameSpace |
Namespace in which CAPIF's certificate is present | |
certificate.rsa.filename |
CAPIF's Certificate (RSA type) file name | |
certificate.ecdsa.filename |
CAPIF's Certificate (ECDSA type) file name | |
caBundle.k8SecretName |
Secret name that contains CAPIF's CA details for HTTPS | |
caBundle.k8NameSpace |
Namespace in which CAPIF's CA details is present | |
caBundle.filename |
CAPIF's CA bundle filename | |
keyStorePassword.k8SecretName |
Secret name that contains keyStorePassword | |
keyStorePassword.k8NameSpace |
Namespace in which CAPIF's keystore password is present | |
keyStorePassword.filename |
CAPIF's Key Store password Filename | |
trustStorePassword.k8SecretName |
Secret name that contains trustStorePassword | |
trustStorePassword.k8NameSpace |
Namespace in which trustStorePassword is present | |
trustStorePassword.filename |
CAPIF's trustStorePassword Filename | |
ssl.initialAlgorithm |
Initial Algorithm for HTTPS | |
database.name |
The CAPIF AF Manager database name | |
extraContainers |
Specifies if an extra container must be used for the DEBUG tool. | Possible Values are:
This is an optional parameter. Default Value: USE_GLOBAL_VALUE |
log.level.root |
Log level for root logs | |
log.level.afmgr |
Log level for AF Manager service logs | |
log.level.updateContainer |
Log level for updateContainer logs | |
accessToken.expiryTime |
Sets the validity period of the token used to access NEF services. | This is a mandatory parameter.
Default value: 3600 seconds |
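The AF Manager parameters above can be sketched as a small custom-values excerpt. This is illustrative only: the top-level service key name (afmgr here) is an assumption, and the log levels shown are placeholders to be checked against the shipped template:

```yaml
# Illustrative excerpt only; the service key name is an assumption.
afmgr:
  initssl: true          # must always remain true
  accessToken:
    expiryTime: 3600     # token validity, in seconds
  log:
    level:
      root: WARN
      afmgr: INFO
      updateContainer: INFO
```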
4.2.1.4 Event Manager Parameters
Table 4-7 Event Manager Parameters
Parameter | Description | Details |
---|---|---|
image.name |
Name of image. |
This is an optional parameter. Default Value: oc_capif_eventmgr |
image.tag |
Tag name of image. |
This is an optional parameter. Default Value: CAPIF Images |
image.pullPolicy |
Image pull policy | Possible Values are:
This is an optional parameter. Default Value: IfNotPresent |
httpTwoEnabled |
Indicates whether HTTP/2 is enabled. |
Default Value: true |
log.level.root |
Log level for root logs |
Default Value: DEBUG |
log.level.events |
Log level for Event Manager service logs |
Default Value: DEBUG |
resources.limits.cpu |
Maximum amount of CPU that Kubernetes allows the job resource to use. |
This is an optional parameter. Default Value: 4 |
resources.limits.memory |
Maximum amount of memory that Kubernetes allows the job resource to use. |
This is an optional parameter. Default Value: 4Gi |
resources.requests.cpu |
The amount of CPU that the system guarantees for the resource, and Kubernetes will use this value to decide on which node to place the pod. |
This is an optional parameter. Default Value: 4 |
resources.requests.memory |
The amount of memory that the system guarantees for the resource, and Kubernetes will use this value to decide on which node to place the pod. |
This is an optional parameter. Default Value: 4Gi |
resources.target.averageCpuUtil |
Target CPU utilization after which Horizontal Pod Autoscaler will be triggered. |
This is an optional parameter. Default Value: 60 |
minReplicas |
Minimum replicas to scale to maintain an average CPU utilization |
This is an optional parameter. Default Value: 2 |
maxReplicas |
Maximum replicas to scale to maintain an average CPU utilization |
This is an optional parameter. Default Value: 2 |
maxUnavailable |
Maximum number of pods that can be unavailable during a rolling update. |
Default Value: 25% |
maxSurge |
Maximum number of pods that can be created above the desired replica count during a rolling update. |
Default Value: 25% |
readinessProbe.initialDelaySeconds |
Tells the kubelet that it should wait xx seconds before performing the first probe |
This is an optional parameter. Default Value: 40 |
readinessProbe.periodSeconds |
Specifies that the kubelet should perform a readiness probe every xx seconds |
This is an optional parameter. Default Value: 10 |
readinessProbe.timeoutSeconds |
Number of seconds after which the probe times out |
This is an optional parameter. Default Value: 3 |
readinessProbe.successThreshold |
Minimum consecutive successes for the probe to be considered successful after having failed |
This is an optional parameter. Default Value: 1 |
readinessProbe.failureThreshold |
When a Pod starts and the probe fails, Kubernetes will try failureThreshold times before giving up |
This is an optional parameter. Default Value: 3 |
livenessProbe.initialDelaySeconds |
Tells the kubelet that it should wait xx seconds before performing the first probe |
This is an optional parameter. Default Value: 40 |
livenessProbe.periodSeconds |
Specifies that the kubelet should perform a liveness probe every xx seconds |
This is an optional parameter. Default Value: 10 |
livenessProbe.timeoutSeconds |
Number of seconds after which the probe times out |
This is an optional parameter. Default Value: 5 |
livenessProbe.successThreshold |
Minimum consecutive successes for the probe to be considered successful after having failed |
This is an optional parameter. Default Value: 1 |
livenessProbe.failureThreshold |
When a Pod starts and the probe fails, Kubernetes will try failureThreshold times before giving up |
This is an optional parameter. Default Value: 5 |
extraContainers |
Specifies if an extra container must be used for the DEBUG tool. | |
deployment.customExtension.labels |
Custom labels that need to be added to the Event Manager deployment. | |
deployment.customExtension.annotations |
Custom annotations that need to be added to the Event Manager deployment. | |
service.type |
The type of the service. | |
service.customExtension.labels |
Custom labels that need to be added to the Event Manager service. | |
service.customExtension.annotations |
Custom annotations that need to be added to the Event Manager service. | |
auditconf.purgeExpSubsCron |
Sets the cron job schedule for deleting expired events. |
Default Value: "0 0 1 * * ?" (1 AM every day) |
4.2.1.5 Ingress Gateway Parameters
The following table describes the parameters for the External Ingress GW and Network Ingress GW services.
Table 4-8 External Ingress Gateway (ingress-gateway) Parameters
Parameter | Description | Details |
---|---|---|
global.type |
The service type that will be used for this deployment. | It is not recommended to change the service
type.
Default Value: LoadBalancer |
global.staticIpAddressEnabled |
Specifies if static load balancer IP needs to be set |
Default Value: false |
global.staticIpAddress |
Static IP address assigned to the Load Balancer from the external load balancer IP pool. | |
global.staticNodePortEnabled |
Specifies if static node port needs to be set |
Default Value: false |
global.staticHttpNodePort |
Static HTTP Node Port | |
global.staticHttpsNodePort |
Static HTTPS Node Port | |
global.publicHttpSignalingPort |
HTTP service port on which CAPIF Ingress Gateway is exposed | |
global.publicHttpsSignallingPort |
HTTPS service port on which CAPIF Ingress Gateway is exposed | |
enableIncomingHttp |
This flag is for enabling/disabling HTTP/2.0 (insecure) in Ingress Gateway. | If the value is set to false, the Ingress Gateway will not accept any HTTP/2.0 (insecure) traffic.
Note: This parameter is applicable only for the External Ingress GW. |
enableIncomingHttps |
This flag is for enabling/disabling HTTPS/2.0 (secure) in Ingress Gateway. |
Note: This parameter is applicable only for the External Ingress GW. If the value is set to false, the Ingress Gateway will not accept any HTTPS/2.0 (secure) traffic. |
image.name |
Ingress Gateway image name |
This is an optional parameter. Default Value: ocingress_gateway |
image.tag |
Ingress Gateway image tag |
This is an optional parameter. Default Value: CAPIF Images |
image.pullPolicy |
Indicates if the image needs to be pulled | Possible Values are:
This is an optional parameter. Default Value: IfNotPresent |
initContainersImage.name |
Image Name for Ingress GW init container |
This is an optional parameter. Default Value: configurationinit |
initContainersImage.tag |
Tag Name for Ingress Gateway init container |
This is an optional parameter. Default Value: CAPIF Images |
initContainersImage.pullPolicy |
Image pull policy | Possible Values are:
This is an optional parameter. Default Value: IfNotPresent |
updateContainersImage.name |
Image Name for Ingress Gateway update container |
This is an optional parameter. Default Value: configurationupdate |
updateContainersImage.tag |
Tag Name for update container |
This is an optional parameter. Default Value: CAPIF Images |
updateContainersImage.pullPolicy |
Image pull policy | Possible Values are:
This is an optional parameter. Default Value: IfNotPresent |
ports.actuatorPort |
The actuator port detail. |
Default Value: 9090 |
ports.containerPort |
The container port detail. |
Default Value: 8081 |
ports.containersslPort |
The SSL container port detail. |
Default Value: 8443 |
jaegerTracingEnabled |
Specifies whether to enable or disable Jaeger Tracing at Ingress Gateway. | When this flag is set to true, make sure to
update all Jaeger related attributes with the correct
values.
Default Value: false |
openTelemetry.jaeger.httpExporter.host |
Specifies the host name of Jaeger Agent service | Default Value: jaeger-collector.cne-infra |
openTelemetry.jaeger.httpExporter.port |
Specifies the port of Jaeger Agent service | Default Value: 4318 |
openTelemetry.jaeger.probabilisticSampler |
Specifies the Jaeger message sampler | Default Value: 0.5 |
service.ssl.tlsVersion |
The TLS version. | |
service.ssl.privateKey.k8SecretName |
Secret name that contains CAPIF Ingress Gateway Private Key | |
service.ssl.privateKey.k8NameSpace |
Namespace in which k8SecretName is present | |
service.ssl.privateKey.rsa.filename |
CAPIF's Private Key (RSA type) file name | |
service.ssl.privateKey.ecdsa.filename |
CAPIF Ingress Gateway Private Key (ecdsa type) file name | |
service.certificate.k8SecretName |
Secret name that contains CAPIF Ingress Gateway certificate for HTTPS | |
service.certificate.k8NameSpace |
Namespace in which k8SecretName is present | |
service.certificate.rsa.filename |
CAPIF Ingress Gateway Certificate (RSA type) file name | |
service.certificate.ecdsa.filename |
CAPIF Ingress Gateway Certificate (ECDSA type) file name | |
service.caBundle.k8SecretName |
Secret name that contains CAPIF Ingress Gateway's CA details for HTTPS | |
service.caBundle.k8NameSpace |
Namespace that contains CAPIF Ingress Gateway's CA details for HTTPS | |
caBundle.filename |
CAPIF Ingress Gateway's CA bundle filename | |
service.keyStorePassword.k8SecretName |
Secret name that contains keyStorePassword | |
service.keyStorePassword.k8NameSpace |
Namespace in which CAPIF Ingress Gateway's keystore password is present | |
service.keyStorePassword.filename |
CAPIF Ingress Gateway's Key Store password Filename | |
service.trustStorePassword.k8SecretName |
Secret name that contains trustStorePassword | |
service.trustStorePassword.k8NameSpace |
Namespace in which trustStorePassword is present | |
service.trustStorePassword.filename |
CAPIF Ingress Gateway's trustStorePassword Filename | |
service.initialAlgorithm |
Initial Algorithm for HTTPS | |
service.customExtension.labels |
Custom labels that need to be added to the Ingress Gateway service. | |
service.customExtension.annotations |
Custom annotations that need to be added to the Ingress Gateway service. | |
deployment.customExtension.labels |
Custom labels that need to be added to the Ingress Gateway deployment. | |
deployment.customExtension.annotations |
Custom annotations that need to be added to the Ingress Gateway deployment. | |
log.level.root |
Log level for root logs | Possible values are:
|
log.level.ingress |
Log level for ingress logs | Possible values are:
|
log.level.oauth |
Log level for oauth logs | Possible values are:
|
log.level.updateContainer |
Log level for updateContainer logs | Possible values are:
|
log.level.configclient |
Log level for configclient logs | Possible values are:
|
log.level.hook |
Log level for hook logs | Possible values are:
|
log.level.cncc.security |
Log level for CNC Console security | Possible values are:
|
readinessProbe.initialDelaySeconds |
Tells the kubelet that it should wait xx seconds before performing the first probe |
Default Value: 30 |
readinessProbe.periodSeconds |
Specifies that the kubelet should perform a readiness probe every xx seconds |
Default Value: 3 |
readinessProbe.timeoutSeconds |
Number of seconds after which the probe times out |
Default Value: 10 |
readinessProbe.successThreshold |
Minimum consecutive successes for the probe to be considered successful after having failed |
Default Value: 1 |
readinessProbe.failureThreshold |
When a Pod starts and the probe fails, Kubernetes will try failureThreshold times before giving up |
Default Value: 3 |
livenessProbe.initialDelaySeconds |
Tells the kubelet that it should wait xx seconds before performing the first probe |
Default Value: 30 |
livenessProbe.periodSeconds |
Specifies that the kubelet should perform a liveness probe every xx seconds |
Default Value: 3 |
livenessProbe.timeoutSeconds |
Number of seconds after which the probe times out |
Default Value: 15 |
livenessProbe.successThreshold |
Minimum consecutive successes for the probe to be considered successful after having failed |
Default Value: 1 |
livenessProbe.failureThreshold |
When a Pod starts and the probe fails, Kubernetes will try failureThreshold times before giving up |
Default Value: 3 |
resources.limits.cpu |
Maximum amount of CPU that Kubernetes allows the job resource to use. |
Default Value: 4 |
resources.limits.initServiceCpu |
Maximum amount of initServiceCpu that Kubernetes allows the job resource to use. |
Default Value: 1 |
resources.limits.updateServiceCpu |
Maximum amount of updateServiceCpu that Kubernetes allows the job resource to use. |
Default Value: 1 |
resources.limits.memory |
Maximum amount of memory that Kubernetes allows the job resource to use. |
Default Value: 4Gi |
resources.limits.initServiceMemory |
Maximum amount of memory that Kubernetes allows the job resource to use. |
Default Value: 1Gi |
resources.limits.updateServiceMemory |
Maximum amount of updateServiceMemory that Kubernetes allows the job resource to use. |
Default Value: 1Gi |
resources.requests.cpu |
The amount of CPU that the system guarantees for the resource, and Kubernetes will use this value to decide on which node to place the pod. |
Default Value: 4 |
resources.requests.initServiceCpu |
The amount of initServiceCpu that the system guarantees for the resource, and Kubernetes will use this value to decide on which node to place the pod. |
Default Value: 1 |
resources.requests.updateServiceCpu |
The amount of updateServiceCpu that the system guarantees for the resource, and Kubernetes will use this value to decide on which node to place the pod. |
Default Value: 1 |
resources.requests.memory |
The amount of memory that the system guarantees for the resource, and Kubernetes will use this value to decide on which node to place the pod. |
Default Value: 4Gi |
resources.requests.initServiceMemory |
The amount of initServiceMemory that the system guarantees for the resource, and Kubernetes will use this value to decide on which node to place the pod. |
Default Value: 1Gi |
resources.requests.updateServiceMemory |
The amount of updateServiceMemory that the system guarantees for the resource, and Kubernetes will use this value to decide on which node to place the pod. |
Default Value: 1Gi |
resources.target.averageCpuUtil |
Target CPU utilization after which Horizontal Pod Autoscaler will be triggered. |
Default Value: 80 |
requestTimeOut |
Specifies the time the server waits for a response before timing out. Update this value based on the network latency. | Default Value: 2500 ms |
minReplicas |
Minimum replicas to scale to maintain an average CPU utilization |
Default Value: 2 |
maxReplicas |
Maximum replicas to scale to maintain an average CPU utilization |
Default Value: 5 |
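The service.ssl.* parameters in Table 4-8 reference key material through Kubernetes Secrets rather than embedding it in the chart. The manifest below is a hedged sketch of such a Secret; the secret name, namespace, and data keys are placeholders that must match the corresponding k8SecretName, k8NameSpace, and filename values in your custom values file.

```yaml
# Illustrative Secret referenced by the Ingress Gateway TLS parameters
apiVersion: v1
kind: Secret
metadata:
  name: capif-igw-tls-secret     # example name -> service.ssl.privateKey.k8SecretName
  namespace: occapif-namespace   # example namespace -> service.ssl.privateKey.k8NameSpace
type: Opaque
stringData:
  keyStorePassword.txt: "changeit"  # example -> service.keyStorePassword.filename
  # the private key, certificate, and CA bundle files are added to the
  # relevant secrets in the same way, keyed by the configured filenames
```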
Table 4-9 Network Ingress Gateway (ingressgateway) Parameters
Parameter | Description | Details |
---|---|---|
global.type |
The service type that will be used for this deployment. | It is not recommended to change the service
type.
Default Value: LoadBalancer |
global.staticIpAddressEnabled |
Specifies if static load balancer IP needs to be set |
Default Value: false |
global.staticIpAddress |
Static IP address assigned to the Load Balancer from the external load balancer IP pool. | |
global.staticNodePortEnabled |
Specifies if static node port needs to be set |
Default Value: false |
global.staticHttpNodePort |
Static HTTP Node Port | |
global.staticHttpsNodePort |
Static HTTPS Node Port | |
global.publicHttpSignalingPort |
HTTP service port on which CAPIF Ingress Gateway is exposed | |
global.publicHttpsSignallingPort |
HTTPS service port on which CAPIF Ingress Gateway is exposed | |
enabled |
Specifies if the service is enabled. |
Default Value: true |
image.name |
Ingress Gateway image name |
This is an optional parameter. Default Value: ocingress_gateway |
image.tag |
Ingress Gateway image tag |
This is an optional parameter. Default Value: CAPIF Images |
image.pullPolicy |
Indicates if the image needs to be pulled | Possible Values are:
This is an optional parameter. Default Value: IfNotPresent |
initContainersImage.name |
Image Name for Ingress GW init container |
This is an optional parameter. Default Value: configurationinit |
initContainersImage.tag |
Tag Name for Ingress Gateway init container |
This is an optional parameter. Default Value: CAPIF Images |
initContainersImage.pullPolicy |
Image pull policy | Possible Values are:
This is an optional parameter. Default Value: IfNotPresent |
updateContainersImage.name |
Image Name for Ingress Gateway update container |
This is an optional parameter. Default Value: configurationupdate |
updateContainersImage.tag |
Tag Name for update container |
This is an optional parameter. Default Value: CAPIF Images |
updateContainersImage.pullPolicy |
Image pull policy | Possible Values are:
This is an optional parameter. Default Value: IfNotPresent |
extraContainers |
Controls the usage of extra container (DEBUG tool). |
Default Value: USE_GLOBAL_VALUE |
service.ssl.tlsVersion |
The TLS version. | |
service.ssl.privateKey.k8SecretName |
Secret name that contains CAPIF Ingress Gateway Private Key | |
service.ssl.privateKey.k8NameSpace |
Namespace in which k8SecretName is present | |
service.ssl.privateKey.rsa.filename |
CAPIF's Private Key (RSA type) file name | |
service.ssl.privateKey.ecdsa.filename |
CAPIF Ingress Gateway Private Key (ecdsa type) file name | |
service.certificate.k8SecretName |
Secret name that contains CAPIF Ingress Gateway certificate for HTTPS | |
service.certificate.k8NameSpace |
Namespace in which k8SecretName is present | |
service.certificate.rsa.filename |
CAPIF Ingress Gateway Certificate (RSA type) file name | |
service.certificate.ecdsa.filename |
CAPIF Ingress Gateway Certificate (ECDSA type) file name | |
service.caBundle.k8SecretName |
Secret name that contains CAPIF Ingress Gateway's CA details for HTTPS | |
service.caBundle.k8NameSpace |
Namespace that contains CAPIF Ingress Gateway's CA details for HTTPS | |
caBundle.filename |
CAPIF Ingress Gateway's CA bundle filename | |
service.keyStorePassword.k8SecretName |
Secret name that contains keyStorePassword | |
service.keyStorePassword.k8NameSpace |
Namespace in which CAPIF Ingress Gateway's keystore password is present | |
service.keyStorePassword.filename |
CAPIF Ingress Gateway's Key Store password Filename | |
service.trustStorePassword.k8SecretName |
Secret name that contains trustStorePassword | |
service.trustStorePassword.k8NameSpace |
Namespace in which trustStorePassword is present | |
service.trustStorePassword.filename |
CAPIF Ingress Gateway's trustStorePassword Filename | |
service.initialAlgorithm |
Initial Algorithm for HTTPS | |
service.customExtension.labels |
Custom labels that need to be added to the Ingress Gateway service. | |
service.customExtension.annotations |
Custom annotations that need to be added to the Ingress Gateway service. | |
deployment.customExtension.labels |
Custom labels that need to be added to the Ingress Gateway deployment. | |
deployment.customExtension.annotations |
Custom annotations that need to be added to the Ingress Gateway deployment. | |
ports.containerPort |
The container port detail. |
Default Value: 8081 |
ports.containersslPort |
The SSL container port detail. |
Default Value: 8443 |
ports.actuatorPort |
The actuator port detail. |
Default Value: 9090 |
log.level.root |
Log level for root logs | Possible values are:
|
log.level.ingress |
Log level for ingress logs | Possible values are:
|
log.level.oauth |
Log level for oauth logs | Possible values are:
|
log.level.updateContainer |
Log level for updateContainer logs | Possible values are:
|
log.level.configclient |
Log level for configclient logs | Possible values are:
|
log.level.hook |
Log level for hook logs | Possible values are:
|
log.level.cncc.security |
Log level for CNC Console security | Possible values are:
|
log.traceIdGenerationEnabled |
Specifies if trace ID generation is enabled. |
Default Value: true |
startupProbe.initialDelaySeconds |
Tells the kubelet that it should wait xx seconds before performing the first probe |
Default Value: 30 |
startupProbe.periodSeconds |
Specifies that the kubelet should perform a startup probe every xx seconds |
Default Value: 3 |
startupProbe.timeoutSeconds |
Number of seconds after which the probe times out |
Default Value: 10 |
startupProbe.successThreshold |
Minimum consecutive successes for the probe to be considered successful after having failed |
Default Value: 1 |
startupProbe.failureThreshold |
When a Pod starts and the probe fails, Kubernetes will try failureThreshold times before giving up |
Default Value: 6 |
readinessProbe.initialDelaySeconds |
Tells the kubelet that it should wait xx seconds before performing the first probe |
Default Value: 30 |
readinessProbe.periodSeconds |
Specifies that the kubelet should perform a readiness probe every xx seconds |
Default Value: 3 |
readinessProbe.timeoutSeconds |
Number of seconds after which the probe times out |
Default Value: 10 |
readinessProbe.successThreshold |
Minimum consecutive successes for the probe to be considered successful after having failed |
Default Value: 1 |
readinessProbe.failureThreshold |
When a Pod starts and the probe fails, Kubernetes will try failureThreshold times before giving up |
Default Value: 3 |
livenessProbe.initialDelaySeconds |
Tells the kubelet that it should wait xx seconds before performing the first probe |
Default Value: 30 |
livenessProbe.periodSeconds |
Specifies that the kubelet should perform a liveness probe every xx seconds |
Default Value: 3 |
livenessProbe.timeoutSeconds |
Number of seconds after which the probe times out |
Default Value: 15 |
livenessProbe.successThreshold |
Minimum consecutive successes for the probe to be considered successful after having failed |
Default Value: 1 |
livenessProbe.failureThreshold |
When a Pod starts and the probe fails, Kubernetes will try failureThreshold times before giving up |
Default Value: 3 |
resources.limits.cpu |
Maximum amount of CPU that Kubernetes allows the job resource to use. |
Default Value: 4 |
resources.limits.initServiceCpu |
Maximum amount of initServiceCpu that Kubernetes allows the job resource to use. |
Default Value: 1 |
resources.limits.updateServiceCpu |
Maximum amount of updateServiceCpu that Kubernetes allows the job resource to use. |
Default Value: 1 |
resources.limits.memory |
Maximum amount of memory that Kubernetes allows the job resource to use. |
Default Value: 4Gi |
resources.limits.initServiceMemory |
Maximum amount of memory that Kubernetes allows the job resource to use. |
Default Value: 1Gi |
resources.limits.updateServiceMemory |
Maximum amount of updateServiceMemory that Kubernetes allows the job resource to use. |
Default Value: 1Gi |
resources.requests.cpu |
The amount of CPU that the system guarantees for the resource, and Kubernetes will use this value to decide on which node to place the pod. |
Default Value: 4 |
resources.requests.initServiceCpu |
The amount of initServiceCpu that the system guarantees for the resource, and Kubernetes will use this value to decide on which node to place the pod. |
Default Value: 1 |
resources.requests.updateServiceCpu |
The amount of updateServiceCpu that the system guarantees for the resource, and Kubernetes will use this value to decide on which node to place the pod. |
Default Value: 1 |
resources.requests.memory |
The amount of memory that the system guarantees for the resource, and Kubernetes will use this value to decide on which node to place the pod. |
Default Value: 4Gi |
resources.requests.initServiceMemory |
The amount of initServiceMemory that the system guarantees for the resource, and Kubernetes will use this value to decide on which node to place the pod. |
Default Value: 1Gi |
resources.requests.updateServiceMemory |
The amount of updateServiceMemory that the system guarantees for the resource, and Kubernetes will use this value to decide on which node to place the pod. |
Default Value: 1Gi |
resources.target.averageCpuUtil |
Target CPU utilization after which Horizontal Pod Autoscaler will be triggered. |
Default Value: 80 |
requestTimeOut |
Specifies the time the server waits for a response before timing out. Update this value based on the network latency. | Default Value: 2500 ms |
minAvailable |
Number of Pods must always be available, even during a disruption. |
Default Value: 2 |
minReplicas |
Minimum replicas to scale to maintain an average CPU utilization |
Default Value: 2 |
maxReplicas |
Maximum replicas to scale to maintain an average CPU utilization |
Default Value: 5 |
jaegerTracingEnabled |
Specifies whether to enable or disable Jaeger Tracing at Ingress Gateway. | When this flag is set to true, make sure to
update all Jaeger related attributes with the correct
values.
Default Value: false |
openTelemetry.jaeger.httpExporter.host |
Specifies the host name of Jaeger Agent service | Default Value: jaeger-collector.cne-infra |
openTelemetry.jaeger.httpExporter.port |
Specifies the port of Jaeger Agent service | Default Value: 4318 |
openTelemetry.jaeger.probabilisticSampler |
Specifies the Jaeger message sampler | Default Value: 0.5 |
oauthValidatorEnabled |
Specifies if the OAuth validator must be enabled. | |
nfType |
NFType of service consumer. | |
nfInstanceId |
NF InstanceId of service consumer. | |
producerScope |
A comma-separated list of services hosted by the service producer. | This is a mandatory field if oauthValidatorEnabled is set to true. |
allowedClockSkewSeconds |
Set this value if the clock on the parsing NF (producer) is not perfectly in sync with the clock on the NF (consumer) that created the JWT. | This is a mandatory field if oauthValidatorEnabled is set to true. |
enableInstanceIdConfigHook |
||
nrfPublicKeyKubeSecret |
The name of the secret that stores the NRF public key. | |
nrfPublicKeyKubeNamespace |
The namespace of the NRF public key secret. | |
validationType |
The validation type. | Possible values:
This is a mandatory field if oauthValidatorEnabled is set to true. |
producerPlmnMNC |
The MNC of the service producer. | |
producerPlmnMCC |
The MCC of the service producer. | |
oauthErrorConfigForValidationFailure |
The error configurations for the OAuth validation failure. | |
oauthErrorConfigOnTokenAbsence |
The error configurations for the absence of a token. | |
initssl |
||
enableIncomingHttp |
This flag is for enabling/disabling HTTP/2.0 (insecure TLS) in Ingress Gateway. | |
enableIncomingHttps |
This flag is for enabling/disabling HTTPS/2.0 (secure TLS) in Ingress Gateway. | |
enableOutgoingHttps |
This flag is for enabling/disabling HTTPS/2.0 (secured TLS) in Ingress Gateway. | |
serviceMeshCheck |
Specifies if a Service Mesh is present where NEF is deployed | |
isSbiTimerEnabled |
Specifies if SBI Timer is enabled | |
autoRedirect |
Allows Ingress GW to redirect to the URI present in the location header | |
metricPrefix |
Prefix to be added to all the metrics in the ingress gateway | |
metricSuffix |
Suffix to be added to all the metrics in the ingress gateway | |
pingDelay |
Delay between pings in seconds | |
ingressGwCertReloadEnabled |
Specifies if Ingress GW certificate can be reloaded | |
ingressGwCertReloadPath |
Certificate reload path | |
maxConcurrentPushedStreams |
Default value: 1000 | |
maxRequestsQueuedPerDestination |
||
maxConnectionsPerDestination |
Applicable if
serviceMeshCheck is enabled.
|
|
maxConnectionsPerIp |
Applicable if
serviceMeshCheck is enabled.
|
|
connectionTimeout |
Applicable if
serviceMeshCheck is enabled.
|
|
requestTimeout |
Specifies the time the server waits for a response before timing out. Update this value based on the network latency. | Applicable if
serviceMeshCheck is enabled.
|
jettyGracefulRequestTermination |
Applicable if
serviceMeshCheck is enabled.
|
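The OAuth-related parameters near the end of Table 4-9 work as a group: when oauthValidatorEnabled is true, producerScope, allowedClockSkewSeconds, and validationType become mandatory. A hedged sketch of the corresponding values fragment follows (the key nesting under `ingressgateway` and all example values are assumptions, not documented defaults):

```yaml
# Hypothetical OAuth validator settings for the Network Ingress Gateway
ingressgateway:
  oauthValidatorEnabled: true
  nfType: NEF                          # NFType of the service consumer (example)
  producerScope: "example-scope"       # mandatory when oauthValidatorEnabled is true
  allowedClockSkewSeconds: 0           # mandatory when oauthValidatorEnabled is true
  nrfPublicKeyKubeSecret: nrf-pubkey-secret      # example secret name
  nrfPublicKeyKubeNamespace: occapif-namespace   # example namespace
```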
4.2.1.6 Egress Gateway Parameters
The following table describes the parameters for the External Egress GW and Network Egress GW services.
Table 4-10 External Egress Gateway (egress-gateway) Parameters
Parameter | Description | Details |
---|---|---|
serviceEgressGateway.port |
||
deploymentEgressGateway.image.name |
Egress Gateway image name |
This is an optional parameter. Default Value: ocegress_gateway |
deploymentEgressGateway.image.tag |
Egress Gateway image tag |
This is an optional parameter. Default Value: Table 4-2 |
deploymentEgressGateway.image.pullPolicy |
Indicates if the image needs to be pulled | Possible Values are:
This is an optional parameter. Default Value: IfNotPresent |
initContainersImage.name |
Image Name for Egress Gateway init container |
This is an optional parameter. Default Value: configurationinit |
initContainersImage.tag |
Tag Name for Egress Gateway init container |
This is an optional parameter. Default Value: CAPIF Images |
initContainersImage.pullPolicy |
Image pull policy | Possible Values are:
This is an optional parameter. Default Value: IfNotPresent |
updateContainersImage.name |
Image Name for Egress Gateway update container |
This is an optional parameter. Default Value: configurationupdate |
updateContainersImage.tag |
Tag Name for update container |
This is an optional parameter. Default Value: CAPIF Images |
updateContainersImage.pullPolicy |
Image pull policy | Possible Values are:
This is an optional parameter. Default Value: IfNotPresent |
extraContainers |
Specifies if an extra container must be used for the DEBUG tool. | |
initssl |
This value must always be set to true. |
Default Value: true |
enableOutgoingHttps |
This flag is for enabling/disabling HTTPS/2.0 (secured TLS) in Egress Gateway. | |
pingDelay |
Delay between pings in seconds. When set to <= 0, ping is disabled | |
startupProbe.initialDelaySeconds |
Tells the kubelet that it should wait xx seconds before performing the startup probe | |
startupProbe.periodSeconds |
Specifies that the kubelet should perform a startup probe every xx seconds | |
startupProbe.timeoutSeconds |
||
startupProbe.successThreshold |
||
startupProbe.failureThreshold |
||
readinessProbe.initialDelaySeconds |
||
readinessProbe.periodSeconds |
Specifies that the kubelet should perform a readiness probe every xx seconds | |
readinessProbe.timeoutSeconds |
Number of seconds after which the probe times out | |
readinessProbe.successThreshold |
Minimum consecutive successes for the probe to be considered successful after having failed | |
readinessProbe.failureThreshold |
When a Pod starts and the probe fails, Kubernetes will try failureThreshold times before giving up | |
livenessProbe.initialDelaySeconds |
Tells the kubelet that it should wait xx seconds before performing the first probe | |
livenessProbe.periodSeconds |
specifies that the kubelet should perform a liveness probe every xx seconds | |
livenessProbe.timeoutSeconds |
Number of seconds after which the probe times out | |
livenessProbe.successThreshold |
Minimum consecutive successes for the probe to be considered successful after having failed | |
livenessProbe.failureThreshold |
When a Pod starts and the probe fails, Kubernetes will try failureThreshold times before giving up | |
log.level.root |
Log level for root logs | |
log.level.egress |
Log level for egress logs | |
log.level.oauth |
Log level for oauth logs | |
log.level.updateContainer |
Log level for updateContainer logs | |
log.level.hook |
Log level for hook logs | |
service.ssl.keyType |
The selected key type. | |
service.ssl.tlsVersion |
The TLS version. | Default Value: TLSv1.2 |
service.ssl.privateKey.k8SecretName |
Secret name that contains NEF Egress Gateway Private Key | |
service.ssl.privateKey.k8NameSpace |
Namespace in which k8SecretName is present | |
service.ssl.privateKey.rsa.filename |
NEF's Private Key (RSA type) file name | |
service.ssl.privateKey.ecdsa.filename |
NEF Egress Gateway Private Key (ecdsa type) file name | |
service.certificate.k8SecretName |
Secret name that contains NEF Egress Gateway certificate for HTTPS | |
service.certificate.k8NameSpace |
Namespace in which k8SecretName is present | |
service.certificate.rsa.filename |
NEF Egress Gateway Certificate (RSA type) file name | |
service.certificate.ecdsa.filename |
NEF Egress Gateway Certificate (ECDSA type) file name | |
service.caBundle.k8SecretName |
Secret name that contains NEF Egress Gateway's CA details for HTTPS | |
service.caBundle.k8NameSpace |
Namespace that contains NEF Egress Gateway's CA details for HTTPS | |
expgw-apirouter.caBundle.filename |
NEF Egress Gateway's CA bundle filename | |
service.keyStorePassword.k8SecretName |
Secret name that contains keyStorePassword | |
service.keyStorePassword.k8NameSpace |
Namespace in which NEF Egress Gateway's keystore password is present | |
service.keyStorePassword.filename |
NEF Egress Gateway's Key Store password Filename | |
service.trustStorePassword.k8SecretName |
Secret name that contains trustStorePassword | |
service.trustStorePassword.k8NameSpace |
Namespace in which trustStorePassword is present | |
service.trustStorePassword.filename |
NEF Egress Gateway's trustStorePassword Filename | |
service.initialAlgorithm |
Initial Algorithm for HTTPS | |
service.customExtension.labels |
Custom labels that need to be added to the Egress Gateway service. | |
service.customExtension.annotations |
Custom annotations that need to be added to the Egress Gateway service. | |
deployment.customExtension.labels |
Custom labels that need to be added to the Egress Gateway deployment. | |
deployment.customExtension.annotations |
Custom annotations that need to be added to the Egress Gateway deployment. | |
globalRemoveRequestHeader |
Attribute for blocklisting (removing) a request header at global level. | |
globalRemoveResponseHeader |
Attribute for blocklisting (removing) a response header at global level. | |
resources.limits.cpu |
Maximum amount of CPU that Kubernetes allows the job resource to use. |
Default Value: 4 |
resources.limits.initServiceCpu |
Maximum amount of initServiceCpu that Kubernetes allows the job resource to use. |
Default Value: 1 |
resources.limits.updateServiceCpu |
Maximum amount of updateServiceCpu that Kubernetes allows the job resource to use. |
Default Value: 1 |
resources.limits.commonHooksCpu |
Maximum amount of common hook CPU that Kubernetes allows the job resource to use. |
Default Value: 1 |
resources.limits.memory |
Maximum amount of memory that Kubernetes allows the job resource to use. |
Default Value: 4Gi |
resources.limits.initServiceMemory |
Maximum amount of initServiceMemory that Kubernetes allows the job resource to use. |
Default Value: 1Gi |
resources.limits.updateServiceMemory |
Maximum amount of updateServiceMemory that Kubernetes allows the job resource to use. |
Default Value: 1Gi |
resources.limits.commonHooksMemory |
Maximum amount of hook service memory that Kubernetes allows the job resource to use. |
Default Value: 1Gi |
resources.requests.cpu |
The amount of CPU that the system guarantees for the resource, and Kubernetes will use this value to decide on which node to place the pod. |
Default Value: 4 |
resources.requests.initServiceCpu |
The amount of initServiceCpu that the system guarantees for the resource, and Kubernetes will use this value to decide on which node to place the pod. |
Default Value: 1 |
resources.requests.updateServiceCpu |
The amount of updateServiceCpu that the system guarantees for the resource, and Kubernetes will use this value to decide on which node to place the pod. |
Default Value: 1 |
resources.requests.commonHooksCpu |
The amount of hook service CPU that the system guarantees for the resource, and Kubernetes will use this value to decide on which node to place the pod. |
Default Value: 1 |
resources.requests.memory |
The amount of memory that the system guarantees for the resource, and Kubernetes will use this value to decide on which node to place the pod. |
Default Value: 4Gi |
resources.requests.initServiceMemory |
The amount of initServiceMemory that the system guarantees for the resource, and Kubernetes will use this value to decide on which node to place the pod. |
Default Value: 1Gi |
resources.requests.updateServiceMemory |
The amount of updateServiceMemory that the system guarantees for the resource, and Kubernetes will use this value to decide on which node to place the pod. |
Default Value: 1Gi |
resources.requests.commonHooksMemory |
The amount of hook service memory that the system guarantees for the resource, and Kubernetes will use this value to decide on which node to place the pod. |
Default Value: 1Gi |
resources.target.averageCpuUtil |
Target CPU utilization after which Horizontal Pod Autoscaler will be triggered. |
Default Value: 60 |
minAvailable |
Minimum available pods |
Default Value: 2 |
minReplicas |
Minimum replicas to scale to maintain an average CPU utilization |
Default Value: 2 |
maxReplicas |
Maximum replicas to scale to maintain an average CPU utilization |
Default Value: 5 |
jettyIdleTimeout |
||
metricPrefix |
||
metricSuffix |
||
autoRedirect |
||
isSbiTimerEnabled |
||
sbiTimerTimezone |
||
egressGwCertReloadEnabled |
||
egressGwCertReloadPath |
||
jaegerTracingEnabled |
Specifies whether to enable or disable Jaeger tracing at Egress Gateway. | When this flag is set to true, make sure to update all Jaeger-related attributes with the correct values.
Default Value: false |
openTelemetry.jaeger.httpExporter.host |
Specifies the host of Jaeger collector service | Default Value: jaeger-collector.cne-infra |
openTelemetry.jaeger.httpExporter.port |
Specifies the port of Jaeger collector service | Default Value: 4318 |
openTelemetry.jaeger.probabilisticSampler |
Specifies the Jaeger message sampler | Default Value: 0.5 |
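Based on the Jaeger tracing parameters above, a minimal values-file fragment might look as follows. This is a sketch only: the nesting is inferred from the parameter paths in the table, so verify it against the oc-capif custom-values template delivered with your release.

```yaml
# Sketch: keys derived from the parameter paths in the table above.
jaegerTracingEnabled: true               # default: false
openTelemetry:
  jaeger:
    httpExporter:
      host: jaeger-collector.cne-infra   # Jaeger collector service host
      port: 4318                         # Jaeger collector OTLP/HTTP port
    probabilisticSampler: 0.5            # fraction of messages sampled
```

When jaegerTracingEnabled is true, all of the related attributes above must carry correct values, as noted in the table.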
Table 4-11 Network Egress Gateway
(egressgateway
) Parameters
Parameter | Description | Details |
---|---|---|
cmName |
Default Value: egressgateway |
|
prefix |
Default Value: network |
|
serviceEgressGateway.port |
||
serviceEgressGateway.sslPort |
||
serviceEgressGateway.actuatorPort |
||
deploymentEgressGateway.image.name |
Egress Gateway image name |
This is an optional parameter. Default Value: ocegress_gateway |
deploymentEgressGateway.image.tag |
Egress Gateway image tag |
This is an optional parameter. Default Value: CAPIF Images |
deploymentEgressGateway.image.pullPolicy |
Indicates whether the image needs to be pulled | Possible values: Always, IfNotPresent, Never.
This is an optional parameter. Default Value: IfNotPresent |
initContainersImage.name |
Image Name for Egress Gateway init container |
This is an optional parameter. Default Value: configurationinit |
initContainersImage.tag |
Tag Name for Egress Gateway init container |
This is an optional parameter. Default Value: CAPIF Images |
initContainersImage.pullPolicy |
Image pull policy | Possible values: Always, IfNotPresent, Never.
This is an optional parameter. Default Value: IfNotPresent |
updateContainersImage.name |
Image Name for Egress Gateway update container |
This is an optional parameter. Default Value: configurationupdate |
updateContainersImage.tag |
Tag Name for update container |
This is an optional parameter. Default Value: Table 5-2 |
updateContainersImage.pullPolicy |
Image pull policy | Possible values: Always, IfNotPresent, Never.
This is an optional parameter. Default Value: IfNotPresent |
extraContainers |
Specifies whether an extra container must be used for the DEBUG tool. | |
initssl |
This value must always be set as true.
Default Value: true |
|
enableOutgoingHttps |
This flag is for enabling/disabling HTTPS/2.0 (secured TLS) in Egress Gateway. | |
pingDelay |
Delay between pings in seconds. When set to <=0, ping is disabled. | |
httpsTargetOnly |
Select SCP instances for https list only | |
httpRuriOnly |
||
startupProbe.initialDelaySeconds |
Specifies that the kubelet should wait xx seconds before performing the first startup probe | |
startupProbe.periodSeconds |
Specifies that the kubelet should perform a startup probe every xx seconds | |
startupProbe.timeoutSeconds |
||
startupProbe.successThreshold |
||
startupProbe.failureThreshold |
||
readinessProbe.initialDelaySeconds |
||
readinessProbe.periodSeconds |
Specifies that the kubelet should perform a readiness probe every xx seconds | |
readinessProbe.timeoutSeconds |
Number of seconds after which the probe times out | |
readinessProbe.successThreshold |
Minimum consecutive successes for the probe to be considered successful after having failed | |
readinessProbe.failureThreshold |
When a Pod starts and the probe fails, Kubernetes will try failureThreshold times before giving up | |
livenessProbe.initialDelaySeconds |
Specifies that the kubelet should wait xx seconds before performing the first liveness probe | |
livenessProbe.periodSeconds |
Specifies that the kubelet should perform a liveness probe every xx seconds | |
livenessProbe.timeoutSeconds |
Number of seconds after which the probe times out | |
livenessProbe.successThreshold |
Minimum consecutive successes for the probe to be considered successful after having failed | |
livenessProbe.failureThreshold |
When a Pod starts and the probe fails, Kubernetes will try failureThreshold times before giving up | |
K8ServiceCheck |
||
configureDefaultRoute |
Flag to configure default route in Egress Gateway. Configure this flag when sbiRoutingConfigMode and routeConfigMode are configured as REST | |
sbiRoutingConfigMode |
Mode of operation for sbiRouting. Possible values are HELM, REST | |
routeConfigMode |
Mode of configuration for configuring routes. Possible values are HELM, REST | |
log.level.root |
Log level for root logs | |
log.level.egress |
Log level for egress logs | |
log.level.oauth |
Log level for oauth logs | |
log.level.updateContainer |
Log level for updateContainer logs | |
log.level.hook |
Log level for hook logs | |
service.ssl.keyType |
The selected key type. | |
service.ssl.tlsVersion |
The TLS version. | Default Value: TLSv1.2 |
service.ssl.privateKey.k8SecretName |
Secret name that contains NEF Egress Gateway Private Key | |
service.ssl.privateKey.k8NameSpace |
Namespace in which k8SecretName is present | |
service.ssl.privateKey.rsa.filename |
NEF Egress Gateway Private Key (RSA type) file name | |
service.ssl.privateKey.ecdsa.filename |
NEF Egress Gateway Private Key (ECDSA type) file name | |
service.certificate.k8SecretName |
Secret name that contains NEF Egress Gateway certificate for HTTPS | |
service.certificate.k8NameSpace |
Namespace in which k8SecretName is present | |
service.certificate.rsa.filename |
NEF Egress Gateway Certificate (RSA type) file name | |
service.certificate.ecdsa.filename |
NEF Egress Gateway Certificate (ECDSA type) file name | |
service.caBundle.k8SecretName |
Secret name that contains NEF Egress Gateway's CA details for HTTPS | |
service.caBundle.k8NameSpace |
Namespace that contains NEF Egress Gateway's CA details for HTTPS | |
expgw-apirouter.caBundle.filename |
NEF Egress Gateway's CA bundle filename | |
service.keyStorePassword.k8SecretName |
Secret name that contains keyStorePassword | |
service.keyStorePassword.k8NameSpace |
Namespace in which NEF Egress Gateway's keystore password is present | |
service.keyStorePassword.filename |
NEF Egress Gateway's Key Store password Filename | |
service.trustStorePassword.k8SecretName |
Secret name that contains trustStorePassword | |
service.trustStorePassword.k8NameSpace |
Namespace in which trustStorePassword is present | |
service.trustStorePassword.filename |
NEF Egress Gateway's trustStorePassword file name | This is an optional parameter. |
service.initialAlgorithm |
Initial Algorithm for HTTPS | |
service.customExtension.labels |
Custom labels that need to be added to the Egress Gateway service. | |
service.customExtension.annotations |
Custom annotations that need to be added to the Egress Gateway service. | |
deployment.customExtension.labels |
Custom labels that need to be added to the Egress Gateway deployment. | |
deployment.customExtension.annotations |
Custom annotations that need to be added to the Egress Gateway deployment. | |
globalRemoveRequestHeader |
Attribute for blocklisting (removing) a request header at global level. | |
globalRemoveResponseHeader |
Attribute for blocklisting (removing) a response header at global level. | |
resources.limits.cpu |
Maximum amount of CPU that Kubernetes allows the job resource to use. |
Default Value: 4 |
resources.limits.initServiceCpu |
Maximum amount of initServiceCpu that Kubernetes allows the job resource to use. |
Default Value: 1 |
resources.limits.updateServiceCpu |
Maximum amount of updateServiceCpu that Kubernetes allows the job resource to use. |
Default Value: 1 |
resources.limits.commonHooksCpu |
Maximum amount of common hook CPU that Kubernetes allows the job resource to use. |
Default Value: 1 |
resources.limits.memory |
Maximum amount of memory that Kubernetes allows the job resource to use. |
Default Value: 4Gi |
resources.limits.initServiceMemory |
Maximum amount of initServiceMemory that Kubernetes allows the job resource to use. |
Default Value: 1Gi |
resources.limits.updateServiceMemory |
Maximum amount of updateServiceMemory that Kubernetes allows the job resource to use. |
Default Value: 1Gi |
resources.limits.commonHooksMemory |
Maximum amount of hook service memory that Kubernetes allows the job resource to use. |
Default Value: 1Gi |
resources.requests.cpu |
The amount of CPU that the system guarantees for the resource, and Kubernetes will use this value to decide on which node to place the pod. |
Default Value: 4 |
resources.requests.initServiceCpu |
The amount of initServiceCpu that the system guarantees for the resource, and Kubernetes will use this value to decide on which node to place the pod. |
Default Value: 1 |
resources.requests.updateServiceCpu |
The amount of updateServiceCpu that the system guarantees for the resource, and Kubernetes will use this value to decide on which node to place the pod. |
Default Value: 1 |
resources.requests.commonHooksCpu |
The amount of hook service CPU that the system guarantees for the resource, and Kubernetes will use this value to decide on which node to place the pod. |
Default Value: 1 |
resources.requests.memory |
The amount of memory that the system guarantees for the resource, and Kubernetes will use this value to decide on which node to place the pod. |
Default Value: 4Gi |
resources.requests.initServiceMemory |
The amount of initServiceMemory that the system guarantees for the resource, and Kubernetes will use this value to decide on which node to place the pod. |
Default Value: 1Gi |
resources.requests.updateServiceMemory |
The amount of updateServiceMemory that the system guarantees for the resource, and Kubernetes will use this value to decide on which node to place the pod. |
Default Value: 1Gi |
resources.requests.commonHooksMemory |
The amount of hook service memory that the system guarantees for the resource, and Kubernetes will use this value to decide on which node to place the pod. |
Default Value: 1Gi |
resources.target.averageCpuUtil |
Target CPU utilization after which Horizontal Pod Autoscaler will be triggered. |
Default Value: 60 |
minAvailable |
Minimum available pods |
Default Value: 2 |
minReplicas |
Minimum replicas to scale to maintain an average CPU utilization |
Default Value: 2 |
maxReplicas |
Maximum replicas to scale to maintain an average CPU utilization |
Default Value: 5 |
cipherSuites |
CipherSuites for TLS1.2 | |
allowedCipherSuites |
Allowed CipherSuites for TLS1.2 | |
oauthClient.enabled |
Flag to enable accessToken request through Egress Gateway | |
nrfClientQueryEnabled |
Determines if NRF-Client Query is enabled or not (Dynamic configuration). | |
subscriptionRetryScheduledDelay |
||
httpsEnabled |
Determines whether HTTPS support is enabled, which is a deciding factor for the OAuth request scheme. | |
staticNrfList |
List of static NRF instances to be used for OAuth requests when nrfClientQueryEnabled is false. | |
nfType |
NFType of service consumer. | |
nfInstanceId |
NF InstanceId of service consumer. | |
consumerPlmnMNC |
MNC of service Consumer | |
consumerPlmnMCC |
MCC of service Consumer | |
maxNonPrimaryNrfs |
||
apiPrefix |
||
retryErrorCodeSeriesForSameNrf |
||
retryErrorCodeSeriesForNextNrf |
||
retryExceptionListForSameNrf |
||
retryExceptionListForNextNrf |
||
connectionTimeout |
||
requestTimeout |
Specifies how long the server waits for a response before timing out. Update this value based on the network latency. | Default Value: 2500 ms |
attemptsForPrimaryNRFInstance |
||
attemptsForNonPrimaryNRFInstance |
||
defaultNRFInstance |
||
defaultErrorCode |
||
nrfClientConfig.serviceName |
||
nrfClientConfig.host |
||
nrfClientConfig.port |
||
nrfClientConfig.nrfClientRequestMap |
||
headerIndexing.doNotIndex |
||
requestTimeout |
Specifies how long the server waits for a response before timing out. Update this value based on the network latency. | Default Value: 2500 ms |
metricPrefix |
||
metricSuffix |
||
jaegerTracingEnabled |
Specifies whether to enable or disable Jaeger tracing at Egress Gateway. | When this flag is set to true, make sure to update all Jaeger-related attributes with the correct values.
Default Value: false |
openTelemetry.jaeger.httpExporter.host |
Specifies the host of the Jaeger collector service | Default Value: jaeger-collector.cne-infra |
openTelemetry.jaeger.httpExporter.port |
Specifies the port of the Jaeger collector service | Default Value: 4318 |
openTelemetry.jaeger.probabilisticSampler |
Specifies the Jaeger message sampler | Default Value: 0.5 |
tolerations |
Specifies the tolerations applied to the Egress Gateway pods so that they can be scheduled onto nodes with matching taints. | |
helmBasedConfigurationNodeSelectorApiVersion |
||
nodeSelector |
||
nodeKey |
||
nodeValue |
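To illustrate how some of the parameters above fit together, the following values-file fragment sketches the probe and node-scheduling settings. All values are hypothetical, and the flat nodeKey/nodeValue keys and toleration schema are assumed from the table and standard Kubernetes conventions; verify against your delivered oc-capif custom-values template.

```yaml
# Sketch: hypothetical values; keys mirror the parameters described above.
startupProbe:
  initialDelaySeconds: 40    # wait before the first startup probe
  periodSeconds: 10          # probe interval
  timeoutSeconds: 3
  successThreshold: 1
  failureThreshold: 30       # retries before the pod is restarted
readinessProbe:
  initialDelaySeconds: 20
  periodSeconds: 10
  timeoutSeconds: 3
  successThreshold: 1
  failureThreshold: 3
nodeKey: "kubernetes.io/os"  # node label key used for scheduling
nodeValue: "linux"           # node label value used for scheduling
tolerations:                 # standard Kubernetes toleration schema
  - key: "dedicated"
    operator: "Equal"
    value: "capif"
    effect: "NoSchedule"
```

The toleration entry lets the Egress Gateway pods land on nodes tainted dedicated=capif:NoSchedule, while nodeKey/nodeValue restrict scheduling to matching nodes.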
4.3 Upgrading CAPIF
Note:
In a georedundant deployment, perform the steps explained in this section on all the georedundant sites.
4.3.1 Supported Upgrade Paths
The following table lists the supported upgrade paths for CAPIF.
Table 4-12 Supported Upgrade Paths
Source CAPIF release | Target CAPIF release |
---|---|
24.1.x | 24.2.2 |
23.4.x | 24.2.2 |
Note:
CAPIF must be upgraded before upgrading NEF.
4.3.2 Preupgrade Tasks
This section provides information about preupgrade tasks to be performed before upgrading CAPIF.
- Keep the current oc-capif-<current release>-custom-values.yaml file as a backup.
- Update the new oc-capif-<new release>-custom-values.yaml file for the target CAPIF release. For details on customizing this file, see Customizing CAPIF.
- Before starting the upgrade, take a manual backup of the CAPIF REST-based configuration. This helps if preupgrade data has to be restored.
Note:
For REST API configuration details, see Oracle Communications Cloud Native Core, Network Exposure Function User Guide.
- Install or upgrade the network policies, if applicable. For more information, see Configuring Network Policies for CAPIF.
- Before upgrading, perform sanity check using Helm test. See Performing Helm Test section for the Helm test procedure.
4.3.3 Upgrade Tasks
This section provides information about the sequence of tasks to be performed for upgrading an existing CAPIF deployment.
Note:
It is recommended to perform the CAPIF upgrade in a specific order. For more information about the upgrade order, see Oracle Communications Cloud Native Core, Solution Upgrade Guide.
When you upgrade an existing CAPIF deployment, the running set of containers and pods is replaced with a new set. However, if there is no change in the pod configuration, the running containers and pods are not replaced.
Note:
It is advisable to create a backup of the file before changing any configuration.
To configure the parameters, see Customizing CAPIF.
Run the following command to upgrade an existing CAPIF deployment:
helm upgrade <release> <chart> -f oc-capif-24.2.2-custom-values.yaml
Table 4-13 Parameters and Definitions during CAPIF Upgrade
Parameters | Definitions |
---|---|
<chart> | It is the name of the chart, of the form <repository>/occapif. For example: reg-1/occapif or cne-repo/occapif |
<release> | It can be found in the output of the helm list command |
- Check the history of the Helm deployment:
helm history <helm_release>
- Roll back to the required revision:
helm rollback <release name> <revision number>
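To make the placeholder substitution concrete, the snippet below composes the upgrade, history, and rollback commands from their parts. The release, repository, and namespace names are hypothetical placeholders, not values from this guide; on a live cluster you would run the resulting commands directly.

```shell
# Hypothetical names; substitute your own release, chart repository, and namespace.
RELEASE="occapif"
CHART="cne-repo/occapif"                      # of the form <repository>/occapif
VALUES="oc-capif-24.2.2-custom-values.yaml"
NS="occapif"

# Compose the commands (--timeout per the note on slow servers).
UPGRADE_CMD="helm upgrade ${RELEASE} ${CHART} -f ${VALUES} -n ${NS} --timeout 45m"
HISTORY_CMD="helm history ${RELEASE} -n ${NS}"
ROLLBACK_CMD="helm rollback ${RELEASE} 1 -n ${NS}"   # 1 = example revision number

printf '%s\n' "${UPGRADE_CMD}" "${HISTORY_CMD}" "${ROLLBACK_CMD}"
```

The revision number passed to helm rollback should be taken from the helm history output, as described above.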
Note:
Perform a sanity check using Helm test. See the Performing Helm Test section for the Helm test procedure.
4.4 Rolling Back CAPIF
Note:
- In a georedundant deployment, perform the steps explained in this section on all georedundant sites separately.
- For upgrade or rollback, a default timeout of
5 minutes is provided by Helm. In case of a slow
server, it might take more time. In such
scenarios, the upgrade status will be
pending-upgrade or
pending-rollback.
To avoid this, add
--timeout
flag according to their network speed.Example
Upgrade:helm upgrade oncef oc-nef -f experiment.yaml –-timeout=45m -n mv --dry-run
Rollback:helm rollback oncef 1 -n mv –-timeout=45m
4.4.1 Supported Rollback Paths
Table 4-14 Supported Rollback Path
Source Release | Target Release |
---|---|
24.2.2 | 24.1.x |
24.2.2 | 23.4.x |
Note:
If the georedundancy feature was disabled before upgrading to 23.1.x, then rolling back to a previous version automatically disables this feature. However, the database still has records of the NfInstances and NfSubscriptions from the mated sites. For more information, contact My Oracle Support.
4.4.2 Rollback Tasks
Caution:
- No configuration should be performed during rollback.
- Do not exit from the helm rollback command manually. After running the helm rollback command, it takes some time (depending on the number of pods to roll back) to roll back all of the services. In the meantime, do not press "Ctrl+C" to exit from the helm rollback command, as it may lead to anomalous behavior.
- Run the following command to check the revision you must roll back
to:
$ helm history <release_name> -n <release_namespace>
- Run the command to rollback to the required
revision:
$ helm rollback <release_name> <revision_number> -n <release_namespace>
Note:
If the rollback is not successful, perform the troubleshooting steps mentioned in Oracle Communications Cloud Native Core, Network Exposure Function Troubleshooting Guide.
4.5 Uninstalling CAPIF
This section provides information about uninstalling CAPIF.
When you uninstall a Helm chart from CAPIF deployment, it removes only the Kubernetes objects created during the installation.
4.5.1 Uninstalling CAPIF Using Helm
Prerequisite: Ensure that NEF is uninstalled before uninstalling CAPIF. For more information about uninstalling NEF, see Uninstalling NEF.
To uninstall CAPIF, run the following command:
helm uninstall <helm-release> --namespace <release-namespace>
where, <helm-release> is the name provided to identify the Helm deployment, and <release-namespace> is the name provided to identify the namespace of the CAPIF deployment.
For example:
helm uninstall occapif -n occapif
Helm keeps a record of its releases, so you can still reactivate the release after uninstalling it. To remove the release completely, add the --purge parameter to the helm delete command:
helm delete --purge <release_name>
For example:
helm delete --purge occapif
Run the following command to verify that the CAPIF pods are removed:
kubectl get pods -n <release-namespace>
Note:
- During Helm uninstallation, if the Kubernetes jobs started by the Helm hooks get stuck, you can use the following command to delete the jobs manually:
while true; do kubectl delete jobs --all -n <release-namespace>; sleep 5; done
- If the command output displays the CAPIF resources or objects, then perform
Deleting Kubernetes Resources.
If the command output displays any Kubernetes namespace, then perform the Deleting Kubernetes Namespace.
4.5.2 Deleting Kubernetes Namespace
This section describes how to delete Kubernetes namespace where CAPIF is deployed.
Note:
Be sure before removing the namespace, as doing so deletes all the resources or objects created in the namespace.
kubectl delete namespace <occapif Kubernetes namespace>
For example:
kubectl delete namespace occapif
$ kubectl get all -n <release-namespace>
In case of successful uninstallation, no CAPIF resource is displayed in the command output.
- Run the following commands to delete all the objects:
- To delete all the Kubernetes objects:
kubectl delete all --all -n <release-namespace>
- To delete all the configmaps:
kubectl delete cm --all -n <release-namespace>
Caution:
The command deletes all the Kubernetes objects in the specified namespace. If you created the RBAC resources and service accounts before the Helm installation in the same namespace and these resources are required, do not delete them.
- Run the following command to delete the specific resources:
kubectl delete <resource-type> <resource-name> -n <release-namespace>
- Run the following command to delete the Kubernetes namespace:
kubectl delete namespace <release-namespace>
4.5.3 Deleting Kubernetes Resources
Note:
Be sure before running the following commands, as they delete all the Kubernetes objects in the specified namespace.
- Run the following command to delete all the Kubernetes
objects:
kubectl delete all --all -n <release-namespace>
- Run the following command to delete all the
configmaps:
kubectl delete cm --all -n <release-namespace>
- Run the following command to delete the specific
resources:
kubectl delete <resource-type> <resource-name> -n <release-namespace>
4.5.4 Deleting the MySQL Details
Procedure for complete removal of MySQL database and users
This section describes how to completely remove the MySQL database and users.
- Log in to the machine which has permission to access the SQL nodes of NDB cluster.
- Connect to each SQL node of the NDB cluster successively.
- Log in to the MySQL prompt using root permission or user, which has permission
to drop the tables. For example:
mysql -h 127.0.0.1 -uroot -p
Note:
This command may vary from system to system, path for MySQL binary, root user and root password. After running this command, the user must enter the password specific to the user mentioned in the command. - Run the following command to clean up the
database:
$ DROP DATABASE IF EXISTS <database name>;
For example, to remove a database named occapif_db, run the following command:
DROP DATABASE IF EXISTS occapif_db;
- Run the following commands to remove the CAPIF MySQL users:
Remove the CAPIF privileged user:
$ DROP USER IF EXISTS <CAPIF Privileged-User Name>;
Example:
$ DROP USER IF EXISTS 'priviledgeduser'@'%';
Remove the CAPIF application user:
$ DROP USER IF EXISTS <CAPIF Application User Name>;
Example:
$ DROP USER IF EXISTS 'appuser'@'%';
Caution:
Removal of users must be done on all the SQL nodes for all CAPIF sites.
- Exit from the MySQL prompt and SQL node.
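The cleanup statements above can be collected into a single script to run on each SQL node. This is a sketch only: the database and user names below mirror the examples in this section and must be replaced with the names used in your deployment.

```sql
-- Sketch: substitute your CAPIF database and user names.
DROP DATABASE IF EXISTS occapif_db;
DROP USER IF EXISTS 'priviledgeduser'@'%';   -- CAPIF privileged user
DROP USER IF EXISTS 'appuser'@'%';           -- CAPIF application user
```

Because IF EXISTS is used throughout, the script is safe to rerun on nodes where the objects have already been dropped.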