2 Upgrading OCCNE
The upgrade procedures in this document explain how to set up and perform an upgrade of the Oracle Communications Cloud Native Environment (OCCNE). The upgrade includes the OL7 base image, Kubernetes, and the common services.
Prerequisites
Following are the prerequisites for upgrading OCCNE:
- The Preserving MetalLB Configuration During Upgrade procedure must be executed prior to upgrade.
- While upgrading MySQL NDB on the second site, the Mate Site DB Replication Service Load Balancer IP must be provided as a configuration parameter for the geo-replication process to continue. Log in to the Bastion Host of the first site and execute the following command to retrieve the DB Replication Service Load Balancer IP:
$ kubectl get svc --namespace=occne-infra | grep replication
Example:
$ kubectl get svc --namespace=occne-infra | grep replication
occne-db-replication-svc   LoadBalancer   10.233.3.117   10.75.182.88   80:32496/TCP   2m8s
In this example, 10.75.182.88 is the Mate Site DB Replication Service Load Balancer IP.
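The EXTERNAL-IP field can also be picked out of the service listing programmatically. A minimal sketch, assuming the default kubectl column layout (EXTERNAL-IP is the fourth column) and using the sample line above; `get_replication_lb_ip` is an illustrative helper name:

```shell
# Extract the EXTERNAL-IP (4th whitespace-separated column) of the
# replication service line. On a live cluster, pipe the output of
# `kubectl get svc --namespace=occne-infra` into this function instead.
get_replication_lb_ip() {
  grep replication | awk '{print $4}'
}

# Sample line captured from the prerequisite step above:
sample='occne-db-replication-svc   LoadBalancer   10.233.3.117   10.75.182.88   80:32496/TCP   2m8s'

echo "$sample" | get_replication_lb_ip
# → 10.75.182.88
```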
- The customer central repository should be updated with the current OCCNE images for 1.6.0, and any RPMs and binaries should be updated to the latest versions.
- Copy V987059-01.zip (MySQL Cluster Manager 1.4.8+Cluster TAR for Oracle Linux / RHEL 7 x86 (64bit), 557.6 MB downloaded from OSDC) to "/var/occne" directory on bastion host.
- Ensure that the cluster is in a healthy state by checking that all the pods are ready and running. Execute the following command and verify that each pod has STATUS Running with READY set to x/x, or STATUS Completed with READY set to 0/x.
kubectl get pods --all-namespaces
Example:
NAMESPACE      NAME                                       READY   STATUS    RESTARTS   AGE
cert-manager   cert-manager-77fb98dc45-7ch6b              1/1     Running   0          8d
kube-system    calico-kube-controllers-7df59b474d-r4f7z   1/1     Running   0          8d
kube-system    calico-node-6cvhp                          1/1     Running   1          8d
...
occne-infra    occne-elastic-elasticsearch-client-0       1/1     Running   0          6d8h
occne-infra    occne-elastic-elasticsearch-client-1       1/1     Running   0          6d8h
occne-infra    occne-elastic-elasticsearch-client-2       1/1     Running   0          6d8h
...
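The readiness rule above can be checked mechanically. A sketch, assuming the --all-namespaces column order (READY is the third column, STATUS the fourth); pipe real `kubectl get pods --all-namespaces` output (minus the header line) through it, and `unhealthy_pods` is an illustrative helper name:

```shell
# Print every pod line that violates the health rule:
# STATUS Running must have READY x/x; STATUS Completed must have READY 0/x.
unhealthy_pods() {
  awk '{
    split($3, r, "/")                                  # READY, e.g. "1/1"
    healthy = ($4 == "Running"   && r[1] == r[2]) ||
              ($4 == "Completed" && r[1] == "0")
    if (!healthy) print $0
  }'
}

# Demo on two sample lines; only the broken pod is printed:
printf '%s\n' \
  'occne-infra occne-elastic-elasticsearch-client-0 1/1 Running 0 6d8h' \
  'occne-infra broken-pod-abc 0/1 CrashLoopBackOff 7 10m' | unhealthy_pods
# → occne-infra broken-pod-abc 0/1 CrashLoopBackOff 7 10m
```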
Pre-upgrade Procedures
Following is the pre-upgrade procedure for OCCNE:
- All NFs must be upgraded before OCCNE upgrade. Execute Installing Network Functions procedure for all NFs that have upgrades available. This procedure includes steps to update NF-specific alerts.
- Execute the following procedure to track and save the changes that must be reapplied after the upgrade to keep SNMP running:
- If the trap receiver (snmp.destination) for occne-snmp-notifier was modified after installation, the IP details must be saved so that they can be reassigned after the upgrade.
$ kubectl get deployment occne-snmp-notifier -n occne-infra -o yaml
Search for the line:
- --snmp.destination=<trap receiver ip address>:162
Copy the IP address and save it for future reference.
- Execute the following command to determine whether multiple SNMP notifiers are configured:
$ kubectl get pods --all-namespaces | grep snmp
occne-infra   occne-snmp-notifier-1-f4d4876c7-hxnkb   1/1   Running   0   44h
occne-infra   occne-snmp-notifier-6b99997bfd-r59t7    1/1   Running   0   43h
- If multiple SNMP notifiers were created, the Alert Manager configmap must be saved before the upgrade so that the configuration can be reapplied by following the post-upgrade steps:
$ kubectl get configmap occne-prometheus-alertmanager -n occne-infra -o yaml
apiVersion: v1
data:
  alertmanager.yml: |
    global: {}
    receivers:
    - name: default-receiver
      webhook_configs:
      - url: http://occne-snmp-notifier:9464/alerts
    - name: test-receiver-1
      webhook_configs:
      - url: http://occne-snmp-notifier-1:9465/alerts
    route:
      group_interval: 5m
      group_wait: 10s
      receiver: default-receiver
      repeat_interval: 3h
      routes:
      - receiver: default-receiver
        group_interval: 1m
        group_wait: 10s
        repeat_interval: 9y
        group_by: [instance, alertname, severity]
        continue: true
      - receiver: test-receiver-1
        group_interval: 1m
        group_wait: 10s
        repeat_interval: 9y
        group_by: [instance, alertname, severity]
        continue: true
kind: ConfigMap
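The saved Alert Manager configuration can be sanity-checked offline by counting the notifier webhook URLs. A sketch, where the file name alertmanager_backup.yml and the sample contents are illustrative:

```shell
# Count SNMP notifier webhook URLs in a saved Alert Manager config.
# Save the real configmap first with something like:
#   kubectl get configmap occne-prometheus-alertmanager -n occne-infra \
#       -o yaml > alertmanager_backup.yml
count_snmp_receivers() {
  grep -c 'url: http://occne-snmp-notifier' "$1"
}

# Minimal sample mirroring the receivers section of the configmap:
cat > /tmp/alertmanager_backup.yml <<'EOF'
receivers:
- name: default-receiver
  webhook_configs:
  - url: http://occne-snmp-notifier:9464/alerts
- name: test-receiver-1
  webhook_configs:
  - url: http://occne-snmp-notifier-1:9465/alerts
EOF

count_snmp_receivers /tmp/alertmanager_backup.yml
# → 2
```

A count greater than 1 confirms that the post-upgrade configmap restore step is required.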
- Any existing customer-specific dashboards must be saved to a local directory so that they can be restored after the upgrade. Log in to the Grafana GUI to back up each dashboard:
- Select the dashboard to be saved.
- Go to the Share Dashboard option on the top-right side of the dashboard that needs to be saved.
- Click Export. Click Save to file to save the file in the local repository.
- Repeat these steps until all customer specific dashboards have been saved.
- Run the k8s_install pre-upgrade script to switch the network plugin from flannel to calico (applicable only for a vCNE cluster):
- Execute the following command on the Bastion Host to run k8s getdeps:
$ docker run -it --rm -v /var/occne/cluster/${OCCNE_CLUSTER}:/host -e 'OCCNEARGS=--extra-vars={"occne_vcne":"1"} ' winterfell:5000/occne/k8s_install:<image_tag> /getdeps/getdeps
Example:
$ docker run -it --rm -v /var/occne/cluster/${OCCNE_CLUSTER}:/host -e 'OCCNEARGS=--extra-vars={"occne_vcne":"1"} ' winterfell:5000/occne/k8s_install:1.6.0 /getdeps/getdeps
- Execute the following commands on the Bastion Host to fetch k8s-related binaries and Docker images:
Note:
These commands can be executed as written because the indicated environment variables were set during the initial deployment. They can be verified with the Linux command: echo $<variable_name>
$ /var/occne/cluster/${OCCNE_CLUSTER}/artifacts/k8s_retrieve_bin.sh http://${CENTRAL_REPO}/occne/binaries /var/www/html/occne
$ /var/occne/cluster/${OCCNE_CLUSTER}/artifacts/retrieve_docker.sh winterfell:5000 ${HOSTNAME%%.*}:5000 < /var/occne/cluster/${OCCNE_CLUSTER}/artifacts/k8s_docker_images.txt
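The environment-variable check described in the note can be scripted. A minimal sketch; `check_vars` is an illustrative helper, and the variable names come from the commands in this step:

```shell
# Report any environment variable from the list that is unset or empty.
# Uses eval-based indirection so it works in plain POSIX shells too.
check_vars() {
  rc=0
  for v in "$@"; do
    eval "val=\${$v}"
    if [ -z "$val" ]; then
      echo "NOT SET: $v"
      rc=1
    fi
  done
  return $rc
}

check_vars OCCNE_CLUSTER CENTRAL_REPO ||
  echo "Set the missing variables before running the retrieval scripts."
```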
- Execute the following command on the Bastion Host to trigger the network plugin upgrade from flannel to calico:
Note:
Make sure the openstack_lbaas_floating_network_id field is set to the floating IP network ID and the openstack_lbaas_subnet_id field is set to the user-specific internal network subnet ID. Openstack values can be obtained by executing the command openstack configuration show from the openstack client.
// Get values from Cloud config
$ docker run -it --rm --cap-add=NET_ADMIN --network host -v /var/occne/cluster/<cluster-name>:/host -v /var/occne:/var/occne:rw -e OCCNEINV=/host/terraform/hosts -e 'OCCNEARGS=--extra-vars={"occne_vcne":"1","occne_cluster_name":"<occne_cluster_name>","occne_repo_host":"<occne_repo_host_name>","occne_repo_host_address":"<occne_repo_host_address>"} --extra-vars={"openstack_username":"<user.name>","openstack_password":"<openstack-cloud-password>","openstack_auth_url":"<openstack_auth_url>","openstack_region":"RegionOne","openstack_tenant_id":"<openstack_tenant_id>","openstack_domain_name":"LDAP","openstack_lbaas_subnet_id":"<openstack_lbaas_subnet_id>","openstack_lbaas_floating_network_id":"<openstack_lbaas_floating_network_id>","openstack_lbaas_use_octavia":"true","openstack_lbaas_method":"ROUND_ROBIN","openstack_lbaas_enabled":true} ' winterfell:5000/occne/k8s_install:<image_tag> /upgrade/pre-upgrade.sh
Example:
$ docker run -it --rm --cap-add=NET_ADMIN --network host -v /var/occne/cluster/<cluster-name>:/host -v /var/occne:/var/occne:rw -e OCCNEINV=/host/terraform/hosts -e 'OCCNEARGS=--extra-vars={"occne_vcne":"1","occne_cluster_name":"ankit-upgrade-3","occne_repo_host":"ankit-upgrade-3-bastion-1","occne_repo_host_address":"192.168.200.9"} --extra-vars={"openstack_username":"ankit.misra","openstack_password":"{Cloud-Password}","openstack_auth_url":"http://thundercloud.us.oracle.com:5000/v3","openstack_region":"RegionOne","openstack_tenant_id":"811ef89b5f154ab0847be2f7e41117c0","openstack_domain_name":"LDAP","openstack_lbaas_subnet_id":"2787146b-56fe-4c58-bd87-086856de24a9","openstack_lbaas_floating_network_id":"e4351e3e-81e3-4a83-bdc1-dde1296690e3","openstack_lbaas_use_octavia":"true","openstack_lbaas_method":"ROUND_ROBIN","openstack_lbaas_enabled":true} ' winterfell:5000/occne/k8s_install:1.6.0 /upgrade/pre-upgrade.sh
- Wait for all pods to become ready (READY 1/1) with STATUS Running. This can be verified by executing the following command from the Bastion Host:
$ kubectl get pod -A
- Execute the following commands from the Bastion Host to pass the OCCNE_CLUSTER environment variable to the Jenkins container. This applies to both vCNE and Bare Metal:
$ docker stop occne_jenkins
$ docker rm -f occne_jenkins
$ docker run -u root -d --name occne_jenkins --restart=always -p 8080:8080 -p 50000:50000 -e OCCNE_CLUSTER=${OCCNE_CLUSTER} -v jenkins-data:/var/jenkins_home -v {{ cluster_dir }}:{{ cluster_dir }} -v /var/run/docker.sock:/var/run/docker.sock {{ central_repo_hostname }}:{{ central_repo_docker_port }}/jenkinsci/blueocean:{{ jenkins_tag }}
Example:
$ docker run -u root -d --name occne_jenkins --restart=always -p 8080:8080 -p 50000:50000 -e OCCNE_CLUSTER=${OCCNE_CLUSTER} -v jenkins-data:/var/jenkins_home -v /var/occne/cluster/john-doe:/var/occne/cluster/john-doe -v /var/run/docker.sock:/var/run/docker.sock winterfell:5000/jenkinsci/blueocean:1.19.0
- Execute the following command on the Bastion Host to run provision getdeps, which generates the script pipeline.sh in the artifacts directory:
- For a bare metal cluster:
$ docker run -it --rm -v /var/occne/cluster/${OCCNE_CLUSTER}:/host -e ANSIBLE_NOCOLOR=1 -e OCCNEARGS='' winterfell:5000/occne/provision:<image-tag> /getdeps/getdeps
Example:
$ docker run -it --rm -v /var/occne/cluster/${OCCNE_CLUSTER}:/host -e ANSIBLE_NOCOLOR=1 -e OCCNEARGS='' winterfell:5000/occne/provision:1.6.0 /getdeps/getdeps
- For a vCNE cluster:
$ docker run -it --rm -v /var/occne/cluster/${OCCNE_CLUSTER}:/host -e 'OCCNEARGS=--extra-vars={"occne_vcne":"1"} ' winterfell:5000/occne/provision:<image-tag> /getdeps/getdeps
Example:
$ docker run -it --rm -v /var/occne/cluster/${OCCNE_CLUSTER}:/host -e 'OCCNEARGS=--extra-vars={"occne_vcne":"1"} ' winterfell:5000/occne/provision:1.6.0 /getdeps/getdeps
- Get the administrator password
from the Jenkins container to log into the Jenkins user interface running on
the Bastion Host.
- SSH to the Bastion Host and run the following command to get the Jenkins Docker container ID:
$ docker ps | grep 'jenkins' | awk '{print $1}'
Example output:
19f6e8d5639d
- Get the admin password from the Jenkins container. Execute the following command to open a bash shell in the container:
$ docker exec -it <container id from above command> bash
- Run the following command from the Jenkins container while in bash mode. Capture the password for later use with username admin to log in to the Jenkins GUI.
$ cat /var/jenkins_home/secrets/initialAdminPassword
Example output:
e1b3bd78a88946f9a0a4c5bfb0e74015
- Execute the following ssh command from the
Jenkins container in bash mode after getting the bastion host IP
internal address:
Note: The Bare Metal user is admusr and the vCNE user is cloud-user.
$ ssh -t -t -i /var/occne/cluster/${OCCNE_CLUSTER}/.ssh/occne_id_rsa <user>@<bastion_host_ip_address>
Example (for vCNE):
$ ssh -t -t -i /var/occne/cluster/${OCCNE_CLUSTER}/.ssh/occne_id_rsa cloud-user@192.168.200.17
- After executing the SSH
command the following prompt appears:
The authenticity of host can't be established. Are you sure you want to continue connecting (yes/no)
Enter yes.
- Exit from bash mode of the Jenkins container (that is, enter exit at the command line).
- Open the Jenkins GUI in a browser window using the URL <bastion-host-ip>:8080. Log in as admin using the password from step 7c.
Note:
You may be prompted to enter proxy configurations or to skip the plugin configurations. Select Skip Plugin Configuration. You will then be prompted to configure the first admin user. Creating a new admin user here is optional. Use credentials: username: admin, password: admin, Full name: admin, and email address: admin@<domain.com>. The Full name and email address do not have to be valid values. You may also see the Unlock Jenkins page. If this page is displayed, enter the password from step 7c.
- Create a job with an appropriate name after
clicking New Item from Jenkins home page. Follow the steps below:
- Click New Item on the Jenkins home page.
- Add a name (such as upgrade) and select the Pipeline option for creating the job.
- This brings up the Configure page (as displayed in step e below) with the General tab selected. If the Configure page is displayed, skip the next step and go to step e.
- Once the job is created and visible on the Jenkins home page, select Job. Select Configure.
- This brings up the Configure page and allows the user to add parameters to the configuration. Select This project is parameterized checkbox. This will display the following screen with the Add Parameter button visible. Select the Add Parameter button twice (after the first string parameter is added, the Add Parameter button will be just below the first parameter dialog box) and add two string parameters using the Select String Parameter menu item.
- Add parameters
OCCNE_CLUSTER and CENTRAL_REPO from configure
screen. Two String Parameter dialogs will appear, one for
OCCNE_CLUSTER and one for CENTRAL_REPO. Enter
values for the Default Value fields in the
OCCNE_CLUSTER dialog and CENTRAL_REPO dialog.
Figure 2-1 Jenkins UI
- Copy the following configuration to the
pipeline script section in the Configure page of the Jenkins
job by manually substituting values for <upgrade_image_version>
and <central_repo_docker_port>.
Note:
There are two examples below. Use the appropriate version depending on whether your Openstack provider uses credentials or certificates.
- For deployment on an Openstack non-certificate authentication environment:
node ('master') {
    sh "docker run -i --rm -v /var/occne/cluster/${OCCNE_CLUSTER}:/host -e ANSIBLE_NOCOLOR=1 ${CENTRAL_REPO}:<central_repo_docker_port>/occne/provision:<upgrade_image_version> cp deploy_upgrade/JenkinsFile /host/artifacts"
    load '/var/occne/cluster/<cluster-name>/artifacts/JenkinsFile'
}
Example:
node ('master') {
    sh "docker run -i --rm -v /var/occne/cluster/${OCCNE_CLUSTER}:/host -e ANSIBLE_NOCOLOR=1 ${CENTRAL_REPO}:5000/occne/provision:1.6.0 cp deploy_upgrade/JenkinsFile /host/artifacts"
    load '/var/occne/cluster/occne3-john-doe/artifacts/JenkinsFile'
}
- For deployment on an Openstack certificate authentication environment:
node ('master') {
    sh "docker run -i --rm -v /var/occne/cluster/${OCCNE_CLUSTER}:/host -e ANSIBLE_NOCOLOR=1 ${CENTRAL_REPO}:<central_repo_docker_port>/occne/provision:<upgrade_image_version> cp deploy_upgrade/JenkinsFile /host/artifacts"
    sh "docker run -i --rm -v /var/occne/cluster/${OCCNE_CLUSTER}:/host -e ANSIBLE_NOCOLOR=1 ${CENTRAL_REPO}:<central_repo_docker_port>/occne/provision:<upgrade_image_version> sed -i 's/\${env.openstack_domain_name}\\\\\\\\\\\\\"}/\${env.openstack_domain_name}\\\\\\\\\\\\\",\\\\\\\\\\\\\"openstack_cacert\\\\\\\\\\\\\":\\\\\\\\\\\\\"\\/host\\/openstack-cacert.pem\\\\\\\\\\\\\"}/g' /host/artifacts/JenkinsFile"
    load '/var/occne/cluster/<cluster-name>/artifacts/JenkinsFile'
}
Example:
node ('master') {
    sh "docker run -i --rm -v /var/occne/cluster/${OCCNE_CLUSTER}:/host -e ANSIBLE_NOCOLOR=1 ${CENTRAL_REPO}:5000/occne/provision:1.6.0 cp deploy_upgrade/JenkinsFile /host/artifacts"
    sh "docker run -i --rm -v /var/occne/cluster/${OCCNE_CLUSTER}:/host -e ANSIBLE_NOCOLOR=1 ${CENTRAL_REPO}:5000/occne/provision:1.6.0 sed -i 's/\${env.openstack_domain_name}\\\\\\\\\\\\\"}/\${env.openstack_domain_name}\\\\\\\\\\\\\",\\\\\\\\\\\\\"openstack_cacert\\\\\\\\\\\\\":\\\\\\\\\\\\\"\\/host\\/openstack-cacert.pem\\\\\\\\\\\\\"}/g' /host/artifacts/JenkinsFile"
    load '/var/occne/cluster/occne3-john-doe/artifacts/JenkinsFile'
}
- Select Apply and Save.
- Go back to the Job page and select Build with Parameters to see the new parameters added from the Jenkins desktop.
- Select Build to execute the pipeline script for this job.
Note:
This job will be aborted because it writes the Jenkins file to the Bastion Host /var/occne/cluster/<cluster_name>/artifacts directory, which does not exist initially.
- Execute the following command on the Bastion Host (execute this step ONLY if upgrading to an RC build, for example, upgrading from 1.5.0 to 1.6.0-rc.5):
$ sed -i 's/1.6.0/1.6.0-<rc_version>/g' /var/occne/cluster/<cluster_name>/artifacts/JenkinsFile
Example:
$ sed -i 's/1.6.0/1.6.0-rc.5/g' /var/occne/cluster/occne3-user1/artifacts/JenkinsFile
- Select Build with Parameters option to see latest Jenkins file parameters in the Jenkins desktop.
- Select Configure from the same menu and edit the Pipeline section again. Remove or comment out the line in the script that copies the JenkinsFile:
sh "docker run -i --rm -v /var/occne/cluster/${OCCNE_CLUSTER}:/host -e ANSIBLE_NOCOLOR=1 ${CENTRAL_REPO}:<central_repo_docker_port>/occne/provision:<upgrade_image_version> cp deploy_upgrade/JenkinsFile /host/artifacts"
This must be completed whether or not certificates are used; the resulting script should look like the following:
node ('master') {
    load '/var/occne/cluster/<cluster-name>/artifacts/JenkinsFile'
}
Example:
node ('master') {
    load '/var/occne/cluster/occne3-user1/artifacts/JenkinsFile'
}
- Click the Apply button and then the Save button.
Upgrade Procedure
This section describes how to upgrade OCCNE.
Following is the procedure to upgrade OCCNE:
- Click the job name created in the previous step. Select the Build with Parameters option on the left top corner panel in the Jenkins GUI.
- On selecting the Build with Parameters option, a list of parameters is displayed with descriptions indicating which values to use for the Bare Metal upgrade versus the vCNE upgrade.
Notes on adding parameters:
- If default values are not displayed they should be manually entered.
- The following input parameter values are required for upgrading both a bare metal cluster and a vCNE cluster:
- CENTRAL_REPO: The name of central repository. For example: 'winterfell'
- USER: The value should be set to admusr for upgrading a bare-metal cluster and cloud-user for upgrading a vCNE cluster.
- HOSTIP: External IP address of Bastion Host
- OCCNE_REPO_HOST: Name of the Bastion Host
- OCCNE_REPO_HOST_ADDRESS: Internal IP address of Bastion Host
- DBTIER_NDB_CLUSTER_ID: The cluster_id used during the previous dbtier installation.
- DBTIER_REPLICATION_SVC_IP: Must be set to the EXTERNAL-IP of the dbtier replication service of site 1 if replication is enabled between site 1 and site 2, with the assumption that site 1 was installed first. If you are upgrading site 1, leave DBTIER_REPLICATION_SVC_IP blank. If you are upgrading site 2, the replication service IP address of site 1 can be obtained by executing the following command from the site 1 Bastion Host:
$ kubectl get svc -n occne-infra
NAME                       TYPE           CLUSTER-IP      EXTERNAL-IP    PORT(S)        AGE
...
occne-db-replication-svc   LoadBalancer   10.233.43.244   10.75.235.61   80:32416/TCP   23h
...
- The following additional input parameter values are required only for upgrading a vCNE cluster. The Openstack-specific values can be obtained by using the openstack configuration show command on a shell that supports the openstack client (CLI):
- OPENSTACK_AUTH_URL: Openstack authentication URL. For example: 'http://thundercloud.us.oracle.com:5000/v3'
- OPENSTACK_USERNAME: Openstack user name
- OPENSTACK_PROJECT_ID: Openstack project ID
- OPENSTACK_USER_DOMAIN_NAME: Openstack user domain name
- OPENSTACK_PASSWORD: Openstack password
- OPENSTACK_REGION_NAME: Openstack region name
- After entering correct values for parameters, select Build to start the upgrade.
- Once the build has started, go to the job home page to see the live console for upgrade. This can be done in two ways: Either select console output or Open Blue Ocean as shown in the image below (Blue Ocean is recommended as it will show each stage of upgrade).
- Re-trigger the Jenkins build by repeating the procedure from step 2 if the build gets aborted with the following message (applicable only for a bare metal upgrade):
Reboot is required to ensure that your system benefits from these updates.
+ echo 'Node being restarted due to updates, returns 255'
Node being restarted due to updates, returns 255
+ nohup sudo -b bash -c 'sleep 2; reboot'
+ echo 'restart queued'
restart queued
+ exit 255
- Check the job progress from the Blue Ocean link in the job to see each stage being executed. Once the upgrade is complete, all the stages will be green.
Post-Upgrade Procedures
This section describes the post-upgrade procedure.
- Execute the following procedure to revert the changes so that SNMP continues to run:
- Edit SNMP notifier to add (snmp.destination) IP
back:
$ kubectl edit deployment occne-snmp-notifier -n occne-infra
Move the cursor to the line:
- --snmp.destination=127.0.0.1:162
Modify it to the first trap receiver IP:
- --snmp.destination=<trap receiver ip address>:162
The editor is vi; use the vi command :x or :wq to save the change and exit.
- If multiple SNMP notifiers were created before the upgrade, the Alert Manager configmap needs to be reloaded with the previous configuration:
$ kubectl edit configmap occne-prometheus-alertmanager -n occne-infra
apiVersion: v1
data:
  alertmanager.yml: |
    global: {}
    receivers:
    - name: default-receiver
      webhook_configs:
      - url: http://occne-snmp-notifier:9464/alerts
    - name: test-receiver-1
      webhook_configs:
      - url: http://occne-snmp-notifier-1:9465/alerts
    route:
      group_interval: 5m
      group_wait: 10s
      receiver: default-receiver
      repeat_interval: 3h
      routes:
      - receiver: default-receiver
        group_interval: 1m
        group_wait: 10s
        repeat_interval: 9y
        group_by: [instance, alertname, severity]
        continue: true
      - receiver: test-receiver-1
        group_interval: 1m
        group_wait: 10s
        repeat_interval: 9y
        group_by: [instance, alertname, severity]
        continue: true
- Restart Alert Manager pods for the configmap changes to take effect:
- Execute the following command to make sure both Alert Manager pods are running:
$ kubectl get pods -n occne-infra | grep alert
occne-prometheus-alertmanager-0   2/2   Running   0   5h
occne-prometheus-alertmanager-1   2/2   Running   0   5h
- Execute the following command to delete the first Alert Manager pod. The pod is recreated automatically after deletion:
$ kubectl delete pod occne-prometheus-alertmanager-0 -n occne-infra
pod "occne-prometheus-alertmanager-0" deleted
- Execute the following command to make sure the new Alert Manager pod is up and running:
$ kubectl get pods -n occne-infra | grep alert
occne-prometheus-alertmanager-0   2/2   Running   0   5h
occne-prometheus-alertmanager-1   2/2   Running   0   50s
- Execute the following command to delete the second Alert Manager pod. The pod is recreated automatically after deletion:
$ kubectl delete pod occne-prometheus-alertmanager-1 -n occne-infra
pod "occne-prometheus-alertmanager-1" deleted
- Execute the following command to make sure the new Alert Manager pod is up and running:
$ kubectl get pods -n occne-infra | grep alert
occne-prometheus-alertmanager-0   2/2   Running   0   5h
occne-prometheus-alertmanager-1   2/2   Running   0   50s
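The snmp.destination edit in the procedure above can also be applied non-interactively to an exported manifest file. A sketch, where the file name and the receiver IP 10.0.0.5 are illustrative stand-ins and GNU sed is assumed for in-place editing:

```shell
# Rewrite the --snmp.destination flag in a saved manifest file.
# usage: set_trap_receiver <file> <receiver-ip>
# Assumes GNU sed (-i with no backup suffix).
set_trap_receiver() {
  sed -i "s#--snmp.destination=[^:]*:162#--snmp.destination=$2:162#" "$1"
}

# Demo on a one-line sample mimicking the deployment args:
printf '%s\n' '        - --snmp.destination=127.0.0.1:162' > /tmp/snmp-args.yml
set_trap_receiver /tmp/snmp-args.yml 10.0.0.5
cat /tmp/snmp-args.yml   # prints the line with the new receiver IP
```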
- Execute the Configuring ZIPKIN Support in Jaeger Collector procedure to restore Zipkin compatibility in Jaeger.
- Encrypt the Kubernetes secrets using the following command. Since secrets are
encrypted on write, performing an update on a secret will encrypt that
content:
$ kubectl get secrets --all-namespaces -o json | kubectl replace -f -
Post Upgrade Health Checks
The health check procedure is as follows:
- Run the command below and verify that all the pods in the occne-infra namespace are running. All the pods in the list should have STATUS Running and READY set to 1/1.
kubectl get pods -n occne-infra
Sample output:
NAME                                 READY   STATUS    RESTARTS   AGE
occne-elastic-elasticsearch-data-0   1/1     Running   0          2d1h
- Load previously configured Grafana dashboard:
- Click + icon on the left panel, click Import.
- Once in Import panel, click Upload .json file. Choose the dashboard file which is saved locally.
- Repeat the above steps for all previously configured dashboards.