2 SCP Installation
This chapter explains the installation procedure of SCP.
Prerequisites
The following are the prerequisites to install and configure SCP:
SCP Software
The following minimum software versions must be installed before deploying SCP:
Table 2-1 Pre-installed Software
Software | Version |
---|---|
Kubernetes | v1.15.3 and v1.18.4 |
HELM | v2.14.3 and v3.1.2 |
ASM | 1.4.6-am9 |
Note:
If any of the above software is not installed in the CNE, install the specified software before proceeding.
The following are the common services that need to be deployed as per the requirement:
Table 2-2 Common Services
Software | Chart Version | Required For |
---|---|---|
elasticsearch | 7.6.1 | Logging Area |
elastic-curator | 5.5.4 | Logging Area |
elastic-exporter | 1.1.0 | Logging Area |
elastic-master | 7.6.1 | Logging Area |
logs | 3.0.0 | Logging Area |
kibana | 7.6.1 | Logging Area |
grafana | 7.0.4 | Metrics Area |
prometheus | 2.16.0 | Metrics Area |
prometheus-kube-state-metrics | 1.9.5 | Metrics Area |
prometheus-node-exporter | 0.18.1 | Metrics Area |
metallb | 0.9.3 | External IP |
metrics-server | 2.10.0 | Metric Server |
tracer | 1.14.0 | Tracing Area |
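A quick way to confirm the prerequisite versions and deployed common services is shown below (a minimal sketch; helm ls --all-namespaces is Helm 3 syntax, use plain helm ls with Helm 2):
kubectl version --short    # Kubernetes client and server versions
helm version --short       # HELM version
istioctl version           # ASM/Istio control-plane and sidecar versions
helm ls --all-namespaces   # lists deployed common-service releases (Helm 3)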
Network access
The Kubernetes cluster hosts must have network access to:
- Local docker image repository where the SCP images are available
- Local helm repository where the SCP helm charts are available
- Service FQDN of SCP must be discoverable from outside the cluster (that is, publicly exposed so that ingress messages to SCP can come from outside of Kubernetes).
Note:
All kubectl and helm commands used in this guide must be executed on a system or on the bastion host, depending on the infrastructure/deployment. It can be a client machine such as a VM, server, local desktop, and so on.
Client machine requirements
- It should have network access to the helm repository and docker image repository.
- Helm repository must be configured on the client.
- It should have network access to the Kubernetes cluster.
- It should have the necessary environment settings to run kubectl commands. The environment should have privileges to create a namespace in the Kubernetes cluster.
- It should have the helm client installed with the push plugin. The environment should be configured so that the helm install command deploys the software in the Kubernetes cluster.
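A minimal sketch to sanity-check the client machine against these requirements (the repository and registry names are placeholders):
kubectl cluster-info        # network access to the Kubernetes cluster
helm repo list              # the helm repository must appear here
helm plugin list            # the push plugin must be listed
docker login <docker-repo>  # access to the docker image repository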
SCP Images
The following are the SCP images:
Table 2-3 SCP Images
Microservices | Image |
---|---|
SCP-Worker | scp-worker |
SCPC-Pilot | scpc-pilot |
SCPC-Soothsayer | soothsayer-configuration |
SCPC-Soothsayer | soothsayer-notification |
SCPC-Soothsayer | soothsayer-subscription |
SCPC-Soothsayer | soothsayer-audit |
SCP-SDS | scp-sds |
Installation Sequence
This section provides information on the prerequisites and installation procedure of SCP.
- For the docker registry, refer to the Docker Image Registry Configuration chapter.
- For executing the following commands on the Bastion Host, refer to the Bastion Host Installation chapter.
Installation Tasks
This section describes how to install SCP on a cloud native environment.
Downloading SCP package
- Log in to MOS using the appropriate login credentials.
- Select the Product & Updates tab.
- In the Patch Search console, select the Product or Family (Advanced) tab.
- Enter Oracle Communications Cloud Native Core - 5G in the Product field and select the product from the Product drop-down list.
- Select Oracle Communications Cloud Native Core Security Communication Proxy <release_number> in the Release field.
- Click Search. The Patch Advanced Search Results list appears.
- Select the required patch from the list. The Patch Details window appears.
- Click Download. The File Download window appears.
- Click the <p********_<release_number>_Tekelec>.zip file.
- Click the zip file to download the network function patch to the system where the network function must be installed.
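After the download, the package integrity can be verified against the checksums listed in Readme.txt (a minimal sketch; the file name is the 1.7.3 example used later in this chapter):
cksum ocscp-pkg-1.7.3.0.0.tgz
md5sum ocscp-pkg-1.7.3.0.0.tgz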
Predeployment Configurations to Install SCP with ASM
Note:
Refer to ASM Resource for ASM-related parameter information. You need to log in using ASPEN credentials.
- Create a namespace for SCP deployment, if not already created:
kubectl create ns <scp-namespace-name>
- Follow the steps below to set up connectivity to the database (DB) service:
- For VM based DB:
- Create a Headless service for DB connectivity in SCP namespace:
kubectl apply -f db-connectivity.yaml
Sample db-connectivity.yaml file:
# db_service_external.yaml
apiVersion: v1
kind: Endpoints
metadata:
  name: scp-db-connectivity-service-headless
  namespace: <db-namespace>
subsets:
- addresses:
  - ip: <10.75.203.49> # IP Endpoint of DB service.
  ports:
  - port: 3306
    protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
  name: scp-db-connectivity-service-headless
  namespace: <db-namespace>
spec:
  clusterIP: None
  ports:
  - port: 3306
    protocol: TCP
    targetPort: 3306
  sessionAffinity: None
  type: ClusterIP
---
apiVersion: v1
kind: Service
metadata:
  name: scp-db-connectivity-service
  namespace: <scp-namespace>
spec:
  externalName: scp-db-connectivity-service-headless.<db-namespace>.svc.<domain>
  sessionAffinity: None
  type: ExternalName
- Create ServiceEntry and DestinationRule for the DB connectivity service:
kubectl apply -f db-se-dr.yaml
Sample db-se-dr.yaml file:
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: scp-db-external-se
  namespace: <scp-namespace>
spec:
  exportTo:
  - "."
  hosts:
  - scp-db-connectivity-service-headless.<db-namespace>.svc.<domain>
  ports:
  - number: 3306
    name: mysql
    protocol: MySQL
  location: MESH_EXTERNAL
  resolution: NONE
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: scp-db-external-dr
  namespace: <scp-namespace>
spec:
  exportTo:
  - "."
  host: scp-db-connectivity-service-headless.<db-namespace>.svc.<domain>
  trafficPolicy:
    tls:
      mode: DISABLE
- For KubeVirt based DB:
A DB connectivity headless service is not required for a KubeVirt based deployment, as the DB service may be exposed as a Kubernetes service. SCP can use the Kubernetes service FQDN to connect to the DB service.
Create a DestinationRule with the DB FQDN to disable mTLS:
kubectl apply -f db-dr.yaml
Sample db-dr.yaml file:
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: scp-db-service-dr
  namespace: <scp-namespace>
spec:
  exportTo:
  - "."
  host: <db-service-fqdn>.<db-namespace>.svc.<domain>
  trafficPolicy:
    tls:
      mode: DISABLE
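After applying the DB connectivity resources, a quick verification can be run (a minimal sketch using the names from the samples above):
kubectl -n <scp-namespace> get svc scp-db-connectivity-service
kubectl -n <db-namespace> get endpoints scp-db-connectivity-service-headless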
- Configure access to Kubernetes API Service:
- Create a service entry in pod networking so that pods can access the kubernetes api-server:
kubectl apply -f kube-api-se.yaml
Sample kube-api-se.yaml file:
# service_entry_kubernetes.yaml
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: kube-api-server
  namespace: <scp-namespace>
spec:
  hosts:
  - kubernetes.default.svc.<domain>
  exportTo:
  - "."
  addresses:
  - <10.96.0.1> # cluster IP of kubernetes api server
  location: MESH_INTERNAL
  ports:
  - number: 443
    name: https
    protocol: HTTPS
  resolution: NONE
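The cluster IP used in the addresses field above can be looked up instead of being assumed (a minimal sketch):
kubectl get svc kubernetes -n default -o jsonpath='{.spec.clusterIP}'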
- Set NRF connectivity by creating ServiceEntry and DestinationRule to access an external or public NRF service (not part of the Service Mesh Registry):
kubectl apply -f nrf-se-dr.yaml
Sample nrf-se-dr.yaml file:
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: nrf-dr
  namespace: <scp-namespace>
spec:
  exportTo:
  - .
  host: ocnrf.3gpp.oracle.com
  trafficPolicy:
    tls:
      mode: MUTUAL
      clientCertificate: /etc/certs/cert-chain.pem
      privateKey: /etc/certs/key.pem
      caCertificates: /etc/certs/root-cert.pem
---
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: nrf-se
  namespace: <scp-namespace>
spec:
  exportTo:
  - .
  hosts:
  - "ocnrf.3gpp.oracle.com"
  ports:
  - number: 80
    name: http2
    protocol: HTTP2
  location: MESH_EXTERNAL
  resolution: NONE
- Enable Inter-NF communication:
If Consumer and Producer NFs are not part of Service Mesh Registry, create Destination Rules and Service Entries in SCP namespace for all known call-flows to enable inter NF communication.
kubectl apply -f known-nf-se-dr.yaml
Sample known-nf-se-dr.yaml file:
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: udm1-dr
  namespace: <scp-namespace>
spec:
  exportTo:
  - .
  host: s24e65f98-bay190-rack38-udm-11.oracle-ocudm.cnc.us-east.oracle.com
  trafficPolicy:
    tls:
      mode: MUTUAL
      clientCertificate: /etc/certs/cert-chain.pem
      privateKey: /etc/certs/key.pem
      caCertificates: /etc/certs/root-cert.pem
---
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: udm1-se
  namespace: <scp-namespace>
spec:
  exportTo:
  - .
  hosts:
  - "s24e65f98-bay190-rack38-udm-11.oracle-ocudm.cnc.us-east.oracle.com"
  ports:
  - number: 16016
    name: http2
    protocol: HTTP2
  location: MESH_EXTERNAL
  resolution: NONE
Note:
DestinationRule and ServiceEntry ASM resources also need to be created for the following and similar cases:
- If an NF is registered with callback URI(s) or notification URI(s) that are not part of the Service Mesh Registry.
- If a callbackReference is used in a known call-flow and contains a URI that is not part of the Service Mesh Registry.
kubectl apply -f callback-uri-se-dr.yaml
Sample callback-uri-se-dr.yaml file:
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: udm-callback-dr
  namespace: <scp-namespace>
spec:
  exportTo:
  - .
  host: udm-notifications-processor-03.oracle-ocudm.cnc.us-east.oracle.com
  trafficPolicy:
    tls:
      mode: MUTUAL
      clientCertificate: /etc/certs/cert-chain.pem
      privateKey: /etc/certs/key.pem
      caCertificates: /etc/certs/root-cert.pem
---
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: udm-callback-se
  namespace: <scp-namespace>
spec:
  exportTo:
  - .
  hosts:
  - "udm-notifications-processor-03.oracle-ocudm.cnc.us-east.oracle.com"
  ports:
  - number: 16016
    name: http2
    protocol: HTTP2
  location: MESH_EXTERNAL
  resolution: NONE
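Once all predeployment resources are applied, they can be listed to confirm creation (a minimal sketch; assumes the ASM CRDs are installed):
kubectl -n <scp-namespace> get serviceentries,destinationrules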
SCP Deployment Configuration with ASM
Deployment Configuration
- Create a namespace label for auto sidecar injection to automatically add sidecars to all pods spawned in the SCP namespace:
kubectl label ns <scp-namespace> istio-injection=enabled
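The label can be confirmed as follows (a minimal sketch):
kubectl get ns <scp-namespace> --show-labels   # should show istio-injection=enabled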
- Create a Service Account for SCP and a Role with appropriate security policies for the sidecar proxies to work, using the sa-role-rolebinding.yaml file.
- Map the role and service accounts by creating a RoleBinding (see the sample):
kubectl apply -f sa-role-rolebinding.yaml
Sample sa-role-rolebinding.yaml file:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: ocscp-release-1-7-2-scp-serviceaccount
  namespace: <scp-namespace>
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: ocscp-release-1-7-2-scp-role
  namespace: <scp-namespace>
rules:
- apiGroups:
  - policy
  resources:
  - podsecuritypolicies
  verbs:
  - use
  resourceNames:
  - ocscp-restricted
- apiGroups:
  - networking.ocscp.oracle.io
  resources:
  - virtualservices
  - serviceentries
  - gateways
  - envoyfilters
  - destinationrules
  - sidecars
  verbs: ["*"]
- apiGroups: ["config.ocscp.oracle.io"]
  resources: ["*"]
  verbs: ["*"]
- apiGroups: ["rbac.ocscp.oracle.io"]
  resources: ["*"]
  verbs: ["*"]
- apiGroups: ["authentication.ocscp.oracle.io"]
  resources: ["*"]
  verbs: ["*"]
- apiGroups: [""]
  resources:
  - pods
  - services
  verbs: ["*"]
- apiGroups:
  - "" # "" indicates the core API group
  resources:
  - secrets
  - endpoints
  verbs:
  - get
  - watch
  - list
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: ocscp-release-1-7-2-scp-rolebinding
  namespace: <scp-namespace>
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: ocscp-release-1-7-2-scp-role
subjects:
- kind: ServiceAccount
  name: ocscp-release-1-7-2-scp-serviceaccount
  namespace: <scp-namespace>
- kind: Group
  apiGroup: rbac.authorization.k8s.io
  name: system:serviceaccounts:<scp-namespace>
---
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: ocscp-restricted
spec:
  allowPrivilegeEscalation: false
  allowedCapabilities:
  - NET_ADMIN
  - NET_RAW
  fsGroup:
    rule: RunAsAny
  runAsUser:
    rule: RunAsAny
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  volumes:
  - '*'
- Update ocscp-custom-values-1.7.3.yaml with the following annotations. Update other values such as the DB details and service account as created in the above steps:
global:
  customExtension:
    allResources:
      annotations:
        sidecar.istio.io/inject: "\"false\""
    lbDeployments:
      annotations:
        sidecar.istio.io/inject: "\"true\""
        oracle.com/cnc: "\"true\""
    nonlbDeployments:
      annotations:
        sidecar.istio.io/inject: "\"true\""
        oracle.com/cnc: "\"true\""
  scpServiceAccountName: <"ocscp-release-1-7-3-scp-serviceaccount">
  clusterRoleBindingEnabled: false
  database:
    dbHost: <"scp-db-connectivity-service"> #DB Service FQDN
scpc-configuration:
  service:
    type: ClusterIP
scp-worker:
  tracingenable: false
  service:
    type: ClusterIP
  deployment:
    customExtension:
      annotations:
        sidecar.istio.io/inject: "\"false\"" # Not required for ASM release 1.6+ and must be removed.
        traffic.sidecar.istio.io/excludeInboundPorts: "8001"
Note:
- Sidecar inject = false annotation on all resources prevents sidecar injection on pods created by helm jobs/hooks.
- Deployment overrides re-enable auto sidecar injection on all deployments.
- scp-worker override disables auto sidecar injection for scp-worker microservice, as it is done manually in later stages. This override is only required for ASM release 1.4/1.5. If integrating with ASM 1.6+, it must be removed.
- 'oracle.com/cnc' annotation is required for integration with OSO Services.
- Jaeger tracing must be disabled, as it may interfere with SM end-to-end traces.
Manual sidecar injection
- Get the sidecar injection configuration and save it in a file:
kubectl -n istio-system get configmap istio-sidecar-injector -o=jsonpath='{.data.config}' > inject-config.yaml
- Get the ASM mesh configuration and save it in a file:
kubectl -n istio-system get configmap istio -o=jsonpath='{.data.mesh}' > mesh-config.yaml
- Update the concurrency value to '8' in mesh-config.yaml (can be automated using 'sed'):
concurrency: 8
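For example, the following sed one-liner applies the change (a sketch; it assumes a top-level concurrency key already exists in mesh-config.yaml):
sed -i 's/^concurrency:.*/concurrency: 8/' mesh-config.yaml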
- Patch the SCP-Worker deployment with the sidecar injection configuration and the updated mesh configuration file:
kubectl get deployment -n <scp-namespace> <scp-worker-deployment> -o yaml | $ASM_HOME/bin/istioctl kube-inject --injectConfigFile inject-config.yaml --meshConfigFile mesh-config.yaml -f - | kubectl apply -f -
After sidecar injection, scp-worker pods show 2/2 in the Ready column.
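A quick check (a minimal sketch):
kubectl -n <scp-namespace> get pods | grep scp-worker   # READY column should show 2/2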
Installing SCP
Note:
If ingress gateway is not used, skip to Install SCP.
- Unzip the release package file to the system where you want to install the network function. You can find the SCP package as follows:
ReleaseName-pkg-Releasenumber.tgz
where:
ReleaseName is a name that is used to track this installation instance.
Releasenumber is the release number.
For example, ocscp-pkg-1.7.3.0.0.tgz
- Untar the OCSCP package file to get the OCSCP docker image tar file:
tar -xvzf ReleaseName-pkg-Releasenumber.tgz
The directory consists of the following:
- SCP Docker Images File: tarball contains images of SCP
ocscp-images-1.7.3.tar
- Helm File: tarball contains SCP Helm charts and templates
ocscp-1.7.3.tgz
- Helm File: tarball contains Ingress Gateway Helm charts and templates
ocscp-ingress-gateway-1.7.7.tgz
- Ingress Gateway Docker Images File: tarball contains images of Ingress Gateway
ocscp-ingress-gateway-images-1.7.7.tar
- Readme txt: contains cksum and md5sum of the tarballs
Readme.txt
- Load the ocscp-images-<release_number>.tar file into the Docker system:
docker load --input /IMAGE_PATH/ocscp-images-<release_number>.tar
- Verify that the images are loaded correctly by entering this command:
docker images
- Execute the following commands to push the docker images to the docker registry:
docker tag <image-name>:<image-tag> <docker-repo>/<image-name>:<image-tag>
docker push <docker-repo>/<image-name>:<image-tag>
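For example, for the scp-worker image (a sketch; the registry host registry.example.com is illustrative):
docker tag scp-worker:1.7.3 registry.example.com/scp-worker:1.7.3
docker push registry.example.com/scp-worker:1.7.3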
- Untar the helm files and push the charts to the helm repository:
tar -xvf <<nfname>-pkg-<marketing-release-number>>.tgz
helm push <image_name>.tgz <helm_repo>
Note:
The ocscp-ingress-gateway-1.7.7.tgz file must be pushed if SCP is deployed with Ingress gateway.
- Create the DB user and database. The SCP DB user must be created on all MySQL nodes:
- Log in to the MySQL server.
- Execute the following command:
create database <scp_dbname>;
For example:
create database ocscpdb;
- Create an admin user by executing the following command:
CREATE USER 'username'@'%' IDENTIFIED BY 'password';
where username and password are the MySQL privileged user's login credentials.
For example:
CREATE USER 'scpPrivilegedUsr'@'%' IDENTIFIED BY 'scpPrivilegedPasswd';
- Create an application user by executing the following command:
CREATE USER 'username'@'%' IDENTIFIED BY 'password';
where username and password are the MySQL application user's login credentials.
For example:
CREATE USER 'scpApplicationUsr'@'%' IDENTIFIED BY 'scpApplicationPasswd';
Note:
The above steps (steps a and c) must be executed on all MySQL nodes.
- Grant the necessary permissions to the SCP users created:
- Run the following command to grant permissions to the admin user:
GRANT SELECT, INSERT, CREATE, ALTER, DROP, LOCK TABLES, CREATE TEMPORARY TABLES, DELETE, UPDATE, EXECUTE, REFERENCES ON <scp_dbname>.* TO 'username'@'%';
For example:
GRANT SELECT, INSERT, CREATE, ALTER, DROP, LOCK TABLES, CREATE TEMPORARY TABLES, DELETE, UPDATE, EXECUTE, REFERENCES ON ocscpdb.* TO 'scpPrivilegedUsr'@'%';
- Run the following command to grant permissions to the application user:
GRANT SELECT, INSERT, DELETE, UPDATE ON <scp_dbname>.* TO 'username'@'%';
For example:
GRANT SELECT, INSERT, DELETE, UPDATE ON ocscpdb.* TO 'scpApplicationUsr'@'%';
Note:
The user must use the <scp_dbname> provided on the MySQL server in the helm chart during SCP deployment. The application user can be the same as the privileged user.
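The grants can be confirmed from any MySQL node (a minimal sketch; the host and login are illustrative):
mysql -h <mysql-host> -u root -p -e "SHOW GRANTS FOR 'scpPrivilegedUsr'@'%'; SHOW GRANTS FOR 'scpApplicationUsr'@'%';"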
- Perform the following steps to create a kubernetes secret for the admin user and the application user, respectively:
- For the privileged user:
kubectl create secret generic privilegeduser-secret --from-literal=DB_USERNAME=scpPrivilegedUsr --from-literal=DB_PASSWORD=scpPrivilegedPasswd --from-literal=DB_NAME=ocscpdb -n scpsvc
- For the application user:
kubectl create secret generic appuser-secret --from-literal=DB_USERNAME=scpApplicationUsr --from-literal=DB_PASSWORD=scpApplicationPasswd --from-literal=DB_NAME=ocscpdb -n scpsvc
Note:
Ingress gateway and SCP must be in the same namespace.
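Both secrets can be verified without exposing the passwords (a minimal sketch; the scpsvc namespace matches the commands above):
kubectl -n scpsvc get secret privilegeduser-secret appuser-secret
kubectl -n scpsvc describe secret privilegeduser-secret   # lists keys and sizes only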
- (Optional) If you want to install SCP with Aspen Service Mesh (ASM) perform the predeployment tasks as per Predeployment Configurations to Install SCP with ASM.
- Create the ocscp-custom-values-1.7.3.yaml file with the required input parameters. To customize the file, refer to Customizing SCP chapter. If ingress gateway is deployed with SCP, refer to Customizing SCP with Ingress Gateway chapter for customizing parameters.
- (Optional) For ASM configuration, create a service entry in pod networking so that pods can access the Kubernetes API Service in the ocscp-custom-values-1.7.3.yaml file. Refer to SCP Deployment Configuration with ASM.
- Go to the extracted SCP package directory:
cd ocscp-<release_number>
- (Optional) Install ingress gateway by executing the following command:
helm install <ocscp-ingress-gatewayreleasenumber.tgz> --name <release_name> --namespace <namespace_name> -f <ocscp_ingress_gateway_values_releasenumber.yaml>
Example:
helm install ocscp-ingressgateway-1.7.7.tgz --name <release_name> --namespace <namespace_name> -f ocscp_ingress_gateway_values_1.7.7.yaml
- Install SCP using the Helm tgz file by executing the following command:
- In case of Helm 2:
helm install <helm-repo> -f <custom_values.yaml> --name <deployment_name> --namespace <namespace_name> --version <helm_version>
- In case of Helm 3:
helm install <release name> -f <custom_values.yaml> --namespace <namespace> <helm-repo>/chart_name --version <helm_version>
- In case charts are extracted and Helm 3 is used:
helm install <release name> -f <custom_values.yaml> --namespace <namespace> <chartpath>
Example (Helm 2):
helm install ocscp-helm-repo/ocscp -f <custom_values.yaml> --name ocscp --namespace scpsvc --version <helm_version>
- (Optional) In case SCP is installed with ASM, configure SCP-Worker sidecar proxy manually as mentioned in Manual sidecar injection section.
- Execute the following command to check the status:
- In case of Helm 2:
helm status <helm-release>
- In case of Helm 3:
helm status <release name> --namespace <namespace>
- Check if all the services are deployed and running:
kubectl -n <namespace_name> get services
- Check if all the pods are up and running:
kubectl -n <namespace_name> get pods
Note: The scp-worker and scpc-pilot pod status must be Running and Ready must be n/n. The scpc-soothsayer pod status must be Running and Ready must be n/n, where n is the number of containers in the pod. The sds service must be up.
- (Optional) Upon successful installation, if SCP is deployed with ASM, perform the steps mentioned in Post-deployment tasks for SCP with ASM.
Post-deployment tasks for SCP with ASM
Inter-NF communication
If a new NF that is not part of the Service Mesh Registry is added after deployment, create DestinationRule and ServiceEntry resources for it:
kubectl apply -f new-nf-se-dr.yaml
Sample new-nf-se-dr.yaml file for DestinationRule and ServiceEntry:
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: <unique DR name for NF>
  namespace: <scp-namespace>
spec:
  exportTo:
  - .
  host: <NF-public-FQDN>
  trafficPolicy:
    tls:
      mode: MUTUAL
      clientCertificate: /etc/certs/cert-chain.pem
      privateKey: /etc/certs/key.pem
      caCertificates: /etc/certs/root-cert.pem
---
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: <unique SE name for NF>
  namespace: <scp-namespace>
spec:
  exportTo:
  - .
  hosts:
  - <NF-public-FQDN>
  ports:
  - number: <NF-public-port>
    name: http2
    protocol: HTTP2
  location: MESH_EXTERNAL
  resolution: NONE
Operations Services Overlay Installation
Note:
If OSO must be deployed in the same namespace as SCP, make sure all OSO deployments have the following annotation to skip sidecar injection, as OSO currently does not support the ASM sidecar proxy:
sidecar.istio.io/inject: "\"false\""
OCCNE common services for logging
Note:
If CNE common services must be deployed in the same namespace as SCP, make sure all CNE deployments have the following annotation to skip sidecar injection, as CNE currently does not support the ASM sidecar proxy:
sidecar.istio.io/inject: "\"false\""
Configure NRF Details
Note:
The user can configure a primary NRF and an optional secondary NRF (the NRFs must have their backend DBs synced). An IPv4 address of the NRF needs to be configured if the NRF is outside the Kubernetes cluster. If the NRF is inside the Kubernetes cluster, the user can configure an FQDN as well. If both the IPv4 address and FQDN are provided, the IPv4 address takes precedence over the FQDN.
Note:
The user needs to configure (or remove) the apiPrefix parameter based on whether APIPrefix is supported (or not supported) by the NRF.
Note:
The user needs to update the FQDN, ipv4Address, and Port of the NRF to point to the NRF's FQDN/IP and Port. The primary NRF profile must always be set to the higher priority (that is, 0); the primary and secondary profiles must not be set to the same priority.
Configure SCP as HTTP Proxy
Consumer NFs are required to set http_proxy/HTTP_PROXY to scp-worker's <FQDN or IPv4 address>:<PORT of SCP-Worker> so that consumer NFs route messages towards SCP.
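For example, on a consumer NF host (a minimal sketch; the FQDN and port are taken from the curl example below):
export http_proxy=http://scp-worker.scpsvc:8000
export HTTP_PROXY=http://scp-worker.scpsvc:8000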
Note:
Execute these commands from a host where the SCP worker and its FQDN can be accessed.
- To test successful deployment of SCP, use the following curl command:
$ curl -v -X GET --url 'http://<FQDN:PORT of SCP-Worker>/nnrf-nfm/v1/subscriptions/' --header 'Host:<FQDN:PORT of NRF>'
- Fetch the current subscription list (as a client) from NRF by sending the request to NRF via SCP.
Example:
$ curl -v -X GET --url 'http://scp-worker.scpsvc:8000/nnrf-nfm/v1/subscriptions/' --header 'Host:ocnrf-ambassador.nrfsvc:80'