Installing the Migration Tool

You can deploy the Migration tool as a separate Kubernetes job. To install the Migration tool, edit a YAML file that contains the configuration details of the tool.

A sample nudr-migration YAML file is given below:

apiVersion: batch/v1
kind: Job
metadata:
  name: ocudr-nudr-migration      # <Please use releaseName-nudr-migration>
  namespace: ocudr                # <Use the namespace created>
spec:
  backoffLimit: 0
  template:
    metadata:
      name: ocudr-nudr-migration   # <Please use releaseName-nudr-migration>
      annotations:
        "prometheus.io/port": "9000"
        "prometheus.io/path": "/actuator/prometheus"
        "prometheus.io/scrape": "true"
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                - key: kubernetes.io/hostname
                  operator: In
                  values:
                  - 5g-udr-dev-1-k8s-node-2   # <Update with the name of the node on which you want to run the migration tool>
      restartPolicy: Never
      containers:
        - name: nudr-migration
          image: "ocudr-registry.us.oracle.com:5000/ocudr/nudr_migration:1.7.40"   # <Use the docker registry path for image:image tag>
          imagePullPolicy: Always
          resources:
            requests:
              cpu: "3"
              memory: "5Gi"
            limits:
              cpu: "3"
              memory: "5Gi"
          env:
            - name: MYSQL_DATABASE
              valueFrom:
                secretKeyRef:
                  name: ocudr-secrets
                  key: dbname
            - name: DATASOURCE_USERNAME
              valueFrom:
                secretKeyRef:
                  name: ocudr-secrets        # <Use the secrets created during UDR deployment>
                  key: dsusername            # <Key for the username specified in the secrets yaml file during UDR deployment>
            - name: DATASOURCE_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: ocudr-secrets
                  key: dspassword            # <Key for the password specified in the secrets yaml file during UDR deployment>
            - name: DB_SERVICE_NAME
              value: "mysql-connectivity-service.occne-infra"   # <Default mysql-connectivity service in the occne-infra namespace>
            - name: DB_SERVICE_PORT
              value: "3306"                  # <Default port for the mysql-connectivity service>
            - name: HIKARI_POOL_SIZE
              value: "10"
            - name: LOGGING_LEVEL_ROOT
              value: "INFO"
            - name: K8S_HOST_IP
              value: "10.75.229.65"          # <Update with the external IP of the node on which the tool should run>
            - name: START_RANGE
              value: "72111100000001"
            - name: END_RANGE
              value: "72111100000100"
            - name: KEY_TYPE
              value: "msisdn"
            - name: DELETE_SOURCE_UDR_USER
              value: "false"
            - name: UDR_SERVICE_BASEURL
              value: "http://ocudr-ingressgateway:80"   # <Use namespace-servicename:80 of the Ingress Gateway>
            - name: HTTP_FAILED_RETRY_COUNT
              value: "5"                            # Target udr http retry count
            - name: DIAMETER_REALM
              value: "udr.oracle.com"               # Diameter Client Realm
            - name: DIAMETER_IDENTITY
              value: "udr.migration.oracle.com"     # Diameter Client Identity
            - name: DIAMETER_SETTING_NUM_OF_CONNECTIONS
              value: "3"                            # Diameter Client number of connections
            - name: DIAMETER_NODE_HOST
              value: "10.75.214.207"                # Diameter Server Host
            - name: DIAMETER_NODE_PORT
              value: "3868"                         # Diameter Server Port
            - name: DIAMETER_NODE_REALM
              value: "tekelec.com"                  # Diameter Server Realm
            - name: DIAMETER_NODE_IDENTITY
              value: "local.tekelec.com"            # Diameter Server Identity
            - name: APPLICATION_NAME
              value: "ocudr"
            - name: ENGINEERING_VERSION
              value: "1.8.0"
            - name: MARKETING_VERSION
              value: "1.8.0.0.0"
            - name: MICROSERVICE_NAME
              value: "ocudr-nudr-migration"
            - name: K8S_CLUSTER_NAME
              value: "ocudr"
            - name: K8S_NAMESPACE
              value: "ocudr"
            - name: K8S_NODE
              value: "5g-udr-dev-1-k8s-node-2"
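
The sample file reads the database name and credentials from a secret named ocudr-secrets, which is normally created during UDR deployment. If you need to recreate it, the secret must contain the same keys (dbname, dsusername, dspassword). The following sketch shows one way to do this; the literal values are placeholders for illustration, not defaults:

```shell
# Hypothetical example: recreate the ocudr-secrets secret referenced by the
# migration job. Replace the placeholder values with those used by your UDR
# deployment.
kubectl create secret generic ocudr-secrets \
  --namespace ocudr \
  --from-literal=dbname=<database name> \
  --from-literal=dsusername=<datasource username> \
  --from-literal=dspassword=<datasource password>
```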

To install the Migration tool:
  1. Modify the values of START_RANGE, END_RANGE, and KEY_TYPE in the template YAML file. These values define the subscriber range whose data you want to migrate from 4G UDR to 5G UDR.
  2. Set the value of the K8S_HOST_IP parameter to the external IP address of the node on which you want to run this tool, and update the node name in the affinity rules.
  3. Change UDR_SERVICE_BASEURL to the Ingress Gateway URL through which 5G UDR is reachable.
  4. Ensure that the value of DB_SERVICE_NAME is the same as the one used by UDR. These details are available in the UDR installation procedure.
  5. Set the DIAMETER_REALM and DIAMETER_IDENTITY parameters to the client configuration details configured in 4G UDR.
  6. Enter the number of connections that you want to establish from the Diameter client to 4G UDR in the DIAMETER_SETTING_NUM_OF_CONNECTIONS parameter.
  7. Set the DIAMETER_NODE_HOST, DIAMETER_NODE_PORT, DIAMETER_NODE_REALM, and DIAMETER_NODE_IDENTITY parameters to the 4G UDR server details.
  8. Execute the following command to create the job from the YAML file:

    kubectl create -f <template yaml> -n <namespace>

    where, template yaml is nudr_migration.yaml and namespace is the namespace used by 5G UDR.

    Example: kubectl create -f nudr_migration.yaml -n ocudr

  9. Execute the following command to check whether the pod is in the Running state without any errors:

    kubectl get pods -n <namespace>

    If the pod is up and running, the migration process has begun as a job and subscribers are being processed.
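
Because the job is configured with backoffLimit: 0 and restartPolicy: Never, it is not retried automatically on failure. The following commands, which assume the job name and namespace from the sample file above (ocudr-nudr-migration in ocudr), can help you follow the migration and re-run it for another subscriber range:

```shell
# Follow the migration tool's log output (job name from the sample yaml)
kubectl logs -f job/ocudr-nudr-migration -n ocudr

# Check whether the job completed successfully
kubectl get job ocudr-nudr-migration -n ocudr

# A completed job is not re-run automatically; delete it before applying
# the yaml again with a new START_RANGE/END_RANGE
kubectl delete job ocudr-nudr-migration -n ocudr
```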