B OCNWDAF Slice Load and Geographical Data for Simulation

To run the script that loads slices and geographical data for simulations, first verify that you can log in as described in the Docker Image section.

Prerequisite

The following example assumes that you have successfully logged into the initial script Docker image using bash.

Note:

The entry point of the image's bash shell is the directory where the script is located, which is different from the directory where the data files are placed. References to Image Root in the following example refer to the root directory of the image.

Before running the data load script, ensure that you have the following data files generated for the geographical area.

The following directories contain the data files:


Image Root (/)
- app-data/
    - csv_data/
    - json_data/

Cell and Slice information is taken from CSV data files. For cells, the file must contain value parameters corresponding to every entry in a 50 x 50 grid; that is, it holds information for 2500 cells, using the longitude (X) and latitude (Y) values to identify each one. It also contains other information, in the order shown in the following table.

Parameter Description
geoLatitude Geographical latitude position for the cell
geoLongitude Geographical longitude position for the cell
longitude Logical grid X position for the cell
trackingAreaName Tracking area code for the cell
region Name of the region for the cell
latitude Logical grid Y position for the cell
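As an illustration, a single row in this column order could be parsed as follows. The values and tracking area name below are hypothetical, not taken from a real data file:

```python
import csv
import io

# A hypothetical row following the documented column order:
# geoLatitude, geoLongitude, longitude (grid X), trackingAreaName,
# region, latitude (grid Y). The values are illustrative only.
sample_row = "30.2672,-97.7431,0,TA-001,Austin,0\n"

reader = csv.reader(io.StringIO(sample_row))
geo_lat, geo_lon, grid_x, tac, region, grid_y = next(reader)
print(region, grid_x, grid_y)  # prints: Austin 0 0
```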

JSON data files contain additional data for the same grid entries, typically obtained from a different source. In this case there is only one such piece of data: the cell ID for each grid entry.

The naming convention is to keep the name of each CSV cell data file simple, ideally a single word. This is because, for cells, each data file must be paired with a file of the same name in the JSON directory, with the suffix "_ids" appended. For example, for a file in /csv_data named austin.csv containing cells for Austin, Texas, there must be a file named austin_ids.json in the /json_data directory. Note that only Cells data requires a JSON data file with IDs; this is not the case for Slices data.

In the case of Slices, the data file structure is much simpler. It must contain data as shown in the following table:

Parameter Description
Sst Value for the sst parameter
Sd Value for the sd parameter
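For example, a slices file with two entries could be read like this. The sst and sd values below are made up for illustration:

```python
import csv
import io

# Hypothetical slices data: one Sst,Sd pair per row.
sample = "1,000001\n2,000002\n"

slices = list(csv.reader(io.StringIO(sample)))
for sst, sd in slices:
    print(f"sst={sst} sd={sd}")
```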

Note:

To run the data load script, you must obtain the network address of the ocn-nwdaf-configuration-service: either the IP address or the DNS-resolvable name (if available) of the service, along with its port. This address is used as an input parameter.

Software Requirements

This data load script requires Python 3.6.

Note:

There are no external dependencies.

Running the Data Load Script

To run the script, ensure the following:
  • The prerequisites are met and the required data files are placed in their corresponding directories.
  • The data file names are readily available.
The cells_loader.py script provides a way to feed an initial dataset with information for Cells and Slices. This information is required by the application and is customized for the geographical area being used.

When the script is invoked without arguments by running python3 cells_loader.py, the command displays help instructions about the required parameters.

Here's a breakdown of the parameters required to run the script properly:

Table B-1 Script Parameters

Parameters Description
-f / --filename [filename] Indicates the data file in the corresponding assets directory. There are two data assets directories at the same level as the script file, one for each data type used: csv_data for tabular data in CSV format, and json_data for data in JSON format. Data in CSV format is mostly bulk information with values for many properties of either the Cells or Slices data types, while data in JSON format contains extra data for the same entries obtained from other sources, mostly the generated IDs of each cell in the CSV files.
-i / --hostIp [host IP] The IP address of the server where the Configurator service is installed. It must be reachable from the machine that runs this script, and must include the port when appropriate.
-p / --protocol [protocol] The protocol used by the Configurator service. Valid values are http and https. Defaults to http.
-t / --type [load type] The type of data to be loaded. Valid values are "1" for Cells and "2" for Slices. Defaults to 1 (Cells).
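The parameters above could be modeled with argparse roughly as follows. This is an illustrative sketch only, not the actual source of cells_loader.py:

```python
import argparse

# Illustrative parser matching the documented parameters; the real
# cells_loader.py may define them differently.
parser = argparse.ArgumentParser(prog="cells_loader.py")
parser.add_argument("-f", "--filename", required=True,
                    help="data file name in the assets directory")
parser.add_argument("-i", "--hostIp", required=True,
                    help="Configurator service address (with port if needed)")
parser.add_argument("-p", "--protocol", choices=["http", "https"],
                    default="http", help="protocol used by the service")
parser.add_argument("-t", "--type", choices=["1", "2"], default="1",
                    help='load type: "1" for Cells, "2" for Slices')

args = parser.parse_args(["-f", "austin_cells", "-i", "10.75.245.101:30096"])
print(args.protocol, args.type)  # prints: http 1
```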

Example

To load data for an Austin TX region:

# Root
- nwdaf-pkg-22.1.0.0/
    - configuration/
        - csv_data/
            - austin_cells.csv
        - json_data/
            - austin_cells_ids.json

In this example, the Configurator service is reachable from the machine running the script at IP address 10.75.245.101 and port 30096. Therefore, to load Cells information, run the script as follows:

$ python3 cells_loader.py -f austin_cells -i 10.75.245.101:30096 -t 1

Similarly, to load Slices information, the data file must exist in the required directory. In this case, only the CSV data file is required.

# Root
- nwdaf-pkg-22.1.0.0/
    - configuration/
        - csv_data/
            - slices.csv

Run the script as in the previous example:

$ python3 cells_loader.py -f slices -i 10.75.245.101:30096 -t 2

Note:

Here, the protocol parameter is not specified because all requests to the Configurator service are made using the HTTP protocol by default. To use HTTPS instead, provide a value for the -p parameter.
$ python3 cells_loader.py -f slices -i 10.75.245.101:30096 -p https -t 2

When the script runs, it displays the processing information along with a progress bar indicating the percentage of progress on the running task.

$ python3 cells_loader.py -f austin_cells -i 10.75.245.101:30096
 
Started loading of 2500 cells
 
Processing 2500 cells for 3 slices. Executing 7500 requests.

If there is an error, the process fails with an error message.

$ python3 cells_loader.py -f slices -i 10.75.245.101:30096
 
Process failed!

The script writes logging information to the file service.log, which can be reviewed in case of an error. This file is located at the same level as the script file. It does not exist initially and is created when the script runs for the first time.

# Root
- nwdaf-pkg-22.1.0.0/
    - configuration/
        service.log

Docker Image

To simplify running the script, a Docker image is created with the csv_data and json_data directories embedded in the image.

In this case, running the script is similar to the procedure described above. A container, whose name must first be identified, runs inside the Kubernetes cluster.

$ kubectl get pods -n <namespace_name>
 
NAME                                                      READY   STATUS    RESTARTS   AGE
nwdaf-cap4c-initial-setup-script-deploy-64b8fbcd9-2vqf9   1/1     Running   0          55s

Run the following command to access the container:

$ kubectl exec -n <namespace_name> nwdaf-cap4c-initial-setup-script-deploy-64b8fbcd9-2vqf9 -it bash

Once the container is accessed, run the script as follows:

Note:

Instead of the IP address, you can use the DNS name of the service, as in the following command:
$ python3 cells_loader.py -f austin_cells -i http://ocn-nwdaf-configuration-service-internal.<namespace_name>:8096 -t 1