Oracle® Communications Session Monitor Installation Guide
Release 3.3.80

E57620-01

5 Installing Operations Monitor Probe

This chapter provides instructions for installing and configuring the Session Monitor Operations Monitor probe.

Operations Monitor Probe System Requirements

The following sections describe the hardware and software requirements for Operations Monitor probe.

Hardware Requirements

Supported Servers

The following Sun servers are supported:

  • Sun Server X4-2

  • Sun Server X4-2L

Additionally, the following are minimum requirements:

  • 2 processors, each with 8 cores

  • 64 GB main memory

Supported Networking Cards

The following networking cards are supported:

  • Sun Dual Port 10 GbE PCIe 2.0 Networking Card with Intel 82599 10 GbE Controller

  • Sun Quad Port GbE PCIe 2.0 Low Profile Adapter, UTP

  • Sun Dual Port GbE PCIe 2.0 Low Profile Adapter, MMF

Software Requirements

Operating System

The Operations Monitor probe requires the Oracle Linux 7 operating system. Ensure that you are running Oracle Linux 7 and that the packages are up to date. To update the packages, execute:

# yum update
  

Reboot the system if the packages have been updated since the last reboot.
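
As a quick check, you can compare the running kernel with the most recently installed one. This sketch assumes the UEK kernel package, kernel-uek; if you use the Red Hat compatible kernel, query the kernel package instead:

# uname -r
# rpm -q --last kernel-uek | head -1

If the two versions differ, reboot before continuing.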

Dependencies

Some of the needed libraries are not available in the Oracle Linux 7 repositories. However, the libraries are made available by the EPEL (Extra Packages for Enterprise Linux) Special Interest Group from the Fedora Project.

To add their repository to your system, execute:

# curl -f -O http://www.mirrorservice.org/sites/dl.fedoraproject.org/\
pub/epel/7/x86_64/e/epel-release-7-5.noarch.rpm
# rpm -ivh epel-release-7-5.noarch.rpm
  

In addition to this, it is necessary to install the package vrb.
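
With the EPEL repository in place, the package can be installed in the usual way:

# yum install vrb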

Required Kernel Modules

Operations Monitor probe needs direct access to the Intel network interfaces. You need to unload the normal network driver for selected ports and associate them with a different driver that allows direct access. There are two options to accomplish this.

The Linux kernel, as of version 3.6, provides a module named vfio-pci that meets the needs of DPDK. However, this solution has some limitations. The alternative is the igb_uio driver provided by Intel, which is more versatile than the native solution but requires extra steps to set up.

Using the igb_uio Kernel Module

Verify that the igb_uio loadable kernel module is installed on your system. The following command either displays information about the installed module or reports that the module is absent:

# modinfo igb_uio
  

If the module is not installed on your system, follow these steps to install the module:

  1. Download the Intel Data Plane Development Kit from http://dpdk.org.

    # curl -f -O http://dpdk.org/browse/dpdk/snapshot/dpdk-1.7.0.tar.gz
      
    

    Make sure to download version 1.7.0.

  2. Install the development tools on the machine.

    # yum group install "Development Tools"
      
    
  3. Install the kernel development files.

    # yum install kernel-uek-devel
      
    
  4. Navigate to the download location of the DPDK and unpack the files.

    # tar xzf dpdk-1.7.0.tar.gz
      
    
  5. Change to the folder where the DPDK files are extracted.

    # cd dpdk-1.7.0
      
    
  6. Configure and build the module.

    # make config T=x86_64-native-linuxapp-gcc && make
      
    
  7. Install the igb_uio loadable kernel module:

    # install build/kmod/igb_uio.ko /lib/modules/$(uname -r)/extra
    # depmod -a
      
    
  8. Load the kernel modules uio and igb_uio persistently (see "Persistent Loading of a Kernel Module").
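
As an additional check, the DPDK tree ships a helper script that displays and changes which driver each port is bound to. The path below assumes the 1.7.0 tree layout, and the PCI ID is an example; adjust both for your system:

# cd dpdk-1.7.0
# ./tools/dpdk_nic_bind.py --status
# ./tools/dpdk_nic_bind.py -b igb_uio 0000:88:00.0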

Using the vfio-pci Kernel Module

Verify that you are running the Red Hat compatible kernel and that it is the default kernel used at boot, using the following command:

# grub2-editenv list
  

The command should return:

saved_entry=Oracle Linux Server, with Linux 3.10...
  

If you are not running a Red Hat compatible kernel, do the following:

  1. Obtain the list of kernels currently configured on your system using the following command:

    # grep "^menuentry" /boot/grub2/grub.cfg | cut -d "'" -f2
      
    
  2. Select the line that starts with:

    Oracle Linux Server, with Linux 3.10
      
    
  3. Set the new default using the following command:

    # grub2-set-default "line picked in previous step"
      
    
  4. Add the kernel command line option (see "Adding a Kernel Command Line Option").

    intel_iommu=on
      
    
  5. Set the kernel module to be loaded when the system boots (see "Persistent Loading of a Kernel Module").

    vfio-pci
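
After rebooting, you can check that the IOMMU is active and the module is loaded, for example:

# ls /sys/kernel/iommu_groups
# lsmod | grep vfio_pci

A non-empty list of IOMMU groups indicates that the intel_iommu=on option took effect.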
    

System Configuration

The following sections describe the system configurations.

Setting Up Huge Pages

The Operations Monitor probe needs huge pages provided by the Linux kernel. Each port or each configured sniffer (see "Section sniffer/name") needs at least 1GB of huge pages. Furthermore, the Operations Monitor probe requires a huge page size of 1GB.

For example, to set up 8GB of huge pages each of 1GB size, add the following options to your kernel command line options (see "Adding a Kernel Command Line Option"):

default_hugepagesz=1G hugepagesz=1G hugepages=8
  

To configure huge pages with a different amount of memory:

  1. Replace the 8 in the kernel command line options with the desired number of huge pages:

    default_hugepagesz=1G hugepagesz=1G hugepages=8
      
    
  2. Create the following directory:

    # mkdir -p /mnt/huge
      
    
  3. Edit /etc/fstab and add the following line:

    hugetlbfs  /mnt/huge  hugetlbfs  defaults,pagesize=1G  0  0
      
    
  4. Reboot the system for the changes to apply.
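
After the reboot, you can verify the allocation, for example:

# grep Huge /proc/meminfo

The output should show the configured number of pages in HugePages_Total and a Hugepagesize of 1048576 kB.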

Making CPUs Exclusive

This step is optional but leads to better performance of the Operations Monitor probe.

To hide the CPUs used by Operations Monitor probe from the Linux scheduler, add the following Kernel command line option (see "Adding a Kernel Command Line Option"):

isolcpus=a,b,c,d...
  

where a,b,c,d,... are the selected CPU IDs provided by the /usr/share/pld/rat/system_layout.py utility.

Note:

Do not add CPU IDs 0 and 1.
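
After rebooting, you can confirm that the option took effect by inspecting the kernel command line:

# grep -o 'isolcpus=[^ ]*' /proc/cmdline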

Network Connectivity

Ensure that the Operations Monitor probe can establish a TCP connection to port 4741 or 4742 (depending on the configuration described later) of the Mediation Engine.
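
A quick reachability test can be done with bash's built-in /dev/tcp redirection, where me-host is a placeholder for the Mediation Engine's address:

# timeout 3 bash -c '</dev/tcp/me-host/4741' && echo "port reachable"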

Additionally, the daemons rat and rapid use some ports on localhost for internal communication; therefore, it is necessary to ensure that no other services use these same ports. The port numbers used by these daemons can be obtained from their configuration files.

Installing and Configuring Operations Monitor Probe

Download the Operations Monitor probe rpm package from http://pirate.de.oracle.com/storage/tmp/palladion-9.9.9-1.x86_64.rpm. The package and its dependencies can be installed using the following command:

# yum install palladion-9.9.9-1.x86_64.rpm

Adjusting Configurations in the RAT Configuration File for Your System

The default RAT configuration file is /etc/iptego/rat.conf.

You may need to adjust some of the settings to fit your system. The configuration file is divided into several sections, each containing options and possibly references to other sections of the file, so take care to write a valid configuration. A section is denoted by a name in brackets and contains one or more assignment statements.

After adjusting the configurations you can enable the daemon using the following commands:

# systemctl enable pld-rat
# systemctl start pld-rat
  

Section dpdk

The dpdk section is denoted by:

[dpdk]
  

Table 5-1 lists and describes the entries in the dpdk section.

Table 5-1 Entries in the dpdk Section

Entry Description

mem_channels = N

Sets the number, N, of memory channels of your system.

mem_layout = X, Y

Sets the memory allocated using huge pages per memory channel. This must be a comma-separated list with one entry per configured memory channel. Each entry must be a multiple of 1024, including 0.

rat_cpu_id = I

Sets the CPU ID, I, to which the main thread will be pinned. CPU IDs start at 0. Use the /usr/share/pld/rat/system_layout.py utility to get an overview of the available CPUs.

The selected CPU ID should not be 0 or 1.

driver = kernel_module_name

Sets the kernel module to use. This can be either vfio-pci or igb_uio. See "Required Kernel Modules" for further information.


Ensure that the specified amount of huge pages is available on your system. The Linux kernel distributes huge pages equally across memory channels. For example, the following configuration would be valid if you set up 8 huge pages of size 1024MB on a system with 2 memory channels:

[dpdk]
mem_channels = 2
mem_layout = 2048,2048
rat_cpu_id = 3
driver = vfio-pci
  

However, the following would be invalid: the kernel distributes the 8 huge pages equally between the two channels (4096MB each), which is less than the 6144MB requested for the second channel:

mem_layout = 2048,6144
  

Section sniffer/name

A sniffer section specifies a sniffer and assigns it a name. You can later use this name to refer to this sniffer.

The section is denoted by:

[sniffer/name]
  

For example, if port1 is the sniffer name, the section would be denoted by:

[sniffer/port1]
  

Table 5-2 lists and describes the entries in a sniffer section.

Table 5-2 Entries in the sniffer Section

Entry Description

type = dpdk

Specifies to use the Intel DPDK to access the networking cards. This entry must be set to dpdk.

port_masks = X+Y

Specifies the ports that are used by the sniffer. It can be either a single PCI ID or multiple PCI IDs combined with a + sign. A valid PCI ID consists of 5 (lowercase) hexadecimal digits with the following layout:

AA:BB.C
  

Use the /usr/share/pld/rat/system_layout.py utility for an overview of the available cards and their PCI IDs. For example, to listen only on the port with the PCI ID 88:00.0, set the entry as follows:

port_masks = 88:00.0
  

To listen on ports 88:00.0, 88:00.1 and a0:00.2 using only one sniffer, set the entry as follows:

port_masks = 88:00.0+88:00.1+a0:00.2
  

Note: Do not put white space between a port and the + sign, and always use lowercase characters for hexadecimal digits.

disable_rtp = 0

Specifies whether media traffic should be analyzed. Setting this to 1 disables media traffic analysis.

all_traffic_signaling = 0

Setting this to 1 passes all traffic to the signaling analyzer, regardless of whether it is categorized as media traffic.

Note: Enabling this entry may result in a notable decrease of performance.

rtp_filter = pcap filter expression

Specifies a filtering rule used to categorize packets as media traffic. Only packets matching the filter are passed to the media analyzer, except when all_traffic_signaling is enabled.

buf_size = M

Sets the buffer size for the sniffer. This number should be a power of 2 minus 1. Ensure that the combined size of all buffers does not conflict with the configured memory layout. The amount of huge-page memory a sniffer requires depends on this buffer size: a sniffer with buffer size M requires exactly 2240 x M / 2^20.

workers = N

Sets the number of media traffic worker threads to create for this sniffer.

worker_cpus = X Y Z

Specifies the CPU IDs X, Y, Z to use for the media traffic threads. Provide a list whose length matches the configured number of worker threads.

filter_cpus = X

Sets the CPU ID, X, for the signaling analyzer thread.

cpu_affinity = X

Sets the CPU ID, X, for the main thread of this sniffer.

Note: Ensure that you select CPU IDs that belong to the same NUMA node as the configured port. For best performance, assign each CPU ID only once. Hyperthread cores can be used, but keep in mind that in that case you are using hyperthreads and not real cores. You must not configure ports on different NUMA nodes in a single sniffer.
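
Putting these entries together, a sniffer section might look as follows. All values are illustrative only; derive the PCI ID and CPU IDs from the system_layout.py output for your machine:

[sniffer/port1]
type = dpdk
port_masks = 88:00.0
disable_rtp = 0
all_traffic_signaling = 0
buf_size = 1048575
workers = 2
worker_cpus = 4 5
filter_cpus = 6
cpu_affinity = 7

With buf_size = 1048575 (2^20 - 1), this sniffer requires roughly 2240 x 1048575 / 2^20, that is, about 2240, which must fit within the configured mem_layout.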


Section signaling/name

There are multiple signaling sections, one for each supported protocol, plus some additional sections. The following is a list of the valid signaling sections:

[signaling/sip]
[signaling/rudp]
[signaling/diameter]
[signaling/megaco]
[signaling/mgcp]
[signaling/enum]
[signaling/pinted]
  

Table 5-3 lists the entries in a signaling section.

Table 5-3 Entries in the signaling Sections

Entry Description

filter = pcap_filter_expression

Specifies a filtering rule that a packet has to fulfill to be categorized into the protocol type of this signaling section.

deduplication_timelimit = X

Specifies the maximum time delta within which a duplicate packet can be recognized.

Note: Setting this to a value larger than 0 may decrease the performance.
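
For example, a minimal SIP section might look like this; the filter expression is a placeholder to be adapted to your deployment:

[signaling/sip]
filter = udp port 5060 or tcp port 5060
deduplication_timelimit = 0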


Section base

In the base section, you specify which sniffers you want to activate and which signaling types you want to analyze.

[base]
sniffer = <name1> <name2> ...
signaling = sip ...
  

For example, if you configured a sniffer section for a sniffer named port1 and you want to activate that sniffer, the sniffer list must contain port1 as follows:

[base]
sniffer = ... port1 ...
  

Valid elements of the signaling list are:

sip rudp diameter megaco mgcp enum pinted
  

These elements are valid only if the corresponding signaling/name section has been configured correctly.

RAPID Configuration Files

The communication between this probe and the Mediation Engines is handled by the pld-rapid service. After the service is configured, you must enable it using:

# systemctl enable pld-rapid
# systemctl start pld-rapid

Basic Configuration

Rapid's configuration file is /etc/iptego/rapid.conf. It may not be necessary to edit this file; however, you need to configure the list of Mediation Engines in the file /etc/iptego/psa/probe_me.conf, which is included by /etc/iptego/rapid.conf.

The following example shows the configuration for a list of Mediation Engines:

[MEList]
names = me1
  
[MEList/me1]
ip = aaa.bbb.ccc.ddd
name = ME
tls = no
port = 4741
  

where aaa.bbb.ccc.ddd is the IP address of the Mediation Engine. The value of the name field is arbitrary.

In the above example configuration, the probe connects to the Mediation Engine using an unencrypted connection. Unencrypted connections must be enabled on the Mediation Engine. For an encrypted connection, set the tls field to yes and the port to 4742; additional configuration is also necessary (see "Configuring Encrypted Communication").
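
For example, the encrypted variant of the section above would be:

[MEList/me1]
ip = aaa.bbb.ccc.ddd
name = ME
tls = yes
port = 4742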

To configure connections to additional Mediation Engines (for example, me2 and me3), add the Mediation Engines to the names field in the MEList section and add the corresponding MEList/me2 and MEList/me3 sections.

[MEList]
names = me1 me2 me3
  

For proper operation, a valid /etc/iptego/psa/probe_uuid.conf file is also necessary. This file is created during package installation. If it is missing, the write_rapid_uuid.sh script can be used to create it.

Configuring Encrypted Communication

If encrypted (TLS) communication with one or several Mediation Engines is enabled, then you must set up appropriate certificates.

For encrypted connections, the Probe must authenticate the Mediation Engine and vice versa. Therefore, both the Probe and the Mediation Engine need a signed (possibly self-signed) certificate with the corresponding secret key, as well as the certificate of the Certification Authority (CA) that signed the peer's certificate. A machine that uses a certificate signed by a CA also needs that CA's certificate to build its own certificate chain.

All of the needed certificates are stored in an Oracle Wallet. The wallet must reside on disk; its standard location (configured in rapid.conf) is /etc/iptego/wallet. Several tools are available from Oracle for creating and manipulating wallets. Since a wallet is a directory that contains only the file ewallet.p12 in PKCS #12 format, it is also possible to create and maintain the wallet using third-party tools.
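
As a sketch of the third-party route, a wallet could be assembled with openssl; the certificate and key file names are placeholders for files you have prepared beforehand:

# mkdir -p /etc/iptego/wallet
# openssl pkcs12 -export -in probe_cert.pem -inkey probe_key.pem \
-certfile ca_cert.pem -out /etc/iptego/wallet/ewallet.p12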

If a password is necessary to open the wallet, that password must be stored in a separate file whose standard location (configured in rapid.conf) is /etc/iptego/apid.key (note that the file name is indeed apid.key). This is a text file containing only the password.
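
For example, where wallet-password is a placeholder; restrict the file's permissions so that only the service can read it:

# echo 'wallet-password' > /etc/iptego/apid.key
# chmod 600 /etc/iptego/apid.key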

Setting the Configurations for Packet Inspector

If you want to enable the packet inspector, use the following systemctl commands:

# systemctl enable pld-pinted
# systemctl start pld-pinted

Note:

Running the packet inspector has a massive impact on performance.

The packet inspector configuration file is:

/etc/iptego/pinted.conf
  

It contains the section storage denoted by:

[storage]
  

Table 5-4 lists the entries in the storage section.

Table 5-4 Entries in the storage Section

Entry Description

limit_mb = 2048

Specifies the amount of space, in MB, used to save packets. Change this setting to adjust the storage limit.

storage_path = path

Specifies the location where the packet inspector saves the traffic.
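
A complete storage section might look like this; the path is a placeholder:

[storage]
limit_mb = 2048
storage_path = /var/pld/pinted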


Common System Settings

The following sections describe common system settings.

Adding a Kernel Command Line Option

To add a kernel command line option, follow these steps:

  1. Open the file /etc/default/grub in an editor (for example, vi).

  2. Locate the line that begins with:

    GRUB_CMDLINE_LINUX
      
    

    If the line does not exist, append it to the file.

  3. Append the command line option to the end of the line inside the double quotes. For example:

    GRUB_CMDLINE_LINUX="... ... option_a"
      
    

    where option_a is the command line option you want to add.

  4. Save the file and close your editor.

  5. Generate the new grub configuration file using the following command:

    # grub2-mkconfig -o /boot/grub2/grub.cfg
      
    

The new kernel command line option will be used the next time the system boots.
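
After the next boot, the active options can be verified with:

# cat /proc/cmdline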

Persistent Loading of a Kernel Module

To load a loadable kernel module on boot, follow these steps:

  1. Create a startup script with the following name:

    /etc/sysconfig/modules/module_name.modules
      
    

    where module_name is the name of your module.

  2. Add the following content to the script:

    #!/bin/sh
    exec /sbin/modprobe module_name
      
    
  3. Make the script executable.

    # chmod +x /etc/sysconfig/modules/module_name.modules
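
After the next boot, you can verify that the module was loaded, for example:

# lsmod | grep module_name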