This chapter provides instructions for installing and configuring the Oracle Communications Operations Monitor probe on Oracle Linux.
The following sections describe the hardware and software requirements for installing the Operations Monitor probe on Oracle Linux.
The following Oracle servers are supported:
Oracle Server X5-2
Oracle Server X5-2L
Additionally, the following are minimum requirements:
2 Intel processors, each with 8 cores
Intel based network card
The following networking cards are supported:
Sun Dual Port 10 GbE PCIe 2.0 Networking Card with Intel 82599 10 GbE Controller
Sun Quad Port GbE PCIe 2.0 Low Profile Adapter, UTP
Sun Dual Port GbE PCIe 2.0 Low Profile Adapter, MMF
Table 5-1 lists the supported versions of Data Plane Development Kit (DPDK).
Table 5-1 DPDK Software Requirements
DPDK Version | Operations Monitor Release |
---|---|
2.0.0 | Supported from 3.3.90.2.0. |
1.7.0 | Supported from 3.3.70.0.0 to 3.3.90.1.0. |
The Operations Monitor probe can run on the Oracle Linux 7 operating system. Ensure that you are running Oracle Linux 7 and that the packages are up to date.
Note:
If you prefer to use the installation image to install the Operations Monitor probe, see "Installing Session Monitor" and "Configuring Session Monitor".

To update the packages, execute:
yum update
Reboot the system if the packages have been updated since the last reboot.
Some of the needed libraries are not available in the Oracle Linux 7 repositories. However, the libraries are made available by the EPEL (Extra Packages for Enterprise Linux) Special Interest Group from the Fedora Project.
To add their repository to your system, execute:
curl -f -O http://www.mirrorservice.org/sites/dl.fedoraproject.org/pub/epel/7/x86_64/e/epel-release-7-5.noarch.rpm
rpm -ivh epel-release-7-5.noarch.rpm
In addition, you must install the vrb package.
The Operations Monitor probe needs direct access to the Intel network interfaces. You must unload the standard network driver for the selected ports and bind them to a driver that allows direct access. There are two ways to accomplish this.
The Linux kernel as of version 3.6 provides a module named vfio-pci which fits the needs of DPDK. However, this solution has some limitations. The alternative solution is the igb_uio driver provided by Intel. It is more versatile than the native solution but requires extra steps to set up.
Using the igb_uio Kernel Module
Verify that the igb_uio loadable kernel module is installed on your system. The following command either displays information about the installed module or informs about the absence of the module:
modinfo igb_uio
If the module is not installed on your system, follow these steps to install the module:
Download the Intel Data Plane Development Kit (DPDK) from http://dpdk.org:
curl -f -O http://dpdk.org/browse/dpdk/snapshot/dpdk-version_number.tar.gz
Where version_number is the DPDK software version required for your installed release of Operations Monitor. For more information, see "Software Requirements".
Install the development tools on the machine.
yum group install "Development Tools"
Install the kernel development files.
yum install kernel-uek-devel
Navigate to the download location of the DPDK and unpack the files.
tar xzf dpdk-version_number.tar.gz
Change to the folder where the DPDK files are extracted.
cd dpdk-version_number
Configure and build the module.
make config T=x86_64-native-linuxapp-gcc && make
Install the igb_uio loadable kernel module:
install build/kmod/igb_uio.ko /lib/modules/$(uname -r)/extra
depmod -a
Load the kernel modules uio and igb_uio so that they are loaded persistently (see "Persistent Loading of a Kernel Module").
Using the vfio-pci Kernel Module
Verify that you are running the Red Hat compatible kernel and that it is used as the default kernel when booting using the following command:
grub2-editenv list
The command should return:
saved_entry=Oracle Linux Server, with Linux 3.10...
If you are not running a Red Hat compatible kernel, do the following:
Obtain the list of kernels currently configured on your system using the following command:
grep "^menuentry" /boot/grub2/grub.cfg | cut -d "'" -f2
Select the line that starts with:
Oracle Linux Server, with Linux 3.10
Set the new default using the following command:
grub2-set-default "line picked in previous step"
Add the kernel command line option (see "Adding a Kernel Command Line Option").
intel_iommu=on
Set the kernel module to be loaded when the system boots (see "Persistent Loading of a Kernel Module").
vfio-pci
This section describes the procedure for updating software for DPDK based probes.
Ensure that you are running Oracle Linux 7 and that the packages are up to date.
To update the packages, execute:
yum update
Note:
Before updating the probes, see the Oracle Communications Session Monitor Release Notes to verify whether the update package requires new packages to be installed.

To update the probes:
Run the following command to check for dependencies:
# rpm -ivh ocsmxxxxxxxxxxx
If dependencies exist, you will see an error message. The message provides the following details:
[xxx-yyy] is needed by ocsmxxxxx
Note:
Download the .rpm file each time you update your system.

See the Oracle Communications Session Monitor Release Notes to verify whether the update requires a newer DPDK version.
If you require a newer DPDK version, see the topic "Using the igb_uio Kernel Module". Complete the procedure before moving to step 4.
If you do not require a newer DPDK version, continue to step 4.
Stop the daemons to prevent the interruption of current data processing.
To stop the daemons, enter the following command:
systemctl stop pld-rat
systemctl stop pld-rapid
To upgrade the DPDK-based probes package, enter:
rpm -Uvh xxxx.rpm
To restart the upgraded daemons, execute:
systemctl start pld-rapid
systemctl start pld-rat
The following sections describe the system configurations.
The Operations Monitor probe needs huge pages provided by the Linux kernel. Each port or each configured sniffer (see "Section sniffer/name") needs at least 1GB of huge pages. Furthermore, the Operations Monitor probe requires a huge page size of 1GB.
For example, to set up 8GB of huge pages each of 1GB size, add the following options to your kernel command line options (see "Adding a Kernel Command Line Option"):
default_hugepagesz=1G hugepagesz=1G hugepages=8
To configure a different amount of memory:
Replace the 8 with the desired number of huge pages.
default_hugepagesz=1G hugepagesz=1G hugepages=8
Create the following directory:
mkdir -p /mnt/huge
Edit /etc/fstab and add the following line:
hugetlbfs /mnt/huge hugetlbfs defaults,pagesize=1G 0 0
Reboot the system for the changes to apply.
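As a quick sanity check, the kernel option string used above can be generated with a small shell helper. The function name is hypothetical and for illustration only; it simply reproduces the option format shown in this section.

```shell
# Hypothetical helper: build the kernel command-line fragment for N huge
# pages of 1GB each, matching the format shown above.
hugepage_opts() {
  printf 'default_hugepagesz=1G hugepagesz=1G hugepages=%d' "$1"
}

# For 8GB of huge pages:
hugepage_opts 8   # -> default_hugepagesz=1G hugepagesz=1G hugepages=8
```

After rebooting, `grep Huge /proc/meminfo` should report the reserved pages under HugePages_Total.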
This step is optional but leads to a better performance of the Operations Monitor probe.
To hide the CPUs used by Operations Monitor probe from the Linux scheduler, add the following Kernel command line option (see "Adding a Kernel Command Line Option"):
isolcpus=a,b,c,d...
where a,b,c,d... are selected CPU IDs provided by the /usr/share/pld/rat/system_layout.py utility.
Note:
Do not add CPU IDs 0 and 1.

Ensure that the Operations Monitor Probe can establish a TCP connection to port 4741 or 4742 (depending on the configuration described later) of the Mediation Engine.
Additionally, the daemons rat and rapid use some ports on localhost for internal communication; therefore, it is necessary to ensure that no other services use these same ports. The port numbers used by these daemons can be obtained from their configuration files.
Download the Operations Monitor probe rpm package ocsm-3.3.90.0.0.x86_64.rpm. The package and its dependencies can be installed using the following command:
yum install ocsm-3.3.90.0.0.x86_64.rpm
The default RAT configuration file is in the directory /etc/iptego/rat.conf.
You may need to adjust some of the settings to fit your system configuration. The configuration file is divided into several sections, each containing options and possibly references to other sections of the file; take care to write a valid configuration. A section is denoted by brackets and contains one or more assignment statements.
After adjusting the configurations you can enable the daemon using the following commands:
systemctl enable pld-rat
systemctl start pld-rat
The dpdk section is denoted by:
[dpdk]
Table 5-2 lists and describes the entries in the dpdk section.
Table 5-2 Entries in the dpdk Section
Entry | Description |
---|---|
mem_channels = N | Sets the number, N, of memory channels per processor socket. |
mem_layout = X, Y | Sets the memory allocated using huge pages per memory channel, measured in megabytes. This must be a comma-separated list with one entry per specified memory channel. Each entry must be a multiple of 1024, including 0. |
rat_cpu_id = I | Sets the CPU ID, I, to which the main thread is pinned. CPU IDs start at 0. Use the /usr/share/pld/rat/system_layout.py utility to get an overview of the available CPUs. The selected CPU ID should not be 0 or 1. |
driver = kernel_module_name | Sets the kernel module to use. This can be either vfio-pci or igb_uio. See "Required Kernel Modules" for further information. |
Ensure that the specified amount of huge pages is available on your system. The Linux kernel distributes huge pages equally across memory channels. For example, the following configuration would be valid if you set up 8 huge pages of size 1024MB on a system with 2 memory channels:
[dpdk]
mem_channels = 2
mem_layout = 2048,2048
rat_cpu_id = 3
driver = vfio-pci
However, the following would be invalid, because the kernel distributes the 8 huge pages equally between the two channels (4096MB each), and 6144MB exceeds the amount available on one channel:

mem_layout = 2048,6144
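This constraint can be sketched as a small shell function (hypothetical, for illustration only): each mem_layout entry must be a multiple of 1024 and must not exceed the huge-page memory the kernel assigns to each channel.

```shell
# Hypothetical check: given the number of 1GB huge pages, the number of
# memory channels, and the mem_layout entries (in MB), report whether the
# layout fits the kernel's equal per-channel distribution of huge pages.
mem_layout_valid() {
  pages=$1; channels=$2; shift 2
  per_channel=$(( pages * 1024 / channels ))
  for v in "$@"; do
    [ $(( v % 1024 )) -eq 0 ] || { echo no; return; }   # must be a multiple of 1024
    [ "$v" -le "$per_channel" ] || { echo no; return; } # must fit within its channel
  done
  echo yes
}

mem_layout_valid 8 2 2048 2048   # -> yes
mem_layout_valid 8 2 2048 6144   # -> no (6144 exceeds the 4096MB per channel)
```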
The sniffer section specifies a sniffer and a name for this sniffer. You can later use this name to refer to this sniffer.
The section is denoted by:
[sniffer/name]
For example, if port1 is the sniffer name, the section would be denoted by:
[sniffer/port1]
Table 5-3 lists and describes the entries in a sniffer section.
Table 5-3 Entries in the sniffer Section
Entry | Description |
---|---|
type = dpdk | Specifies that the Intel DPDK is used to access the networking cards. This entry must be set to dpdk. |
port_masks = X+Y | Specifies the ports that are used by the sniffer. The value is either a single PCI ID or multiple PCI IDs combined with a + sign. A valid PCI ID consists of five lower-case hexadecimal digits with the layout AA:BB.C. Use the /usr/share/pld/rat/system_layout.py utility for an overview of the available cards and their PCI IDs. For example, to listen only on the port with the PCI ID 88:00.0, set port_masks = 88:00.0. To listen on ports 88:00.0, 88:00.1, and a0:00.2 using only one sniffer, set port_masks = 88:00.0+88:00.1+a0:00.2. Note: Do not put white space between a port and the + sign, and always use lower-case characters for hexadecimal numbers. |
disable_rtp = 0 | Specifies whether media traffic should be analyzed. Setting this to 1 disables media traffic analysis. |
all_traffic_signaling = 0 | Setting this to 1 passes all traffic to the signaling analyzer, regardless of whether it is categorized as media traffic. Note: Enabling this entry may result in a notable decrease in performance. |
rtp_filter = pcap filter expression | Specifies a filtering rule to categorize packets as media traffic. Only packets matching the filter are passed to the media analyzer, except when all_traffic_signaling is enabled. |
buf_size = M | Sets the buffer size for the sniffer, measured in packets. Ensure that the combined size of all buffers does not conflict with the configured memory layout. The amount of huge-page memory a single sniffer requires with a buffer size of M is about 2304 x M / 2^20 MB (internal padding and alignment on memory allocation, which depend on the machine configuration, may increase this amount). |
buf_size_mb = X | Sets the buffer size for the sniffer, expressed in megabytes. The program estimates the corresponding buffer size in packets. Usually X is the memory layout for this NUMA node divided by the number of ports multiplied by the number of streams. This parameter overwrites buf_size. |
workers = N | Sets the number of media traffic worker threads to create for this sniffer. |
worker_cpus = X Y Z | Specifies the CPU IDs X, Y, Z to use for the media traffic threads. Assign a list whose length matches the configured number of workers and streams. |
filter_cpus = X Y | Sets the CPU IDs X, Y, ... for the signaling analyzer threads. Assign a list whose length matches the configured number of streams. |
filter_timer_ms = MS | Specifies the internal buffering time, in milliseconds, for signaling traffic. Larger values require more memory. Values between 1 and 20 usually perform well. |
sniffer_cpus = X Y | Sets the CPU IDs X, Y, ... for the main threads of this sniffer. Assign a list whose length matches the configured number of streams. Note: Ensure that you select CPU IDs that belong to the same NUMA node as the configured port. Assign each CPU ID only once for best performance. Hyperthread cores can be used, but keep in mind that they are not real cores. You must not configure ports on different NUMA nodes in a single sniffer. |
n_streams = N | Specifies the number of sniffing streams running in parallel. Sniffer, filter, and worker CPUs are needed for each stream. buf_size or buf_size_mb must be considered as well. |
There are multiple signaling sections, one for each supported protocol plus some additional. Following is a list of the valid signaling sections:
[signaling/sip]
[signaling/rudp]
[signaling/diameter]
[signaling/megaco]
[signaling/mgcp]
[signaling/enum]
[signaling/pinted]
Table 5-4 lists the entries in a signaling section.
Table 5-4 Entries in the signaling Sections
Entry | Description |
---|---|
filter = pcap_filter_expression | Specifies a filtering rule that a packet has to fulfill to be categorized into the protocol type of this signaling section. |
deduplication_timelimit = X | Specifies the maximum time delta within which a duplicate packet can be recognized. Note: Setting this to a value larger than 0 may decrease performance. |
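As an illustration, a SIP signaling section using a pcap filter for the standard SIP port might look like the following. The filter expression is an example only; adapt it to the traffic on your network.

```
[signaling/sip]
filter = udp port 5060 or tcp port 5060
deduplication_timelimit = 0
```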
In the base section, you specify which sniffers you want to activate and which signaling types you want to analyze.
[base]
sniffer = <name1> <name2> ...
signaling = sip ...
For example, if you configured a sniffer section for a sniffer named port1 and you want to activate the sniffer, then the list of sniffer would contain port1 as follows:
[base]
sniffer = ... port1 ...
Valid elements of the signaling list are:
sip rudp diameter megaco mgcp enum pinted
They are valid only if the corresponding [signaling/name] section has been configured correctly.
The communication between this probe and the Mediation Engines is handled by the pld-rapid service. After the service is configured, you must enable it using:
systemctl enable pld-rapid
systemctl start pld-rapid
Rapid's configuration file is /etc/iptego/rapid.conf. It may not be necessary to edit this file; however, you need to configure the list of Mediation Engines in the file /etc/iptego/psa/probe_me.conf, which is included in /etc/iptego/rapid.conf file.
The following example shows the configuration for a list of Mediation Engines:
[MEList]
names = me1
[MEList/me1]
ip = aaa.bbb.ccc.ddd
name = ME
tls = no
port = 4741
where aaa.bbb.ccc.ddd is the IP address of the Mediation Engine. The value of the name field is arbitrary.
In the above example configuration, the Probe connects using an unencrypted connection to the Mediation Engine. Unencrypted connections must be enabled on the Mediation Engine. For an encrypted connection, the tls field must be set to yes and the port must be set to 4742. For encrypted connections, additional configuration is necessary (see "Configuring Encrypted Communication").
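For comparison, the same section configured for an encrypted connection might look like the following sketch (the IP address is a placeholder):

```
[MEList/me1]
ip = aaa.bbb.ccc.ddd
name = ME
tls = yes
port = 4742
```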
To configure connections to additional Mediation Engines (for example, me2 and me3), add corresponding MEList/me2 and MEList/me3 sections and reference them in the names field of the MEList section, as in the following example:
[MEList]
names = me1 me2 me3
For proper operation, a valid /etc/iptego/psa/probe_uuid.conf file is also necessary. This file is created during package installation. If the file is missing, the write_rapid_uuid.sh script can be used to create it.
If encrypted (TLS) communication with one or several Mediation Engines is enabled, then you must set up appropriate certificates.
For encrypted connections, the Probe must authenticate the Mediation Engine and vice versa. Therefore, both the Probe and the Mediation Engine need a signed (possibly self-signed) certificate and the corresponding secret key, as well as the certificate of the Certification Authority (CA) that signed the peer's certificate. A machine that uses a certificate signed by a CA needs the CA's certificate to build its own certificate chain.
All of the needed certificates are stored in an Oracle Wallet. The wallet must reside in a disk file whose standard location (configured in rapid.conf) is /etc/iptego/wallet. Several tools are available from Oracle that allow the creation and manipulation of wallets. Since a wallet is a directory that contains only the file ewallet.p12 in PKCS #12 format, it is also possible to create and maintain the wallet using third-party tools.
If a password is necessary to open the wallet, then that password must be stored in a separate file whose standard location (configured in rapid.conf) is /etc/iptego/apid.key. This is a text file containing only the password.
SELinux is enabled by default for Oracle's Unbreakable Linux Kernel (UEK). When capturing network traffic with PCAP, processing issues can arise if Security Enhanced Linux (SELinux) is enabled. If you are using SELinux, before capturing packets from Oracle Communications Mediation Engine Connector, change the SELinux context file type for tcpdump in the Oracle Enterprise Linux (OEL) probe.
To change the security context for Network traffic capture, enter the following command:
chcon -t bin_t /usr/sbin/tcpdump
Storage values must be set before starting Packet Inspector.
To set storage values for Packet Inspector:
Create a new directory called pinted, in which to save your stored packets.
mkdir -p /home/pinted
where /home is the path to the directory in which Packet Inspector saves data packets.
Open the /etc/iptego/pinted.conf Packet Inspector configuration file.
Search for the following section:
[storage]
Enter the storage values you require for the following entries:
For limit_mb, enter the amount of space, in megabytes, that you require for saved packets.
Note:
If you use Packet Inspector for capturing and storing media, ensure that there is sufficient disk space on the Probe machine to store the media.

For storage_path, enter the path and directory name that Packet Inspector uses to save your stored data packets. The default path is /home/pinted.
Save the file.
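For example, the [storage] section might look like the following sketch (the values are illustrative; choose them to match your available disk space):

```
[storage]
limit_mb = 20000
storage_path = /home/pinted
```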
By default, Packet Inspector is disabled. You can enable this feature by running the following command:
systemctl enable pld-pinted
Once enabled, you can start Packet Inspector with the following command:

systemctl start pld-pinted
Note:
Running Packet Inspector can degrade system performance.

The following sections describe common system settings.
To add a kernel command line option, follow these steps:
Open the /etc/default/grub file in an editor (for example, vi).
Locate the line that begins with:
GRUB_CMDLINE_LINUX
If the line does not exist, add it to the file.
Append the command line option to the end of the line inside double quotes. For example:
GRUB_CMDLINE_LINUX="... ... option_a"
where option_a is the command line option you want to add.
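For example, combining the options used elsewhere in this chapter (huge pages, CPU isolation, and IOMMU support for vfio-pci), the line might look like the following. The pre-existing options and the CPU IDs shown are illustrative; keep whatever options your system already has and use the CPU IDs reported by /usr/share/pld/rat/system_layout.py.

```
GRUB_CMDLINE_LINUX="crashkernel=auto rhgb quiet default_hugepagesz=1G hugepagesz=1G hugepages=8 isolcpus=2,3,4,5 intel_iommu=on"
```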
Save the file and close your editor.
Generate the new grub configuration file using one of the following commands:
For BIOS based systems:
grub2-mkconfig -o /boot/grub2/grub.cfg
For UEFI-based systems:
grub2-mkconfig -o /boot/efi/EFI/redhat/grub.cfg
The new kernel command line option will be used the next time you restart the system.
To load a loadable kernel module on boot, follow these steps:
Create a startup script with the following name:
/etc/sysconfig/modules/module_name.modules
where module_name is the name of your module.
Add the following content to the script:

#!/bin/sh
/sbin/modprobe module_name
Make the script executable with the chmod command.
chmod +x /etc/sysconfig/modules/module_name.modules
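For example, to load the uio and igb_uio modules at boot (as required in "Using the igb_uio Kernel Module"), the file /etc/sysconfig/modules/igb_uio.modules could contain:

```
#!/bin/sh
/sbin/modprobe uio
/sbin/modprobe igb_uio
```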
This section contains optional procedures that you can perform to gather additional data about the Operations Monitor Probe on Oracle Linux. You can use this data to troubleshoot performance issues.
You can enable periodic logging of the Operations Monitor Probe on Oracle Linux. You can use these logs to review historical system resources and the use of resources to run processes.
Setting up periodic logging involves verifying that atop is installed, installing it if necessary, and enabling logging.
To verify whether atop is installed, enter the following command:
rpm -q atop
If atop is not installed, see "Installing atop".
If atop is installed, see "Enabling Periodic Logging".
To install atop:
Enter the following command:
yum install atop
Verify that the installation has been successful. Enter the following command:
rpm -q atop
If the installation has been successful, you will see text verifying the installed version.
You can enable periodic logging at any desired interval.
Note:
When enabling logging, consider the disk space requirements for your long-term needs.

To enable periodic logging:
In the /etc/default directory, create a new file called atop.
Add the following lines to the file:
INTERVAL=600
LOGPATH="/var/log/atop"
OUTFILE="$LOGPATH/daily.log"
Set the INTERVAL value to the desired logging frequency, in seconds.
Restart the atop process. At the command line, enter:
systemctl restart atop.service
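The disk-space note above depends on the sampling rate: with the settings shown, atop writes one sample every INTERVAL seconds. A quick estimate of how many samples accumulate per day:

```shell
# With INTERVAL=600 (10 minutes), atop records this many samples per day:
INTERVAL=600
SAMPLES_PER_DAY=$(( 86400 / INTERVAL ))
echo "$SAMPLES_PER_DAY"   # -> 144
```

Multiply this by the size of a single atop sample on your system to estimate daily log growth under /var/log/atop.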