
Ingest Oracle Cloud Infrastructure Logs into Third-Party SIEM Platforms using Log Shippers

Introduction

Oracle Cloud Infrastructure (OCI) is an Infrastructure-as-a-Service (IaaS), Platform-as-a-Service (PaaS), and Software-as-a-Service (SaaS) platform trusted by large-scale enterprises. It offers a comprehensive array of managed services, including hosting, storage, networking, databases, and more.

Proactively presenting security-related event logs for triage to the appropriate resources is crucial for detecting and preventing cybersecurity incidents. Many organizations rely on Security Information and Event Management (SIEM) platforms to correlate and analyze logs and alerts from relevant assets. Proper configuration of log capture, retention for the appropriate duration, and near-real-time monitoring and alerting enables security operations teams to identify issues, focus on critical information based on system tuning, and take timely action.

A best practice for ingesting OCI logs involves sending them to OCI Streaming, which is Apache Kafka-compatible, allowing third-party SIEM platforms to consume the logs as Kafka consumers. This approach reduces delays, provides resilience, and ensures retention in case of temporary issues with data consumption on the SIEM side.
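Because the stream's endpoint speaks the standard Kafka protocol, any Kafka client can consume it directly. As a quick illustration, the following is a minimal sketch using the open source kcat CLI; the broker endpoint, stream name, and credentials are placeholders you would replace with your own values (the SASL username format and auth token are covered later in Task 4):

    # consume (-C) the OCI stream as a Kafka topic over SASL_SSL
    kcat -C \
      -b cell-1.streaming.us-ashburn-1.oci.oraclecloud.com:9092 \
      -t SIEM_Stream \
      -X security.protocol=SASL_SSL \
      -X sasl.mechanisms=PLAIN \
      -X sasl.username="<tenancy_name>/<username>/<stream_pool_OCID>" \
      -X sasl.password="<auth_token>"

If messages print to the terminal, the stream is reachable with those credentials; a log shipper is essentially a managed, restartable version of this consumption loop with delivery to the SIEM attached.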

However, some third-party SIEM platforms lack default connectors for consuming logs directly from OCI streams and do not natively support data consumption from Kafka topics, the widely used open-source event streaming platform, complicating the integration process. In such cases, log shippers serve as a solution to bridge this gap.

A log shipper functions as a standalone tool that collects logs from various sources and then forwards them to one or more specified destinations. To ensure seamless communication with both OCI Streaming and third-party SIEM platforms, the log shipper software should run on a machine with internet access. In this tutorial, we’ll deploy the log shipper software on a Compute Instance within OCI.

The log shipper will:

    • Consume logs from the OCI stream over the Kafka protocol.
    • Forward them to the third-party SIEM platform in a format it can ingest, such as local files, a TCP port, or an HTTP endpoint.

Now, let us look at the high-level representation of the solution architecture as shown in the following image.

Architecture Diagram

Note: While this solution can bridge the gap, it is advisable to consider it only as a last option if other methods are not feasible. It is important to coordinate closely with your SIEM provider to explore any native or recommended approaches first. If you decide to proceed with a log shipper, working with your SIEM provider in selecting the most suitable one will ensure better support from your SIEM provider during and after implementation, helping to tailor the setup to meet your organization’s specific needs.

There are different log shippers available; the two covered in this tutorial are:

    • Filebeat
    • Fluent Bit

Objectives

    • Ingest OCI logs into third-party SIEM platforms using log shippers such as Filebeat and Fluent Bit.

Prerequisites

    • Access to an OCI tenancy with permissions to manage OCI Logging, OCI Streaming, OCI Connector Hub, and OCI IAM resources.
    • An OCI compute instance with internet access to host the log shipper software.
    • Access to the third-party SIEM platform that you intend to integrate.

Note: The following tasks (Task 1 to Task 4) should be performed on the OCI end, regardless of the chosen method or log shipper.

Task 1: Configure the Logs to Capture

OCI Logging service is a highly scalable and fully managed single pane of glass for all the logs in your tenancy. OCI Logging provides access to logs from OCI resources. A log is a first-class OCI resource that stores and captures log events collected in a given context. A log group is a logical container for logs stored in a compartment. Use log groups to organize and streamline management of logs by applying Oracle Cloud Infrastructure Identity and Access Management (OCI IAM) policy or grouping logs for analysis.

To get started, enable a log for a resource. Services provide log categories for the different types of logs available for resources. For example, the OCI Object Storage service supports the following log categories for storage buckets: read and write access events. Read access events capture download events, while write access events capture write events. Each service can have different log categories for resources.

  1. Log in to the OCI Console, navigate to Observability & Management, Logging and Log Groups.

  2. Select your compartment, click Create Log Group and enter the following information.

    • Name: Enter SIEM_log_group.
    • Description (Optional): Enter the description.
    • Tags (Optional): Enter the tags.
  3. Click Create to create a new log group.

  4. Under Resources, click Logs.

  5. Click Create custom log or Enable service log as desired.

    For example, to enable write logs for an OCI Object Storage bucket, follow the steps:

    1. Click Enable Service Log.

    2. Select your resource compartment and enter Object Storage in the Search services.

    3. Click Enable Logs and select your OCI Object Storage bucket name in the Resource.

    4. Select the log group (SIEM_log_group) created earlier in this task and Write Access Events in the Log Category. Optionally, enter SIEM_bucket_write as the Log name.

    5. Click Enable to create your new OCI log.
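If you prefer scripting this setup, the following is a hedged OCI CLI sketch equivalent to steps 1 to 3; it assumes the CLI is installed and configured, and <compartment_OCID> is a placeholder:

    # create the log group from steps 1-3 via the OCI CLI
    oci logging log-group create \
      --compartment-id <compartment_OCID> \
      --display-name SIEM_log_group \
      --description "Log group for SIEM ingestion"

Enabling a service log involves service-specific configuration, so the console flow in step 5 is usually the simpler path.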

Task 2: Create a Stream using OCI Streaming

OCI Streaming service is a real-time, serverless, Apache Kafka-compatible event streaming platform for developers and data scientists. It provides a fully managed, scalable, and durable solution for ingesting and consuming high-volume data streams, such as logs, in real time. We can use OCI Streaming for any use case in which data is produced and processed continually and sequentially in a publish-subscribe messaging model.

  1. Go to the OCI Console, navigate to Analytics & AI, Messaging and Streaming.

  2. Click Create Stream to create a stream.

  3. Enter the following information and click Create.

    • Name: Enter the stream name. For this tutorial, it is SIEM_Stream.
    • Stream Pool: Select existing stream pool or create a new one with public endpoint.
    • Retention (in hours): Enter the number of hours to retain messages in this stream.
    • Number of Partitions: Enter the number of partitions for the stream.
    • Total Write Rate and Total Read Rate: Enter based on the amount of data you need to process.

You can start with default values for testing. For more information, see Partitioning a Stream.
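The same stream can also be created from the OCI CLI; a hedged sketch, assuming the default values above and placeholder OCIDs:

    # create the stream with one partition and 24-hour retention
    oci streaming admin stream create \
      --name SIEM_Stream \
      --compartment-id <compartment_OCID> \
      --partitions 1 \
      --retention-in-hours 24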

Task 3: Set up an OCI Connector Hub

OCI Connector Hub orchestrates data movement between services in OCI. OCI Connector Hub provides a central place for describing, executing and monitoring data movements between services, such as OCI Logging, OCI Object Storage, OCI Streaming, OCI Logging Analytics, and OCI Monitoring. It can also trigger OCI Functions for lightweight data processing and OCI Notifications to set up alerts.

  1. Go to the OCI Console, navigate to Observability & Management, Logging and Connectors.

  2. Click Create Connector to create the connector.

  3. Enter the following information.

    • Name: Enter SIEM_SC.
    • Description (Optional): Enter the description.
    • Compartment: Select your compartment.
    • Source: Select Logging.
    • Target: Select Streaming.
  4. Under Configure Source Connection, select a Compartment name, Log Group, and Log (log group and log created in Task 1).

  5. If you also want to send OCI Audit logs, click +Another Log, select the same compartment, and choose _Audit as the log group.

  6. Under Configure target, select a Compartment, and Stream (stream created in Task 2).

  7. To accept default policies, click the Create link provided for each default policy. Default policies are offered for any authorization required for this connector to access source, task, and target services.

  8. Click Create.
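For reference, the default policy the console offers in step 7 for writing to the target stream is typically similar to the following sketch (the compartment OCIDs are placeholders); you only need to create it manually if you decline the console's offer:

    allow any-user to use stream-push in compartment id <stream_compartment_OCID>
    where all {request.principal.type='serviceconnector',
    request.principal.compartment.id='<connector_compartment_OCID>'}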

Task 4: Set Up an Access Control for Log Shippers to Retrieve Logs

To allow log shippers to access data from an OCI stream, create a user and grant stream-pull permissions for retrieving logs.

  1. Create an OCI user. For more information, see Managing Users.

  2. Create an OCI group named SIEM_User_Group and add the OCI user to the group. For more information, see Managing groups.

  3. Create the following OCI IAM policy.

    Allow group SIEM_User_Group to use stream-pull in compartment <compartment_of_stream>
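
Log shippers authenticate to the stream's Kafka endpoint using this user's auth token as the SASL password. The following is a hedged CLI sketch to generate one; note that the token value is displayed only once, so store it securely.

    # generate an auth token for the SIEM user (shown once in the response)
    oci iam auth-token create \
      --user-id <user_OCID> \
      --description "SIEM stream-pull token"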
    

Now, we will explore how to install log shippers and provide a few examples of how to integrate them with SIEM platforms.

Case 1: Use Filebeat as a Log Shipper

Filebeat is a lightweight shipper for forwarding and centralizing log data. Filebeat is highly extensible through the use of modules, allowing it to collect logs from sources like Apache Kafka, Amazon Web Services (AWS), and more. Written in Go, Filebeat ships as a single binary for straightforward deployment. It excels in handling substantial data volumes while consuming minimal resources.

Install Filebeat

Filebeat can be installed on various operating systems, such as Linux and Windows, as well as on platforms like virtual machines, Docker containers, and Kubernetes clusters. In this tutorial, it is installed on an Oracle Linux 8 compute instance. For more information, see Filebeat quick start: installation and configuration.

To install Filebeat on the compute instance designated as the log shipper, follow the steps:

  1. Add the Beats repository for YUM. For more information, see Repositories for APT and YUM.

  2. Download and install the public signing key.

    sudo rpm --import https://packages.elastic.co/GPG-KEY-elasticsearch
    
  3. Create a file with a .repo extension (for example, elastic.repo) in your /etc/yum.repos.d/ directory and add the following lines:

    [elastic-8.x]
    name=Elastic repository for 8.x packages
    baseurl=https://artifacts.elastic.co/packages/8.x/yum
    gpgcheck=1
    gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
    enabled=1
    autorefresh=1
    type=rpm-md
    
  4. Your repository is now ready to use. Run the following command to install Filebeat.

    sudo yum install filebeat
    
  5. Run the following command to configure Filebeat to start automatically during boot.

    sudo systemctl enable filebeat
    

Configure Filebeat

In the following configuration, Filebeat is set up to ingest logs from OCI streams and save them as files in the local filesystem of the log shipper (compute instance). This allows third-party SIEM platform collectors to ingest those logs by reading the files from the local file system.

  1. Replace the contents of /etc/filebeat/filebeat.yml with the following sample configuration (make sure to replace hosts, topics, username, and password with your details). Also, create a folder for storing the logs, such as /home/opc/oci_logs.

    filebeat.inputs:
    - type: kafka
      # OCI Streaming Kafka endpoint for your region (port 9092)
      hosts: ["cell-1.streaming.us-ashburn-1.oci.oraclecloud.com:9092"]
      topics: ["SIEM_Stream"]
      group_id: "filebeat"
      # OCI Streaming expects the SASL username as <tenancy_name>/<username>/<stream_pool_OCID>
      username: <username>
      # use the auth token created for the SIEM user in Task 4
      password: <Auth Token>
      ssl.enabled: true
      sasl.mechanism: PLAIN
    
    output.file:
      # write consumed log events as rotating local files for the SIEM collector to read
      path: "/home/opc/oci_logs"
      filename: "oci_logs"
      rotate_every_kb: 5000 # 5 MB
      codec.format:
        string: '%{[@timestamp]} %{[message]}'
    
  2. Run the following command to test the configuration.

    filebeat test config
    
  3. Restart the Filebeat service after updating the configuration.

    sudo systemctl restart filebeat
    
  4. After a successful setup, you should see OCI logs as files in the /home/opc/oci_logs folder.
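A quick way to confirm end-to-end flow is to watch the output directory; a minimal sketch, assuming the paths configured above:

    # list rotated output files and follow the newest one
    ls -lh /home/opc/oci_logs/
    tail -f /home/opc/oci_logs/oci_logs*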

Example: OCI and Rapid7 InsightIDR integration using Filebeat

In this example, Rapid7 InsightIDR is configured to ingest the OCI logs that Filebeat saves to the local file system of the log shipper.

  1. Install the Rapid7 Collector on the log shipper instance.

    The Rapid7 Collector collects logs and sends them to your Rapid7 InsightIDR account for processing. To install Rapid7 Collector, download the package from your Rapid7 InsightIDR account and install it on the log shipper compute instance. For more information, see Rapid7 Collector Installation and Deployment.

  2. Configure Rapid7 InsightIDR to collect data from the event source. Although Rapid7 InsightIDR comes with pre-built connectors for various cloud services, OCI is not natively supported. However, you can ingest and process raw data by following these steps:

    1. Go to Rapid7 InsightIDR, navigate to Data Collection, Setup Event Source and click Add Event Source.

    2. Click Add Raw Data and Custom Logs.

    3. Enter a name in Name Event Source and select the Collector (compute instance).

    4. Select the Timezone that matches the location of your event source logs.

    5. Select the Collection Method as Watch Directory and use OCI logs path /home/opc/oci_logs in Local Folder.

    Rapid7 Custom logs

    Log collection will begin, and the data can be viewed in Rapid7 InsightIDR.

Case 2: Use Fluent Bit as a Log Shipper

Fluent Bit is a lightweight, high-performance log shipper, serving as an alternative to Fluentd. Fluent Bit emerged in response to the growing need for an optimal solution capable of collecting logs from numerous sources while efficiently processing and filtering them. Notably, Fluent Bit excels in resource-constrained environments such as containers or embedded systems.

To use Fluent Bit, we will define inputs, filters, outputs, and global configurations in a configuration file located at /etc/fluent-bit/fluent-bit.conf. Let us examine these components in detail:

    • Inputs: Collect records from sources. This tutorial uses the kafka input plugin to consume logs from OCI Streaming.
    • Filters: Transform, enrich, or drop records before delivery. None are required for the examples in this tutorial.
    • Outputs: Send records to destinations. This tutorial uses the stdout, tcp, and http output plugins.
    • Global (SERVICE) settings: Control daemon-wide behavior, such as the flush interval and log level.
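To see how these components fit together, the following is a minimal sketch of that structure in the classic configuration format; the dummy input and grep filter are illustrative placeholders, not part of the OCI integration:

    [SERVICE]
        # flush buffered records every 5 seconds and log at info level
        Flush        5
        Log_Level    info

    [INPUT]
        # dummy input emits a test record per second, tagged "demo"
        Name    dummy
        Tag     demo

    [FILTER]
        # grep filter keeps records whose "message" field matches the regex
        Name    grep
        Match   demo
        Regex   message .*

    [OUTPUT]
        # print matching records to standard output
        Name    stdout
        Match   *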

Install and Configure Fluent Bit

Fluent Bit can be installed on various operating systems, such as Linux and Windows, as well as on platforms like virtual machines, Docker containers, and Kubernetes clusters. In this tutorial, it is installed on an Oracle Linux 8 compute instance. To install Fluent Bit on the compute instance designated as the log shipper, follow the steps:

  1. Create a repository file with a .repo extension (for example, fluentbit.repo) in your /etc/yum.repos.d/ directory and add the following lines. For more information, see Configure Yum.

    [fluent-bit]
    name = Fluent Bit
    baseurl = https://packages.fluentbit.io/centos/$releasever/
    gpgcheck=1
    gpgkey=https://packages.fluentbit.io/fluentbit.key
    repo_gpgcheck=1
    enabled=1
    
  2. Once the repository is configured, run the following command to install Fluent Bit.

    sudo yum install fluent-bit
    
  3. The default configuration file for Fluent Bit is located at /etc/fluent-bit/fluent-bit.conf. By default, it collects CPU usage metrics and sends the output to the standard log. You can view the outgoing data in the /var/log/messages file.

  4. To collect logs from the OCI Streaming service and send them to the standard output, configure the input as Kafka and the output as stdout. Make sure to replace Brokers, topics, username, and password with your details.

    [INPUT]
       Name        kafka
       Brokers     cell-1.streaming.us-ashburn-1.oci.oraclecloud.com:9092
       Topics      SIEM_Stream
       Format      json
       group_id    fluent-bit
       # OCI Streaming expects the SASL username as <tenancy_name>/<username>/<stream_pool_OCID>
       rdkafka.sasl.username       <User Name>
       rdkafka.sasl.password       <Auth token>
       rdkafka.security.protocol   SASL_SSL
       rdkafka.sasl.mechanism      PLAIN
    
    [OUTPUT]
       Name        stdout
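After updating the configuration, restart Fluent Bit and confirm records are flowing; on Oracle Linux, stdout from the service lands in the system log as noted above:

    sudo systemctl enable fluent-bit
    sudo systemctl restart fluent-bit
    # watch the consumed OCI log records arrive
    sudo tail -f /var/log/messages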
    

Example: OCI and Rapid7 InsightIDR integration using Fluent Bit

In this example, we will integrate OCI with Rapid7 InsightIDR by installing the Rapid7 Collector on the log shipper instance where Fluent Bit is running. Fluent Bit will consume logs from OCI Streaming using Kafka as the input and send them to a local TCP port, where the Rapid7 collector will listen for incoming data.

  1. Install the Rapid7 Collector on the existing log shipper instance.

    The Rapid7 Collector collects logs and sends them to your Rapid7 InsightIDR account for processing. To install the collector, download the package from your Rapid7 InsightIDR account and install it on the log shipper compute instance. For more information on installation steps, see Rapid7 Collector Installation and Deployment.

  2. Configure Rapid7 InsightIDR to collect data from the event source. Although Rapid7 InsightIDR comes with pre-built connectors for various cloud services, OCI is not natively supported. However, you can ingest and process raw data by following these steps:

    1. Go to Rapid7 InsightIDR, navigate to Data Collection, Setup Event Source and click Add Event Source.

    2. Click Add Raw Data and Custom Logs.

    3. Enter a name in Name Event Source and select the Collector (compute instance).

    4. Select the Timezone that matches the location of your event source logs.

    5. Select the Collection Method as Listen on Network Port, enter a Port Number and Protocol.

    Rapid7 Custom logs

  3. Sample Fluent Bit input and output configuration for Rapid7 InsightIDR integration. Make sure to replace Brokers, topics, username and password with your details.

    [INPUT]
       Name        kafka
       Brokers     cell-1.streaming.us-ashburn-1.oci.oraclecloud.com:9092
       Topics      SIEM_Stream
       Format      json
       group_id    fluent-bit
       # OCI Streaming expects the SASL username as <tenancy_name>/<username>/<stream_pool_OCID>
       rdkafka.sasl.username       <User Name>
       rdkafka.sasl.password       <Auth token>
       rdkafka.security.protocol   SASL_SSL
       rdkafka.sasl.mechanism      PLAIN
    
    [OUTPUT]
       # forward records to the local TCP port where the Rapid7 Collector listens
       Name             tcp
       Match            *
       Host             127.0.0.1
       Port             5170
       Format           json_lines
    
  4. After modifying the Fluent Bit configuration, restart Fluent Bit using the following command.

    sudo systemctl restart fluent-bit
    

    Once Fluent Bit is restarted, you should see OCI logs appearing in your Rapid7 console.
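If logs do not appear, you can verify the local TCP path independently of Fluent Bit; a hedged sketch, assuming the collector listens on port 5170 as configured above:

    # confirm the Rapid7 Collector is listening on the configured port
    sudo ss -tlnp | grep 5170
    # push a hand-crafted test line through the same path Fluent Bit uses
    echo '{"message": "tcp connectivity test"}' | nc 127.0.0.1 5170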

Example: OCI and Datadog integration using Fluent Bit

In this example, Fluent Bit running on the log shipper instance consumes logs from OCI streams using Kafka as the input and sends them to Datadog’s HTTP endpoint using an HTTP output.

  1. Use Datadog HTTP logging endpoints to send logs to Datadog. For more information, see Logging Endpoints.

  2. A Datadog API key is required to send logs to the Datadog HTTP endpoint. For more information, see Datadog API and Application Keys.

    To generate an API key, follow the steps:

    1. In your Datadog account, navigate to Organization Settings.

    2. Click API Keys.

    3. Click New Key, enter a name for your key, and click Create API Key.

    DataDog API Keys

  3. Sample Fluent Bit input and output configuration for Datadog integration. Make sure to replace Brokers, topics, username, password and API Key with your details.

    [INPUT]
       Name        kafka
       Brokers     cell-1.streaming.us-ashburn-1.oci.oraclecloud.com:9092
       Topics      SIEM_Stream
       Format      json
       group_id    fluent-bit
       # OCI Streaming expects the SASL username as <tenancy_name>/<username>/<stream_pool_OCID>
       rdkafka.sasl.username       <User Name>
       rdkafka.sasl.password       <Auth token>
       rdkafka.security.protocol   SASL_SSL
       rdkafka.sasl.mechanism      PLAIN
    
    [OUTPUT]
       Name             http
       Match            *
       Host             http-intake.logs.us5.datadoghq.com
       Port             443
       URI              /api/v2/logs
       Header           DD-API-KEY <API-KEY>
       Format           json
       Json_date_key    timestamp
       Json_date_format iso8601
       tls              On
       # certificate verification is disabled here for simplicity; enable it in production
       tls.verify       Off
    
  4. After modifying the Fluent Bit configuration, restart Fluent Bit.

    sudo systemctl restart fluent-bit
    

    You should now see OCI logs in your Datadog account.
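If logs do not appear, the intake endpoint and API key can be tested independently of Fluent Bit; a hedged curl sketch against the same us5 endpoint used above:

    # send a single test log event to the Datadog v2 logs intake
    curl -X POST "https://http-intake.logs.us5.datadoghq.com/api/v2/logs" \
      -H "DD-API-KEY: <API-KEY>" \
      -H "Content-Type: application/json" \
      -d '[{"message": "fluent-bit connectivity test", "ddsource": "oci"}]'

A 202 response indicates the key and endpoint are valid.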

Next Steps

This tutorial demonstrated how to ingest OCI logs into third-party SIEM platforms using log shippers. While there are various log shippers available and multiple ways to integrate OCI with third-party SIEM platforms using them, it is essential to choose the right log shipper and integration method by carefully evaluating the input and output options supported by each log shipper. Make sure to coordinate with your SIEM provider to ensure the solution aligns with your specific environment and requirements.

Acknowledgments

More Learning Resources

Explore other labs on docs.oracle.com/learn or access more free learning content on the Oracle Learning YouTube channel. Additionally, visit education.oracle.com/learning-explorer to become an Oracle Learning Explorer.

For product documentation, visit Oracle Help Center.