Ingest logs to OCI Logging Analytics using Fluentd

Introduction

Use the open source data collector software Fluentd to collect log data from your source. Install the OCI Logging Analytics Output Plugin to route the collected log data to Oracle Cloud Logging Analytics.

Note: Oracle strongly recommends that you use Oracle Cloud Management Agents for the best experience of ingesting log data into Oracle Cloud Logging Analytics. However, if that is not possible for your use case, use the OCI Logging Analytics Output Plugin for Fluentd instead.

This tutorial uses a Fluentd setup based on the td-agent rpm package installed on Oracle Linux; the required steps are similar for other distributions of Fluentd.

Fluentd has components that work together to collect log data from input sources, transform the logs, and route the log data to the desired output. You can install and configure the output plugin for Fluentd to ingest logs from various sources into Oracle Cloud Logging Analytics.

Illustration: fluentd_plugin_overview.png

Objectives

Prerequisites

Create the Fluentd Configuration File

To configure Fluentd to route the log data to Oracle Cloud Logging Analytics, edit the configuration file provided by Fluentd or td-agent and provide the information pertaining to Oracle Cloud Logging Analytics and other customizations.

The Fluentd output plugin configuration has the following format:

<match pattern>
 @type oci-logging-analytics
 namespace                   <YOUR_OCI_TENANCY_NAMESPACE>

 # Auth config file details
 config_file_location        ~/.oci/config
 profile_name                DEFAULT

 # When there are no credentials for the proxy
 http_proxy                  "#{ENV['HTTP_PROXY']}"

 # To provide proxy credentials
 proxy_ip                    <IP>
 proxy_port                  <port>
 proxy_username              <user>
 proxy_password              <password>

 # Configuration for plugin (oci-logging-analytics) generated logs
 plugin_log_location         "#{ENV['FLUENT_OCI_LOG_LOCATION'] || '/var/log'}"
 plugin_log_level            "#{ENV['FLUENT_OCI_LOG_LEVEL'] || 'info'}"
 plugin_log_rotation         "#{ENV['FLUENT_OCI_LOG_ROTATION'] || 'daily'}"
 plugin_log_age              "#{ENV['FLUENT_OCI_LOG_AGE'] || 'weekly'}"

 # Buffer configuration
 <buffer>
       @type                               file
       path                                "#{ENV['FLUENT_OCI_BUFFER_PATH'] || '/var/log'}"
       flush_thread_count                  "#{ENV['FLUENT_OCI_BUFFER_FLUSH_THREAD_COUNT'] || '10'}"
       retry_wait                          "#{ENV['FLUENT_OCI_BUFFER_RETRY_WAIT'] || '2'}"                     # seconds
       retry_max_times                     "#{ENV['FLUENT_OCI_BUFFER_RETRY_MAX_TIMES'] || '10'}"
       retry_exponential_backoff_base      "#{ENV['FLUENT_OCI_BUFFER_RETRY_EXPONENTIAL_BACKOFF_BASE'] || '2'}" # seconds
       retry_forever                       true
       overflow_action                     block
       disable_chunk_backup                true
 </buffer>
</match>

It is recommended that you configure a secondary plugin, which Fluentd uses to dump backup data when the output plugin repeatedly fails to write the buffer chunks and exceeds the retry timeout threshold. In addition, for unrecoverable errors, Fluentd aborts the chunk immediately and moves it to the secondary or backup directory. Refer to Fluentd Documentation: Secondary Output.
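As one possible sketch, the standard secondary_file helper can be declared inside the <match> block; the backup directory path below is an assumption, not a required location:

<match pattern>
  @type oci-logging-analytics
  namespace <YOUR_OCI_TENANCY_NAMESPACE>
  # Sketch only: dump chunks that exhaust retries to a local backup directory
  <secondary>
    @type secondary_file
    directory /var/log/fluentd-backup    # assumption: any writable path
    basename dump.${chunk_id}
  </secondary>
</match>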

Output Plugin Configuration Parameters

Provide suitable values to the following parameters in the Fluentd configuration file:

Configuration parameter | Description
namespace | (Mandatory) The OCI tenancy namespace to which the collected log data is uploaded
config_file_location | The location of the configuration file containing OCI authentication details
profile_name | The OCI config profile name to be used from the configuration file
http_proxy | Proxy with no credentials. Example: www.proxy.com:80
proxy_ip | Proxy IP details when credentials are required. Example: www.proxy.com
proxy_port | Proxy port details when credentials are required. Example: 80
proxy_username | Proxy username
proxy_password | Proxy password when credentials are required
plugin_log_location | File path where the output plugin writes its own logs. Make sure that the path exists and is accessible. Default value: the working directory
plugin_log_level | Output plugin logging level: DEBUG < INFO < WARN < ERROR < FATAL < UNKNOWN. Default value: INFO
plugin_log_rotation | Output plugin log file rotation frequency: daily, weekly, or monthly. Default value: daily
plugin_log_age | Output plugin log file age: daily, weekly, or monthly. Default value: weekly

If you don't specify the parameters config_file_location and profile_name on OCI Compute nodes, then instance_principal based authentication is used.
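For instance, on an OCI Compute node the authentication lines can simply be omitted; the following is a minimal sketch based on the parameters described above:

<match pattern>
 @type oci-logging-analytics
 namespace <YOUR_OCI_TENANCY_NAMESPACE>
 # config_file_location and profile_name omitted:
 # on OCI Compute nodes the plugin falls back to instance_principal authentication
</match>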

Buffer Configuration Parameters

In the same configuration file that you edited in the previous section, modify the buffer section and provide the following mandatory information:

Mandatory Parameter | Description
@type | Specifies which buffer plugin to use as the backend. Enter file.
path | The path where buffer files are stored. Make sure that the path exists and is accessible.

The following optional parameters can be included in the buffer block:

Optional Parameter | Default Value | Description
flush_thread_count | 1 | The number of threads used to flush/write chunks in parallel.
retry_wait | 1s | The wait, in seconds, before the next retry to flush.
retry_max_times | none | The maximum number of retry attempts. Mandatory only when retry_forever is false.
retry_exponential_backoff_base | 2 | The base, in seconds, of the exponential backoff used to compute the wait before the next retry.
retry_forever | false | If true, the plugin ignores the retry_max_times option and retries flushing forever.
overflow_action | throw_exception | Possible values: throw_exception / block / drop_oldest_chunk. Recommended value: block.
disable_chunk_backup | false | When true, unrecoverable chunks are discarded instead of being written to the backup directory.
chunk_limit_size | 8MB | The maximum size of each chunk. Events are written into a chunk until it reaches this size. Note: Irrespective of the value specified, the Logging Analytics output plugin currently defaults this value to 1MB.
total_limit_size | 64GB (for file) | Once the total size of the stored buffer reaches this threshold, all append operations fail with an error (and the data is lost).
flush_interval | 60s | The interval at which chunks are flushed to the output.

For details of the possible values of the parameters, see Fluentd Documentation: Buffer Plugins.

Verify the Format of the Incoming Log Events

The incoming log events must be in a specific format so that the Fluentd plugin provided by Oracle can process the log data, chunk it, and transfer it to Oracle Cloud Logging Analytics.

View the example configuration that can be used for monitoring syslog, apache, and kafka log files at Example Input Configuration.

Source / Input Plugin Configuration

Example source configuration for syslog logs:

<source>
  @type tail
  @id in_tail_syslog
  multiline_flush_interval 5s
  path /var/log/messages*
  pos_file /var/log/messages.log.pos   # pos_file takes a single literal file path, not a glob
  read_from_head true
  path_key tailed_path
  tag oci.syslog
  <parse>
    @type json
  </parse>
</source>
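With @type json in the parse block, each incoming line is expected to be a single JSON object; for example, a tailed line could look like the following (an illustrative record, not taken from a real log):

{"log": "Jul 20 12:35:09 host01 sshd[1234]: Accepted publickey for opc"}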

The following parameters are mandatory to define the source block:

The following optional parameters can be included in the source block:

For information on other parameters, see Fluentd Documentation: tail.

Filter Configuration

Use these parameters to list the Logging Analytics resources that must be used to process your logs.

To ensure that the logs from your input source can be processed by the output plugin provided by Oracle, verify that the input log events conform to the prescribed format. For example, configure the record_transformer filter plugin to alter the format accordingly.

Tip: Note that configuring the record_transformer filter plugin is only one of the ways of including the required parameters in the incoming events. Refer to Fluentd Documentation for other methods.

Example filter configuration:

    <filter oci.kafka>
      @type record_transformer
      enable_ruby true
      <record>
        metadata KEY_VALUE_PAIRS
        entityId LOGGING_ANALYTICS_ENTITY_OCID          # If same across sources; else keep this in individual filters
        entityType LOGGING_ANALYTICS_ENTITY_TYPE        # If same across sources; else keep this in individual filters
        logSourceName LOGGING_ANALYTICS_SOURCENAME
        logGroupId LOGGING_ANALYTICS_LOGGROUP_OCID
        logPath "${record['tailed_path']}"
        message ${record["log"]}                        # Assigns the 'log' key value from the JSON-wrapped message to the 'message' field
        tag ${tag}
      </record>
    </filter>
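In effect, the filter enriches each parsed record with the Logging Analytics routing fields and promotes the JSON-wrapped log text to the message field. The following Python sketch mimics that transformation outside Fluentd; the routing field values are placeholders, not real OCIDs:

```python
def transform(record, tag, tailed_path):
    """Mimic the record_transformer filter: enrich a parsed JSON record
    with Logging Analytics routing fields (placeholder values) and
    promote the wrapped 'log' text to 'message'."""
    enriched = dict(record)
    enriched.update({
        "logSourceName": "LOGGING_ANALYTICS_SOURCENAME",  # placeholder
        "logGroupId": "LOGGING_ANALYTICS_LOGGROUP_OCID",  # placeholder
        "logPath": tailed_path,
        "message": record.get("log"),  # value of the JSON-wrapped 'log' key
        "tag": tag,
    })
    return enriched

event = {"log": "kafka broker started", "tailed_path": "/var/log/kafka/server.log"}
out = transform(event, "oci.kafka", event["tailed_path"])
print(out["message"])   # kafka broker started
print(out["logPath"])   # /var/log/kafka/server.log
```

Note that in the actual pipeline this mapping is done per record by the filter block, so each source can carry its own entity and source identifiers.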

Provide the following mandatory information in the filter block:

You can optionally provide the following additional parameters in the filter block:

Install the Output Plugin

Use the gem file provided by Oracle for the installation of the OCI Logging Analytics Output Plugin. The steps in this section are for the Fluentd setup based on the td-agent rpm package installed on Oracle Linux.

  1. Download the zip file fluent-plugin-oci-la-1-0-0.zip, unzip it, and store the output plugin file fluent-plugin-oci-logging-analytics-1.0.0.gem on your local host where Fluentd is set up.

  2. Install the output plugin by running the following command:

    td-agent-gem install fluent-plugin-oci-logging-analytics-1.0.0.gem
    
  3. Systemd starts td-agent as the td-agent user. Give the td-agent user access to the OCI files and folders. To run td-agent as a service, run the chown or chgrp command on the OCI Logging Analytics output plugin folders and on the .oci pem file, for example: chown td-agent [FILE].

  4. To start collecting logs in Oracle Cloud Logging Analytics, run td-agent:

    TZ=utc /etc/init.d/td-agent start
    

    You can use the log file /var/log/td-agent/td-agent.log to debug if you encounter issues during log collection or while setting up.

    To stop td-agent at any point, run the following command:

    TZ=utc /etc/init.d/td-agent stop
    

Start Viewing the Logs in Logging Analytics

Go to the Log Explorer and use the Visualize panel of Oracle Cloud Logging Analytics to view the log data in a form that helps you better understand and analyze it. Based on what you want to achieve with your data set, select the visualization type that best suits your application.

After you create and execute a search query, you can save and share your log searches as a widget for further reuse.

You can create custom dashboards on the Dashboards page by adding the Oracle-defined widgets or the custom widgets you’ve created.

Learn More

More Learning Resources

Explore other labs on docs.oracle.com/learn or access more free learning content on the Oracle Learning YouTube channel. Additionally, visit education.oracle.com/learning-explorer to become an Oracle Learning Explorer.

For product documentation, visit Oracle Help Center.