Configure Sources

Log sources define where the log files are located when you use the Management Agent for collection, and how to parse and enrich the logs during ingestion, regardless of the ingestion method. Oracle Cloud Logging Analytics provides hundreds of sources for databases, applications, and infrastructure, of both Oracle and non-Oracle origin. See Oracle-Defined Sources.

You can customize the Oracle-defined content by adding your own elements to it. When Oracle updates the Oracle-defined sources, you continue to receive those updates while keeping your customizations.

If you don't find an Oracle-defined source that suits your requirement, then you can create your own. When creating a source, you will need to pick one or more parsers to parse the log file into log entries and to break the log entry into fields. You can create a custom source and use an Oracle-defined parser if there is already one that matches your log format. If there is no Oracle-defined parser for your custom source, you can create a custom parser as well. See Create a Parser.

Customize an Oracle-Defined Source

You can use the Sources administration page to edit existing Oracle-defined or custom sources. You can augment the configuration of an Oracle-defined source to make it work better for your needs: the content provided by Oracle can be disabled, and you can add your own content.

It is recommended that you edit and customize an Oracle-defined source instead of duplicating it and editing the copy. A duplicated source will not receive any future Oracle updates.

The combination of the Oracle-defined enrichment and the customizations you make in the Oracle-defined source results in a new source for your tenancy. When Oracle provides an update to the source, those updates continue to apply alongside any augmented configuration you added.

You can enable log collection on the Management Agent by associating a source to one or more entities. See Configure New Source-Entity Association.

When you customize the Oracle-defined source, you can:

  • Override the parsers used

  • Disable Oracle-defined include and exclude patterns (for File and ODL sources) or the listening port (for Syslog sources)

  • Add your own include and exclude patterns, or your own listening port

  • Add your data filters

  • Disable Oracle-defined extended field definitions

  • Add your extended field definitions

  • Disable Oracle-defined label definitions

  • Add your label definitions

However, the name, description, and the entity type defined in the original Oracle-defined source cannot be changed.

Edit Source

Modify the existing source to customize it for your use case, but ensure that you consider the dependencies such as data filters, labels, extended fields, and other parser dependencies when you edit.

  1. Open the navigation menu and click Observability & Management. Under Logging Analytics, click Administration. The Administration Overview page opens.

  2. The administration resources are listed in the left-hand navigation pane under Resources. Click Sources.

  3. Click the menu icon next to the source entry that you want to edit and select Edit.

    The Edit Source page is displayed.

  4. Modify the source definition and click Save.

Override Oracle-defined parsers

In an Oracle-defined source, the default file parsers are already specified. If you want to override the Oracle-defined parsers used or change the order in which the parsers are applied on the logs, then follow these steps:

  1. Under File Parser > Specific Parsers, click Custom, and then click the Select Parsers area.

  2. Type a few characters from the name of the parser to get the list of suggestions. Select the parser.

    Repeat the selection process to include multiple parsers. You can also include parsers that you've created for this customization. Ensure that you specify the parsers in the same order in which they must be applied to the logs.

Follow the above steps to customize the source if your log files differ only slightly from the Oracle-defined format. If they differ significantly, create a new source instead.

Important: Ensure that the new parsers that you select have the same output fields as the old parsers, because the data enrichment definitions depend on those fields.

Create a Source

Sources define the location of your entity's logs and how to enrich the log entries. To start continuous log collection through the OCI management agents, a source needs to be associated with one or more entities.

Note

For more specific steps to create database instance log sources, see Set Up Database Instance Monitoring.

  1. Open the navigation menu and click Observability & Management. Under Logging Analytics, click Administration. The Administration Overview page opens.

    The administration resources are listed in the left-hand navigation pane under Resources. Click Sources.

    The Sources page opens. Click Create Source.

  2. In the Name field, enter the name of the source.

    Optionally, add a description.

  3. From the Source Type list, select the type for the log source.
    Oracle Cloud Logging Analytics supports the following log source types for custom sources:
    • File: Use this type for collecting most types of logs, such as Database, Application, and Infrastructure logs.

    • Oracle Diagnostic Logging (ODL): Use this type for logs that follow the Oracle Diagnostics Logs format. These are typically used for diagnostic logs for Oracle Fusion Middleware and Oracle Applications.

    • Syslog Listener: This is typically used for network devices such as an intrusion detection appliance, a firewall, or other devices on which a management agent cannot be installed.

    • Microsoft Windows: Use this type for collecting Windows Event messages. Oracle Cloud Logging Analytics can collect all historic Windows Event Log entries. It supports standard Windows event channels as well as custom event channels.

      Note

      This source type does not require the field Log Parser.

    • Database: Use this source type to collect logs stored in tables inside an on-premises database. With this source type, a SQL query is run periodically to collect the table data as log entries.

  4. Click the Entity Type field and select the type of entity for this log source. Later, when you associate this source to an entity to enable log collection through the management agent, only entities of this type will be available for association. A source can have one or more entity types.
    • If you selected File or Oracle Diagnostic Log (ODL), then it's recommended that you select the entity type for your log source that most closely matches what you are going to monitor. Avoid selecting composite entity types like Database Cluster and instead select the entity type Database Instance because the logs are generated at the instance level.

    • If you selected the source type Syslog Listener, then select one of the variants of Host, such as Host (Linux), Host (Windows), Host (AIX), or Host (Solaris), as your entity type. This is the host on which the agent runs and collects the logs. The syslog listener is configured to receive syslog logs from instances that might not be running on the same host; the agent installed on the listener host collects the logs that the listener is configured to receive.

      Note

      • It is recommended that a maximum of 50 senders send their logs to a single management agent syslog listener. To support more senders, use more management agents.

      • You must have at least 50 file handles configured per sender in the operating system to handle all the possible incoming connections that the senders may open. This is in addition to the file handles needed on the operating system for other purposes.

    • If you selected the source type Database, then the entity type is limited to the eligible database types.

    • If you selected the Microsoft Windows source type, then the default entity type Host (Windows) is automatically selected and cannot be changed.

  5. Click the Parser field and select the relevant parser name such as Database Audit Log Entries Format.
    You can select multiple file parsers for the log files. This is particularly helpful when a log file has entries with different syntax and can’t be parsed by a single parser.

    The order in which you add the parsers is important. When Oracle Cloud Logging Analytics reads a log file, it tries the first parser and moves to the second parser if the first one does not work. This continues until a working parser is found. Select the most common parser first for this source.

    For ODL source type, the only parser available is Oracle Diagnostic Logging Format.

    For Syslog source type, typically one of the variant parsers such as Syslog Standard Format or Syslog RFC5424 Format is used. You can also select from the Oracle-defined syslog parsers for specific network devices.

    The File Parser field isn’t available for the Microsoft Windows source type, because Oracle Cloud Logging Analytics retrieves already parsed log data for it.

  6. Enter the following information depending on the source type:
    • Syslog source type: Specify Listener Port

    • Windows source type: Specify an event service channel name. The channel name must match the name of the Windows event channel so that the agent can form the association to pick up the logs.

    • Database source type: Specify SQL Statements and click Configure. Map the SQL table columns to the fields available in the menu.

    • File and ODL source types: Use the Included Patterns and Excluded Patterns tabs:

      • In the Included Patterns tab, click Add to specify file name patterns for this source.

        Enter the file name pattern and description.

        You can enter parameters within braces {}, such as {AdrHome}, as part of the file name pattern. Oracle Cloud Logging Analytics replaces these parameters in the include pattern with entity properties when the source is associated with an entity. The list of possible parameters is defined by the entity type. If you create your own entity types, you can define your own properties. When you create an entity, you are prompted to provide a value for each property of that entity. You can also add your own custom properties per entity, if required. Any of these properties can be used as parameters here in the Included Patterns. For a sketch of this substitution, see the example after these steps.

        For example, for a given entity where the {AdrHome} property is set to /u01/oracle/database/, the include pattern {AdrHome}/admin/logs/*.log is resolved to /u01/oracle/database/admin/logs/*.log for that specific entity. Every other entity on the same host can have a different value for {AdrHome}, resulting in a completely different set of log files being collected for each entity.

        You can associate a source with an entity only if the parameters that the source requires in its patterns have values for the given entity.

      • You can use an excluded pattern when there are files in the same location that you don’t want to include in the source definition. In the Excluded Patterns tab, click Add to define patterns of log file names that must be excluded from this log source.

        For example, there’s a file with the name audit.aud in the directory that you configured as an include source (/u01/app/oracle/admin/rdbms/diag/trace/). In the same location, there’s another file with the name audit-1.aud. You can exclude any files with the pattern audit-*.aud.

  7. Add Data Filters. See Use Data Filters in Sources.
  8. Add Extended Fields. See Use Extended Fields in Sources.
  9. Add Labels. See Use Labels in Sources.
  10. Click Save.
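
The following minimal Python sketch illustrates the include pattern parameter substitution described in step 6. The entity property values here are hypothetical; the actual substitution is performed by Oracle Cloud Logging Analytics when the source is associated with an entity.

    import re

    def resolve_include_pattern(pattern, entity_properties):
        """Replace each {Param} placeholder with the entity's property value.

        Raises KeyError when a required parameter has no value for the
        entity, mirroring the rule that a source can be associated with
        an entity only when every required parameter has a value.
        """
        return re.sub(r"\{(\w+)\}", lambda m: entity_properties[m.group(1)], pattern)

    # Two hypothetical entities on the same host with different {AdrHome} values
    entity_a = {"AdrHome": "/u01/oracle/database"}
    entity_b = {"AdrHome": "/u02/oracle/database"}

    pattern = "{AdrHome}/admin/logs/*.log"
    print(resolve_include_pattern(pattern, entity_a))  # /u01/oracle/database/admin/logs/*.log
    print(resolve_include_pattern(pattern, entity_b))  # /u02/oracle/database/admin/logs/*.log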

Use Data Filters in Sources

Oracle Cloud Logging Analytics lets you mask and hide sensitive information in your log entries, and even hide entire log entries, before the log data is uploaded to the cloud. Using the Data Filters tab when editing or creating a source, you can mask IP addresses, user IDs, host names, and other sensitive information with replacement strings, drop specific keywords and values from a log entry, and hide an entire log entry.

You can add data filters when creating a log source, or when editing an existing source. See Customize an Oracle-Defined Source to learn about editing existing log sources.

If the log data is sent to Oracle Cloud Logging Analytics using On-demand Upload or collection from object store, then the masking will happen on the cloud side before the data is indexed. If you are collecting logs using the Management Agent, then the logs are masked before the content leaves your premises.

Masking Log Data

Masking is the process of taking a set of existing text and replacing it with other static text to hide the original content.

If you want to mask any information such as the user name and the host name from the log entries:

  1. Open the navigation menu and click Observability & Management. Under Logging Analytics, click Administration. The Administration Overview page opens.

  2. The administration resources are listed in the left-hand navigation pane under Resources. Click Sources.

  3. Click the name of the source that you want to edit. The source details page opens. Click Edit to edit the source.

  4. Click the Data Filters tab and click Add.

  5. Enter the mask Name, select Mask as the Type, enter the Find Expression value, and its associated Replace Expression value.

    The Find Expression value can be a plain-text search string or a standard regular expression. The value to be replaced with the Replace Expression must be surrounded by parentheses ( ), forming a capture group.

    Name             Find Expression    Replace Expression
    mask username    User=(\S+)\s+      confidential
    mask host        Host=(\S+)\s+      mask_host
    Note

    The syntax of the replace string should match the syntax of the string that’s being replaced. For example, a number shouldn’t be replaced with a string. An IP address of the form 123.45.67.89 should be replaced with 000.000.000.000 and not with 000.000. If the syntaxes don’t match, then the parsers may break.

  6. Click Save.

When you view the masked log entries for this log source, you’ll find that Oracle Cloud Logging Analytics has masked the values of the fields that you’ve specified.

  • User = confidential

  • Host = mask_host
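
As an illustration of the mechanics, the following Python sketch applies the two example masks, replacing only the text captured by the parenthesized group. This is a conceptual sketch; the actual masking is performed by the management agent or by the cloud service.

    import re

    def mask(find_expr, replacement, text):
        """Replace only the value captured by group 1 of the find
        expression, keeping the rest of the match intact."""
        def repl(match):
            start, end = match.span(1)
            offset = match.start()
            return match.group(0)[:start - offset] + replacement + match.group(0)[end - offset:]
        return re.sub(find_expr, repl, text)

    entry = "User=jsmith Host=dbhost01 action=login"   # hypothetical log entry
    entry = mask(r"User=(\S+)\s+", "confidential", entry)
    entry = mask(r"Host=(\S+)\s+", "mask_host", entry)
    print(entry)  # User=confidential Host=mask_host action=login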

Hash Masking the Log Data

When you mask the log data using the mask as described in the previous section, the masked information is replaced by a static string provided in the Replace Expression. For example, when the user name is masked with the string confidential, then the user name is always replaced with the expression confidential in the log records for every occurrence. By using hash mask, you can hash the found value with a unique hash. For example, if the log records contain multiple user names, then each user name is hashed to a unique value. So, if the string user1 is replaced with the text hash ebdkromluceaqie for every occurrence, then the hash can still be used to identify that these log entries are for the same user. However, the actual user name will not be visible.

Risk Associated: Because this is a hash, there is no way to recover the actual value of the masked original text. However, hashing a given string produces the same hash every time. Consider this risk when hash masking the log data. For example, the string oracle has the MD5 hash a189c633d9995e11bf8607170ec9a4b8, and every MD5 hash of the string oracle produces that same value. Although you cannot reverse this hash to get the original string oracle, anyone who guesses the value oracle and forward-hashes it will see that the hash matches the one in the log entry.
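
To make the risk concrete, here is a small Python illustration using the MD5 example above. (MD5 is used here only to mirror the example; the hashing scheme used internally by Oracle Cloud Logging Analytics is not documented here.)

    import hashlib

    # Hashing the same input always yields the same digest.
    print(hashlib.md5(b"oracle").hexdigest())  # a189c633d9995e11bf8607170ec9a4b8
    print(hashlib.md5(b"oracle").hexdigest())  # identical on every run

    # So anyone can confirm a guessed value by forward-hashing it
    # and comparing against the hash seen in the log entry.
    observed_hash = "a189c633d9995e11bf8607170ec9a4b8"
    guess = "oracle"
    print(hashlib.md5(guess.encode()).hexdigest() == observed_hash)  # True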

To apply the hash mask data filter on your log data:

  1. Go to Create Source page. For steps, see Create a Source.

  2. You can also edit a source that already exists. For steps to open an Edit Source page, see Edit Source.

  3. Click the Data Filters tab and click Add.

  4. Enter the mask Name, select Hash Mask as the Type, enter the Find Expression value, and its associated Replace Expression value.

    Name             Find Expression    Replace Expression
    Mask User Name   User=(\S+)\s+      Text Hash
    Mask Port        Port=(\d+)\s+      Numeric Hash
  5. Click Save.

If the field you want to hash mask is string-based, then you can use either the Text or the Numeric hash. But if the field is numeric, such as an integer, long, or floating point, then you must use the Numeric hash. If you don't, the replacement text breaks any regular expressions that depend on this value being a number, and the value is not stored.

This replacement happens before the data is parsed. At that point, it is typically not known whether the data being masked is always numeric. Therefore, you must decide the type of hash while creating the mask definition.

As the result of the above example hash masking, each user name is replaced by a unique text hash, and each port number is replaced by a unique numeric hash.

You can utilize the hash mask when filtering or analyzing your log data. See Filter Logs by Hash Mask.

Dropping Specific Keywords or Values from Your Log Records

Oracle Cloud Logging Analytics lets you search for a specific keyword or value in log records and drop the matched keyword or value if that keyword exists in the log records.

Consider the following log record:

ns5xt_119131: NetScreen device_id=ns5xt_119131  [Root]system-notification-00257(traffic): start_time="2017-02-07 05:00:03" duration=4 policy_id=2 service=smtp proto=6 src zone=Untrust dst zone=mail_servers action=Permit sent=756 rcvd=756 src=192.0.2.1 dst=203.0.113.1 src_port=44796 dst_port=25 src-xlated ip=192.0.2.1 port=44796 dst-xlated ip=203.0.113.1 port=25 session_id=18738

If you want to hide the keyword device_id and its value from the log record:

  1. Open the navigation menu and click Observability & Management. Under Logging Analytics, click Administration. The Administration Overview page opens.

  2. The administration resources are listed in the left hand navigation pane under Resources. Click Sources.

  3. Click the name of the source that you want to edit. The source details page opens. Click Edit to edit the source.

  4. Click the Data Filters tab and click Add.

  5. Enter the filter Name, select Drop String as the Type, and enter the Find Expression value such as device_id=\S*

  6. Click Save.

When you view the log records for this source, you’ll find that Oracle Cloud Logging Analytics has dropped the keywords or values that you’ve specified.
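
Conceptually, the Drop String filter deletes every match of the find expression from the log entry before it is parsed, roughly as in this Python sketch:

    import re

    entry = ("ns5xt_119131: NetScreen device_id=ns5xt_119131  "
             "[Root]system-notification-00257(traffic): duration=4")

    # Drop String: delete the matched keyword and its value.
    print(re.sub(r"device_id=\S*", "", entry))
    # ns5xt_119131: NetScreen   [Root]system-notification-00257(traffic): duration=4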

Note

Ensure that your parser regular expression matches the log record pattern, otherwise Oracle Cloud Logging Analytics may not parse the records properly after dropping the keyword.

Note

Apart from adding data filters when creating a source, you can also edit an existing source to add data filters. See Customize an Oracle-Defined Source to learn about editing existing sources.

Dropping an Entire Log Entry Based on Specific Keywords

Oracle Cloud Logging Analytics lets you search for a specific keyword or value in log records and drop an entire log entry in a log record if that keyword exists.

Consider the following log record:

ns5xt_119131: NetScreen device_id=ns5xt_119131  [Root]system-notification-00257(traffic): start_time="2017-02-07 05:00:03" duration=4 policy_id=2 service=smtp proto=6 src zone=Untrust dst zone=mail_servers action=Permit sent=756 rcvd=756 src=198.51.100.1 dst=203.0.113.254 src_port=44796 dst_port=25 src-xlated ip=198.51.100.1 port=44796 dst-xlated ip=203.0.113.254 port=25 session_id=18738

Let’s say that you want to drop the entire log entry if the keyword device_id exists in it:

  1. Open the navigation menu and click Observability & Management. Under Logging Analytics, click Administration. The Administration Overview page opens.

  2. The administration resources are listed in the left hand navigation pane under Resources. Click Sources.

  3. Click the name of the source that you want to edit. The source details page opens. Click Edit to edit the source.

  4. Click the Data Filters tab and click Add.

  5. Enter the filter Name, select Drop Log Entry as the Type, and enter the Find Expression value such as .*device_id=.*

    It is important that the regular expression match the entire log entry. Using .* at the beginning and at the end of the regular expression ensures that it matches all the other text in the log entry.

  6. Click Save.

When you view the log entries for this log source, you’ll find that Oracle Cloud Logging Analytics has dropped all those log entries that contain the string device_id in them.
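
Conceptually, the filter performs a full-entry match, which is why the leading and trailing .* matter. A minimal Python sketch of the idea:

    import re

    entries = [
        "NetScreen device_id=ns5xt_119131 action=Permit",   # hypothetical entries
        "NetScreen action=Deny",
    ]

    # Drop Log Entry: the expression must match the entire entry,
    # so .*device_id=.* drops the first entry but keeps the second.
    pattern = re.compile(r".*device_id=.*")
    kept = [e for e in entries if not pattern.fullmatch(e)]
    print(kept)  # ['NetScreen action=Deny']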

Note

Apart from adding data filters when creating a source, you can also edit an existing source to add data filters. See Customize an Oracle-Defined Source to learn about editing existing sources.

Use Extended Fields in Sources

The Extended Fields feature in Oracle Cloud Logging Analytics lets you extract additional fields from a log record beyond the fields that the parser extracted.

In the source definition, a parser is chosen that breaks a log file into log entries and each log entry into a set of base fields. These base fields need to be consistent across all log entries: a base parser extracts the common fields from a log record. If you need to extract additional fields from the log entry content, use an extended field definition. For example, the parser may be defined so that all the text after the common fields of a log entry is parsed and stored in a field named Message.

When you search for logs using the updated source, values of the extended fields are displayed along with the fields extracted by the base parser.

  1. Open the navigation menu and click Observability & Management. Under Logging Analytics, click Administration. The Administration Overview page opens.

    The administration resources are listed in the left-hand navigation pane under Resources. Click Sources.

  2. Click the name of the source that you want to edit. The source details page opens. Click Edit to edit the source.
  3. Click the Extended Fields tab and then click Add.
  4. A condition can be specified so the field extraction occurs only if the log entry being evaluated matches a predefined condition. To add a condition to the extended field, expand the Conditions section.
    • Reuse Existing: If required, to reuse a condition that's already defined for the log source, select the Reuse Existing radio button, and select the previously defined condition from the Condition menu.
    • Create New Condition: Select this option if you want to define a new condition. Specify the Condition Field, Operator, and Value.

      For example, the extended field definition that extracts the value of the field Security Resource Name from the value of the field Message only if the field Service has one of the values NetworkManager, dhclient, or dhcpd is as follows:

      • Base Field: Message
      • Example Base Field Content: DHCPDISCOVER from b8:6b:23:b5:c1:bd (HOST1-LAP) via eth0
      • Extract Expression: ^DHCPDISCOVER\s+from\s+{Security Resource Name:\S+}\s+.+

      The condition for this extended field definition should be defined as follows:

      • Condition Field: service
      • Condition Operator: IN
      • Condition Value: NetworkManager,dhclient,dhcpd

      In the above example, the extracted value of the field Security Resource Name is b8:6b:23:b5:c1:bd.

      To provide multiple values for the field Condition Value, key in the value and press Enter for each value.

    By adding a condition, you can reduce the regular expression processing on a log entry that is not likely to have the value that you are trying to extract. This can effectively reduce the processing time and the delay in the availability of your log entries in the Log Explorer.

  5. Select the Base Field whose value you want to further extract into additional fields.

    The fields shown in the base field list are those parsed by the base parser, plus some default fields populated by log collection, such as Log Entity (the file name, database table, or other original location the log entry came from) and Original Log Content.

  6. In the Example Base Field Content space, enter a common example value for the Base Field that you chose to extract into additional fields. This is used during the test phase to show that the extended field definition works properly.
  7. Enter the extraction expression in the Extraction Expression field and select the Enabled check box.

    An extraction expression follows the normal regular expression syntax, except that to specify an extraction element, you use a macro indicated by curly brackets { and }. There are two values inside the curly brackets, separated by a colon (:). The first value is the field in which to store the extracted data. The second value is the regular expression that matches the value to capture from the base field. For a sketch of how such a macro behaves, see the example after these steps.

  8. Click Test Definition to validate that the extraction expression can successfully extract the desired fields from the base field example content that you provided. If the match succeeds, the Step Count is displayed, which is a good measure of the effectiveness of the extraction expression. If the expression is inefficient, then the extraction may time out, and the field will not be populated.
    Note

    Keep the step count under 1000 for best performance. The higher this number, the longer it takes to process your logs and make them available in the Log Explorer.
  9. Click Save.
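
As referenced in step 7, the following Python sketch mimics how a {Field Name:regular expression} macro behaves: each macro acts like a capture group whose matched text is stored in the named field. This is an illustration only; mapping field names to synthetic group names, and the simplified macro pattern (which does not handle braces inside the regular expression, such as quantifiers like \d{4}), are assumptions of the sketch, not documented product behavior.

    import re

    def run_extended_field(extract_expr, base_value):
        """Convert each {Field:regex} macro into a capture group and
        return the extracted field values."""
        fields = []

        def to_group(match):
            fields.append(match.group(1))
            # Group names can't contain spaces, so use f0, f1, ...
            return "(?P<f%d>%s)" % (len(fields) - 1, match.group(2))

        regex = re.sub(r"\{([^:{}]+):([^{}]+)\}", to_group, extract_expr)
        m = re.match(regex, base_value)
        if not m:
            return {}
        return {name: m.group("f%d" % i) for i, name in enumerate(fields)}

    expr = r"^DHCPDISCOVER\s+from\s+{Security Resource Name:\S+}\s+.+"
    content = "DHCPDISCOVER from b8:6b:23:b5:c1:bd (HOST1-LAP) via eth0"
    print(run_extended_field(expr, content))
    # {'Security Resource Name': 'b8:6b:23:b5:c1:bd'}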

If you use the Automatically parse time only option in your source definition instead of creating a parser, then the only field available for creating extended field definitions is Original Log Content, because no other fields are populated by the parser. See Use the Generic Parser.

Oracle Cloud Logging Analytics enables you to search for the extended field definitions that you’re looking for. You can search based on how the definition was created, the type of base field, or some example content of the field. Enter the example content in the Search field, or click the down arrow for the search dialog box. In the search dialog box, under Creation Type, select whether the extended field definitions that you’re looking for are Oracle-defined or user-defined. Under Base Field, you can select from the available options. You can also specify the example content or the extraction expression to be used for the search. Click Apply Filters.

Table 6-1 Sample Example Content and Extended Field Extraction Expressions

Description: To extract the endpoint file extension from the URI field of a Fusion Middleware Access log file
    Base Field: URI
    Example Content: /service/myservice1/endpoint/file1.jpg
    Extraction Expression: {Content Type:\.(jpg|html|png|ico|jsp|htm|jspx)}
    This extracts the file suffix, such as jpg or html, and stores the value in the field Content Type. Only the suffixes listed in the expression are extracted.

Description: To extract the user name from the file path of a log entity
    Base Field: Log Entity
    Example Content: /u01/oracle/john/audit/134fa.xml
    Extraction Expression: /\w+/\w+/{User Name:\w+}/\w+/\w+

Description: To extract the start time from the Message field
    Base Field: Message
    Example Content: Backup transaction finished. Start=1542111920
    Extraction Expression: Start={Event Start Time:\d+}
    Note: Event Start Time is a Timestamp data type field. If this were a numeric data type field, then the start time would be stored simply as a number, and not as a timestamp.

Description: Source /var/log/messages, parser Linux Syslog Format
    Base Field: Message
    Example Content: authenticated mount request from 10.245.251.222:735 for /scratch (/scratch)
    Extraction Expression: authenticated {Action:\w+} request from {Address:[\d\.]+}:{Port:\d+} for {Directory:\S+}\s(

Description: Source /var/log/yum.log, parser Yum Format
    Base Field: Message
    Example Content: Updated: kernel-headers-2.6.18-371.0.0.0.1.el5.x86_64
    Extraction Expression: {Action:\w+}: {Package:.*}

Description: Source Database Alert Log, parser Database Alert Log Format (Oracle DB 11.1+)
    Base Field: Message
    Example Content: Errors in file /scratch/cs113/db12101/diag/rdbms/pteintg/pteintg/trace/pteintg_smon_3088.trc (incident=4921): ORA-07445: exception encountered: core dump [semtimedop()+10] [SIGSEGV] [ADDR:0x16F9E00000B1C] [PC:0x7FC6DF02421A] [unknown code] []
    Extraction Expression: Errors in file {Trace File:\S+} (incident={Incident:\d+}): {Error ID:ORA-\d+}: exception encountered: core dump [semtimedop()+10] [SIGSEGV] [ADDR:{Address:[\w\d]+}] [PC:{Program Counter:[\w\d]+}] [unknown code] []

Description: Source FMW WLS Server Log, parser WLS Server Log Format
    Base Field: Message
    Example Content: Server state changed to STARTING
    Extraction Expression: Server state changed to {Status:\w+}

Use Labels in Sources

Oracle Cloud Logging Analytics lets you add labels or tags to log records, based on defined conditions.

When a log entry matches the condition that you have defined, the label is applied to that log entry. The label is available in your Log Explorer visualizations as well as for searching and filtering log entries.

You can use Oracle-defined or user-created labels in the sources. To create a custom label to tag a specific log entry, see Create a Label.

  1. To use labels in an existing source, edit that source. For steps to open an Edit Source page, see Edit Source.

  2. Click the Labels tab and then click Add.

  3. Select the log field on which you want to apply the condition from the Field list.

  4. Select the operator from the Operator list.

  5. In the Condition Value field, specify the value of the condition to be matched for applying the label.

  6. In the Label field, enter the text for the label to be applied and select the Enabled check box.

    You can select from the already available Oracle-defined or user-created labels.

  7. Click Save.

Oracle Cloud Logging Analytics enables you to search for the labels that you’re looking for. You can search based on any of the parameters defined for the labels. Enter the search string in the Search field, or specify the search criteria in the search dialog box: under Creation Type, select whether the labels that you’re looking for are Oracle-defined or user-defined. Under the fields Input Field, Operator, and Output Field, you can select from the available options. You can also specify the condition value or the output value to be used for the search. Click Apply Filters.

You can now search log data based on the labels that you’ve created. See Filter Logs by Labels.

Use the Labels to Enrich the Data Set

Optionally, you can use labels to select an arbitrary field and write a value to it. Populating a value in an arbitrary field using labels is very similar to using lookups. However, labels provide more flexibility in your matching conditions and are ideal when dealing with a small number of condition-to-field-population definitions. For example, if you have only a few conditions that populate a field, then using labels lets you avoid creating and managing a lookup.

After step 6 above,

  1. Select the output field.

    Click the Edit icon. The Pick Output Field dialog box opens.

  2. Pick the Output Field by specifying the label to be used or by selecting from any other field. Click Apply.

    For example, the source can be configured to attach the authentication.login output value to the Security Category output field when the log record contains the input field Method set to the value CONNECT.

    Click Save.

Use the Generic Parser

Oracle Cloud Logging Analytics lets you configure your source to use a generic parser instead of creating a parser for your logs. In that case, only the log time is parsed from the log entries, provided the time can be identified by Oracle Cloud Logging Analytics.

This is particularly helpful when you’re not sure about how to parse your logs or how to write regular expressions to parse your logs, and you just want to pass the raw log data to perform analysis. Typically, a parser defines how the fields are extracted from a log entry for a given type of log file. However, the generic parser in Oracle Cloud Logging Analytics can:

  • Detect the time stamp and the time zone from log entries.

  • Create a time stamp using the current time if the log entries don’t have any time stamp.

  • Detect whether the log entries are multiple lined or single lined.

  1. Open the navigation menu and click Observability & Management. Under Logging Analytics, click Administration. The Administration Overview page opens.

    The administration resources are listed in the left-hand navigation pane under Resources. Click Sources.

  2. In the Sources page, click Create Source.
    This displays the Create Source dialog box.
  3. In the Source field, enter the name for the source.
  4. In the Source Type field, select File.
  5. Click Entity Type and select the type of entity for this source.
  6. Select Automatically parse time only. Oracle Cloud Logging Analytics automatically applies the generic parser type.
  7. Click Save.
When you access the log records of the newly created source, Oracle Cloud Logging Analytics extracts and displays the following information from the log entries:
  • Time stamp:

    • When a log entry doesn’t have a time stamp, then the generic parser creates and displays the time stamp based on the time when the log data was collected.

    • When a log record contains a time stamp, but the time zone isn’t defined, then the generic parser uses the management agent’s time zone.

      When using Management Agent, if the timezone is not detected properly, then you can manually set the timezone in the agent configuration files. See Manually Specify Time Zone and Character Encoding for Files.

      When uploading logs using on-demand upload, you can specify the timezone along with your upload to force the timezone if we cannot properly detect it. If you're using CLI, see Command Line Reference: Logging Analytics - Upload. If you're using REST API, then see Logging Analytics API - Upload.

    • When a log file has log records with multiple time zones, the generic parser can support up to 11 time zones.

    • When a log file displays some log entries with a time zone and some without, then the generic parser uses the previously found timezone for the ones missing a timezone.

    • When you ingest logs using the management agent, if the time zone or the time zone offset is not indicated in the log records, then Oracle Cloud Logging Analytics compares the file's last modified time reported by the operating system with the timestamp of the last log entry to determine the proper time zone.

  • Multiple lines: When a log entry spans multiple lines, the generic parser captures the multiline content correctly.
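
The following Python sketch is a simplified illustration of the timestamp fallback behavior described above, assuming a single timestamp format; the actual generic parser recognizes many formats and time zones automatically.

    import re
    from datetime import datetime, timezone

    TS_PATTERN = re.compile(r"\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}")

    def entry_time(line, last_tz=timezone.utc):
        """Use the entry's own timestamp when one can be identified;
        otherwise fall back to the collection time."""
        m = TS_PATTERN.search(line)
        if m:
            # No zone info in this format: reuse the previously seen
            # time zone (or the agent's zone), as described above.
            return datetime.strptime(m.group(), "%Y-%m-%d %H:%M:%S").replace(tzinfo=last_tz)
        return datetime.now(timezone.utc)

    print(entry_time('start_time="2017-02-07 05:00:03" duration=4'))
    print(entry_time("no timestamp on this line"))  # falls back to collection time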

Set Up Database Instance Monitoring

Oracle Cloud Logging Analytics can extract database instance records based on the SQL query that you provide in the log source configuration.

Currently, the supported database types are Oracle Database Instance, Oracle Pluggable Database, Microsoft SQL Server Database Instance, and MySQL Database Instance.

For an example of how to collect logs from tables or views in Oracle Autonomous Database, see Collect Logs from Tables or Views in Oracle Autonomous Database (tutorial).

Note

To perform remote collection for a MySQL database instance, the following configuration must be done at the database instance:

  1. To allow access from a specific host where the management agent is installed:

    1. Create the new account authenticated by the specified password:

      CREATE USER '<mysql_user>'@'<host_name>' IDENTIFIED BY '<password>';
    2. Assign READ privileges for all the databases to the mysql_user user on host host_name:

      GRANT SELECT ON *.* TO '<mysql_user>'@'<host_name>' WITH GRANT OPTION;
    3. Save the updates to the user privileges by issuing the command:

      FLUSH PRIVILEGES;
  2. To allow access to a specific database from any host:

    1. Grant READ privileges to mysql_user from any valid host:

      GRANT SELECT ON <database_name>.* TO '<mysql_user>'@'%' WITH GRANT OPTION;
    2. Save the updates to the user privileges by issuing the command:

      FLUSH PRIVILEGES;

Overall Flow for Collecting Database Logs

The following are the high-level tasks for collecting log information stored in a database: create the database entity, create the database source, and provide the database entity credentials to the agent. The sections below describe each task.

Create the Database Entity

Create the database entity to reference your database instance and to enable log collection from it. If you are using management agent to collect logs, then after you install the management agent, you must come back here to configure the agent monitoring for the entity.

  1. Open the navigation menu and click Observability & Management. Under Logging Analytics, click Administration. The Administration Overview page opens.

  2. The administration resources are listed in the left-hand navigation pane under Resources. Click Entities.

  3. Ensure that your compartment selector on the left indicates that you are in the desired compartment for this new entity.

    Click Create.

  4. Select an Entity Type that suits your database instance, for example Oracle Database Instance.

    Provide a Name for the entity.

  5. Select the Management Agent Compartment in which the agent is installed, and then select the Management Agent to associate with the database entity so that the logs can be collected.

    Alternatively, you can create the entity first, edit it later and provide the management agent OCID after the agent is installed.

    Note

    Install Management Agent version 210403.1350 or later on your database host to ensure Microsoft SQL Server Database support.
  6. To enable SQL-based log collection, provide the following properties for an Oracle Database Instance or Oracle Pluggable Database entity:

    • port
    • hostname
    • sid or service_name

      If you provide both values, then Logging Analytics uses service_name for the collection.

    For log collection from a Microsoft SQL Server Database Instance or MySQL Database Instance, provide the following properties:

    • database_name
    • host_name
    • port

    If you intend to use Oracle-defined log sources to collect logs through management agents, then it is recommended that you provide any parameter values that are already defined for the chosen entity type. If the parameter values are not provided, then associating the source with this entity will fail because of the missing parameter values.

    Click Save.

Create the Database Source

  1. Open the navigation menu and click Observability & Management. Under Logging Analytics, click Administration. The Administration Overview page opens.

  2. The administration resources are listed in the left-hand navigation pane under Resources. Click Sources.

  3. In the Sources page, click Create Source.

    This displays the Create Source dialog box.

  4. In the Source field, enter the name for the source.

  5. From the Source Type list, select Database.

  6. Click Entity Type and select the required entity type. For example, Oracle Database Instance, Oracle Pluggable Database, Microsoft SQL Server Database Instance, or MySQL Database Instance.

  7. In the Database Queries tab, click Add to specify the details of the SQL query based on which Oracle Cloud Logging Analytics collects the database instance logs.

  8. Click Configure to display the Configure Column Mapping dialog box.

  9. In the Configure Column Mapping dialog box, map the SQL fields with the field names that would be displayed in the actual log records.

    Specify a Sequence Column. The value of this column determines the order of the records inserted into the table, and it must be unique and incrementing. For a sketch of how the sequence column is used during collection, see the example after these steps.

    See SQL Query Guidelines.

    Note

    The first mapped field with a data type of Timestamp is used as the time stamp of the log record. If no such field is present, then the collection time is used as the time of the log record.

    When the logs are collected for the first time after you created the log source (historic log collection):

    • If any field in the SQL query is mapped to the Time field, then the value of that field is used as the reference to upload the log records from the previous 30 days.

    • If none of the fields in the SQL query are mapped to the Time field, then a maximum of 10,000,000 records are uploaded.

    Click Done.

  10. Repeat Step 7 through Step 9 to add multiple SQL queries.

  11. Select Enabled for each of the SQL queries and then click Save.
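
To illustrate why the Sequence Column must be unique and incrementing, the following Python sketch simulates periodic collection against an in-memory SQLite table. The table and column names are hypothetical, and the agent's actual collection logic is internal to the product; the sketch only shows the watermark idea: remember the highest sequence value seen, and fetch only newer rows on the next cycle.

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE audit_log (seq INTEGER PRIMARY KEY, msg TEXT)")
    conn.executemany("INSERT INTO audit_log VALUES (?, ?)",
                     [(1, "login ok"), (2, "login failed"), (3, "logout")])

    def collect(conn, last_seq):
        """Fetch only rows newer than the watermark, ordered by the
        sequence column, then advance the watermark."""
        rows = conn.execute(
            "SELECT seq, msg FROM audit_log WHERE seq > ? ORDER BY seq",
            (last_seq,)).fetchall()
        return rows, (rows[-1][0] if rows else last_seq)

    last_seq = 0  # watermark persisted between collection cycles
    rows, last_seq = collect(conn, last_seq)
    print(rows)   # all three rows on the first cycle
    rows, last_seq = collect(conn, last_seq)
    print(rows)   # [] -- nothing new on the next cycle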

Provide the Database Entity Credentials

For each entity that’s used for collecting the data defined in the Database log source, provide the necessary credentials for the agent to connect to the entity and run the SQL query. These credentials need to be registered in a credential store that’s maintained locally by the management agent. The agent uses the credentials to collect the log data from the entity.
  1. Log in to the host on which the management agent is installed.

  2. Create the DBCreds type credentials JSON input file. For example agent_dbcreds.json:

    cat agent_dbcreds.json
    {
        "source": "lacollector.la_database_sql",
        "name": "LCAgentDBCreds.<entity_name>",
          "type": "DBCreds",
        "usage": "LOGANALYTICS",
        "disabled": "false",
        "properties": [
            {
                "name": "DBUserName",
                "value": "CLEAR[username]"
            },
            {
                "name": "DBPassword",
                "value": "CLEAR[password]"
            },
            {
                "name": "DBRole",
                "value": "CLEAR[normal]"
            }
        ]
    }

    The following properties must be provided in the input file as in the above example agent_dbcreds.json:

    • source : "lacollector.la_database_sql"
    • name : "LCAgentDBCreds.<entity_name>"

      entity_name is the value of the Name field that you entered while creating the entity.

    • type : "DBCreds"
    • usage : "LOGANALYTICS"
    • properties : user name, password and role. Role is optional.
  3. Use the credential_mgmt.sh script with the upsertCredentials operation to add the credentials to the agent's credential store:

    Syntax:

    $ cat <input_file> | sudo -u mgmt_agent /opt/oracle/mgmt_agent/agent_inst/bin/credential_mgmt.sh -o upsertCredentials -s <service_name>

    In the above command:

    • Input file: The input JSON file with the credential parameters, for example, agent_dbcreds.json.
    • Service name: Use logan as the name of the Oracle Cloud Logging Analytics plug-in deployed on the agent.

    By using the example values of the two parameters, the command would be:

    $ cat agent_dbcreds.json | sudo -u mgmt_agent /opt/oracle/mgmt_agent/agent_inst/bin/credential_mgmt.sh -o upsertCredentials -s logan

    After the credentials are successfully added, you can delete the input JSON file.

    For more information about managing credentials on the management agent credential store, see Management Agent Source Credentials in Management Agent Documentation.

Create a Label

Oracle Cloud Logging Analytics will automatically apply labels to your log entries as they are ingested based on the various label definitions in your source. For example, in the Oracle-defined source Database Alert Logs, the labels are already defined for the most common ORA-xxxx error codes. When analyzing your logs, you can search by label or by the actual ORA-xxxx error code. Oracle Cloud Logging Analytics offers multiple Oracle-defined labels that you can use with Oracle-defined sources or in your custom sources. You can use the Create Label page to create new labels that can be used in your custom sources.

  1. Open the navigation menu and click Observability & Management. Under Logging Analytics, click Administration. The Administration Overview page opens.

    The administration resources are listed in the left-hand navigation pane under Resources. Click Labels.

  2. In the Labels page, click Create Label.
  3. In the Label field, enter the label name. For example, enter Gateway Timeout.
  4. In the Description field, enter the details about the label.
  5. Labels can be marked as denoting a problem, with a priority, to make those log entries more prominent in the Log Explorer. To assign a priority to the label:
    1. In the Denotes Problem field, select the Yes check box.

    2. In the Problem Priority field, click the down arrow and select a priority. For example, select High.

    A log entry will be assigned a problem priority based on the labels that get attached to the log entry. In this case, if Gateway Timeout label has a problem priority of High, any log entry that matches a condition such that it gets the Gateway Timeout label would have a problem priority of High.

  6. In the Related Terms field, enter the terms that are related to the log entry.
  7. Click Save.
You can now use the new custom label in your log source to enrich a log entry. See Use Labels in Sources.
After the custom labels are created, you can use them in the label definitions in the log sources, and you can search the log data based on the labels that you’ve created. See Filter Logs by Labels. The labels can be used for search as in the following example use-cases:
  • To obtain a rapid summary of all error trends:

    In the Log Explorer, click the field Labels in the Fields section. In the Labels dialog box, enable the Show Trend Chart check box to see how the labels trend over time.

  • To identify the problem events:

    In the Log Explorer, drag the field Labels from the Fields section to the Group By section in the visualization panel. The visualization groups the problem logs by the problem priority that was set when the label was created.

  • To search across sources using the query language:

    Query by label to get a field summary consisting of the log records that have the specified label, across all sources.

  • To perform analysis using the query language in combination with clusters:

    Use the cluster visualization in combination with the label specified in the query, and view the log data within each cluster to classify the results.

Create a Field

Oracle Cloud Logging Analytics offers multiple Oracle-defined fields to use in parsers and extended field definitions. If you can’t find the right field names that you’re looking for, create custom fields that can be used with parse expressions.

Note that there are limits to the number of custom fields that can be created. Before you create a field, a message on the console indicates the number of remaining fields of each data type that you can create.

  1. Open the navigation menu and click Observability & Management. Under Logging Analytics, click Administration. The Administration Overview page opens.

    The administration resources are listed in the left-hand navigation pane under Resources. Click Fields.

  2. In the Fields page, click Create Field.
  3. In the Create Field page, in the Name field, enter the name of the field you want to create. For example, enter My Custom Field.
  4. In the Type field, select the data type for the field:
    • string Type: This field will store any type of text.
    • float Type: This field will store a numeric value with a decimal point.
    • long Type: This field will store a very large numeric whole value.
    • integer Type: This field will store a numeric whole value.
    • timestamp Type: This field will store a time-based field in a standard format.
  5. If the field can have multiple values in the log content, then select the Multi value check box.
  6. In the Description field, enter the description of the field. This description can help you identify the field in the Fields page.
You can now use the new custom field in a parser definition. See Create a Parser.

After you have ingested logs with a parser using your custom field, you can use it in the log explorer for filtering and searching. See Filter Logs by Pinned Attributes and Fields.

You can also use the field for visualizing and analyzing the log data using charts and controls. See Visualize Data Using Charts and Controls.

Create a Source By Duplicating an Existing One

If you’re not sure about how to create a source, then you can use any existing log source to create a new one.
  1. Open the navigation menu and click Observability & Management. Under Logging Analytics, click Administration. The Administration Overview page opens.

    The administration resources are listed in the left-hand navigation pane under Resources. Click Sources.

  2. In the Sources page, click the menu icon next to the source entry based on which you want to create a new source and select Duplicate.
    The Create Source page is displayed with the log definition fields populated with the definitions of the existing source.
  3. Modify the source definition and click Save.

Note: It is recommended that you edit and customize an Oracle-defined source instead of duplicating it and editing the copy. A duplicated source will not receive any future Oracle updates.