Eventstats Command

Use the eventstats command to compute overall summary statistics, optionally grouped by fields, on the properties of groups identified by the link command. The output includes one new field for each aggregation.

Syntax

Note:

The trend aggregate operator is not permitted with eventstats.
eventstats <stats_function> (<field_name>) [as <new_field_name>] [, <stats_function> (<field_name>) [as <new_field_name>]]* [by <field_name> [, <field_name>]*]
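The behavior resembles a SQL window aggregate: each aggregation adds one field whose value is shared by every row in its group (or by all rows when no by clause is given). The following Python sketch is an illustration of that idea for avg, not engine code; the row data and field names are hypothetical:

```python
from collections import defaultdict

def eventstats_avg(rows, field, new_field, by=None):
    """Append a group-wide average of `field` to every row.

    Mimics: | eventstats avg(field) as new_field [by ...]
    `by` is a list of grouping field names; None means one global group.
    (Illustrative sketch only; the real engine's internals may differ.)
    """
    groups = defaultdict(list)
    for row in rows:
        key = tuple(row[k] for k in by) if by else ()
        groups[key].append(row)
    for members in groups.values():
        avg = sum(r[field] for r in members) / len(members)
        for r in members:
            r[new_field] = avg  # every row in the group gets the same value
    return rows

rows = [
    {"Server": "a", "Duration": 10},
    {"Server": "a", "Duration": 30},
    {"Server": "b", "Duration": 50},
]
eventstats_avg(rows, "Duration", "Avg. Duration", by=["Server"])
# Each row keeps its original fields and gains the group-level average.
```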

Parameters

The following table lists the parameters used with this command, along with their descriptions.

Parameter Description

field_name

Use this parameter to specify the field to which the function applies or, following by, the field by which to group the results.

new_field_name

Use this parameter to specify the new name for the field produced by the aggregate function.

Functions

The following table lists the functions available with this command, along with their examples.

Function Examples

Values

Lists the first 10 values of a particular field, with an optional delimiter. Can be applied to any field data type.

values(field_name)

  • values(Label)

  • values(Severity)

  • values('Client Host City')

  • values('Client Host Country')

  • values('Client Host Continent')

Unique

Lists the first 10 unique values of a particular field, with an optional delimiter. Can be applied to any field data type.

unique(field_name)

  • unique(Label)

  • unique(Severity)

  • unique('Client Host City')

  • unique('Client Host Country')

  • unique('Client Host Continent')

Earliest

Returns the oldest non-null value for the specified field. Null is returned if the field is empty for all of the query's results.

earliest(field_name)

  • earliest('OS Process ID')

Latest

Returns the most recent non-null value for the specified field. Null is returned if the field is empty for all of the query's results.

latest(field_name)

  • latest('Error ID')

Average

Computes the average value of the specified field.

Note: This function is supported only for numeric fields.

avg(field_name)

  • avg(‘Content Size’)

Count

Note: This function uses SQL-like semantics; that is, count returns the count of all rows, whereas count(field) returns the count of the rows where field is not null.

count(field_name)

  • count(Source)

Distinct Count

Returns the number of unique non-null values of the specified field.

distinctcount(field_name)

  • distinctcount(Severity)
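To make the count semantics above concrete, the following Python sketch (an illustration only, with hypothetical row data) contrasts count(field), which skips rows where the field is null, with distinctcount(field), which counts unique non-null values:

```python
rows = [
    {"Source": "syslog", "Severity": "warn"},
    {"Source": None,     "Severity": "warn"},
    {"Source": "syslog", "Severity": "error"},
]

# count(Source): rows where the field is not null -> 2 of the 3 rows
count_source = sum(1 for r in rows if r["Source"] is not None)

# distinctcount(Severity): unique non-null values -> {"warn", "error"}
distinct_severity = len({r["Severity"] for r in rows
                         if r["Severity"] is not None})

print(count_source, distinct_severity)  # 2 2
```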

Maximum

Returns the largest value of the specified field.

Note: This function is supported only for numeric fields.

max(field_name)

  • max('Content Size')

Median

Returns the median value of the specified field.

Note: This function is supported only for numeric fields.

median(field_name)

  • median('Content Size')

Minimum

Returns the smallest value of the specified field.

Note: This function is supported only for numeric fields.

min(field_name)

  • min('Content Size')

Percentage

Returns the value at the nth percentile of the specified field.

Note: This function is supported only for numeric fields.

pct(field_name, n)

  • pct('Content Size', 90)
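The interpolation method that pct uses is not specified here; the following Python sketch assumes the common nearest-rank convention, purely as an illustration:

```python
import math

def pct(values, n):
    """Nearest-rank percentile: the smallest value >= n% of the data.

    Assumption: the real engine's interpolation method may differ.
    """
    ordered = sorted(values)
    rank = math.ceil(n / 100 * len(ordered))
    return ordered[max(rank, 1) - 1]

sizes = [100, 200, 300, 400, 500, 600, 700, 800, 900, 1000]
print(pct(sizes, 90))  # nearest-rank 90th percentile -> 900
```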

Sum

Returns the sum of the values of the specified field.

Note: This function is supported only for numeric fields.

sum(field_name)

  • sum('Content Size')

Standard Deviation

Returns the standard deviation of the values of the specified field.

Note: This function is supported only for numeric fields.

stddev(field_name)

  • stddev('Content Size')

The following table lists the functions that are unique to this command, along with their examples.

Function Examples

first

Retrieves property value from the first row, as defined by the current sort order, within the retrieved result.

Syntax: first(field_name)

  • | eventstats first('Content Size')

last

Retrieves property value from the last row, as defined by the current sort order, within the retrieved result.

Syntax: last(field_name)

  • | eventstats last('Content Size')

nthval

Retrieves property value from the nth row, as defined by the current sort order, within the retrieved result.

Syntax: nthval(field_name, n)

  • | eventstats nthval('Content Size', 2)

lag

Retrieves property value from a row at a given offset prior to the current row. Default offset is 1.

Syntax: lag(field_name)

  • | eventstats lag('Content Size')

lead

Retrieves property value from a row at a given offset after the current row. Default offset is 1.

Syntax: lead(field_name)

  • | eventstats lead('Content Size')
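lag and lead are standard offset-based window functions. The following Python sketch (an illustration with hypothetical labels, not engine code) shows how the default offset of 1 behaves and what happens at the edges of the result, where no prior or following row exists:

```python
def lag(values, offset=1):
    """Value from `offset` rows before the current row; None at the edges."""
    return [values[i - offset] if i - offset >= 0 else None
            for i in range(len(values))]

def lead(values, offset=1):
    """Value from `offset` rows after the current row; None at the edges."""
    return [values[i + offset] if i + offset < len(values) else None
            for i in range(len(values))]

labels = ["Start", "Warning", "Abnormal Termination"]
print(lag(labels))   # [None, 'Start', 'Warning']
print(lead(labels))  # ['Warning', 'Abnormal Termination', None]
```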

rownum

Assigns a sequential number, starting from 1 and following the current sort order, to each row within the retrieved result.

Syntax: rownum

  • | eventstats rownum
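When rownum is combined with a by clause, the numbering restarts for each group. A Python sketch of that per-group numbering (illustrative only, with hypothetical rows):

```python
def rownum(rows, by=None):
    """Assign 1-based sequential row numbers, restarting per group.

    `by` is a list of grouping field names; None numbers all rows as one group.
    """
    counters = {}
    numbers = []
    for row in rows:
        key = tuple(row[k] for k in by) if by else ()
        counters[key] = counters.get(key, 0) + 1
        numbers.append(counters[key])
    return numbers

rows = [{"Database": "db1"}, {"Database": "db1"}, {"Database": "db2"}]
print(rownum(rows, by=["Database"]))  # [1, 2, 1]
```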

peak

Retrieves the property value with the peak magnitude within the retrieved result. A higher magnitude indicates a larger absolute value.

Syntax: peak(field_name)

  • | eventstats peak('Content Size')

peakscore

Retrieves the property value with a normalized peak score between 0 and 1, within the retrieved result. The score can be used to compare the peaks: the highest peak gets a score of 1, and all other values fall between 0 and 1.

Syntax: peakscore(field_name)

  • | eventstats peakscore('Content Size')

valley

Retrieves the property value with the valley magnitude within the retrieved result. A lower magnitude indicates a smaller absolute value.

Syntax: valley(field_name)

  • | eventstats valley('Content Size')

valleyscore

Retrieves the property value with a normalized valley score between 0 and 1, within the retrieved result. The score can be used to compare the valleys: the lowest valley gets a score of 0, and all other values fall between 0 and 1.

Syntax: valleyscore(field_name)

  • | eventstats valleyscore('Content Size')
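The exact scoring formula behind peakscore and valleyscore is not documented here; min-max normalization is one simple reading that matches the described 0-to-1 range. A hypothetical Python sketch under that assumption:

```python
def minmax_scores(values):
    """Scale values to [0, 1]: the largest maps to 1, the smallest to 0.

    Assumption: the engine's actual peak/valley scoring may differ;
    this only illustrates the documented 0..1 range and ordering.
    """
    lo, hi = min(values), max(values)
    if hi == lo:
        return [0.0 for _ in values]  # degenerate case: all values equal
    return [(v - lo) / (hi - lo) for v in values]

durations = [120, 480, 200, 960, 300]
scores = minmax_scores(durations)
# The largest value (960) maps to 1.0; the smallest (120) maps to 0.0.
```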

The following examples show how to use this command in typical scenarios.

Use the peak and peakscore functions to analyze sequential data in Link Visualization for peak magnitude and a normalized score between 0 and 1. For example, the following query highlights the highest peaks in user response time, using the Duration field from the Access Logs Log Source:

'Log Source' = 'FMW WebLogic Server Access Logs' 
| link span = 5minute Time, Server 
| stats avg(Duration) as 'Avg. Duration' 
| sort Server, 'Start Time' 
| eventstats peak('Avg. Duration')      as 'Peak Magnitude', 
             peakscore('Avg. Duration') as 'Peak Score' by Server 
| highlightgroups priority = high 
  [ * | where 'Peak Score' > 0.9 ] as 'High Response Time - Needs Attention'

This feature can be used for searching and grouping time-series data, for example, identifying all Out of Memory events that happened after a spike.

Group all the fatal logs by transaction, and get the overall average elapsed time across all the groups:

severity = fatal | link 'Transaction ID' | stats avg('Elapsed Time (System)') as 'Average Elapsed Time' | eventstats avg('Average Elapsed Time') as 'Overall Average Elapsed Time'

Group all the fatal logs by entity type and transaction, and get the overall average elapsed time across all the groups with the same entity type:

severity = fatal | link 'Entity Type', 'Transaction ID' | stats avg('Elapsed Time (System)') as 'Average Elapsed Time' | eventstats avg('Average Elapsed Time') as 'Overall Average Elapsed Time' by 'Entity Type'

In the Link visualization, add a number to each row:

'Log Source' = 'Database Alert Logs' and Label != null and Entity = MyDB 
| rename Entity as Database
| link span = 1minute Time, Database, Label
| sort Database, 'Start Time'
| eventstats rownum as 'Row Number' by Database

Identify the last event using the row number:

'Log Source' = 'Database Alert Logs' and Label != null and Entity = MyDB
| rename Entity as Database
| link span = 1minute Time, Database, Label
| sort Database, 'Start Time'
| eventstats rownum as 'Row Number' by Database
| addfields
   [ * | where Label = 'Abnormal Termination'
       | eventstats last('Row Number') as 'Crash Row'
   ]

Identify the previous and next events of a selected event:

'Log Source' = 'Database Alert Logs' and Label != null and Entity = MyDB
| rename Entity as Database
| link span = 1minute Time, Database, Label
| sort Database, 'Start Time'
| addfields
   [ *
      | where Label != null
      | eventstats lag(Label) as 'Previous Event',
                   lead(Label) as 'Next Event'
   ]