[System]: mistake in system-auth second date processor #14782

@BBQigniter

Description

Integration Name

System [system]

Dataset Name

system.auth

Integration Version

2.5.2

Agent Version

9.1.0

Agent Output Type

elasticsearch

Elasticsearch Version

9.1.0

OS Version and Architecture

Fedora 42

Software/API Version

No response

Error Message

none

Event Original

Affects all events that go through the logs-system.auth-2.5.2-log pipeline.

What did you do?

I added the following processors to all collectors in a "System" agent policy to get the logs with a correct timezone offset.

- drop_fields:
    fields:  
      - 'event.timezone'
- add_fields:
    target: event
    fields:
      timezone:  'Europe/Vienna'
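
As a quick sanity check that the processors are applied, a search like the following (illustrative; it assumes the default namespace, so adjust the data stream name as needed) should show event.timezone set to Europe/Vienna on newly ingested documents:

GET logs-system.auth-default/_search
{
  "size": 1,
  "sort": [{ "@timestamp": "desc" }],
  "_source": ["@timestamp", "event.timezone"],
  "query": { "exists": { "field": "event.timezone" } }
}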

What did you see?

That the timezone offset only worked for events of the system.syslog dataset.

system.auth still showed the wrong @timestamp in Kibana.

What did you expect to see?

That the events are displayed in Kibana with the correct timestamp and not with a timestamp in the future. (The syslog timestamp carries no timezone information, so when the event.timezone offset is not applied, local Europe/Vienna time is stored as if it were UTC and events appear one to two hours ahead.)

Anything else?

I then had a look at the system.syslog and system.auth pipelines and spotted that the second date processor, which should take the configured event.timezone field into account, might have a small mistake in it. So I created a custom pipeline:

PUT _ingest/pipeline/logs-system.auth@custom
{
  "description": "fixing timezone offset - in the original pipeline they have a mistake in the corresponding date-processor!",
  "processors": [
    {
      "grok": {
        "field": "event.original",
        "patterns": [
          "^<%{NONNEGINT}>(?:%{NONNEGINT} )?+(?:-|%{TIMESTAMP:system.auth.timestamp})  +(?:-|%{IPORHOST}) +(?:-|%{SYSLOG5424PRINTASCII})  +(?:-|%{POSINT}) +(?:-|%{SYSLOG5424PRINTASCII})  +(?:-|%{SYSLOG5424SD})?  +%{GREEDYDATA}$",
          "^%{TIMESTAMP:system.auth.timestamp} %{SYSLOGHOST}? %{DATA}(?:\\[%{POSINT])?:%{SPACE}%{GREEDYMULTILINE}$",
          "^<%{NONNEGINT}>(?:%{NONNEGINT} )?%{TIMESTAMP:system.auth.timestamp} %{SYSLOGHOST}? %{DATA}(?:\\[%{POSINT}\\])?:%{SPACE}%{GREEDYMULTILINE}$"
        ],
        "pattern_definitions": {
          "GREEDYMULTILINE": "(.|\\n)*",
          "TIMESTAMP": "(?:%{TIMESTAMP_ISO8601}|%{SYSLOGTIMESTAMP})"
        },
        "if": "ctx.log?.syslog != null",
        "tag": "grok-message-header",
        "description": "Grok the message header."
      }
    },
    {
      "date": {
        "field": "system.auth.timestamp",
        "formats": [
          "MMM  d HH:mm:ss",
          "MMM dd HH:mm:ss",
          "ISO8601"
        ],
        "target_field": "@timestamp",
        "timezone": "{{ event.timezone }}",
        "if": "ctx.event?.timezone != null && ctx['@timestamp'] != null",
        "tag": "date_timestamp_tz",
        "on_failure": [
          {
            "append": {
              "field": "error.message",
              "value": "{{{ _ingest.on_failure_message }}}"
            }
          }
        ]
      }
    },
    {
      "remove": {
        "tag": "remove_timestamp",
        "field": "system.auth.timestamp",
        "ignore_missing": true
      }
    }
  ]
}
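
Before relying on it, the custom pipeline can be checked with the Simulate Pipeline API. The sample document below is made up for illustration; note that the grok only runs when log.syslog is present, and the second date processor requires both event.timezone and an existing @timestamp:

POST _ingest/pipeline/logs-system.auth@custom/_simulate
{
  "docs": [
    {
      "_source": {
        "@timestamp": "2025-09-03T09:15:00.000Z",
        "event": {
          "original": "Sep  3 09:15:00 myhost sshd[1234]: Accepted password for alice",
          "timezone": "Europe/Vienna"
        },
        "log": { "syslog": {} }
      }
    }
  ]
}

In the response, @timestamp should carry the +02:00 Vienna offset instead of being interpreted as UTC.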

With this custom pipeline I get a correctly fixed @timestamp.

If you compare the relevant line of the default system.auth pipeline with the corresponding line in the date processor of the system.syslog pipeline, you see that there is an additional curly bracket.
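
For reference, the timezone option of the date processor is rendered as a Mustache template at ingest time. A minimal standalone simulation (illustrative values, not the upstream pipeline) shows a correctly templated processor applying the event.timezone offset:

POST _ingest/pipeline/_simulate
{
  "pipeline": {
    "processors": [
      {
        "date": {
          "field": "ts",
          "formats": ["MMM  d HH:mm:ss", "MMM dd HH:mm:ss"],
          "timezone": "{{ event.timezone }}"
        }
      }
    ]
  },
  "docs": [
    {
      "_source": {
        "ts": "Sep  3 09:15:00",
        "event": { "timezone": "Europe/Vienna" }
      }
    }
  ]
}

With a malformed template (for example a stray brace), the rendered value is not a valid timezone ID, the offset is not applied, and local timestamps end up interpreted as UTC, which would explain the future-looking @timestamp values described above.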
