Cherry-pick #14954 to 7.x: Autodiscover provider for Nomad #23392
Merged
jsoriano merged 1 commit into elastic:7.x on Jan 7, 2021
Conversation
Initial features to support logs collection from applications deployed in Nomad. Add a new `nomad` autodiscover provider (based on the Kubernetes provider). With this new provider, it is possible to start new harvesters by looking at the jobs allocated on each node. With this, filebeat can be run as a system job on each node and each filebeat instance is responsible for enriching and shipping the local logs. This autodiscover provider supports hints-based autodiscover. Add a new `add_nomad_metadata` processor that matches events to specific allocations and adds the metadata. Co-authored-by: Jaime Soriano Pastor <jaime.soriano@elastic.co> (cherry picked from commit 24397d8)
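As an illustration of the "system job on each node" deployment mentioned above, a Nomad job for Filebeat could look roughly like this. This is a sketch, not taken from the PR; the datacenter name, driver, and image tag are assumptions:

```hcl
job "filebeat" {
  datacenters = ["dc1"]
  type        = "system"  # the system scheduler places one instance on every client node

  group "filebeat" {
    task "filebeat" {
      driver = "docker"
      config {
        # Illustrative image/tag; pick the version matching your stack
        image = "docker.elastic.co/beats/filebeat:7.11.0"
        args  = ["-e"]
      }
    }
  }
}
```

With this layout, each Filebeat instance only enriches and ships logs from allocations on its own node.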
Contributor

Pinging @elastic/integrations (Team:Integrations)
Contributor

Pinging @elastic/integrations (Team:Platforms)
Contributor
💚 Build Succeeded
Build stats
Test stats 🧪
| Test | Results |
|---|---|
| Failed | 0 |
| Passed | 17510 |
| Skipped | 1408 |
| Total | 18918 |
ChrsMark approved these changes on Jan 7, 2021
Cherry-pick of PR #14954 to 7.x branch. Original message:
At trivago we run an internal cloud using Nomad from HashiCorp. Our logging solution is based on ELK and we use Filebeat to ship the logs from our client nodes into Kafka, from where they are later ingested into Elasticsearch using Logstash. Previously we used an input looking for new logs in a defined path, but the logs lacked a lot of context/metadata from the job definition/allocation.

This PR adds a new autodiscover provider (architecture based on the Kubernetes provider). With this new provider, it is possible to start new harvesters by looking at the jobs allocated on each node. We currently run Filebeat as a system job on each node, and each Filebeat instance is responsible for enriching and shipping the local logs.
Example of the configuration for the new provider:
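A minimal sketch of such a configuration, assuming the documented `nomad` provider type with hints enabled; the `scope` setting and the allocation log path below are illustrative assumptions, not taken from this PR:

```yaml
filebeat.autodiscover:
  providers:
    - type: nomad
      # Assumed: only watch allocations placed on the local node
      scope: local
      hints.enabled: true
      templates:
        - config:
            - type: log
              paths:
                # Assumed Nomad data-dir layout for allocation logs
                - /var/lib/nomad/alloc/${data.nomad.allocation.id}/alloc/logs/*
```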
By using the autodiscover provider it is possible to define custom processors using the `meta` stanza on the Nomad job (similar to how it is done using labels on Kubernetes). For instance, a custom `dissect` tokenizer can be defined for the logs of a specific task, which adds a `dissect` field to the events shipped for that task. By default the following fields are added from the Nomad job/allocation:
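As an illustration of such a per-task hint, the `meta` stanza of a Nomad task could carry a dissect tokenizer along these lines. The hint key follows the usual `co.elastic.logs/` hints convention; the task name and tokenizer pattern here are made up:

```hcl
task "webapp" {
  driver = "docker"

  meta {
    # Illustrative hint: define a dissect tokenizer for this task's logs
    "co.elastic.logs/processors.dissect.tokenizer" = "%{client_ip} %{method} %{path}"
  }
}
```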
- `job`
- `namespace`
- `status`
- `type` (job type: system/service/batch)
- `task.*` (information about the task and custom metadata defined in the job/group/task using the `meta` stanza)
- `datacenters`
- `region`

The PR also includes an `add_nomad_metadata` processor that matches events to specific allocations and adds the metadata. We've been running this in our production clusters for a few weeks now.
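A minimal sketch of enabling the `add_nomad_metadata` processor mentioned above, assuming its default settings are acceptable:

```yaml
processors:
  - add_nomad_metadata: ~
```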
TODO: