Add a docker plugin - dockerlogbeat#13761
fearful-symmetry merged 27 commits into elastic:feature/dockerbeat
Conversation
I'm still struggling to get the build system working. Docker plugins require a bunch of

Okay, the build tooling stuff at least works now. Still fighting with

jenkins, test this
Great to see this progressing! Let's try to open a PR against a feature branch as soon as we have something mergeable; you can solve the rest of the TODOs in more PRs.
x-pack/dockerlogbeat/Dockerfile
See the .go-version file for the current Go version in the repository.
Is there some way to access that from mage?
dev-tools/mage/settings.go exports GoVersion
Docker's example logging plugin does this, so every other plugin copied it and does the same thing. I'm guessing it's to 'reset' the reader in case of some undefined error, but I can't really make sense of it.
Implementation: https://github.com/gogo/protobuf/blob/master/io/uint32.go#L114
It does not reset the reader, but returns an ErrShortBuffer. The generated reader reads the complete event into memory and then unmarshals it. The 2e6 is the max read buffer size.
Once you get a message that is potentially bigger, you are stuck. The length field has already been read (read pointer has advanced), and you MUST NOT continue reading from the reader once you get ErrShortBuffer. This potentially leaves you with another problem.
I think it is fair to rule out overly big messages (to keep memory usage in check). But just creating a new reader (as done here) without consuming any more bytes will put you in the middle of a protobuf message, giving you an invalid read. If we want a reader that can drop large messages, then we need another implementation. The current loop will happily produce errors and reinitialize the reader over and over again until it finds something that looks like a valid length field. Given the way protobuf is encoded, there are length fields for nested structures and strings all over the place, so unmarshaling itself will likely keep failing as well.
Ah, thanks. I'll keep reading into this.
Have you considered an alternative non-blocking mode?
...There's a non-blocking mode?
We run CloseClientWithFile in a separate go-routine (building an unbuffered 'channel'), because we don't want the API to block if CloseClientWithFile blocks. Due to the mutex here we do allow CreateClientWithConfig to block indirectly, though. Is this ok?
Yes. If CreateClientWithConfig bombs out while the HTTP handler is waiting for it, the container startup will fail and an error will be returned to the user via the CLI or whatever they're using to start it. Otherwise a container might fail after a user has started it, or we get into some weird scenario where a container is running but the user doesn't know it's not logging. I'm less worried about the container stop passing its errors to the user.
To clarify: We are ok with CreateClientWithConfig and startLoggingHandler blocking?
The WaitClose setting (currently 0 and not exposed) could force startLoggingHandler to block for as many seconds as are configured on this setting (no matter which queue).
If you want similar behavior to other Beats, consider common.NewConfigFrom(...)
common.NewConfigFrom(...) is called after parseCfgKeys. I had to hack this in because NewConfigFrom doesn't like the map[string]string structures we get from docker. I had to coerce the string value into an interface wrapper so it'll handle config values like ["192.168.1.2","192.168.1.3"].
Hm, sounds like a bug in go-ucfg. It should happily accept map[string]string.
Oh, it'll accept map[string]string, but if you pass it something like

    map[string]string{
        "output.elasticsearch.hosts": `["192.168.1.2","192.168.1.3"]`,
    }

you get Elasticsearch url: http://['172.18.0.3:9200,'172.18.0.2']:9200
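A sketch of the coercion workaround being described: walk docker's flat `map[string]string` of --log-opt values and parse anything that looks like a JSON array into a real slice before handing the map to the config loader, so a host list isn't treated as one literal string. The helper name is hypothetical; the PR's actual code in parseCfgKeys may differ.

```go
package main

import (
	"encoding/json"
	"fmt"
	"strings"
)

// coerceLogOpts converts docker's map[string]string --log-opt values into
// map[string]interface{}, unmarshaling values that look like JSON arrays
// (e.g. `["192.168.1.2","192.168.1.3"]`) into []interface{} so a config
// loader sees a list of hosts rather than one string.
func coerceLogOpts(opts map[string]string) map[string]interface{} {
	out := make(map[string]interface{}, len(opts))
	for k, v := range opts {
		trimmed := strings.TrimSpace(v)
		if strings.HasPrefix(trimmed, "[") {
			var list []interface{}
			if err := json.Unmarshal([]byte(trimmed), &list); err == nil {
				out[k] = list
				continue
			}
		}
		out[k] = v // plain scalar values pass through unchanged
	}
	return out
}

func main() {
	cfg := coerceLogOpts(map[string]string{
		"output.elasticsearch.hosts": `["192.168.1.2","192.168.1.3"]`,
		"output.elasticsearch.index": "docker-logs",
	})
	fmt.Printf("%T %v\n", cfg["output.elasticsearch.hosts"], cfg["output.elasticsearch.hosts"])
}
```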
oh, I see. The -E flag is implemented in libbeat/cfgfile/cfgfile.go. It uses SettingFlag from libbeat/common/flags.go. The final flag parsing is implemented here: https://github.com/elastic/go-ucfg/blob/master/flag/value.go#L48
The value parser is an internal package, but maybe we can export it for use-cases like this: CLI settings getting passed indirectly.
Removed the
exekias left a comment
Left some comments, this looks good to me!
to restart the reader after an error?
Yah, Steffen and I were discussing this. This was from docker's example, but it's almost certainly a terrible idea.
x-pack/dockerlogbeat/readme.md
can we move this to an issue?
What exactly? The "how to debug" part?
Oh, the Issues? Yah, that's getting ported.
Iteratively removing the vendored deps, since I discovered that if you break something with govendor, it's not easy to fix with govendor.
* init commit of dockerlogbeat
* Add a docker plugin - Elastic Log Driver (#13761)
* Add a docker plugin - Elastic Log Driver (elastic#13761) (cherry picked from commit 4a7a8c3)
* init commit of dockerlogbeat
This is the result of discussions between myself and @henrikno about the needs of cloud monitoring. This PR adds a new beat, dockerlogbeat, which is a docker logging plugin.
Right now, this is a functional PoC. When configured, it'll send logs to Elasticsearch, and because it uses libbeat, you can (mostly) pass it the same beat-CLI style of config options you would any other beat:

    --log-opt output.elasticsearch.hosts="172.18.0.2:9200"

This plugin uses the libbeat publisher/pipeline library. When a new container starts that uses our plugin, the plugin checks the user-supplied config against a hashtable of libbeat pipelines. If the pipeline exists, it starts a new client for that pipeline and hands it to the logger. If not, it starts up a new pipeline with that config. When a container stops, it closes the client, and if there are no open clients associated with that pipeline, it closes the pipeline as well.
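The pipeline sharing described above can be sketched as a refcounted map keyed by a hash of the config: a second container with the same config reuses the existing pipeline, and the pipeline is torn down only when its last client closes. Types and names here are illustrative, not the PR's actual implementation.

```go
package main

import (
	"fmt"
	"sync"
)

// pipeline stands in for a libbeat pipeline plus its open-client count.
type pipeline struct {
	config  string
	clients int
}

// pipelinePool maps config hashes to shared pipelines.
type pipelinePool struct {
	mu        sync.Mutex
	pipelines map[string]*pipeline
}

// OpenClient returns the pipeline for this config, creating it on first use,
// and bumps its client refcount.
func (p *pipelinePool) OpenClient(configHash string) *pipeline {
	p.mu.Lock()
	defer p.mu.Unlock()
	pl, ok := p.pipelines[configHash]
	if !ok {
		pl = &pipeline{config: configHash}
		p.pipelines[configHash] = pl
	}
	pl.clients++
	return pl
}

// CloseClient drops one client; the pipeline is closed with its last client.
func (p *pipelinePool) CloseClient(configHash string) {
	p.mu.Lock()
	defer p.mu.Unlock()
	pl, ok := p.pipelines[configHash]
	if !ok {
		return
	}
	pl.clients--
	if pl.clients == 0 {
		delete(p.pipelines, configHash) // last client gone: close pipeline too
	}
}

func main() {
	pool := &pipelinePool{pipelines: map[string]*pipeline{}}
	pool.OpenClient("es-hosts-a") // container 1
	pool.OpenClient("es-hosts-a") // container 2, same config: shared pipeline
	pool.CloseClient("es-hosts-a")
	fmt.Println("pipelines after one close:", len(pool.pipelines))
	pool.CloseClient("es-hosts-a")
	fmt.Println("pipelines after both close:", len(pool.pipelines))
}
```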
There's a detailed readme with instructions on building, running and debugging.
This is a draft that needs a lot of work. Here's a tentative TODO.
For an MVP:
* --log-opts, as it's a map[string]string. If a user wants to set multiple ES hosts, they'll need to do something else.
* mage fmt. I can't seem to get it to add the right license headers, and it's hanging.
* /vendor.
* beats/ to the vendor dir in order to get the docker build to work inside a container.
* Close() operation to shut down a client. [libbeat] pipeline.Close() throwing panic #13892

GA release:

* documentation
* in config.json