This has been a long-standing issue in #13990 which we'll want to address sooner rather than later.
Right now, the docker plugin has no support for the `docker logs` command, which is a must for cloud adoption. We need to re-implement this behavior entirely in the plugin, and it needs to be entirely local: one of the primary use cases for `docker logs` is grabbing logs when upstream elasticsearch outputs are down. I've talked with @urso about this, and our best bet for implementing it in a short period of time would be to generate a log file for each configured pipeline that we can spit back when the user asks for it. There's also the fs-backed buffer that @faec is working on, although it might not be ready yet. This leaves us with two remaining questions:
- Where in the pipeline do we siphon logs?
- How do we write and manage log files? Does a "reaper" process periodically prune them?
Keep in mind we need to support the following options:
- A "since" date that reports logs after a cutoff
- A "follow" option
- A count of logs to return