Intermittently, the monitoring started by Agent enters a loop where it repeats this message ~10K times a second:
{"log.level":"error","@timestamp":"2023-03-02T11:59:42.394Z","message":"Failed to publish event: attempt to use a closed processor","component":{"binary":"filebeat","dataset":"elastic_agent.filebeat","id":"filestream-monitoring","type":"filestream"},"log":{"source":"filestream-monitoring"},"log.logger":"publisher","log.origin":{"file.line":102,"file.name":"pipeline/client.go"},"service.name":"filebeat","ecs.version":"1.6.0","ecs.version":"1.6.0"}
This error is inconsistent -- on some runs it begins soon after startup, on many runs it never happens at all. When it does happen, it severely degrades or blocks other ingestion. Repeating a run with identical configuration and identical binaries does not reliably reproduce it.
Every time I've encountered this so far has been while testing the shipper, but since most of my agent runs test the shipper, I'm not sure that's a reliable correlation: the error occurs before events reach the shipper output, which is the first place event handling diverges significantly when the shipper is in use. (There could be some shipper-specific agent-injected configuration that affects the processor list, though I'm not aware of anything like that.)
The most recent configuration that exhibited the error is:
outputs:
  default:
    type: elasticsearch
    log_level: debug
    enabled: true
    hosts: [https://127.0.0.1:9200]
    username: "elastic"
    password: [password]
    allow_older_versions: true
    ssl:
      verification_mode: none
    shipper:
      enabled: true
inputs:
  - type: system/metrics
    id: unique-system-metrics-input
    data_stream.namespace: default
    use_output: default
    streams:
      - metricset: cpu
        data_stream.dataset: system.cpu
      - metricset: memory
        data_stream.dataset: system.memory
      - metricset: network
        data_stream.dataset: system.network
      - metricset: filesystem
        data_stream.dataset: system.filesystem