This came about as a result of my work on a docker plugin.
When a container stops and the libbeat pipeline associated with it closes, libbeat throws a runtime panic. (from the docker logs, cleaned up as best I can)
panic: send on closed channel
goroutine 89 [running]:
github.com/elastic/beats/libbeat/publisher/pipeline.(*retryer).loop(0xc000291080)
/go/src/github.com/elastic/beats/libbeat/publisher/pipeline/retry.go:131
created by github.com/elastic/beats/libbeat/publisher/pipeline.newRetryer
/go/src/github.com/elastic/beats/libbeat/publisher/pipeline/retry.go:86 +0x11e
This appears to be a race of some kind, since it doesn't happen 100% of the time, and I haven't been able to work around it. Here's what happens on the beat side in the close method:
// Decrement the ref count.
hash := cl.pipelineHash
pm.pipelines[hash].refCount--

pm.Logger.Infof("Closing client first from pipelineManager")
// Close the client associated with the pipeline.
err := cl.Close()
if err != nil {
	return errors.Wrap(err, "error closing client")
}

if pm.pipelines[hash].refCount < 1 {
	pm.Logger.Infof("Pipeline closing")
	// Now close the pipeline itself.
	pm.pipelines[hash].pipeline.Close()
	delete(pm.pipelines, hash)
}
It appears this happens when a pipeline closes while an event is still in the queue. Shouldn't libbeat handle this case gracefully?
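For what it's worth, one thing I considered on my side is serializing the ref-count decrement and the close under a mutex, so that only one caller can observe the count hitting zero and close the pipeline. A minimal sketch of that idea (all names here are hypothetical, not libbeat's actual API; this also doesn't fix the retryer panic itself, it only rules out a double-close from my manager):

```go
package main

import (
	"fmt"
	"sync"
)

// pipeline stands in for the libbeat pipeline; hypothetical type.
type pipeline struct {
	refCount int
}

// pipelineManager guards its map and the ref counts with one mutex.
type pipelineManager struct {
	mu        sync.Mutex
	pipelines map[string]*pipeline
}

// release decrements the ref count under the lock and reports whether
// this caller is the one that should close the pipeline. Because the
// decrement, the check, and the delete happen atomically, two
// concurrent callers can never both see refCount < 1.
func (pm *pipelineManager) release(hash string) bool {
	pm.mu.Lock()
	defer pm.mu.Unlock()
	p, ok := pm.pipelines[hash]
	if !ok {
		return false
	}
	p.refCount--
	if p.refCount < 1 {
		delete(pm.pipelines, hash)
		return true
	}
	return false
}

func main() {
	pm := &pipelineManager{pipelines: map[string]*pipeline{
		"h": {refCount: 2},
	}}

	// Two goroutines race to release the same pipeline; exactly one
	// of them wins the right to close it.
	var wg sync.WaitGroup
	var cmu sync.Mutex
	closes := 0
	for i := 0; i < 2; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			if pm.release("h") {
				cmu.Lock()
				closes++
				cmu.Unlock()
			}
		}()
	}
	wg.Wait()
	fmt.Println(closes) // prints 1: exactly one caller performs the close
}
```

Even with this in place I still see the panic occasionally, which is why I suspect the race is inside the pipeline/retryer shutdown rather than in my ref counting.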