I've observed an occasional panic in the add_kubernetes_metadata processor where a channel can be closed twice. This typically occurs while the Beat is being shut down. Here is a sample from the Elastic Agent 8.6 logs.
{"log.level":"error","@timestamp":"2022-12-31T19:01:47.997Z","message":"Harvester crashed with: harvester panic with: close of closed channel\ngoroutine 230 [running]:\nruntime/debug.Stack()\n\t/usr/local/go/src/runtime/debug/stack.go:24 +0x64\ngithub.com/elastic/beats/v7/filebeat/input/filestream/internal/input-logfile.startHarvester.func1.1()\n\t/go/src/github.com/elastic/beats/filebeat/input/filestream/internal/input-logfile/harvester.go:167 +0x64\npanic({0x104366b00, 0x10492c890})\n\t/usr/local/go/src/runtime/panic.go:844 +0x26c\ngithub.com/elastic/beats/v7/libbeat/processors/add_kubernetes_metadata.(*cache).stop(0x14000697650)\n\t/go/src/github.com/elastic/beats/libbeat/processors/add_kubernetes_metadata/cache.go:97 +0x30\ngithub.com/elastic/beats/v7/libbeat/processors/add_kubernetes_metadata.(*kubernetesAnnotator).Close(0x140006a25c0)\n\t/go/src/github.com/elastic/beats/libbeat/processors/add_kubernetes_metadata/kubernetes.go:311 +0x84\ngithub.com/elastic/beats/v7/libbeat/processors.Close({0x104959438, 0x140006a25c0})\n\t/go/src/github.com/elastic/beats/libbeat/processors/processor.go:58 +0x8c\ngithub.com/elastic/beats/v7/libbeat/publisher/processing.(*group).Close(0x140008f8120)\n\t/go/src/github.com/elastic/beats/libbeat/publisher/processing/processors.go:95 +0x11c\ngithub.com/elastic/beats/v7/libbeat/processors.Close({0x104959910, 0x140008f8120})\n\t/go/src/github.com/elastic/beats/libbeat/processors/processor.go:58 +0x8c\ngithub.com/elastic/beats/v7/libbeat/publisher/processing.(*group).Close(0x140008f80f0)\n\t/go/src/github.com/elastic/beats/libbeat/publisher/processing/processors.go:95 +0x11c\ngithub.com/elastic/beats/v7/libbeat/processors.Close({0x104959910, 0x140008f80f0})\n\t/go/src/github.com/elastic/beats/libbeat/processors/processor.go:58 +0x8c\ngithub.com/elastic/beats/v7/libbeat/publisher/pipeline.(*client).Close.func1()\n\t/go/src/github.com/elastic/beats/libbeat/publisher/pipeline/client.go:167 +0x278\nsync.(*Once).doSlow(0x14000bf7e20, 0x14000743740)\n\t/usr/local/go/src/sync/once.go:68 +0xf0\nsync.(*Once).Do(0x14000bf7e20, 0x14000743740)\n\t/usr/local/go/src/sync/once.go:59 +0x48\ngithub.com/elastic/beats/v7/libbeat/publisher/pipeline.(*client).Close(0x14000bf7dd0)\n\t/go/src/github.com/elastic/beats/libbeat/publisher/pipeline/client.go:148 +0x80\ngithub.com/elastic/beats/v7/filebeat/beater.(*countingClient).Close(0x14000bbd0b0)\n\t/go/src/github.com/elastic/beats/filebeat/beater/channels.go:145 +0x40\ngithub.com/elastic/beats/v7/filebeat/input/filestream/internal/input-logfile.startHarvester.func1({0x104971ef8, 0x14000bb29c0})\n\t/go/src/github.com/elastic/beats/filebeat/input/filestream/internal/input-logfile/harvester.go:219 +0xb1c\ngithub.com/elastic/go-concert/unison.(*TaskGroup).Go.func1()\n\t/go/pkg/mod/github.com/elastic/go-concert@v0.2.0/unison/taskgroup.go:163 +0xec\ncreated by github.com/elastic/go-concert/unison.(*TaskGroup).Go\n\t/go/pkg/mod/github.com/elastic/go-concert@v0.2.0/unison/taskgroup.go:159 +0x130\n","component":{"binary":"filebeat","dataset":"elastic_agent.filebeat","id":"filestream-monitoring","type":"filestream"},"log.origin":{"file.line":168,"file.name":"input-logfile/harvester.go"},"id":"filestream-monitoring-agent","ecs.version":"1.6.0","log.logger":"input.filestream"}
Here is the same call stack, reformatted for readability:
Harvester crashed with: harvester panic with: close of closed channel

goroutine 230 [running]:
runtime/debug.Stack()
	/usr/local/go/src/runtime/debug/stack.go:24 +0x64
github.com/elastic/beats/v7/filebeat/input/filestream/internal/input-logfile.startHarvester.func1.1()
	/go/src/github.com/elastic/beats/filebeat/input/filestream/internal/input-logfile/harvester.go:167 +0x64
panic({0x104366b00, 0x10492c890})
	/usr/local/go/src/runtime/panic.go:844 +0x26c
github.com/elastic/beats/v7/libbeat/processors/add_kubernetes_metadata.(*cache).stop(0x14000697650)
	/go/src/github.com/elastic/beats/libbeat/processors/add_kubernetes_metadata/cache.go:97 +0x30
github.com/elastic/beats/v7/libbeat/processors/add_kubernetes_metadata.(*kubernetesAnnotator).Close(0x140006a25c0)
	/go/src/github.com/elastic/beats/libbeat/processors/add_kubernetes_metadata/kubernetes.go:311 +0x84
github.com/elastic/beats/v7/libbeat/processors.Close({0x104959438, 0x140006a25c0})
	/go/src/github.com/elastic/beats/libbeat/processors/processor.go:58 +0x8c
github.com/elastic/beats/v7/libbeat/publisher/processing.(*group).Close(0x140008f8120)
	/go/src/github.com/elastic/beats/libbeat/publisher/processing/processors.go:95 +0x11c
github.com/elastic/beats/v7/libbeat/processors.Close({0x104959910, 0x140008f8120})
	/go/src/github.com/elastic/beats/libbeat/processors/processor.go:58 +0x8c
github.com/elastic/beats/v7/libbeat/publisher/processing.(*group).Close(0x140008f80f0)
	/go/src/github.com/elastic/beats/libbeat/publisher/processing/processors.go:95 +0x11c
github.com/elastic/beats/v7/libbeat/processors.Close({0x104959910, 0x140008f80f0})
	/go/src/github.com/elastic/beats/libbeat/processors/processor.go:58 +0x8c
github.com/elastic/beats/v7/libbeat/publisher/pipeline.(*client).Close.func1()
	/go/src/github.com/elastic/beats/libbeat/publisher/pipeline/client.go:167 +0x278
sync.(*Once).doSlow(0x14000bf7e20, 0x14000743740)
	/usr/local/go/src/sync/once.go:68 +0xf0
sync.(*Once).Do(0x14000bf7e20, 0x14000743740)
	/usr/local/go/src/sync/once.go:59 +0x48
github.com/elastic/beats/v7/libbeat/publisher/pipeline.(*client).Close(0x14000bf7dd0)
	/go/src/github.com/elastic/beats/libbeat/publisher/pipeline/client.go:148 +0x80
github.com/elastic/beats/v7/filebeat/beater.(*countingClient).Close(0x14000bbd0b0)
	/go/src/github.com/elastic/beats/filebeat/beater/channels.go:145 +0x40
github.com/elastic/beats/v7/filebeat/input/filestream/internal/input-logfile.startHarvester.func1({0x104971ef8, 0x14000bb29c0})
	/go/src/github.com/elastic/beats/filebeat/input/filestream/internal/input-logfile/harvester.go:219 +0xb1c
github.com/elastic/go-concert/unison.(*TaskGroup).Go.func1()
	/go/pkg/mod/github.com/elastic/go-concert@v0.2.0/unison/taskgroup.go:163 +0xec
created by github.com/elastic/go-concert/unison.(*TaskGroup).Go
	/go/pkg/mod/github.com/elastic/go-concert@v0.2.0/unison/taskgroup.go:159 +0x130