Description
dockerd reproducibly crashes with fatal error: runtime: out of memory while a client runs docker logs -f.
The local log driver appears to allocate huge amounts of memory while logs are being followed.
Steps to reproduce the issue:
- Install Docker 19.03.2 on Ubuntu 18.04.
- Set the log-driver to local.
- Run a container that logs a lot, so we can trigger the issue fast. A suitable command is:
docker run -d --rm --name crazy_logger alpine cat /dev/urandom
- Follow the log output:
docker logs -f crazy_logger > /dev/null
- Wait for the crash or the massive memory usage (if you have enough RAM).
Describe the results you received:
dockerd crashed with the following log:
dockerd[4489]: fatal error: runtime: out of memory
dockerd[4489]: runtime stack:
dockerd[4489]: runtime.throw(0x5578d80974e2, 0x16)
dockerd[4489]: #011/usr/local/go/src/runtime/panic.go:617 +0x74
dockerd[4489]: runtime.sysMap(0xc004000000, 0xd4000000, 0x5578da578358)
dockerd[4489]: #011/usr/local/go/src/runtime/mem_linux.go:170 +0xc9
dockerd[4489]: runtime.(*mheap).sysAlloc(0x5578da55b940, 0xd267c000, 0x5578da55b950, 0x6933e)
dockerd[4489]: #011/usr/local/go/src/runtime/malloc.go:633 +0x1cf
dockerd[4489]: runtime.(*mheap).grow(0x5578da55b940, 0x6933e, 0x0)
dockerd[4489]: #011/usr/local/go/src/runtime/mheap.go:1222 +0x44
dockerd[4489]: runtime.(*mheap).allocSpanLocked(0x5578da55b940, 0x6933e, 0x5578da578368, 0x0)
dockerd[4489]: #011/usr/local/go/src/runtime/mheap.go:1150 +0x381
dockerd[4489]: runtime.(*mheap).alloc_m(0x5578da55b940, 0x6933e, 0x101, 0x0)
dockerd[4489]: #011/usr/local/go/src/runtime/mheap.go:977 +0xc6
dockerd[4489]: runtime.(*mheap).alloc.func1()
dockerd[4489]: #011/usr/local/go/src/runtime/mheap.go:1048 +0x4e
dockerd[4489]: runtime.(*mheap).alloc(0x5578da55b940, 0x6933e, 0x10101, 0xc000654f00)
dockerd[4489]: #011/usr/local/go/src/runtime/mheap.go:1047 +0x8c
dockerd[4489]: runtime.largeAlloc(0xd267b395, 0xc0006b0101, 0x3)
dockerd[4489]: #011/usr/local/go/src/runtime/malloc.go:1055 +0x9b
dockerd[4489]: runtime.mallocgc.func1()
dockerd[4489]: #011/usr/local/go/src/runtime/malloc.go:950 +0x48
dockerd[4489]: runtime.systemstack(0x0)
dockerd[4489]: #011/usr/local/go/src/runtime/asm_amd64.s:351 +0x63
dockerd[4489]: runtime.mstart()
dockerd[4489]: #011/usr/local/go/src/runtime/proc.go:1153
dockerd[4489]: goroutine 400 [running]:
dockerd[4489]: runtime.systemstack_switch()
dockerd[4489]: #011/usr/local/go/src/runtime/asm_amd64.s:311 fp=0xc000befb08 sp=0xc000befb00 pc=0x5578d67a8450
dockerd[4489]: runtime.mallocgc(0xd267b395, 0x5578d8ca01c0, 0x1, 0xc000befb28)
dockerd[4489]: #011/usr/local/go/src/runtime/malloc.go:949 +0x884 fp=0xc000befba8 sp=0xc000befb08 pc=0x5578d6758b44
dockerd[4489]: runtime.makeslice(0x5578d8ca01c0, 0xd267b395, 0xd267b395, 0x4)
dockerd[4489]: #011/usr/local/go/src/runtime/slice.go:49 +0x6e fp=0xc000befbd8 sp=0xc000befba8 pc=0x5578d679118e
dockerd[4489]: github.com/docker/docker/daemon/logger/local.decodeFunc.func1(0x5578d912c760, 0xc00000f040, 0x0)
dockerd[4489]: #011/go/src/github.com/docker/docker/daemon/logger/local/read.go:131 +0x38f fp=0xc000befcc0 sp=0xc000befbd8 pc=0x5578d7321e1f
dockerd[4489]: github.com/docker/docker/daemon/logger/loggerutils.followLogs(0xc000ba2018, 0xc000c043c0, 0xc0009680c0, 0x5578d90e4f18, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0)
dockerd[4489]: #011/go/src/github.com/docker/docker/daemon/logger/loggerutils/logfile.go:637 +0x4c3 fp=0xc000befdf8 sp=0xc000befcc0 pc=0x5578d7319353
dockerd[4489]: github.com/docker/docker/daemon/logger/loggerutils.(*LogFile).ReadLogs(0xc0007fe140, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0xffffffffffffffff, 0x1, 0xc000c043c0)
dockerd[4489]: #011/go/src/github.com/docker/docker/daemon/logger/loggerutils/logfile.go:376 +0x269 fp=0xc000beff30 sp=0xc000befdf8 pc=0x5578d7316dc9
dockerd[4489]: github.com/docker/docker/daemon/logger/local.(*driver).readLogs(0xc000976ac0, 0xc000c043c0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0xffffffffffffffff, 0x1)
dockerd[4489]: #011/go/src/github.com/docker/docker/daemon/logger/local/read.go:31 +0xea fp=0xc000beff90 sp=0xc000beff30 pc=0x5578d73207ea
dockerd[4489]: runtime.goexit()
dockerd[4489]: #011/usr/local/go/src/runtime/asm_amd64.s:1337 +0x1 fp=0xc000beff98 sp=0xc000beff90 pc=0x5578d67aa551
dockerd[4489]: created by github.com/docker/docker/daemon/logger/local.(*driver).ReadLogs
dockerd[4489]: #011/go/src/github.com/docker/docker/daemon/logger/local/read.go:20 +0x16c
dockerd[4489]: goroutine 1 [chan receive]:
dockerd[4489]: main.(*DaemonCli).start(0xc00002b7a0, 0xc00011a540, 0x0, 0x0)
dockerd[4489]: #011/go/src/github.com/docker/docker/cmd/dockerd/daemon.go:259 +0xcbf
dockerd[4489]: main.runDaemon(...)
dockerd[4489]: #011/go/src/github.com/docker/docker/cmd/dockerd/docker_unix.go:13
dockerd[4489]: main.newDaemonCommand.func1(0xc0006eac80, 0xc00002b740, 0x0, 0x3, 0x0, 0x0)
dockerd[4489]: #011/go/src/github.com/docker/docker/cmd/dockerd/docker.go:34 +0x7d
dockerd[4489]: github.com/docker/docker/vendor/github.com/spf13/cobra.(*Command).execute(0xc0006eac80, 0xc00004e0d0, 0x3, 0x3, 0xc0006eac80, 0xc00004e0d0)
dockerd[4489]: #011/go/src/github.com/docker/docker/vendor/github.com/spf13/cobra/command.go:762 +0x467
dockerd[4489]: github.com/docker/docker/vendor/github.com/spf13/cobra.(*Command).ExecuteC(0xc0006eac80, 0x0, 0x0, 0x10)
dockerd[4489]: #011/go/src/github.com/docker/docker/vendor/github.com/spf13/cobra/command.go:852 +0x2ee
dockerd[4489]: github.com/docker/docker/vendor/github.com/spf13/cobra.(*Command).Execute(...)
dockerd[4489]: #011/go/src/github.com/docker/docker/vendor/github.com/spf13/cobra/command.go:800
dockerd[4489]: main.main()
(Log truncated, full log can be provided)
Describe the results you expected:
Docker neither crashing nor using huge amounts of memory.
Additional information:
- With the described setup we can reproduce the issue within seconds. With a regular container logging normally, it can take some hours to trigger the issue.
- It only seems to happen with docker logs -f.
- It seems to be related to the local logging driver. We tested briefly with json-file and could not reproduce the issue.
- On a VM with 32 GiB of RAM, dockerd does not crash, but uses around 13 GiB of memory after a few seconds.
- Letting the crazy logging container run without following the log causes no problems; the memory usage of dockerd stays inconspicuous.
Output of docker version:
Client: Docker Engine - Community
Version: 19.03.2
API version: 1.40
Go version: go1.12.8
Git commit: 6a30dfc
Built: Thu Aug 29 05:29:11 2019
OS/Arch: linux/amd64
Experimental: false
Server: Docker Engine - Community
Engine:
Version: 19.03.2
API version: 1.40 (minimum version 1.12)
Go version: go1.12.8
Git commit: 6a30dfc
Built: Thu Aug 29 05:27:45 2019
OS/Arch: linux/amd64
Experimental: false
containerd:
Version: 1.2.6
GitCommit: 894b81a4b802e4eb2a91d1ce216b8817763c29fb
runc:
Version: 1.0.0-rc8
GitCommit: 425e105d5a03fabd737a126ad93d62a9eeede87f
docker-init:
Version: 0.18.0
GitCommit: fec3683
Output of docker info:
Client:
Debug Mode: false
Server:
Containers: 0
Running: 0
Paused: 0
Stopped: 0
Images: 2
Server Version: 19.03.2
Storage Driver: overlay2
Backing Filesystem: extfs
Supports d_type: true
Native Overlay Diff: true
Logging Driver: local
Cgroup Driver: cgroupfs
Plugins:
Volume: local
Network: bridge host ipvlan macvlan null overlay
Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: 894b81a4b802e4eb2a91d1ce216b8817763c29fb
runc version: 425e105d5a03fabd737a126ad93d62a9eeede87f
init version: fec3683
Security Options:
apparmor
seccomp
Profile: default
Kernel Version: 4.15.0-1050-aws
Operating System: Ubuntu 18.04.3 LTS
OSType: linux
Architecture: x86_64
CPUs: 2
Total Memory: 3.794GiB
Name: ip-10-1-16-91
ID: QCAE:K33Q:RKON:R3L2:KV5V:EZVD:3SOQ:EHHX:X36A:F5NU:7W5W:F2KO
Docker Root Dir: /var/lib/docker
Debug Mode: false
Registry: https://index.docker.io/v1/
Labels:
Experimental: false
Insecure Registries:
127.0.0.0/8
Live Restore Enabled: false
WARNING: No swap limit support
Additional environment details (AWS, VirtualBox, physical, etc.):
We reproduced it on an AWS EC2 instance (t3.medium in eu-central-1) based on ami-08a162fe1419adb2a (Ubuntu 18.04 LTS 20190918).