
dockerd seems to ignore loglevel setting #37343

@nscheer

Description


dockerd is configured to log only warnings (and more severe events), but it does not seem to honor the setting.

Steps to reproduce the issue:

  1. Use the following daemon.json:
{
    "storage-driver": "overlay2",
    "log-driver": "json-file",
    "log-opts":
    {
        "max-size": "10m",
        "max-file": "3"
    },
    "log-level": "warn"
}
  2. Do something that produces info-level log messages, e.g. start a container or restart the daemon.
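The repro steps can be sketched as follows (a sketch, assuming a systemd-based host; the `/tmp` path stands in for `/etc/docker/daemon.json`, and an invalid file would make dockerd fail to start, so validating it first rules out a silently ignored config):

```shell
# Write a daemon.json like the one above (using /tmp here for illustration;
# the real file lives at /etc/docker/daemon.json).
cat > /tmp/daemon.json <<'EOF'
{
    "storage-driver": "overlay2",
    "log-driver": "json-file",
    "log-opts": {
        "max-size": "10m",
        "max-file": "3"
    },
    "log-level": "warn"
}
EOF

# Confirm the file is valid JSON before restarting the daemon;
# a syntax error would prevent dockerd from starting at all.
python3 -m json.tool /tmp/daemon.json >/dev/null && echo "daemon.json is valid JSON"

# Then restart the daemon and trigger some activity, e.g.:
#   sudo systemctl restart docker
#   docker run --rm hello-world
#   journalctl -u docker --since "1 min ago" | grep 'level=info'
```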

Describe the results you received:

Info-level messages still appear, e.g.

Jun 25 17:14:04 app1 dockerd: time="2018-06-25T17:14:04+02:00" level=info msg="shim docker-containerd-shim started" address="/containerd-shim/moby/52736cb482db51c221afde95eaa637474d96197fc6ce287e769b92e072d11f9f/shim.sock" debug=false module="containerd/tasks" pid=15673
Jun 25 17:14:05 app1 dockerd: time="2018-06-25T17:14:05+02:00" level=info msg="shim reaped" id=f4e103bdde0753b506aba7ee269696db8fea9c8fc9432828d7f1e3717d5456c9 module="containerd/tasks"

Describe the results you expected:

No informational messages should be visible in the log.

Output of docker version:

Client:
 Version:      18.03.1-ce
 API version:  1.37
 Go version:   go1.9.5
 Git commit:   9ee9f40
 Built:        Thu Apr 26 07:20:16 2018
 OS/Arch:      linux/amd64
 Experimental: false
 Orchestrator: swarm

Server:
 Engine:
  Version:      18.03.1-ce
  API version:  1.37 (minimum version 1.12)
  Go version:   go1.9.5
  Git commit:   9ee9f40
  Built:        Thu Apr 26 07:23:58 2018
  OS/Arch:      linux/amd64
  Experimental: false

Output of docker info:

Containers: 32
 Running: 29
 Paused: 0
 Stopped: 3
Images: 7
Server Version: 18.03.1-ce
Storage Driver: overlay2
 Backing Filesystem: xfs
 Supports d_type: true
 Native Overlay Diff: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
 Volume: local
 Network: bridge host macvlan null overlay
 Log: awslogs fluentd gcplogs gelf journald json-file logentries splunk syslog
Swarm: active
 NodeID: 1aife2eipnelvxw2enugh4t0i
 Is Manager: true
 ClusterID: 4f6i99yx17lfmnqi7510bfizl
 Managers: 3
 Nodes: 3
 Orchestration:
  Task History Retention Limit: 5
 Raft:
  Snapshot Interval: 10000
  Number of Old Snapshots to Retain: 0
  Heartbeat Tick: 1
  Election Tick: 10
 Dispatcher:
  Heartbeat Period: 5 seconds
 CA Configuration:
  Expiry Duration: 3 months
  Force Rotate: 0
 Autolock Managers: false
 Root Rotation In Progress: false
 Node Address: *****
 Manager Addresses:
  *****:2377
  *****:2377
  *****:2377
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: 773c489c9c1b21a6d78b5c538cd395416ec50f88
runc version: 4fc53a81fb7c994640722ac585fa9ca548971871
init version: 949e6fa
Security Options:
 seccomp
  Profile: default
Kernel Version: 3.10.0-693.21.1.el7.x86_64
Operating System: Red Hat Enterprise Linux Server 7.4 (Maipo)
OSType: linux
Architecture: x86_64
CPUs: 24
Total Memory: 31.12GiB
Name: app2
ID: 7CKM:M5BL:GOSN:TVCL:32YP:GVNI:BJKZ:U4XP:WR4C:KNV5:PEIO:DFDJ
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Registry: https://index.docker.io/v1/
Labels:
Experimental: false
Insecure Registries:
 127.0.0.0/8
Live Restore Enabled: false

Maybe I misunderstood the log-level setting - is there another switch?
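One thing worth ruling out (a sketch, not a confirmed cause): `log-level` can also be passed to dockerd as a command-line flag, and dockerd refuses to start when the same directive is set both as a flag and in daemon.json, so if the flag is present on its own it is what takes effect. The running daemon's flags can be inspected like this:

```shell
# Check whether the running dockerd was started with a --log-level flag
# (e.g. from a systemd ExecStart line) that would apply instead of an
# absent daemon.json entry.
ps -o args= -C dockerd | tr ' ' '\n' | grep -- '--log-level' || echo "no --log-level flag"
```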

Any help is appreciated!

Thanks!

Greetings

Nico

PS: Edited to remove some (maybe) sensitive information.

Metadata

Labels: area/daemon, area/runtime, kind/bug