Open
Labels: area/networking, area/security/userns, area/swarm, kind/bug, version/1.12
Description
Output of docker version:
Client:
Version: 1.12.1
API version: 1.24
Go version: go1.6.3
Git commit: 23cf638
Built:
OS/Arch: linux/amd64
Server:
Version: 1.12.1
API version: 1.24
Go version: go1.6.3
Git commit: 23cf638
Built:
OS/Arch: linux/amd64
Output of docker info:
Containers: 2
Running: 2
Paused: 0
Stopped: 0
Images: 4
Server Version: 1.12.1
Storage Driver: devicemapper
Pool Name: docker-202:17-100667008-pool
Pool Blocksize: 65.54 kB
Base Device Size: 10.74 GB
Backing Filesystem: xfs
Data file: /dev/loop0
Metadata file: /dev/loop1
Data Space Used: 1.49 GB
Data Space Total: 107.4 GB
Data Space Available: 30.68 GB
Metadata Space Used: 2.908 MB
Metadata Space Total: 2.147 GB
Metadata Space Available: 2.145 GB
Thin Pool Minimum Free Space: 10.74 GB
Udev Sync Supported: true
Deferred Removal Enabled: false
Deferred Deletion Enabled: false
Deferred Deleted Device Count: 0
Data loop file: /var/lib/docker/500000.500000/devicemapper/devicemapper/data
WARNING: Usage of loopback devices is strongly discouraged for production use. Use `--storage-opt dm.thinpooldev` to specify a custom block storage device.
Metadata loop file: /var/lib/docker/500000.500000/devicemapper/devicemapper/metadata
Library Version: 1.02.107-RHEL7 (2015-12-01)
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
Volume: local
Network: host bridge null overlay
Swarm: active
NodeID: 3knd83ehdwck7rmmc3dap8ety
Is Manager: true
ClusterID: d107ppgdkmez2bmzg3wyab68d
Managers: 1
Nodes: 4
Orchestration:
Task History Retention Limit: 5
Raft:
Snapshot Interval: 10000
Heartbeat Tick: 1
Election Tick: 3
Dispatcher:
Heartbeat Period: 5 seconds
CA Configuration:
Expiry Duration: 3 months
Node Address: 172.18.50.14
Runtimes: runc
Default Runtime: runc
Security Options: seccomp
Kernel Version: 3.10.0-327.10.1.el7.x86_64
Operating System: CentOS Linux 7 (Core)
OSType: linux
Architecture: x86_64
CPUs: 2
Total Memory: 7.389 GiB
Name: ip-172-18-50-14.ec2.internal
ID: AYJ2:O4KD:3G23:HRBD:WN5Q:BN6T:BSNS:BNF4:ORUD:3O7G:GGNA:RXTN
Docker Root Dir: /var/lib/docker/500000.500000
Debug Mode (client): false
Debug Mode (server): false
Registry: https://index.docker.io/v1/
WARNING: bridge-nf-call-ip6tables is disabled
Insecure Registries:
127.0.0.0/8
Additional environment details (AWS, VirtualBox, physical, etc.):
AWS, 4 node swarm, 1 manager, 3 workers
Steps to reproduce the issue:
- Create overlay network:
docker network create -d overlay --subnet 10.10.0.0/16 redis_net
- Create 2 services attached to that network:
docker service create --name redis1 --replicas=1 --network redis_net redis
docker service create --name redis2 --replicas=1 --network redis_net redis
- From one service's container, try to connect to the other service via its DNS entry/VIP
Describe the results you received:
I receive a "No route to host" error when connecting to the service via its VIP, but I am able to connect if I use the container's IP directly.
# redis-cli -h redis2
Could not connect to Redis at redis2:6379: No route to host
Could not connect to Redis at redis2:6379: No route to host
not connected> quit
# redis-cli -h 10.10.0.5
10.10.0.5:6379> quit
Describe the results you expected:
I expected to be able to connect to the service using the VIP created for the service and route accordingly.
Additional information you deem important (e.g. issue happens only occasionally):
I have noticed this when the two containers run on different nodes in the cluster.
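For anyone trying to confirm the same behavior, here is a diagnostic sketch (my addition, not part of the original report; it assumes the `redis1`/`redis2` service names from the steps above and that the image provides `getent`, which the Debian-based official redis image does):

```shell
# On a manager node: show the VIP(s) assigned to the redis2 service.
# Endpoint.VirtualIPs is part of the service inspect output in 1.12.
docker service inspect \
  --format '{{range .Endpoint.VirtualIPs}}{{.NetworkID}} {{.Addr}}{{"\n"}}{{end}}' \
  redis2

# Inside a redis1 task: check what the embedded DNS server returns
# for the service name -- it should be the VIP, not the task IP.
getent hosts redis2
```

If `getent hosts redis2` returns the VIP but connections to it fail while the task IP works, the problem is in the VIP load-balancing path rather than in DNS.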
Output of ip addr from the redis2 container:
# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
49: eth0@if50: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP group default
link/ether 02:42:0a:0a:00:05 brd ff:ff:ff:ff:ff:ff
inet 10.10.0.5/16 scope global eth0
valid_lft forever preferred_lft forever
inet 10.10.0.4/32 scope global eth0
valid_lft forever preferred_lft forever
inet6 fe80::42:aff:fe0a:5/64 scope link
valid_lft forever preferred_lft forever
51: eth1@if52: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 02:42:ac:14:00:04 brd ff:ff:ff:ff:ff:ff
inet 172.20.0.4/16 scope global eth1
valid_lft forever preferred_lft forever
inet6 fe80::42:acff:fe14:4/64 scope link
valid_lft forever preferred_lft forever
redis_net inspect:
[root@ip-172-18-50-14 ~]# docker network inspect redis_net
[
{
"Name": "redis_net",
"Id": "0gwzlueohh0rav3ouxzjaan5i",
"Scope": "swarm",
"Driver": "overlay",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": null,
"Config": [
{
"Subnet": "10.10.0.0/16",
"Gateway": "10.10.0.1"
}
]
},
"Internal": false,
"Containers": {
"09f944032c1b6428877c9fa94e91c708ab940776c682da356a18c66f09fc6223": {
"Name": "redis2.1.4bpidsb88mt0bkr7zucb8mvga",
"EndpointID": "bc79ed152485a74177f54716dd48989bdf6cbedd839876c21b90eb4ad46f878a",
"MacAddress": "02:42:0a:0a:00:05",
"IPv4Address": "10.10.0.5/16",
"IPv6Address": ""
}
},
"Options": {
"com.docker.network.driver.overlay.vxlanid_list": "257"
},
"Labels": {}
}
]
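One thing that may be worth ruling out here (my note, not a confirmed cause): Docker's swarm-mode documentation requires several ports to be open between nodes for overlay networking to work, and a blocked VXLAN port on CentOS 7 with firewalld can produce exactly this kind of cross-node reachability failure. A sketch for opening them on each node:

```shell
# Ports required between swarm nodes (per Docker's swarm networking docs):
#   2377/tcp      cluster management traffic
#   7946/tcp+udp  control plane (node gossip / service discovery)
#   4789/udp      VXLAN data plane used by overlay networks
firewall-cmd --permanent --add-port=2377/tcp
firewall-cmd --permanent --add-port=7946/tcp --add-port=7946/udp
firewall-cmd --permanent --add-port=4789/udp
firewall-cmd --reload
```

On AWS the same ports also need to be allowed in the security groups between the instances.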