
Add port configs to Sandbox and libnetwork vendoring#21019

Merged
cpuguy83 merged 2 commits into moby:master from aboch:se
Mar 10, 2016

Conversation

@aboch
Contributor

@aboch aboch commented Mar 8, 2016

Vendoring libnetwork 0.7.0-dev.5:

Add exposed ports and port-mapping configs to Sandbox:

@aboch
Contributor Author

aboch commented Mar 8, 2016

Not ready for review.

@thaJeztah thaJeztah added this to the 1.11.0 milestone Mar 8, 2016
@thaJeztah thaJeztah changed the title Add port configs to Sandbox and libnetwork vendoring [WIP] Add port configs to Sandbox and libnetwork vendoring Mar 8, 2016
@thaJeztah
Member

@aboch I added "WIP" to the PR's title, can you remove that when it's ready for review?

@icecrime icecrime added the status/failing-ci Indicates that the PR in its current state fails the test suite label Mar 8, 2016
@aboch aboch changed the title [WIP] Add port configs to Sandbox and libnetwork vendoring Add port configs to Sandbox and libnetwork vendoring Mar 9, 2016
Signed-off-by: Alessandro Boch <aboch@docker.com>
@mavenugo
Contributor

mavenugo commented Mar 9, 2016

This PR is ready for review. It carries multiple networking-related fixes (from libnetwork); the only Docker-side change is the migration of the port-mapping config from the endpoint to the sandbox object. Some background on this change: during the experimental versions of Docker (1.7+), we originally designed the endpoint object to carry the port-mapping construct. But given the backward-compatible nature of port-mapping as a container property, we were accommodating that through some unsustainable back-end sandbox/endpoint management. With this change, we are paying off that technical debt.

@thaJeztah
Member

WindowsTP4 timed out. We need to raise the limit

@thaJeztah thaJeztah removed the status/failing-ci Indicates that the PR in its current state fails the test suite label Mar 9, 2016
@thaJeztah
Member

ping @calavera can you have a look? This PR is needed for other PRs coming 😄

@icecrime
Contributor

icecrime commented Mar 9, 2016

LGTM

Member


Looks like this can just be `portSpecs nat.PortSet`, since the only thing we are doing with it is `portSpecs = container.Config.ExposedPorts` and then ranging over it.

For that matter, maybe we can get rid of the variable altogether and range over `container.Config.ExposedPorts` directly.
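To illustrate the suggestion, here is a minimal sketch of ranging over the exposed-ports set directly, with no intermediate `portSpecs` variable. The `Port`/`PortSet`/`Container` types below are simplified stand-ins for the real docker types (`nat.PortSet` is a set keyed by `nat.Port`, a string such as `"80/tcp"`), not the actual moby definitions:

```go
package main

import (
	"fmt"
	"sort"
)

// Simplified stand-ins for the docker/go-connections nat types discussed
// above: Port is a string such as "80/tcp", PortSet is a set keyed by Port.
type Port string
type PortSet map[Port]struct{}

type Config struct{ ExposedPorts PortSet }
type Container struct{ Config *Config }

// exposedPortList ranges over container.Config.ExposedPorts directly,
// with no intermediate portSpecs variable. The result is sorted because
// Go map iteration order is nondeterministic.
func exposedPortList(c *Container) []string {
	ports := make([]string, 0, len(c.Config.ExposedPorts))
	for p := range c.Config.ExposedPorts {
		ports = append(ports, string(p))
	}
	sort.Strings(ports)
	return ports
}

func main() {
	c := &Container{Config: &Config{ExposedPorts: PortSet{"80/tcp": {}, "443/tcp": {}}}}
	for _, p := range exposedPortList(c) {
		fmt.Println(p)
	}
}
```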

Contributor Author


Yep, looks like. Will take care of this. Thanks.

@cpuguy83
Member

cpuguy83 commented Mar 9, 2016

Just the 1 nit

Signed-off-by: Alessandro Boch <aboch@docker.com>
@icecrime
Contributor

@cpuguy83 PTAL!

@cpuguy83
Member

LGTM

cpuguy83 added a commit that referenced this pull request Mar 10, 2016
Add port configs to Sandbox and libnetwork vendoring
@cpuguy83 cpuguy83 merged commit d99be39 into moby:master Mar 10, 2016
@etoews
Contributor

etoews commented Apr 14, 2016

I've asked a question in the forums about Enabling DNS Round Robin. I thought somebody here might know the answer and be willing to post over there.

@thaJeztah
Member

@everett-toews it works out of the box in Docker 1.11:

Create a network, join two containers to it, and give them both the same net-alias:

docker network create foobar
docker run -d --name web1 --net foobar --net-alias web nginx
docker run -d --name web2 --net foobar --net-alias web nginx

Run another container and attach it to the same network:

docker run -it --net foobar ubuntu

root@6aeee190588f:/# ping web
PING web (172.18.0.2) 56(84) bytes of data.
64 bytes from web1.foobar (172.18.0.2): icmp_seq=1 ttl=64 time=0.171 ms
64 bytes from web1.foobar (172.18.0.2): icmp_seq=2 ttl=64 time=0.121 ms
64 bytes from web1.foobar (172.18.0.2): icmp_seq=3 ttl=64 time=0.129 ms
^C
--- web ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 1998ms
rtt min/avg/max/mdev = 0.121/0.140/0.171/0.023 ms
root@6aeee190588f:/# ping web
PING web (172.18.0.3) 56(84) bytes of data.
64 bytes from web2.foobar (172.18.0.3): icmp_seq=1 ttl=64 time=0.170 ms
64 bytes from web2.foobar (172.18.0.3): icmp_seq=2 ttl=64 time=0.111 ms
64 bytes from web2.foobar (172.18.0.3): icmp_seq=3 ttl=64 time=0.111 ms
^C
--- web ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2000ms
rtt min/avg/max/mdev = 0.111/0.130/0.170/0.030 ms

@etoews
Contributor

etoews commented Apr 14, 2016

I can confirm everything works as expected when using the docker CLI like you did along with my nginx load balancer.

I suspect compose just hasn't caught up yet. I thought my compose config was the equivalent of the above, but I could have misconfigured something too.
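For reference, the CLI example above roughly maps to a Compose file like the following. This is a sketch under the assumption of Compose file format version 2 (which added network `aliases`); service and network names are taken from the example:

```yaml
version: "2"
services:
  web1:
    image: nginx
    networks:
      foobar:
        aliases:
          - web
  web2:
    image: nginx
    networks:
      foobar:
        aliases:
          - web
networks:
  foobar: {}
```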

@thaJeztah
Member

@everett-toews it may be worth checking whether nginx caches DNS (I think I've heard of issues with that happening). You can `docker exec` into the container and see if a simple ping (like I did) works. If that also fails, it might be worth opening an issue in the docker-compose issue tracker.

@etoews
Contributor

etoews commented Apr 14, 2016

@thaJeztah Good call. It was nginx doing DNS caching.

You can override it; see the nginx `resolver` directive:

> By default, nginx caches answers using the TTL value of a response. An optional `valid` parameter allows overriding it.

I updated my example and compose is working now too. Sorry for the nginx-related debugging noise!
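For anyone hitting the same issue, a minimal sketch of the override, assuming the container runs on a user-defined Docker network (where Docker's embedded DNS server listens on 127.0.0.11) and `web` is the network alias from the example above. Using a variable in `proxy_pass` forces nginx to re-resolve at runtime via the `resolver` instead of caching the address at startup:

```nginx
server {
    listen 80;
    # Docker's embedded DNS on user-defined networks; valid= overrides the
    # response TTL so nginx re-resolves "web" every 10 seconds.
    resolver 127.0.0.11 valid=10s;
    set $backend "web";
    location / {
        # A variable here forces runtime resolution via the resolver above.
        proxy_pass http://$backend;
    }
}
```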

@thaJeztah
Member

@everett-toews not at all! Happy to hear it's resolved (and of course that there's no bug involved) 👍
