
Make 'internal' bridge networks accessible from host#47356

Merged
thaJeztah merged 1 commit into moby:master from
robmry:47329_restore_internal_bridge_addr
Feb 16, 2024

Conversation

@robmry
Contributor

@robmry robmry commented Feb 7, 2024

- What I did

Prior to release 25.0.0 (#46603), the bridge in an internal network was assigned an IP address - making the internal network accessible from the host, giving containers on the network access to anything listening on the bridge's address (or INADDR_ANY on the host).

This change restores that behaviour. It does not restore the default route that was configured in the container, because packets sent outside the internal network's subnet have always been dropped. So, a 'connect()' to an address outside the subnet will still fail fast.

Fixes #47329

- How I did it

Partly reverted changes in #46603

- How to verify it

New integration test.

Also ...

# docker network create --internal iii
# docker run --rm -d --name nginx --network iii nginx
# docker container inspect -f '{{ .NetworkSettings.Networks.iii.IPAddress }}' nginx
172.20.0.2
# curl -I http://172.20.0.2
HTTP/1.1 200 OK
Server: nginx/1.25.3
...

For IPv6 - which isn't directly affected by this change ...

# docker network create --ipv6 --subnet fd12:f481:ae0a::/64 --internal in6
53a657e669c0b16211c13010342a81ae96fccd6feb16e573fe07e6ff831c9340
# docker run -ti --rm --network in6 alpine
/ # ping 2001:4860:4860::8888
PING 2001:4860:4860::8888 (2001:4860:4860::8888): 56 data bytes
ping: sendto: Network unreachable
/ # ip a show eth0
97: eth0@if98: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue state UP
    link/ether 02:42:ac:13:00:02 brd ff:ff:ff:ff:ff:ff
    inet 172.19.0.2/16 brd 172.19.255.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fd12:f481:ae0a::2/64 scope global flags 02
       valid_lft forever preferred_lft forever
    inet6 fe80::42:acff:fe13:2/64 scope link
       valid_lft forever preferred_lft forever
/ # route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
172.19.0.0      0.0.0.0         255.255.0.0     U     0      0        0 eth0
/ # route -A inet6 -n
Kernel IPv6 routing table
Destination                                 Next Hop                                Flags Metric Ref    Use Iface
fd12:f481:ae0a::/64                         ::                                      U     256    2        0 eth0
fe80::/64                                   ::                                      U     256    1        0 eth0
::/0                                        ::                                      !n    -1     1        0 lo
::1/128                                     ::                                      Un    0      4        0 lo
fd12:f481:ae0a::2/128                       ::                                      Un    0      3        0 eth0
fe80::42:acff:fe13:2/128                    ::                                      Un    0      2        0 eth0
ff00::/8                                    ::                                      U     256    3        0 eth0

And, with nginx, similar to the above ...

# curl -I 'http://[fd12:f481:ae0a::2]'
HTTP/1.1 200 OK
Server: nginx/1.25.3
...

- Description for the changelog

Restore IP connectivity between the host and containers on an internal bridge network.

Prior to release 25.0.0, the bridge in an internal network was assigned
an IP address - making the internal network accessible from the host,
giving containers on the network access to anything listening on the
bridge's address (or INADDR_ANY on the host).

This change restores that behaviour. It does not restore the default
route that was configured in the container, because packets sent outside
the internal network's subnet have always been dropped. So, a 'connect()'
to an address outside the subnet will still fail fast.

Signed-off-by: Rob Murray <rob.murray@docker.com>
@robmry robmry marked this pull request as ready for review February 7, 2024 21:34
@robmry robmry requested a review from corhere February 7, 2024 21:34
Contributor

@corhere corhere left a comment


While it may not be possible to test in CI that the IPv6 internet is unreachable (GH Actions runners don't have IPv6 connectivity?!) could you please test manually and add it to the verification steps?

@robmry
Contributor Author

robmry commented Feb 7, 2024

While it may not be possible to test in CI that the IPv6 internet is unreachable (GH Actions runners don't have IPv6 connectivity?!) could you please test manually and add it to the verification steps?

Done ... I considered adding an IPv6 test to ping an external address, but decided against because its "network unreachable" wouldn't really mean much - and this change doesn't actually touch IPv6, it wasn't sealed-off in the same way anyway.

@corhere
Contributor

corhere commented Feb 8, 2024

I considered adding an IPv6 test to ping an external address, but decided against because its "network unreachable" wouldn't really mean much

Devil's advocate: it'd be meaningful if someone were to run the test in an environment with IPv6 connectivity and the test failed, and it'd be future-proofing for a time when IPv6 connectivity does become available in CI. A test that passes in CI but fails when run locally is a debugging nightmare, so if we were to write an automated test for IPv6 isolation, I think it should be its own test, separate from the IPv4 one, which gets skipped if the IPv6 target can't be pinged from the host. Just something to think about as a nice-to-have follow-up, maybe, if you want.

and this change doesn't actually touch IPv6, it wasn't sealed-off in the same way anyway.

I know; I just figure we should verify that the IPv6 behaviour matches up whenever we change IPv4 behaviour, given how notoriously inconsistent libnetwork's behaviour has been historically between the two address families.

Member

@akerouanton akerouanton left a comment


LGTM

Merging this pull request may close this issue:

v24 -> v25.0.2: Access to internal networks from the host is no longer possible

4 participants