networks: prevent issues due to duplicate names #9585
Conversation
D'oh! I can write a test with a mock for the Docker client, way easier - will update PR and remove draft!
Within the Docker API/engine, networks have unique IDs, and the name is a friendly label/alias, which notably does NOT have any guarantees around uniqueness (see moby/moby#18864 [^1]).

During day-to-day interactive/CLI Compose usage, this is rarely an issue, as Compose itself isn't creating networks concurrently across goroutines. However, if multiple Compose instances are executed simultaneously (e.g. as part of a test suite that runs in parallel), this can easily occur. When it does happen, it's very confusing for users, and resolving it via the `docker` CLI is not straightforward either [^2].

There are two primary changes here:

* Pass `CheckDuplicates: true` to the Docker API when creating networks to reduce the likelihood of Compose creating duplicates in the first place
* On `down`, list networks using a name filter and then remove them all by ID, as the Docker API will return an error if the name alias is used and maps to more than one network

Hopefully, this provides a better UX, since the issue should be less likely to occur, and if it does, it can now be resolved via standard Compose workflow commands.

[^1]: moby/moby#18864
[^2]: https://stackoverflow.com/questions/63776518/error-2-matches-found-based-on-name-network-nameofservice-default-is-ambiguo

Signed-off-by: Milas Bowman <milas.bowman@docker.com>
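For illustration only, here is a minimal sketch of the creation-side change using the Docker Go SDK (not the actual Compose code; the project name and labels below are made up, and the API field itself is spelled `CheckDuplicate`):

```go
package main

import (
	"context"
	"fmt"

	"github.com/docker/docker/api/types"
	"github.com/docker/docker/client"
)

func main() {
	ctx := context.Background()
	cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
	if err != nil {
		panic(err)
	}

	// Ask the daemon to reject the request if a network with this name is
	// already known. This narrows the window for duplicates but does not
	// close it: two concurrent requests can still both pass the check.
	resp, err := cli.NetworkCreate(ctx, "myproject_default", types.NetworkCreate{
		Driver:         "bridge",
		CheckDuplicate: true,
		Labels:         map[string]string{"com.docker.compose.project": "myproject"},
	})
	if err != nil {
		fmt.Println("network create failed (possibly a duplicate):", err)
		return
	}
	fmt.Println("created network", resp.ID)
}
```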
nicksieger
left a comment
LGTM
glours
left a comment
LGTM
This issue can still be very easily reproduced. Any ideas what could be done?
@ptrdom indeed, this is easy to reproduce. It seems we don't have a better option here: as long as multiple Compose executions run concurrently, all of them will try to create resources with the same name, and the engine can't guarantee uniqueness (see moby/moby#18864 (comment) for the reason). Regarding your use case, we'd need more details to better understand how you run multiple executions of Compose for testing purposes with the same project name. Feel free to open a dedicated issue.
I think @akerouanton was looking into this from the engine side. It has always caused some head-scratching on my side, but I was told in the past by the libnetwork maintainers that "that's the way it is, and we can't promise unique names". That may not actually be the case, though, and (if that's correct) we're considering enforcing uniqueness on the engine side (but @akerouanton may be able to fill me in on that one).
^^ we may need some migration path though for cases where there are existing duplicate names 😓
My educated guess is that it wasn't possible due to the eventually consistent nature of Classic Swarm's control plane. Since we don't support it anymore, I can't see any good reason we can't make sure network names are unique.
Actually I don't think so: the last time I tried, I was unable to connect containers to networks with duplicate names, making these networks totally unusable. I think at some point the Engine broke support for this "networks with duplicate names" (dubious) feature. For the record, kind has some far-fetched code to mitigate the exact issue described by @ptrdom: it first creates its network, and then checks if there's a duplicate and removes it if needed. I'll bump this up in my to-do list to address it in the near future.
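For illustration, a rough Go sketch of that create-then-de-duplicate approach against the Docker SDK (a hypothetical helper and package, not kind's actual code):

```go
package networkdedupe // hypothetical package name

import (
	"context"

	"github.com/docker/docker/api/types"
	"github.com/docker/docker/api/types/filters"
	"github.com/docker/docker/client"
)

// EnsureSingleNetwork creates the named network, then backs off and removes
// its own copy if another network with the same name won the race.
func EnsureSingleNetwork(ctx context.Context, cli client.APIClient, name string) error {
	created, err := cli.NetworkCreate(ctx, name, types.NetworkCreate{CheckDuplicate: true})
	if err != nil {
		return err // a pre-existing network was detected up front
	}

	// CheckDuplicate does not catch concurrent creators, so list again.
	nets, err := cli.NetworkList(ctx, types.NetworkListOptions{
		Filters: filters.NewArgs(filters.Arg("name", name)),
	})
	if err != nil {
		return err
	}
	for _, n := range nets {
		// The "name" filter matches substrings, so compare exactly. Use the
		// ID as a deterministic tie-break so concurrent callers don't all
		// delete their networks: only the non-lowest IDs back off.
		if n.Name == name && n.ID < created.ID {
			return cli.NetworkRemove(ctx, created.ID)
		}
	}
	return nil
}
```

The tie-break on ID is just one way to make the back-off deterministic; the comment above only describes the general create-then-check shape.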
Many thanks for hopping on this so eagerly, you guys rock! 🚀 ❤️
Ah! Yes, that was what you mentioned 👍
Agreed, let's see if we can get this fixed. We could already create a ticket in moby/moby for this (it can be just an early placeholder), but if you have some notes on the work that needs to be done, we can paste them in there, in case someone wants to start working on it (doesn't have to be you).
Fixes moby#18864, moby#20648, moby#33561, moby#40901.

[This GH comment][1] makes clear network name uniqueness has never been enforced due to the eventually consistent nature of Classic Swarm datastores:

> there is no guaranteed way to check for duplicates across a cluster of
> docker hosts.

And this is further confirmed by other comments made by @mrjana in that same issue, eg. [this one][2]:

> we want to adopt a schema which can pave the way in the future for a
> completely decentralized cluster of docker hosts (if scalability is
> needed).

This decentralized model is what Classic Swarm was trying to be. It's been superseded since then by Docker Swarm, which has a centralized control plane.

To circumvent this drawback, the `NetworkCreate` endpoint accepts a `CheckDuplicate` flag. However it's not perfectly reliable, as it won't catch concurrent requests.

Due to this design decision, API clients like Compose have to implement workarounds to make sure names are really unique (eg. docker/compose#9585). And the daemon itself has seen a string of issues due to that decision, including some that aren't fixed to this day (for instance moby#40901):

> The problem is, that if you specify a network for a container using
> the ID, it will add that network to the container but it will then
> change it to reference the network by using the name.

To summarize, this "feature" is broken, has no practical use and is a source of pain for Docker users and API consumers. So let's just remove it for _all_ API versions.

[1]: moby#18864 (comment)
[2]: moby#18864 (comment)

Signed-off-by: Albin Kerouanton <albinker@gmail.com>
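To make that concurrency gap concrete, here is a small, timing-dependent reproduction sketch against a local daemon (illustrative only; it won't trigger on every run):

```go
package main

import (
	"context"
	"fmt"
	"sync"

	"github.com/docker/docker/api/types"
	"github.com/docker/docker/client"
)

func main() {
	ctx := context.Background()
	cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
	if err != nil {
		panic(err)
	}

	// Fire two creates for the same name at once. Both can pass the
	// CheckDuplicate pre-check before either network is committed,
	// which can leave two networks named "dupe-demo".
	var wg sync.WaitGroup
	for i := 0; i < 2; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			resp, err := cli.NetworkCreate(ctx, "dupe-demo", types.NetworkCreate{CheckDuplicate: true})
			fmt.Println("id:", resp.ID, "err:", err)
		}()
	}
	wg.Wait()
}
```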
What I did
Within the Docker API/engine, networks have unique IDs, and the
name is a friendly label/alias, which notably does NOT have any
guarantees around uniqueness (see moby/moby#18864 [^1]).
During day-to-day interactive/CLI Compose usage, this is rarely
an issue, as Compose itself isn't creating networks concurrently
across goroutines. However, if multiple Compose instances are
executed simultaneously (e.g. as part of a test suite that runs
in parallel), this can easily occur.
When it does happen, it's very confusing for users and resolving
it via the `docker` CLI is not straightforward either [^2].

There are two primary changes here:

* Pass `CheckDuplicates: true` to the Docker API when creating
  networks to reduce the likelihood of Compose creating duplicates
  in the first place
* On `down`, list networks using a name filter and then remove
  them all by ID, as the Docker API will return an error if the
  name alias is used and maps to more than one network (see the
  sketch below)
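As referenced in the second bullet, a minimal sketch of that removal flow with the Docker Go SDK (assumed names; not the actual Compose implementation):

```go
package main

import (
	"context"
	"fmt"

	"github.com/docker/docker/api/types"
	"github.com/docker/docker/api/types/filters"
	"github.com/docker/docker/client"
)

func main() {
	ctx := context.Background()
	cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
	if err != nil {
		panic(err)
	}

	name := "myproject_default" // hypothetical Compose project network name

	// Resolve the name to concrete network IDs rather than passing the
	// (possibly ambiguous) name straight to NetworkRemove.
	nets, err := cli.NetworkList(ctx, types.NetworkListOptions{
		Filters: filters.NewArgs(filters.Arg("name", name)),
	})
	if err != nil {
		panic(err)
	}
	for _, n := range nets {
		if n.Name != name {
			continue // the "name" filter matches substrings, so compare exactly
		}
		if err := cli.NetworkRemove(ctx, n.ID); err != nil {
			fmt.Println("failed to remove", n.ID, ":", err)
			continue // keep going so any other duplicates are still removed
		}
		fmt.Println("removed network", n.ID)
	}
}
```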
Hopefully, this provides a better UX, since the issue should be
less likely to occur, and if it does, it can now be resolved via
standard Compose workflow commands.
Related issue
(not mandatory) A picture of a cute animal, if possible in relation with what you did

Footnotes

[^1]: https://github.com/moby/moby/issues/18864
[^2]: https://stackoverflow.com/questions/63776518/error-2-matches-found-based-on-name-network-nameofservice-default-is-ambiguo