fix(deps): remove vulnerable Docker SDK path from root module #188
SantiagoDePolonia merged 2 commits into main
Conversation
📝 Walkthrough
This pull request replaces the
Changes
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes
🚥 Pre-merge checks | ✅ 2 | ❌ 1
❌ Failed checks (1 warning)
✅ Passed checks (2 passed)
✏️ Tip: You can configure your own custom pre-merge checks in the settings.
Actionable comments posted: 4
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
tests/integration/main_test.go (1)
139-205: ⚠️ Potential issue | 🟠 Major
Give each Mongo bootstrap phase its own timeout budget.
`readyCtx` starts before the first shell ping and keeps ticking through `rs.initiate`, the replica-set readiness loop, and the final driver ping. Because `rs.initiate` itself runs on the 10-minute parent context, a slow initiate/election can consume most of the 60-second budget before the later waits even start, causing false setup failures. Create a fresh timeout per phase, or wrap the whole bootstrap in a longer parent readiness deadline.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/integration/main_test.go` around lines 139 - 205, The readiness timeout (readyCtx) is reused across multiple bootstrap phases (initial shell ping via waitForCondition, rs.initiate via mongoContainer.exec, replica-set readiness wait, and the final mongoClient.Ping), so a slow phase can exhaust the 60s budget and cause later waits to fail; fix by giving each phase its own context.WithTimeout (e.g., create new readyCtx/ctxTimeout before each call to waitForCondition, before the rs.initiate mongoContainer.exec call, before the replica-set readiness wait, and before the final mongoClient.Ping), ensure you call the corresponding cancel() for each context, or alternatively extend the overall parent deadline if you intend a longer shared budget; update references around waitForCondition, mongoContainer.exec(rs.initiate), and mongoClient.Ping to use the fresh per-phase contexts.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@tests/integration/docker_test_internal_test.go`:
- Around line 1-33: The tests for parseDockerRunContainerID and
isLikelyDockerContainerID are currently under an integration build tag; move the
pure-string helper functions (parseDockerRunContainerID and
isLikelyDockerContainerID) into a non-build-tagged source file and add
corresponding unit tests in an untagged *_test.go file (or remove the
integration build tag from the test file) so they run in the normal unit-test
workflow without Docker; ensure the helper function signatures remain unchanged
and update imports if needed so the new test file references
parseDockerRunContainerID and isLikelyDockerContainerID directly.
In `@tests/integration/docker_test.go`:
- Around line 20-88: The file uses a custom Docker wrapper (dockerRunDetached,
runDocker, parseDockerRunContainerID usage, and dockerContainer methods
exec/hostPort/ip/terminate); replace these with testcontainers-go APIs: create
containers via testcontainers.GenericContainer/CreateContainerRequest, set
Image, ExposedPorts and WaitingFor strategies, obtain host and mapped ports
using container.Host(ctx) and container.MappedPort(ctx, nat.Port(...)), run
commands with container.Exec(ctx, []string{...}), and terminate with
container.Terminate(ctx); alternatively, if you must avoid adding testcontainers
to the root module, move the integration tests into a nested module and update
imports to use github.com/testcontainers/testcontainers-go and adjust tests to
use the container methods above instead of
dockerRunDetached/runDocker/dockerContainer.
- Around line 109-128: dockerPublishedHost() currently returns 127.0.0.1 when
DOCKER_HOST is empty, which breaks remote docker contexts; update it to, when
raw == "", run `docker context inspect` for the active context (e.g., `docker
context inspect <name>` or `docker context inspect` without args) and parse the
JSON to extract the docker endpoint host (look under the Endpoints -> docker ->
Host or similar field), use that hostname if present, and only fall back to
"127.0.0.1" on error or if no host found; ensure you handle command errors, JSON
parsing errors, and keep existing parsing for DOCKER_HOST in
dockerPublishedHost().
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: ASSERTIVE
Plan: Pro
Run ID: 466b0a51-2f73-405e-9480-63423b426f28
⛔ Files ignored due to path filters (1)
go.sum is excluded by !**/*.sum
📒 Files selected for processing (8)
CLAUDE.md, Makefile, docs/TESTING_STRATEGY.md, go.mod, tests/integration/doc.go, tests/integration/docker_test.go, tests/integration/docker_test_internal_test.go, tests/integration/main_test.go
```go
//go:build integration

package integration

import "testing"

func TestParseDockerRunContainerID(t *testing.T) {
	t.Parallel()

	output := "Unable to find image 'postgres:16-alpine' locally\r\n" +
		"16-alpine: Pulling from library/postgres\r\n" +
		"Status: Downloaded newer image for postgres:16-alpine\r\n" +
		"e591860a76dbb858258e678a2d9c76fbd5fc3e6c5bf6e86f93266a14ca6be7f8\r\n"

	id, err := parseDockerRunContainerID(output)
	if err != nil {
		t.Fatalf("parseDockerRunContainerID returned error: %v", err)
	}

	const want = "e591860a76dbb858258e678a2d9c76fbd5fc3e6c5bf6e86f93266a14ca6be7f8"
	if id != want {
		t.Fatalf("parseDockerRunContainerID returned %q, want %q", id, want)
	}
}

func TestParseDockerRunContainerIDRejectsUnexpectedOutput(t *testing.T) {
	t.Parallel()

	_, err := parseDockerRunContainerID("Status: Downloaded newer image for postgres:16-alpine")
	if err == nil {
		t.Fatal("parseDockerRunContainerID unexpectedly succeeded")
	}
}
```
🛠️ Refactor suggestion | 🟠 Major
Move these parser checks out of the integration build.
parseDockerRunContainerID and isLikelyDockerContainerID are pure string helpers, but this build tag forces their coverage into the Docker-backed integration path. Please move the pure helpers into an untagged file/package that the regular unit-test workflow can exercise, or extend the unit-test target so these tests run without Docker.
As per coding guidelines, **/*_test.go: "Unit tests must be written in *_test.go files alongside implementation files with no Docker dependency".
```go
func waitForCondition(ctx context.Context, interval time.Duration, check func(context.Context) error) error {
	var lastErr error

	for {
		attemptCtx, cancel := context.WithTimeout(ctx, interval)
		lastErr = check(attemptCtx)
		cancel()
		if lastErr == nil {
			return nil
		}

		select {
		case <-ctx.Done():
			return fmt.Errorf("%w: last error: %v", ctx.Err(), lastErr)
		case <-time.After(interval):
		}
	}
}
```
Decouple the retry cadence from the per-attempt deadline.
Every check is canceled after exactly interval, and the Mongo call sites pass 2*time.Second. Spawning docker exec ... mongosh can exceed that on slower CI or Docker Desktop hosts, so the readiness loop can fail even when the container is healthy. Give each attempt a longer timeout than the sleep interval.
Suggested fix

```diff
-func waitForCondition(ctx context.Context, interval time.Duration, check func(context.Context) error) error {
+func waitForCondition(ctx context.Context, interval, attemptTimeout time.Duration, check func(context.Context) error) error {
 	var lastErr error
 	for {
-		attemptCtx, cancel := context.WithTimeout(ctx, interval)
+		attemptCtx, cancel := context.WithTimeout(ctx, attemptTimeout)
 		lastErr = check(attemptCtx)
 		cancel()
```

Then call the Mongo readiness checks with something like `interval=2*time.Second` and `attemptTimeout=10*time.Second`.
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```go
func waitForCondition(ctx context.Context, interval, attemptTimeout time.Duration, check func(context.Context) error) error {
	var lastErr error
	for {
		attemptCtx, cancel := context.WithTimeout(ctx, attemptTimeout)
		lastErr = check(attemptCtx)
		cancel()
		if lastErr == nil {
			return nil
		}
		select {
		case <-ctx.Done():
			return fmt.Errorf("%w: last error: %v", ctx.Err(), lastErr)
		case <-time.After(interval):
		}
	}
}
```
```go
func dockerPublishedHost() string {
	raw := os.Getenv("DOCKER_HOST")
	if raw == "" {
		return "127.0.0.1"
	}

	parsed, err := url.Parse(raw)
	if err != nil {
		return "127.0.0.1"
	}

	switch parsed.Scheme {
	case "", "unix", "npipe":
		return "127.0.0.1"
	default:
		if host := parsed.Hostname(); host != "" {
			return host
		}
		return "127.0.0.1"
	}
}
```
🧩 Analysis chain
🏁 Script executed:
```bash
#!/bin/bash
set -euo pipefail
current_context=$(docker context show)
current_host=$(docker context inspect "$current_context" --format '{{ (index .Endpoints "docker").Host }}')
printf 'docker context: %s\n' "$current_context"
printf 'context host: %s\n' "$current_host"
printf 'DOCKER_HOST=%s\n' "${DOCKER_HOST-<unset>}"
if [[ -z "${DOCKER_HOST-}" && "$current_host" != unix://* && "$current_host" != npipe://* ]]; then
  echo "dockerPublishedHost() would incorrectly return 127.0.0.1 here"
  exit 1
fi
```
Repository: ENTERPILOT/GoModel
Length of output: 108

🏁 Script executed:
```bash
cd tests/integration && wc -l docker_test.go
```
Repository: ENTERPILOT/GoModel
Length of output: 80

🏁 Script executed:
```bash
cat -n tests/integration/docker_test.go | head -150
```
Repository: ENTERPILOT/GoModel
Length of output: 4559

🏁 Script executed:
```bash
rg "dockerPublishedHost" tests/integration/
```
Repository: ENTERPILOT/GoModel
Length of output: 409

🏁 Script executed:
```bash
rg "docker context" .
```
Repository: ENTERPILOT/GoModel
Length of output: 44

🏁 Script executed:
```bash
rg -i "docker.context|remote.daemon" tests/integration/
```
Repository: ENTERPILOT/GoModel
Length of output: 44

🏁 Script executed:
```bash
cat -n tests/integration/main_test.go | head -100
```
Repository: ENTERPILOT/GoModel
Length of output: 2980

🏁 Script executed:
```bash
cat -n tests/integration/main_test.go | tail -50
```
Repository: ENTERPILOT/GoModel
Length of output: 1604

🏁 Script executed:
```bash
git log --oneline tests/integration/docker_test.go | head -10
```
Repository: ENTERPILOT/GoModel
Length of output: 111

🏁 Script executed:
```bash
fd -t f "README" tests/integration/ -o -path "*.md" tests/integration/
```
Repository: ENTERPILOT/GoModel
Length of output: 178

🏁 Script executed:
```bash
cat -n README.md | grep -A 20 -B 5 -i "docker\|integration\|test"
```
Repository: ENTERPILOT/GoModel
Length of output: 8790
Resolve the active Docker context before falling back to localhost.
When contributors use docker context to target a remote daemon, DOCKER_HOST is often unset. In that case dockerPublishedHost() returns 127.0.0.1, causing PostgreSQL and MongoDB connection URLs to point at the wrong machine. The function should read the active docker context via docker context inspect when DOCKER_HOST is unset to support remote daemon configurations.
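A sketch of the fallback the comment proposes is below. It parses a hardcoded `docker context inspect` sample rather than shelling out, so it runs anywhere; the JSON shape (an array of contexts with an `Endpoints.docker.Host` field) matches the Docker CLI's documented output but should be verified against your CLI version, and in the real fix you would feed `hostFromContextJSON` the bytes from `exec.Command("docker", "context", "inspect").Output()`.

```go
package main

import (
	"encoding/json"
	"fmt"
	"net/url"
)

// dockerContext mirrors the fields we need from `docker context inspect`.
type dockerContext struct {
	Endpoints map[string]struct {
		Host string `json:"Host"`
	} `json:"Endpoints"`
}

// hostFromContextJSON extracts a publishable hostname from inspect output,
// falling back to 127.0.0.1 for local sockets or on any parse failure.
func hostFromContextJSON(raw []byte) string {
	var ctxs []dockerContext
	if err := json.Unmarshal(raw, &ctxs); err != nil || len(ctxs) == 0 {
		return "127.0.0.1"
	}
	ep, ok := ctxs[0].Endpoints["docker"]
	if !ok || ep.Host == "" {
		return "127.0.0.1"
	}
	u, err := url.Parse(ep.Host)
	if err != nil {
		return "127.0.0.1"
	}
	switch u.Scheme {
	case "", "unix", "npipe":
		return "127.0.0.1" // local daemon: published ports bind to loopback
	}
	if h := u.Hostname(); h != "" {
		return h
	}
	return "127.0.0.1"
}

func main() {
	remote := []byte(`[{"Endpoints":{"docker":{"Host":"ssh://dev@build-box.internal"}}}]`)
	local := []byte(`[{"Endpoints":{"docker":{"Host":"unix:///var/run/docker.sock"}}}]`)
	fmt.Println(hostFromContextJSON(remote)) // build-box.internal
	fmt.Println(hostFromContextJSON(local))  // 127.0.0.1
}
```

This keeps the existing `DOCKER_HOST` parsing as the first choice and only consults the active context when the variable is unset.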
Summary
Removes the `github.com/docker/docker` transitive dependency from the root module by dropping `testcontainers-go`.
Security context
Dependabot flagged the root module for:
The repo was pulling `github.com/docker/docker v28.5.2+incompatible` only through integration-test dependencies. This PR removes that path from the root `go.mod` instead of forcing an incompatible Docker SDK upgrade.
Verification
- `go test ./internal/... ./config/...`
- `go test -v -tags=integration -timeout=10m ./tests/integration/...`
Summary by CodeRabbit
Chores
Documentation