
fix: increase VERIFY_TIMEOUT_MS to 2 minutes for local LLMs #29062

Closed

A404coder wants to merge 4 commits into openclaw:main from A404coder:fix/issue-28972

Conversation

@A404coder

Summary

Increases the model verification timeout from 10 seconds to 2 minutes (120 seconds) to accommodate local LLMs running on consumer hardware.

Problem

Local LLMs running on laptops with small GPUs (e.g., Qwen3 14B with llama.cpp) often take more than 10 seconds to generate a verification response. The current 10-second timeout causes the verification step to fail unnecessarily.

Solution

Change VERIFY_TIMEOUT_MS from 10000ms (10s) to 120000ms (2 minutes).

This change:

  • Allows local models sufficient time to respond
  • Does not negatively impact cloud API users (just a longer fallback timeout)
  • Improves user experience for self-hosted model users
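For context, a minimal sketch of how such a verification timeout is typically applied. The `verifyWithTimeout` helper and the `fetch` usage here are illustrative assumptions, not the PR's actual implementation; only the constant value comes from the change itself.

```typescript
// Verification timeout, raised from 10s to 2 minutes for slow local models.
const VERIFY_TIMEOUT_MS = 120_000;

// Hypothetical helper: abort the verification request once the timeout elapses.
async function verifyWithTimeout(url: string): Promise<boolean> {
  try {
    const res = await fetch(url, {
      // AbortSignal.timeout() aborts the request after the given milliseconds.
      signal: AbortSignal.timeout(VERIFY_TIMEOUT_MS),
    });
    return res.ok;
  } catch {
    // Timed out or network error: treat verification as failed.
    return false;
  }
}
```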

Testing

  • The change is a simple constant modification
  • No functional behavior changes other than timeout duration

Fixes #28972

Local LLMs running on consumer hardware (e.g., Qwen3 14B on laptop GPU)
often need more than 10 seconds to generate a verification response.
Increasing timeout to 2 minutes ensures these models have enough time
to complete verification without failing.

Fixes openclaw#28972
openclaw-barnacle bot added the commands (Command implementations) and size: XS labels on Feb 27, 2026
@greptile-apps
Contributor

greptile-apps bot commented Feb 27, 2026

Greptile Summary

Increased VERIFY_TIMEOUT_MS from 10 seconds to 2 minutes to accommodate local LLMs running on consumer hardware, which often take longer to generate verification responses.

  • Simple constant change from 10000ms to 120000ms in src/commands/onboard-custom.ts:21
  • Does not negatively impact cloud API users (just provides a longer fallback timeout)
  • Fixes issue [Bug]: VERIFY_TIMEOUT_MS in onboard-custom.ts is too short for local LLMs. #28972 where local models (e.g., Qwen3 14B with llama.cpp) were timing out during verification
  • Important: test in src/commands/onboard-custom.test.ts:223 needs to be updated to advance timers by 120000ms instead of 10000ms to match the new timeout value

Confidence Score: 4/5

  • Safe to merge after updating the test timeout value
  • The change is straightforward and solves a real problem for local LLM users. The only issue is that a test timeout value needs to be updated to match the new constant (from 10000ms to 120000ms in src/commands/onboard-custom.test.ts:223). Otherwise, the change is well-justified and doesn't introduce any logical errors or security concerns.
  • Test file src/commands/onboard-custom.test.ts requires attention - the timeout test needs to be updated

Last reviewed commit: 47fb52e

Contributor

@greptile-apps greptile-apps bot left a comment

1 file reviewed, 1 comment

@greptile-apps
Contributor

greptile-apps bot commented Feb 27, 2026

Additional Comments (1)

src/commands/onboard-custom.test.ts (line 223): update test to match new timeout value (was 10000ms, now 120000ms)

```suggestion
    await vi.advanceTimersByTimeAsync(120000);
```

@A404coder
Author

Fixes #28972

@chatgpt-codex-connector chatgpt-codex-connector bot left a comment

💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: 47fb52e580

ℹ️ About Codex in GitHub

Codex has been enabled to automatically review pull requests in this repo. Reviews are triggered when you

  • Open a pull request for review
  • Mark a draft as ready
  • Comment "@codex review".

If Codex has suggestions, it will comment; otherwise it will react with 👍.

When you sign up for Codex through ChatGPT, Codex can also answer questions or update the PR, like "@codex address that feedback".

```diff
 const DEFAULT_CONTEXT_WINDOW = 4096;
 const DEFAULT_MAX_TOKENS = 4096;
-const VERIFY_TIMEOUT_MS = 10000;
+const VERIFY_TIMEOUT_MS = 120000; // 2 minutes for local LLMs
```


P2: Scope verification timeout to local endpoints

This timeout is now global for all custom API verification requests, not just local LLMs, so onboarding can block for much longer on bad or unreachable remote URLs. In the unknown compatibility path we probe OpenAI and then Anthropic sequentially, which means an endpoint that hangs instead of failing fast can now stall the wizard for up to ~4 minutes per attempt (2 minutes × 2 probes), a substantial regression from the previous ~20-second worst case.


- Add isLocalEndpoint() to detect localhost/private IP addresses
- Use 120s timeout for local LLMs (need more time)
- Keep 10s timeout for remote APIs (fail fast on unreachable URLs)
- Prevents up to 4-minute wait on bad remote endpoints

Addresses review feedback on openclaw#29062
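The follow-up commit described above splits the timeout by endpoint type. A minimal sketch of that selection logic, with the classifier passed in as a parameter; the constant names are illustrative and the values follow the commit message, but the real code in the PR may differ:

```typescript
const LOCAL_VERIFY_TIMEOUT_MS = 120_000; // local models need time to load and generate
const REMOTE_VERIFY_TIMEOUT_MS = 10_000; // remote APIs should fail fast on bad URLs

// Choose the verification timeout based on whether the endpoint looks local.
function verifyTimeoutFor(
  url: string,
  isLocal: (u: string) => boolean,
): number {
  return isLocal(url) ? LOCAL_VERIFY_TIMEOUT_MS : REMOTE_VERIFY_TIMEOUT_MS;
}
```

This keeps the worst case for unreachable remote endpoints at the previous ~10 seconds per probe, while still giving local models the full 2 minutes.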
@chatgpt-codex-connector chatgpt-codex-connector bot left a comment

💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: 9874d1d151


Comment on lines +33 to +37:

```typescript
host === "127.0.0.1" ||
host === "::1" ||
host.startsWith("192.168.") ||
host.startsWith("10.") ||
host.startsWith("172.16.") ||
```


P2: Expand local host detection to cover all local IP formats

The new local-endpoint classifier misses common local addresses, so verification can incorrectly fall back to the 10s remote timeout and fail slow local models. In particular, new URL("http://[::1]:11434").hostname is [::1], so the host === "::1" check never matches, and the 172.16. prefix only covers one subnet of the RFC1918 172.16.0.0/12 range (excluding 172.17-172.31, which are common for Docker/VM networking).


- Normalize IPv6 hostname by removing brackets ([::1] -> ::1)
- Expand 172.16.0.0/12 regex to cover 172.16-172.31 (Docker/VM networks)
- Addresses Codex review feedback
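Putting the two review fixes together, a hedged sketch of what the corrected classifier might look like. The helper name comes from the commit messages; the exact implementation in the PR may differ.

```typescript
// Sketch of the endpoint classifier described in the commit messages above.
function isLocalEndpoint(rawUrl: string): boolean {
  let host: string;
  try {
    host = new URL(rawUrl).hostname;
  } catch {
    return false; // unparseable URL: treat as remote and fail fast
  }
  // WHATWG URL keeps brackets around IPv6 literals ("[::1]"), so strip them.
  host = host.replace(/^\[|\]$/g, "");
  return (
    host === "localhost" ||
    host === "127.0.0.1" ||
    host === "::1" ||
    host.startsWith("192.168.") ||
    host.startsWith("10.") ||
    // Full RFC1918 172.16.0.0/12 range: 172.16.* through 172.31.*
    /^172\.(1[6-9]|2\d|3[01])\./.test(host)
  );
}
```

With the bracket stripping, `http://[::1]:11434` now classifies as local, and the regex covers the Docker/VM subnets (172.17–172.31) that a plain `172.16.` prefix check missed.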
@philipp-spiess
Member

Thank you for the contribution and for pushing this forward.

We consolidated this fix path in #27380 so we can land one complete solution for onboarding custom-provider verification reliability with full test coverage.

Closing this as superseded, with appreciation.

