fix: increase VERIFY_TIMEOUT_MS to 2 minutes for local LLMs #29062
A404coder wants to merge 4 commits into openclaw:main
Conversation
Local LLMs running on consumer hardware (e.g., Qwen3 14B on laptop GPU) often need more than 10 seconds to generate a verification response. Increasing timeout to 2 minutes ensures these models have enough time to complete verification without failing. Fixes openclaw#28972
Greptile Summary: Increased the custom-model verification timeout from 10 seconds to 2 minutes.
Confidence Score: 4/5
Last reviewed commit: 47fb52e
Additional Comments (1)
Review comment on src/commands/onboard-custom.test.ts, line 223:
update test to match new timeout value (was 10000ms, now 120000ms)
```suggestion
await vi.advanceTimersByTimeAsync(120000);
```
Fixes #28972
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: 47fb52e580
ℹ️ About Codex in GitHub
Codex has been enabled to automatically review pull requests in this repo. Reviews are triggered when you
- Open a pull request for review
- Mark a draft as ready
- Comment "@codex review".
If Codex has suggestions, it will comment; otherwise it will react with 👍.
When you sign up for Codex through ChatGPT, Codex can also answer questions or update the PR, like "@codex address that feedback".
src/commands/onboard-custom.ts (Outdated)

```diff
 const DEFAULT_CONTEXT_WINDOW = 4096;
 const DEFAULT_MAX_TOKENS = 4096;
-const VERIFY_TIMEOUT_MS = 10000;
+const VERIFY_TIMEOUT_MS = 120000; // 2 minutes for local LLMs
```
Scope verification timeout to local endpoints
This timeout is now global for all custom API verification requests, not just local LLMs, so onboarding can block for much longer on bad or unreachable remote URLs. In the unknown compatibility path we probe OpenAI and then Anthropic sequentially, which means an endpoint that hangs instead of failing fast can now stall the wizard for up to ~4 minutes per attempt (2 minutes × 2 probes), a substantial regression from the previous ~20-second worst case.
- Add isLocalEndpoint() to detect localhost/private IP addresses
- Use 120s timeout for local LLMs (need more time)
- Keep 10s timeout for remote APIs (fail fast on unreachable URLs)
- Prevents up to 4-minute wait on bad remote endpoints

Addresses review feedback on openclaw#29062
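The two-tier scheme in this commit can be sketched as follows. The constant names mirror the commit message but are assumptions, not the merged code; the worst-case calculation reflects the reviewer's point that the unknown-compatibility path probes OpenAI and then Anthropic sequentially.

```typescript
// Hypothetical sketch of the two-tier verification timeout.
const LOCAL_VERIFY_TIMEOUT_MS = 120_000; // local LLMs can be slow to respond
const REMOTE_VERIFY_TIMEOUT_MS = 10_000; // fail fast on unreachable remote URLs

// The unknown-compatibility path probes OpenAI, then Anthropic, sequentially,
// so the wizard's worst-case stall is probes × per-probe timeout.
function worstCaseStallMs(isLocal: boolean, probes = 2): number {
  const perProbe = isLocal ? LOCAL_VERIFY_TIMEOUT_MS : REMOTE_VERIFY_TIMEOUT_MS;
  return probes * perProbe;
}
```

With this split, a hanging remote endpoint stalls the wizard for at most ~20 seconds rather than the ~4 minutes a global 2-minute timeout would allow.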
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: 9874d1d151
src/commands/onboard-custom.ts (Outdated)

```ts
host === "127.0.0.1" ||
host === "::1" ||
host.startsWith("192.168.") ||
host.startsWith("10.") ||
host.startsWith("172.16.") ||
```
Expand local host detection to full local IP formats
The new local-endpoint classifier misses common local addresses, so verification can incorrectly fall back to the 10s remote timeout and fail slow local models. In particular, new URL("http://[::1]:11434").hostname is [::1], so the host === "::1" check never matches, and the 172.16. prefix only covers one subnet of the RFC1918 172.16.0.0/12 range (excluding 172.17-172.31, which are common for Docker/VM networking).
- Normalize IPv6 hostname by removing brackets ([::1] -> ::1)
- Expand 172.16.0.0/12 regex to cover 172.16-172.31 (Docker/VM networks)
- Addresses Codex review feedback
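A sketch of the classifier after these fixes: strip the brackets the URL parser keeps around IPv6 hosts so "[::1]" matches, and cover the full 172.16.0.0/12 range. The function name and exact prefix list are assumptions based on the commit messages, not the final merged code.

```typescript
function isLocalEndpoint(baseUrl: string): boolean {
  let host: string;
  try {
    // new URL("http://[::1]:11434").hostname === "[::1]", so strip brackets.
    host = new URL(baseUrl).hostname.replace(/^\[|\]$/g, "");
  } catch {
    return false; // unparseable URL: treat as remote, fail fast
  }
  return (
    host === "localhost" ||
    host === "127.0.0.1" ||
    host === "::1" ||
    host.startsWith("192.168.") ||
    host.startsWith("10.") ||
    // 172.16.0.0/12 spans 172.16.x.x through 172.31.x.x (Docker/VM bridges)
    /^172\.(1[6-9]|2\d|3[01])\./.test(host)
  );
}
```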
Thank you for the contribution and for pushing this forward. We consolidated this fix path in #27380 so we can land one complete solution for onboarding custom-provider verification reliability with full test coverage. Closing this as superseded, with appreciation.
Summary
Increases the model verification timeout from 10 seconds to 2 minutes (120 seconds) to accommodate local LLMs running on consumer hardware.
Problem
Local LLMs running on laptops with small GPUs (e.g., Qwen3 14B with llama.cpp) often take more than 10 seconds to generate a verification response. The current 10-second timeout causes the verification step to fail unnecessarily.
Solution
Change VERIFY_TIMEOUT_MS from 10000ms (10s) to 120000ms (2 minutes).
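A minimal sketch of how such a constant might be applied to the verification request, assuming a fetch-based probe with AbortSignal.timeout; the function name and the /v1/models path are illustrative, not the actual openclaw code.

```typescript
const VERIFY_TIMEOUT_MS = 120_000; // 2 minutes for local LLMs

// Probe the endpoint; resolve false on timeout or network failure
// rather than throwing, so the onboarding wizard can report the result.
async function verifyModel(baseUrl: string): Promise<boolean> {
  try {
    const res = await fetch(`${baseUrl}/v1/models`, {
      signal: AbortSignal.timeout(VERIFY_TIMEOUT_MS),
    });
    return res.ok;
  } catch {
    return false; // aborted after timeout, or the endpoint was unreachable
  }
}
```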
Testing
Fixes #28972