fix: increase onboard verify timeout to 120s for local LLMs #29043
Bartok9 wants to merge 1 commit into openclaw:main from
Conversation
Local models running on modest hardware (e.g. Qwen3 14B on a laptop GPU) can take longer than 10s to generate a verification response. Increase timeout to 2 minutes to accommodate slower local setups without impacting the experience for faster models. Fixes openclaw#28972
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: 1f217f8a2c
```diff
 const DEFAULT_CONTEXT_WINDOW = 4096;
 const DEFAULT_MAX_TOKENS = 4096;
-const VERIFY_TIMEOUT_MS = 10000;
+const VERIFY_TIMEOUT_MS = 120_000;
```
Separate probe timeout from verification timeout
VERIFY_TIMEOUT_MS is shared by both explicit verification and endpoint auto-detection probes, so raising it to 120s makes the unknown compatibility path wait up to ~4 minutes (OpenAI probe timeout + Anthropic probe timeout) before the user can retry when an endpoint hangs instead of failing fast. This is a significant UX regression from the previous ~20s worst case and can make onboarding appear frozen for invalid or firewalled URLs.
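One way to address this review comment, sketched below under assumptions (the `PROBE_TIMEOUT_MS` constant and the `withTimeout` helper are hypothetical names for illustration, not identifiers from the codebase), is to keep a short fail-fast budget for auto-detection probes while giving explicit verification the longer 120s budget:

```typescript
// Hypothetical split of the single shared constant into two budgets.
// Probes should fail fast so onboarding never appears frozen on a bad URL;
// explicit verification can afford to wait for a slow local model.
const PROBE_TIMEOUT_MS = 10_000;
const VERIFY_TIMEOUT_MS = 120_000;

// Race a promise against a deadline; reject with a timeout error if it is exceeded.
function withTimeout<T>(promise: Promise<T>, ms: number): Promise<T> {
  return new Promise<T>((resolve, reject) => {
    const timer = setTimeout(
      () => reject(new Error(`timed out after ${ms}ms`)),
      ms,
    );
    promise.then(
      (value) => { clearTimeout(timer); resolve(value); },
      (err) => { clearTimeout(timer); reject(err); },
    );
  });
}
```

With this split, the auto-detection path would call `withTimeout(probe(), PROBE_TIMEOUT_MS)` and the verification path `withTimeout(verify(), VERIFY_TIMEOUT_MS)`, keeping the worst-case probe wait near the previous ~20s while still accommodating slow local models during verification.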
Greptile Summary
Increased the verification timeout from 10s to 120s to support slower local LLM setups during onboarding.
Confidence Score: 3/5
Last reviewed commit: 1f217f8
Additional Comments (1)
Prompt To Fix With AI
This is a comment left during a code review.
Path: src/commands/onboard-custom.test.ts
Line: 223
Comment:
Test won't trigger the timeout anymore, since `VERIFY_TIMEOUT_MS` is now 120,000ms:
```suggestion
await vi.advanceTimersByTimeAsync(120_000);
```
How can I resolve this? If you propose a fix, please make it concise.
Thank you for the contribution and for pushing this forward. We consolidated this fix path in #27380 so we can land one complete solution for onboarding custom-provider verification reliability with full test coverage. Closing this as superseded, with appreciation.
Summary
Increases the verification timeout from 10s to 120s to accommodate slower local LLM setups.
Problem
Local models running on modest hardware (e.g. Qwen3 14B on a laptop GPU with llama.cpp) can take longer than 10 seconds to generate a verification response, causing the onboarding to fail.
Solution
Increase `VERIFY_TIMEOUT_MS` from 10,000ms to 120,000ms (2 minutes). This doesn't hurt faster models but allows slower local setups to complete verification.

Fixes #28972
🤖 AI-assisted (Bartok via Clawdbot)