Webui - change setText command from parent window to also send the message. #13309
ngxson merged 7 commits into ggml-org:master from
Conversation
igardev
commented
May 5, 2025
- setText command from parent window (for llama-vscode) now sends the message automatically.
- Upgrade package versions to fix vulnerabilities with the "npm audit fix" command.
- Revert "Upgrade packages versions to fix vulnerabilities with "npm audit fix" command." This reverts commit 67687b7.
```js
textarea.focus();
if (onSend && data?.text) {
  // Use setTimeout to ensure state updates are processed
  setTimeout(() => {
    onSend();
  }, 50);
}
```
Tbh I don't quite like this approach because it makes the logic look like a circular dependency.
A cleaner approach is to simply extend useChatTextarea to temporarily hold a callback, say textarea.onSubmit, then call it here:
```diff
  textarea.focus();
- if (onSend && data?.text) {
-   // Use setTimeout to ensure state updates are processed
-   setTimeout(() => {
-     onSend();
-   }, 50);
- }
+ textarea.onSubmit();
```
I agree. It is cleaner.
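The suggested approach could be sketched roughly like this. This is a minimal illustration with assumed names: `ChatTextareaHandle`, `createChatTextarea`, and `handleSetText` are hypothetical stand-ins, not the actual webui code.

```typescript
// Hypothetical sketch (assumed names, not the actual webui code): the
// textarea handle temporarily holds a submit callback in a mutable ref,
// so the parent-window message handler can trigger a send without the
// components depending on each other circularly.
type SubmitCallback = () => void;

interface ChatTextareaHandle {
  // mirrors the refOnSubmit idea from the discussion
  refOnSubmit: { current: SubmitCallback | null };
  setValue: (text: string) => void;
  value: () => string;
}

function createChatTextarea(): ChatTextareaHandle {
  let text = '';
  return {
    refOnSubmit: { current: null },
    setValue: (t) => { text = t; },
    value: () => text,
  };
}

// Simulated setText handler: store the incoming text, then invoke whatever
// submit callback the chat screen registered on the ref.
function handleSetText(textarea: ChatTextareaHandle, incoming: string): string {
  textarea.setValue(incoming);
  textarea.refOnSubmit.current?.(); // send the message, if a handler is registered
  return textarea.value();
}
```

The design point is that the chat screen assigns its own send function to the ref, so the message handler never needs to import or know about the chat component.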
```diff
  };

- const sendNewMessage = async () => {
+ const sendNewMessage = useCallback(async () => {
```
No need for useCallback here. We can simply update textarea.onSubmit = sendNewMessage each time.
I'll push a commit to implement my suggestions

@igardev can you test this?
```js
setTimeout(() => {
  textarea.refOnSubmit.current?.();
}, 10); // wait for setExtraContext to finish
```
Ok, seems like if this is called right away, setExtraContext hasn't finished yet, hence why you don't see it. Can you give it a try now?
It works now. Thanks!
* origin/master: (27 commits)
  llama : fix build_ffn without gate (ggml-org#13336)
  CUDA: fix bad asserts for partial offload (ggml-org#13337)
  convert : qwen2/3moe : set yarn metadata if present (ggml-org#13331)
  CUDA: fix --split-mode row for MMQ (ggml-org#13323)
  gguf-py : avoid requiring pyside6 for other scripts (ggml-org#13036)
  CUDA: fix logic for clearing padding with -ngl 0 (ggml-org#13320)
  sampling : Integrate Top-nσ into main sampling chain (and add it to the server) (ggml-org#13264)
  server : Webui - change setText command from parent window to also send the message. (ggml-org#13309)
  mtmd : rename llava directory to mtmd (ggml-org#13311)
  clip : fix confused naming ffn_up and ffn_down (ggml-org#13290)
  convert : bailingmoe : set yarn metadata if present (ggml-org#13312)
  SYCL: Disable mul_mat kernels for noncontiguous tensor b (ggml-org#13308)
  mtmd : add C public API (ggml-org#13184)
  rpc : use backend registry, support dl backends (ggml-org#13304)
  ggml : activate s390x simd for Q3_K (ggml-org#13301)
  llava/mtmd : fixes to fully support dl backends (ggml-org#13303)
  llama : build windows releases with dl backends (ggml-org#13220)
  CUDA: fix race condition in MMQ stream-k fixup (ggml-org#13299)
  CUDA: fix race condition in MMQ ids_dst (ggml-org#13294)
  vulkan: Additional type support for unary, binary, and copy (ggml-org#13266)
  ...
…nd the message. (ggml-org#13309)

* setText command from parent window for llama-vscode now sends the message automatically.
* Upgrade packages versions to fix vulnerabilities with "npm audit fix" command.
* Fix code formatting.
* Add index.html.gz changes.
* Revert "Upgrade packages versions to fix vulnerabilities with "npm audit fix" command." This reverts commit 67687b7.
* easier approach
* add setTimeout

---------

Co-authored-by: igardev <ivailo.gardev@akros.ch>
Co-authored-by: Xuan Son Nguyen <son@huggingface.co>

