
build : remove LLAMA_HTTPLIB option #19623

Merged
ngxson merged 1 commit into ggml-org:master from angt:build-remove-llama_httplib-option
Feb 15, 2026

Conversation

@angt
Contributor

@angt angt commented Feb 14, 2026

This option was introduced as a workaround because cpp-httplib could not build on visionOS. Since it has been fixed and now compiles on all platforms, we can remove it and simplify many things.

Signed-off-by: Adrien Gallouët <angt@huggingface.co>
@github-actions github-actions bot added the build (Compilation issues), script (Script related), examples, python (python script changes), and server labels Feb 14, 2026
@ngxson ngxson merged commit 9e118b9 into ggml-org:master Feb 15, 2026
81 checks passed
@LostRuins
Collaborator

FYI, this breaks compilation on Win64devkit; httplib does not compile correctly there.

Example of errors:

undefined reference to `__imp_setsockopt'
undefined reference to `__imp_freeaddrinfo'
undefined reference to `__imp_WSACleanup'
undefined reference to `WSAPoll'
undefined reference to `__imp_shutdown'

I would strongly suggest not forcing httplib to be used in all builds. It is possible that there are other environments in which it does not work either.

LostRuins added a commit to LostRuins/koboldcpp that referenced this pull request Feb 16, 2026
@ngxson
Contributor

ngxson commented Feb 16, 2026

@LostRuins do you also build libcommon and llama-server?

It seems to me that httplib will always be built after this PR. We should only build httplib if some part of the project depends on it.

option(LLAMA_TESTS_INSTALL "llama: install tests" ON)

# 3rd party libs
option(LLAMA_HTTPLIB "llama: httplib for downloading functionality" ON)
Contributor

@ngxson ngxson Feb 16, 2026


Probably better to add a new option, LLAMA_COMMON_DOWNLOAD, to enable/disable download functionality, cc @angt
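A minimal sketch of how such an option could gate the download code, assuming a hypothetical LLAMA_COMMON_DOWNLOAD option and that download.cpp is the only file in common pulling in httplib (the option name comes from the suggestion above; the target name and file layout are assumptions, not the actual build files):

```cmake
# Hypothetical sketch: gate download support behind an option so that
# builds which do not need networking never compile or link httplib.
option(LLAMA_COMMON_DOWNLOAD "llama: enable download functionality" ON)

if (LLAMA_COMMON_DOWNLOAD)
    target_sources(common PRIVATE download.cpp)
    target_compile_definitions(common PRIVATE LLAMA_COMMON_DOWNLOAD)
    target_link_libraries(common PRIVATE cpp-httplib)
endif()
```

With the compile definition in place, code such as arg.cpp could keep its references to common_download_file_single behind an #ifdef, so a build with the option off simply reports that downloading is unavailable.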

Contributor Author

@angt angt Feb 16, 2026


Yes, or maybe just not include download in common by default?

@LostRuins
Collaborator

@ngxson it's an unfortunate long dependency chain that requires httplib

I just want to build completion.exe -> so I build completion.cpp -> which requires arg.cpp (for the launch params) -> which requires download.cpp (for the definition of common_download_file_single) -> which now requires httplib.cpp (previously this was OK because I could disable LLAMA_USE_HTTPLIB)

So, how can I build completion.exe without httplib?

@angt
Contributor Author

angt commented Feb 16, 2026

Hi @LostRuins, thanks for reporting the issue.

This seems to be specific to the Win64devkit build configuration, as cpp-httplib compiles and works on Windows.
I believe it is a very simple issue to fix; it might be related to this code:

if (WIN32 AND NOT MSVC)
    target_link_libraries(${TARGET} PRIVATE ws2_32)
endif()
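The follow-up fix that was eventually merged ("build : link ws2_32 as PUBLIC on Windows", ggml-org#19666 in the commit list further down) addresses exactly this: with PRIVATE linkage, ws2_32 is not propagated to targets that consume the library, so MinGW-style toolchains like Win64devkit fail at link time with the __imp_* errors above. A sketch of the change:

```cmake
# Link Winsock with PUBLIC visibility so that any target linking against
# this one also links ws2_32, resolving __imp_setsockopt and friends on
# MinGW-style toolchains (MSVC typically pulls Winsock in via a pragma).
if (WIN32 AND NOT MSVC)
    target_link_libraries(${TARGET} PUBLIC ws2_32)
endif()
```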

@angt
Contributor Author

angt commented Feb 16, 2026

Made a PR, let's see if the CI is happy.

@LostRuins
Collaborator

I still think it would be better for the httplib library to be toggleable. Some other platforms may not support it, or people might simply not want to link it; this forces bundling downloading and networking capabilities when they aren't really needed in many cases.

@ngxson
Contributor

ngxson commented Feb 16, 2026

+1 for bringing back the ability to disable download functionalities at compile time.

@angt this is required because it doesn't make sense in certain use cases, for example wasm/emscripten or building it as an FFI binding. While it technically compiles, it will become dead code.

@LostRuins
Collaborator

I agree with ngxson

@angt
Contributor Author

angt commented Feb 16, 2026

Yes, of course, but we still need to fix the bug.
Then we can isolate the download code in a better way, so it is used only when required.

michaelneale added a commit to michaelneale/llama.cpp that referenced this pull request Feb 17, 2026
* upstream/master: (88 commits)
  ci : bump komac version (ggml-org#19682)
  build : link ws2_32 as PUBLIC on Windows (ggml-org#19666)
  build : cleanup library linking logic (ggml-org#19665)
  convert : add JoyAI-LLM-Flash (ggml-org#19651)
  perplexity: add proper batching (ggml-org#19661)
  common : inline functions (ggml-org#18639)
  ggml : make `ggml_is_view` as API (ggml-org#19539)
  model: Add support for Tiny Aya Models (ggml-org#19611)
  build : rework llama_option_depr to handle LLAMA_CURL (ggml-org#19658)
  Adjust workaround for ROCWMMA_FATTN/GFX9 to only newer ROCm veresions (ggml-org#19591)
  models : deduplicate delta-net graphs for Qwen family (ggml-org#19597)
  graph : fix KQ mask, lora, cvec reuse checks (ggml-org#19644)
  ggml: aarch64: Implement SVE in Gemm q4_k 8x8 q8_k Kernel  (ggml-org#19132)
  sync : ggml
  ggml : bump version to 0.9.7 (ggml/1425)
  ggml : bump version to 0.9.6 (ggml/1423)
  cuda: optimize iq2xxs/iq2xs/iq3xxs dequantization (ggml-org#19624)
  docs: update s390x build docs (ggml-org#19643)
  build : remove LLAMA_HTTPLIB option (ggml-org#19623)
  cmake : check if KleidiAI API has been fetched (ggml-org#19640)
  ...
liparetejas pushed a commit to liparetejas/llama.cpp that referenced this pull request Feb 23, 2026
bartowski1182 pushed a commit to bartowski1182/llama.cpp that referenced this pull request Mar 2, 2026
ArberSephirotheca pushed a commit to ArberSephirotheca/llama.cpp that referenced this pull request Mar 3, 2026

Labels

build (Compilation issues), examples, python (python script changes), script (Script related), server

Projects

None yet

Development

Successfully merging this pull request may close these issues.

4 participants