Windows fixes #31

Merged
ggerganov merged 5 commits into ggml-org:master from etra0:windows-fixes
Mar 12, 2023

Conversation

Contributor

etra0 commented Mar 12, 2023

This is the initial PR to get things compiling on Windows.

In particular, MSVC is very picky about which features you can and cannot use.

With C++11:

  • You cannot use designated initializers (when initializing a struct, you cannot specify the field names).
  • You cannot use VLAs, so I changed them to a std::vector (see the sketch below).
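
For illustration, a minimal sketch of both workarounds (the `params` struct, its field names, and the sizes are hypothetical, not taken from the actual code):

```cpp
#include <vector>

// Hypothetical struct, for illustration only; not the actual ggml code.
struct params {
    int n_threads;
    int n_ctx;
};

void example(int n) {
    // C++20 designated initializers are rejected by MSVC in C++11 mode:
    //   params p = { .n_threads = 4, .n_ctx = 512 };
    // Positional aggregate initialization compiles everywhere:
    params p = { 4, 512 };
    (void)p;

    // A VLA such as `float buf[n];` is a C99/GNU extension that MSVC
    // does not support in C++; std::vector is the portable replacement:
    std::vector<float> buf(n);
    (void)buf;
}
```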

A PR for the CMake build system (as agreed in #22) will be submitted separately.

These changes were tested with MSVC 19.34.31937.0 (Visual Studio 2022) and on macOS 12.6 with Apple clang version 13.1.6.

ggerganov mentioned this pull request Mar 12, 2023
etra0 requested a review from ggerganov March 12, 2023 15:47
ggerganov merged commit eb062bb into ggml-org:master Mar 12, 2023
Hades32 pushed a commit to Hades32/llama.cpp that referenced this pull request Mar 21, 2023
theo77186 pushed a commit to theo77186/llama.cpp that referenced this pull request Oct 28, 2025
jesusmb1995 pushed a commit to jesusmb1995/llama.cpp that referenced this pull request Oct 30, 2025
SamuelOliveirads pushed a commit to SamuelOliveirads/llama.cpp that referenced this pull request Dec 29, 2025
Ref ggml-org#29

Co-authored-by: Iwan Kawrakow <iwan.kawrakow@gmail.com>
rururush pushed a commit to USTC-ADSL/llama.cpp that referenced this pull request Mar 16, 2026
* print build type

* wip

* print compiling flags

* wip

* wip
TheTom referenced this pull request in TheTom/llama-cpp-turboquant Mar 25, 2026
…gml-org#31

Block 128: PPL=165.6 (same as block 32)
Disabled Q rotation: PPL=165.6 (same)
Root cause: dynamic_cast fails for MoE hybrid memory context.
Q rotation and V inverse rotation never execute.

Co-Authored-By: tturney@psyguard.ai
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
TheTom referenced this pull request in TheTom/llama-cpp-turboquant Mar 25, 2026
…#30

ROOT CAUSE: pre-rotate-queries never executed because:
1. Q ne[0]=256 (GQA concatenated heads), rotation matrix ne[0]=128
2. mctx dynamic_cast failed for MoE hybrid memory

FIX: put inverse WHT rotation back in dequantize_full_block.
This is slower (10.7 tok/s vs 77.7) but produces CORRECT results.

PERPLEXITY RESULTS:
- f16:     6.121
- q8_0:    6.111
- q4_0:    6.142
- turbo3:  6.194 (+1.2% vs q8_0) ✅

The speed optimization (pre-rotate-queries) needs to be reimplemented
to work with GQA head layout and hybrid memory types.

Co-Authored-By: tturney@psyguard.ai
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
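
A side note on the failure mode described in the commit above: a `dynamic_cast` guard silently skips its branch when the runtime type differs, so the code fails without raising any error. A generic sketch of that pattern (all type and function names here are hypothetical, not from either codebase):

```cpp
#include <cstdio>

// Hypothetical memory-context hierarchy, for illustration only.
struct memory_context { virtual ~memory_context() = default; };
struct unified_context : memory_context {};
struct hybrid_context  : memory_context {};  // e.g. what a MoE hybrid model uses

void apply_rotation(memory_context * mctx) {
    // If mctx actually points to a hybrid_context, this cast yields
    // nullptr and the rotation is skipped silently -- no error raised,
    // just wrong results downstream.
    if (auto * u = dynamic_cast<unified_context *>(mctx)) {
        (void)u;
        std::printf("rotating (unified context)\n");
    }
}

int main() {
    hybrid_context hc;
    apply_rotation(&hc);  // prints nothing: the guarded branch never runs
}
```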
spiritbuun referenced this pull request in spiritbuun/llama-cpp-turboquant-cuda Mar 27, 2026
- turbo4 K+V results on Qwen3.5-27B (-0.32% vs q8_0) and Qwen3-14B (+6.3%)
- Sparse V dequant benchmarks: MoE native dequant +10.9% at 8K
- Gemma-3 turbo3 results post-iSWA fix (+3.3%)
- KVLinC no-K-rotation negative result
- Speculative decoding negative result
- CUDA 13.2 compatibility verified
- Experiments TheTom#31, TheTom#39, TheTom#42, TheTom#45, ggml-org#49, ggml-org#50, ggml-org#51 status updates

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
didlawowo pushed a commit to didlawowo/llama.cpp that referenced this pull request Mar 27, 2026