11L + Hadamard Rotation + VE128 + cuDNN SDPA (val_bpb: 1.1365, 3-seed mean)#586
Open

EaCognitive wants to merge 2 commits into openai:main from
Conversation
e32e92a to 82e3375
Data-free Walsh-Hadamard rotation for int6 quantization. Improves zstd compression from 1.70x to 1.76x, recovering 530KB of artifact headroom that enables Shared Value Embeddings (VE128). No calibration data. No training data access at eval time. No TTT. 3-seed mean: 1.1365 +/- 0.0005 BPB. All artifacts under 16MB. Erick Aleman | EA Cognitive | www.eacognitive.com
82e3375 to 0915466
Author

This submission validates our current approach. Ongoing research has identified several promising extensions from recent literature that we are ready to test. A GPU grant request has been submitted. We welcome any feedback. Thank you.
11L + Hadamard Rotation + VE128 + cuDNN SDPA
val_bpb: 1.1365 (sliding window stride=64, 3-seed mean, std 0.0005) | ~15.6 MB | 8xH100 SXM, 600s
3-Seed Results
Technique: Data-Free Hadamard Rotation for Int6 Quantization
A Walsh-Hadamard rotation is applied to weight matrices before int6 per-row quantization. The orthogonal rotation spreads outlier values more uniformly across coordinates, improving zstd compression from 1.70x to 1.76x and reducing the quantization gap from 0.0093 to 0.0084 BPB.
This technique is data-free: no calibration samples, no training data access at eval time. The rotation matrix is deterministic from the weight dimension.
No other open or merged PR uses rotation-based quantization.
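The rotation step can be sketched as follows. This is a minimal NumPy illustration of the general technique, not the submission's actual code; function names and the symmetric int6 range [-31, 31] are assumptions. The rotation matrix is built deterministically from the weight dimension via the Sylvester construction, so no calibration data is needed.

```python
import numpy as np

def hadamard(n: int) -> np.ndarray:
    # Sylvester construction; n must be a power of two (512 here qualifies).
    assert n > 0 and n & (n - 1) == 0
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

def quantize_int6_per_row(W: np.ndarray):
    # Symmetric per-row int6: codes in [-31, 31], one scale per row.
    scale = np.abs(W).max(axis=1, keepdims=True) / 31.0
    scale[scale == 0] = 1.0
    q = np.round(W / scale).astype(np.int8)
    return q, scale

def rotated_quantize(W: np.ndarray):
    n = W.shape[1]
    R = hadamard(n) / np.sqrt(n)   # orthogonal and data-free
    q, scale = quantize_int6_per_row(W @ R)
    return q, scale, R

def dequantize(q: np.ndarray, scale: np.ndarray, R: np.ndarray) -> np.ndarray:
    # Undo the rotation: R is orthogonal, so its inverse is R.T.
    return (q * scale) @ R.T
```

Because the rotation is orthogonal, it preserves the Frobenius norm of the quantization error while flattening per-row outliers, which is what shrinks both the per-row scales and the zstd-compressed size.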
Compression Enables Architecture
The 0.06x improvement in compression ratio recovers ~530KB of artifact headroom within the 16MB budget, directly enabling Shared Value Embeddings (VE128 on layers 9-10), which previously did not fit within the remaining 44KB of headroom.
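The headroom figure follows from simple arithmetic, assuming the ~15.6 MB artifact is dominated by the zstd-compressed weights (a back-of-envelope check, not a measurement from the submission):

```python
# Same raw payload, better zstd ratio -> freed headroom under the 16 MB cap.
raw_mb = 15.6 * 1.70             # implied uncompressed payload, ~26.5 MB
old_compressed = raw_mb / 1.70   # 15.6 MB at the 1.70x ratio
new_compressed = raw_mb / 1.76   # ~15.07 MB at the 1.76x ratio
freed_kb = (old_compressed - new_compressed) * 1000
# freed_kb comes out around 530 KB, matching the stated headroom
```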
Ablation
Findings
Architecture
11 layers, 512-dim, 8 heads (4 KV heads, GQA), MLP 3x relu-squared, XSA on last 4 layers, Partial RoPE (16/64), LN Scale, U-Net skip connections, SmearGate, BigramHash(2048), EMA 0.997, cuDNN SDPA. Muon lr=0.025 + AdamW lr=0.035. Warmdown 3500 steps (cosine).
Erick Aleman | EA Cognitive | www.eacognitive.com | github.com/eacognitive