
11L + Hadamard Rotation + VE128 + cuDNN SDPA (val_bpb: 1.1365, 3-seed mean) #586

Open

EaCognitive wants to merge 2 commits into openai:main from EaCognitive:submission/hadamard-ve128-quip-lite

Conversation


@EaCognitive EaCognitive commented Mar 23, 2026

11L + Hadamard Rotation + VE128 + cuDNN SDPA

val_bpb: 1.1365 (sliding window stride=64, 3-seed mean, std 0.0005) | ~15.6 MB | 8xH100 SXM, 600s

3-Seed Results

| Seed | Steps | Pre-quant BPB | Sliding BPB | Artifact (bytes) | Compression |
|-----:|------:|--------------:|------------:|-----------------:|------------:|
| 1337 | 8098  | 1.1512        | 1.1364      | 15,618,718       | 1.75x       |
| 42   | 8102  | 1.1513        | 1.1361      | 15,629,540       | 1.75x       |
| 2024 | 7960  | 1.1521        | 1.1370      | 15,600,361       | 1.76x       |

Technique: Data-Free Hadamard Rotation for Int6 Quantization

A Walsh-Hadamard rotation is applied to each weight matrix before int6 per-row quantization. The orthogonal rotation spreads outlier values uniformly across columns, improving zstd compression from 1.70x to 1.76x and reducing the quantization gap from 0.0093 to 0.0084 BPB.

This technique is data-free: no calibration samples, no training data access at eval time. The rotation matrix is deterministic from the weight dimension.
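The PR does not inline the implementation, but the idea can be sketched in a few lines of NumPy (function names are my own; the submission's actual code may differ). The Hadamard matrix is built deterministically from the weight dimension via the Sylvester construction, the rotated weights are quantized to int6 per row, and because the rotation is orthogonal it is undone exactly at load time with a transpose:

```python
import numpy as np

def hadamard_matrix(n: int) -> np.ndarray:
    """Deterministic orthonormal Walsh-Hadamard matrix (Sylvester construction).

    n must be a power of two; no calibration data is involved.
    """
    assert n > 0 and n & (n - 1) == 0, "n must be a power of two"
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H / np.sqrt(n)  # orthonormal: H @ H.T == I

def quantize_int6_rows(W: np.ndarray):
    """Per-row symmetric int6 quantization with levels in [-31, 31]."""
    scale = np.abs(W).max(axis=1, keepdims=True) / 31.0
    scale = np.where(scale == 0, 1.0, scale)  # guard all-zero rows
    q = np.clip(np.round(W / scale), -31, 31).astype(np.int8)
    return q, scale

# Rotate, quantize, then invert the rotation at load time.
rng = np.random.default_rng(0)
W = rng.standard_normal((64, 128))
H = hadamard_matrix(W.shape[1])
q, s = quantize_int6_rows(W @ H)   # rotation spreads outliers across columns
W_hat = (q * s) @ H.T              # H is orthogonal, so H.T inverts it
err = np.abs(W - W_hat).max()      # small: bounded by the quantization step
```

Because the rotation depends only on the matrix width, the decoder can reconstruct `H` from the shape alone; only `q` and the per-row scales need to be stored in the artifact.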

No other open or merged PR uses rotation-based quantization.

Compression Enables Architecture

The 0.06x improvement in compression ratio (1.70x to 1.76x) recovers roughly 530KB of artifact headroom within the 16MB budget, directly enabling Shared Value Embeddings (VE128 on layers 9-10), which previously did not fit within the 44KB of remaining headroom.
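A back-of-envelope check of that figure, using the seed-1337 artifact size from the table above (the uncompressed size is inferred from the reported ratio, not measured, and the 16MB budget is assumed to be decimal megabytes):

```python
# Headroom arithmetic, numbers taken from this PR's results table.
BUDGET = 16_000_000                    # assumed 16 MB artifact budget (decimal)
artifact = 15_618_718                  # bytes, seed-1337 artifact
raw = artifact * 1.76                  # inferred uncompressed size, ~27.5 MB
artifact_at_170 = raw / 1.70           # same weights at the baseline 1.70x ratio
recovered = artifact_at_170 - artifact # headroom gained by 1.70x -> 1.76x
headroom = BUDGET - artifact           # remaining budget after VE128, ~380 KB
```

`recovered` lands around 550KB, consistent with the ~530KB claim to within rounding of the reported ratios.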

Ablation

| Config                         | Sliding BPB | Compression | Headroom | Quant Gap |
|--------------------------------|------------:|------------:|---------:|----------:|
| Baseline (no Hadamard, no VE)  | 1.1372      | 1.70x       | 44KB     | 0.0093    |
| + Hadamard rotation            | 1.1377      | 1.78x       | 712KB    | 0.0091    |
| + VE128 (enabled by headroom)  | 1.1365      | 1.76x       | 400KB    | 0.0084    |

Findings

  • Hadamard rotation and GPTQ are substitutes at int6 precision: full GPTQ (actorder + Cholesky) provides no additional improvement when Hadamard rotation is present (tested three times with identical results).
  • CPU parameter probe guided hyperparameter selection across 9.5M configurations, reducing GPU compute by ~84%.
  • No TTT. No training data access at eval time.

Architecture

11 layers, 512-dim, 8 heads (4 KV heads, GQA), MLP 3x relu-squared, XSA on last 4 layers, Partial RoPE (16/64), LN Scale, U-Net skip connections, SmearGate, BigramHash(2048), EMA 0.997, cuDNN SDPA. Muon lr=0.025 + AdamW lr=0.035. Warmdown 3500 steps (cosine).
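For readability, the hyperparameters listed above collected into one place (field names are illustrative, not taken from the submission's code; values are from this PR):

```python
# Architecture and optimizer settings as stated in this PR.
# Field names are my own shorthand, not the submission's identifiers.
config = dict(
    n_layers=11,
    d_model=512,
    n_heads=8,
    n_kv_heads=4,          # GQA
    head_dim=64,
    mlp_ratio=3,           # relu-squared activation
    xsa_last_n_layers=4,
    rope_dims=16,          # partial RoPE: 16 of 64 head dims
    value_embed_dim=128,   # VE128, shared on layers 9-10
    bigram_hash_size=2048,
    ema_decay=0.997,
    muon_lr=0.025,
    adamw_lr=0.035,
    warmdown_steps=3500,   # cosine schedule
)
```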

Erick Aleman | EA Cognitive | www.eacognitive.com | github.com/eacognitive

Data-free Walsh-Hadamard rotation for int6 quantization. Improves
zstd compression from 1.70x to 1.76x, recovering 530KB of artifact
headroom that enables Shared Value Embeddings (VE128).

No calibration data. No training data access at eval time. No TTT.
3-seed mean: 1.1365 +/- 0.0005 BPB. All artifacts under 16MB.

Erick Aleman | EA Cognitive | www.eacognitive.com
@EaCognitive force-pushed the submission/hadamard-ve128-quip-lite branch from 82e3375 to 0915466 on March 24, 2026 at 20:03
@EaCognitive changed the title from "Record: 11L + Hadamard Rotation + VE128 + cuDNN SDPA (val_bpb: 1.1365)" to "11L + Hadamard Rotation + VE128 + cuDNN SDPA (val_bpb: 1.1365, 3-seed mean)" on Mar 24, 2026
@EaCognitive (Author) commented:

This submission validates our current approach. Ongoing research has identified several promising extensions from recent literature that we are ready to test, and a GPU grant application has been submitted. We welcome any feedback. Thank you.
