
Add 11L RotaryFix + LegalTTT + BIGRAM3072 — val_bpb 1.11869 (3-seed mean) #714

Open
Upsalla wants to merge 2 commits into openai:main from Upsalla:submission/11L-rotaryfix-legalttt-bigram3072

Conversation

Upsalla commented Mar 25, 2026

Add 11L RotaryFix + LegalTTT + BIGRAM3072 — val_bpb 1.11869 (3-seed mean)

Key changes vs. existing entries:

  • Rotary NTK-Scaling bug fix: train_seq_len=2048 now correctly propagated to both base_model and eval_model (previously hardcoded to 1024); see the first sketch after this list
  • BIGRAM vocabulary size 3072 (vs 1536)
  • Late QAT threshold 0.57 for ~1700 QAT steps (vs ~525 steps at 0.15)
  • torch.no_grad() instead of torch.inference_mode() in the TTT scoring phase, to prevent autograd graph corruption when RoPE caches cross the phase boundary; see the second sketch, after the results
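
A minimal sketch of the rotary fix in the first bullet, assuming a per-block Rotary module that precomputes its cos/sin cache from a max_seq_len argument; the class, names, and 11-layer loop are illustrative, not the actual train_gpt.py code:

```python
import torch
import torch.nn as nn

class Rotary(nn.Module):
    # Illustrative rotary module: the cos/sin cache length comes from max_seq_len
    # instead of a hardcoded 1024.
    def __init__(self, head_dim: int, max_seq_len: int, base: float = 10000.0):
        super().__init__()
        inv_freq = 1.0 / (base ** (torch.arange(0, head_dim, 2).float() / head_dim))
        t = torch.arange(max_seq_len).float()
        freqs = torch.outer(t, inv_freq)
        self.register_buffer("cos", freqs.cos(), persistent=False)
        self.register_buffer("sin", freqs.sin(), persistent=False)

train_seq_len = 2048  # the value the fix propagates; previously one model silently got 1024
base_rotary = [Rotary(head_dim=64, max_seq_len=train_seq_len) for _ in range(11)]
eval_rotary = [Rotary(head_dim=64, max_seq_len=train_seq_len) for _ in range(11)]
```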

Results (8xH100 SXM, 600s training + ~425s TTT):
Seed 1337: legal_ttt_bpb = 1.11877
Seed 42: legal_ttt_bpb = 1.11836
Seed 2025: legal_ttt_bpb = 1.11893
Mean: 1.11869 ± 0.00024
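
On the no_grad() bullet: tensors created under torch.inference_mode() are inference tensors and cannot be saved for backward later, so a RoPE cache lazily built during an inference_mode scoring pass would break the subsequent TTT gradient step. A self-contained toy of that failure mode, using a stand-in cache and parameter rather than the PR's actual code:

```python
import torch

cache = {}

def positional_cache(n: int) -> torch.Tensor:
    # Lazily built on first use, i.e. during the scoring phase.
    if "cos" not in cache:
        cache["cos"] = torch.arange(n, dtype=torch.float32).cos()
    return cache["cos"]

w = torch.randn(8, requires_grad=True)      # stands in for parameters adapted by TTT

with torch.no_grad():                       # scoring phase
    score = (positional_cache(8) * w).sum()

# The TTT phase reuses the cached tensor inside an autograd-recording computation.
# Had the cache been created under torch.inference_mode(), this multiplication would
# fail, because inference tensors cannot be saved for backward.
loss = (positional_cache(8) * w).sum()
loss.backward()                             # fine: tensors made under no_grad() are ordinary tensors
```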

The previous commit only included records/ changes but left train_gpt.py
as the unmodified baseline script. This commit adds the actual modified
training script used to achieve val_bpb 1.11869 (3-seed mean).

Features included:
- Legal Score-First TTT (test-time training, causal, barrier-synced; see the first sketch after this list)
- BigramHash 3072 vocabulary
- LeakyReLU(0.5)^2 MLP activation (see the second sketch after this list)
- ValueEmbedding (VE128) on layers 9-10
- XSA (Cross-Sequence Attention) on last 4 layers
- EMA (decay=0.997) + Tight SWA (every 50 steps)
- Late QAT (GPTQ-lite int6 + lzma), threshold=0.57
- Sliding Window Eval (stride=64)
- Partial RoPE (16/64 dims), train_seq_len=2048 (Rotary NTK bug fixed)
- LN Scale (1/sqrt(layer+1))
- Parameter Banking + Parallel Muon optimizer
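
A toy sketch of the score-first TTT loop named in the first feature above: each evaluation chunk is scored before the model adapts on it, so no chunk's score benefits from gradient steps taken on that chunk. The tiny linear model, MSE loss, data, and optimizer settings are stand-ins, not the submission's code:

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
model = torch.nn.Linear(16, 16)                       # stand-in for the adapted network
opt = torch.optim.SGD(model.parameters(), lr=1e-2)
chunks = [torch.randn(32, 16) for _ in range(4)]      # stand-in eval chunks, in document order

total = 0.0
for chunk in chunks:
    with torch.no_grad():                             # 1) score first: this is what counts
        total += F.mse_loss(model(chunk), chunk).item()
    opt.zero_grad()
    F.mse_loss(model(chunk), chunk).backward()        # 2) only then adapt on the scored tokens
    opt.step()
print(total / len(chunks))
```

And a sketch of the LeakyReLU(0.5)^2 MLP activation and the 1/sqrt(layer+1) LN scale from the same list; dimensions, module layout, and where the scale is applied are illustrative assumptions:

```python
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

class MLP(nn.Module):
    def __init__(self, dim: int, hidden: int):
        super().__init__()
        self.fc_in = nn.Linear(dim, hidden, bias=False)
        self.fc_out = nn.Linear(hidden, dim, bias=False)

    def forward(self, x):
        h = F.leaky_relu(self.fc_in(x), negative_slope=0.5)
        return self.fc_out(h * h)                     # leaky_relu with slope 0.5, then squared

def ln_scale(layer_idx: int) -> float:
    # Per-layer scale 1/sqrt(layer+1); exactly which tensor it multiplies is not
    # specified in the feature list, so treat this as a placeholder helper.
    return 1.0 / math.sqrt(layer_idx + 1)
```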

MD5: 5926353668cf98f9c97b2ec171b59818
theLightArchitect added a commit to theLightArchitect/parameter-golf that referenced this pull request Mar 27, 2026
Four major additions to the Kuda Architecture:

1. Hedge Mixer (5-expert, eval-time): Multiplicative Weights Update mixing
   neural + unigram + bigram + trigram + entropy experts. Based on online
   learning theory (Freund & Schapire 1997); the same principle behind the
   PAQ/CMIX family of top-ranked compressors. Expected -0.065 BPB
   (validated in PR openai#700). See the first sketch after this list.

2. CROWN-Q warmdown penalty: lambda * mean(w^2 * delta^2 / 12) pushes
   weights into flat minima that survive quantization. delta^2/12 is the
   uniform quantization noise variance; w^2 is a diagonal Fisher proxy.
   Applied during warmdown only. From PR openai#693. See the second
   sketch below.

3. RoPE NTK fix: Propagate train_seq_len to all blocks' Rotary modules.
   Prevents positional encoding mismatch between train (2048) and eval.
   From PR openai#714 — produced the tightest seed variance in the competition.

4. TTT infrastructure: Score-first eval with SGD adaptation on scored
   tokens. FiLM-only TTT planned for Kuda recurrence mode.
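
A minimal sketch of the Hedge / Multiplicative Weights mixing in item 1, assuming each expert emits a per-token probability distribution over the vocabulary; the learning rate eta and the toy data are illustrative, not the Kuda implementation:

```python
import numpy as np

def hedge_mix(expert_probs: np.ndarray, weights: np.ndarray) -> np.ndarray:
    # expert_probs: (n_experts, vocab) distributions for the current token;
    # weights: (n_experts,) non-negative Hedge weights.
    w = weights / weights.sum()
    return w @ expert_probs                        # mixed distribution over the vocab

def hedge_update(weights, expert_probs, target, eta=0.1):
    # Multiplicative Weights Update (Freund & Schapire 1997): each expert is
    # reweighted by exp(-eta * log-loss) on the token actually observed.
    losses = -np.log(expert_probs[:, target] + 1e-12)
    return weights * np.exp(-eta * losses)

rng = np.random.default_rng(0)
probs = rng.dirichlet(np.ones(4), size=5)          # 5 experts over a 4-symbol toy vocab
w = np.ones(5)
mix = hedge_mix(probs, w)                          # distribution used to score this token
w = hedge_update(w, probs, target=2)               # then update after observing token 2
```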

All features verified locally: forward/backward, CROWN-Q penalty,
5-expert Hedge mixing, Hedge weight updates, RoPE propagation.
Script now 1,559 lines.
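
A sketch of the CROWN-Q penalty from item 2, lambda * mean(w^2 * delta^2 / 12); the symmetric per-tensor quantization step used for delta here is an assumption, since the commit message does not specify how delta is derived:

```python
import torch

def crownq_penalty(w: torch.Tensor, n_bits: int = 6, lam: float = 1e-4) -> torch.Tensor:
    # delta: assumed symmetric per-tensor quantization step; delta^2 / 12 is the variance
    # of uniform quantization noise, and w^2 acts as the diagonal Fisher proxy.
    delta = 2.0 * w.detach().abs().max() / (2 ** n_bits - 1)
    return lam * (w.pow(2) * (delta ** 2) / 12.0).mean()

# Applied during warmdown only, e.g. total_loss = ce_loss + crownq_penalty(weight_tensor)
```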

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
