Add LeakyReLU² + 4ep Legal TTT submission #1039
Open
yufengli-oai wants to merge 3 commits into main from …
Conversation
icryo added a commit to icryo/parameter-golf that referenced this pull request on Mar 29, 2026:
PR openai#1039 claims 1.1184 BPB with just TTT_LR=0.0025, TTT_EPOCHS=4 (vs SOTA's 0.002/3ep). This is a potential record from a 2-line change.

TTT sweep now tests 4 configs:
- A: SOTA (lr=0.002, 3ep) — baseline reproduction
- B: PR openai#1039 (lr=0.0025, 4ep) — claimed 1.1184 BPB
- C: 5 epochs (lr=0.002, 5ep) — deeper adaptation
- D: Aggressive (lr=0.003, 4ep) — higher LR + more epochs

Also from PR review:
- DeltaNet "Medusa" achieves 0.77 BPB single seed (different arch)
- Bayesian posterior packets show early TTT chunks hit 1.109 then drift
- Block 7 c_k has kurtosis 11.9 (quantization outlier)
- AdamW TTT confirmed catastrophic (SGD is correct)

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
icryo added a commit to icryo/parameter-golf that referenced this pull request on Mar 29, 2026:
PR openai#1043 found early TTT chunks achieve 1.109 BPB (below SOTA!) but accumulated SGD updates cause drift to 1.126 by late chunks.

Fix: periodically reset model weights to the original checkpoint. This prevents catastrophic drift while preserving local adaptation.

Implementation:
- TTT_RESET_EVERY=N: reset weights every N chunks (0=disabled)
- Resets both weights and optimizer momentum state
- Uses in-place copy (no reallocation, parameter references preserved)

H100 sweep now tests 11 configurations: 6 temperatures × sliding eval, plus 5 TTT configs:
- A: SOTA baseline (lr=0.002, 3ep)
- B: PR openai#1039 (lr=0.0025, 4ep)
- C: 5 epochs (lr=0.002, 5ep)
- D: PR openai#1039 + reset/100 (anti-drift)
- E: PR openai#1039 + reset/50 (anti-drift)

If early chunks consistently hit 1.109 and reset prevents drift, the mean across all chunks could drop from 1.119 toward 1.110-1.114. That's record territory.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
icryo added a commit to icryo/parameter-golf that referenced this pull request on Mar 29, 2026:
Competition moved while we were experimenting locally:
- PR openai#634: 1.1178 BPB (Full GPTQ + XSA-all + selective pruning)
- PR openai#1060: 1.1122 BPB (+ coprime loader + BigramHash 2816)

Our contribution: TTT periodic reset on the PR openai#1060 base. PR openai#1060 found TTT unnecessary with Full GPTQ, but they didn't test TTT with anti-drift reset. If TTT drift was the reason it stopped helping, reset could unlock further gains.

Files:
- train_gpt_ours.py — PR openai#1060 + TTT reset mechanism
- train_gpt_pr634.py — Full GPTQ reference (for study)
- train_gpt_pr1060.py — Original PR openai#1060 (for comparison)
- run_h100.sh — Train once, sweep 4 TTT configs

TTT configs tested:
- A: SOTA (lr=0.002, 3ep) — baseline TTT
- B: PR openai#1039 (lr=0.0025, 4ep) — tuned TTT
- C: B + reset/100 — anti-drift, moderate
- D: B + reset/50 — anti-drift, aggressive

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
…11l-sdpa

# Conflicts:
#	README.md
Summary
A solution generated by Codex; I'm not sure about its performance.
Validation