Record: Seq2048 training + eval (val_bpb=1.2101) #136

Open

ibarrajo wants to merge 1 commit into openai:main from
Conversation
Training and evaluating at sequence length 2048 instead of 1024. No architecture changes; same 9-layer 512-dim baseline. 8xH100 SXM, 11,417 steps in 600s, 15.87MB artifact.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
ThomAub pushed a commit to ThomAub/parameter-golf that referenced this pull request on Mar 22, 2026:

> Many TTT submissions (openai#136, openai#152, openai#254, openai#264, openai#338, openai#398, openai#417, openai#421, openai#442) flagged as potentially invalid for adapting on eval tokens BEFORE scoring them. Added correct score-then-adapt protocol with implementation guide. https://claude.ai/code/session_01M5XTtyz2Zdq5BDeh9qNn9y
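The flagged issue is easy to state in code. Below is a minimal sketch of one way to implement score-then-adapt, assuming a PyTorch `model`, an `optimizer`, and an iterable of `(inputs, targets)` eval batches; these names are hypothetical stand-ins, not identifiers from this repository or the linked guide.

```python
import torch
import torch.nn.functional as F

def score_then_adapt(model, optimizer, eval_batches):
    """Score each eval batch BEFORE the model adapts on it."""
    total_loss, total_tokens = 0.0, 0
    for inputs, targets in eval_batches:
        # 1) Score first, before the model has seen these tokens.
        model.eval()
        with torch.no_grad():
            logits = model(inputs)
            loss = F.cross_entropy(
                logits.view(-1, logits.size(-1)), targets.view(-1),
                reduction="sum",
            )
        total_loss += loss.item()
        total_tokens += targets.numel()

        # 2) Only then adapt on the tokens that were just scored.
        model.train()
        logits = model(inputs)
        adapt_loss = F.cross_entropy(
            logits.view(-1, logits.size(-1)), targets.view(-1)
        )
        optimizer.zero_grad()
        adapt_loss.backward()
        optimizer.step()

    # Returns mean nats per token; bpb would divide by bytes instead.
    return total_loss / total_tokens
```

The invalid variant simply swaps the two steps, letting the model train on tokens before they count toward the score.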
Seq2048 Training + Eval (val_bpb: 1.2101)
val_bpb: 1.2101 (post-quant int8+zlib roundtrip) | 15.87 MB | 8xH100 SXM, 11,417 steps in 600s
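As context for "post-quant int8+zlib roundtrip" above, here is a hedged sketch of what such a measurement can look like: quantize each float tensor to int8, compress the bytes with zlib to get the artifact size, then dequantize and re-evaluate the restored weights. The symmetric per-tensor scaling below is an assumption; the submission's actual scheme may differ.

```python
import zlib
import torch

def int8_zlib_roundtrip(state_dict):
    """Quantize float tensors to int8, measure zlib-compressed size,
    and return dequantized weights for post-quant evaluation."""
    compressed_bytes = 0
    restored = {}
    for name, w in state_dict.items():
        if not w.is_floating_point():        # leave int buffers untouched
            restored[name] = w
            continue
        scale = w.abs().max().clamp(min=1e-8) / 127.0   # per-tensor scale (assumed)
        q = torch.clamp((w / scale).round(), -127, 127).to(torch.int8)
        compressed_bytes += len(zlib.compress(q.cpu().numpy().tobytes()))
        restored[name] = q.to(w.dtype) * scale          # dequantized copy
    return restored, compressed_bytes / 1e6             # (weights, size in MB)
```

Loading the restored weights back into the model and re-running validation yields the post-roundtrip bpb, so a number reported this way already includes any quantization loss.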
Approach
One change to the baseline: training and evaluating at sequence length 2048 instead of 1024. The model learns real long-range dependencies during training rather than relying on RoPE position extrapolation at eval time.
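Concretely, this is a two-value diff in a hypothetical config; the variable names below are illustrative, not this repo's actual settings:

```python
sequence_length = 2048    # was 1024
device_batch_size = 256   # was 512, halved so the token budget is unchanged
tokens_per_step = sequence_length * device_batch_size
assert tokens_per_step == 524_288   # same 524K tokens per step as the baseline
```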
Why This Works
At seq1024, the model only sees 1024-token windows during training. At eval time with longer context, the model extrapolates RoPE positions it never trained on — the attention patterns are untested. Training at seq2048 means the model has practiced using 2048 tokens of context, so eval at 2048 is interpolation, not extrapolation.
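A quick way to see the extrapolation gap is to look at the RoPE rotation angles themselves: position p rotates dimension pair i by p * base^(-2i/d), so every angle needed at position 2047 is roughly twice the largest angle ever produced during seq1024 training. The head dimension and base below are common defaults, assumed rather than taken from this repo:

```python
import torch

d, base = 64, 10000.0                          # head dim and RoPE base (assumed)
inv_freq = base ** (-torch.arange(0, d, 2).float() / d)

train_angles = 1023 * inv_freq                 # largest angles seen at seq 1024
eval_angles = 2047 * inv_freq                  # angles needed at seq 2048

# Every frequency pair must rotate ~2x further than anything seen in training:
print((eval_angles / train_angles).min().item())   # ≈ 2.001
```

Training at 2048 puts those angles inside the training distribution, which is the interpolation-vs-extrapolation point above.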
Each training step still processes the same total number of tokens: 256 sequences of 2048 and 512 sequences of 1024 both come to 524,288 (~524K) tokens. Step time is identical.
Results
Development Context
This was validated through systematic experimentation:
Command
🤖 Generated with Claude Code