Record: Sliding Window Eval, 2048 Vocab Size, fp16 embeddings, SWA, NorMuon, FA3; mean_val_bpb:1.160 #122
Open
mtybadger wants to merge 2 commits into openai:main from
Conversation
This was referenced Mar 25, 2026
Update 3/20: Added two more 8xH100 runs with `SEED=41,102`; the 3-run means are `final_w6e16_zstd22_roundtrip_exact val_bpb:1.17007988` and `final_sliding_window_exact val_bpb:1.16027254`.

Day 2! This record brings the ideas from my last work (#78), which was high vocab size, NorMuon, and mixed int6/int8 quantization, up to the frontier by copying a bunch of other people! Specifically, I take the STE and SWA ideas from @vmfunc (#89), the sliding window eval with seqlen=1024 and stride=64 from @mattqlf (#50) and @aquariouseworkman (#65), and the momentum/LR tuning from @spokane-way (#52) and @saml212 (#61). I also use FA3, which decreases step time by about 10ms: a total free lunch! N.b. I'm not sure if importing the FA3 library violates the 16MB code requirement, since that has been unclear so far. I expect that, in the spirit of the competition, used kernels should count toward the 16MB limit, so I'm working on bringing the FA3 kernels into the record folder as they do over on modded-nanogpt.

The tradeoffs are getting tough. I'm sticking to my guns in losing a layer for a higher vocab size, and I think everyone else is right that keeping embeddings in fp16 reduces the quant gap, which meant I had to take my vocab size down to compensate. It's really a question of whether we want more diversity in the vocab or more resolution in the representation, and I think there's a better optimum in between yet to be found.
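For readers unfamiliar with the sliding window eval: each window covers up to `seqlen` tokens but only the last `stride` tokens are scored, so every scored token sees long left context while each token is scored exactly once. A hypothetical index-bookkeeping sketch (not the actual code from #50/#65; names are illustrative):

```python
def sliding_windows(n_tokens: int, seqlen: int = 1024, stride: int = 64):
    """Yield (window_start, score_start, score_end) triples covering all tokens.

    Each window spans at most `seqlen` tokens; only the final `stride`
    tokens of each window are scored, so scored spans tile the sequence.
    """
    windows = []
    score_start = 0
    while score_start < n_tokens:
        score_end = min(score_start + stride, n_tokens)
        window_start = max(0, score_end - seqlen)
        windows.append((window_start, score_start, score_end))
        score_start = score_end
    return windows
```

The smaller the stride, the more context each scored token gets (and the more forward passes the eval costs), which is why the quoted `eval_time` grows so much versus the plain eval.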
Changes in this model from baseline:

- Data: `./data/download_hf_docs_and_tokenize.py --output-root ./data --tokenizer-config ./data/tokenizer_specs.json --max-train-tokens 8000000000 --tokenizer-train-docs 100000`, for a 50/50 val/train split. Tokenizers for sp1024, 2048, 4096 and 8192, with data available on my huggingface.
- Optimizer: NorMuon from modded-nanogpt, replacing Muon.

The run immediately before measured `step_avg:43.67ms` and `final_int8_zlib_roundtrip_exact val_bpb:1.22731147`.
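The 50/50 val/train split could be done with a deterministic hash over document identifiers, so the split is reproducible across tokenizer rebuilds. A hypothetical sketch (not the actual logic in `download_hf_docs_and_tokenize.py`):

```python
import hashlib

def assign_split(doc_id: str) -> str:
    """Deterministically route a document to 'train' or 'val' with 50/50 odds.

    Hash-based routing keeps the split stable regardless of download order,
    unlike a random shuffle with an ambient seed.
    """
    digest = hashlib.sha256(doc_id.encode("utf-8")).digest()
    return "train" if digest[0] % 2 == 0 else "val"
```

The same document always lands in the same half, and the first hash byte's parity is uniform, so the split is close to 50/50 in expectation.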
Key metrics (from `train.log`):

- `11132/20000` steps completed due to the wallclock cap, which is further than before!
- `val_loss:2.3953 val_bpb:1.1670`
- `val_loss:2.3982 val_bpb:1.1684 eval_time:1324ms`
- `val_loss:2.3780 val_bpb:1.1585 eval_time:205575ms`
- `train_time:600081ms step_avg:53.91ms`
- `15289740 bytes`
- `63530 bytes`
- `15353270 bytes`

Training volume:

- `524288` tokens/step
- `7224688640`

Included files:

- `train_gpt.py` (code snapshot used for the run)
- `train.log` (exact remote training log)
- `submission.json` (leaderboard metadata)
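The `val_loss` (nats/token) and `val_bpb` pairs above are related by a unit conversion: divide by ln 2 to get bits/token, then divide by the validation set's bytes-per-token ratio, which the quoted pairs imply is roughly 2.96 here. A minimal sketch of that conversion (the exact accounting in `train_gpt.py` may differ):

```python
import math

def bits_per_byte(nats_per_token: float, bytes_per_token: float) -> float:
    """Convert mean cross-entropy in nats/token to bits per UTF-8 byte."""
    return nats_per_token / math.log(2) / bytes_per_token
```

For example, with ~2.9613 bytes/token, a `val_loss` of 2.3953 nats comes out to ~1.167 bpb, matching the first pair in the log; bpb is the tokenizer-independent metric that makes different vocab sizes comparable.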