
Record: 11L EMA + BigramHash(12288) + Mixed Int5 + FA3 (1.1354) #466

Open
simonbissonnette wants to merge 1 commit into openai:main from simonbissonnette:submission/11l-ema-bigram12288-mixed-int5-fa3

Conversation

@simonbissonnette

Summary

This PR adds a main-track submission attempt for the Parameter Golf challenge based on an 11-layer, 512-dim model with:

  • EMA (0.997)
  • BigramHash (12288 buckets, dim 128), sketched below
  • mixed low-bit quantization
  • stride-64 sliding evaluation
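
For anyone skimming the recipe, here is a minimal, illustrative sketch of the bigram-hash embedding described above; the class name, the multiplicative hash constant, and the roll-based token pairing are assumptions made for the sketch, not the actual code in train_gpt.py.

```python
import torch
import torch.nn as nn

class BigramHashEmbedding(nn.Module):
    """Hash consecutive token pairs into a fixed number of buckets and
    look up a small auxiliary embedding for each position (sketch only)."""

    def __init__(self, num_buckets: int = 12288, dim: int = 128):
        super().__init__()
        self.num_buckets = num_buckets
        self.table = nn.Embedding(num_buckets, dim)

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        # tokens: (batch, seq) int64 token ids
        prev = torch.roll(tokens, shifts=1, dims=1)
        prev[:, 0] = 0  # no preceding token at position 0
        # Cheap multiplicative hash of each (previous, current) pair.
        pair = prev * 1000003 + tokens
        bucket = pair % self.num_buckets
        return self.table(bucket)  # (batch, seq, dim) auxiliary embedding
```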

3-Seed Results

  • Seed 42: 1.13593695 val_bpb, 15,967,704 bytes total
  • Seed 471: 1.13389376 val_bpb, 15,663,365 bytes total
  • Seed 777: 1.13626774 val_bpb, 15,660,237 bytes total
  • Mean: 1.135366
  • Std: 0.001286 (quick check below)
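
The mean and std above are the sample statistics over the three seeds and can be checked directly:

```python
import statistics

val_bpb = [1.13593695, 1.13389376, 1.13626774]  # seeds 42, 471, 777
print(round(statistics.mean(val_bpb), 6))   # 1.135366
print(round(statistics.stdev(val_bpb), 6))  # 0.001286 (sample std, n - 1 in the denominator)
```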

Notes

  • All three runs use the same train_gpt.py snapshot and the same hyperparameter recipe, differing only by seed.
  • This draft explicitly discloses that the current FA3 path uses kernels-community/flash-attn3, which fetches the FA3 kernel package at runtime (a sketch of that load path follows this list).
  • No external model weights, prompts, or user code are fetched; the only concern is the runtime acquisition of the FA3 kernel package itself.
  • All three archived logs end with the exact final metric line.
  • I understand this may not beat the current open-PR SOTA, but I still wanted to submit a clean, reproducible main-track attempt.
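
For clarity on the disclosure, the load path looks roughly like the sketch below, assuming the Hugging Face kernels loader is what resolves kernels-community/flash-attn3; the attention entry-point name and tensor layout are assumptions, and the actual call in train_gpt.py may differ.

```python
import torch
from kernels import get_kernel

# Downloads and caches the compiled FA3 kernel package from the Hub at runtime;
# no model weights, prompts, or user code are fetched, only the kernel binaries.
flash_attn3 = get_kernel("kernels-community/flash-attn3")

# Dummy (batch, seq, heads, head_dim) bf16 tensors in the layout FA-style kernels expect.
q = k = v = torch.randn(1, 128, 8, 64, dtype=torch.bfloat16, device="cuda")
out = flash_attn3.flash_attn_func(q, k, v, causal=True)  # entry-point name assumed
```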

@mohosy

mohosy commented Mar 23, 2026

12288 buckets is a nice bump over 10240, did you ablate that or just go bigger for the hell of it lol. Also, the FA3 kernel fetch disclosure is appreciated, that's good practice.

@simonbissonnette
Author

12288 buckets is a nice bump over 10240, did you ablate that or just go bigger for the hell of it lol. Also, the FA3 kernel fetch disclosure is appreciated, that's good practice.

Thanks!

After some initial quantization work, I ended up with a bit of spare artifact budget, so I used part of it to grow the BigramHash table and improve BPB.

12288 ended up being the best practical tradeoff for this submission after a few trial-and-error runs. It gave a real gain over the smaller setting while still keeping all three submission seeds under the 16 MB cap.
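
As a rough illustration of that budget tradeoff (assuming, purely for illustration, that the dim-128 table is packed at 5 bits per weight; the actual packing is not stated here), the bucket increase costs on the order of 0.16 MB of the 16 MB cap:

```python
# Extra bytes from growing the bigram table from 10240 to 12288 buckets,
# under the assumed dim-128, 5-bit-per-weight packing.
extra_buckets = 12288 - 10240
extra_bytes = extra_buckets * 128 * 5 / 8
print(extra_bytes)  # 163840.0 bytes, roughly 0.16 MB
```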
