
v0#1

Merged
evnkm merged 3 commits into main from evan
Mar 21, 2026
Conversation

@evnkm
Owner

evnkm commented Mar 21, 2026

No description provided.

@evnkm evnkm merged commit b6af3f1 into main Mar 21, 2026
evnkm added a commit that referenced this pull request Mar 21, 2026
Rewrote train_gpt_shared.py with the full SOTA stack from the #1 leaderboard
submission (10-layer GPT, BigramHash, SmearGate, mixed int5/int6 quantization,
SWA, Muon with WD=0.04, magnitude pruning, zstd-22 compression, sliding-window
eval).

Baseline result: val_bpb = 1.1438 (vs SOTA 1.1428) on 8xH100 in 600s.
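For context, a minimal sketch of what the zstd-22 size check might look like,
using the standard zstandard package (names and details here are assumptions,
not the repo's actual code):

```python
# Hypothetical sketch: measure the zstd-22 compressed checkpoint size
# against a fixed byte budget. Not the repo's actual implementation.
import io

import torch
import zstandard  # pip install zstandard

def compressed_size_bytes(model: torch.nn.Module) -> int:
    buf = io.BytesIO()
    torch.save(model.state_dict(), buf)
    # Level 22 is zstd's maximum compression level.
    return len(zstandard.ZstdCompressor(level=22).compress(buf.getvalue()))

# Example: enforce the 16MB budget mentioned at the end of this message.
# assert compressed_size_bytes(model) <= 16 * 2**20
```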

Added two new ideas on top:
- TrigramHashEmbedding (4096 buckets, 32-dim): captures 3-token local
  patterns beyond bigram. Adds ~147K params (~60-80KB compressed); see the
  sketch after this list.
- Progressive QAT (int5/int6 STE fake-quantize): applied from step 0
  via CastedLinear.qat_clip to avoid a costly torch.compile recompile.
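A minimal sketch of the trigram idea (the real TrigramHashEmbedding is not
shown in this PR; the hash function and the output projection below are
assumptions). Note that 4096 × 32 = 131,072 table params; the quoted ~147K
would be consistent with an added 32 → 512 projection (16,384 params) if the
model dim is 512.

```python
import torch
import torch.nn as nn

class TrigramHashEmbedding(nn.Module):
    """Hash each 3-token window (t-2, t-1, t) into a small bucket table."""

    def __init__(self, num_buckets: int = 4096, dim: int = 32, d_model: int = 512):
        super().__init__()
        self.num_buckets = num_buckets
        self.table = nn.Embedding(num_buckets, dim)
        self.proj = nn.Linear(dim, d_model, bias=False)  # assumed projection

    def forward(self, ids: torch.Tensor) -> torch.Tensor:
        # ids: (batch, seq) token ids. Shift to get the two predecessors,
        # zeroing positions that have no full trigram.
        p1 = torch.roll(ids, 1, dims=1)
        p1[:, :1] = 0
        p2 = torch.roll(ids, 2, dims=1)
        p2[:, :2] = 0
        # Cheap multiplicative hash of the triple into [0, num_buckets).
        h = ((p2 * 1000003 + p1) * 1000003 + ids) % self.num_buckets
        return self.proj(self.table(h))  # (batch, seq, d_model)
```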

Experiment #2 (trigram + QAT enabled at 70% of wallclock) scored 1.1630,
worse than baseline, because the torch.compile recompile triggered when QAT
activated cost ~130s (22% of the 600s budget). Fixed by moving QAT to the
start of training.
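A hedged sketch of the int-k STE fake-quantize (the actual
CastedLinear.qat_clip is not shown here; this is the textbook
straight-through estimator). Applying it from step 0 means the fake-quantize
ops are in the compiled graph from the first trace, so torch.compile never
sees the graph change mid-run:

```python
import torch

def fake_quantize(w: torch.Tensor, bits: int) -> torch.Tensor:
    """Symmetric per-tensor fake quantization with a straight-through estimator."""
    qmax = 2 ** (bits - 1) - 1  # e.g. 15 for int5, 31 for int6
    scale = w.detach().abs().max().clamp(min=1e-8) / qmax
    q = (w / scale).round().clamp(-qmax - 1, qmax) * scale
    # STE: the forward pass uses the quantized weights, while the backward
    # pass treats round/clamp as identity so gradients flow to w unchanged.
    return w + (q - w).detach()
```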

Other changes:
- run_modal.py: migrated from the deprecated modal.Mount to
  Image.add_local_dir (migration sketch below), and replaced the sys.exit(0)
  that produced a traceback with a RuntimeError raised only on failure.
- research/IDEAS.md: full research log with 11 ranked ideas and
  experiment tracking table.
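A rough sketch of the Modal migration (the image contents, paths, and app
name are placeholders, not the repo's actual setup):

```python
import modal

# Before (deprecated API):
#   mount = modal.Mount.from_local_dir(".", remote_path="/root/app")
#   passed to the function via mounts=[mount]

# After: attach the local source tree to the image itself.
image = (
    modal.Image.debian_slim()
    .pip_install("torch")  # placeholder dependency
    .add_local_dir(".", remote_path="/root/app")
)

app = modal.App("train-gpt", image=image)
```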

Next: run #3 with QAT-from-start + trigram to test without the recompile
penalty, then do a per-layer bitwidth search (sketched below) to squeeze more
capacity into the 16MB budget.
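For the bitwidth search, one illustrative approach (assumed, not implemented;
this is future work per the note above) is a greedy pass that upgrades layers
from int5 to int6 while the raw weight bytes stay under budget:

```python
def weight_bytes(param_counts: dict[str, int], bits: dict[str, int]) -> float:
    # Pre-compression size estimate; zstd-22 will shrink this further.
    return sum(n * bits[name] / 8 for name, n in param_counts.items())

def greedy_bitwidths(param_counts: dict[str, int], budget: int = 16 * 2**20) -> dict[str, int]:
    bits = {name: 5 for name in param_counts}  # start everything at int5
    for name in sorted(param_counts, key=param_counts.get):  # cheapest upgrades first
        bits[name] = 6
        if weight_bytes(param_counts, bits) > budget:
            bits[name] = 5  # revert if it blows the budget
    return bits
```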

Made-with: Cursor
