Learn and Burn

Weekly updates on advances in learnings of the machine variety.
Measuring a model's understanding — starting with path-finding
[Paper: Evaluating the World Model Implicit in a Generative Model]
Nov 26, 2024 • Unbox Research
Making LLMs scalable by replacing weights with learnable tokens
[Paper: Tokenformer: Rethinking Transformer Scaling with Tokenized Model Parameters]
Nov 19, 2024 • Unbox Research
Image generation for infinite games
[Paper: Unbounded: A Generative Infinite Game of Character Life Simulation]
Nov 10, 2024 • Unbox Research
Do LLMs rely on data contamination to solve math problems?
[Paper: GSM-Symbolic: Understanding the Limitations of Mathematical Reasoning in Large Language Models]
Nov 2, 2024 • Unbox Research
Running an LLM on a small customizable chip
[Paper: LlamaF: An Efficient Llama2 Architecture Accelerator on Embedded FPGAs]
Oct 30, 2024 • Unbox Research
Better language models with negative attention
[Paper: Differential Transformer]
Oct 18, 2024 • Unbox Research
A serious look at the future of AI medical advice
[Paper: A Preliminary Study of o1 in Medicine: Are We Closer to an AI Doctor?]
Oct 12, 2024 • Unbox Research
LLMs have original, research-worthy ideas
[Paper: Can LLMs Generate Novel Research Ideas? A Large-Scale Human Study with 100+ NLP Researchers]
Oct 8, 2024 • Unbox Research