Skip to content

Refactor model with BioDNA and FractalLinear classes :))#16

Closed
irroder wants to merge 4 commits intoopenai:mainfrom
irroder:main
Closed

Refactor model with BioDNA and FractalLinear classes :))#16
irroder wants to merge 4 commits intoopenai:mainfrom
irroder:main

Conversation

@irroder
Copy link
Copy Markdown

@irroder irroder commented Mar 18, 2026

Forget about quantization. Forget about pruning. While everyone else is busy squeezing static matrices, I’ve implemented a Bio-Replicative Architecture that treats the 16MB limit not as a cage, but as a seed. This PR replaces the entire concept of "storing weights" with a centralized HyperNetwork Genotype. We don't ship the model, we ship the DNA and the instructions on how to grow it :))

Dynamic Morphogenesis: Every single Linear layer in this model is a ghost. It has no weights of its own. It queries the central BioDNA module to procedurally generate its own parameters on the fly based on its layer depth and function. Zero-Materialization & Associativity Hack: Most people think HyperNetworks are slow. They’re wrong. By exploiting matrix associativity - computing (X @ c) @ r.T instead of the full X @ (c @ r.T) - I’ve bypassed the need to ever create large weight matrices in VRAM. We get the representational power of a massive model with the VRAM footprint of a toy. We just slashed FLOPs by ~96% while maintaining theoretical parameter density. Infinite sxaling: architecture decouples the disk footprint from the model's actual capacity. My genotype is currently under 10MB, yet it can "grow" a model of virtually any width or depth. I’ve effectively solved the 16MB constraint.

irroder and others added 3 commits March 18, 2026 23:19
Refactor forward method to use chromosomes for weight calculation.
Updated device type in autocast to 'cpu' and modified weight calculation in the BioDNA class to return scaled chromosomes. Removed unnecessary CUDA synchronization calls and adjusted logging for compatibility.
@irroder
Copy link
Copy Markdown
Author

irroder commented Mar 18, 2026

op op op

@0hq 0hq closed this Mar 19, 2026
mrdavtan added a commit to mrdavtan/parameter-golf that referenced this pull request Mar 21, 2026
- Restored from qat-sliding-window branch (was never merged forward)
- Updated SWA: v2 result was +0.0004 (no effect), now superseded by EMA
- Updated Moonshot: added v2 flat-loops result (5.58), scale argument
- Added Finding openai#15: Int5 catastrophic (gap 15x worse than int6)
- Added Finding openai#16: optimizer bug (SmearGate + BigramHash frozen in all prior runs)
- Added Finding openai#17: 11L step-count trap (83ms/step = 40% fewer steps)
- Added Finding openai#18: FA2 positive for step time, no quality effect
- Added Findings openai#19-22: XSA, EMA, TTT, NTK-RoPE (implemented, results pending)
- Updated 'tested by others' section with our implementation status
- Added meta-lessons: optimizer coverage, layer cost, merge window strategy
gb250e referenced this pull request in gb250e/parameter-golf Mar 21, 2026
Sign up for free to join this conversation on GitHub. Already have an account? Sign in to comment

Projects

None yet

Development

Successfully merging this pull request may close these issues.

2 participants