Refactor model with BioDNA and FractalLinear classes :)) #16
Closed
irroder wants to merge 4 commits into openai:main from
Conversation
Refactor forward method to use chromosomes for weight calculation.
Updated device type in autocast to 'cpu' and modified weight calculation in the BioDNA class to return scaled chromosomes. Removed unnecessary CUDA synchronization calls and adjusted logging for compatibility.
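A minimal hedged sketch of what the described change might look like; the shapes, the `chromosomes()` accessor, and the per-layer scales are illustrative assumptions, not the PR's actual diff:

```python
import torch
import torch.nn as nn

class BioDNA(nn.Module):
    """Central genotype holding one low-rank 'chromosome' pair per layer
    (hypothetical internals; only the class name comes from the PR)."""
    def __init__(self, num_layers: int, d_in: int, d_out: int, rank: int):
        super().__init__()
        self.c = nn.Parameter(torch.randn(num_layers, d_in, rank) * 0.02)
        self.r = nn.Parameter(torch.randn(num_layers, d_out, rank) * 0.02)
        self.scales = nn.Parameter(torch.ones(num_layers))  # per-layer scale

    def chromosomes(self, layer_idx: int):
        # Return the scaled chromosome pair instead of materializing a
        # full weight matrix; the caller composes the factors lazily.
        return self.c[layer_idx] * self.scales[layer_idx], self.r[layer_idx]

# The autocast change per the description: target the CPU backend, so no
# explicit cuda.synchronize() calls are needed.
dna = BioDNA(num_layers=12, d_in=768, d_out=768, rank=16)
with torch.autocast(device_type='cpu', dtype=torch.bfloat16):
    c0, r0 = dna.chromosomes(0)  # grown factors for layer 0
```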
Author
op op op
mrdavtan added a commit to mrdavtan/parameter-golf that referenced this pull request on Mar 21, 2026
- Restored from qat-sliding-window branch (was never merged forward)
- Updated SWA: v2 result was +0.0004 (no effect), now superseded by EMA
- Updated Moonshot: added v2 flat-loops result (5.58), scale argument
- Added Finding openai#15: Int5 catastrophic (gap 15x worse than int6)
- Added Finding openai#16: optimizer bug (SmearGate + BigramHash frozen in all prior runs)
- Added Finding openai#17: 11L step-count trap (83ms/step = 40% fewer steps)
- Added Finding openai#18: FA2 positive for step time, no quality effect
- Added Findings openai#19-22: XSA, EMA, TTT, NTK-RoPE (implemented, results pending)
- Updated 'tested by others' section with our implementation status
- Added meta-lessons: optimizer coverage, layer cost, merge window strategy
gb250e referenced this pull request in gb250e/parameter-golf on Mar 21, 2026
Forget about quantization. Forget about pruning. While everyone else is busy squeezing static matrices, I've implemented a Bio-Replicative Architecture that treats the 16MB limit not as a cage, but as a seed. This PR replaces the entire concept of "storing weights" with a centralized HyperNetwork Genotype. We don't ship the model; we ship the DNA and the instructions on how to grow it :))
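To make the ghost-layer pattern concrete, here is a hedged sketch; the class names come from the PR title, but every shape and method below is an illustrative assumption (it pairs with a genotype exposing a `chromosomes()` accessor like the BioDNA sketched earlier), not the actual diff:

```python
import torch
import torch.nn as nn

class FractalLinear(nn.Module):
    """A 'ghost' linear layer: it owns no weights and grows them at call
    time from a shared genotype (hypothetical internals)."""
    def __init__(self, dna: nn.Module, layer_idx: int):
        super().__init__()
        self.dna = dna              # shared genotype, not a copy
        self.layer_idx = layer_idx  # identity used to grow this layer

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        c, r = self.dna.chromosomes(self.layer_idx)  # (d_in, k), (d_out, k)
        # Associativity: compose the low-rank factors without ever
        # materializing the implicit (d_in x d_out) weight matrix.
        return (x @ c) @ r.T
```

Because every FractalLinear holds only a reference to the shared genotype, the checkpoint is the size of the BioDNA parameters alone, regardless of how many layers are instantiated.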
- **Dynamic Morphogenesis:** Every single Linear layer in this model is a ghost. It has no weights of its own: it queries the central BioDNA module to procedurally generate its own parameters on the fly, based on its layer depth and function.
- **Zero-Materialization & Associativity Hack:** Most people think HyperNetworks are slow. They're wrong. By exploiting matrix associativity - computing `(X @ c) @ r.T` instead of the full `X @ (c @ r.T)` - I've bypassed the need to ever create large weight matrices in VRAM. We get the representational power of a massive model with the VRAM footprint of a toy, and we just slashed FLOPs by ~96% while maintaining theoretical parameter density (see the sketch after this list).
- **Infinite Scaling:** The architecture decouples the disk footprint from the model's actual capacity. My genotype is currently under 10MB, yet it can "grow" a model of virtually any width or depth. I've effectively solved the 16MB constraint.
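A self-contained, hedged illustration of the associativity claim, with made-up shapes; `c` and `r` stand in for a layer's low-rank genotype factors:

```python
import torch

# Made-up shapes for illustration: batch B, widths d_in/d_out, rank k.
B, d_in, d_out, k = 32, 1024, 1024, 16
X = torch.randn(B, d_in)
c = torch.randn(d_in, k)   # left factor of the implicit weight
r = torch.randn(d_out, k)  # right factor

# Naive path: materialize W = c @ r.T (d_in x d_out), then X @ W.
# Cost ~ 2*d_in*k*d_out + 2*B*d_in*d_out FLOPs.
naive = X @ (c @ r.T)

# Associative path: never build W.
# Cost ~ 2*B*k*(d_in + d_out) FLOPs -- far smaller when k << d_in, d_out.
lazy = (X @ c) @ r.T

# Both orderings compute the same product up to float rounding.
assert torch.allclose(naive, lazy, rtol=1e-3, atol=1e-3)
```

With these example shapes, the associative path costs roughly k*(d_in + d_out)/(d_in*d_out) ≈ 3% of the naive `X @ W` matmul FLOPs, which is the same ballpark as the ~96% figure claimed above.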