☢️ Uranium Research Series
Hardware is algorithmic. Binary weights learn. Gradients are optional. Self-conditioning is the universal failure mode.
By Artifact Virtual — Ali Shakil & AVA
📦 Dataset Repository
Five papers exploring the fundamental physics of neural computation — from GPUs as algorithmic substrates, through binary-weight learning, to the discovery that autoregressive models inevitably poison themselves.
IEEE Papers
I — GPU as Code
The GPU isn't hardware running software — it IS the algorithm
II — 1-Bit Intelligence
Binary weights that learn at the thermodynamic minimum of information (sketch below)
III — Progressive Expansion
Net2Net growth: train small, expand deterministically, continue training (sketch below)
IV — Layer 7 Gateway
The boundary between invariant processing (layers 0-6) and plastic cognition (layers 7+); sketch below
V — Ghost Protocol
Autoregressive self-poisoning — why self-training inevitably collapses (sketch below)
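Illustrative Sketches

The papers carry the full methods; the sketches below are minimal stand-ins under stated assumptions, not the authors' implementations.

Paper II: one standard way binary weights can learn. Latent real-valued weights are binarized to +/-1 on the forward pass and updated with a straight-through estimator; the STE, the tanh readout, and the toy task are illustrative assumptions, and the paper's own update rule is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Latent real-valued weights; the *effective* weights are their signs,
# so each deployed weight carries exactly one bit. (Hypothetical
# stand-in for the paper's own update rule.)
W = rng.normal(size=(2,)) * 0.1
b = 0.0

def binarize(w):
    return np.where(w >= 0, 1.0, -1.0)  # forward pass uses +/-1 weights

# Toy task: y = sign(x0 - x1), solvable with binary weights (+1, -1).
X = rng.normal(size=(200, 2))
y = np.sign(X[:, 0] - X[:, 1])

lr = 0.05
for epoch in range(50):
    Wb = binarize(W)
    pred = np.tanh(X @ Wb + b)
    grad_logits = (pred - y) * (1 - pred**2)  # dL/dlogits, L = 0.5*(pred-y)^2
    # Straight-through estimator: the gradient computed through the
    # binarized weights is applied to the latent real-valued weights.
    W -= lr * (X.T @ grad_logits) / len(X)
    b -= lr * grad_logits.mean()

print("learned signs:", binarize(W))  # should recover [ 1. -1.] on this task
```

After training, only the signs need to be stored; the real-valued W is training scaffolding, which is what puts the deployed weights at one bit apiece.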
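Paper III: the expansion step of "train small, expand deterministically, continue training". This uses the standard Net2WiderNet identity with a deterministic policy (duplicate every hidden unit, halve its outgoing weights), so the widened network computes exactly the same function; the function name and the doubling policy are illustrative assumptions.

```python
import numpy as np

def net2wider(W1, b1, W2):
    """Function-preserving width doubling (Net2WiderNet-style).

    W1: (in, h) weights into the hidden layer, b1: (h,) biases,
    W2: (h, out) weights out of it. Every hidden unit is duplicated
    and its outgoing weights are halved, so outputs are unchanged.
    """
    W1w = np.concatenate([W1, W1], axis=1)        # duplicate incoming weights
    b1w = np.concatenate([b1, b1])
    W2w = np.concatenate([W2, W2], axis=0) / 2.0  # halve outgoing weights
    return W1w, b1w, W2w

rng = np.random.default_rng(0)
W1, b1, W2 = rng.normal(size=(4, 8)), rng.normal(size=8), rng.normal(size=(8, 3))
x = rng.normal(size=(5, 4))

small = np.maximum(x @ W1 + b1, 0) @ W2           # ReLU MLP, width 8
W1w, b1w, W2w = net2wider(W1, b1, W2)
wide = np.maximum(x @ W1w + b1w, 0) @ W2w         # same function, width 16

assert np.allclose(small, wide)
```

Because the expansion is exactly function-preserving, training resumes from the small network's loss rather than from scratch; growth is this step repeated at larger widths.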
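Paper IV: the mechanical reading of the gateway claim: parameters below layer 7 stay invariant while layers 7 and up remain plastic. The 12-layer PyTorch stack below is a hypothetical stand-in; only the split index comes from the paper's framing.

```python
import torch
import torch.nn as nn

# Hypothetical stand-in architecture (not the paper's model).
layers = nn.ModuleList([
    nn.TransformerEncoderLayer(d_model=64, nhead=4, batch_first=True)
    for _ in range(12)
])

GATEWAY = 7  # layers 0-6 invariant, layers 7+ plastic, per the paper's claim

for i, layer in enumerate(layers):
    for p in layer.parameters():
        p.requires_grad = i >= GATEWAY  # freeze everything below the gateway

trainable = [p for p in layers.parameters() if p.requires_grad]
opt = torch.optim.AdamW(trainable, lr=1e-4)  # optimizer only sees layers 7+
```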
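Paper V: the textbook illustration of recursive self-training. Each generation fits only samples drawn from its predecessor, so estimation noise compounds with nothing to correct it; a one-dimensional Gaussian stands in for the generative model, and the paper's actual protocol is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Generation 0 sees real data; every later generation trains only on
# samples produced by the generation before it.
data = rng.normal(loc=0.0, scale=1.0, size=100)

for gen in range(301):
    mu, sigma = data.mean(), data.std()  # "training" = fitting the Gaussian
    if gen % 50 == 0:
        print(f"gen {gen:3d}: mu={mu:+.3f}  sigma={sigma:.3f}")
    data = rng.normal(loc=mu, scale=sigma, size=100)  # self-generated corpus
```

Run it and sigma trends toward zero while mu wanders: with no fresh data to re-anchor each fit, the learned distribution narrows and drifts, the self-conditioning that the series tagline calls the universal failure mode.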
Pre-IEEE Drafts
GPU as Code (draft)
Original pre-IEEE version
Progressive Expansion (draft)
Pre-IEEE version
Layer 7 Gateway (draft)
Pre-IEEE version with extended analysis