Look, I've been obsessed with self-replicating systems ever since I read about von Neumann probes in computer science. But here's what really drives me crazy: **Why do we have to beg Big Tech for permission to build our own infrastructure?**
Every startup, every side project, every brilliant idea - they all end up having the same conversation with AWS, Google Cloud, or Azure. We've created a world where innovation requires asking permission from trillion-dollar gatekeepers who can change their terms, raise prices, or shut you down at will.
What if we could use the power of open source to create our own world instead?
I got excited when I realized my NixOS configuration doesn't just manage one system - it can recursively build ISOs that contain the blueprint to create more systems. It's like digital DNA that propagates itself across infrastructure.
## Quick Primer: What Are von Neumann Probes?
If you're not familiar with von Neumann probes (and that's totally fine!), here's the concept that blew my mind when I first learned about it:
In the 1940s, mathematician John von Neumann theorized about machines that could:
- **Self-replicate** - Build exact copies of themselves
- **Explore** - Travel to new locations or environments
- **Utilize resources** - Use local materials to construct more probes
- **Exponentially expand** - Each probe creates multiple new probes
Originally conceived for space exploration (imagine a single probe that could explore an entire galaxy by replicating itself!), the concept applies perfectly to infrastructure. Instead of exploring space, my NixOS probes explore networks and deploy themselves across hardware.
The key insight: If you can encode both the instructions AND the ability to replicate those instructions, you create a system that can propagate infinitely. That's exactly what I've done with NixOS configurations.
## What I Actually Built

My NixOS flake includes two types of "von Neumann probe" configurations:

- **Basic Safe Installer** - Manual partitioning, educational process
- **Advanced Disko Installer** - Fully automated, immediate deployment
But here's where it gets wild: each installed system gets a copy of the complete flake source and the ability to build more ISOs. It's recursive infrastructure that can spawn more infrastructure.
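The recursive part is only a few lines of module code. Here's a simplified sketch of the shape - illustrative, not my exact module; the `/etc` path and script body are placeholders:

```nix
# Sketch of an "embed the blueprint" module - illustrative, not my exact code.
{ pkgs, ... }:
{
  # Ship the complete flake source with every installed system,
  # so each probe carries its own blueprint.
  environment.etc."nixos-probe/flake".source = ./.;

  # A convenience command that builds a fresh installer ISO
  # from the embedded flake source.
  environment.systemPackages = [
    (pkgs.writeShellScriptBin "create-installer-iso" ''
      set -eu
      nix build /etc/nixos-probe/flake#packages.x86_64-linux.installer-iso
      echo "New probe ISO available under ./result/iso/"
    '')
  ];
}
```

Because the flake source itself includes this module, every system built from the ISO inherits the same ability - that's the recursion.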
## Why I'm Obsessed With This Approach

- **Exponential Deployment** - One seed system can create dozens of others
- **Genetic Evolution** - Configs can mutate and improve across generations
- **Self-Healing Networks** - Systems can rebuild themselves and their neighbors
- **Disaster Recovery** - Any surviving system can recreate the entire infrastructure
- **Edge Deployment** - Perfect for disconnected environments
## How It Works: The Recursive Magic
My flake.nix defines both the target systems AND the installer systems:
```nix
nixosConfigurations = {
  # The "probe" installer systems
  installer = nixpkgs.lib.nixosSystem {
    modules = [ ./systems/basic-installer.nix ];
  };

  # Target systems that can spawn more probes
  nixpad = nixpkgs.lib.nixosSystem {
    modules = [
      ./systems/nixpad.nix
      ./nixos/iso-builder-safe.nix  # The magic happens here
    ];
  };
};

packages.x86_64-linux = {
  # ISOs that contain the full flake
  installer-iso = nixos-generators.nixosGenerate {
    modules = [ ./systems/basic-installer.nix ];
    format = "iso";
  };
};
```
Every deployed system includes this shell command that creates new "offspring":
```bash
create-installer-iso
# Builds:   nix build .#packages.x86_64-linux.installer-iso
# Creates:  nixos-installer.iso ready for deployment
# Contains: complete flake source + all configurations
```
## The Missing Piece: Distributed Binary Caches
Here's what makes this truly powerful - each probe can serve as a binary cache for other probes. My config includes distributed build and cache infrastructure:
```nix
# Each system can serve cached packages
services.nix-serve.enable = true;

# Probes can build across the network
nix.buildMachines = [{
  hostName = "nixbase.hedgehog-augmented.ts.net";
  maxJobs = 32;
  speedFactor = 3;
  supportedFeatures = [ "kvm" "big-parallel" ];
}];
nix.distributedBuilds = true;
nix.extraOptions = "builders-use-substitutes = true";
```
This means:
- **No rebuilding** - Probes share compiled packages across the network
- **Faster deployment** - New systems download binaries instead of compiling
- **Resource pooling** - Powerful systems build for weaker ones
- **Offline capability** - Each probe can cache packages for disconnected deployments
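On the consuming side, a new probe just needs to trust a peer's cache. Here's a sketch - the hostname is from my Tailscale network, but the public key is a placeholder you'd replace with the one from the key pair you generate for nix-serve (which serves on port 5000 by default):

```nix
# Sketch of the consumer side - the peer's public key is a placeholder.
{
  nix.settings = {
    # Ask a peer probe for binaries before building locally;
    # fall back to the official cache.
    substituters = [
      "http://nixbase.hedgehog-augmented.ts.net:5000"
      "https://cache.nixos.org"
    ];
    trusted-public-keys = [
      "nixbase:REPLACE-WITH-PEER-PUBLIC-KEY"
      "cache.nixos.org-1:6NCHdD59X431o0gWypbMrAURkbJ16ZPMQFGspcDShjY="
    ];
  };
}
```

Substituters are tried in order, so a fresh probe pulls from the nearest peer first and only falls back to the public cache (or a local build) when the peer doesn't have the path.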
## Real-World Use Cases I Discovered

### 1. Edge Computing Deployment

Deploy one system to a remote location. It can then spawn specialized configs for local infrastructure without needing external connectivity.

### 2. Disaster Recovery Networks

Any surviving system in your fleet can recreate the entire infrastructure from its embedded DNA. No external dependencies, no single points of failure.

### 3. Development Environment Propagation

Developers can create exact replicas of production environments, then evolve and test changes before merging back to the main config.

### 4. Automated Lab Provisioning

Perfect for testing environments - spin up dozens of specialized configs for load testing, then tear them down. Takes about 10 minutes per system.
## The Safety Implementation

To prevent accidental data destruction, my disko installer uses a triple-confirmation system:
```text
⚠️ CRITICAL WARNING: DESTRUCTIVE OPERATION AHEAD

Current contents of /dev/sda that will be DESTROYED:

SAFETY CHECK 1/3: Confirm disk selection
Type the EXACT disk path to confirm: /dev/sda

SAFETY CHECK 2/3: Confirm destructive operation
Type 'DESTROY' (in capital letters) to confirm: DESTROY

SAFETY CHECK 3/3: Final confirmation
Final confirmation - proceed? (yes/no): yes

Starting in 5 seconds... Press Ctrl+C to abort
```
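Under the hood this is just a loop of shell prompts. Here's a simplified sketch of the pattern, packaged the same way I ship other commands - the command name and script body are illustrative, not my exact installer code:

```nix
# Sketch of the triple-confirmation pattern - simplified, not the real installer.
pkgs.writeShellScriptBin "confirm-destroy" ''
  set -eu
  disk="$1"

  echo "⚠️  CRITICAL WARNING: DESTRUCTIVE OPERATION AHEAD"
  lsblk "$disk"

  read -r -p "SAFETY CHECK 1/3 - type the EXACT disk path: " answer
  [ "$answer" = "$disk" ] || { echo "Mismatch, aborting."; exit 1; }

  read -r -p "SAFETY CHECK 2/3 - type DESTROY to confirm: " answer
  [ "$answer" = "DESTROY" ] || { echo "Not confirmed, aborting."; exit 1; }

  read -r -p "SAFETY CHECK 3/3 - proceed? (yes/no): " answer
  [ "$answer" = "yes" ] || { echo "Aborting."; exit 1; }

  echo "Starting in 5 seconds... Press Ctrl+C to abort"
  sleep 5
''
```

Requiring the exact disk path (rather than a y/n) is the important part: it forces you to re-read what you're about to wipe.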
No disko? Just use the basic installer with manual partitioning. It's slower but educational, and nothing destructive happens until you run the commands yourself.
## Why This Matters: True Self-Propagating Infrastructure
We're moving toward infrastructure-as-code, but most solutions still require external orchestration. NixOS von Neumann probes are different - they're self-contained, self-propagating infrastructure that carries both deployment instructions AND optimized binaries.
This matters because:
- **Resilience** - Infrastructure that can rebuild itself and its neighbors
- **Efficiency** - Distributed caches eliminate redundant compilation
- **Autonomy** - No external dependencies or orchestration layers
- **Reproducibility** - Bit-for-bit identical deployments across the network
- **Evolution** - Configs can improve and spread across generations
- **Resource Sharing** - Powerful systems build for resource-constrained ones
## The Real Problem This Solves: Breaking Free from Big Tech
Here's the vision that keeps me up at night: **What if we never had to talk to Big Tech again?**
Right now, every innovative project follows the same depressing pattern:
- Great idea ✅
- Build MVP ✅
- Need infrastructure... 😰
- Negotiate with AWS/Google/Azure 💸
- Accept their terms, pricing, and control 😞
- Hope they don't change the rules 🤞
But von Neumann probes flip this completely. Imagine instead:
- Great idea ✅
- Deploy first probe to any spare hardware ✅
- Probe spawns infrastructure as needed ✅
- Community contributes resources, earns rewards ✅
- No permission needed, no gatekeepers 🎉
## The Open Source Infrastructure Revolution
We're not just building self-replicating servers - we're building **economic independence**. A world where:
- Your neighbor's spare laptop can host your startup's backend
- Community-owned data centers compete with corporate clouds
- Developers earn tokens by contributing compute resources
- Small businesses get enterprise-grade infrastructure without enterprise prices
- Innovation happens without asking permission from trillion-dollar gatekeepers
Every probe that deploys is a vote against centralized control. Every cache that shares resources is wealth flowing to individuals instead of corporations. Every new configuration that spreads is innovation that belongs to the commons, not shareholders.
## The Next Evolution: Blockchain-Governed Probes
Here's where this gets really wild - imagine attaching the probe configs to a ledger system. The blockchain controls who can edit and deploy new probe generations:
```nix
# Hypothetical blockchain-governed flake
{
  inputs.governance-chain.url = "github:your-org/dao-governed-configs";

  # Only signed commits from approved governance keys can update
  outputs = { governance-chain, ... }:
    governance-chain.lib.validateAndBuild {
      proposalId = "INFRA-2025-001";
      requiredSignatures = 3;  # Multi-sig governance
      emergencyOverride = false;
    };
}
```
This creates:
- **Decentralized Infrastructure Governance** - No single admin can compromise the fleet
- **Immutable Audit Trail** - Every config change is cryptographically signed and recorded
- **Democratic Evolution** - Stakeholders vote on infrastructure changes
- **Emergency Rollback** - Consensus-based reversion to known-good states
- **Permissioned Replication** - Only authorized keys can spawn new probes
- **Economic Incentives** - Reward nodes that provide compute/storage resources
### Real-World Scenarios

**DAO Infrastructure:** A decentralized organization runs infrastructure where config changes require governance-token-holder votes. Probes automatically enforce the consensus.

**Multi-Tenant Cloud:** Different teams deploy probes, but all changes go through blockchain governance. No team can accidentally break another's infrastructure.

**Critical Infrastructure:** Power grids and communications networks, where any config change requires cryptographic proof from multiple authorized entities.
The blockchain becomes the "DNA authorization system" - determining which genetic modifications are allowed to propagate through the infrastructure ecosystem.
## Ready to Build Your Own von Neumann Fleet?
Start with these commands in any NixOS flake:
```nix
# Add to your flake.nix
nixosConfigurations.installer = nixpkgs.lib.nixosSystem {
  modules = [ ./installer.nix ];
};

packages.x86_64-linux.installer-iso = nixos-generators.nixosGenerate {
  modules = [ ./installer.nix ];
  format = "iso";
};
```
Then create your first probe:
```bash
nix build .#packages.x86_64-linux.installer-iso
cp result/iso/*.iso von-neumann-probe.iso
```
Each system you deploy from that ISO can create more ISOs. It's recursive infrastructure all the way down.
The future of infrastructure isn't just declarative or self-replicating - it's **democratically governed, cryptographically secured, and economically incentivized**. We're talking about infrastructure that:
- Replicates itself autonomously
- Shares resources efficiently through distributed caches
- Evolves through democratic consensus
- Enforces security through cryptographic proof
- Rewards participants for contributing resources
With NixOS providing the technical foundation and blockchain providing the governance layer, we can build infrastructure that truly governs itself. The probes don't just carry code - they carry the social contract of how that code can evolve.
This isn't just about better infrastructure. This is about **economic liberation**.
Every developer who deploys a probe instead of paying AWS is keeping wealth in the community. Every startup that launches on community hardware instead of corporate clouds is a victory for decentralization. Every innovation that spreads through open source probes instead of proprietary platforms is freedom winning over control.
We have the technology right now to build a world where innovation doesn't require permission from trillion-dollar gatekeepers. Where infrastructure belongs to the people who build and maintain it. Where your brilliant idea can come to life without asking Big Tech for permission.
The question isn't whether we can build this world. The question is: **Are we brave enough to choose it?**
Content on this blog was created using human and AI-assisted workflows described here. Original ideas and editorial decisions by Justin Quaintance.