Overview
Simplex is a programming language designed for the AI era. It combines the best ideas from Erlang, Rust, Ray, and Unison to create a language that is AI-native, distributed-first, fault-tolerant, and resumable.
At its core, Simplex is built around the concept of Cognitive Hive AI (CHAI)—an architecture where specialists collaborate within hives, sharing a single Small Language Model (SLM) and collective memory through the HiveMnemonic. Each specialist has an Anima—a cognitive core with memory, beliefs, and intentions.
Primary Goals
Simplex is designed with five primary goals:
- AI-Native: AI operations are first-class language constructs, not external API calls
- Distributed-First: Programs naturally decompose across VM swarms
- Fault-Tolerant: Any worker can die; the system continues
- Resumable: Computations checkpoint and resume transparently
- Lightweight Syntax: Simple, readable code compiling to efficient bytecode
Core Philosophy
Let It Crash
Borrowed from Erlang, this philosophy embraces failure as normal. Instead of trying to prevent all errors, Simplex provides supervision trees that automatically restart failed actors. This leads to more robust systems that can recover gracefully from unexpected conditions.
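As a sketch of how this might look (the supervisor form, strategy name, and fields here are illustrative assumptions, not confirmed Simplex syntax):

supervisor Pipeline {
    strategy: OneForOne,                  // Restart only the failed actor
    max_restarts: 5,                      // Escalate after repeated failures
    children: [Fetcher, Parser, Writer],  // Supervised actors
}

One-for-one restarts mirror Erlang/OTP: a crash in Parser restarts Parser alone, while Fetcher and Writer keep running.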
Ownership Without GC
Inspired by Rust, Simplex uses ownership-based memory management. Every value has exactly one owner, and when that owner goes out of scope, the value is automatically deallocated. This provides deterministic memory behavior without garbage collection pauses.
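A minimal sketch of how this reads in practice, assuming hypothetical load and process helpers:

fn summarize() {
    let doc = load("report.txt")  // doc owns the loaded value
    process(doc)                  // Ownership moves into process()
    // doc can no longer be used here; the value is freed
    // deterministically when its final owner goes out of scope
}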
Content-Addressed Code
Like Unison, Simplex identifies functions by their SHA-256 hash. This means:
- Same hash = identical behavior = perfect caching
- Trivial serialization across networks (send hash, not code)
- No dependency conflicts
- Seamless distribution across nodes
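As a conceptual illustration (the function is invented for the example; the mechanics are described in comments):

// The compiler stores this definition under its SHA-256 hash,
// and callers reference that hash rather than the name.
fn double(x: i64) -> i64 { x * 2 }
// To run double on a remote node, the swarm sends only the hash;
// a node that already holds a definition with the same hash is
// guaranteed to have identical behavior, so no code is shipped.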
Specialists as Cognitive Units
Simplex v0.5.0 introduced specialists—actors enhanced with cognitive capabilities. Specialists have an Anima (memory, beliefs, intentions), communicate via message passing, checkpoint at message boundaries, and share a per-hive SLM. This makes cognition a natural consequence of the programming model.
The Anima: Cognitive Core
Every specialist in Simplex has an Anima—a cognitive core that provides persistent memory, belief systems, and goal-directed behavior. Unlike stateless LLM calls, the Anima remembers, learns, and evolves.
specialist Analyst {
    // Cognitive core with memory and beliefs
    anima: {
        episodic: { capacity: 500 },   // Past experiences
        semantic: { capacity: 2000 },  // Learned facts
        beliefs: { revision_threshold: 30 },
    },

    fn analyze(doc: Document) -> Analysis {
        // Recall relevant past experiences
        let similar = self.anima.recall_similar(doc, limit: 5)
        // Use beliefs to guide approach
        let strategy = self.anima.beliefs.get("analysis_strategy")
        // Perform inference with memory context
        let result = infer(doc, context: similar)
        // Remember this experience
        self.anima.remember(doc.summary, result)
        result
    }
}
The Anima provides four memory types (a configuration sketch follows this list):
- Episodic: Specific experiences with timestamps and context
- Semantic: Generalized facts extracted from experience
- Procedural: Learned skills and behavioral patterns
- Working: Active context for the current task
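All four could plausibly be configured in the anima block in the style of the episodic and semantic fields shown earlier; the procedural and working entries below are illustrative assumptions:

anima: {
    episodic: { capacity: 500 },    // Past experiences
    semantic: { capacity: 2000 },   // Learned facts
    procedural: { capacity: 200 },  // Hypothetical: learned skills
    working: { capacity: 50 },      // Hypothetical: active task context
},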
Specialists & Hives
Specialists are organized into hives—collaborative groups that share a single SLM and collective memory through the HiveMnemonic.
hive DocumentProcessing {
    // Shared SLM for all specialists (4.1 GB total, not per-specialist)
    slm: "simplex-cognitive-7b",

    // Shared consciousness across all specialists
    mnemonic: {
        episodic: { capacity: 1000 },
        semantic: { capacity: 5000 },
        beliefs: { revision_threshold: 50 },
    },

    // Specialists collaborate within the hive
    specialists: [Extractor, Summarizer, Classifier],
}
Per-Hive SLM Efficiency
In traditional architectures, 10 AI agents would load 10 copies of a 7B model (40+ GB). In Simplex, 10 specialists share one 4.1 GB model through the hive. Memory scales with hives, not specialists.
Neural Gates: Learnable Control Flow
Introduced in v0.6.0, Neural Gates transform traditional conditionals into differentiable decision points. During training, gates return continuous probability values (0.0-1.0) allowing gradients to flow through decision logic. After training, they compile to efficient discrete branches with zero runtime overhead.
How Neural Gates Work
Traditional if-else statements have zero gradient—you can't backpropagate through a hard conditional. Neural Gates use sigmoid relaxation and Gumbel-Softmax to create smooth, differentiable approximations during training:
// Binary neural gate - learns optimal threshold
neural_gate should_escalate(confidence: f64) -> Bool {
    confidence > 0.7  // Threshold optimizes during training
}

// Categorical gate - learns routing decisions
neural_gate route_to_specialist(query: Embedding) -> Specialist {
    match classify(query) {
        Category::Technical => Specialist::Engineer,
        Category::Creative => Specialist::Designer,
        Category::Business => Specialist::Analyst,
    }
}
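In training mode, the binary gate above can be read as the sigmoid relaxation sigmoid((confidence - theta) / tau), where theta is the learnable threshold (initialized at 0.7) and tau is the annealed temperature; categorical gates use Gumbel-Softmax in the same role. A rough sketch of the two lowerings (an assumed reading, not the compiler's literal output):

// Training mode: soft gate, gradients flow into theta and tau
let p = sigmoid((confidence - theta) / tau)  // p in (0.0, 1.0)

// Inference mode: the same gate becomes a plain branch
let decision = confidence > theta            // Bool, zero overhead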
Compilation Modes
The same neural gate compiles differently based on mode:
| Mode | Command | Behavior |
|---|---|---|
| Training | sxc build --mode=train | Soft gates with gradient tracking, temperature annealing |
| Inference | sxc build --mode=infer | Hard branches, zero overhead, production-ready |
| Profile | sxc build --mode=profile | Hard gates with activation statistics for pruning |
Contract Verification
For safety-critical decisions, Neural Gates support contracts with confidence bounds and fallback strategies:
neural_gate is_safe(analysis: SecurityResult) -> Bool
    requires analysis.confidence > 0.95  // Minimum confidence
    ensures result => no_vulnerability   // Postcondition
    fallback safe_default()              // If confidence too low
{
    analysis.passed
}
Programs That Learn Their Own Logic
Neural Gates enable programs that evolve. Instead of manually tuning thresholds and decision boundaries, you train them from data. The program learns its own optimal logic, then compiles to efficient code for production.
Real-Time Continuous Learning
v0.7.0 introduced the simplex-learning library, enabling specialists to learn and adapt at runtime without offline batch training. Each interaction makes your AI smarter.
The OnlineLearner
The core component for real-time adaptation. Configure optimizers, safety constraints, and fallback strategies:
use simplex_learning::{OnlineLearner, StreamingAdam, SafeFallback, MaxLatency};

// Create an online learner with safety constraints
let learner = OnlineLearner::new(model_params)
    .optimizer(StreamingAdam::new(0.001))
    .constraint(MaxLatency(10.0))  // Safety bound
    .fallback(SafeFallback::with_default(safe_output));

// Learn from each interaction
for (input, feedback) in user_interactions {
    let output = learner.forward(&input);
    learner.learn(&feedback);  // Adapts in real-time!
}
Streaming Optimizers
Optimizers designed for single-sample or small-batch updates with gradient accumulation (a usage sketch follows this list):
- StreamingSGD – SGD with momentum and gradient clipping
- StreamingAdam – Adam with configurable accumulation steps
- AdamW – Decoupled weight decay for better generalization
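A usage sketch patterned on the StreamingAdam example above; the momentum and clipping setters are illustrative assumptions:

use simplex_learning::{OnlineLearner, StreamingSGD};

let learner = OnlineLearner::new(model_params)
    .optimizer(StreamingSGD::new(0.01)  // Learning rate
        .momentum(0.9)                  // Hypothetical: momentum term
        .clip_norm(1.0));               // Hypothetical: gradient clipping bound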
Safety Constraints & Fallbacks
Production systems need guardrails. The SafeLearner wrapper ensures learning doesn't destabilize your specialists:
use simplex_learning::safety::{SafeLearner, SafeFallback};

let safe_learner = SafeLearner::new(learner, SafeFallback::last_good())
    .with_validator(|output| output.is_valid())
    .max_failures(3);

// Fallback strategies:
//   Default    - Return predefined safe output
//   LastGood   - Return last successful output
//   Checkpoint - Restore from saved state
//   SkipUpdate - Continue without learning
Federated Learning Across Hives
Coordinate learning across distributed specialists with multiple aggregation strategies:
use simplex_learning::distributed::{FederatedLearner, AggregationStrategy};

let federated = FederatedLearner::new(config)
    .strategy(AggregationStrategy::PerformanceWeighted)
    .min_nodes(3);

// Specialists submit local updates
federated.submit_update(node_id, local_params, metrics);

// Global model improves from collective learning
let global = federated.global_params();
Continuously Evolving Systems
With Neural Gates and Real-Time Learning, Simplex programs don't just run—they evolve. Decision logic optimizes from training data. Specialists adapt from user feedback. Knowledge propagates across hives through federated learning. The entire system becomes smarter over time without manual intervention or redeployment.
Execution Model
Simplex programs run on the Simplex Virtual Machine (SVM), a lightweight runtime designed to run efficiently on small instances (512MB RAM). Multiple SVMs form a swarm that can scale horizontally.
Simplex Source Code
        |
Simplex Compiler (type checking, optimization)
        |
Simplex Bytecode (.sbc files)
        |
Simplex Virtual Machine (SVM)
  |-- Bytecode Executor
  |-- Actor Scheduler
  |-- Checkpoint Manager
  |-- Message Router
  |-- Swarm Client
        |
Swarm Network (multiple SVM nodes)
  |-- Coordinator (Raft consensus)
  |-- Checkpoint Store (S3/Blob)
  |-- AI Inference Pool
Type System
Simplex uses static typing with aggressive type inference. You get the safety of static types while writing code that looks almost dynamically typed:
// Type inference - no annotations needed
let x = 42             // Inferred as i64
let name = "Simplex"   // Inferred as String
let items = [1, 2, 3]  // Inferred as List<i64>

// Explicit types when needed
let config: Map<String, Value> = {}

// Algebraic data types
enum Result<T, E> {
    Ok(T),
    Err(E)
}

// Pattern matching
match result {
    Ok(value) => process(value),
    Err(e) => handle_error(e)
}
First-Class AI Integration
Unlike other languages where AI is bolted on through external APIs, Simplex treats AI operations as native language constructs. The infer() function uses the hive's shared SLM, while memory operations integrate with the Anima:
// Inference using the hive's shared SLM
let response = infer(prompt)
let response = infer(prompt, context: memories)
// Structured extraction with type safety
let invoice = infer<InvoiceData>(document)
// Memory operations through the Anima
self.anima.remember(key, value)
let memories = self.anima.recall_for(query, limit: 5)
// Belief management with confidence scores
self.anima.believe("user_preference", value, confidence: 0.8)
let belief = self.anima.beliefs.get("user_preference")
// Share knowledge with the hive
hive.mnemonic.learn(insight, confidence: 0.75)
Language Influences
| Language/System | What Simplex Borrowed |
|---|---|
| Erlang/OTP | Actor model, supervision trees, "let it crash" philosophy, fault tolerance patterns |
| Rust | Ownership-based memory management, algebraic data types, pattern matching, traits |
| Ray | Distributed computing primitives, actor placement strategies, cluster scaling |
| Unison | Content-addressed code, function hashing, code distribution |
Dual Numbers: Automatic Differentiation
v0.8.0 introduced Dual Numbers as a native language type, enabling forward-mode automatic differentiation with zero overhead. Dual numbers compute exact gradients automatically as your code runs.
let x: dual = dual::variable(3.0);  // value=3, derivative=1
let y = x * x + x.sin();            // Derivatives auto-propagate
println(y.val);  // f(3) = 9.1411...
println(y.der);  // f'(3) = 5.0100... (exact!)
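This works because each operation on duals carries the derivative along by the usual calculus rules:

// (a, a') + (b, b')  =  (a + b,  a' + b')      // Sum rule
// (a, a') * (b, b')  =  (a * b,  a'*b + a*b')  // Product rule
// sin((a, a'))       =  (sin(a), a' * cos(a))  // Chain rule
//
// For y = x * x + x.sin() with x = (3, 1):
//   y.val = 9 + sin(3)    ≈ 9.1411
//   y.der = 2*3 + cos(3)  ≈ 5.0100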
Edge Hive: Local-First AI
v0.9.0 brings Edge Hive—Cognitive Hive AI running on edge devices from smartwatches to desktops. Edge Hive provides local-first intelligence with privacy by default, offline capability, and seamless cloud fallback.
Privacy by Default
With Edge Hive, your data stays on your device. All beliefs, cache, and configuration are encrypted with AES-256-GCM. Only when local confidence is low does Edge Hive escalate to the cloud—and only with minimum necessary context.
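A hypothetical configuration sketch of that policy; every field name and value below is an illustrative assumption, not confirmed Edge Hive syntax:

edge_hive Assistant {
    slm: "simplex-edge-1b",      // Hypothetical on-device model
    encryption: Aes256Gcm,       // Beliefs, cache, and config at rest
    cloud_fallback: {
        min_confidence: 0.6,     // Escalate to cloud below this
        context: Minimal,        // Send only necessary context
    },
}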
Self-Learning Optimization
Also in v0.9.0, Self-Learning Annealing enables hyperparameters that optimize themselves. Instead of manually tuning learning rates and temperature schedules, the system discovers optimal values through meta-gradients.
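A sketch of what this might look like with the simplex-training library; the API below is an illustrative assumption:

use simplex_training::SelfLearningSchedule;

// Hypothetical: the schedule tunes its own hyperparameters through
// meta-gradients instead of a hand-written annealing curve
let schedule = SelfLearningSchedule::new()
    .meta_learning_rate(0.01)
    .targets(["learning_rate", "temperature"]);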
Current Focus: Production Readiness
With Dual Numbers (v0.8.0) and Edge Intelligence (v0.9.0) now delivered, Simplex development is focused on production readiness and v1.0.0 stabilization.
Standard Library
The Simplex standard library now includes:
- simplex-learning - Online learning, federated training, knowledge distillation
- simplex-training - Self-learning schedules, meta-optimization
- simplex-http - Actor-based HTTP servers and WebSocket support
- simplex-inference - Local SLM inference via llama.cpp
- edge-hive - Local-first AI on edge devices
What's Next
The roadmap outlines the path to 1.0:
- v0.10.0 - GPU acceleration for tensor operations
- v1.0.0 - Production release with API stability guarantees