## The Vision
Programming is fundamentally changing. AI is no longer an external service you call - it's becoming a core part of how we build software. Simplex is designed from scratch for this new reality.
Most languages treat AI as an afterthought - HTTP calls to external APIs. Simplex treats AI as a first-class citizen, with language-level primitives for completion, embedding, extraction, and classification.
## Let It Crash
Most programming philosophies try to prevent errors. Simplex embraces them.
Borrowed from Erlang's OTP, the "let it crash" philosophy recognizes that:
- Errors will happen no matter how careful you are
- Recovery is more important than prevention
- Supervision trees provide systematic recovery
- Simpler code results from not handling every edge case
```simplex
supervisor MySystem {
    strategy: OneForOne,    // Only restart the failed child
    max_restarts: 3,        // Max 3 restarts...
    within: 60.seconds,     // ...within 60 seconds
    children: [
        child(DatabaseWorker, restart: Always),
        child(CacheWorker, restart: Transient),
        child(LogWorker, restart: Temporary)
    ]
}
```
## Ownership Without Garbage Collection
Inspired by Rust, Simplex uses ownership-based memory management. Every value has exactly one owner, and when that owner goes out of scope, the value is deallocated.
This provides:
- Deterministic performance - no GC pauses
- Predictable memory usage - know exactly when memory is freed
- Thread safety - ownership prevents data races
- Efficient execution - no runtime garbage collection overhead
```simplex
fn process_data(data: Data) {   // Takes ownership
    // data is now owned by this function
    transform(data)
}   // data is automatically freed here

fn read_data(data: &Data) {     // Borrows immutably
    // Can read but not modify
    print(data.value)
}

fn modify_data(data: &mut Data) {   // Borrows mutably
    // Exclusive access to modify
    data.value = 42
}
```
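Putting the three functions together, a caller might look like this (a sketch: the `Data` struct literal syntax is an assumption, not something this document specifies):

```simplex
fn main() {
    let mut d = Data { value: 1 }   // d owns the value
    read_data(&d)                   // immutable borrow: read-only access
    modify_data(&mut d)             // mutable borrow: exclusive access
    process_data(d)                 // ownership moves into the function
    // d cannot be used here: it was moved and has been freed
}
```

Note the ordering: borrows end before the move, so the compiler can verify at compile time that no reference outlives the owner.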
## Content-Addressed Code
Inspired by Unison, Simplex identifies functions by the SHA-256 hash of their implementation. This seemingly simple idea has profound implications:
- Perfect caching - same hash means identical behavior
- Trivial distribution - send hash, not code
- No dependency hell - no version conflicts possible
- Code as data - functions are values that can be stored and transmitted
### How It Works
When you define a function, Simplex computes its content hash. When another node needs that function, it first checks its cache by hash. If not found, it fetches the bytecode. The same function always has the same hash, regardless of where it was defined.
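The lookup flow described above can be sketched as follows (the `cache`, `network`, `sha256`, and `load` helpers are hypothetical names for illustration, not documented runtime APIs):

```simplex
fn resolve(hash: Hash) -> Function {
    // 1. Check the local cache by content hash
    if let Some(f) = cache.get(hash) {
        return f
    }
    // 2. Not cached: fetch the bytecode from another node
    let bytecode = network.fetch(hash)
    // 3. Verify integrity: same hash means identical behavior
    assert(sha256(bytecode) == hash)
    cache.put(hash, bytecode)
    load(bytecode)
}
```

Because the hash is derived from the implementation itself, a cache hit is always safe: there is no version to check and no way for the cached function to differ from the requested one.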
## Actors as the Distribution Unit
Actors provide a natural model for distributed systems:
- Isolation - actors share nothing; communication is explicit
- Location transparency - same API for local and remote actors
- Checkpointing - state captured at message boundaries
- Migration - actors can move between nodes transparently
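A sketch of what an actor might look like (the `actor`, `receive`, `spawn`, and `send` constructs are assumptions; this document does not pin down actor syntax):

```simplex
actor Counter {
    state count: i64 = 0       // private state, never shared

    receive Increment {
        count += 1             // only this actor can touch its state
    }
    receive Get(reply_to) {
        send(reply_to, count)  // communication is explicit messages
    }
}

// Location transparency: the same calls work whether
// Counter runs on this node or a remote one
let counter = spawn(Counter)
send(counter, Increment)
```

Since state changes only between messages, the runtime has natural checkpoint boundaries, which is what makes migration between nodes transparent.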
## AI as First-Class Citizens
In most languages, AI is an external concern - you make HTTP calls to APIs. Simplex treats AI operations as built-in language primitives:
```python
# Other languages (Python example)
response = openai.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": prompt}]
)
text = response.choices[0].message.content
```

```simplex
// Simplex
let text = await ai::complete(prompt)
```
Benefits of first-class AI:
- Type safety - AI operations have proper types
- Automatic batching - the runtime optimizes multiple requests
- Cost optimization - transparent model tiering
- Local inference - swarm-local models, not external APIs
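Beyond completion, the other primitives named earlier (embedding, extraction, classification) could be used like this (a sketch: the exact signatures and the `Invoice` type are illustrative assumptions):

```simplex
struct Invoice {
    vendor: String,
    total: f64
}

// Typed extraction: the result is checked against Invoice,
// so a malformed response is a type error, not a runtime surprise
let invoice = await ai::extract<Invoice>(document_text)

// Classification over a closed set of labels
let label = await ai::classify(ticket_text, ["bug", "feature", "question"])

// Embedding for similarity search
let vec = await ai::embed(query)
```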
## Cost-Conscious Design
Simplex is designed for real-world economics:
- Spot instances - first-class support for interruptible compute
- ARM instances - optimized for cost-effective Graviton processors
- Object storage - checkpoints to S3, not expensive EBS
- Local AI - small language models (SLMs) instead of expensive API calls
| Aspect | Traditional | Simplex |
|---|---|---|
| Compute | On-demand x86 | 90% spot ARM instances |
| Storage | EBS volumes | S3 object storage |
| AI | External API calls | Local SLM swarms |
| Memory | GC overhead | Ownership-based, no GC |
## Key Design Decisions
### Why Bytecode?
Simplex compiles to bytecode rather than native code because:
- Bytecode is portable across platforms
- Content-addressing works naturally with bytecode
- JIT compilation provides native performance where needed
- Smaller binary sizes (important for distribution)
### Why Actors Over Threads?
Actors provide stronger guarantees than traditional threading:
- No shared state means no data races
- Message passing is explicit and observable
- Supervision provides systematic error recovery
- Migration across nodes is natural
### Why Static Types?
Static typing catches errors at compile time:
- Type errors found before deployment
- Better IDE support and refactoring
- Documentation through types
- Performance optimization opportunities