The Vision
Programming is fundamentally changing. AI is no longer an external service you call - it's becoming a core part of how we build software. Simplex is designed from scratch for this new reality.
Open Source to the Core
Simplex is open source not as a marketing strategy, but as a foundational belief. The tools that shape how we build intelligent systems should belong to everyone. Every line of code, every architectural decision, every optimization is visible, auditable, and improvable by the community.
We believe that AI-native programming is too important to be controlled by any single company. The future of software development should be built in the open, by developers, for developers.
Breaking Free from Black Box AI
Today's AI landscape is dominated by proprietary APIs from tech giants. Your data goes into their servers, gets processed by models you can't inspect, and returns through infrastructure you don't control. You're building on a foundation you can't see, can't modify, and can't trust to remain stable.
Simplex takes a different path. By embracing Small Language Models that run locally, you get:
- Transparency - inspect and understand the models you use
- Privacy - your data never leaves your infrastructure
- Control - no API changes, rate limits, or surprise deprecations
- Independence - your applications work regardless of big tech decisions
AI for Everyone, Not Just Enterprises
The current economics of AI are broken. A single GPT-4 query can cost more than running a server for an hour. This prices out individuals, startups, and organizations in developing nations from the AI revolution.
Simplex is designed to make intelligent applications affordable at any scale:
- Local SLMs - run 7B parameter models on commodity hardware
- Spot instances - leverage interruptible compute at up to 90% discount
- Object storage - use S3 instead of expensive block storage
- ARM processors - optimize for cost-effective Graviton chips
A student in Lagos should have the same access to AI-powered development as an engineer at a Fortune 500 company. That's not idealism - it's the design goal.
The Math of Democratization
Running a local 7B model costs approximately $0.0001 per query on commodity hardware. The same query to GPT-4 costs $0.03-0.06. That's a 300-600x cost reduction. This isn't just an optimization - it's what makes AI accessible to the rest of the world.
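The arithmetic behind these figures can be checked directly. A quick sketch using the document's own estimates (not live provider pricing):

```python
# Cost comparison using the estimates above (not live pricing).
local_cost = 0.0001                          # $ per query, local 7B model
gpt4_cost_low, gpt4_cost_high = 0.03, 0.06   # $ per query, hosted GPT-4 estimate

ratio_low = gpt4_cost_low / local_cost    # ~300x
ratio_high = gpt4_cost_high / local_cost  # ~600x

# At one million queries, the difference is stark:
queries = 1_000_000
print(f"Local:  ${local_cost * queries:,.0f}")   # prints "Local:  $100"
print(f"Hosted: ${gpt4_cost_low * queries:,.0f}"
      f"-${gpt4_cost_high * queries:,.0f}")      # prints "Hosted: $30,000-$60,000"
```

At a million queries, the gap is the difference between pocket change and a significant infrastructure budget.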
Let It Crash
Most programming philosophies try to prevent errors. Simplex embraces them.
Borrowed from Erlang/OTP, the "let it crash" philosophy recognizes that:
- Errors will happen no matter how careful you are
- Recovery is more important than prevention
- Supervision trees provide systematic recovery
- Simpler code results from not handling every edge case
```simplex
supervisor MySystem {
    strategy: OneForOne,      // Only restart the failed child
    max_restarts: 3,          // Max 3 restarts...
    within: 60.seconds,       // ...within 60 seconds
    children: [
        child(DatabaseWorker, restart: Always),
        child(CacheWorker, restart: Transient),
        child(LogWorker, restart: Temporary)
    ]
}
```
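The restart-intensity rule in the example (at most 3 restarts within 60 seconds, then escalate) can be sketched with a sliding window. `RestartWindow` is an illustrative name, not part of Simplex's runtime:

```python
from collections import deque
import time

class RestartWindow:
    """Illustrative sketch: tolerate at most `max_restarts` restarts
    within `window` seconds; beyond that, the supervisor escalates."""
    def __init__(self, max_restarts=3, window=60.0):
        self.max_restarts = max_restarts
        self.window = window
        self.restarts = deque()

    def record_restart(self, now=None):
        """Return True if the restart is allowed, False if the supervisor
        should give up and escalate the failure to its own supervisor."""
        now = time.monotonic() if now is None else now
        # Drop restarts that have fallen outside the sliding window.
        while self.restarts and now - self.restarts[0] > self.window:
            self.restarts.popleft()
        self.restarts.append(now)
        return len(self.restarts) <= self.max_restarts

w = RestartWindow(max_restarts=3, window=60.0)
results = [w.record_restart(now=t) for t in (0, 10, 20, 30)]
# First three restarts are tolerated; the fourth within 60s escalates.
```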
Ownership Without Garbage Collection
Inspired by Rust, Simplex uses ownership-based memory management. Every value has exactly one owner, and when that owner goes out of scope, the value is deallocated.
This provides:
- Deterministic performance - no GC pauses
- Predictable memory usage - know exactly when memory is freed
- Thread safety - ownership prevents data races
- Efficient execution - no runtime garbage collection overhead
```simplex
fn process_data(data: Data) {      // Takes ownership
    // data is now owned by this function
    transform(data)
}                                  // data is automatically freed here

fn read_data(data: &Data) {        // Borrows immutably
    // Can read but not modify
    print(data.value)
}

fn modify_data(data: &mut Data) {  // Borrows mutably
    // Exclusive access to modify
    data.value = 42
}
```
Content-Addressed Code
Inspired by Unison, Simplex identifies functions by the SHA-256 hash of their implementation. This seemingly simple idea has profound implications:
- Perfect caching - same hash means identical behavior
- Trivial distribution - send hash, not code
- No dependency hell - no version conflicts possible
- Code as data - functions are values that can be stored and transmitted
How It Works
When you define a function, Simplex computes its content hash. When another node needs that function, it first checks its cache by hash. If not found, it fetches the bytecode. The same function always has the same hash, regardless of where it was defined.
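The hash-then-cache lookup described above can be sketched in a few lines. The names `publish` and `resolve` are illustrative, not Simplex's API:

```python
import hashlib

# Sketch of content-addressed lookup: bytecode is keyed by its SHA-256 hash.
code_cache: dict[str, bytes] = {}

def content_hash(bytecode: bytes) -> str:
    return hashlib.sha256(bytecode).hexdigest()

def publish(bytecode: bytes) -> str:
    """Store bytecode under its hash; the hash is the function's identity."""
    h = content_hash(bytecode)
    code_cache[h] = bytecode
    return h

def resolve(h: str, fetch_remote) -> bytes:
    """Check the local cache first; only fetch over the network on a miss."""
    if h not in code_cache:
        code_cache[h] = fetch_remote(h)
    return code_cache[h]

# Identical implementations always hash to the same identity,
# no matter where or when they were defined.
a = publish(b"ADD r1 r2")
b = publish(b"ADD r1 r2")
assert a == b
```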
Actors as the Distribution Unit
Actors provide a natural model for distributed systems:
- Isolation - actors share nothing; communication is explicit
- Location transparency - same API for local and remote actors
- Checkpointing - state captured at message boundaries
- Migration - actors can move between nodes transparently
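The first two properties - private state and explicit message passing - can be illustrated with a minimal mailbox-based actor. This is a generic sketch of the model, not Simplex's actor runtime:

```python
import threading
import queue

class Actor:
    """Minimal actor sketch: private state, a mailbox, and explicit
    message passing. Nothing is shared; the only way in is `send`."""
    def __init__(self):
        self._mailbox = queue.Queue()
        self._count = 0  # private state, invisible to other actors
        self._thread = threading.Thread(target=self._run, daemon=True)
        self._thread.start()

    def send(self, msg):
        self._mailbox.put(msg)

    def _run(self):
        while True:
            msg = self._mailbox.get()
            if msg == "stop":
                break
            # Messages are processed one at a time: no data races on _count.
            self._count += 1

counter = Actor()
for _ in range(5):
    counter.send("tick")
counter.send("stop")
counter._thread.join()
```

Because every state change happens inside the actor's own message loop, the isolation guarantee falls out of the structure rather than from locks.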
AI as First-Class Citizens
In most languages, AI is an external concern - you make HTTP calls to APIs. Simplex treats AI operations as built-in language primitives:
```python
# Other languages (Python example)
response = openai.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": prompt}]
)
text = response.choices[0].message.content
```

```simplex
// Simplex
let text = await ai::complete(prompt)
```
Benefits of first-class AI:
- Type safety - AI operations have proper types
- Automatic batching - the runtime optimizes multiple requests
- Cost optimization - transparent model tiering
- Local inference - swarm-local models, not external APIs
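The automatic-batching idea - the runtime collects pending completion requests and serves them with a single model invocation - can be sketched as follows. `Batcher` and `fake_model` are stand-ins, not a real Simplex or provider API:

```python
# Sketch of automatic batching: pending completion requests are
# accumulated and sent to the model in one call.
class Batcher:
    def __init__(self, model_fn, max_batch=8):
        self.model_fn = model_fn  # takes a list of prompts, returns a list of texts
        self.max_batch = max_batch
        self.pending = []

    def submit(self, prompt):
        self.pending.append(prompt)

    def flush(self):
        """Run up to max_batch pending prompts through the model in one call."""
        batch = self.pending[:self.max_batch]
        self.pending = self.pending[self.max_batch:]
        return self.model_fn(batch)

calls = []
def fake_model(prompts):
    calls.append(len(prompts))          # record how many prompts each call served
    return [p.upper() for p in prompts]

b = Batcher(fake_model)
for p in ("one", "two", "three"):
    b.submit(p)
results = b.flush()
# Three requests, but only one model invocation.
```

Batching is especially valuable for local SLM inference, where per-call overhead dominates and the GPU is happiest with full batches.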
Cost-Conscious Design
Simplex is designed for real-world economics:
- Spot instances - first-class support for interruptible compute
- ARM instances - optimized for cost-effective Graviton processors
- Object storage - checkpoints to S3, not expensive EBS
- Local AI - SLMs instead of expensive API calls
| Aspect | Traditional | Simplex |
|---|---|---|
| Compute | On-demand x86 | Spot ARM instances (up to 90% cheaper) |
| Storage | EBS volumes | S3 object storage |
| AI | External API calls | Local SLM swarms |
| Memory | GC overhead | Ownership-based, no GC |
Key Design Decisions
Why Bytecode?
Simplex compiles to bytecode rather than native code because:
- Bytecode is portable across platforms
- Content-addressing works naturally with bytecode
- JIT compilation provides native performance where needed
- Binaries are smaller, which matters for distribution
Why Actors Over Threads?
Actors provide stronger guarantees than traditional threading:
- No shared state means no data races
- Message passing is explicit and observable
- Supervision provides systematic error recovery
- Migration across nodes is natural
Why Static Types?
Static typing catches errors at compile time:
- Type errors found before deployment
- Better IDE support and refactoring
- Documentation through types
- Performance optimization opportunities