Prerequisites
Before starting, ensure you have:
- Simplex toolchain installed (see Getting Started)
- Ollama running locally for SLM inference
- A code editor (VS Code with Simplex extension recommended)
- Basic familiarity with programming concepts
Verify your installation:
$ sxc --version
Simplex Compiler 0.5.1
$ ollama list
NAME            SIZE
llama3.2:3b     2.0 GB
Environment Setup
1. Pull an SLM via Ollama
Simplex works with any Ollama model. Pull one to get started:
$ ollama pull llama3.2:3b
# Or use a smaller model for development
$ ollama pull llama3.2:1b
2. Configure Your Editor
For VS Code, install the Simplex extension:
$ code --install-extension simplex.simplex-lang
This provides syntax highlighting, LSP integration, and inline diagnostics.
Create a Project
Use sxpm (Simplex Package Manager) to create a new project:
$ sxpm new my-assistant
Creating project 'my-assistant'...
Created: my-assistant/
Created: my-assistant/simplex.toml
Created: my-assistant/src/main.sx
Created: my-assistant/tests/
$ cd my-assistant
Project Structure
my-assistant/
├── simplex.toml         # Project configuration
├── src/
│   └── main.sx          # Entry point
└── tests/
    └── main_test.sx     # Test file (added in Run & Test below)
Configure simplex.toml
The generated simplex.toml holds project metadata and the default SLM configuration:

[package]
name = "my-assistant"
version = "0.1.0"
authors = ["Your Name"]

[dependencies]
# Add dependencies here

[slm]
# Default model for specialists and hives
model = "llama3.2:3b"
provider = "ollama"
endpoint = "http://localhost:11434"
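If you pulled the smaller llama3.2:1b model earlier, you can point the project at it during development. This is just a sketch of the same [slm] keys with a different tag; since Simplex works with any Ollama model, any tag shown by ollama list should work:

[slm]
# Smaller model for faster local iteration
model = "llama3.2:1b"
provider = "ollama"
endpoint = "http://localhost:11434"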
First Specialist
Let's create a simple assistant specialist. Edit src/main.sx:
// A simple assistant specialist
specialist Assistant {
    // Basic anima configuration
    anima: {
        episodic: { capacity: 100 },
        semantic: { capacity: 500 },
        beliefs: { revision_threshold: 30 },
    },

    // Handle chat messages
    receive Chat(message: String) -> String {
        // Generate a response using the configured SLM
        let response = infer(message)
        // Checkpoint state
        checkpoint()
        response
    }
}

// Entry point
fn main() {
    // Spawn the assistant
    let assistant = spawn Assistant
    // Send a message and wait for the reply
    let reply = ask(assistant, Chat("Hello! What can you help me with?"))
    println(reply)
}
Build and run:
$ sxpm build
Compiling my-assistant v0.1.0
Finished release target in 0.42s
$ sxpm run
Hello! I'm an AI assistant. I can help you with:
- Answering questions
- Analyzing information
- Writing and editing text
...
Add Memory
Let's enhance our assistant to remember conversations:
specialist Assistant {
    anima: {
        episodic: { capacity: 100 },
        semantic: { capacity: 500 },
        beliefs: { revision_threshold: 30 },
    },

    receive Chat(message: String) -> String {
        // Recall relevant past conversations
        let context = self.anima.recall_for(message, limit: 5)
        // Generate response with memory context
        let response = infer(message, context: context)
        // Remember this exchange
        self.anima.remember("user", message)
        self.anima.remember("assistant", response)
        checkpoint()
        response
    }

    // Learn a fact about the user
    receive Learn(fact: String, confidence: f64) {
        self.anima.believe("user_fact", fact, confidence: confidence)
        checkpoint()
    }
}
Now the assistant remembers past interactions and can learn facts about users!
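As a quick sketch of how you might exercise the new handler: Learn declares no return value, so this example assumes a hypothetical fire-and-forget send primitive alongside the request-reply ask used above.

fn main() {
    let assistant = spawn Assistant
    // Store a belief about the user (send is an assumed
    // fire-and-forget delivery, since Learn returns nothing)
    send(assistant, Learn("prefers short answers", 0.9))
    // Continue the conversation as before
    let reply = ask(assistant, Chat("Hi again!"))
    println(reply)
}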
Create a Hive
For more complex projects, organize specialists into a hive:
// Specialist for analyzing questions
specialist Analyzer {
    anima: {
        episodic: { capacity: 200 },
        semantic: { capacity: 1000 },
        beliefs: { revision_threshold: 30 },
    },

    receive Analyze(query: String) -> Analysis {
        let context = self.anima.recall_for(query, limit: 3)
        let result = infer<Analysis>(query, context: context)
        self.anima.remember(query, result)
        result
    }
}

// Specialist for generating responses
specialist Responder {
    anima: {
        episodic: { capacity: 200 },
        semantic: { capacity: 1000 },
        beliefs: { revision_threshold: 30 },
    },

    receive Respond(analysis: Analysis) -> String {
        let style = self.anima.beliefs.get("response_style")
        infer(analysis.summary, system: style.value)
    }
}

// Hive that coordinates the specialists
hive AssistantHive {
    // Shared SLM (loaded once, used by all)
    slm: "llama3.2:3b",

    // Shared memory across specialists
    mnemonic: {
        episodic: { capacity: 500 },
        semantic: { capacity: 2000 },
        beliefs: { revision_threshold: 50 },
    },

    // Specialists in this hive
    specialists: [Analyzer, Responder],
}

fn main() {
    // Create the hive
    let hive = spawn AssistantHive
    // Spawn specialists within the hive
    let analyzer = hive.spawn<Analyzer>()
    let responder = hive.spawn<Responder>()

    // Process a query through the pipeline
    let analysis = ask(analyzer, Analyze("What is Simplex?"))
    let response = ask(responder, Respond(analysis))
    println(response)
}
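Note the typed call infer<Analysis> in Analyzer: it asks the SLM for a structured Analysis value rather than raw text, which is what lets Respond consume analysis.summary directly. Splitting analysis and response generation this way keeps each specialist's prompts and memory focused on a single task.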
Memory Efficiency
Both specialists share the same 2 GB SLM through the hive. Without the hive, each would load its own copy (4 GB total).
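More generally, a hive loads the model once no matter how many specialists it hosts, so SLM memory stays flat as a hive grows; only the per-specialist anima state scales with the number of specialists.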
Run & Test
Running Your Project
# Build and run
$ sxpm run
# Run with debug output
$ sxpm run --debug
# Watch for changes and rebuild
$ sxpm watch
Writing Tests
Create tests/main_test.sx:
use test::*
use my_assistant::{Assistant, Analyzer}

#[test]
fn test_assistant_responds() {
    let assistant = spawn Assistant
    let reply = ask(assistant, Chat("Hello"))
    assert(!reply.is_empty())
}

#[test]
fn test_memory_persists() {
    let assistant = spawn Assistant
    // First interaction
    ask(assistant, Chat("My name is Alice"))
    // Second interaction should remember
    let reply = ask(assistant, Chat("What is my name?"))
    assert(reply.contains("Alice"))
}
Run tests:
$ sxpm test
Running 2 tests...
test_assistant_responds ... ok (0.82s)
test_memory_persists ... ok (1.24s)
All tests passed!
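The Analyzer import above leaves room to grow the suite. Here is a sketch of a third test, assuming specialists can be spawned standalone in tests the way Assistant is:

#[test]
fn test_analyzer_summarizes() {
    let analyzer = spawn Analyzer
    // Analyze returns a structured Analysis value
    let analysis = ask(analyzer, Analyze("What is Simplex?"))
    // summary is the field Responder consumes in the hive example
    assert(!analysis.summary.is_empty())
}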