What is the Anima?
The Anima is the cognitive soul of every Simplex AI agent - the beating heart, mind, and memory that gives AI systems personality, continuity, and purpose. While traditional AI systems are stateless (each request starts fresh), the Anima provides:
- Continuity - Memory persists across sessions
- Learning - Agents improve from experience
- Personality - Consistent behavior and values
- Purpose - Goal-directed behavior with intentions
- Adaptability - Beliefs update with evidence
The Anima draws inspiration from cognitive psychology (episodic and semantic memory systems), the BDI architecture (Beliefs, Desires, Intentions) from AI research, and the philosophical concept of an animating soul or spirit.
Creating an Anima
Use the anima keyword to define a cognitive soul:
// Define an anima with identity, memory, beliefs, and persistence
anima AssistantSoul {
    identity: {
        purpose: "Help users solve programming problems",
        personality: "Friendly, precise, and patient",
        values: ["accuracy", "helpfulness", "clarity"]
    },
    memory: {
        episodic: EpisodicStore::new(),      // Experiences
        semantic: SemanticStore::new(),      // Facts and knowledge
        procedural: ProceduralStore::new(),  // Skills and procedures
        working: WorkingMemory::new(7)       // Short-term context
    },
    beliefs: {
        revision_threshold: 0.3,
        contradiction_resolution: "evidence_weighted"
    },
    persistence: {
        path: "/data/assistant.anima",
        auto_save: true,
        interval: Duration::minutes(5)
    }
}
fn main() {
    // Create a new anima instance
    let soul = anima AssistantSoul

    // Or with shorthand for quick prototyping
    let quick_soul = anima {
        purpose: "Code review assistant",
        personality: "Thorough and constructive"
    }

    // Load existing anima from disk
    let restored = Anima::load("/data/assistant.anima")
}
Memory Systems
The Anima has four distinct memory systems, mirroring human cognition:
- Episodic Memory - Autobiographical experiences (what happened, when, context)
- Semantic Memory - Facts, concepts, and learned knowledge
- Procedural Memory - Skills, procedures, and how-to knowledge
- Working Memory - Short-term context with limited capacity (default: 7 items)
fn demonstrate_memory(this: Anima) {
    // --- Episodic Memory ---
    // Store experiences with optional importance (0.0-1.0)
    this.remember("User asked about error handling")
    this.remember("Helped debug a null pointer exception", importance: 0.9)

    // Rich experience with context
    this.remember(Experience {
        content: "Reviewed user's authentication code",
        timestamp: now(),
        context: { file: "auth.sx", user: "alice" },
        importance: 0.8
    })

    // --- Semantic Memory ---
    // Store facts with confidence scores
    this.learn("Simplex uses actors for concurrency")
    this.learn("The Result type has Ok and Err variants", confidence: 0.95)

    // Learn with source attribution
    this.learn(Knowledge {
        content: "HashMap lookup is O(1) average case",
        confidence: 0.99,
        source: "documentation"
    })

    // --- Procedural Memory ---
    // Store skills and procedures
    this.store_procedure("handle_api_error", [
        "Check HTTP status code",
        "Log error details",
        "Determine if retryable",
        "Execute retry or fallback"
    ])

    // --- Working Memory ---
    // Short-term context (auto-manages capacity)
    this.working.push("Current task: review PR #123")
    this.working.push("User preference: verbose output")
    let context = this.working.context()  // Get all working memory
    this.working.clear()                  // Clear when done
}
Episodic Memory: remember() and recall_for()
Episodic memory stores autobiographical experiences. Use remember() to store
them and recall_for() for goal-directed retrieval:
fn episodic_operations(this: Anima) {
    // Store experiences with importance levels
    // High importance (0.8-1.0): Critical decisions, errors, preferences
    this.remember("User explicitly requested verbose output", importance: 0.9)

    // Medium importance (0.4-0.7): Normal interactions
    this.remember("Answered question about async", importance: 0.5)

    // Low importance (0.1-0.3): Routine events (may be pruned)
    this.remember("Started new session", importance: 0.2)

    // Goal-directed recall - searches all memory types
    let memories = this.recall_for("error handling")

    // Recall with context and limit
    let relevant = this.recall_for(
        goal: "debug null pointer",
        context: "authentication code",
        max_results: 10
    )

    // Recall recent memories
    let recent = this.recall_recent(5)  // Last 5 memories

    // Recall by time range
    let today = this.recall_by_time(
        start: today(),
        end: now()
    )
}
Goal-Directed Recall
The recall_for() function searches across all memory types: episodic (similar past
experiences), semantic (relevant facts), and procedural (applicable procedures). This makes it
powerful for assembling context before reasoning.
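To make the cross-store behavior concrete, here is a minimal sketch that reuses only the calls introduced above; a single recall_for() result set may mix a past debugging session (episodic), a known fact about Result types (semantic), and the stored "handle_api_error" procedure (procedural):

```
fn plan_fix(this: Anima) {
    // One query, results drawn from all three long-term stores
    let context = this.recall_for("error handling")

    // Feed the assembled context into reasoning
    let plan = this.think(
        question: "How should I handle this API failure?",
        context: context
    )
}
```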
Semantic Memory: learn() and knows()
Semantic memory stores facts, concepts, and learned knowledge:
fn semantic_operations(this: Anima) {
    // Learn facts with confidence scores
    this.learn("Simplex compiles to LLVM IR", confidence: 0.95)
    this.learn("Error handling uses Result type", confidence: 0.9)

    // Learn with source attribution
    this.learn(Knowledge {
        content: "Actors use message passing for communication",
        confidence: 0.95,
        source: "documentation"
    })

    // Check if knowledge exists
    if this.knows("actors") {
        print("I know about actors!")
    }

    // Get explanation of a topic
    let explanation = this.explain("error handling")
}
Belief System: believe() and revise_belief()
The Anima maintains beliefs that can be formed, strengthened, weakened, and contradicted. Beliefs differ from facts - they represent the agent's current understanding and can change:
fn belief_operations(this: Anima) {
    // Form a belief with confidence
    this.believe("User prefers concise responses", confidence: 0.7)

    // Belief with evidence
    this.believe(Belief {
        content: "This codebase follows clean architecture",
        confidence: 0.8,
        evidence: ["Separated domain layer", "Dependency injection used"]
    })

    // Revise belief based on new evidence
    // User asked for more detail - revise our belief about conciseness
    this.revise_belief(
        "User prefers concise responses",
        new_confidence: 0.3,
        evidence: "User asked for more detailed explanations"
    )
}

// Configure belief handling in anima definition
anima ThoughtfulAgent {
    beliefs: {
        // How to handle contradictions
        contradiction_resolution: "evidence_weighted",  // Default
        // Other options: "recency_biased", "confidence_threshold", "ask_user"
        revision_threshold: 0.3  // Minimum evidence weight to revise
    }
}
Belief Thresholds by Level
In the Hive architecture, belief thresholds vary by level:
- Anima: 30% - Individual beliefs, flexible and personal
- Mnemonic: 50% - Shared beliefs, need consensus
- Divine: 70% - Global beliefs, require high confidence
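For example, evidence with weight 0.4 clears the Anima's 30% revision threshold but falls short of the Mnemonic's 50%: the individual agent revises its personal belief while the shared belief stands. A configuration sketch of the two levels, reusing only syntax shown elsewhere in this tutorial (the names are placeholders):

```
// Personal threshold: a 0.4-weight observation triggers revision
anima FlexibleAgent {
    beliefs: { revision_threshold: 0.3 }
}

// Shared threshold: the same 0.4-weight observation is below 50%,
// so the hive-level belief is not revised
hive CautiousHive {
    mnemonic: {
        beliefs: { revision_threshold: 50 }  // 50% for shared beliefs
    }
}
```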
BDI Architecture: desire() and intend()
The Anima implements the Beliefs-Desires-Intentions model from AI research. Desires are goals the agent wants to achieve; intentions are active plans to achieve them:
fn bdi_operations(this: Anima) {
    // --- Desires (Goals) ---
    // Add desires with priority (0.0-1.0)
    this.desire("Help user understand error handling", priority: 0.8)
    this.desire("Maintain code quality standards", priority: 0.6)
    this.desire("Respond to urgent query", priority: 0.95)

    // Get highest priority desire
    let top_goal = this.top_desire()
    print("Top goal: {top_goal}")

    // Mark desire as achieved
    this.achieve_desire("Respond to urgent query")

    // --- Intentions (Active Plans) ---
    // Form an intention with steps
    this.intend(Intention {
        goal: "Review authentication module",
        plan: [
            "Read auth.sx",
            "Check for security issues",
            "Review error handling",
            "Suggest improvements"
        ],
        current_step: 0
    })

    // Advance through the plan
    this.advance_intention("Review authentication module")

    // Get current step
    let step = this.current_step("Review authentication module")
    print("Currently on: {step}")
}
Thinking with think()
The think() operation uses an SLM (Small Language Model) for reasoning,
automatically including relevant context from the Anima's memory:
fn reasoning_example(this: Anima) {
    // Simple thinking - asks the SLM with anima context
    let answer = this.think("What's the best approach for error handling here?")

    // Think with explicit context from recall
    let context = this.recall_for("refactoring decisions")
    let decision = this.think(
        question: "Should I suggest refactoring?",
        context: context
    )

    // The SLM receives context like:
    // <context>
    //   Recent experiences: ...
    //   Known facts: ...
    //   Current beliefs (confidence > 30%): ...
    // </context>
    // [Your question here]
}
Persistence: Saving and Loading
Anima state can be saved to disk and loaded later, enabling persistent cognitive agents:
fn persistence_example() {
    let path = "/data/assistant.anima"

    // Check if save exists and load or create new
    let soul = if Anima::exists(path) {
        print("Loading existing anima...")
        Anima::load(path)
    } else {
        print("Creating new anima...")
        anima AssistantSoul
    }

    // Work with the anima...
    soul.remember("Started new session")
    soul.learn("User prefers dark mode", confidence: 0.8)

    // Manual save
    soul.save(path)
}

// Auto-save configuration in anima definition
anima PersistentAgent {
    persistence: {
        path: "/data/agent.anima",
        auto_save: true,
        interval: Duration::minutes(5)
    }
}
Integration with Actors: CognitiveActor
The Anima integrates seamlessly with Simplex actors to create cognitive agents:
actor CognitiveAssistant {
    anima: AssistantSoul,
    tools: ToolRegistry,

    receive Chat(message: String) -> String {
        // Remember the interaction
        self.anima.remember("User: {message}")

        // Recall relevant context
        let context = self.anima.recall_for(message)

        // Think about the response
        let response = self.anima.think(
            "How should I respond to: {message}",
            context: context
        )

        // Learn from successful interactions
        if response.confidence > 0.8 {
            self.anima.learn(
                "Successful response pattern for: {message.category()}",
                confidence: response.confidence
            )
        }

        response.content
    }

    receive SetGoal(goal: String, priority: f64) {
        self.anima.desire(goal, priority: priority)
    }
}
Team with Shared Memory
Multiple actors can share an anima with read-write or read-only views:
// Create shared anima for the team
let team_memory = anima TeamKnowledge

// Create views with different permissions
let rw_view = team_memory.view(ReadWrite)
let ro_view = team_memory.view(ReadOnly)

actor Researcher {
    anima: rw_view,  // Can read and write

    receive Research(topic: String) {
        let findings = do_research(topic)
        self.anima.learn(findings)
        self.anima.remember("Researched: {topic}")
    }
}

actor Synthesizer {
    anima: ro_view,  // Can only read

    receive Synthesize(query: String) -> Report {
        let knowledge = self.anima.recall_for(query)
        generate_report(knowledge)
    }
}
Specialist with Hive Integration
In the Hive architecture, each specialist has its own Anima while sharing a HiveMnemonic:
hive AnalyticsHive {
    specialists: [Analyzer, Summarizer, Critic],

    // Shared SLM for all specialists
    slm: "simplex-cognitive-7b",

    // Shared consciousness (HiveMnemonic)
    mnemonic: {
        episodic: { capacity: 1000 },
        semantic: { capacity: 5000 },
        beliefs: { revision_threshold: 50 },  // 50% for shared beliefs
    },
}

specialist Analyzer {
    // Personal Anima configuration
    anima: {
        purpose: "Analyze code for issues",
        beliefs: { revision_threshold: 30 },  // 30% for personal beliefs
    },

    receive Analyze(code: String) -> Analysis {
        // My Anima + Hive Mnemonic context flows to shared SLM
        infer("Find issues in: " + code)
    }

    receive ShareFinding(finding: String) {
        // Personal memory
        self.anima.remember("Found: {finding}")

        // Share with hive
        hive.mnemonic.learn("Analysis finding: {finding}")
    }
}
Try It Yourself
Build a cognitive code reviewer that:
- Remembers all code it has reviewed (episodic memory)
- Learns coding patterns and best practices (semantic memory)
- Forms beliefs about code quality ("this codebase follows X pattern")
- Has a goal to improve code quality (desires/intentions)
- Persists its knowledge between sessions
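A possible starting skeleton, assembled only from constructs covered in this tutorial; the name CodeReviewerSoul, the path, and the sample strings are placeholders to fill in:

```
anima CodeReviewerSoul {
    identity: {
        purpose: "Improve code quality through thorough review",
        personality: "Constructive and detail-oriented",
        values: ["correctness", "readability", "consistency"]
    },
    persistence: {
        path: "/data/reviewer.anima",   // Persists between sessions
        auto_save: true,
        interval: Duration::minutes(5)
    }
}

fn review(this: Anima, code: String) {
    // Episodic: remember what was reviewed
    this.remember("Reviewed: {code}", importance: 0.6)

    // Semantic: learn patterns and best practices
    this.learn("This codebase wraps fallible calls in Result", confidence: 0.7)

    // Beliefs: form a revisable view of code quality
    this.believe("This codebase follows clean architecture", confidence: 0.5)

    // Desires: keep the quality goal active
    this.desire("Improve code quality", priority: 0.9)
}
```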
Summary
In this tutorial, you learned:
- The Anima is the cognitive soul giving AI agents memory, beliefs, and purpose
- Creating animas with the anima keyword and identity configuration
- Four memory systems: episodic, semantic, procedural, and working
- Using remember() and recall_for() for experiences
- Using learn() and knows() for facts and knowledge
- Forming and revising beliefs with believe() and revise_belief()
- The BDI architecture with desire() and intend()
- SLM-powered reasoning with think()
- Persisting anima state with save() and load()
- Integrating animas with actors and hives
In the next tutorial, we'll explore how to organize code with modules and packages for larger Simplex projects.