What is CHAI?
Cognitive Hive AI (CHAI) orchestrates multiple specialized Small Language Models (SLMs) working together. Instead of one large model doing everything, you have:
- Specialists - Fine-tuned models for specific tasks
- Router - Directs requests to the right specialist
- Shared Memory - Common context accessible to all specialists
Defining Specialists
Specialists wrap SLMs with task-specific behavior:
```
// Summarization specialist
specialist Summarizer {
    model: "summarization-7b",
    domain: "text summarization",
    temperature: 0.3,

    receive Summarize(text: String, style: SummaryStyle) -> String {
        let prompt = match style {
            SummaryStyle::Brief => "Summarize in 1-2 sentences: {text}",
            SummaryStyle::Detailed => "Provide a detailed summary: {text}",
            SummaryStyle::Bullets => "Summarize as bullet points: {text}"
        }
        infer(prompt)
    }
}

// Entity extraction specialist
specialist EntityExtractor {
    model: "ner-7b",
    domain: "named entity recognition",
    temperature: 0.1,

    receive Extract(text: String) -> List<Entity> {
        infer_structured<List<Entity>>(
            "Extract all named entities: {text}"
        )
    }
}

// Sentiment analysis specialist
specialist SentimentAnalyzer {
    model: "sentiment-3b",
    domain: "sentiment analysis",
    temperature: 0.0,

    receive Analyze(text: String) -> SentimentResult {
        infer_structured<SentimentResult>(
            "Analyze sentiment: {text}"
        )
    }
}
```
Creating a Hive
Combine specialists into a hive with routing:
```
hive ContentHive {
    // Specialists in this hive
    specialists: [
        Summarizer,
        EntityExtractor,
        SentimentAnalyzer,
        TopicClassifier,
        LanguageDetector
    ],

    // How to route requests
    router: SemanticRouter {
        embeddings: "routing-embeddings",
        fallback: Summarizer
    },

    // Shared context
    memory: SharedMemory {
        capacity: 10000,
        ttl: Duration::hours(24)
    }
}
```
Using the Hive
Send requests to the hive - it routes automatically:
```
fn main() {
    let hive = spawn ContentHive

    let article = "Apple Inc. announced today that CEO Tim Cook
                   will present the new iPhone 16 at their Cupertino
                   headquarters next month. Analysts expect strong sales."

    // Direct specialist call
    let summary = ask(hive, Summarize(article, SummaryStyle::Brief))

    // Route based on intent
    let result = ask(hive, Process("What companies are mentioned?", article))
    // Router sends to EntityExtractor

    // Parallel analysis
    let analysis = ask(hive, FullAnalysis(article))
    // Runs all relevant specialists
}
```
Routing Strategies
Choose how requests get to the right specialist:
| Strategy | How It Works | Best For |
|---|---|---|
| `SemanticRouter` | Embeds the request, finds the closest specialist | Natural language queries |
| `KeywordRouter` | Matches keywords to specialists | Known task types |
| `ClassifierRouter` | Uses a classifier model to route | Complex routing needs |
| `RoundRobinRouter` | Distributes requests evenly | Load balancing identical workers |
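For example, a hive that only ever sees a fixed set of task types could swap the semantic router for a keyword-based one. A sketch of what that might look like; the `keywords` map field on `KeywordRouter` is an assumption, not a documented API:

```
hive ContentHive {
    specialists: [Summarizer, EntityExtractor, SentimentAnalyzer],

    // Route on literal keywords instead of embedding similarity
    router: KeywordRouter {
        keywords: {
            "summarize" => Summarizer,
            "entities"  => EntityExtractor,
            "sentiment" => SentimentAnalyzer
        },
        fallback: Summarizer
    }
}
```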
Shared Memory
Specialists can share context through the hive's memory:
```
specialist ContextAwareResponder {
    model: "chat-7b",

    receive Respond(query: String, context_key: String) -> String {
        // Access shared memory
        let context = memory::get(context_key)

        let prompt = match context {
            Some(ctx) => "Context: {ctx}\n\nQuery: {query}",
            None => query
        }

        let response = infer(prompt)

        // Store interaction in memory
        memory::append(context_key, "Q: {query}\nA: {response}")

        response
    }
}
```
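Because turns are stored under `context_key`, repeated calls accumulate conversational context. A usage sketch, assuming a `ChatHive` that contains `ContextAwareResponder` (both the hive name and the session key are illustrative):

```
let hive = spawn ChatHive
let key = "session-42"

// First turn: memory::get(key) returns None, so the raw query is used
let a1 = ask(hive, Respond("What is CHAI?", key))

// Second turn: the stored Q/A pair from turn one is prepended as context
let a2 = ask(hive, Respond("How does routing work?", key))
```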
Complete Example: Support Bot
Let's build a customer support hive:
```
// Specialist definitions
specialist IntentClassifier {
    model: "intent-3b",
    receive Classify(msg: String) -> Intent { ... }
}

specialist TechnicalSupport {
    model: "tech-support-7b",
    receive Help(issue: String) -> String { ... }
}

specialist BillingSupport {
    model: "billing-7b",
    receive Help(issue: String) -> String { ... }
}

specialist GeneralAssistant {
    model: "assistant-7b",
    receive Chat(msg: String) -> String { ... }
}

// The hive
hive SupportHive {
    specialists: [
        IntentClassifier,
        TechnicalSupport,
        BillingSupport,
        GeneralAssistant
    ],

    router: IntentRouter {
        classifier: IntentClassifier,
        routes: {
            Intent::Technical => TechnicalSupport,
            Intent::Billing => BillingSupport,
            Intent::General => GeneralAssistant
        }
    },

    memory: ConversationMemory {
        per_user: true,
        max_turns: 10
    }
}

// Using it
fn handle_message(hive: HiveRef, user_id: String, message: String) -> String {
    ask(hive, HandleSupport {
        user_id,
        message,
        include_history: true
    })
}
```
Cost Comparison
This hive uses four specialized SLMs (3B to 7B parameters each) instead of one large model. Result: ~90% cost reduction with better task-specific performance.
Final Project
Build a content moderation hive with:
- A `ToxicityDetector` specialist
- A `SpamClassifier` specialist
- A `PIIDetector` for personal information
- A `ContentSummarizer` for reports
- Shared memory for tracking repeat offenders
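One possible starting skeleton, following the patterns from this chapter (the model names, message shapes, and memory settings below are placeholders, not part of the exercise spec):

```
specialist ToxicityDetector {
    model: "toxicity-3b",      // placeholder model name
    temperature: 0.0,

    receive Check(text: String) -> ToxicityResult {
        infer_structured<ToxicityResult>("Rate the toxicity of: {text}")
    }
}

// SpamClassifier, PIIDetector, and ContentSummarizer follow the same shape

hive ModerationHive {
    specialists: [ToxicityDetector, SpamClassifier, PIIDetector, ContentSummarizer],

    router: ClassifierRouter { ... },   // route by content type

    memory: SharedMemory {
        capacity: 50000,
        ttl: Duration::days(30)         // long enough to spot repeat offenders
    }
}
```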
Summary
Congratulations! You've completed the Simplex tutorial series. You learned:
- Simplex basics: variables, functions, control flow
- The type system: structs, enums, pattern matching
- The actor model: spawning, messages, supervision
- AI integration: extraction, classification, embeddings
- CHAI: specialists, routing, shared memory