AI Primitives
Simplex provides built-in AI operations through the ai module:
fn main() {
    // Text completion
    let response = await ai::complete("Explain AI in one sentence:")
    print(response)

    // Embeddings for semantic search
    let embedding = await ai::embed("machine learning")

    // Similarity comparison
    let similarity = ai::cosine_similarity(
        await ai::embed("cat"),
        await ai::embed("kitten")
    )
    print("Similarity: {similarity}")  // ~0.92
}
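Embeddings become more useful when you compare many of them. Below is a minimal semantic-search sketch that picks the candidate closest to a query. Only ai::embed and ai::cosine_similarity come from the snippet above; the for loop, mutable bindings, and indexing syntax are assumptions made for illustration.

fn most_similar(query: String, candidates: List<String>) -> String {
    let query_vec = await ai::embed(query)

    // Assumed syntax: mutable bindings, indexing, and a for loop
    let mut best = candidates[0]
    let mut best_score = 0.0

    for candidate in candidates {
        let score = ai::cosine_similarity(query_vec, await ai::embed(candidate))
        if score > best_score {
            best_score = score
            best = candidate
        }
    }

    best
}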
Type-Safe Extraction
The real power is extracting structured data with type safety:
struct ContactInfo {
    name: String,
    email: Option<String>,
    phone: Option<String>,
    company: Option<String>
}

fn main() {
    let text = "Hi, I'm Sarah Chen from Acme Corp.
                You can reach me at sarah@acme.com or 555-0123."

    // AI extracts and validates the type
    let contact: ContactInfo = await ai::extract(text)

    print("Name: {contact.name}")        // Sarah Chen
    print("Email: {contact.email}")      // Some(sarah@acme.com)
    print("Company: {contact.company}")  // Some(Acme Corp)
}
The compiler knows the expected type. If the AI can't extract valid data,
you get a Result error rather than a runtime crash or invalid data.
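As a quick illustration (the input text here is made up), you can match on that Result before using the data; fields the text never mentions would presumably come back as None:

fn main() {
    // Made-up input with no email or phone number
    let text = "Please follow up with Dana Lee about the overdue invoice."

    match await ai::extract<ContactInfo>(text) {
        Ok(contact) => {
            print("Name: {contact.name}")    // Dana Lee
            print("Phone: {contact.phone}")  // presumably None - the text has no phone number
        },
        Err(e) => print("Couldn't extract a contact: {e}")
    }
}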
Classification
Classify content into predefined categories:
enum Sentiment {
    Positive,
    Negative,
    Neutral
}

enum Priority {
    Low,
    Medium,
    High,
    Critical
}

fn analyze_feedback(text: String) {
    // Classify into your enum variants
    let sentiment: Sentiment = await ai::classify(text)
    let priority: Priority = await ai::classify(text)

    match (sentiment, priority) {
        (Sentiment::Negative, Priority::Critical) => {
            print("URGENT: Escalate immediately!")
        },
        (Sentiment::Positive, _) => {
            print("Great feedback received")
        },
        _ => {
            print("Normal processing")
        }
    }
}
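A quick usage sketch, with a made-up piece of feedback:

fn main() {
    // Hypothetical feedback text for illustration
    analyze_feedback("The checkout page crashes every time I try to pay. Please fix this ASAP!")
    // Likely hits the Negative + Critical arm above
}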
Building a Document Pipeline
Let's combine actors and AI to build a document processing pipeline:
struct Document {
    id: String,
    content: String
}

struct ProcessedDoc {
    id: String,
    summary: String,
    entities: List<Entity>,
    sentiment: Sentiment,
    embedding: Vector<384>
}

struct Entity {
    name: String,
    entity_type: String
}

enum Sentiment { Positive, Negative, Neutral }

actor DocumentProcessor {
    receive Process(doc: Document) -> ProcessedDoc {
        // Run AI operations in parallel
        let (summary, entities, sentiment, embedding) = await parallel(
            ai::complete("Summarize: {doc.content}"),
            ai::extract<List<Entity>>(doc.content),
            ai::classify<Sentiment>(doc.content),
            ai::embed(doc.content)
        )

        ProcessedDoc {
            id: doc.id,
            summary,
            entities,
            sentiment,
            embedding
        }
    }
}
fn main() {
    let processor = spawn DocumentProcessor

    let doc = Document {
        id: "doc-1",
        content: "Apple announced record earnings today.
                  CEO Tim Cook praised the strong iPhone sales
                  in the Greater China region."
    }
    let result = await ask(processor, Process(doc))
print("Summary: {result.summary}")
print("Entities: {result.entities}")
print("Sentiment: {result.sentiment}")
}
Model Selection
Choose the right model for each task:
// Fast tier - quick classification (7B model)
let sentiment = await ai::classify<Sentiment>(
    text,
    model: "fast"  // 10-50ms latency
)

// Default tier - balanced (70B model)
let summary = await ai::complete(prompt)  // Uses the default tier

// Quality tier - complex reasoning
let analysis = await ai::complete(
    complex_prompt,
    model: "quality"  // Claude/GPT-4 class
)
Error Handling
AI operations can fail, so handle errors gracefully:
fn safe_extract(text: String) -> Option<ContactInfo> {
    match await ai::extract<ContactInfo>(text) {
        Ok(contact) => Some(contact),
        Err(AIError::ExtractionFailed(reason)) => {
            print("Couldn't extract: {reason}")
            None
        },
        Err(AIError::RateLimited) => {
            print("Rate limited, retrying...")
            Thread::sleep(Duration::seconds(1))
            safe_extract(text)  // Retry
        },
        Err(e) => {
            print("AI error: {e}")
            None
        }
    }
}
Try It Yourself
Build a customer support ticket classifier (a starting sketch follows the list) that:
- Extracts the customer's name and issue description
- Classifies the ticket priority (Low/Medium/High/Critical)
- Assigns it to the right department (Sales/Technical/Billing)
- Generates a suggested response
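One possible starting point, using only the primitives from this tutorial. The TicketInfo struct, the Department enum, and the prompt wording are illustrative assumptions, not part of Simplex's standard library; Priority is the enum defined in the Classification section above.

struct TicketInfo {
    customer_name: String,
    issue: String
}

enum Department { Sales, Technical, Billing }

fn triage(ticket_text: String) {
    // Extract the customer's name and issue description
    let info: TicketInfo = await ai::extract(ticket_text)

    // Classify priority and department
    let priority: Priority = await ai::classify(ticket_text)
    let department: Department = await ai::classify(ticket_text)

    // Generate a suggested response (hypothetical prompt wording)
    let reply = await ai::complete(
        "Write a short, polite support reply to: {ticket_text}"
    )

    print("{info.customer_name} -> {department} ({priority})")
    print("Suggested reply: {reply}")
}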
Summary
In this tutorial, you learned:
- Using AI primitives: complete, embed, extract, classify
- Type-safe extraction into your structs
- Classification into enum variants
- Running AI operations in parallel
- Selecting model tiers for cost/quality tradeoffs
- Handling AI errors gracefully
In the final tutorial, we'll build a complete Cognitive Hive with multiple specialized AI workers.