New in v0.7.0
The simplex-learning library enables AI specialists to adapt in real time. This completes the vision of truly adaptive cognitive agents.
## Overview
Traditional ML requires offline batch training: collect data, train model, deploy, repeat. Real-time learning lets specialists adapt continuously from user feedback without taking them offline. Each interaction makes them smarter.
## Key Benefits
- Immediate adaptation - Learn from each interaction
- No retraining cycles - Continuous improvement
- Personalization - Adapt to individual user preferences
- Safety constraints - Fallbacks prevent runaway learning
## OnlineLearner
The core component for real-time learning:
```simplex
use simplex_learning::{OnlineLearner, StreamingAdam, SafeFallback, MaxLatency};

// Create an online learner
let mut learner = OnlineLearner::new(model_params)
    .optimizer(StreamingAdam::new(0.001))
    .constraint(MaxLatency(10.0))
    .fallback(SafeFallback::with_default(default_output));

// Learn from each interaction
for (input, feedback) in interactions {
    let output = learner.forward(&input); // serve this output to the user
    learner.learn(&feedback);             // adapts in real time
}
```
## Streaming Optimizers
Optimizers designed for single-sample or small-batch updates:
```simplex
use simplex_learning::optim::{StreamingSGD, StreamingAdam, AdamW};

// Streaming SGD with momentum
let sgd = StreamingSGD::new(0.01)
    .momentum(0.9)
    .weight_decay(0.0001)
    .max_grad_norm(1.0); // Automatic gradient clipping

// Streaming Adam with gradient accumulation
let adam = StreamingAdam::new(0.001)
    .betas(0.9, 0.999)
    .accumulation_steps(4); // Mini-batch accumulation

// AdamW with decoupled weight decay
let adamw = AdamW::new(0.001)
    .weight_decay(0.01);
```
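For intuition, here is a minimal sketch in plain Rust of what a single streaming update does: clip the per-sample gradient to a maximum global norm, fold in weight decay, apply momentum, and step the parameters. This is not the simplex_learning API; `StreamingSgdState` and its fields are illustrative names.

```rust
// Illustrative only: one streaming-SGD step on a single sample, with
// global-norm gradient clipping, momentum, and weight decay.
struct StreamingSgdState {
    lr: f32,
    momentum: f32,
    weight_decay: f32,
    max_grad_norm: f32,
    velocity: Vec<f32>, // one entry per parameter
}

impl StreamingSgdState {
    fn step(&mut self, params: &mut [f32], grads: &mut [f32]) {
        // Clip the gradient to a maximum global L2 norm.
        let norm = grads.iter().map(|g| g * g).sum::<f32>().sqrt();
        if norm > self.max_grad_norm {
            let scale = self.max_grad_norm / norm;
            for g in grads.iter_mut() {
                *g *= scale;
            }
        }
        // Momentum update, with weight decay folded into the gradient.
        for i in 0..params.len() {
            let g = grads[i] + self.weight_decay * params[i];
            self.velocity[i] = self.momentum * self.velocity[i] + g;
            params[i] -= self.lr * self.velocity[i];
        }
    }
}

fn main() {
    let mut opt = StreamingSgdState {
        lr: 0.01,
        momentum: 0.9,
        weight_decay: 1e-4,
        max_grad_norm: 1.0,
        velocity: vec![0.0; 2],
    };
    let mut params = vec![0.5, -0.3];
    let mut grads = vec![2.0, -1.5]; // gradient computed from a single interaction
    opt.step(&mut params, &mut grads);
    println!("updated params: {:?}", params);
}
```

Gradient accumulation, as in `accumulation_steps(4)`, amounts to summing several such per-sample gradients and applying the step once per mini-batch.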
## Safety Constraints & Fallbacks
Ensure learning doesn't destabilize your specialists:
```simplex
use simplex_learning::safety::{
    ConstraintManager, SafeLearner, SafeFallback, MaxLatency, NoLossExplosion, SafetyError,
};

// Define constraints
let constraints = ConstraintManager::new()
    .add_soft(MaxLatency("latency", 10.0, 0.5)) // name, threshold, penalty weight
    .add_hard(NoLossExplosion("loss", 100.0));  // name, hard loss ceiling

// Safe learner with fallback
let safe_learner = SafeLearner::new(learner, SafeFallback::with_default(safe_output))
    .with_validator(|output| output.is_valid())
    .max_failures(3);

// Process with safety checks
match safe_learner.try_process(&input, compute_fn) {
    Ok(output) => use_output(output),
    Err(SafetyError::NoFallbackAvailable { failures }) => {
        log_error("Failed after {failures} attempts");
    }
    Err(other) => log_error("Safety error: {other}"),
}
```
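To make the soft/hard distinction concrete, here is a small self-contained sketch in plain Rust, independent of the ConstraintManager API and with illustrative names: a soft constraint adds a weighted penalty to the loss when its metric exceeds the limit, while a hard constraint rejects the update outright.

```rust
// Illustrative only: soft constraints penalize, hard constraints veto.
enum Constraint {
    /// Soft: exceeding `limit` adds `weight * excess` to the training loss.
    Soft { limit: f32, weight: f32 },
    /// Hard: exceeding `limit` rejects the update entirely.
    Hard { limit: f32 },
}

/// Returns the adjusted loss, or `None` if a hard constraint was violated
/// and the update should be skipped (e.g. fall back to the last good state).
fn apply_constraints(loss: f32, metrics: &[(f32, Constraint)]) -> Option<f32> {
    let mut adjusted = loss;
    for (value, constraint) in metrics {
        match constraint {
            Constraint::Soft { limit, weight } if value > limit => {
                adjusted += weight * (value - limit);
            }
            Constraint::Hard { limit } if value > limit => return None,
            _ => {}
        }
    }
    Some(adjusted)
}

fn main() {
    // 12ms latency against a soft 10ms limit; 3.2 loss against a hard 100.0 ceiling.
    let metrics = [
        (12.0, Constraint::Soft { limit: 10.0, weight: 0.5 }),
        (3.2, Constraint::Hard { limit: 100.0 }),
    ];
    match apply_constraints(3.2, &metrics) {
        Some(loss) => println!("train with penalized loss {loss}"),
        None => println!("hard constraint violated: skip update"),
    }
}
```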
### Fallback Strategies
| Strategy | Description |
|---|---|
| Default | Return predefined safe output |
| LastGood | Return last successful output |
| Function | Execute custom fallback logic |
| Checkpoint | Restore from saved state |
| SkipUpdate | Continue without learning |
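As a rough mental model, the strategies above can be pictured as variants of an enum that the safe learner consults when a computation fails or fails validation. This is plain Rust with illustrative names, not the library's SafeFallback type:

```rust
// Illustrative only: the fallback strategies as an enum consulted on failure.
enum Fallback<T> {
    Default(T),          // return a predefined safe output
    LastGood,            // return the last successful output
    Function(fn() -> T), // execute custom fallback logic
    Checkpoint(String),  // restore learner state from this path, then retry
    SkipUpdate,          // keep serving, just skip the learning update
}

/// What to return when the primary computation fails. `None` means the
/// caller must restore state (or give up) before producing an output.
fn resolve<T: Clone>(strategy: &Fallback<T>, last_good: Option<&T>) -> Option<T> {
    match strategy {
        Fallback::Default(value) => Some(value.clone()),
        Fallback::LastGood | Fallback::SkipUpdate => last_good.cloned(),
        Fallback::Function(f) => Some(f()),
        Fallback::Checkpoint(_path) => None, // restore from checkpoint elsewhere, then retry
    }
}

fn main() {
    let strategy = Fallback::Default("unknown".to_string());
    let last_good = Some("previous analysis".to_string());
    println!("{:?}", resolve(&strategy, last_good.as_ref())); // Some("unknown")
}
```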
## Federated Learning
Coordinate learning across multiple specialists in a hive:
```simplex
use simplex_learning::distributed::{FederatedLearner, AggregationStrategy, NodeUpdate};

let federated = FederatedLearner::new(config, initial_params);

// Specialists submit updates
federated.submit_update(NodeUpdate {
    node_id: "specialist_1",
    params: local_params,
    sample_count: 100,
    validation_acc: 0.85,
});

// Aggregation happens automatically once min_nodes is reached
let global_params = federated.global_params();
```
### Aggregation Strategies
| Strategy | Description | Use Case |
|---|---|---|
| FedAvg | Simple averaging | Homogeneous data |
| WeightedAvg | Weighted by sample count | Varying dataset sizes |
| PerformanceWeighted | Weighted by validation accuracy | Quality-focused |
| Median | Byzantine-resilient median | Adversarial settings |
| TrimmedMean | Trimmed mean (top/bottom 10%) | Outlier robustness |
| AttentionWeighted | Similarity-weighted | Heterogeneous specialists |
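To make the weighting concrete, here is a self-contained sketch in plain Rust of sample-count-weighted averaging, the idea behind WeightedAvg (as the table frames it, FedAvg is the same with uniform weights). The Update struct is illustrative, not the library's NodeUpdate:

```rust
// Illustrative only: sample-count-weighted parameter averaging.
// Each update carries local parameters and the number of samples used.
struct Update {
    params: Vec<f32>,
    sample_count: usize,
}

fn weighted_average(updates: &[Update]) -> Vec<f32> {
    // Assumes at least one update and equal parameter dimensions.
    let dim = updates[0].params.len();
    let total: f32 = updates.iter().map(|u| u.sample_count as f32).sum();
    let mut global = vec![0.0f32; dim];
    for u in updates {
        let w = u.sample_count as f32 / total;
        for (g, p) in global.iter_mut().zip(&u.params) {
            *g += w * p;
        }
    }
    global
}

fn main() {
    let updates = vec![
        Update { params: vec![1.0, 2.0], sample_count: 100 },
        Update { params: vec![3.0, 4.0], sample_count: 300 },
    ];
    // The 300-sample node contributes 3x the weight of the 100-sample node.
    println!("{:?}", weighted_average(&updates)); // [2.5, 3.5]
}
```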
## Integrating with Specialists
Add learning capabilities to your cognitive specialists:
```simplex
use simplex_learning::{OnlineLearner, StreamingAdam, SafeFallback, FeedbackSignal};

specialist SecurityAnalyzer {
    model: "simplex-cognitive-7b";
    learner: OnlineLearner;

    fn init() {
        self.learner = OnlineLearner::new(self.params())
            .optimizer(StreamingAdam::new(0.001))
            .fallback(SafeFallback::with_default(Analysis::unknown()));
    }

    fn analyze(code: String) -> Analysis {
        let result = infer("Analyze for security issues: {code}");
        result
    }

    // Learn from user feedback
    fn feedback(analysis: Analysis, correct: Bool) {
        let signal = FeedbackSignal::from_binary(correct);
        self.learner.learn(&signal);
    }
}
```
```simplex
// Usage
fn main() {
    let analyzer = spawn SecurityAnalyzer;

    // Analyze code
    let result = ask(analyzer, Analyze(user_code));

    // User provides feedback
    send(analyzer, Feedback(result, user_approved));

    // Specialist learns and improves!
}
```
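As a sketch of the feedback path (plain Rust with illustrative names, not the FeedbackSignal API): binary approval maps to a signed scalar signal, and a rolling approval rate is the kind of statistic a safety constraint might monitor before allowing further updates.

```rust
// Illustrative only: thumbs-up/down feedback as a scalar training signal,
// plus an exponential moving average of approvals.
struct FeedbackTracker {
    approval_rate: f32, // exponential moving average of approvals
    decay: f32,
}

impl FeedbackTracker {
    fn record(&mut self, approved: bool) -> f32 {
        let signal = if approved { 1.0 } else { -1.0 };
        let as_rate = if approved { 1.0 } else { 0.0 };
        self.approval_rate = self.decay * self.approval_rate + (1.0 - self.decay) * as_rate;
        signal
    }
}

fn main() {
    let mut tracker = FeedbackTracker { approval_rate: 1.0, decay: 0.9 };
    for &fb in &[true, true, false, true] {
        let signal = tracker.record(fb);
        println!("signal = {signal:+}, approval rate = {:.2}", tracker.approval_rate);
    }
}
```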
## Checkpointing
Save and restore learner state for fault tolerance:
```simplex
use simplex_learning::runtime::Checkpoint;

// Manual checkpoint
Checkpoint::save("model_v1.ckpt", &learner)?;

// Restore from checkpoint
let learner = Checkpoint::load("model_v1.ckpt")?;

// Automatic checkpointing
let learner = OnlineLearner::new(params)
    .checkpoint_every(1000)
    .checkpoint_path("checkpoints/");
```
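For a sense of what a checkpoint round trip involves, here is a minimal sketch using only the Rust standard library. It is not the library's Checkpoint format (which presumably also stores optimizer state and step counters); it just writes parameters as little-endian f32 bytes and reads them back.

```rust
// Illustrative only: a flat-file parameter checkpoint with a save/load round trip.
use std::fs;
use std::io;

fn save(path: &str, params: &[f32]) -> io::Result<()> {
    let bytes: Vec<u8> = params.iter().flat_map(|p| p.to_le_bytes()).collect();
    fs::write(path, bytes)
}

fn load(path: &str) -> io::Result<Vec<f32>> {
    let bytes = fs::read(path)?;
    Ok(bytes
        .chunks_exact(4)
        .map(|c| f32::from_le_bytes([c[0], c[1], c[2], c[3]]))
        .collect())
}

fn main() -> io::Result<()> {
    let params = vec![0.1f32, -0.2, 0.3];
    save("model_v1.ckpt", &params)?;
    assert_eq!(load("model_v1.ckpt")?, params);
    Ok(())
}
```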
## Best Practices
### When to Use Online Learning
- User preference adaptation
- Dynamic environment response
- Continuous improvement from feedback
- Personalization at scale
### When NOT to Use Online Learning
- Safety-critical decisions (use verified models)
- Stable, well-understood tasks
- Limited or noisy feedback signal
- Regulatory requirements for model versioning
### Safety Guidelines
- Always use SafeFallback for production
- Set reasonable constraint bounds
- Monitor gradient norms and loss values
- Use checkpointing for recovery
- Test fallback paths thoroughly
## Congratulations!
You've completed all 12 Simplex tutorials! You now understand the full power of Simplex for building adaptive, AI-native applications.