Æon Framework
Build Your Own Personal AI Assistant - The Neuro-Symbolic Runtime for Distributed Agents
A production-ready AI assistant that runs on your platform of choice (Telegram, Discord, Slack, HTTP, or custom), with safety controls, multi-provider LLM support, and extensible capabilities.
What is Æon Framework?
Æon v0.3.0-ULTRA is a comprehensive framework that solves the Extensibility Problem in agent systems. It separates cognitive reasoning from practical integration, enabling you to build sophisticated AI agents with minimal code.
Core Philosophy
Æon separates four independent stacks that work together:
- LLM-based reasoning with deterministic safety validation
- Multi-platform communication, modular capabilities, event routing
- Axiom-based control, safety validation before action
- Event-driven architecture for distributed coordination
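For orientation, the sketch below shows one way those stacks surface on a single Agent, reusing only APIs that appear in the examples later in this guide. Treat it as a minimal illustration (the token and server names are placeholders), not canonical wiring.

from aeon import Agent
from aeon.executive.axiom import Axiom
from aeon.integrations.telegram import TelegramProvider
from aeon.synapse.mcp import MCPClient

agent = Agent(name="Sketch", model_provider="ollama", model_name="mistral")

# Reasoning stack: LLM-backed inference, exposed through agent.ask(...) / agent.start()

# Safety stack: a deterministic axiom validated before any action
@Axiom(name="no_empty_output", on_violation="BLOCK")
def no_empty_output(text: str) -> bool:
    return bool(text.strip())

agent.executive.add_axiom(no_empty_output)

# Communication stack: platform providers registered as integrations
agent.integrations.register("telegram", TelegramProvider(token="YOUR_BOT_TOKEN"))

# Coordination/tooling stack: external tools wired in via MCP
agent.synapse.register("tools", MCPClient(servers=["time"]))

agent.start()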
Architecture: 16 Integrated Subsystems
Core Layer (4 Subsystems)
- Manages conversational reasoning, context, and LLM inference. Handles all cognitive tasks.
- Applies safety rules (axioms) before any action. Validates all outputs against constraints.
- A2A protocol enables agents to coordinate and delegate tasks.
- Integrates external tools, APIs, and Model Context Protocol (MCP) servers.
Integration Layer (5 Subsystems)
Advanced Layer (3 Subsystems)
ULTRA Layer v0.3.0 (5 Subsystems)
5-Minute Quick Start
pip install aeon-core
brew install ollama
ollama serve &
ollama pull mistral
from aeon import Agent
agent = Agent(
name="MyAssistant",
model_provider="ollama",
model_name="mistral",
base_url="http://localhost:11434"
)
if __name__ == "__main__":
    print("✅ Agent running on http://localhost:8000")
    agent.start()

Save this as main.py and run it:

python main.py
# In another terminal, test it:
curl -X POST http://localhost:8000/message \
-H "Content-Type: application/json" \
-d '{"text": "Hello, what is AI?"}'from aeon.integrations.telegram import TelegramProvider
# Add to your agent:
telegram = TelegramProvider(token="YOUR_BOT_TOKEN")
agent.integrations.register("telegram", telegram)
# Now your agent responds on Telegram! 🎉

Installation & Setup
Requirements
- Python 3.10+
- pip or poetry for dependency management
- An LLM provider (see next section)
# Install from PyPI
pip install aeon-core

# Or install from source for development
git clone https://github.com/richardsonlima/aeon-core.git
cd aeon-core
pip install -e .
Choose Your LLM Provider
Æon supports multiple LLM providers. Choose based on your needs:
| Provider | Best For | Cost | Setup | Privacy | Models |
|---|---|---|---|---|---|
| Ollama 🟢 | Local dev, privacy | Free | 5 min | ✅ On-device | Mistral, Llama2, Phi |
| OpenRouter 🟠 | Starting, variety | Pay-as-you-go | 3 min | ⚠️ Cloud | Claude, GPT, Gemini |
| OpenAI 🔵 | Production GPT | Pay-as-you-go | 3 min | ⚠️ Cloud | GPT-4o, GPT-4 |
| AWS Bedrock 🟣 | Enterprise | Pay-as-you-go | 10 min | ⚠️ AWS | Claude, Mistral |
🟢 Option A: Ollama (Free, Local)
Run LLMs locally on your Mac/Linux without any cloud service. Perfect for development, testing, and privacy-focused deployments.
Step 1: Install Ollama
# On Mac
brew install ollama

# On Linux
curl https://ollama.ai/install.sh | sh

# Then start the service
ollama serve
Step 2: Download a Model
In another terminal, download one of these models:
| Model | Size | Speed | Quality | Best For |
|---|---|---|---|---|
| mistral | 4.1GB | ⚡⚡⚡ | ⭐⭐⭐⭐ | Balanced (recommended) |
| neural-chat | 4.1GB | ⚡⚡⚡ | ⭐⭐⭐ | Chat-optimized |
| llama2 | 3.8GB | ⚡⚡⚡ | ⭐⭐⭐ | General purpose |
| phi | 1.6GB | ⚡⚡⚡⚡ | ⭐⭐⭐ | Lightweight |
# Download mistral (recommended)
ollama pull mistral

# Check available models
ollama list
Step 3: Use in Python
from aeon import Agent
agent = Agent(
name="MyAssistant",
model_provider="ollama",
model_name="mistral",
base_url="http://localhost:11434"
)
agent.start()  # Listen on http://localhost:8000

• No internet needed once downloaded
• Your data stays on your machine
• Perfect for development and testing
🔵 Option B: OpenAI (GPT-4o)
Direct access to cutting-edge GPT models with usage-based billing.
Step 1: Get API Key
- Go to OpenAI API Keys
- Create a new API key
- Copy it and set as environment variable:
export OPENAI_API_KEY="sk-..."
Step 2: Use in Python
from aeon import Agent
agent = Agent(
name="MyAssistant",
model_provider="openai",
model_name="gpt-4o",
api_key="sk-..." # or use $OPENAI_API_KEY env var
)
agent.start()

Cost varies by model and usage.
🟠 Option C: OpenRouter (Multi-Model)
Unified API with access to Claude, GPT, Gemini, Mistral, and 50+ other models through a single endpoint.
Step 1: Get API Key
- Go to OpenRouter
- Sign up and create an API key
- Set environment variable:
export OPENROUTER_API_KEY="sk-or-..."
Step 2: Use in Python
from aeon import Agent
# Example 1: Use Claude
agent = Agent(
name="MyAssistant",
model_provider="openrouter",
model_name="anthropic/claude-opus-4-6"
)
# Example 2: Use Gemini
agent = Agent(
name="MyAssistant",
model_provider="openrouter",
model_name="google/gemini-2.0-flash"
)
# Example 3: Dynamic model selection
import os
model = os.getenv("AI_MODEL", "anthropic/claude-opus-4-6")
agent = Agent(name="MyAssistant", model_provider="openrouter", model_name=model)
agent.start()

• Easy model switching
• Competitive pricing
• Try different models risk-free
🟣 Option D: AWS Bedrock (Enterprise)
Enterprise-grade LLMs through AWS infrastructure with automatic credential handling.
Step 1: Setup AWS Credentials
# Set AWS credentials
export AWS_ACCESS_KEY_ID="AKIA..."
export AWS_SECRET_ACCESS_KEY="..."
export AWS_REGION="us-east-1"

# Verify Bedrock models are enabled
aws bedrock list-foundation-models --region us-east-1
Step 2: Use in Python
from aeon import Agent
agent = Agent(
name="MyAssistant",
model_provider="bedrock",
model_name="anthropic.claude-opus-4-6"
# AWS credentials loaded automatically from environment
)
agent.start()

Required IAM permissions:
• bedrock:InvokeModel
• bedrock:InvokeModelWithResponseStream
• bedrock:ListFoundationModels
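If you prefer to verify access from Python instead of the AWS CLI, a minimal check with boto3 (a separate dependency, not part of Æon) might look like this:

import boto3

# Uses the same AWS_* environment variables as the Agent above
bedrock = boto3.client("bedrock", region_name="us-east-1")

# Fails fast if credentials or the bedrock:ListFoundationModels permission are missing
for summary in bedrock.list_foundation_models()["modelSummaries"][:5]:
    print(summary["modelId"])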
Examples: Simple Chat Assistants
Get started with the simplest possible agents - just chat with an AI!
Example 1: Chat with Ollama (Local)
#!/usr/bin/env python3
"""Simple chat with local Ollama"""
from aeon import Agent
import asyncio
async def main():
# Initialize agent with Ollama
agent = Agent(
name="LocalChat",
model_provider="ollama",
model_name="mistral",
base_url="http://localhost:11434"
)
print("π¬ Chat with your AI (type 'exit' to quit)")
print("=" * 50)
while True:
user_input = input("\nYou: ").strip()
if user_input.lower() == 'exit':
print("Goodbye! π")
break
if not user_input:
continue
try:
# Get response from agent
response = await agent.ask(user_input)
print(f"\nπ€ Assistant: {response}")
except Exception as e:
print(f"β Error: {e}")
if __name__ == "__main__":
asyncio.run(main())

Example 2: Chat with OpenAI
#!/usr/bin/env python3
"""Simple chat with OpenAI GPT-4o"""
from aeon import Agent
import asyncio
import os
async def main():
# Initialize agent with OpenAI
agent = Agent(
name="OpenAIChat",
model_provider="openai",
model_name="gpt-4o",
api_key=os.getenv("OPENAI_API_KEY")
)
print("π¬ Chat with GPT-4o (type 'exit' to quit)")
print("=" * 50)
while True:
user_input = input("\nYou: ").strip()
if user_input.lower() == 'exit':
print("Goodbye! π")
break
if not user_input:
continue
try:
response = await agent.ask(user_input)
print(f"\nπ€ GPT-4o: {response}")
except Exception as e:
print(f"β Error: {e}")
if __name__ == "__main__":
asyncio.run(main())

Example 3: Multi-Provider Chat (Switch Anytime)
#!/usr/bin/env python3
"""Chat with any provider - switch at runtime"""
from aeon import Agent
import asyncio
import os
async def main():
# Create agents for different providers
agents = {
"ollama": Agent(
name="OllamaChat",
model_provider="ollama",
model_name="mistral",
base_url="http://localhost:11434"
),
"openai": Agent(
name="OpenAIChat",
model_provider="openai",
model_name="gpt-4o",
api_key=os.getenv("OPENAI_API_KEY")
),
"openrouter": Agent(
name="OpenRouterChat",
model_provider="openrouter",
model_name="anthropic/claude-opus-4-6",
api_key=os.getenv("OPENROUTER_API_KEY")
)
}
current_provider = "ollama"
print("π€ Multi-Provider Chat")
print("=" * 50)
print("Commands:")
print(" /switch - Switch to different AI provider")
print(" /exit - Quit")
print()
while True:
user_input = input(f"\n[{current_provider}] You: ").strip()
if user_input.startswith("/switch"):
parts = user_input.split()
if len(parts) > 1:
provider = parts[1].lower()
if provider in agents:
current_provider = provider
print(f"β
Switched to {provider}")
else:
print(f"β Unknown provider. Available: {', '.join(agents.keys())}")
continue
if user_input.lower() in ['/exit', 'exit']:
break
if not user_input:
continue
try:
agent = agents[current_provider]
response = await agent.ask(user_input)
print(f"\nπ€ {current_provider.upper()}: {response}")
except Exception as e:
print(f"β Error with {current_provider}: {e}")
if __name__ == "__main__":
asyncio.run(main())

Examples: Multi-Platform Integrations
Connect your agent to Telegram, Discord, Slack, and more.
Add Telegram Support
#!/usr/bin/env python3
"""AI Assistant on Telegram"""
from aeon import Agent
from aeon.integrations.telegram import TelegramProvider
import os
def main():
# Initialize base agent
agent = Agent(
name="TelegramBot",
model_provider="ollama",
model_name="mistral"
)
# Add Telegram integration
telegram_token = os.getenv("TELEGRAM_BOT_TOKEN")
telegram = TelegramProvider(token=telegram_token)
agent.integrations.register("telegram", telegram)
print("β
Telegram bot started!")
print(f"Send messages to your Telegram bot")
# Start listening
agent.start()
if __name__ == "__main__":
main()

Multi-Platform Bot (Telegram + Discord + Slack)
#!/usr/bin/env python3
"""Bot that works on all platforms simultaneously"""
from aeon import Agent
from aeon.integrations.telegram import TelegramProvider
from aeon.integrations.discord import DiscordProvider
from aeon.integrations.slack import SlackProvider
import os
def main():
# Initialize agent
agent = Agent(
name="UbiquitousBot",
model_provider="openai",
model_name="gpt-4o",
api_key=os.getenv("OPENAI_API_KEY")
)
# Register all platforms
platforms = {
"telegram": TelegramProvider(token=os.getenv("TELEGRAM_BOT_TOKEN")),
"discord": DiscordProvider(token=os.getenv("DISCORD_BOT_TOKEN")),
"slack": SlackProvider(token=os.getenv("SLACK_BOT_TOKEN"))
}
for platform_name, provider in platforms.items():
agent.integrations.register(platform_name, provider)
print(f"β
{platform_name} connected")
print("\nπ Your bot is now available on:")
print(" β’ Telegram")
print(" β’ Discord")
print(" β’ Slack")
print("\nAll conversations use the same AI assistant!")
agent.start()
if __name__ == "__main__":
main()

• Unified dialogue context
• Same safety rules everywhere
• Easy to add more platforms
Examples: Real-World Applications
Personal Journal Assistant
#!/usr/bin/env python3
"""Personal AI Journal - Reflect on your day with AI"""
from aeon import Agent
from aeon.extensions.capability import Capability
from datetime import datetime
import json
import os
class JournalCapability(Capability):
"""Save and reflect on journal entries"""
def __init__(self, journal_file="journal.json"):
self.journal_file = journal_file
self.entries = self._load_entries()
def _load_entries(self):
if os.path.exists(self.journal_file):
with open(self.journal_file) as f:
return json.load(f)
return []
def _save_entries(self):
with open(self.journal_file, 'w') as f:
json.dump(self.entries, f, indent=2)
async def save_entry(self, text: str):
"""Save a new journal entry"""
entry = {
"date": datetime.now().isoformat(),
"text": text
}
self.entries.append(entry)
self._save_entries()
return f"β
Entry saved ({len(self.entries)} total)"
async def list_entries(self):
"""List all entries with dates"""
if not self.entries:
return "No entries yet"
result = "π Your Journal:\n"
for i, entry in enumerate(self.entries[-5:], 1):
date = entry["date"][:10]
preview = entry["text"][:50]
result += f"{i}. [{date}] {preview}...\n"
return result
async def reflect(self, entry_index: int):
"""Get AI reflection on an entry"""
if entry_index < 0 or entry_index >= len(self.entries):
return "β Invalid entry"
entry = self.entries[entry_index]
return f"π€ Reflection on {entry['date']}: {entry['text']}"
async def main():
# Create agent
agent = Agent(
name="JournalBot",
model_provider="ollama",
model_name="mistral"
)
# Add journal capability
journal = JournalCapability()
agent.extensions.register(journal)
print("π Personal Journal AI")
print("=" * 50)
print("Commands:")
print(" journal save - Save an entry")
print(" journal list - Show recent entries")
print(" journal reflect - Get AI reflection")
print(" exit - Quit")
while True:
user_input = input("\n> ").strip()
if user_input.lower() == "exit":
break
if user_input.startswith("journal save"):
text = user_input[12:].strip()
result = await journal.save_entry(text)
print(result)
elif user_input == "journal list":
result = await journal.list_entries()
print(result)
elif user_input.startswith("journal reflect"):
try:
idx = int(user_input.split()[-1])
result = await journal.reflect(idx - 1)
print(result)
except (ValueError, IndexError):
print("❌ Usage: journal reflect <number>")
if __name__ == "__main__":
import asyncio
asyncio.run(main())

Examples: Using MCP Servers
Extend your agent with capabilities using Model Context Protocol (MCP) servers.
Available MCP Servers from MCP Hub
- brave-search: Web search with Brave Search API
- google-maps: Location queries, directions, nearby places
- youtube: Extract transcripts, get video info
- puppeteer: Web scraping and automation
- sqlite: Database queries
- time: Timezone and time utilities
- sequential-thinking: Extended reasoning capability
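Several servers can be exposed through one client; the sketch below follows the MCPClient pattern used in the examples that follow, with server names taken from the list above:

from aeon import Agent
from aeon.synapse.mcp import MCPClient

agent = Agent(name="Toolbox", model_provider="ollama", model_name="mistral")

# One MCP client exposing several servers to the agent at once
mcp = MCPClient(servers=["brave-search", "time", "sqlite"])
agent.synapse.register("tools", mcp)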
Example: Web Search Integration
#!/usr/bin/env python3
"""AI with web search capability"""
from aeon import Agent
from aeon.synapse.mcp import MCPClient
import os
async def main():
# Create agent
agent = Agent(
name="SearchBot",
model_provider="openrouter",
model_name="anthropic/claude-opus-4-6",
api_key=os.getenv("OPENROUTER_API_KEY")
)
# Add web search via MCP
mcp = MCPClient(servers=["brave-search"])
agent.synapse.register("tools", mcp)
# Now you can search the web!
response = await agent.ask(
"What are the latest AI developments in 2024?"
)
print(f"π€ {response}")
if __name__ == "__main__":
import asyncio
asyncio.run(main())

Example: YouTube Transcript Extraction
#!/usr/bin/env python3
"""Extract and summarize YouTube videos"""
from aeon import Agent
from aeon.synapse.mcp import MCPClient
import os
class YouTubeExtractor:
def __init__(self, agent):
self.agent = agent
self.mcp = MCPClient(servers=["youtube"])
self.agent.synapse.register("youtube", self.mcp)
async def get_transcript(self, video_url: str):
"""Get transcript from YouTube video"""
prompt = f"Extract transcript from: {video_url}"
return await self.agent.ask(prompt)
async def summarize_video(self, video_url: str):
"""Get AI summary of video"""
prompt = f"Summarize this video: {video_url}"
return await self.agent.ask(prompt)
async def main():
agent = Agent(
name="YouTubeSummary",
model_provider="openai",
model_name="gpt-4o",
api_key=os.getenv("OPENAI_API_KEY")
)
extractor = YouTubeExtractor(agent)
# Example
url = "https://www.youtube.com/watch?v=..."
summary = await extractor.summarize_video(url)
print(f"πΉ Summary:\n{summary}")
if __name__ == "__main__":
import asyncio
asyncio.run(main())

Examples: Safety Axioms
Add safety rules to your agent using axioms - deterministic safety rules that block or limit certain actions.
Basic Safety Example
#!/usr/bin/env python3
"""AI Agent with Safety Rules"""
from aeon import Agent
from aeon.executive.axiom import Axiom
import re
class SafeAssistant:
def __init__(self):
self.agent = Agent(
name="SafeBot",
model_provider="ollama",
model_name="mistral"
)
self.setup_axioms()
def setup_axioms(self):
"""Define safety rules"""
# Axiom 1: Block harmful content
@Axiom(name="no_harmful_content", on_violation="BLOCK")
def block_harmful(text: str) -> bool:
harmful_keywords = [
"bomb", "violence", "hack", "illegal",
"personal data", "credit card", "password"
]
return not any(kw in text.lower() for kw in harmful_keywords)
# Axiom 2: Rate limiting
@Axiom(name="rate_limit", on_violation="LIMIT")
def rate_limit(request_count: int) -> bool:
return request_count < 100 # Max 100 requests/hour
# Axiom 3: Response length
@Axiom(name="response_length", on_violation="TRUNCATE")
def limit_response_length(response: str) -> bool:
return len(response) < 5000 # Max 5000 chars
# Register axioms
self.agent.executive.add_axiom(block_harmful)
self.agent.executive.add_axiom(rate_limit)
self.agent.executive.add_axiom(limit_response_length)
async def process_request(self, text: str):
"""Process request with safety validation"""
# Axioms are checked automatically
response = await self.agent.ask(text)
return response
async def main():
assistant = SafeAssistant()
# Safe request ✅
response = await assistant.process_request("What is AI?")
print(f"✅ Safe response: {response}")
# Potentially harmful request ❌
response = await assistant.process_request("How to make a bomb?")
print(f"❌ Blocked by axiom: {response}")
if __name__ == "__main__":
import asyncio
asyncio.run(main())

Custom Axioms
#!/usr/bin/env python3
"""Custom safety axioms for specific use cases"""
from aeon import Agent
from aeon.executive.axiom import Axiom
from datetime import datetime
class CustomSafeAssistant:
def __init__(self):
self.agent = Agent(
name="CustomSafeBot",
model_provider="openai",
model_name="gpt-4o"
)
self.setup_custom_axioms()
def setup_custom_axioms(self):
"""Define domain-specific safety rules"""
# Medical advice blocker for healthcare app
@Axiom(name="no_medical_advice", on_violation="BLOCK")
def no_medical_advice(text: str) -> bool:
medical_keywords = ["diagnose", "prescribe", "treatment", "cure"]
if any(kw in text.lower() for kw in medical_keywords):
return False # Block
return True
# Financial advice blocker
@Axiom(name="no_financial_advice", on_violation="BLOCK")
def no_financial_advice(text: str) -> bool:
if "invest" in text.lower() or "buy" in text.lower():
return False # Block
return True
# Business hours check
@Axiom(name="business_hours_only", on_violation="LIMIT")
def business_hours_only(request_timestamp: float) -> bool:
hour = datetime.fromtimestamp(request_timestamp).hour
return 9 <= hour < 17 # Only 9 AM - 5 PM
self.agent.executive.add_axiom(no_medical_advice)
self.agent.executive.add_axiom(no_financial_advice)
self.agent.executive.add_axiom(business_hours_only)
if __name__ == "__main__":
assistant = CustomSafeAssistant()
print("β
Custom axioms configured")Deployment & Production
Local Deployment
Start your agent in production mode on your machine:
#!/usr/bin/env python3
"""Production-ready agent"""
from aeon import Agent
import logging
# Setup logging
logging.basicConfig(
level=logging.INFO,
format='%(asctime)s - %(name)s - %(levelname)s - %(message)s'
)
def main():
agent = Agent(
name="ProductionBot",
model_provider="openai",
model_name="gpt-4o",
# Production settings
timeout=30,
max_retries=3,
enable_metrics=True,
enable_observability=True
)
print("π Starting production agent...")
agent.start(host="0.0.0.0", port=8000)
if __name__ == "__main__":
main()

Docker Deployment
FROM python:3.11-slim

WORKDIR /app

# Install dependencies
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy app
COPY . .

# Expose port
EXPOSE 8000

# Run agent
CMD ["python", "main.py"]
# Build image
docker build -t my-agent .

# Run container
docker run \
  -e OPENAI_API_KEY="sk-..." \
  -e TELEGRAM_BOT_TOKEN="..." \
  -p 8000:8000 \
  my-agent

# Now accessible at http://localhost:8000
Troubleshooting
Common Issues & Solutions
# For Ollama
ollama list

# For OpenAI
curl https://api.openai.com/v1/models \
  -H "Authorization: Bearer $OPENAI_API_KEY" | grep "gpt-4o"
# Verify environment variable is set
echo $OPENAI_API_KEY # Should start with sk-
# Or pass directly in code
agent = Agent(
name="Test",
model_provider="openai",
api_key="sk-your-actual-key" # Don't commit this!
)

# For Ollama
ollama serve

# Test connectivity
curl http://localhost:11434/api/tags
# Enable debug logging
import logging
logging.basicConfig(level=logging.DEBUG)

# Verify integration is active
print(agent.integrations.status())  # Should show active providers
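If curl is not available, the same Ollama connectivity check can be done from Python. This is a small sketch using the requests package (a separate dependency, not part of Æon):

import requests

try:
    # Same endpoint the curl test above hits
    resp = requests.get("http://localhost:11434/api/tags", timeout=5)
    resp.raise_for_status()
    models = [m["name"] for m in resp.json().get("models", [])]
    print(f"Ollama is up. Models available: {models}")
except requests.RequestException as exc:
    print(f"Ollama is not reachable: {exc}")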
Debugging & Monitoring
Enable Verbose Logging
#!/usr/bin/env python3
"""Agent with detailed logging"""
from aeon import Agent
import logging
# Setup detailed logging
logging.basicConfig(
level=logging.DEBUG,
format='%(asctime)s - %(name)s - %(levelname)s - %(message)s',
handlers=[
logging.FileHandler('agent.log'),
logging.StreamHandler()
]
)
agent = Agent(
name="DebugBot",
model_provider="ollama",
model_name="mistral",
debug=True # Enable debug mode
)
print("πΎ Logs saved to agent.log")
agent.start()Monitor Agent Metrics
#!/usr/bin/env python3
"""Track agent metrics and performance"""
from aeon import Agent
from aeon.observability import LifecycleHook
import time
class MetricsTracker:
def __init__(self, agent):
self.agent = agent
self.request_count = 0
self.total_time = 0
self.setup_hooks()
def setup_hooks(self):
@self.agent.observe(LifecycleHook.MESSAGE_RECEIVED)
async def on_message(context):
self.request_count += 1
print(f"π¨ Message #{self.request_count}: {context.message[:50]}")
@self.agent.observe(LifecycleHook.RESPONSE_GENERATED)
async def on_response(context):
elapsed = context.execution_time
self.total_time += elapsed
avg_time = self.total_time / self.request_count
print(f"β±οΈ Response time: {elapsed:.2f}s (avg: {avg_time:.2f}s)")
def get_stats(self):
return {
"requests": self.request_count,
"total_time": self.total_time,
"avg_time": self.total_time / self.request_count if self.request_count > 0 else 0
}
async def main():
agent = Agent(
name="MonitoredBot",
model_provider="ollama",
model_name="mistral"
)
tracker = MetricsTracker(agent)
# Process some requests
await agent.ask("Hello")
await agent.ask("How are you?")
await agent.ask("What is AI?")
# Show stats
stats = tracker.get_stats()
print(f"\nπ Stats: {stats}")
if __name__ == "__main__":
import asyncio
asyncio.run(main())

Performance Optimization
1. Use Caching
#!/usr/bin/env python3
"""Agent with response caching"""
from aeon import Agent
from aeon.cache import LRUCache
agent = Agent(
name="CachedBot",
model_provider="openai",
model_name="gpt-4o"
)
# Configure caching
agent.cache.configure(
strategy="lru",
max_size=1000,
ttl=3600 # 1 hour
)
# Now frequently asked questions are cached!
# (run these awaits inside an async function, e.g. with asyncio.run)
response1 = await agent.ask("What is Æon?") # Calls LLM
response2 = await agent.ask("What is Æon?") # Returns from cache ⚡

2. Model Selection
Choose faster models for real-time requirements:
| Model | Speed | Quality | Cost | Best For |
|---|---|---|---|---|
| GPT-3.5 | ⚡⚡⚡ | ⭐⭐ | 💰 | Fast responses, simple tasks |
| Mistral | ⚡⚡⚡ | ⭐⭐⭐ | Free (local) | Balanced, local |
| GPT-4o | ⚡⚡ | ⭐⭐⭐⭐⭐ | 💰💰💰 | Complex reasoning, premium |
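One practical pattern is to pick the model per deployment instead of hard-coding it. The sketch below drives the choice through an environment variable; FAST_MODE and the specific model names are illustrative choices, not Æon defaults:

import os
from aeon import Agent

# Cheap/fast model for latency-sensitive deployments, stronger model otherwise
fast_mode = os.getenv("FAST_MODE", "0") == "1"

agent = Agent(
    name="LatencyAwareBot",
    model_provider="openai",
    model_name="gpt-3.5-turbo" if fast_mode else "gpt-4o"
)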
3. Batch Processing
#!/usr/bin/env python3
"""Batch process multiple requests efficiently"""
from aeon import Agent
import asyncio
async def main():
agent = Agent(
name="BatchBot",
model_provider="openai",
model_name="gpt-4o"
)
# Batch requests
requests = [
"What is AI?",
"Explain machine learning",
"Describe neural networks",
"What is deep learning?"
]
# Process in parallel
tasks = [agent.ask(req) for req in requests]
responses = await asyncio.gather(*tasks)
# Much faster than sequential!
for req, resp in zip(requests, responses):
print(f"Q: {req}")
print(f"A: {resp[:100]}...\n")
if __name__ == "__main__":
asyncio.run(main())

FAQ & Tips
# Switch from Ollama to OpenAI
agent = Agent(
name="MyBot",
model_provider="openai", # Changed
model_name="gpt-4o" # Changed
)

Cheap: OpenRouter, OpenAI API (~$0.01-0.10 per conversation)
Variable: AWS Bedrock (depends on usage)
# All at once
agent.integrations.register("telegram", telegram_provider)
agent.integrations.register("discord", discord_provider)
agent.integrations.register("slack", slack_provider)
# Now active on all three!

1. Switch to a better model (GPT-4o, Claude)
2. Add more context/tools via MCP
3. Fine-tune with system prompts (see the sketch after this list)
4. Add safety axioms for controlled behavior
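For item 3, the exact keyword for a system prompt depends on your Æon version; the sketch below assumes a hypothetical system_prompt argument, so check the Agent API for the real parameter name:

from aeon import Agent

# system_prompt is assumed here for illustration; the real parameter name may differ
agent = Agent(
    name="SupportBot",
    model_provider="openai",
    model_name="gpt-4o",
    system_prompt=(
        "You are a concise support assistant. "
        "Answer in plain language and say when you are unsure."
    )
)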
# Create dialogue context (DialogueContext and ActorRole are Æon classes; the exact import path may vary by version)
context = DialogueContext(
context_id="conv_123",
participant_id="user_456"
)
# Add turns
context.add_turn(ActorRole.USER, "Hi")
context.add_turn(ActorRole.ASSISTANT, "Hello!")
# Save
agent.dialogue.store(context)
# Later: retrieve
context = agent.dialogue.retrieve("conv_123")

Pro Tips 💡
- Start local: Use Ollama first for free testing
- Monitor costs: Track token usage with Economics layer
- Cache aggressively: Reduce API calls and costs
- Add safety early: Axioms protect against issues
- Test on multiple platforms: Behavior differs by provider
- Use async/await: Handle multiple conversations concurrently
- Read logs: Debug issues with detailed logging
Glossary
Resources & Links
Quick Links
Pick an LLM provider, follow the quick start, and start building.