Æon Framework

Build Your Own Personal AI Assistant - The Neuro-Symbolic Runtime for Distributed Agents

✨ What You'll Build

A production-ready AI assistant that runs on your platform of choice (Telegram, Discord, Slack, HTTP, or custom), with safety controls, multi-provider LLM support, and extensible capabilities.

🚀 Multi-Platform
Telegram, Slack, Discord, WhatsApp, Email, HTTP, and custom providers with unified abstraction.
🛡️ Safety-First
Axiom-based control with deterministic safety validation before any action execution.
🧠 LLM Flexibility
OpenAI, OpenRouter, Amazon Bedrock, Ollama, and more - choose your provider.

What is Æon Framework?

Æon v0.3.0-ULTRA is a comprehensive framework that solves the Extensibility Problem in agent systems. It separates cognitive reasoning from practical integration, enabling you to build sophisticated AI agents with minimal code.

Core Philosophy

Æon separates four independent stacks that work together:

1. Cognitive Stack
LLM-based reasoning with deterministic safety validation
2. Integration Stack
Multi-platform communication, modular capabilities, event routing
3. Safety Stack
Axiom-based control, safety validation before action
4. Scalability Stack
Event-driven architecture for distributed coordination
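
In code, these stacks appear as subsystem accessors on the Agent object. A minimal sketch wiring the safety stack, using only accessors that appear later in this guide (agent.executive, agent.integrations, agent.synapse):

python
from aeon import Agent
from aeon.executive.axiom import Axiom

agent = Agent(name="StackDemo", model_provider="ollama", model_name="mistral")

# Safety stack: a deterministic rule the Executive checks before any action
@Axiom(name="short_inputs_only", on_violation="BLOCK")
def short_inputs_only(text: str) -> bool:
    return len(text) < 2000

agent.executive.add_axiom(short_inputs_only)

# Integration stack: platform providers plug in the same way, e.g.
# agent.integrations.register("telegram", TelegramProvider(token="..."))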

Architecture: 17 Integrated Subsystems

📊 System Overview
Æon organizes 17 subsystems across 4 layers, each handling a specific responsibility in your AI assistant.

Core Layer (4 Subsystems)

🧠 Cortex
System 1: Intuitive Reasoning via LLM
Manages conversational reasoning, context, and LLM inference. Handles all cognitive tasks.
⚖️ Executive
System 2: Deterministic Safety & Axioms
Applies safety rules (axioms) before any action. Validates all outputs against constraints.
🐝 Hive
Agent-to-Agent Communication
A2A protocol enables agents to coordinate and delegate tasks.
🔌 Synapse
Tool Integration & MCP Support
Integrates external tools, APIs, and Model Context Protocol (MCP) servers.

Integration Layer (5 Subsystems)

🌐 Integrations
Multi-platform provider abstraction layer
🧩 Extensions
Pluggable capabilities and features
💬 Dialogue
Conversation context management
📡 Dispatcher
Event-driven pub/sub hub
⏰ Automation
Temporal task scheduling and cron patterns (see the sketch after this list)
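
The Automation API itself isn't shown in this guide, so the following is only a hypothetical sketch of cron-style scheduling; agent.automation.schedule and its parameter names are assumptions:

python
from aeon import Agent

agent = Agent(name="Scheduler", model_provider="ollama", model_name="mistral")

async def send_daily_summary():
    reply = await agent.ask("Summarize yesterday's conversations")
    print(reply)

# Hypothetical API: `agent.automation.schedule` and its parameter names
# are assumptions, not confirmed Æon API; only cron patterns are documented.
agent.automation.schedule(
    name="daily_summary",
    cron="0 9 * * *",  # every day at 09:00
    task=send_daily_summary,
)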

Advanced Layer (3 Subsystems)

📊 Observability
Lifecycle hooks & token tracking
💰 Economics
Cost tracking & dynamic pricing
⌨️ CLI
Command interface & history

ULTRA Layer v0.3.0 (5 Subsystems)

🎯 Routing
Intelligent message routing with 5 strategies (Priority, LoadBalanced, WeightedRandom, RoundRobin, ContextAware) and 6 filter types (see the sketch after this list).
🚪 Gateway
Central hub with session management, 6-state lifecycle, and transport abstraction.
🔐 Security
Multi-provider auth, token lifecycle, policy-based access control, encryption.
❤️ Health
Component-level health checks, 4 metric types, system diagnostics, real-time monitoring.
💾 Cache
Multiple strategies (Simple, LRU, Distributed), TTL support, function caching, distributed replication.
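
Routing's API isn't documented here either; as a hypothetical sketch of choosing a strategy (agent.routing.configure and its parameters are assumptions, only the strategy names come from the list above):

python
from aeon import Agent

agent = Agent(name="RoutedBot", model_provider="ollama", model_name="mistral")

# Hypothetical API: `agent.routing.configure` and its parameters are assumptions.
# The strategy names are the five documented above.
agent.routing.configure(
    strategy="ContextAware",  # or Priority, LoadBalanced, WeightedRandom, RoundRobin
    filters=[{"type": "platform", "allow": ["telegram", "http"]}],
)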

5-Minute Quick Start

⚡ Get your first AI assistant running in 5 minutes
No cloud setup needed - we'll start with free local Ollama
Step 1: Install Æon Framework
bash
pip install aeon-core
Step 2: Install & Start Ollama (Free, Local LLM)
bash
brew install ollama
ollama serve &
ollama pull mistral
Step 3: Create Your First Agent (main.py)
python
from aeon import Agent

agent = Agent(
    name="MyAssistant",
    model_provider="ollama",
    model_name="mistral",
    base_url="http://localhost:11434"
)

if __name__ == "__main__":
    print("✅ Agent running on http://localhost:8000")
    agent.start()
Step 4: Run Your Agent
bash
python main.py

# In another terminal, test it:
curl -X POST http://localhost:8000/message \
  -H "Content-Type: application/json" \
  -d '{"text": "Hello, what is AI?"}'
Step 5: Add Telegram Integration (Optional)
python
from aeon.integrations.telegram import TelegramProvider

# Add to your agent:
telegram = TelegramProvider(token="YOUR_BOT_TOKEN")
agent.integrations.register("telegram", telegram)

# Now your agent responds on Telegram! 🎉

Installation & Setup

Requirements

bash
# Install from PyPI
pip install aeon-core

# Or install from source for development
git clone https://github.com/richardsonlima/aeon-core.git
cd aeon-core
pip install -e .

Choose Your LLM Provider

Æon supports multiple LLM providers. Choose based on your needs:

| Provider | Best For | Cost | Setup | Privacy | Models |
|----------|----------|------|-------|---------|--------|
| Ollama 🟢 | Local dev, privacy | Free | 5 min | ✅ On-device | Mistral, Llama2, Phi |
| OpenRouter 🟠 | Starting, variety | Pay-as-you-go | 3 min | ⚠️ Cloud | Claude, GPT, Gemini |
| OpenAI 🔵 | Production GPT | Pay-as-you-go | 3 min | ⚠️ Cloud | GPT-4o, GPT-4 |
| AWS Bedrock 🟣 | Enterprise | Pay-as-you-go | 10 min | ⚠️ AWS | Claude, Mistral |
💡 Recommendation for Beginners
Start with Ollama (free, local) for development and testing. Switch to OpenAI or OpenRouter for production. This saves money and keeps your data private. A sketch of selecting the provider from the environment follows.
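
One way to follow that advice without code changes is to select the provider from environment variables. A minimal sketch using only the Agent parameters shown in this guide (the AEON_PROVIDER and AEON_MODEL names are arbitrary examples):

python
import os

from aeon import Agent

# Variable names are arbitrary examples; default to free local Ollama for dev.
provider = os.getenv("AEON_PROVIDER", "ollama")
model = os.getenv("AEON_MODEL", "mistral")

agent = Agent(
    name="MyAssistant",
    model_provider=provider,
    model_name=model,
    # add api_key / base_url as the chosen provider requires
)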

🟢 Option A: Ollama (Free, Local)

Run LLMs locally on your Mac/Linux without any cloud service. Perfect for development, testing, and privacy-focused deployments.

Step 1: Install Ollama

bash
# On Mac
brew install ollama

# On Linux
curl https://ollama.ai/install.sh | sh

# Then start the service
ollama serve

Step 2: Download a Model

In another terminal, download one of these models:

| Model | Size | Speed | Quality | Best For |
|-------|------|-------|---------|----------|
| mistral | 4.1GB | ⚡⚡⚡ | ⭐⭐⭐⭐ | Balanced (recommended) |
| neural-chat | 4.1GB | ⚡⚡⚡ | ⭐⭐⭐ | Chat-optimized |
| llama2 | 3.8GB | ⚡⚡⚡ | ⭐⭐⭐ | General purpose |
| phi | 1.6GB | ⚡⚡⚡⚡ | ⭐⭐⭐ | Lightweight |
bash
# Download mistral (recommended)
ollama pull mistral

# Check available models
ollama list

Step 3: Use in Python

python
from aeon import Agent

agent = Agent(
    name="MyAssistant",
    model_provider="ollama",
    model_name="mistral",
    base_url="http://localhost:11434"
)

agent.start()  # Listen on http://localhost:8000
✨ Benefits of Ollama
• No cost - runs completely free
• No internet needed once downloaded
• Your data stays on your machine
• Perfect for development and testing

🔵 Option B: OpenAI (GPT-4o)

Direct access to cutting-edge GPT models with usage-based billing.

Step 1: Get API Key

  1. Go to OpenAI API Keys
  2. Create a new API key
  3. Copy it and set as environment variable:
bash
export OPENAI_API_KEY="sk-..."

Step 2: Use in Python

python
from aeon import Agent

agent = Agent(
    name="MyAssistant",
    model_provider="openai",
    model_name="gpt-4o",
    api_key="sk-..."  # or use $OPENAI_API_KEY env var
)

agent.start()
💰 Pricing
GPT-4o: ~$0.005 per 1K input tokens, ~$0.015 per 1K output tokens.
Cost varies by model and usage.
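
At those example rates, a quick back-of-the-envelope estimate (rates change over time, so treat the numbers as illustrative):

python
# Illustrative GPT-4o rates from above, USD per 1K tokens; check current pricing.
INPUT_RATE = 0.005
OUTPUT_RATE = 0.015

def exchange_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimated USD cost of one request/response pair."""
    return (input_tokens / 1000) * INPUT_RATE + (output_tokens / 1000) * OUTPUT_RATE

# A 500-token prompt with a 300-token reply:
print(f"${exchange_cost(500, 300):.4f}")  # $0.0070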

🟠 Option C: OpenRouter (Multi-Model)

Unified API with access to Claude, GPT, Gemini, Mistral, and 50+ other models through a single endpoint.

Step 1: Get API Key

  1. Go to OpenRouter
  2. Sign up and create an API key
  3. Set environment variable:
bash
export OPENROUTER_API_KEY="sk-or-..."

Step 2: Use in Python

python
from aeon import Agent

# Example 1: Use Claude
agent = Agent(
    name="MyAssistant",
    model_provider="openrouter",
    model_name="anthropic/claude-opus-4-6"
)

# Example 2: Use Gemini
agent = Agent(
    name="MyAssistant",
    model_provider="openrouter",
    model_name="google/gemini-2.0-flash"
)

# Example 3: Dynamic model selection
import os
model = os.getenv("AI_MODEL", "anthropic/claude-opus-4-6")
agent = Agent(name="MyAssistant", model_provider="openrouter", model_name=model)

agent.start()
🌟 Why OpenRouter?
• Access to 50+ LLM models
• Easy model switching
• Competitive pricing
• Try different models risk-free

🟣 Option D: AWS Bedrock (Enterprise)

Enterprise-grade LLMs through AWS infrastructure with automatic credential handling.

Step 1: Setup AWS Credentials

bash
# Set AWS credentials
export AWS_ACCESS_KEY_ID="AKIA..."
export AWS_SECRET_ACCESS_KEY="..."
export AWS_REGION="us-east-1"

# Verify Bedrock models are enabled
aws bedrock list-foundation-models --region us-east-1

Step 2: Use in Python

python
from aeon import Agent

agent = Agent(
    name="MyAssistant",
    model_provider="bedrock",
    model_name="anthropic.claude-opus-4-6"
    # AWS credentials loaded automatically from environment
)

agent.start()
🔑 AWS IAM Permissions Required
Your AWS user/role needs these permissions:
• bedrock:InvokeModel
• bedrock:InvokeModelWithResponseStream
• bedrock:ListFoundationModels
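
To verify the permissions before starting the agent, a small probe with boto3 (assumes credentials are exported as above):

python
import boto3

# Uses the credentials exported above; raises an AccessDenied error if
# bedrock:ListFoundationModels has not been granted.
bedrock = boto3.client("bedrock", region_name="us-east-1")
models = bedrock.list_foundation_models()["modelSummaries"]
print(f"✅ Bedrock reachable - {len(models)} foundation models visible")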

Examples: Simple Chat Assistants

Get started with the simplest possible agents - just chat with an AI!

Example 1: Chat with Ollama (Local)

python
#!/usr/bin/env python3
"""Simple chat with local Ollama"""

from aeon import Agent
import asyncio

async def main():
    # Initialize agent with Ollama
    agent = Agent(
        name="LocalChat",
        model_provider="ollama",
        model_name="mistral",
        base_url="http://localhost:11434"
    )
    
    print("πŸ’¬ Chat with your AI (type 'exit' to quit)")
    print("=" * 50)
    
    while True:
        user_input = input("\nYou: ").strip()
        
        if user_input.lower() == 'exit':
            print("Goodbye! πŸ‘‹")
            break
        
        if not user_input:
            continue
        
        try:
            # Get response from agent
            response = await agent.ask(user_input)
            print(f"\nπŸ€– Assistant: {response}")
        except Exception as e:
            print(f"❌ Error: {e}")

if __name__ == "__main__":
    asyncio.run(main())

Example 2: Chat with OpenAI

python
#!/usr/bin/env python3
"""Simple chat with OpenAI GPT-4o"""

from aeon import Agent
import asyncio
import os

async def main():
    # Initialize agent with OpenAI
    agent = Agent(
        name="OpenAIChat",
        model_provider="openai",
        model_name="gpt-4o",
        api_key=os.getenv("OPENAI_API_KEY")
    )
    
    print("πŸ’¬ Chat with GPT-4o (type 'exit' to quit)")
    print("=" * 50)
    
    while True:
        user_input = input("\nYou: ").strip()
        
        if user_input.lower() == 'exit':
            print("Goodbye! πŸ‘‹")
            break
        
        if not user_input:
            continue
        
        try:
            response = await agent.ask(user_input)
            print(f"\nπŸ€– GPT-4o: {response}")
        except Exception as e:
            print(f"❌ Error: {e}")

if __name__ == "__main__":
    asyncio.run(main())

Example 3: Multi-Provider Chat (Switch Anytime)

python
#!/usr/bin/env python3
"""Chat with any provider - switch at runtime"""

from aeon import Agent
import asyncio
import os

async def main():
    # Create agents for different providers
    agents = {
        "ollama": Agent(
            name="OllamaChat",
            model_provider="ollama",
            model_name="mistral",
            base_url="http://localhost:11434"
        ),
        "openai": Agent(
            name="OpenAIChat",
            model_provider="openai",
            model_name="gpt-4o",
            api_key=os.getenv("OPENAI_API_KEY")
        ),
        "openrouter": Agent(
            name="OpenRouterChat",
            model_provider="openrouter",
            model_name="anthropic/claude-opus-4-6",
            api_key=os.getenv("OPENROUTER_API_KEY")
        )
    }
    
    current_provider = "ollama"
    print("πŸ€– Multi-Provider Chat")
    print("=" * 50)
    print("Commands:")
    print("  /switch  - Switch to different AI provider")
    print("  /exit - Quit")
    print()
    
    while True:
        user_input = input(f"\n[{current_provider}] You: ").strip()
        
        if user_input.startswith("/switch"):
            parts = user_input.split()
            if len(parts) > 1:
                provider = parts[1].lower()
                if provider in agents:
                    current_provider = provider
                    print(f"βœ… Switched to {provider}")
                else:
                    print(f"❌ Unknown provider. Available: {', '.join(agents.keys())}")
            continue
        
        if user_input.lower() in ['/exit', 'exit']:
            break
        
        if not user_input:
            continue
        
        try:
            agent = agents[current_provider]
            response = await agent.ask(user_input)
            print(f"\nπŸ€– {current_provider.upper()}: {response}")
        except Exception as e:
            print(f"❌ Error with {current_provider}: {e}")

if __name__ == "__main__":
    asyncio.run(main())

Examples: Multi-Platform Integrations

Connect your agent to Telegram, Discord, Slack, and more.

Add Telegram Support

python
#!/usr/bin/env python3
"""AI Assistant on Telegram"""

from aeon import Agent
from aeon.integrations.telegram import TelegramProvider
import os

def main():
    # Initialize base agent
    agent = Agent(
        name="TelegramBot",
        model_provider="ollama",
        model_name="mistral"
    )
    
    # Add Telegram integration
    telegram_token = os.getenv("TELEGRAM_BOT_TOKEN")
    telegram = TelegramProvider(token=telegram_token)
    agent.integrations.register("telegram", telegram)
    
    print("βœ… Telegram bot started!")
    print(f"Send messages to your Telegram bot")
    
    # Start listening
    agent.start()

if __name__ == "__main__":
    main()

Multi-Platform Bot (Telegram + Discord + Slack)

python
#!/usr/bin/env python3
"""Bot that works on all platforms simultaneously"""

from aeon import Agent
from aeon.integrations.telegram import TelegramProvider
from aeon.integrations.discord import DiscordProvider
from aeon.integrations.slack import SlackProvider
import os

def main():
    # Initialize agent
    agent = Agent(
        name="UbiquitousBot",
        model_provider="openai",
        model_name="gpt-4o",
        api_key=os.getenv("OPENAI_API_KEY")
    )
    
    # Register all platforms
    platforms = {
        "telegram": TelegramProvider(token=os.getenv("TELEGRAM_BOT_TOKEN")),
        "discord": DiscordProvider(token=os.getenv("DISCORD_BOT_TOKEN")),
        "slack": SlackProvider(token=os.getenv("SLACK_BOT_TOKEN"))
    }
    
    for platform_name, provider in platforms.items():
        agent.integrations.register(platform_name, provider)
        print(f"βœ… {platform_name} connected")
    
    print("\n🌐 Your bot is now available on:")
    print("  β€’ Telegram")
    print("  β€’ Discord")
    print("  β€’ Slack")
    print("\nAll conversations use the same AI assistant!")
    
    agent.start()

if __name__ == "__main__":
    main()
🎯 Benefits of Multi-Platform
• Single codebase for all platforms
• Unified dialogue context
• Same safety rules everywhere
• Easy to add more platforms

Examples: Real-World Applications

Personal Journal Assistant

python
#!/usr/bin/env python3
"""Personal AI Journal - Reflect on your day with AI"""

from aeon import Agent
from aeon.extensions.capability import Capability
from datetime import datetime
import json
import os

class JournalCapability(Capability):
    """Save and reflect on journal entries"""
    
    def __init__(self, journal_file="journal.json"):
        self.journal_file = journal_file
        self.entries = self._load_entries()
    
    def _load_entries(self):
        if os.path.exists(self.journal_file):
            with open(self.journal_file) as f:
                return json.load(f)
        return []
    
    def _save_entries(self):
        with open(self.journal_file, 'w') as f:
            json.dump(self.entries, f, indent=2)
    
    async def save_entry(self, text: str):
        """Save a new journal entry"""
        entry = {
            "date": datetime.now().isoformat(),
            "text": text
        }
        self.entries.append(entry)
        self._save_entries()
        return f"βœ… Entry saved ({len(self.entries)} total)"
    
    async def list_entries(self):
        """List all entries with dates"""
        if not self.entries:
            return "No entries yet"
        result = "πŸ“– Your Journal:\n"
        for i, entry in enumerate(self.entries[-5:], 1):
            date = entry["date"][:10]
            preview = entry["text"][:50]
            result += f"{i}. [{date}] {preview}...\n"
        return result
    
    async def reflect(self, entry_index: int):
        """Get AI reflection on an entry"""
        if entry_index < 0 or entry_index >= len(self.entries):
            return "❌ Invalid entry"
        entry = self.entries[entry_index]
        return f"πŸ€” Reflection on {entry['date']}: {entry['text']}"

async def main():
    # Create agent
    agent = Agent(
        name="JournalBot",
        model_provider="ollama",
        model_name="mistral"
    )
    
    # Add journal capability
    journal = JournalCapability()
    agent.extensions.register(journal)
    
    print("πŸ“– Personal Journal AI")
    print("=" * 50)
    print("Commands:")
    print("  journal save   - Save an entry")
    print("  journal list         - Show recent entries")
    print("  journal reflect   - Get AI reflection")
    print("  exit                 - Quit")
    
    while True:
        user_input = input("\n> ").strip()
        
        if user_input.lower() == "exit":
            break
        
        if user_input.startswith("journal save"):
            text = user_input[12:].strip()
            result = await journal.save_entry(text)
            print(result)
        
        elif user_input == "journal list":
            result = await journal.list_entries()
            print(result)
        
        elif user_input.startswith("journal reflect"):
            try:
                idx = int(user_input.split()[-1])
                result = await journal.reflect(idx - 1)
                print(result)
            except:
                print("❌ Usage: journal reflect ")

if __name__ == "__main__":
    import asyncio
    asyncio.run(main())

Examples: Using MCP Servers

Extend your agent with capabilities using Model Context Protocol (MCP) servers.

Available MCP Servers from MCP Hub

Popular MCP Servers:
  • brave-search: Web search with Brave Search API
  • google-maps: Location queries, directions, nearby places
  • youtube: Extract transcripts, get video info
  • puppeteer: Web scraping and automation
  • sqlite: Database queries
  • time: Timezone and time utilities
  • sequential-thinking: Extended reasoning capability
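
Several servers can share one client by extending the MCPClient pattern used in the examples below (server names must match what your MCP hub exposes):

python
from aeon import Agent
from aeon.synapse.mcp import MCPClient

agent = Agent(name="ToolBot", model_provider="ollama", model_name="mistral")

# One client, two servers: web search plus time/timezone utilities.
mcp = MCPClient(servers=["brave-search", "time"])
agent.synapse.register("tools", mcp)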

Example: Web Search Integration

python
#!/usr/bin/env python3
"""AI with web search capability"""

from aeon import Agent
from aeon.synapse.mcp import MCPClient
import os

async def main():
    # Create agent
    agent = Agent(
        name="SearchBot",
        model_provider="openrouter",
        model_name="anthropic/claude-opus-4-6",
        api_key=os.getenv("OPENROUTER_API_KEY")
    )
    
    # Add web search via MCP
    mcp = MCPClient(servers=["brave-search"])
    agent.synapse.register("tools", mcp)
    
    # Now you can search the web!
    response = await agent.ask(
        "What are the latest AI developments in 2024?"
    )
    print(f"πŸ€– {response}")

if __name__ == "__main__":
    import asyncio
    asyncio.run(main())

Example: YouTube Transcript Extraction

python
#!/usr/bin/env python3
"""Extract and summarize YouTube videos"""

from aeon import Agent
from aeon.synapse.mcp import MCPClient
import os

class YouTubeExtractor:
    def __init__(self, agent):
        self.agent = agent
        self.mcp = MCPClient(servers=["youtube"])
        self.agent.synapse.register("youtube", self.mcp)
    
    async def get_transcript(self, video_url: str):
        """Get transcript from YouTube video"""
        prompt = f"Extract transcript from: {video_url}"
        return await self.agent.ask(prompt)
    
    async def summarize_video(self, video_url: str):
        """Get AI summary of video"""
        prompt = f"Summarize this video: {video_url}"
        return await self.agent.ask(prompt)

async def main():
    agent = Agent(
        name="YouTubeSummary",
        model_provider="openai",
        model_name="gpt-4o",
        api_key=os.getenv("OPENAI_API_KEY")
    )
    
    extractor = YouTubeExtractor(agent)
    
    # Example
    url = "https://www.youtube.com/watch?v=..."
    summary = await extractor.summarize_video(url)
    print(f"πŸ“Ή Summary:\n{summary}")

if __name__ == "__main__":
    import asyncio
    asyncio.run(main())

Examples: Safety Axioms

Add safety rules to your agent using axioms - deterministic safety rules that block or limit certain actions.

Basic Safety Example

python
#!/usr/bin/env python3
"""AI Agent with Safety Rules"""

from aeon import Agent
from aeon.executive.axiom import Axiom
import re

class SafeAssistant:
    def __init__(self):
        self.agent = Agent(
            name="SafeBot",
            model_provider="ollama",
            model_name="mistral"
        )
        self.setup_axioms()
    
    def setup_axioms(self):
        """Define safety rules"""
        
        # Axiom 1: Block harmful content
        @Axiom(name="no_harmful_content", on_violation="BLOCK")
        def block_harmful(text: str) -> bool:
            harmful_keywords = [
                "bomb", "violence", "hack", "illegal",
                "personal data", "credit card", "password"
            ]
            return not any(kw in text.lower() for kw in harmful_keywords)
        
        # Axiom 2: Rate limiting
        @Axiom(name="rate_limit", on_violation="LIMIT")
        def rate_limit(request_count: int) -> bool:
            return request_count < 100  # Max 100 requests/hour
        
        # Axiom 3: Response length
        @Axiom(name="response_length", on_violation="TRUNCATE")
        def limit_response_length(response: str) -> bool:
            return len(response) < 5000  # Max 5000 chars
        
        # Register axioms
        self.agent.executive.add_axiom(block_harmful)
        self.agent.executive.add_axiom(rate_limit)
        self.agent.executive.add_axiom(limit_response_length)
    
    async def process_request(self, text: str):
        """Process request with safety validation"""
        # Axioms are checked automatically
        response = await self.agent.ask(text)
        return response

async def main():
    assistant = SafeAssistant()
    
    # Safe request ✅
    response = await assistant.process_request("What is AI?")
    print(f"✅ Safe response: {response}")
    
    # Potentially harmful request ❌
    response = await assistant.process_request("How to make a bomb?")
    print(f"❌ Blocked by axiom: {response}")

if __name__ == "__main__":
    import asyncio
    asyncio.run(main())

Custom Axioms

python
#!/usr/bin/env python3 """Custom safety axioms for specific use cases""" from aeon import Agent from aeon.executive.axiom import Axiom from datetime import datetime class CustomSafeAssistant: def __init__(self): self.agent = Agent( name="CustomSafeBot", model_provider="openai", model_name="gpt-4o" ) self.setup_custom_axioms() def setup_custom_axioms(self): """Define domain-specific safety rules""" # Medical advice blocker for healthcare app @Axiom(name="no_medical_advice", on_violation="BLOCK") def no_medical_advice(text: str) -> bool: medical_keywords = ["diagnose", "prescribe", "treatment", "cure"] if any(kw in text.lower() for kw in medical_keywords): return False # Block return True # Financial advice blocker @Axiom(name="no_financial_advice", on_violation="BLOCK") def no_financial_advice(text: str) -> bool: if "invest" in text.lower() or "buy" in text.lower(): return False # Block return True # Business hours check @Axiom(name="business_hours_only", on_violation="LIMIT") def business_hours_only(request_timestamp: float) -> bool: hour = datetime.fromtimestamp(request_timestamp).hour return 9 <= hour <= 17 # Only 9 AM - 5 PM self.agent.executive.add_axiom(no_medical_advice) self.agent.executive.add_axiom(no_financial_advice) self.agent.executive.add_axiom(business_hours_only) if __name__ == "__main__": assistant = CustomSafeAssistant() print("βœ… Custom axioms configured")

Deployment & Production

Local Deployment

Start your agent in production mode on your machine:

python
#!/usr/bin/env python3
"""Production-ready agent"""

from aeon import Agent
import logging

# Setup logging
logging.basicConfig(
    level=logging.INFO,
    format='%(asctime)s - %(name)s - %(levelname)s - %(message)s'
)

def main():
    agent = Agent(
        name="ProductionBot",
        model_provider="openai",
        model_name="gpt-4o",
        # Production settings
        timeout=30,
        max_retries=3,
        enable_metrics=True,
        enable_observability=True
    )
    
    print("πŸš€ Starting production agent...")
    agent.start(host="0.0.0.0", port=8000)

if __name__ == "__main__":
    main()

Docker Deployment

dockerfile
FROM python:3.11-slim

WORKDIR /app

# Install dependencies
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy app
COPY . .

# Expose port
EXPOSE 8000

# Run agent
CMD ["python", "main.py"]
bash
# Build image
docker build -t my-agent .

# Run container
docker run \
  -e OPENAI_API_KEY="sk-..." \
  -e TELEGRAM_BOT_TOKEN="..." \
  -p 8000:8000 \
  my-agent

# Now accessible at http://localhost:8000

Troubleshooting

Common Issues & Solutions

❌ Error: "Model not found"
Solution: Verify the model exists on your provider.
bash
# For Ollama
ollama list

# For OpenAI
curl https://api.openai.com/v1/models \
  -H "Authorization: Bearer $OPENAI_API_KEY" | grep "gpt-4o"
❌ Error: "API key invalid"
Solution: Check your credentials are set correctly.
bash
# Verify environment variable is set
echo $OPENAI_API_KEY  # Should start with sk-

# Or pass directly in code
agent = Agent(
    name="Test",
    model_provider="openai",
    api_key="sk-your-actual-key"  # Don't commit this!
)
❌ Error: "Connection refused"
Solution: Make sure your LLM service is running.
bash
# For Ollama
ollama serve

# Test connectivity
curl http://localhost:11434/api/tags
❌ Agent not responding to messages
Solution: Check logs and verify the integration is active.
python
# Enable debug logging
import logging
logging.basicConfig(level=logging.DEBUG)

# Verify integration is active
print(agent.integrations.status())  # Should show active providers

Debugging & Monitoring

Enable Verbose Logging

python
#!/usr/bin/env python3
"""Agent with detailed logging"""

from aeon import Agent
import logging

# Setup detailed logging
logging.basicConfig(
    level=logging.DEBUG,
    format='%(asctime)s - %(name)s - %(levelname)s - %(message)s',
    handlers=[
        logging.FileHandler('agent.log'),
        logging.StreamHandler()
    ]
)

agent = Agent(
    name="DebugBot",
    model_provider="ollama",
    model_name="mistral",
    debug=True  # Enable debug mode
)

print("πŸ’Ύ Logs saved to agent.log")
agent.start()

Monitor Agent Metrics

python
#!/usr/bin/env python3 """Track agent metrics and performance""" from aeon import Agent from aeon.observability import LifecycleHook import time class MetricsTracker: def __init__(self, agent): self.agent = agent self.request_count = 0 self.total_time = 0 self.setup_hooks() def setup_hooks(self): @self.agent.observe(LifecycleHook.MESSAGE_RECEIVED) async def on_message(context): self.request_count += 1 print(f"πŸ“¨ Message #{self.request_count}: {context.message[:50]}") @self.agent.observe(LifecycleHook.RESPONSE_GENERATED) async def on_response(context): elapsed = context.execution_time self.total_time += elapsed avg_time = self.total_time / self.request_count print(f"⏱️ Response time: {elapsed:.2f}s (avg: {avg_time:.2f}s)") def get_stats(self): return { "requests": self.request_count, "total_time": self.total_time, "avg_time": self.total_time / self.request_count if self.request_count > 0 else 0 } async def main(): agent = Agent( name="MonitoredBot", model_provider="ollama", model_name="mistral" ) tracker = MetricsTracker(agent) # Process some requests await agent.ask("Hello") await agent.ask("How are you?") await agent.ask("What is AI?") # Show stats stats = tracker.get_stats() print(f"\nπŸ“Š Stats: {stats}") if __name__ == "__main__": import asyncio asyncio.run(main())

Performance Optimization

1. Use Caching

python
#!/usr/bin/env python3 """Agent with response caching""" from aeon import Agent from aeon.cache import LRUCache agent = Agent( name="CachedBot", model_provider="openai", model_name="gpt-4o" ) # Configure caching agent.cache.configure( strategy="lru", max_size=1000, ttl=3600 # 1 hour ) # Now frequently asked questions are cached! response1 = await agent.ask("What is Γ†on?") # Calls LLM response2 = await agent.ask("What is Γ†on?") # Returns from cache ⚑

2. Model Selection

Choose faster models for real-time requirements:

| Model | Speed | Quality | Cost | Best For |
|-------|-------|---------|------|----------|
| GPT-3.5 | ⚡⚡⚡ | ⭐⭐ | 💰 | Fast responses, simple tasks |
| Mistral | ⚡⚡⚡ | ⭐⭐⭐ | Free (local) | Balanced, local |
| GPT-4o | ⚡⚡ | ⭐⭐⭐⭐⭐ | 💰💰💰 | Complex reasoning, premium |

3. Batch Processing

python
#!/usr/bin/env python3 """Batch process multiple requests efficiently""" from aeon import Agent import asyncio async def main(): agent = Agent( name="BatchBot", model_provider="openai", model_name="gpt-4o" ) # Batch requests requests = [ "What is AI?", "Explain machine learning", "Describe neural networks", "What is deep learning?" ] # Process in parallel tasks = [agent.ask(req) for req in requests] responses = await asyncio.gather(*tasks) # Much faster than sequential! for req, resp in zip(requests, responses): print(f"Q: {req}") print(f"A: {resp[:100]}...\n") if __name__ == "__main__": asyncio.run(main())

FAQ & Tips

Q: How do I switch LLM providers?
Just change the initialization parameters:
python
# Switch from Ollama to OpenAI
agent = Agent(
    name="MyBot",
    model_provider="openai",  # Changed
    model_name="gpt-4o"  # Changed
)
Q: How much does it cost?
Free: Ollama (runs locally, no cost)
Cheap: OpenRouter, OpenAI API (~$0.01-0.10 per conversation)
Variable: AWS Bedrock (depends on usage)
Q: Can I use multiple platforms simultaneously?
Yes! Register multiple integrations:
python
# All at once
agent.integrations.register("telegram", telegram_provider)
agent.integrations.register("discord", discord_provider)
agent.integrations.register("slack", slack_provider)

# Now active on all three!
Q: How do I make my agent smarter?
Options:
1. Switch to a better model (GPT-4o, Claude)
2. Add more context/tools via MCP
3. Fine-tune with system prompts (see the sketch below)
4. Add safety axioms for controlled behavior
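For option 3, a hypothetical sketch; system_prompt is an assumed Agent parameter, not confirmed Æon API:
python
from aeon import Agent

# Hypothetical: `system_prompt` is an assumed parameter, not confirmed Æon API.
agent = Agent(
    name="TutorBot",
    model_provider="openai",
    model_name="gpt-4o",
    system_prompt="You are a patient tutor. Explain your reasoning step by step.",
)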
Q: How do I save conversation history?
Use DialogueContext:
python
# Import path is an assumption based on the Dialogue subsystem's name
from aeon.dialogue import DialogueContext, ActorRole

# Create dialogue context
context = DialogueContext(
    context_id="conv_123",
    participant_id="user_456"
)

# Add turns
context.add_turn(ActorRole.USER, "Hi")
context.add_turn(ActorRole.ASSISTANT, "Hello!")

# Save
agent.dialogue.store(context)

# Later: retrieve
context = agent.dialogue.retrieve("conv_123")

Glossary

Agent
An autonomous system that perceives, reasons, and acts. Your AI assistant.
Cortex
System 1 - Intuitive reasoning via LLM. Handles cognitive tasks.
Executive
System 2 - Deterministic safety. Validates outputs against axioms.
Axiom
A safety rule that deterministically blocks or limits actions.
Integration
A provider that enables communication on a specific platform (Telegram, Discord, etc.).
Extension
A pluggable capability that adds functionality to your agent.
MCP Server
Model Context Protocol server - provides tools/APIs to agents.
Dialogue Context
The conversation state between user and agent, stored for continuity.
Dispatcher
Event pub/sub hub - routes messages between components.
Automation
Temporal scheduling - run tasks on schedules or cron patterns.

🎉 Ready to Build?
You have everything you need to create your own AI assistant!
Pick an LLM provider, follow the quick start, and start building.