
A Python library for Reflexio


Reflexio - Make Your Agents Improve Themselves


What is Reflexio?

The moat for AI agents isn't the model — it's what your agent learns from every interaction it handles.

Reflexio is an agent self-improvement platform. It turns every conversation your AI agent has into a learning opportunity — automatically extracting user preferences and behavioral playbooks so your agent continuously improves itself without manual tuning.

flowchart LR
    A[AI Agent] -->|conversations| B[Reflexio]
    G[Human Expert] -->|ideal responses| B
    B --> C[User Profiles]
    B --> D[Playbook Extraction]
    D --> E[Playbook Aggregation]
    B --> F[Success Evaluation]

Publish conversations from your agent, and Reflexio closes the self-improvement loop:

  • Never Repeat the Same Mistake: Transforms user corrections and interaction signals into improved decision-making, so agents adapt their behavior instead of repeating errors.
  • Lock In What Works: Persists successful strategies and workflows so your agent reuses proven paths instead of starting from scratch.
  • Correct in Real Time: Retrieves personalization and operational signals to fix agent behavior live — no retraining required.
  • Learn from Human Experts: Publish expert-provided ideal responses alongside agent responses — Reflexio automatically extracts actionable playbooks from the differences.
  • Personal & Global Improvements: Separates individual user preferences from system-wide agent improvements.
  • AI First Self-Optimization: Agents autonomously reflect, learn, and improve — less human-in-the-loop, more compounding gains.

For developers: See developer.md for project structure, environment setup, testing, and coding guidelines.

Demo

Reflexio Demo

Quick Start

Prerequisites

Tool              Description
uv                Python package manager
Node.js >= 18     Frontend runtime


Setup

# 1. Clone and configure
git clone https://github.com/reflexio-ai/reflexio.git
cd reflexio
cp .env.example .env          # Set at least one LLM API key (OpenAI, Anthropic, etc.)

# 2. Install dependencies
uv sync                                   # Python (includes workspace packages)
npm --prefix docs install                  # API docs

# 3. Start services (--storage sqlite is the default)
uv run reflexio services start                    # API (8081), Docs (3000), SQLite storage
uv run reflexio services start --storage supabase # use Supabase storage instead
uv run reflexio services stop                     # Stop all services

Alternative: python -m reflexio.cli services start or ./run_services.sh

Once running, open http://localhost:3000 to interactively browse and try out the API using the built-in docs app.

Publish your first interaction

import reflexio

client = reflexio.ReflexioClient(
    url_endpoint="http://localhost:8081/"
)

# Publish an agent conversation
client.publish_interaction(
    request_id="req-001",
    user_id="user-123",
    interactions=[
        reflexio.Interaction(role="user", content="How do I reset my password?"),
        reflexio.Interaction(role="assistant", content="Go to Settings > Security > Reset Password."),
    ],
)

Reflexio will automatically generate profiles and extract playbooks in the background.

Features

Profile Generation

  • Extracts behavioral profiles from conversations using configurable extractors
  • Supports versioning (current → pending → archived) with upgrade/downgrade workflows
  • Multiple extractors run in parallel with independent windows and strides

Read more about user profiles →
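
To illustrate the "independent windows and strides" idea, here is a minimal sketch of how a conversation history could be carved into extraction batches. The function name and parameters are illustrative, not the Reflexio API:

```python
# Hypothetical sketch: each extractor walks the interaction history with its
# own window (batch size) and stride (step between batch starts).

def extraction_windows(n_interactions: int, window: int, stride: int) -> list[range]:
    """Return the index ranges an extractor would process."""
    windows = []
    start = 0
    while start < n_interactions:
        windows.append(range(start, min(start + window, n_interactions)))
        start += stride
    return windows

# Two extractors over the same 10-interaction history:
fine = extraction_windows(10, window=4, stride=2)      # overlapping batches
coarse = extraction_windows(10, window=10, stride=10)  # one pass over everything
```

With a stride smaller than the window, batches overlap, so a preference mentioned near a batch boundary is still seen in full context by at least one batch.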

Playbook Extraction & Aggregation

  • Extracts playbooks from user behavior patterns
  • Clusters similar entries and aggregates with LLM (with change detection to skip unchanged clusters)
  • Approval workflow: review and approve/reject agent playbooks

Read more about agent playbooks →
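
The change-detection step can be pictured as fingerprinting each cluster's contents: if the fingerprint is unchanged since the last run, the (expensive) LLM aggregation is skipped. This sketch is an assumption about the mechanism, not Reflexio's actual implementation:

```python
import hashlib
import json

def cluster_fingerprint(entries: list[str]) -> str:
    """Order-insensitive hash of a cluster's entries. An unchanged
    fingerprint means the cluster can skip re-aggregation."""
    payload = json.dumps(sorted(entries), ensure_ascii=False)
    return hashlib.sha256(payload.encode("utf-8")).hexdigest()

previous = cluster_fingerprint(["prefers concise answers", "asks for code"])
current = cluster_fingerprint(["asks for code", "prefers concise answers"])
needs_reaggregation = previous != current  # same entries, different order
```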

Expert Learning

  • Publish human-expert ideal responses alongside agent responses via the expert_content field
  • Reflexio automatically compares agent vs. expert responses, focusing on substantive differences (missing info, incorrect approach, reasoning gaps) while ignoring stylistic ones
  • Generates actionable playbooks as trigger/instruction/pitfall SOPs that teach the agent what to do differently

Read more about interactions & expert content →

Agent Success Evaluation

  • Session-level evaluation triggered automatically (10 min after last request)
  • Shadow comparison mode: A/B test regular vs shadow agent responses
  • Tool usage analysis for blocking issue detection

Read more about evaluation →
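
The "10 min after last request" trigger behaves like a per-session debounce: each new request pushes the evaluation deadline back. A minimal sketch, assuming this debounce semantics (names are illustrative):

```python
from datetime import datetime, timedelta

EVAL_DELAY = timedelta(minutes=10)
_due: dict[str, datetime] = {}

def touch_session(session_id: str, now: datetime) -> None:
    """Each request pushes the session's evaluation deadline back."""
    _due[session_id] = now + EVAL_DELAY

def sessions_ready(now: datetime) -> list[str]:
    """Sessions whose last request was at least 10 minutes ago."""
    return [sid for sid, due in _due.items() if now >= due]
```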

Search & Retrieval

  • Hybrid search (vector + full-text) across profiles and playbooks
  • LLM-powered query rewriting for improved recall
  • Unified search across all entity types in parallel
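
One common way to merge a vector ranking with a full-text ranking is reciprocal rank fusion (RRF); Reflexio's exact fusion method is not documented here, so this is a sketch of the general technique:

```python
def rrf_merge(vector_hits: list[str], text_hits: list[str], k: int = 60) -> list[str]:
    """Reciprocal rank fusion: each ranking contributes 1/(k + rank + 1)
    per document; documents ranked well in either list rise to the top."""
    scores: dict[str, float] = {}
    for ranking in (vector_hits, text_hits):
        for rank, doc_id in enumerate(ranking):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank + 1)
    return sorted(scores, key=scores.get, reverse=True)
```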

Multi-Provider LLM Support

  • OpenAI, Anthropic, Google Gemini, OpenRouter, Azure, MiniMax, and custom endpoints
  • Powered by LiteLLM — configure your preferred provider via API keys or custom endpoints

SDK Usage

For detailed API documentation, see the full API reference.

Install the client:

pip install reflexio-client

Basic usage

import reflexio

client = reflexio.ReflexioClient(
    url_endpoint="http://localhost:8081/"
)

# Publish interactions
await client.publish_interaction(
    request_id="req-001",
    user_id="user-123",
    interactions=[...],
    agent_version="v1",       # optional: track agent versions
    session_id="session-abc", # optional: group requests into sessions
)

# Search profiles
profiles = await client.search_profiles(
    reflexio.SearchUserProfileRequest(query="password reset")
)

# Search agent playbooks
playbooks = await client.get_agent_playbooks(
    reflexio.GetAgentPlaybooksRequest(agent_version="v1")
)

Configuration

# Update org configuration
await client.set_config(reflexio.SetConfigRequest(
    config=reflexio.Config(
        api_key_config=reflexio.APIKeyConfig(openai="sk-..."),
        profile_extractor_configs=[...],
        playbook_configs=[reflexio.PlaybookConfig(...)],
    )
))

Architecture

Client (SDK / Web UI)
  → FastAPI Backend
    → Reflexio Orchestrator
      → GenerationService
        ├─ ProfileGenerationService  → Extractor(s) → Deduplicator → Storage
        ├─ PlaybookGenerationService → Extractor(s) → Deduplicator → Storage
        └─ GroupEvaluationScheduler  → Evaluator(s) → Storage (deferred 10 min)

See developer.md for project structure, supported LLM providers, and development setup.

Documentation

For comprehensive guides, examples, and API reference, visit the Reflexio Documentation.

Contributing

We welcome contributions! Please see developer.md for guidelines.

License

This project is licensed under the Apache License 2.0.
