A Python library for Reflexio


Reflexio - Make Your Agents Improve Themselves


What is Reflexio?

The moat for AI agents isn't the model — it's what your agent learns from every interaction it handles.

Reflexio is a self-improvement platform for AI agents. It turns every conversation your AI agent has into a learning opportunity — automatically extracting user preferences and behavioral playbooks so your agent continuously improves itself without manual tuning.

flowchart LR
    A[AI Agent] -->|conversations| B[Reflexio]
    G[Human Expert] -->|ideal responses| B
    B --> C[User Profiles]
    B --> D[Playbook Extraction]
    D --> E[Playbook Aggregation]
    B --> F[Success Evaluation]

Publish conversations from your agent, and Reflexio closes the self-improvement loop:

  • Never Repeat the Same Mistake: Transforms user corrections and interaction signals into improved decision-making processes — so agents adapt their behavior and avoid repeating the same mistakes.
  • Lock In What Works: Persists successful strategies and workflows so your agent reuses proven paths instead of starting from scratch.
  • Correct in Real Time: Retrieves personalization and operational signals to fix agent behavior live — no retraining required.
  • Learn from Human Experts: Publish expert-provided ideal responses alongside agent responses — Reflexio automatically extracts actionable playbooks from the differences.
  • Personal & Global Improvements: Separates individual user preferences from system-wide agent improvements.
  • AI First Self-Optimization: Agents autonomously reflect, learn, and improve — less human-in-the-loop, more compounding gains.

For developers: See developer.md for project structure, environment setup, testing, and coding guidelines.

Demo

Reflexio example experience

Quick Start

Prerequisites

Tool             Description
uv               Python package manager
Node.js >= 18    Frontend runtime

Reflexio Demo

Setup

# 1. Clone and configure
git clone https://github.com/reflexio-ai/reflexio.git
cd reflexio
cp .env.example .env          # Set at least one LLM API key (OpenAI, Anthropic, etc.)

# 2. Install dependencies
uv sync                                   # Python (includes workspace packages)
npm --prefix docs install                  # API docs

# 3. Start services (--storage sqlite is the default)
uv run reflexio services start                    # API (8081), Docs (8082), SQLite storage
uv run reflexio services stop                     # Stop all services

Alternative: python -m reflexio.cli services start or ./run_services.sh

Once running, open http://localhost:8082 to interactively browse and try out the API.

Reflexio Doc Website

Try it in 30 seconds (CLI)

Reflexio ships a first-class CLI — the fastest way to see the loop end-to-end with no code. Publish a real multi-turn conversation where the user corrects the agent (that's the signal Reflexio learns from), then search for what was extracted:

uv run reflexio publish --user-id alice --wait --data '{
  "interactions": [
    {"role": "user",      "content": "Deploy the new service."},
    {"role": "assistant", "content": "Starting deployment to us-east-1..."},
    {"role": "user",      "content": "Wait — we never deploy production to us-east-1. Always use us-west-2."},
    {"role": "assistant", "content": "Understood. Switching to us-west-2."}
  ]
}'

# Search the extracted profiles and playbooks
uv run reflexio search "deployment region"

One conversation, two artifacts: a user profile (production region is us-west-2) and an agent playbook (confirm region before deploying). See the CLI reference for all input modes (inline JSON, --file, --stdin) and the full command list.

Integrate with the Python SDK

import reflexio

client = reflexio.ReflexioClient(
    url_endpoint="http://localhost:8081/"
)

# Publish a multi-turn conversation where the user corrects the agent —
# Reflexio extracts a profile ("prod region = us-west-2") and a playbook
# ("confirm region before deploying").
client.publish_interaction(
    request_id="req-001",
    user_id="alice",
    interactions=[
        reflexio.Interaction(role="user",      content="Deploy the new service."),
        reflexio.Interaction(role="assistant", content="Starting deployment to us-east-1..."),
        reflexio.Interaction(role="user",      content="Wait — we never deploy production to us-east-1. Always use us-west-2."),
        reflexio.Interaction(role="assistant", content="Understood. Switching to us-west-2."),
    ],
)

Reflexio will automatically generate profiles and extract playbooks in the background.

Features

Profile Generation

  • Extracts behavioral profiles from conversations using configurable extractors
  • Supports versioning (current → pending → archived) with upgrade/downgrade workflows
  • Multiple extractors run in parallel with independent windows and strides

Read more about user profiles →
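The window/stride mechanics above can be sketched as follows. This is a minimal illustration of how two extractors with independent windows and strides would each sweep the same conversation, not the library's actual implementation; the `Extractor` class and names here are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Extractor:
    """Hypothetical extractor: sees `window` interactions, advances by `stride`."""
    name: str
    window: int
    stride: int

def windows(interactions, extractor):
    """Return the spans of a conversation each extractor pass would see."""
    spans = []
    for start in range(0, len(interactions), extractor.stride):
        chunk = interactions[start:start + extractor.window]
        if chunk:
            spans.append(chunk)
    return spans

# Two extractors scanning the same 6-turn conversation independently:
turns = [f"turn-{i}" for i in range(6)]
coarse = Extractor("preferences", window=4, stride=4)
fine = Extractor("corrections", window=2, stride=2)

print(windows(turns, coarse))  # [['turn-0', 'turn-1', 'turn-2', 'turn-3'], ['turn-4', 'turn-5']]
print(windows(turns, fine))
```

Because each extractor keeps its own window and stride, a long-horizon preference extractor and a fine-grained correction extractor can process the same published conversation in parallel without coordinating.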

Playbook Extraction & Aggregation

  • Extracts playbooks from user behavior patterns
  • Clusters similar entries and aggregates with LLM (with change detection to skip unchanged clusters)
  • Approval workflow: review and approve/reject agent playbooks

Read more about agent playbooks →
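The change-detection idea above can be sketched with a content hash per cluster: re-aggregation is skipped whenever a cluster's fingerprint is unchanged. In Reflexio the aggregation step is LLM-backed; the plain string join below is only a stand-in, and all names are illustrative.

```python
import hashlib

def cluster_hash(entries):
    """Stable, order-insensitive fingerprint of a cluster's contents."""
    joined = "\n".join(sorted(entries))
    return hashlib.sha256(joined.encode()).hexdigest()

def aggregate(clusters, previous_hashes):
    """Re-aggregate only clusters whose contents changed since the last run."""
    results, new_hashes = {}, {}
    for cid, entries in clusters.items():
        h = cluster_hash(entries)
        new_hashes[cid] = h
        if previous_hashes.get(cid) == h:
            continue  # unchanged cluster: skip the (expensive) aggregation call
        results[cid] = " | ".join(sorted(entries))
    return results, new_hashes

clusters = {"deploy": ["always use us-west-2", "confirm region first"],
            "style": ["keep replies short"]}
first, hashes = aggregate(clusters, {})
second, _ = aggregate(clusters, hashes)  # nothing changed, nothing re-aggregated
print(len(first), len(second))  # 2 0
```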

Expert Learning

  • Publish human-expert ideal responses alongside agent responses via the expert_content field
  • Reflexio automatically compares agent vs. expert responses, focusing on substantive differences (missing info, incorrect approach, reasoning gaps) while ignoring stylistic ones
  • Generates actionable playbooks as trigger/instruction/pitfall SOPs that teach the agent what to do differently

Read more about interactions & expert content →
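The trigger/instruction/pitfall shape described above can be sketched like this. Reflexio does the agent-vs-expert comparison with an LLM that ignores stylistic differences; the string comparison below is only a stand-in, and the function and field names are hypothetical.

```python
def extract_playbook(agent_response: str, expert_response: str, trigger: str):
    """Toy expert-vs-agent comparison: when the responses differ, emit a
    trigger/instruction/pitfall SOP the agent can apply next time."""
    if agent_response.strip().lower() == expert_response.strip().lower():
        return None  # no substantive difference, so no playbook
    return {
        "trigger": trigger,
        "instruction": f"Respond as the expert did: {expert_response}",
        "pitfall": f"Avoid the agent's original answer: {agent_response}",
    }

sop = extract_playbook(
    agent_response="Deploying to us-east-1.",
    expert_response="Confirm the target region, then deploy to us-west-2.",
    trigger="user asks to deploy a service",
)
print(sop["instruction"])
```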

Agent Success Evaluation

  • Session-level evaluation triggered automatically (10 min after last request)
  • Shadow comparison mode: A/B test regular vs shadow agent responses
  • Tool usage analysis for blocking issue detection

Read more about evaluation →
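The "10 minutes after the last request" trigger is a debounce: each new request in a session restarts the countdown, and evaluation fires only once the session goes quiet. A minimal sketch (with a 0.1 s delay instead of 10 minutes, and illustrative names rather than the real scheduler API):

```python
import threading

class GroupEvaluationScheduler:
    """Debounce sketch: `evaluate` fires `delay` seconds after the *last*
    request in a session."""
    def __init__(self, delay, evaluate):
        self.delay, self.evaluate = delay, evaluate
        self._timers = {}

    def on_request(self, session_id):
        # Each new request cancels and restarts the session's countdown.
        if session_id in self._timers:
            self._timers[session_id].cancel()
        timer = threading.Timer(self.delay, self.evaluate, args=(session_id,))
        self._timers[session_id] = timer
        timer.start()

evaluated = []
sched = GroupEvaluationScheduler(delay=0.1, evaluate=evaluated.append)
sched.on_request("session-abc")
sched.on_request("session-abc")   # resets the timer; only one evaluation fires
threading.Event().wait(0.3)
print(evaluated)  # ['session-abc']
```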

Search & Retrieval

  • Hybrid search (vector + full-text) across profiles and playbooks
  • LLM-powered query rewriting for improved recall
  • Unified search across all entity types in parallel
  • Fast at scale: unified search across ~3,000 indexed rows (~1,000 each of profiles, user playbooks, and agent playbooks, queried in parallel) runs at ~57 ms p50 / ~73 ms p95, measured at the service layer with local SQLite on an Apple Silicon MacBook (30 trials × 20 fixed queries). See the full benchmark report or reproduce with reflexio.benchmarks.retrieval_latency.
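Merging a vector ranking with a full-text ranking is commonly done with reciprocal-rank fusion; the sketch below shows that general technique (Reflexio's exact fusion method is not documented here, and the document IDs are made up):

```python
def rrf(rankings, k=60):
    """Reciprocal-rank fusion: each list votes 1/(k + rank + 1) per document;
    documents ranked well by both retrievers rise to the top."""
    scores = {}
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank + 1)
    return sorted(scores, key=scores.get, reverse=True)

vector_hits = ["playbook-7", "profile-2", "playbook-1"]    # embedding similarity
fulltext_hits = ["playbook-7", "profile-9", "profile-2"]   # keyword match
fused = rrf([vector_hits, fulltext_hits])
print(fused)
```

A document that appears in both lists ("playbook-7", "profile-2") outranks one that appears in only one, which is the behavior hybrid search is after.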

Multi-Provider LLM Support

  • OpenAI, Anthropic, Google Gemini, OpenRouter, Azure, MiniMax, and custom endpoints
  • Powered by LiteLLM — configure your preferred provider via API keys or custom endpoints
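Since LiteLLM picks up provider credentials from standard environment variables, a .env along these lines is typically enough to switch providers; this is illustrative only, so check .env.example for the exact keys Reflexio reads:

```shell
# Example .env fragment: set at least one provider key.
OPENAI_API_KEY=sk-...
ANTHROPIC_API_KEY=sk-ant-...
GEMINI_API_KEY=...
OPENROUTER_API_KEY=...
```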

SDK Usage

For detailed API documentation, see the full API reference.

Install the client:

pip install reflexio-client

Basic usage

import reflexio

client = reflexio.ReflexioClient(
    url_endpoint="http://localhost:8081/"
)

# Publish interactions
await client.publish_interaction(
    request_id="req-001",
    user_id="user-123",
    interactions=[...],
    agent_version="v1",       # optional: track agent versions
    session_id="session-abc", # optional: group requests into sessions
)

# Search profiles
profiles = await client.search_profiles(
    reflexio.SearchUserProfileRequest(query="deployment region preference")
)

# Search agent playbooks
playbooks = await client.get_agent_playbooks(
    reflexio.GetAgentPlaybooksRequest(agent_version="v1")
)

Configuration

# Update org configuration
await client.set_config(reflexio.SetConfigRequest(
    config=reflexio.Config(
        api_key_config=reflexio.APIKeyConfig(openai="sk-..."),
        profile_extractor_configs=[...],
        playbook_configs=[reflexio.PlaybookConfig(...)],
    )
))

Architecture

Client (SDK / Web UI)
  → FastAPI Backend
    → Reflexio Orchestrator
      → GenerationService
        ├─ ProfileGenerationService  → Extractor(s) → Deduplicator → Storage
        ├─ PlaybookGenerationService → Extractor(s) → Deduplicator → Storage
        └─ GroupEvaluationScheduler  → Evaluator(s) → Storage (deferred 10 min)

See developer.md for project structure, supported LLM providers, and development setup.

Documentation

For comprehensive guides, examples, and API reference, visit the Reflexio Documentation.

Contributing

We welcome contributions! Please see developer.md for guidelines.

License

This project is licensed under the Apache License 2.0.
