CLI agent for building RAG pipelines

RAGOps Agent

PyPI version Python 3.12+ License: MIT

Optimal RAG in hours, not months.

A smart, LLM-powered CLI agent that automates the entire lifecycle of Retrieval-Augmented Generation (RAG) pipelines — from creation and experimentation to deployment. Instead of spending months hand-tuning chunking strategies, embeddings, and vector DBs, just describe what you need and let the agent run 100+ parallel experiments to discover what actually works for your data — fast, accurate, and infra-agnostic.

Built by Donkit AI — Automated Context Engineering.

Who is this for?

  • AI Engineers building assistants and agents
  • Teams that need accuracy-sensitive, multi-agent RAG, where errors compound across steps
  • Organizations aiming to reduce time-to-value for production AI deployments

Key Features

  • Parallel Experimentation Engine — Explores hundreds of pipeline variations (chunking, vector DBs, prompts, rerankers, etc.) to find what performs best — in hours, not months.
  • Docker Compose orchestration — Automated deployment of RAG infrastructure (vector DB, RAG service)
  • Built-in Evaluation & Scoring — Automatically generates an evaluation dataset (if needed), runs Q&A tests, and scores pipeline accuracy on your real data.
  • Multiple LLM providers — Supports Vertex AI (recommended), OpenAI, Anthropic Claude, Azure OpenAI, Ollama, and OpenRouter

Main Capabilities

  • Interactive REPL — Start an interactive session with readline history and autocompletion
  • Web UI — Browser-based interface at http://localhost:8067 (donkit-ragops-web, auto-opens browser)
  • Docker Compose orchestration — Automated deployment of RAG infrastructure (vector DB, RAG service)
  • Integrated MCP servers — Built-in support for the full RAG build pipeline (planning, reading, chunking, vector loading, querying, evaluation)
  • Checklist-driven workflow — Each RAG project is structured as a checklist, with clear stages, approvals, and progress tracking
  • Session-scoped checklists — Only the current session's checklists appear in the UI
  • SaaS mode — Connect to Donkit cloud for experiments
  • Enterprise mode — Deploy to a VPC or on-premises with no vendor lock-in (reach out via https://donkit.ai)

Quick Install

The fastest way to install Donkit RAGOps is the install script, which automatically handles Python and dependencies.

macOS / Linux:

curl -sSL https://raw.githubusercontent.com/donkit-ai/ragops/main/scripts/install.sh | bash

Windows (PowerShell):

irm https://raw.githubusercontent.com/donkit-ai/ragops/main/scripts/install.ps1 | iex

After installation:

donkit-ragops        # Start CLI agent
donkit-ragops-web    # Start Web UI (browser opens automatically at http://localhost:8067)

Installation (Alternative Methods)

Option A: Using pipx (Recommended)

# Install pipx if you don't have it
pip install pipx
pipx ensurepath

# Install donkit-ragops
pipx install donkit-ragops

Option B: Using pip

pip install donkit-ragops

Option C: Using Poetry (for development)

# Create a new project directory
mkdir ~/ragops-workspace
cd ~/ragops-workspace

# Initialize Poetry project
poetry init --no-interaction --python="^3.12"

# Add donkit-ragops
poetry add donkit-ragops

# Activate the virtual environment
poetry shell

After activation, you can run the agent with:

donkit-ragops

Or run directly without activating the shell:

poetry run donkit-ragops

Quick Start

Prerequisites

  • Python 3.12+ installed
  • Docker Desktop installed and running (required for vector database)
    • Windows users: Docker Desktop with WSL2 backend is fully supported
  • API key for your chosen LLM provider (Vertex AI, OpenAI, or Anthropic)

Step 1: Install the package

pip install donkit-ragops

Step 2: Run the agent (first time)

donkit-ragops

On first run, an interactive setup wizard will guide you through configuration:

  1. Choose your LLM provider (Vertex AI, OpenAI, Anthropic, or Ollama)
  2. Enter API key or credentials path
  3. Optional: Configure log level
  4. Configuration is saved to .env file automatically

That's it! No manual .env creation needed - the wizard handles everything.

Reconfiguration

To reconfigure or change settings later:

# Run setup wizard to change configuration
donkit-ragops setup

The setup wizard allows you to:

Local Mode:

  • Choose LLM provider (Vertex AI, OpenAI, Anthropic, Ollama, OpenRouter, Donkit)
  • Configure API keys and credentials
  • Set optional parameters (models, base URLs, etc.)

SaaS Mode:

  • Login/logout with Donkit cloud
  • Manage integrations (OpenRouter API keys, etc.)
  • Configure cloud-based LLM providers

Step 3: Start using the agent (local mode)

Tell the agent what you want to build:

> Create a RAG pipeline for my documents in /Users/myname/Documents/work_docs

The agent will automatically:

  • ✅ Create a projects/<project_id>/ directory
  • ✅ Plan RAG configuration
  • ✅ Process and chunk your documents
  • ✅ Start Qdrant vector database (via Docker)
  • ✅ Load data into the vector store
  • ✅ Deploy RAG query service

What gets created

./
├── .env                          # Your configuration (auto-created by wizard)
└── projects/
    └── my-project-abc123/        # Auto-created by agent
        ├── compose/              # Docker Compose files
        │   ├── docker-compose.yml
        │   └── .env
        ├── chunks/               # Processed document chunks
        └── rag_config.json       # RAG configuration
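
For reference, the same skeleton can be reproduced with pathlib — purely illustrative, since the agent creates these directories for you, and the project id here is just an example:

```python
from pathlib import Path
import tempfile

# Recreate the project skeleton shown above in a throwaway directory.
root = Path(tempfile.mkdtemp())
project = root / "projects" / "my-project-abc123"
for sub in ("compose", "chunks"):
    (project / sub).mkdir(parents=True, exist_ok=True)  # compose/ and chunks/
(project / "rag_config.json").write_text("{}")          # RAG configuration stub
(root / ".env").touch()                                 # top-level configuration

print(sorted(p.relative_to(root).as_posix() for p in root.rglob("*")))
```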

Interactive Mode (REPL)

# Start interactive session
donkit-ragops

# With specific provider
donkit-ragops -p vertexai

# With custom model
donkit-ragops -p openai -m gpt-5.2

# Start in SaaS/enterprise mode
donkit-ragops --enterprise

REPL Commands

Inside the interactive session, use these commands:

  • /help, /h, /? — Show available commands
  • /exit, /quit, /q — Exit the agent
  • /clear — Clear conversation history and screen
  • /provider — Switch LLM provider interactively
  • /model — Switch LLM model interactively

Command-line Options

  • -p, --provider — Override LLM provider from settings
  • -m, --model — Specify model name
  • -s, --system — Custom system prompt
  • --local — Force local mode (default)
  • --enterprise — Force enterprise mode (requires setup with donkit-ragops setup)
  • --show-checklist/--no-checklist — Toggle checklist panel (default: shown)

Commands

# Setup wizard - configure Local or SaaS mode
donkit-ragops setup

# Health check
donkit-ragops ping

# Show current mode and authentication status
donkit-ragops status

# Auto-upgrade to latest version
donkit-ragops upgrade       # Check and upgrade (interactive)
donkit-ragops upgrade -y    # Upgrade without confirmation

Note: The upgrade command automatically detects your installation method (pip, pipx, or poetry) and runs the appropriate upgrade command.

Environment Variables

LLM Provider Configuration

  • RAGOPS_LLM_PROVIDER — LLM provider name (e.g., openai, vertex, azure_openai, ollama, openrouter)
  • RAGOPS_LLM_MODEL — Model name (e.g., gpt-4o-mini for OpenAI, gemini-2.5-flash for Vertex)

OpenAI / OpenRouter / Ollama

  • RAGOPS_OPENAI_API_KEY — OpenAI API key (also used for OpenRouter and Ollama)
  • RAGOPS_OPENAI_BASE_URL — OpenAI base URL (default: https://api.openai.com/v1)
    • OpenRouter: https://openrouter.ai/api/v1
    • Ollama: http://localhost:11434/v1
  • RAGOPS_OPENAI_EMBEDDINGS_MODEL — Embedding model name (default: text-embedding-3-small)

Azure OpenAI

  • RAGOPS_AZURE_OPENAI_API_KEY — Azure OpenAI API key
  • RAGOPS_AZURE_OPENAI_ENDPOINT — Azure OpenAI endpoint URL
  • RAGOPS_AZURE_OPENAI_API_VERSION — Azure API version (default: 2024-02-15-preview)
  • RAGOPS_AZURE_OPENAI_DEPLOYMENT — Azure deployment name for the chat model
  • RAGOPS_AZURE_OPENAI_EMBEDDINGS_DEPLOYMENT — Azure deployment name for the embeddings model

Vertex AI (Google Cloud)

  • RAGOPS_VERTEX_CREDENTIALS — Path to Vertex AI service account JSON
  • RAGOPS_VERTEX_PROJECT — Google Cloud project ID (optional; extracted from credentials if not set)
  • RAGOPS_VERTEX_LOCATION — Vertex AI location (default: us-central1)

Logging

  • RAGOPS_LOG_LEVEL — Logging level (default: ERROR)
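
Putting the variables above together, a minimal .env for the OpenAI provider might look like this (all values are placeholders):

```env
RAGOPS_LLM_PROVIDER=openai
RAGOPS_LLM_MODEL=gpt-4o-mini
RAGOPS_OPENAI_API_KEY=<your-api-key>
RAGOPS_OPENAI_BASE_URL=https://api.openai.com/v1
RAGOPS_OPENAI_EMBEDDINGS_MODEL=text-embedding-3-small
RAGOPS_LOG_LEVEL=ERROR
```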

Agent Workflow

The agent follows a structured workflow:

  1. Language Detection — Detects the user's language from the first message
  2. Project Creation — Creates the project directory structure
  3. Checklist Creation — Generates a task checklist in the user's language
  4. Step-by-Step Execution:
    • Asks for permission before each step
    • Marks the item as in_progress
    • Executes the task using the appropriate MCP tool
    • Reports results
    • Marks the item as completed
  5. Deployment — Sets up Docker Compose infrastructure
  6. Data Loading — Loads documents into the vector store
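
The step-by-step execution above can be sketched as a tiny state machine. The types below are hypothetical — the agent's internal checklist implementation is not part of the public API:

```python
from dataclasses import dataclass, field

@dataclass
class ChecklistItem:
    title: str
    status: str = "pending"  # pending -> in_progress -> completed

@dataclass
class Checklist:
    items: list[ChecklistItem] = field(default_factory=list)

    def start(self, idx: int) -> None:
        # Only one item may be in progress at a time.
        if any(i.status == "in_progress" for i in self.items):
            raise RuntimeError("another step is already in progress")
        self.items[idx].status = "in_progress"

    def complete(self, idx: int) -> None:
        # A step must be started (and approved) before it can be completed.
        if self.items[idx].status != "in_progress":
            raise RuntimeError("step was never started")
        self.items[idx].status = "completed"

checklist = Checklist([ChecklistItem("Plan RAG configuration"),
                       ChecklistItem("Chunk documents")])
checklist.start(0)
checklist.complete(0)
print(checklist.items[0].status)  # completed
```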

⬆️ Back to top

Web UI

RAGOps includes a browser-based interface for easier interaction:

# Start Web UI server (browser opens automatically)
donkit-ragops-web

# Start Web UI without opening browser
donkit-ragops-web --no-browser

# Development mode with hot reload
donkit-ragops-web --dev

The browser will automatically open at http://localhost:8067. The Web UI provides:

  • Visual project management
  • File upload and attachment
  • Real-time agent responses
  • Checklist visualization
  • Settings configuration

SaaS Mode

SaaS mode is a fully managed cloud platform. All backend infrastructure — databases, vector stores, RAG services, and experiment runners — is hosted by Donkit. You get the same CLI interface, but with powerful cloud features.

Setup

# 1. Run setup wizard and choose SaaS mode
donkit-ragops setup

# The wizard will guide you through:
# - Login with your API token
# - Configure integrations (OpenRouter, etc.)
# - Manage credentials

# 2. Start in SaaS mode
donkit-ragops --enterprise

# 3. Check status
donkit-ragops status

Managing SaaS Configuration

Use donkit-ragops setup to:

  • Login/Logout — Authenticate with Donkit cloud
  • Manage Integrations — Add, update, or remove API keys for:
    • OpenRouter (access 100+ models)
    • More providers coming soon

Your credentials are stored securely in the system keyring and a .env file.

What's Included

  • Managed infrastructure — No Docker, no local setup; everything runs in the Donkit cloud
  • Automated experiments — Run 100+ RAG architecture iterations to find the optimal configuration
  • Experiment tracking — Compare chunking strategies, embeddings, and retrievers side by side
  • Evaluation pipelines — Batch evaluation with precision/recall/accuracy metrics
  • File attachments — Attach files using @/path/to/file syntax in chat
  • Persistent history — Conversation and project history preserved across sessions
  • MCP over HTTP — All MCP tools executed server-side

Enterprise Mode

Enterprise mode runs fully inside your infrastructure — no data ever leaves your network. All components — from vector databases to experiment runners — are deployed within your VPC, Kubernetes cluster, or even a single secured server. You get the same CLI and web UI, but with full control over data, compute, and compliance. No vendor lock-in, no hidden dependencies — just RAG automation, on your terms.

What's Included

  • Self-hosted infrastructure — Run the full Donkit stack in your VPC, Kubernetes cluster, or air-gapped server
  • Automated experiments — Execute 100+ RAG variations locally to identify the best-performing pipeline
  • Experiment tracking — Monitor and compare pipeline variants (chunking, retrieval, reranking) within your environment
  • Evaluation pipelines — Run secure, on-prem evaluation with precision, recall, and answer relevancy metrics
  • Local file attachments — Add documents using @/path/to/file in chat, or connect your data sources via APIs
  • Session-based state — Preserve project and conversation history within your private deployment
  • MCP over IPC — All orchestration runs inside your infrastructure; no external HTTP calls required

⬆️ Back to top

Mode Comparison

| Feature | Local Mode | SaaS Mode | Enterprise Mode |
| --- | --- | --- | --- |
| Infrastructure | Self-hosted (Docker) | Managed by Donkit | Managed by customer |
| Vector stores | Local Qdrant/Milvus/Chroma | Cloud-hosted | Managed by customer |
| Experiments | Manual | Automated iterations | Automated iterations |
| Evaluation | Basic | Full pipeline with metrics | Full pipeline with metrics |
| Data persistence | Local files | Cloud database | Full data residency control |

MCP Servers

RAGOps Agent includes built-in MCP servers:

ragops-rag-planner

Plans RAG pipeline configuration based on requirements.

Tools:

  • plan_rag_config — Generate RAG configuration from requirements

ragops-read-engine

Processes and converts documents from various formats.

Tools:

  • process_documents — Convert PDF, DOCX, PPTX, XLSX, and images to text/JSON/markdown/TOON

ragops-chunker

Chunks documents for vector storage.

Tools:

  • chunk_documents — Split documents into chunks with configurable strategies
  • list_chunked_files — List processed chunk files
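
As a minimal illustration of one such strategy, the sketch below does fixed-size character chunking with overlap. The function name and parameters are hypothetical; the actual chunk_documents tool supports multiple configurable strategies:

```python
def chunk_text(text: str, chunk_size: int = 200, overlap: int = 50) -> list[str]:
    """Fixed-size character chunking with overlap (illustrative only)."""
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    step = chunk_size - overlap  # each chunk starts `step` chars after the last
    return [text[i:i + chunk_size] for i in range(0, len(text), step)]

chunks = chunk_text("a" * 500, chunk_size=200, overlap=50)
print(len(chunks))  # 4
```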

ragops-vectorstore-loader

Loads chunks into vector databases and manages documents.

Tools:

  • vectorstore_load — Load documents into Qdrant, Chroma, or Milvus (supports incremental loading)
  • delete_from_vectorstore — Remove documents from the vector store by filename or document_id

ragops-compose-manager

Manages Docker Compose infrastructure.

Tools:

  • init_project_compose — Initialize Docker Compose for a project
  • compose_up — Start services
  • compose_down — Stop services
  • compose_status — Check service status
  • compose_logs — View service logs

ragops-rag-query

Executes RAG queries against deployed services.

Tools:

  • search_documents — Search for relevant documents in the vector database
  • get_rag_prompt — Get a formatted RAG prompt with retrieved context

rag-evaluation

Evaluates RAG pipeline performance with batch processing.

Tools:

  • evaluate_batch — Run batch evaluation from CSV/JSON and compute precision/recall/accuracy
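
As a rough illustration of the metrics evaluate_batch reports, the sketch below computes precision, recall, and accuracy from (expected, predicted) relevance pairs. The function name and input shape are hypothetical; the real tool consumes CSV/JSON evaluation datasets:

```python
def score(results: list[tuple[bool, bool]]) -> dict[str, float]:
    """Compute precision/recall/accuracy from (expected, predicted) pairs."""
    tp = sum(1 for e, p in results if e and p)          # true positives
    fp = sum(1 for e, p in results if not e and p)      # false positives
    fn = sum(1 for e, p in results if e and not p)      # false negatives
    tn = sum(1 for e, p in results if not e and not p)  # true negatives
    return {
        "precision": tp / (tp + fp) if tp + fp else 0.0,
        "recall": tp / (tp + fn) if tp + fn else 0.0,
        "accuracy": (tp + tn) / len(results),
    }

metrics = score([(True, True), (True, False), (False, True), (False, False)])
print(metrics)  # {'precision': 0.5, 'recall': 0.5, 'accuracy': 0.5}
```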

donkit-ragops-mcp

Unified MCP server that combines all servers above into a single endpoint.

# Run unified server
donkit-ragops-mcp

Claude Desktop configuration:

{
  "mcpServers": {
    "donkit-ragops-mcp": {
      "command": "donkit-ragops-mcp"
    }
  }
}

All tools are available with prefixes:

  • chunker_* — Document chunking
  • compose_* — Docker Compose orchestration
  • evaluation_* — RAG evaluation
  • planner_* — RAG configuration planning
  • query_* — RAG query execution
  • reader_* — Document reading/parsing
  • vectorstore_* — Vector store operations

Note: Checklist management is handled by built-in agent tools, not MCP.

⬆️ Back to top

Examples

Basic RAG Pipeline

donkit-ragops
> Create a RAG pipeline for customer support docs in ../docs folder

The agent will:

  1. Create project structure
  2. Plan RAG configuration
  3. Chunk documents from ../docs
  4. Set up Qdrant + RAG service
  5. Load data into vector store

Custom Configuration

donkit-ragops -p vertexai -m gemini-2.5-pro
> Build RAG for legal documents with 1000 token chunks and reranking

Multiple Projects

Each project gets its own:

  • Project directory (projects/<project_id>)
  • Docker Compose setup
  • Vector store collection
  • Configuration

⬆️ Back to top

Development

Prerequisites

  • Python 3.12+
  • Poetry for dependency management
  • Docker Desktop (for testing vector stores and RAG services)

Setup

# Clone the repository
git clone https://github.com/donkit-ai/ragops.git
cd ragops/ragops-agent-cli

# Install dependencies
poetry install

# Activate virtual environment
poetry shell

Project Structure

ragops-agent/
├── src/donkit_ragops/
│   ├── agent/              # LLM agent core and local tools
│   │   ├── agent.py        # Main LLMAgent class
│   │   ├── prompts.py      # System prompts for different providers
│   │   └── local_tools/    # Built-in agent tools
│   ├── llm/                # LLM provider integrations
│   │   └── providers/      # OpenAI, Vertex, Anthropic, etc.
│   ├── mcp/                # Model Context Protocol
│   │   ├── client.py       # MCP client implementation
│   │   └── servers/        # Built-in MCP servers
│   ├── repl/               # REPL implementation
│   │   ├── base.py         # Base REPL context
│   │   ├── local_repl.py   # Local mode REPL
│   │   └── enterprise_repl.py  # SaaS/Enterprise mode REPL
│   ├── web/                # Web UI (FastAPI + WebSocket)
│   │   ├── app.py          # FastAPI application
│   │   └── routes/         # API endpoints
│   ├── enterprise/         # SaaS/Enterprise mode components
│   ├── cli.py              # CLI entry point (Typer)
│   └── config.py           # Configuration management
├── tests/                  # Test suite (170+ tests)
└── pyproject.toml          # Poetry project configuration

Running the CLI Locally

# Run CLI
poetry run donkit-ragops

# Run with specific provider
poetry run donkit-ragops -p openai -m gpt-4o

# Run Web UI
poetry run donkit-ragops-web

# Run unified MCP server
poetry run donkit-ragops-mcp

Running Tests

# Run all tests
poetry run pytest

# Run with coverage
poetry run pytest --cov=donkit_ragops

# Run specific test file
poetry run pytest tests/test_agent.py

# Run specific test
poetry run pytest tests/test_agent.py::test_function_name -v

Code Quality

# Format code (REQUIRED before commit)
poetry run ruff format .

# Lint and auto-fix (REQUIRED before commit)
poetry run ruff check . --fix

# Check without fixing
poetry run ruff check .

Version Management

IMPORTANT: Version must be incremented in pyproject.toml for every PR:

# Check current version
grep "^version" pyproject.toml

# Increment version in pyproject.toml before committing
# patch: 0.4.5 → 0.4.6 (bug fixes)
# minor: 0.4.5 → 0.5.0 (new features)
# major: 0.4.5 → 1.0.0 (breaking changes)

Adding a New MCP Server

Step 1. Create server file in src/donkit_ragops/mcp/servers/:

from fastmcp import FastMCP
from pydantic import BaseModel, Field

server = FastMCP("my-server")

class MyToolArgs(BaseModel):
    param: str = Field(description="Parameter description")

@server.tool(name="my_tool", description="What the tool does")
async def my_tool(args: MyToolArgs) -> str:
    # Implementation
    return "result"

def main() -> None:
    server.run(transport="stdio")

Step 2. Add entry point in pyproject.toml:

[tool.poetry.scripts]
ragops-my-server = "donkit_ragops.mcp.servers.my_server:main"

Step 3. Mount in unified server (donkit_ragops_mcp.py):

from .my_server import server as my_server
unified_server.mount(my_server, prefix="my")

Adding a New LLM Provider

  1. Create provider in src/donkit_ragops/llm/providers/
  2. Register in provider_factory.py
  3. Add configuration to config.py
  4. Update supported_models.py

Debugging

# Enable debug logging
RAGOPS_LOG_LEVEL=DEBUG poetry run donkit-ragops

# Debug MCP servers
RAGOPS_LOG_LEVEL=DEBUG poetry run donkit-ragops-mcp

⬆️ Back to top

Docker Compose Services

The agent can deploy these services using profiles:

Qdrant (Vector Database)

services:
  qdrant:
    image: qdrant/qdrant:latest
    container_name: qdrant
    profiles: [qdrant, full-stack]
    ports:
      - "6333:6333"  # HTTP API
      - "6334:6334"  # gRPC API
    volumes:
      - qdrant_data:/qdrant/storage

Chroma (Vector Database)

services:
  chroma:
    image: chromadb/chroma:latest
    container_name: chroma
    profiles: [chroma]
    ports:
      - "8015:8000"
    volumes:
      - chroma_data:/chroma/data

Milvus (Vector Database)

Requires etcd and MinIO:

services:
  etcd:
    image: quay.io/coreos/etcd:v3.5.5
    container_name: milvus-etcd
    profiles: [milvus]

  minio:
    image: minio/minio:latest
    container_name: milvus-minio
    profiles: [milvus]

  milvus:
    image: milvusdb/milvus:v2.3.21
    container_name: milvus-standalone
    profiles: [milvus]
    ports:
      - "19530:19530"  # Milvus API
      - "9091:9091"    # Metrics
    depends_on:
      - etcd
      - minio

RAG Service

services:
  rag-service:
    image: donkitai/rag-service:latest
    container_name: rag-service
    profiles: [rag-service, full-stack]
    ports:
      - "8000:8000"
    env_file:
      - .env

Profiles:

  • qdrant - Qdrant vector database only
  • chroma - Chroma vector database only
  • milvus - Milvus vector database with dependencies
  • rag-service - RAG service only
  • full-stack - Qdrant + RAG service

⬆️ Back to top

Architecture

┌─────────────────┐
│  RAGOps Agent   │
│     (CLI)       │
└────────┬────────┘
         │
         ├── MCP Servers ───────────────┐
         │   ├── ragops-rag-planner     │
         │   ├── ragops-chunker         │
         │   ├── ragops-vectorstore     │
         │   └── ragops-compose         │
         │                              │
         └── LLM Providers ─────────────┤
             ├── Vertex AI              │
             ├── OpenAI                 │
             ├── Anthropic              │
             └── Ollama                 │
                                        │
                                        ▼
                            ┌─────────────────────────┐
                            │   Docker Compose        │
                            ├─────────────────────────┤
                            │ Vector Databases:       │
                            │  • Qdrant (6333, 6334)  │
                            │  • Chroma (8015)        │
                            │  • Milvus (19530, 9091) │
                            │    + etcd               │
                            │    + MinIO              │
                            │                         │
                            │ RAG Service:            │
                            │  • rag-service (8000)   │
                            └─────────────────────────┘

⬆️ Back to top

Troubleshooting

Windows + Docker Desktop with WSL2

The agent fully supports Windows with Docker Desktop running in WSL2 mode. Path conversion and Docker communication are handled automatically.

Requirements:

  • Docker Desktop for Windows with WSL2 backend enabled
  • Python 3.12+ installed on Windows (not inside WSL2)
  • Run the agent from Windows PowerShell or Command Prompt

How it works:

  • The agent detects WSL2 Docker automatically
  • Windows paths like C:\Users\... are converted to /mnt/c/Users/... for Docker
  • No manual configuration needed
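
The path mapping described above can be sketched as follows — an illustrative Python function showing the conversion rule, not the agent's actual code:

```python
import re

def windows_to_wsl(path: str) -> str:
    """Map a Windows path like C:\\Users\\me to /mnt/c/Users/me."""
    m = re.match(r"^([A-Za-z]):[\\/](.*)$", path)
    if not m:
        return path  # already a POSIX path; leave untouched
    drive = m.group(1).lower()                 # drive letter becomes /mnt/<drive>
    rest = m.group(2).replace("\\", "/")       # flip path separators
    return f"/mnt/{drive}/{rest}"

print(windows_to_wsl(r"C:\Users\myname\Documents\work_docs"))
# /mnt/c/Users/myname/Documents/work_docs
```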

Troubleshooting:

# 1. Verify Docker is accessible from Windows
docker info

# 2. Check Docker reports Linux (indicates WSL2)
docker info --format "{{.OperatingSystem}}"
# Should output: Docker Desktop (or similar with "linux")

# 3. If Docker commands fail, ensure Docker Desktop is running

MCP Server Connection Issues

If MCP servers fail to start:

# Check MCP server logs
RAGOPS_LOG_LEVEL=DEBUG donkit-ragops

Vector Store Connection

Ensure Docker services are running:

cd projects/<project_id>
docker-compose ps
docker-compose logs qdrant

Credentials Issues

Verify your credentials:

# Vertex AI
gcloud auth application-default print-access-token

# OpenAI
echo $RAGOPS_OPENAI_API_KEY

⬆️ Back to top

License

This project is licensed under the MIT License - see the LICENSE file for details.

Built with ❤️ by Donkit AI

Download files

Download the file for your platform.

Source Distribution

donkit_ragops-0.5.7.tar.gz (287.1 kB)

Uploaded Source

Built Distribution


donkit_ragops-0.5.7-py3-none-any.whl (328.3 kB)

Uploaded Python 3

File details

Details for the file donkit_ragops-0.5.7.tar.gz.

File metadata

  • Download URL: donkit_ragops-0.5.7.tar.gz
  • Size: 287.1 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: poetry/2.1.4 CPython/3.12.12 Linux/6.11.0-1018-azure

File hashes

Hashes for donkit_ragops-0.5.7.tar.gz

| Algorithm | Hash digest |
| --- | --- |
| SHA256 | c6206509ad16868d505992ab77131400c75b0b36bcd3ddc0b5f4b47dd1411ed9 |
| MD5 | e54947a88e67a40328cad12fde38cb82 |
| BLAKE2b-256 | 7f1978cf3b3a41c22f740a187d1ad86b0e804f4418cc46727452c3fb54cbc09b |


File details

Details for the file donkit_ragops-0.5.7-py3-none-any.whl.

File metadata

  • Download URL: donkit_ragops-0.5.7-py3-none-any.whl
  • Size: 328.3 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: poetry/2.1.4 CPython/3.12.12 Linux/6.11.0-1018-azure

File hashes

Hashes for donkit_ragops-0.5.7-py3-none-any.whl

| Algorithm | Hash digest |
| --- | --- |
| SHA256 | 106c841cbeb7caf508dc6987fc50600f1bfba5e02fda249f73ef7d1b09bc1161 |
| MD5 | 9e6b243d237d93d7acd885874a5563e7 |
| BLAKE2b-256 | e8cefc29d49825177e317d018c45b5e4bc4a03979e75221306d380d5c3804baa |

