RAGOps Agent
Optimal RAG in hours, not months.
A smart, LLM-powered CLI agent that automates the entire lifecycle of Retrieval-Augmented Generation (RAG) pipelines – from creation and experimentation to deployment. Forget spending months tweaking chunking strategies, embeddings, and vector DBs by hand. Just describe what you need, and let the agent run 100+ parallel experiments to discover what actually works for your data – fast, accurate, and infra-agnostic.
Built by Donkit AI – Automated Context Engineering.
Table of Contents
- Who is this for?
- Key Features
- Main Capabilities
- Quick Install
- Installation (Alternative Methods)
- Quick Start
- Agent Workflow
- Web UI
- SaaS Mode
- Enterprise Mode
- Modes of work comparison
- MCP Servers
- Examples
- Development
- Docker Compose Services
- Architecture
- Troubleshooting
- License
- Related Projects
Who is this for?
- AI Engineers building assistants and agents
- Teams building accuracy-sensitive, multi-agent RAG systems where errors compound across steps
- Organizations aiming to reduce time-to-value for production AI deployments
Key Features
- Parallel Experimentation Engine – Explores hundreds of pipeline variations (chunking, vector DBs, prompts, rerankers, etc.) to find what performs best, in hours rather than months
- Docker Compose orchestration – Automated deployment of RAG infrastructure (vector DB, RAG service)
- Built-in Evaluation & Scoring – Automatically generates an evaluation dataset (if needed), runs Q&A tests, and scores pipeline accuracy on your real data
- Multiple LLM providers – Supports Vertex AI (recommended), OpenAI, Anthropic Claude, Azure OpenAI, Ollama, and OpenRouter
- Interactive Web UI – Browser-based interface with real-time agent responses and visual project management
- Session-scoped Checklists – Structured workflow with clear stages, approvals, and progress tracking
- Multi-mode Operation – Local, SaaS, and Enterprise deployment options for any scale
Main Capabilities
- Interactive REPL – Start an interactive session with readline history and autocompletion
- Web UI – Browser-based interface at http://localhost:8067 (donkit-ragops-web auto-opens the browser)
- Docker Compose orchestration – Automated deployment of RAG infrastructure (vector DB, RAG service)
- Integrated MCP servers – Built-in support for the full RAG build pipeline (planning, reading, chunking, vector loading, querying, evaluation)
- Checklist-driven workflow – Each RAG project is structured as a checklist with clear stages, approvals, and progress tracking
- Session-scoped checklists – Only the current session's checklists appear in the UI
- SaaS mode – Connect to Donkit cloud for experiments
- Enterprise mode – Deploy to a VPC or on-premises with no vendor lock-in (reach out via https://donkit.ai)
Quick Install
The fastest way to install Donkit RAGOps. The installer automatically handles Python and dependencies.
macOS / Linux:
curl -sSL https://raw.githubusercontent.com/donkit-ai/ragops/main/scripts/install.sh | bash
Windows (PowerShell):
irm https://raw.githubusercontent.com/donkit-ai/ragops/main/scripts/install.ps1 | iex
After installation:
donkit-ragops # Start CLI agent
donkit-ragops-web # Start Web UI (browser opens automatically at http://localhost:8067)
Installation (Alternative Methods)
Option A: Using pipx (Recommended)
# Install pipx if you don't have it
pip install pipx
pipx ensurepath
# Install donkit-ragops
pipx install donkit-ragops
Option B: Using pip
pip install donkit-ragops
Option C: Using Poetry (for development)
# Create a new project directory
mkdir ~/ragops-workspace
cd ~/ragops-workspace
# Initialize Poetry project
poetry init --no-interaction --python="^3.12"
# Add donkit-ragops
poetry add donkit-ragops
# Activate the virtual environment
poetry shell
After activation, you can run the agent with:
donkit-ragops
Or run directly without activating the shell:
poetry run donkit-ragops
Quick Start
Prerequisites
- Python 3.12+ installed
- Docker Desktop installed and running (required for vector database)
- Windows users: Docker Desktop with WSL2 backend is fully supported
- API key for your chosen LLM provider (Vertex AI, OpenAI, or Anthropic)
Step 1: Install the package
pip install donkit-ragops
Step 2: Run the agent (first time)
donkit-ragops
On first run, an interactive setup wizard will guide you through configuration:
- Choose your LLM provider (Vertex AI, OpenAI, Anthropic, or Ollama)
- Enter API key or credentials path
- Optional: Configure log level
- Configuration is saved to a .env file automatically
That's it! No manual .env creation needed – the wizard handles everything.
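For reference, the generated file contains a few of the variables documented under Environment Variables below. A minimal sketch for the OpenAI provider (values are placeholders, not real keys):

```
RAGOPS_LLM_PROVIDER=openai
RAGOPS_LLM_MODEL=gpt-4o-mini
RAGOPS_OPENAI_API_KEY=<your key>
RAGOPS_LOG_LEVEL=ERROR
```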
Reconfiguration
To reconfigure or change settings later:
# Run setup wizard to change configuration
donkit-ragops setup
The setup wizard allows you to:
Local Mode:
- Choose LLM provider (Vertex AI, OpenAI, Anthropic, Ollama, OpenRouter, Donkit)
- Configure API keys and credentials
- Set optional parameters (models, base URLs, etc.)
SaaS Mode:
- Login/logout with Donkit cloud
- Manage integrations (OpenRouter API keys, etc.)
- Configure cloud-based LLM providers
Step 3: Start using the agent (local mode)
Tell the agent what you want to build:
> Create a RAG pipeline for my documents in /Users/myname/Documents/work_docs
The agent will automatically:
- Create a projects/<project_id>/ directory
- Plan RAG configuration
- Process and chunk your documents
- Start Qdrant vector database (via Docker)
- Load data into the vector store
- Deploy RAG query service
What gets created
./
├── .env                     # Your configuration (auto-created by wizard)
└── projects/
    └── my-project-abc123/   # Auto-created by agent
        ├── compose/         # Docker Compose files
        │   ├── docker-compose.yml
        │   └── .env
        ├── chunks/          # Processed document chunks
        └── rag_config.json  # RAG configuration
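The schema of rag_config.json is not specified here; purely as an illustration, a config produced by the planner could look roughly like this (field names are assumptions, not the actual schema):

```
{
  "vector_store": "qdrant",
  "embeddings_model": "text-embedding-3-small",
  "chunking": {
    "strategy": "fixed_size",
    "chunk_size": 1000,
    "overlap": 100
  }
}
```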
Interactive Mode (REPL)
# Start interactive session
donkit-ragops
# With specific provider
donkit-ragops -p vertexai
# With custom model
donkit-ragops -p openai -m gpt-5.2
# Start in SaaS/enterprise mode
donkit-ragops --enterprise
REPL Commands
Inside the interactive session, use these commands:
- /help, /h, /? – Show available commands
- /exit, /quit, /q – Exit the agent
- /clear – Clear conversation history and screen
- /provider – Switch LLM provider interactively
- /model – Switch LLM model interactively
Command-line Options
- -p, --provider – Override LLM provider from settings
- -m, --model – Specify model name
- -s, --system – Custom system prompt
- --local – Force local mode (default)
- --enterprise – Force enterprise mode (requires setup with donkit-ragops setup)
- --show-checklist / --no-checklist – Toggle checklist panel (default: shown)
Commands
# Setup wizard - configure Local or SaaS mode
donkit-ragops setup
# Health check
donkit-ragops ping
# Show current mode and authentication status
donkit-ragops status
# Auto-upgrade to latest version
donkit-ragops upgrade # Check and upgrade (interactive)
donkit-ragops upgrade -y # Upgrade without confirmation
Note: The upgrade command automatically detects your installation method (pip, pipx, or poetry) and runs the appropriate upgrade command.
Environment Variables
LLM Provider Configuration
- RAGOPS_LLM_PROVIDER – LLM provider name (e.g., openai, vertex, azure_openai, ollama, openrouter)
- RAGOPS_LLM_MODEL – Model name (e.g., gpt-4o-mini for OpenAI, gemini-2.5-flash for Vertex)
OpenAI / OpenRouter / Ollama
- RAGOPS_OPENAI_API_KEY – OpenAI API key (also used for OpenRouter and Ollama)
- RAGOPS_OPENAI_BASE_URL – OpenAI base URL (default: https://api.openai.com/v1)
  - OpenRouter: https://openrouter.ai/api/v1
  - Ollama: http://localhost:11434/v1
- RAGOPS_OPENAI_EMBEDDINGS_MODEL – Embedding model name (default: text-embedding-3-small)
Azure OpenAI
- RAGOPS_AZURE_OPENAI_API_KEY – Azure OpenAI API key
- RAGOPS_AZURE_OPENAI_ENDPOINT – Azure OpenAI endpoint URL
- RAGOPS_AZURE_OPENAI_API_VERSION – Azure API version (default: 2024-02-15-preview)
- RAGOPS_AZURE_OPENAI_DEPLOYMENT – Azure deployment name for the chat model
- RAGOPS_AZURE_OPENAI_EMBEDDINGS_DEPLOYMENT – Azure deployment name for the embeddings model
Vertex AI (Google Cloud)
- RAGOPS_VERTEX_CREDENTIALS – Path to the Vertex AI service account JSON
- RAGOPS_VERTEX_PROJECT – Google Cloud project ID (optional, extracted from credentials if not set)
- RAGOPS_VERTEX_LOCATION – Vertex AI location (default: us-central1)
Logging
- RAGOPS_LOG_LEVEL – Logging level (default: ERROR)
Agent Workflow
The agent follows a structured workflow:
1. Language Detection – Detects the user's language from the first message
2. Project Creation – Creates the project directory structure
3. Checklist Creation – Generates a task checklist in the user's language
4. Step-by-Step Execution:
   - Asks for permission before each step
   - Marks the item as in_progress
   - Executes the task using the appropriate MCP tool
   - Reports results
   - Marks the item as completed
5. Deployment – Sets up Docker Compose infrastructure
6. Data Loading – Loads documents into the vector store
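The step loop above can be sketched in a few lines of Python. This is illustrative only: run_checklist, ask_permission, and execute are hypothetical names, not the agent's actual API.

```python
# Illustrative sketch of the checklist execution loop described above.
# Statuses mirror the workflow: pending -> in_progress -> completed.

def run_checklist(items, ask_permission, execute):
    """Run each checklist item, asking for approval before each step."""
    done = []
    for item in items:
        if not ask_permission(item["task"]):
            item["status"] = "skipped"
            continue
        item["status"] = "in_progress"
        item["result"] = execute(item["task"])  # e.g. call an MCP tool
        item["status"] = "completed"
        done.append(item)
    return done

if __name__ == "__main__":
    checklist = [
        {"task": "plan_rag_config", "status": "pending"},
        {"task": "chunk_documents", "status": "pending"},
    ]
    run_checklist(checklist, lambda t: True, lambda t: f"{t}: ok")
    print([i["status"] for i in checklist])
```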
Web UI
RAGOps includes a browser-based interface for easier interaction:
# Start Web UI server (browser opens automatically)
donkit-ragops-web
# Start Web UI without opening browser
donkit-ragops-web --no-browser
# Development mode with hot reload
donkit-ragops-web --dev
The browser will automatically open at http://localhost:8067. The Web UI provides:
- Visual project management
- File upload and attachment
- Real-time agent responses
- Checklist visualization
- Settings configuration
SaaS Mode
SaaS mode is a fully managed cloud platform. All backend infrastructure – databases, vector stores, RAG services, and experiment runners – is hosted by Donkit. You get the same CLI interface, but with powerful cloud features.
Setup
# 1. Run setup wizard and choose SaaS mode
donkit-ragops setup
# The wizard will guide you through:
# - Login with your API token
# - Configure integrations (OpenRouter, etc.)
# - Manage credentials
# 2. Start in SaaS mode
donkit-ragops --enterprise
# 3. Check status
donkit-ragops status
Managing SaaS Configuration
Use donkit-ragops setup to:
- Login/Logout โ Authenticate with Donkit cloud
- Manage Integrations โ Add/update/remove API keys for:
- OpenRouter (access 100+ models)
- More providers coming soon
Your credentials are stored securely in system keyring and .env file.
What's Included
- Managed infrastructure – No Docker, no local setup. Everything runs in the Donkit cloud
- Automated experiments – Run 100+ RAG architecture iterations to find the optimal configuration
- Experiment tracking – Compare chunking strategies, embeddings, and retrievers side by side
- Evaluation pipelines – Batch evaluation with precision/recall/accuracy metrics
- File attachments – Attach files using @/path/to/file syntax in chat
- Persistent history – Conversation and project history preserved across sessions
- MCP over HTTP – All MCP tools executed server-side
Enterprise Mode
Enterprise mode runs fully inside your infrastructure – no data ever leaves your network. All components, from vector databases to experiment runners, are deployed within your VPC, Kubernetes cluster, or even a single secured server. You get the same CLI and web UI, but with full control over data, compute, and compliance. No vendor lock-in, no hidden dependencies – just RAG automation, on your terms.
What's Included
- Self-hosted infrastructure – Run the full Donkit stack in your VPC, Kubernetes cluster, or air-gapped server
- Automated experiments – Execute 100+ RAG variations locally to identify the best-performing pipeline
- Experiment tracking – Monitor and compare pipeline variants (chunking, retrieval, reranking) within your environment
- Evaluation pipelines – Run secure, on-prem evaluation with precision, recall, and answer relevancy metrics
- Local file attachments – Add documents using @/path/to/file in chat, or connect your data sources via APIs
- Session-based state – Preserve project and conversation history within your private deployment
- MCP over IPC – All orchestration runs inside your infrastructure; no external HTTP calls required
Modes of work comparison
| Feature | Local Mode | SaaS Mode | Enterprise Mode |
|---|---|---|---|
| Infrastructure | Self-hosted (Docker) | Managed by Donkit | Managed by customer |
| Vector stores | Local Qdrant/Milvus/Chroma | Cloud-hosted | Managed by customer |
| Experiments | Manual | Automated iterations | Automated iterations |
| Evaluation | Basic | Full pipeline with metrics | Full pipeline with metrics |
| Data persistence | Local files | Cloud database | Full data residency control |
MCP Servers
RAGOps Agent includes built-in MCP servers:
ragops-rag-planner
Plans RAG pipeline configuration based on requirements.
Tools:
- plan_rag_config – Generate RAG configuration from requirements
ragops-read-engine
Processes and converts documents from various formats.
Tools:
- process_documents – Convert PDF, DOCX, PPTX, XLSX, and images to text/JSON/markdown/TOON
ragops-chunker
Chunks documents for vector storage.
Tools:
- chunk_documents – Split documents into chunks with configurable strategies
- list_chunked_files – List processed chunk files
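As an illustration of the simplest strategy, a fixed-size character chunker with overlap might look like this (a sketch; chunk_text is a hypothetical helper, and the real chunker supports richer, configurable strategies):

```python
# Illustrative fixed-size chunking with overlap. Overlap keeps context
# that straddles a chunk boundary retrievable from either side.

def chunk_text(text: str, chunk_size: int = 1000, overlap: int = 100) -> list[str]:
    """Split text into overlapping windows of at most chunk_size characters."""
    if chunk_size <= overlap:
        raise ValueError("chunk_size must exceed overlap")
    chunks = []
    step = chunk_size - overlap
    for start in range(0, len(text), step):
        chunks.append(text[start:start + chunk_size])
        if start + chunk_size >= len(text):
            break
    return chunks
```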
ragops-vectorstore-loader
Loads chunks into vector databases and manages documents.
Tools:
- vectorstore_load – Load documents into Qdrant, Chroma, or Milvus (supports incremental loading)
- delete_from_vectorstore – Remove documents from the vector store by filename or document_id
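The incremental-loading and deletion semantics can be sketched with a plain dict standing in for the vector store (illustrative only; load_incremental and delete_from_store are hypothetical names, not the tool's API):

```python
# Illustrative semantics: re-loading a document_id replaces its entry
# (incremental upsert); deletion removes entries by id or source filename.

def load_incremental(store: dict, docs: list[dict]) -> dict:
    """Upsert documents into an in-memory 'vector store' keyed by document_id."""
    for doc in docs:
        store[doc["document_id"]] = doc
    return store

def delete_from_store(store: dict, *, document_id=None, filename=None) -> int:
    """Remove entries matching a document_id or a source filename."""
    ids = [k for k, v in store.items()
           if k == document_id or v.get("filename") == filename]
    for k in ids:
        del store[k]
    return len(ids)
```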
ragops-compose-manager
Manages Docker Compose infrastructure.
Tools:
- init_project_compose – Initialize Docker Compose for a project
- compose_up – Start services
- compose_down – Stop services
- compose_status – Check service status
- compose_logs – View service logs
ragops-rag-query
Executes RAG queries against deployed services.
Tools:
- search_documents – Search for relevant documents in the vector database
- get_rag_prompt – Get a formatted RAG prompt with retrieved context
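Conceptually, get_rag_prompt formats retrieved chunks into a context block ahead of the question. A hedged sketch of that idea (the real tool's template is not documented here; build_rag_prompt is a hypothetical name):

```python
# Illustrative RAG prompt assembly: number the retrieved chunks so the
# model can cite them, then append the user's question.

def build_rag_prompt(question: str, chunks: list[str]) -> str:
    context = "\n\n".join(f"[{i + 1}] {c}" for i, c in enumerate(chunks))
    return (
        "Answer using only the context below.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}"
    )
```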
rag-evaluation
Evaluates RAG pipeline performance with batch processing.
Tools:
- evaluate_batch – Run batch evaluation from CSV/JSON; compute precision/recall/accuracy
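For intuition, these metrics can be computed from retrieved vs. relevant document ids roughly as follows (a sketch; evaluate_batch's exact formulas may differ, and retrieval_scores is a hypothetical name):

```python
# Illustrative precision/recall/accuracy over a single query's retrieval.
# tp: relevant docs retrieved; fp: irrelevant docs retrieved;
# fn: relevant docs missed; tn: the rest of the corpus.

def retrieval_scores(retrieved: set, relevant: set, corpus_size: int) -> dict:
    tp = len(retrieved & relevant)
    fp = len(retrieved - relevant)
    fn = len(relevant - retrieved)
    tn = corpus_size - tp - fp - fn
    return {
        "precision": tp / (tp + fp) if retrieved else 0.0,
        "recall": tp / (tp + fn) if relevant else 0.0,
        "accuracy": (tp + tn) / corpus_size if corpus_size else 0.0,
    }
```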
donkit-ragops-mcp
Unified MCP server that combines all servers above into a single endpoint.
# Run unified server
donkit-ragops-mcp
Claude Desktop configuration:
{
"mcpServers": {
"donkit-ragops-mcp": {
"command": "donkit-ragops-mcp"
}
}
}
All tools are available with prefixes:
- chunker_* – Document chunking
- compose_* – Docker Compose orchestration
- evaluation_* – RAG evaluation
- planner_* – RAG configuration planning
- query_* – RAG query execution
- reader_* – Document reading/parsing
- vectorstore_* – Vector store operations
Note: Checklist management is handled by built-in agent tools, not MCP.
Examples
Basic RAG Pipeline
donkit-ragops
> Create a RAG pipeline for customer support docs in ../docs folder
The agent will:
- Create project structure
- Plan RAG configuration
- Chunk documents from ../docs
- Set up Qdrant + RAG service
- Load data into vector store
Custom Configuration
donkit-ragops -p vertexai -m gemini-2.5-pro
> Build RAG for legal documents with 1000 token chunks and reranking
Multiple Projects
Each project gets its own:
- Project directory (projects/<project_id>)
- Docker Compose setup
- Vector store collection
- Configuration
Development
Prerequisites
- Python 3.12+
- Poetry for dependency management
- Docker Desktop (for testing vector stores and RAG services)
Setup
# Clone the repository
git clone https://github.com/donkit-ai/ragops.git
cd ragops/ragops-agent-cli
# Install dependencies
poetry install
# Activate virtual environment
poetry shell
Project Structure
ragops-agent/
├── src/donkit_ragops/
│   ├── agent/                  # LLM agent core and local tools
│   │   ├── agent.py            # Main LLMAgent class
│   │   ├── prompts.py          # System prompts for different providers
│   │   └── local_tools/        # Built-in agent tools
│   ├── llm/                    # LLM provider integrations
│   │   └── providers/          # OpenAI, Vertex, Anthropic, etc.
│   ├── mcp/                    # Model Context Protocol
│   │   ├── client.py           # MCP client implementation
│   │   └── servers/            # Built-in MCP servers
│   ├── repl/                   # REPL implementation
│   │   ├── base.py             # Base REPL context
│   │   ├── local_repl.py       # Local mode REPL
│   │   └── enterprise_repl.py  # SaaS/Enterprise mode REPL
│   ├── web/                    # Web UI (FastAPI + WebSocket)
│   │   ├── app.py              # FastAPI application
│   │   └── routes/             # API endpoints
│   ├── enterprise/             # SaaS/Enterprise mode components
│   ├── cli.py                  # CLI entry point (Typer)
│   └── config.py               # Configuration management
├── tests/                      # Test suite (170+ tests)
└── pyproject.toml              # Poetry project configuration
Running the CLI Locally
# Run CLI
poetry run donkit-ragops
# Run with specific provider
poetry run donkit-ragops -p openai -m gpt-4o
# Run Web UI
poetry run donkit-ragops-web
# Run unified MCP server
poetry run donkit-ragops-mcp
Building the static frontend
The Web UI is served from src/donkit_ragops/web/static/, not from frontend/dist/. To see your frontend changes when running in production mode (without --dev):
1. From the project root, run the full build script (it builds Vite and copies the output to static/):
   ./scripts/build-frontend.sh   # macOS/Linux
   scripts/build-frontend.ps1    # Windows PowerShell
2. Restart donkit-ragops-web if it is already running.
If you only run npm run build inside frontend/, the result goes to frontend/dist/ and the app will still serve the old files from static/. Use the script above so that the built files are copied into static/.
For live reload during development, use:
poetry run donkit-ragops-web --dev
Running Tests
# Run all tests
poetry run pytest
# Run with coverage
poetry run pytest --cov=donkit_ragops
# Run specific test file
poetry run pytest tests/test_agent.py
# Run specific test
poetry run pytest tests/test_agent.py::test_function_name -v
Code Quality
# Format code (REQUIRED before commit)
poetry run ruff format .
# Lint and auto-fix (REQUIRED before commit)
poetry run ruff check . --fix
# Check without fixing
poetry run ruff check .
Version Management
IMPORTANT: Version must be incremented in pyproject.toml for every PR:
# Check current version
grep "^version" pyproject.toml
# Increment version in pyproject.toml before committing
# patch: 0.4.5 โ 0.4.6 (bug fixes)
# minor: 0.4.5 โ 0.5.0 (new features)
# major: 0.4.5 โ 1.0.0 (breaking changes)
Adding a New MCP Server
Step 1. Create server file in src/donkit_ragops/mcp/servers/:
from fastmcp import FastMCP
from pydantic import BaseModel, Field
server = FastMCP("my-server")
class MyToolArgs(BaseModel):
param: str = Field(description="Parameter description")
@server.tool(name="my_tool", description="What the tool does")
async def my_tool(args: MyToolArgs) -> str:
# Implementation
return "result"
def main() -> None:
server.run(transport="stdio")
Step 2. Add entry point in pyproject.toml:
[tool.poetry.scripts]
ragops-my-server = "donkit_ragops.mcp.servers.my_server:main"
Step 3. Mount in unified server (donkit_ragops_mcp.py):
from .my_server import server as my_server
unified_server.mount(my_server, prefix="my")
Adding a New LLM Provider
1. Create a provider in src/donkit_ragops/llm/providers/
2. Register it in provider_factory.py
3. Add configuration to config.py
4. Update supported_models.py
Debugging
# Enable debug logging
RAGOPS_LOG_LEVEL=DEBUG poetry run donkit-ragops
# Debug MCP servers
RAGOPS_LOG_LEVEL=DEBUG poetry run donkit-ragops-mcp
Docker Compose Services
The agent can deploy these services using profiles:
Qdrant (Vector Database)
services:
qdrant:
image: qdrant/qdrant:latest
container_name: qdrant
profiles: [qdrant, full-stack]
ports:
- "6333:6333" # HTTP API
- "6334:6334" # gRPC API
volumes:
- qdrant_data:/qdrant/storage
Chroma (Vector Database)
services:
chroma:
image: chromadb/chroma:latest
container_name: chroma
profiles: [chroma]
ports:
- "8015:8000"
volumes:
- chroma_data:/chroma/data
Milvus (Vector Database)
Requires etcd and MinIO:
services:
etcd:
image: quay.io/coreos/etcd:v3.5.5
container_name: milvus-etcd
profiles: [milvus]
minio:
image: minio/minio:latest
container_name: milvus-minio
profiles: [milvus]
milvus:
image: milvusdb/milvus:v2.3.21
container_name: milvus-standalone
profiles: [milvus]
ports:
- "19530:19530" # Milvus API
- "9091:9091" # Metrics
depends_on:
- etcd
- minio
RAG Service
services:
rag-service:
image: donkitai/rag-service:latest
container_name: rag-service
profiles: [rag-service, full-stack]
ports:
- "8000:8000"
env_file:
- .env
Profiles:
- qdrant – Qdrant vector database only
- chroma – Chroma vector database only
- milvus – Milvus vector database with dependencies
- rag-service – RAG service only
- full-stack – Qdrant + RAG service
Architecture
┌─────────────────┐
│  RAGOps Agent   │
│      (CLI)      │
└────────┬────────┘
         │
├── MCP Servers ──────────────┐
│   ├── ragops-rag-planner    │
│   ├── ragops-chunker        │
│   ├── ragops-vectorstore    │
│   └── ragops-compose        │
│                             │
├── LLM Providers ────────────┤
│   ├── Vertex AI             │
│   ├── OpenAI                │
│   ├── Anthropic             │
│   └── Ollama                │
         │
         ▼
┌─────────────────────────┐
│     Docker Compose      │
├─────────────────────────┤
│ Vector Databases:       │
│ • Qdrant (6333, 6334)   │
│ • Chroma (8015)         │
│ • Milvus (19530, 9091)  │
│   + etcd                │
│   + MinIO               │
│                         │
│ RAG Service:            │
│ • rag-service (8000)    │
└─────────────────────────┘
Troubleshooting
Windows + Docker Desktop with WSL2
The agent fully supports Windows with Docker Desktop running in WSL2 mode. Path conversion and Docker communication are handled automatically.
Requirements:
- Docker Desktop for Windows with WSL2 backend enabled
- Python 3.12+ installed on Windows (not inside WSL2)
- Run the agent from Windows PowerShell or Command Prompt
How it works:
- The agent detects WSL2 Docker automatically
- Windows paths like C:\Users\... are converted to /mnt/c/Users/... for Docker
- No manual configuration needed
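The Windows-to-WSL2 path conversion described above can be sketched as follows (illustrative; the agent's real implementation may differ):

```python
# Illustrative Windows -> WSL2 path conversion: C:\Users\... becomes
# /mnt/c/Users/..., while POSIX paths pass through unchanged.
import re

def windows_to_wsl(path: str) -> str:
    m = re.match(r"^([A-Za-z]):[\\/](.*)$", path)
    if not m:
        return path  # already a POSIX path
    drive = m.group(1).lower()
    rest = m.group(2).replace("\\", "/")
    return f"/mnt/{drive}/{rest}"
```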
Troubleshooting:
# 1. Verify Docker is accessible from Windows
docker info
# 2. Check Docker reports Linux (indicates WSL2)
docker info --format "{{.OperatingSystem}}"
# Should output: Docker Desktop (or similar with "linux")
# 3. If Docker commands fail, ensure Docker Desktop is running
MCP Server Connection Issues
If MCP servers fail to start:
# Check MCP server logs
RAGOPS_LOG_LEVEL=DEBUG donkit-ragops
Vector Store Connection
Ensure Docker services are running:
cd projects/<project_id>
docker-compose ps
docker-compose logs qdrant
Credentials Issues
Verify your credentials:
# Vertex AI
gcloud auth application-default print-access-token
# OpenAI
echo $RAGOPS_OPENAI_API_KEY
License
This project is licensed under the MIT License - see the LICENSE file for details.
Related Projects
- donkit-chunker – Document chunking library
- donkit-vectorstore-loader – Vector store loading utilities
- donkit-read-engine – Document parsing engine
Built with ❤️ by Donkit AI