
AI-powered DevOps CLI assistant


DevOps Agent

An AI-powered CLI tool that assists with DevOps troubleshooting, Kubernetes application architecture, log analysis, and infrastructure code generation.

Features

  • 📊 Log Analysis: Analyze log files and get actionable insights
  • 💬 Query Interface: Ask questions about DevOps best practices, Terraform, Kubernetes, etc.
  • 🛠️ Template Generation: Generate infrastructure code templates
  • 🤖 AI-Powered: Leverages multiple LLM providers (OpenAI, Anthropic, Gemini, Ollama, vLLM)
  • 🎯 Flexible Provider Selection: Choose your preferred LLM provider and model dynamically
  • 🔒 Self-Hosted Options: Run privately with Ollama or vLLM
  • 🧠 Reasoning Mode: Enable advanced reasoning capabilities for complex queries
  • 🐛 Debug Mode: Troubleshoot agent behavior with detailed logging
  • 💾 Memory Management: Persistent context using Qdrant vector database
  • 🎨 Interactive Mode: Engage in continuous conversations with the agent
  • 📝 Multiple Output Formats: Export results as text, JSON, or Markdown

Installation

# Clone the repository
git clone https://github.com/yourusername/devops-agent.git
cd devops-agent

# Install in development mode
pip install -e .

# Or install from PyPI (when published)
pip install devops-agent
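After installation, you can verify that the entry point is available (this assumes the package installs a devops-agent console script, as the usage examples below do):

```shell
# Confirm the CLI is on your PATH and list the available commands/options
devops-agent --help
```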

Configuration

LLM API Keys

# For OpenAI
export OPENAI_API_KEY=YOUR_API_KEY

# For Anthropic Claude
export ANTHROPIC_API_KEY=YOUR_API_KEY

# For Google Gemini
export GEMINI_API_KEY=YOUR_API_KEY

# For Ollama (self-hosted, typically no API key needed)
export OLLAMA_API_KEY=YOUR_API_KEY  # Optional

# For vLLM (self-hosted)
export VLLM_API_KEY=YOUR_API_KEY

Qdrant Config for Agent Memory

(If Qdrant is not configured, the agent falls back to an in-memory vector store.)

export QDRANT_URL=YOUR_QDRANT_URL
export QDRANT_API_KEY=YOUR_QDRANT_API_KEY
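If you don't have a Qdrant instance yet, one quick way to try persistent memory locally is the official Qdrant Docker image (the default HTTP port is 6333, and an unsecured local instance needs no API key):

```shell
# Start a throwaway local Qdrant instance; mount a volume if you want
# the data to survive container restarts
docker run -p 6333:6333 qdrant/qdrant

# Point the agent at the local instance
export QDRANT_URL=http://localhost:6333
```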

Usage

Ask Questions

devops-agent run --query "I need a Terraform script to spin up Azure Blob Storage"
devops-agent run --query "How do I increase my pod memory and CPU in Kubernetes?"

Interactive Mode

devops-agent run --interactive
# or
devops-agent run -i

Advanced Options

Choose Your LLM Provider and Model

# Use OpenAI with a specific model
devops-agent run --provider openai --model gpt-4o --query "your question"

# Use Anthropic Claude
devops-agent run --provider anthropic --model claude-sonnet-4-20250514 --query "your question"

# Use Google Gemini
devops-agent run --provider google --model gemini-2.0-flash-exp --query "your question"

# Use Ollama (self-hosted)
devops-agent run --provider ollama --model llama3 --query "your question"

# Use vLLM (self-hosted)
devops-agent run --provider vllm --model your-model-name --query "your question"

Enable Debug Mode

devops-agent run --query "your question" --debug_mode true

Enable Reasoning Mode

devops-agent run --query "your question" --reasoning_enabled true

Combine Multiple Options

# Interactive mode with specific provider, model, and reasoning
devops-agent run -i --provider anthropic --model claude-sonnet-4-20250514 --reasoning_enabled true

# Query with debug mode and custom output
devops-agent run --query "docker setup for microservices" --provider openai --model gpt-4o --debug_mode true --output result.md --format markdown

CLI Options Reference

devops-agent run Options

Option                Type     Description
------                ----     -----------
--log-file            Path     Path to a log file to analyze
--provider            String   LLM provider: openai, anthropic, google, ollama, vllm
--model               String   Model name (e.g., gpt-4o, claude-sonnet-4-20250514, gemini-2.0-flash-exp)
--query               String   Query to ask the DevOps agent
--output              Path     Output file path for saving results
--format              Choice   Output format: text, json, or markdown (default: text)
--interactive, -i     Flag     Run in interactive mode for continuous conversation
--debug_mode          Boolean  Enable debug mode with detailed logging
--reasoning_enabled   Boolean  Enable reasoning mode for complex problem-solving
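As an example of an option not shown in the usage section above, --log-file can be combined with an output path and format. This sketch assumes an app.log file exists in the current directory:

```shell
# Analyze a local log file and save the findings as JSON
devops-agent run --log-file app.log --query "summarize the errors in this log" \
  --output findings.json --format json
```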

Provider-Specific Model Examples

OpenAI:

  • gpt-4o
  • gpt-5-mini
  • gpt-5.1

Anthropic:

  • claude-sonnet-4-20250514
  • claude-sonnet-4-5-20250929
  • claude-3-5-sonnet-20241022

Google:

  • gemini-3-pro
  • gemini-2.5-pro
  • gemini-2.5-flash

Ollama (Self-hosted):

  • granite4:3b
  • qwen3:8b
  • cogito:latest
  • Any model you have pulled locally
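Any model pulled with ollama pull can be used. For example, assuming Ollama is installed and running locally:

```shell
# Fetch a model into the local Ollama library, then use it with the agent
ollama pull qwen3:8b
devops-agent run --provider ollama --model qwen3:8b --query "your question"
```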

vLLM (Self-hosted):

  • Any model served by your vLLM instance
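For vLLM, the name passed to --model should match whatever your server is serving. As a sketch, using vLLM's OpenAI-compatible server with an illustrative model name (substitute one you actually host):

```shell
# Serve a model with vLLM's OpenAI-compatible API (listens on port 8000 by default)
vllm serve Qwen/Qwen2.5-7B-Instruct

# Query the agent against the served model
devops-agent run --provider vllm --model Qwen/Qwen2.5-7B-Instruct --query "your question"
```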

Development

# Install development dependencies
pip install -e ".[dev]"

# Run tests
pytest

# Format code
black devops_agent/
isort devops_agent/

# Lint
flake8 devops_agent/

Project Structure

devops-agent/
├── devops_agent/          # Main package
│   ├── cli.py            # CLI interface
│   ├── core/             # Core functionality
│   ├── templates/        # Template generators
│   ├── utils/            # Utilities
│   └── prompts/          # LLM prompts
└── docs/                 # Documentation

Contributing

Contributions are welcome! Please feel free to submit a Pull Request.

License

Apache 2.0 License - see LICENSE file for details

Roadmap

  • Implement log analysis with pattern detection
  • Add support for MCP so the agent can use the local file system for quick access
  • Add support for human-in-the-loop workflows for more focused, collaborative work
  • Support for custom prompt templates
  • Agent as a Service with a privacy-first design

Support

For issues and questions, please open an issue on GitHub.

Special Credits

  • Built with the Agno 2.0 framework for multi-agent orchestration
  • Uses POML for structured prompt engineering
  • Uses Qdrant for memory management
  • Powered by Claude (Anthropic), GPT (OpenAI), and Gemini (Google)

Download files

Download the file for your platform.

Source Distribution

devops_agent-0.0.5.tar.gz (46.1 kB)

Uploaded Source

Built Distribution


devops_agent-0.0.5-py3-none-any.whl (49.0 kB)

Uploaded Python 3

File details

Details for the file devops_agent-0.0.5.tar.gz.

File metadata

  • Download URL: devops_agent-0.0.5.tar.gz
  • Upload date:
  • Size: 46.1 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.11.14

File hashes

Hashes for devops_agent-0.0.5.tar.gz
Algorithm Hash digest
SHA256 9ba4a3d7a8e9cab18582f2395b170430b52dcadcd50ea5f5b97850d48678e3de
MD5 48bda91025f78b268dc61024081eba61
BLAKE2b-256 88500c4b0934df6aedc444e68e29e287f0fd6810b702746d12d3b12946751cfe


File details

Details for the file devops_agent-0.0.5-py3-none-any.whl.

File metadata

  • Download URL: devops_agent-0.0.5-py3-none-any.whl
  • Upload date:
  • Size: 49.0 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.11.14

File hashes

Hashes for devops_agent-0.0.5-py3-none-any.whl
Algorithm Hash digest
SHA256 5bb7127cf659834f9eb5f2708562a2f44609c54fc39afef70c1213f4d3486885
MD5 45780867a358b903df063fde401eac05
BLAKE2b-256 1bd0cf5c177db41caecc1e94ecc0f0679793f0f217f9389aa4a897da64914caa

