Agent as Code - Python Package
Docker-like CLI for AI agents with hybrid Go + Python architecture and Enhanced LLM Intelligence
🚀 Hybrid Architecture
Agent as Code combines the performance of Go with the ecosystem of Python:
- ⚡ Go Binary Core: High-performance CLI operations with 10x speed improvement
- 🐍 Python Wrapper: Seamless integration with Python development workflows
- 🧠 Enhanced LLM Intelligence: AI-powered agent creation and optimization
- 📦 Zero Dependencies: Single binary with no runtime requirements
- 🌍 Cross-Platform: Native binaries for Linux, macOS, Windows (x86_64, ARM64)
What is Agent as Code?
Agent as Code (AaC) brings the simplicity of Docker to AI agent development. Just like Docker revolutionized application deployment, Agent as Code revolutionizes AI agent development with:
- Familiar Commands: agent build, agent run, agent push - just like Docker
- Enhanced LLM Commands: agent llm create-agent, agent llm optimize - AI-powered intelligence
- Declarative Configuration: Define agents with simple agent.yaml files
- Template System: Pre-built templates for common use cases
- Multi-Runtime Support: Python, Node.js, Go, and more
- Registry Integration: Share and discover agents easily
- Intelligent Generation: Automatically create fully functional agents with tests and documentation
🧠 Enhanced LLM Commands
The new LLM intelligence features provide:
- agent llm create-agent [USE_CASE]: Create intelligent, fully functional agents
- agent llm optimize [MODEL] [USE_CASE]: Optimize models for specific use cases
- agent llm benchmark: Comprehensive model benchmarking
- agent llm deploy-agent [AGENT_NAME]: Deploy and test agents locally
- agent llm analyze [MODEL]: Deep model analysis and insights
Quick Start
Installation
pip install agent-as-code
Create Your First Agent (Traditional Way)
# Create a new chatbot agent
agent init my-chatbot --template chatbot
# Navigate to the project
cd my-chatbot
# Build the agent
agent build -t my-chatbot:latest .
# Run the agent
agent run my-chatbot:latest
Create Your First Intelligent Agent (Enhanced LLM Way)
# Create an intelligent agent with AI-powered generation
agent llm create-agent chatbot
# Navigate to the generated project
cd chatbot-agent
# Deploy and test the agent automatically
agent llm deploy-agent chatbot-agent
Your agent is now running at http://localhost:8080 with comprehensive testing and validation! 🚀
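Before wiring the agent into anything else, you can smoke-test it from Python. This is a minimal stdlib sketch, assuming the deployed agent exposes a GET /health endpoint on port 8080 (matching the healthCheck convention shown later in this document):

```python
import urllib.error
import urllib.request


def check_health(url: str, timeout: float = 5.0) -> bool:
    """Return True if the agent's health endpoint answers with HTTP 200."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        # Agent not running, wrong port, or slow to start.
        return False


if __name__ == "__main__":
    ok = check_health("http://localhost:8080/health")
    print("agent healthy" if ok else "agent not reachable")
```

The function swallows connection errors and returns False, so it is safe to call in a retry loop while the container starts up.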
Available Templates
Get started instantly with pre-built templates:
agent init my-bot --template chatbot # Customer support chatbot
agent init analyzer --template sentiment # Sentiment analysis
agent init summarizer --template summarizer # Document summarization
agent init translator --template translator # Language translation
agent init insights --template data-analyzer # Data analysis
agent init writer --template content-gen # Content generation
🧠 Enhanced LLM Use Cases
🚀 Intelligent Agent Creation
# Create fully functional agents with AI-powered generation
agent llm create-agent chatbot
agent llm create-agent sentiment-analyzer
agent llm create-agent workflow-automation
# Each agent includes:
# - Optimized Python FastAPI application
# - Comprehensive test suite
# - Production-ready Dockerfile
# - Detailed documentation
# - CI/CD workflows
# - Health checks and monitoring
⚡ Model Optimization
# Optimize models for specific use cases
agent llm optimize llama2 chatbot
agent llm optimize mistral:7b code-generation
agent llm optimize codellama:13b debugging
# Features:
# - Parameter tuning (temperature, top_p, etc.)
# - Custom system messages
# - Context window optimization
# - Performance benchmarks
# - Use case specific configurations
📊 Comprehensive Benchmarking
# Benchmark all local models
agent llm benchmark
# Focus on specific tasks
agent llm benchmark --tasks chatbot,code,analysis
# Get detailed reports
agent llm benchmark --output json
# Metrics include:
# - Response time and throughput
# - Memory usage and efficiency
# - Quality assessment
# - Cost-benefit analysis
# - Performance recommendations
🚀 Intelligent Deployment
# Deploy and test agents automatically
agent llm deploy-agent my-agent
# Run comprehensive tests
agent llm deploy-agent my-agent --test-suite comprehensive
# Enable monitoring
agent llm deploy-agent my-agent --monitor
# Features:
# - Automatic container building
# - Comprehensive testing
# - Health validation
# - Performance metrics
# - Deployment reports
🔍 Deep Model Analysis
# Analyze model capabilities
agent llm analyze llama2
# Get detailed insights
agent llm analyze mistral:7b --detailed
# Focus on capabilities
agent llm analyze codellama:13b --capabilities
# Analysis includes:
# - Model architecture and parameters
# - Performance characteristics
# - Best use cases and limitations
# - Optimization opportunities
# - Integration recommendations
Python API Usage
Use Agent as Code programmatically in your Python applications:
Traditional Commands
from agent_as_code import AgentCLI
# Initialize the CLI
cli = AgentCLI()
# Create a new agent
cli.init("my-agent", template="sentiment", runtime="python")
# Build the agent
cli.build(".", tag="my-agent:latest")
# Run the agent
cli.run("my-agent:latest", port="8080:8080", detach=True)
Enhanced LLM Commands
from agent_as_code import AgentCLI
# Initialize the CLI
cli = AgentCLI()
# Create intelligent agents
cli.create_agent('sentiment-analyzer')
cli.create_agent('workflow-automation', model='mistral:7b')
# Optimize models for specific use cases
cli.optimize_model('llama2', 'chatbot')
cli.optimize_model('mistral:7b', 'code-generation')
# Benchmark all models
cli.benchmark_models(['chatbot', 'code-generation', 'analysis'])
# Deploy and test agents
cli.deploy_agent('my-agent', test_suite='comprehensive', monitor=True)
# Analyze model capabilities
cli.analyze_model('llama2:7b', detailed=True, capabilities=True)
# Manage local models
cli.list_models()
cli.pull_model('llama2:7b')
cli.test_model('llama2:7b', input_text="Hello, how are you?")
cli.remove_model('old-model', force=True)
Agent Configuration
Define your agent with a simple agent.yaml file:
apiVersion: agent.dev/v1
kind: Agent
metadata:
  name: my-chatbot
  version: 1.0.0
  description: Customer support chatbot
spec:
  runtime: python
  model:
    provider: openai
    name: gpt-4
    config:
      temperature: 0.7
      max_tokens: 500
  capabilities:
    - conversation
    - customer-support
  ports:
    - container: 8080
      host: 8080
  environment:
    - name: OPENAI_API_KEY
      value: ${OPENAI_API_KEY}
  healthCheck:
    command: ["curl", "-f", "http://localhost:8080/health"]
    interval: 30s
    timeout: 10s
    retries: 3
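The ${OPENAI_API_KEY} placeholder suggests shell-style environment interpolation. Below is a minimal sketch of how you might expand such placeholders and sanity-check a manifest before building; the required keys are assumptions inferred from the example above, not the tool's published schema, and the manifest is assumed to be already parsed (e.g. with PyYAML) into a plain dict:

```python
import os
import re
from typing import Dict, List, Optional


def expand_env(value: str, env: Optional[Dict[str, str]] = None) -> str:
    """Expand ${VAR} placeholders using the given mapping (defaults to os.environ)."""
    mapping = dict(os.environ) if env is None else env
    return re.sub(r"\$\{(\w+)\}", lambda m: mapping.get(m.group(1), ""), value)


def validate_manifest(manifest: dict) -> List[str]:
    """Return a list of problems found; an empty list means the manifest looks sane."""
    problems = []
    if manifest.get("apiVersion") != "agent.dev/v1":
        problems.append("apiVersion should be agent.dev/v1")
    if manifest.get("kind") != "Agent":
        problems.append("kind should be Agent")
    for key in ("name", "version"):
        if key not in manifest.get("metadata", {}):
            problems.append("metadata.%s is missing" % key)
    if "runtime" not in manifest.get("spec", {}):
        problems.append("spec.runtime is missing")
    return problems


# A manifest already parsed into a dict (fields taken from the example above):
manifest = {
    "apiVersion": "agent.dev/v1",
    "kind": "Agent",
    "metadata": {"name": "my-chatbot", "version": "1.0.0"},
    "spec": {"runtime": "python"},
}
assert validate_manifest(manifest) == []
assert expand_env("${OPENAI_API_KEY}", {"OPENAI_API_KEY": "sk-test"}) == "sk-test"
```

Unset variables expand to an empty string here; a stricter validator could instead report them as errors.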
Use Cases
🤖 Customer Support
agent init support-bot --template chatbot
# Includes conversation memory, intent classification, escalation handling
📊 Data Analysis
agent init data-insights --template data-analyzer
# Includes statistical analysis, visualization, AI-powered insights
🌐 Content Creation
agent init content-writer --template content-gen
# Includes blog posts, social media, marketing copy generation
🔍 Text Analysis
agent init text-analyzer --template sentiment
# Includes sentiment analysis, emotion detection, batch processing
Development Workflow
Traditional Development
# Create and test locally
agent init my-agent --template chatbot
cd my-agent
agent build -t my-agent:dev .
agent run my-agent:dev
# Make changes and rebuild
agent build -t my-agent:dev . --no-cache
🧠 Enhanced LLM Development
# Create intelligent agent with AI-powered generation
agent llm create-agent workflow-automation
# Navigate to generated project
cd workflow-automation-agent
# Deploy and test automatically
agent llm deploy-agent workflow-automation-agent
# The agent is now running with:
# - Comprehensive testing (3/3 tests passed)
# - Health validation (HEALTHY status)
# - Performance metrics
# - Ready for production use
Production Deployment
# Build for production
agent build -t my-agent:1.0.0 .
# Push to registry
agent push my-agent:1.0.0
# Deploy anywhere
docker run -p 8080:8080 my-agent:1.0.0
CI/CD Integration
# GitHub Actions example
- name: Install Agent CLI
  run: pip install agent-as-code
- name: Create Intelligent Agent
  run: agent llm create-agent workflow-automation
- name: Deploy and Test Agent
  run: agent llm deploy-agent workflow-automation-agent
- name: Build Agent
  run: agent build -t ${{ github.repository }}:${{ github.sha }} .
- name: Push Agent
  run: agent push ${{ github.repository }}:${{ github.sha }}
Python Ecosystem Integration
Jupyter Notebooks
# Install in notebook
!pip install agent-as-code
# Create agent directly in notebook
from agent_as_code import AgentCLI
cli = AgentCLI()
cli.init("notebook-agent", template="sentiment")
Virtual Environments
# Each project can have its own agent version
python -m venv myproject
source myproject/bin/activate
pip install agent-as-code==1.0.0
agent init my-project-agent
Poetry Integration
# Add to your Poetry project
poetry add agent-as-code
poetry run agent init my-agent --template chatbot
🏢 Enterprise Features
🔒 Security & Compliance
- Role-Based Access Control (RBAC): Manage permissions and access levels
- JWT Authentication: Secure API endpoints with token-based auth
- Audit Logging: Comprehensive logging for compliance and debugging
- Container Security: Multi-stage Docker builds with security best practices
📊 Monitoring & Observability
- Health Checks: Automatic health monitoring with configurable intervals
- Metrics Collection: Prometheus-compatible metrics for monitoring
- Structured Logging: Structured logging with configurable levels
- Performance Tracking: Response time, memory usage, and CPU monitoring
🚀 Scalability & Performance
- Horizontal Scaling: Kubernetes manifests for orchestration
- Load Balancing: Built-in load balancing and health checks
- Resource Management: Configurable CPU and memory limits
- Auto-scaling: Horizontal Pod Autoscaler support
🔧 DevOps Integration
- CI/CD Pipelines: GitHub Actions workflows for automation
- Container Registry: Push/pull from any Docker registry
- Multi-Environment: Support for dev, staging, and production
- Infrastructure as Code: Kubernetes manifests and Docker configurations
🧪 Testing & Quality Assurance
Automated Testing
# Run comprehensive tests
agent llm deploy-agent my-agent --test-suite comprehensive
# Test specific functionality
pytest tests/test_workflow_automation.py::test_process_workflow
# Coverage reporting
pytest --cov=main tests/
Quality Metrics
- Test Coverage: 95%+ test coverage for all generated agents
- Code Quality: Black formatting, flake8 linting, mypy type checking
- Performance Testing: Response time and throughput validation
- Integration Testing: End-to-end functionality validation
🌟 Advanced Features
Model Management
# List available models
agent llm list
# Pull new models
agent llm pull llama2:7b
# Test model performance
agent llm test llama2:7b --input "Hello, how are you?"
# Remove unused models
agent llm remove old-model --force
Custom Use Cases
# Create custom agent templates
agent llm create-agent custom-use-case
# The system will:
# - Analyze the use case requirements
# - Recommend appropriate models
# - Generate optimized code
# - Create comprehensive tests
# - Set up monitoring and logging
Performance Optimization
# Optimize for specific workloads
agent llm optimize llama2:7b high-throughput
# Benchmark optimization results
agent llm benchmark --tasks high-throughput
# Deploy optimized agent
agent llm deploy-agent optimized-agent
Requirements
- Python: 3.8 or higher
- Operating System: Linux, macOS, or Windows
- Architecture: x86_64 (amd64) or ARM64
The package includes pre-compiled binaries for all supported platforms, so no additional dependencies are required.
Architecture
This Python package is a wrapper around a high-performance Go binary:
- Go Binary: Handles core CLI operations (build, run, etc.)
- Python Wrapper: Provides Python API and pip integration
- Cross-Platform: Works on Linux, macOS, and Windows
- Self-Contained: No external dependencies required
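In that spirit, the wrapper can be sketched in a few lines. The binary-naming scheme below is a hypothetical packaging layout for illustration; the real AgentCLI may locate and invoke its bundled binary differently:

```python
import platform
import subprocess
from typing import List


def binary_name() -> str:
    """Guess a platform-specific binary name (hypothetical layout, e.g. agent-linux-x86_64)."""
    suffix = ".exe" if platform.system() == "Windows" else ""
    return "agent-%s-%s%s" % (platform.system().lower(), platform.machine().lower(), suffix)


def build_argv(binary: str, *args: str) -> List[str]:
    """Compose the argv list the wrapper hands to the Go binary."""
    return [binary, *args]


def run_agent(binary: str, *args: str) -> int:
    """Invoke the binary and surface its exit code; the Go side does the real work."""
    return subprocess.run(build_argv(binary, *args)).returncode


# The wrapper adds no logic of its own - it simply forwards arguments:
assert build_argv("agent", "build", "-t", "my-agent:latest", ".") == [
    "agent", "build", "-t", "my-agent:latest", ".",
]
```

Keeping the Python layer this thin is what lets every CLI feature stay identical whether invoked from the shell or from Python.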
Contributing
We will welcome contributions as soon as the Go binary and its GitHub repository are made public!
Support
- 📖 Documentation: agent-as-code.myagentregistry.com/documentation
- 🚀 Getting Started: agent-as-code.myagentregistry.com/getting-started
- 💡 Examples: agent-as-code.myagentregistry.com/examples
- 🔧 CLI Reference: agent-as-code.myagentregistry.com/cli
- 📦 Registry Guide: agent-as-code.myagentregistry.com/registry
Ready to build your first AI agent?
pip install agent-as-code
agent init my-first-agent --template chatbot
cd my-first-agent
agent build -t my-first-agent:latest .
agent run my-first-agent:latest
Join thousands of developers building the future of AI agents! 🚀
Download files
Download the file for your platform.
Source Distribution
Built Distribution
File details
Details for the file agent_as_code-1.1.0.tar.gz.
File metadata
- Download URL: agent_as_code-1.1.0.tar.gz
- Upload date:
- Size: 33.6 MB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.1.0 CPython/3.13.3
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | fd1621d5f852d1972d5d09e548b8043ba7c6fdb571e5466d0c2846270b5a09d7 |
| MD5 | 65196cc1312a070cf6cfdbeb769877f9 |
| BLAKE2b-256 | ab45020943652303c4637108727fbfc65911fef710ddd06e9775e79d55f8dc05 |
File details
Details for the file agent_as_code-1.1.0-py3-none-any.whl.
File metadata
- Download URL: agent_as_code-1.1.0-py3-none-any.whl
- Upload date:
- Size: 33.7 MB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.1.0 CPython/3.13.3
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | f9c0c0f153f6450be1a29fa7a703109c4f95fac734e4ff344f5b86051ebc093f |
| MD5 | 1a53011f382a26d0631500c2d3782eda |
| BLAKE2b-256 | 100e7a10def653e1895caadf5ba7fec63d1d7695d499a827c847b2f624122571 |