
My Dev Team 🚀

An autonomous, LangGraph-powered AI development agency. My Dev Team takes raw project requirements and processes them through a multi-agent workflow (Product Manager, System Architect, Developers, and QA) to incrementally build, test, and deliver production-ready code.

Features

  • Multi-Agent Architecture: Specialized AI agents handle distinct phases of the software development lifecycle.
  • Semantic Model Routing: Automatically routes tasks to the most cost-effective or capable LLMs based on the task type (reasoning, coding, or fast-utility).
  • Strict Test-Driven Development (TDD): Testing is never an afterthought. Tasks are generated with embedded testing criteria, and the Developer writes unit tests alongside implementation code for immediate QA validation.
  • State Recovery & Resiliency: Powered by asynchronous SQLite checkpointing. If an API rate limit is hit or a workflow is interrupted, you can resume the exact thread without losing a single token of progress.
  • Telemetry & Cost Tracking: Automatically tallies prompt and completion tokens across the entire workflow. Calculates exact USD costs dynamically using LiteLLM's live pricing registry, printing a detailed receipt at the end of every run.
  • Incremental Development: The System Architect breaks down requirements into a manageable backlog of strictly formatted JSON tasks.
  • Self-Healing Code: The Developer, Reviewer, and QA Engineer agents continuously loop until unit tests pass and code meets specifications.
  • Structured Outputs: Powered by Pydantic and LangChain, ensuring zero "Markdown spillage" and robust state management.
  • Extensible: Easily add custom tools like HumanInTheLoop or WorkspaceSaver.
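The cost tracking described above can be pictured as a running per-model token tally multiplied by per-token prices. Here is an illustrative sketch of that idea, not the package's actual `TelemetryTracker` (the real one pulls live prices from LiteLLM's registry; the price figures below are made up):

```python
from collections import defaultdict

# Made-up prices in USD per 1M tokens; the real tracker reads LiteLLM's live registry.
PRICES = {"example-model": {"prompt": 0.50, "completion": 1.50}}

class TelemetrySketch:
    """Tally prompt/completion tokens per model and price the total run."""

    def __init__(self):
        self.tokens = defaultdict(lambda: {"prompt": 0, "completion": 0})

    def record(self, model: str, prompt_tokens: int, completion_tokens: int):
        self.tokens[model]["prompt"] += prompt_tokens
        self.tokens[model]["completion"] += completion_tokens

    def total_cost(self) -> float:
        return sum(
            counts["prompt"] / 1e6 * PRICES[model]["prompt"]
            + counts["completion"] / 1e6 * PRICES[model]["completion"]
            for model, counts in self.tokens.items()
        )

t = TelemetrySketch()
t.record("example-model", 1_000_000, 200_000)
print(f"Run cost: ${t.total_cost():.2f}")
```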

Installation

You can install the package directly via pip:

pip install my-dev-team

(For local development, clone the repository and run pip install -e .)

1. Preparing Your Project File

The crew requires a text file outlining your project requirements. By default, it looks for a specific header format to extract the project name and thread ID.

Create a file named project.txt:

Subject: NEW PROJECT: Web Scraper CLI

I need a Python command-line tool that scrapes articles from a given URL.
It should extract the title, author, and main body text, and save the output as a JSON file.

Requirements:
- Use BeautifulSoup4 for parsing.
- Include a `--url` argument and an `--output` argument.
- Write unit tests for the parsing logic.
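Conceptually, the `Subject: NEW PROJECT:` header yields the project name, and the thread ID appears to be a slug plus timestamp (inferred from the `--resume` examples). A hypothetical sketch of that parsing, not the package's actual implementation:

```python
import re
from datetime import datetime

def parse_project_header(text: str) -> tuple[str, str]:
    """Extract the project name from the 'Subject:' header and derive a thread ID."""
    match = re.search(r"^Subject:\s*NEW PROJECT:\s*(.+)$", text, flags=re.MULTILINE)
    if not match:
        raise ValueError("No 'Subject: NEW PROJECT:' header found")
    name = match.group(1).strip()
    # Thread IDs in the CLI examples look like <slug>_<timestamp>
    slug = re.sub(r"[^a-z0-9]+", "_", name.lower()).strip("_")
    thread_id = f"{slug}_{datetime.now():%Y%m%d_%H%M%S}"
    return name, thread_id

sample = "Subject: NEW PROJECT: Web Scraper CLI\n\nI need a Python command-line tool..."
name, thread_id = parse_project_header(sample)
```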

2. Usage (CLI)

The fastest way to use the framework is via the terminal command included in the package.

devteam project.txt

Advanced CLI Options

You can easily switch between cloud providers and local models, and adjust rate limits based on your API tier:

# Run entirely locally for free using Ollama, with no rate limit!
devteam project.txt --provider ollama

# Run using OpenAI's flagship models, limited to 15 requests per minute
devteam project.txt --provider openai --rpm 15

# Resume an interrupted run exactly where it left off
devteam --resume web_scraper_cli_20260312_083500

Available Arguments:

  • project_file: (Optional if resuming) Path to your project requirements text file.
  • --resume: Resume a specific thread ID (e.g., my_app_20260312_083500).
  • --provider: Choose the LLM backend. Options: groq, ollama (default), openai.
  • --rpm: API requests per minute. Set to 0 to disable rate limiting (default: 0).

Note: Ensure you have the corresponding API keys (e.g., GROQ_API_KEY, OPENAI_API_KEY) set in your .env file, or ensure your local Ollama instance is running.
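For example, a minimal `.env` might look like this (the key names come from the note above; the values are placeholders):

```ini
# .env — only the key for the provider you select is needed
GROQ_API_KEY=your-groq-key-here
OPENAI_API_KEY=your-openai-key-here
```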

3. Intelligent Model Routing (LLM Factory)

My Dev Team doesn't just use one model for everything. It uses an advanced Semantic Routing architecture via LLMFactory.

Instead of hardcoding a specific model (like gpt-5.3-codex), each agent requests a specific capability category and temperature. The Factory evaluates your chosen --provider and dynamically spins up the most cost-effective, capable model for that exact task.

The Categories

  • reasoning: For the System Architect and Product Manager. Maps to deep-thinking models.
  • code-generator: For the Senior Developer. Maps to strict, syntax-heavy models.
  • code-analyzer: For the QA and Reviewer agents. Maps to deep-context evaluation models.
  • fast-utility: For the Reporter. Maps to blazing-fast, ultra-cheap models for simple text summarization.
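Conceptually, such a routing table is a mapping from (provider, capability category) to a concrete model. A hypothetical sketch of the idea, where the model names are illustrative placeholders rather than the package's actual defaults:

```python
# Hypothetical routing table: each agent requests a capability category, and
# the factory resolves it to a concrete model for the active provider.
# Model names are placeholders, not the package's real defaults.
ROUTING_TABLE: dict[tuple[str, str], str] = {
    ("openai", "reasoning"): "example-reasoning-model",
    ("openai", "code-generator"): "example-coding-model",
    ("openai", "code-analyzer"): "example-analysis-model",
    ("openai", "fast-utility"): "example-fast-model",
}

def resolve_model(provider: str, category: str) -> str:
    """Map a (provider, capability-category) pair to a concrete model name."""
    try:
        return ROUTING_TABLE[(provider, category)]
    except KeyError:
        raise ValueError(f"No model registered for {provider}/{category}") from None
```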

4. Usage (Python API)

If you want to integrate the crew into your own application, customize the LLM Factory's routing table, or override specific agent behaviors, use the clean Python API:

import asyncio
import aiosqlite
from pathlib import Path
from dotenv import load_dotenv

from langgraph.checkpoint.sqlite.aio import AsyncSqliteSaver
from devteam import VirtualCrew, ProjectManager, LLMFactory
from devteam.agents import ProductManager, SystemArchitect, SeniorDeveloper, CodeReviewer, QAEngineer, FinalQAEngineer, Reporter
from devteam.extensions import HumanInTheLoop, WorkspaceSaver
from devteam.utils import RateLimiter, TelemetryTracker

load_dotenv()

def build_crew(project_folder: Path, llm_factory: LLMFactory, checkpointer: AsyncSqliteSaver, rpm: int = 0) -> VirtualCrew:
    # Initialize agents using built-in prompt templates
    agents = {
        'pm': ProductManager.from_config('product-manager.md'),
        'architect': SystemArchitect.from_config('system-architect.md'),
        'developer': SeniorDeveloper.from_config('senior-developer.md'),
        'reviewer': CodeReviewer.from_config('code-reviewer.md'),
        'qa': QAEngineer.from_config('qa-engineer.md'),
        'final_qa': FinalQAEngineer.from_config('final-qa-engineer.md'),
        # Example: Forcing the reporter to use a more creative reasoning model
        'reporter': Reporter.from_config('reporter.md', model_category='reasoning', temperature=0.7)
    }
    # Add extensions like saving files to disk or requiring human approval
    extensions = [
        WorkspaceSaver(workspace_dir=project_folder),
        HumanInTheLoop()
    ]
    return VirtualCrew(
        manager=ProjectManager(),
        agents=agents,
        extensions=extensions,
        checkpointer=checkpointer,
        rate_limiter=RateLimiter(requests_per_minute=rpm) if rpm > 0 else None
    )

async def main():
    requirements = "Build a simple Python calculator CLI with basic arithmetic."
    thread_id = "calc_run_01"
    workspace = Path('./workspaces/calculator_app')
    workspace.mkdir(parents=True, exist_ok=True)
    db_path = workspace / 'state.db'
    telemetry = TelemetryTracker()
    factory = LLMFactory(provider='groq', callbacks=[telemetry])
    try:
        async with aiosqlite.connect(db_path) as conn:
            checkpointer = AsyncSqliteSaver(conn)
            crew = build_crew(workspace, llm_factory=factory, checkpointer=checkpointer, rpm=30)
            print("🚀 Starting the AI Dev Team...")
            final_state = await crew.execute(
                thread_id=thread_id,
                requirements=requirements
            )
        if final_state.abort_requested:
            print("❌ Workflow aborted by user or validation failure.")
        elif final_state.success:
            print("🎉 Project completed successfully!")
            print(f"Total Revisions: {final_state.total_revisions}")
            if final_state.final_report:
                print(final_state.final_report)
        else:
            print("🚨 Release failed: Integration bugs found!")
            for bug in final_state.integration_bugs:
                print(f" - {bug}")
    except KeyboardInterrupt:
        print("\n\n🛑 Workflow interrupted by user (Ctrl+C).")
        print("💡 You can resume this exact state later by running:")
        print(f"   devteam --resume {thread_id}")

    finally:
        telemetry.print_receipt()

if __name__ == "__main__":
    asyncio.run(main())

AI Agents

  1. Product Manager: Analyzes requirements, asks clarifying questions, and writes detailed Technical Specifications.
  2. System Architect: Breaks specifications down into a cohesive backlog of developer tasks.
  3. Senior Developer: Incrementally writes code and unit tests for the current task.
  4. Code Reviewer: Analyzes the generated code for security, style, and logic issues.
  5. QA Engineer: Mentally simulates execution and evaluates the code against the task requirements.
  6. Final QA Engineer: Performs a full-repository integration test once all tasks are complete.
  7. Reporter: Generates a comprehensive final Markdown report for stakeholders.
