
A flexible, modular framework for researching engineering design AI agents

Project description

design-research-agents


[!IMPORTANT] Current monthly release: Lovelace Lift - May 2026
Due: May 1, 2026
Tracks: April 2026 work

design-research-agents is the agent-execution layer in the cmudrc design research ecosystem.

It provides typed, composable contracts for direct calls, multi-step runs, workflow orchestration, tool execution, and traceable experimentation.

Overview

This package centers on reproducible agent workflows with a compact public API:

  • Two primary entry points: DirectLLMCall and MultiStepAgent (direct, json, and code modes)
  • A seeded random control-condition agent for packaged-problem studies (SeededRandomBaselineAgent)
  • A prompt-driven workflow agent for packaged-problem studies (PromptWorkflowAgent)
  • Workflow primitives for model, tool, delegate, loop, and memory steps
  • A tool runtime built around Toolbox, with callable, script, and MCP-backed tool configs
  • Hosted and local LLM clients, plus ModelSelector for backend-selection policies
  • Prebuilt coordination and reasoning patterns for plan/execute, propose/critic, debate, routing, round-based coordination, blackboard, tree search, Ralph loops, nominal teams, RAG, and conversation
  • Tracing, structured ExecutionResult outputs, and runnable examples aimed at repeatable experiments
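The control-condition idea behind a seeded random baseline can be illustrated with the standard library alone. This is a sketch of the concept, not the package's SeededRandomBaselineAgent API; the function name and options are made up for illustration.

```python
import random

# Illustrative only: a fixed seed makes a random control condition
# reproducible across runs, which is what packaged-problem studies need.
def pick_baseline_answers(options, k, seed):
    """Draw k options deterministically for a given seed."""
    rng = random.Random(seed)
    return rng.sample(options, k)

options = ["latch A", "latch B", "latch C", "latch D"]
first = pick_baseline_answers(options, 2, seed=7)
second = pick_baseline_answers(options, 2, seed=7)
assert first == second  # same seed, same picks
```

Because the generator state lives in a local `random.Random` instance rather than the module-level RNG, independent agents can hold independent seeds without interfering with each other.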

A Super Basic Agent

from design_research_agents import LlamaCppServerLLMClient, MultiStepAgent

with LlamaCppServerLLMClient() as llm_client:
    agent = MultiStepAgent(mode="direct", llm_client=llm_client, max_steps=3)
    result = agent.run(
        prompt="Suggest two design goals for a field-repairable drone battery latch.",
    )

print(result.final_output)

Quickstart

Requires Python 3.12+. Reproducible release installs target Python 3.12 (see .python-version).

If you prefer a guided editor-first flow, use the VS Code Setup Guide. It walks through creating a virtual environment, installing the published package, and running a first script in VS Code.

python -m venv .venv
source .venv/bin/activate
make dev
make test
PYTHONPATH=src python examples/agents/direct_llm_call.py

The base-install path uses OpenAICompatibleHTTPLLMClient and expects a running OpenAI-compatible endpoint. Contributor setup (make dev) installs development tooling only; backend runtimes are explicit extras.
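For orientation, this is the general request shape an OpenAI-compatible chat-completions endpoint expects. The URL and model name below are placeholders, not values from this package, and the sketch only builds the payload; it does not send it.

```python
import json

# Placeholder endpoint and model name -- substitute your own server's values.
endpoint = "http://localhost:8000/v1/chat/completions"
payload = {
    "model": "local-model",
    "messages": [
        {"role": "system", "content": "You are a design assistant."},
        {"role": "user", "content": "Suggest two design goals."},
    ],
    "temperature": 0.2,
}
body = json.dumps(payload)  # POST this as application/json to the endpoint
```

Any server that accepts this shape (llama.cpp's server, vLLM, a hosted provider) should work as the backend for the base-install path.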

For frozen installs, extras, and release maintenance, see Dependencies and Extras.

Examples

Start with examples/README.md for runnable examples grouped by agents, clients, workflows, patterns, model selection, and tools.

Docs

See the published documentation for quickstart guidance, backend setup, workflow/pattern guides, and API docs.

Build docs locally with:

make docs

Public API

The supported public surface is exactly what design_research_agents.__all__ exports; anything outside it may change without notice.

Top-level exports include:

  • Agent entry points: DirectLLMCall, MultiStepAgent, SeededRandomBaselineAgent, PromptWorkflowAgent
  • Core contracts: ExecutionResult, LLMRequest, LLMMessage, LLMResponse, ToolResult
  • Workflow runtime: Workflow, CompiledExecution, and step contracts for model/tool/delegate/loop/memory behavior
  • Tools: Toolbox, CallableToolConfig, ScriptToolConfig, MCPServerConfig
  • Patterns: conversation, debate, plan/execute, propose/critic, Ralph loops, nominal teams, routing, round-based coordination, blackboard, tree search, and RAG
  • LLM clients: hosted and local adapters, including OpenAI-compatible HTTP plus provider-specific clients
  • Runtime services: ModelSelector and Tracer
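A backend-selection policy in the spirit of ModelSelector might look like the sketch below. None of these names come from the package's API; it is a hypothetical illustration of routing a prompt to a backend by context-window size.

```python
# Hypothetical policy: route each prompt to the cheapest backend whose
# context window can hold it. Backends map name -> context window (tokens).
def select_backend(prompt: str, backends: dict[str, int]) -> str:
    needed = len(prompt) // 4  # rough estimate: ~4 characters per token
    eligible = {name: ctx for name, ctx in backends.items() if ctx >= needed}
    if not eligible:
        raise ValueError("no backend has a large enough context window")
    # Prefer the smallest window that still fits, to keep costs down.
    return min(eligible, key=eligible.get)

backends = {"small-local": 4096, "large-hosted": 128000}
select_backend("short prompt", backends)  # -> "small-local"
```

A real policy would likely weigh latency, cost, and capability tags as well; the point is that selection is an ordinary, testable function rather than something baked into each agent.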

Contributing

Contribution workflow and quality gates are documented in CONTRIBUTING.md.

Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

design_research_agents-0.3.0.tar.gz (506.5 kB)

Uploaded Source

Built Distribution

If you're not sure about the file name format, learn more about wheel file names.

design_research_agents-0.3.0-py3-none-any.whl (494.3 kB)

Uploaded Python 3

File details

Details for the file design_research_agents-0.3.0.tar.gz.

File metadata

  • Download URL: design_research_agents-0.3.0.tar.gz
  • Upload date:
  • Size: 506.5 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.7

File hashes

Hashes for design_research_agents-0.3.0.tar.gz

  • SHA256: 30c84678509ef6e300251f5ca5d6bc6a0196b94e97c0a53f13503286a8ee7ee0
  • MD5: 29a01bd63d9390510bd467e46b15f1be
  • BLAKE2b-256: 25c59ba1e3902708516fa21646226ea3cf1fb6c81f336be1ea67851936f95679

See more details on using hashes here.
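Checking a downloaded artifact against its published SHA256 takes only the standard library. This is a minimal sketch; substitute the actual file path and the digest from the table above.

```python
import hashlib

# Stream the file in chunks so large archives don't have to fit in memory.
def sha256_of(path: str) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

# expected = "30c8..."  # copy the full SHA256 digest from the table above
# assert sha256_of("design_research_agents-0.3.0.tar.gz") == expected
```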

Provenance

The following attestation bundles were made for design_research_agents-0.3.0.tar.gz:

Publisher: workflow.yml on cmudrc/design-research-agents

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.

File details

Details for the file design_research_agents-0.3.0-py3-none-any.whl.

File metadata

File hashes

Hashes for design_research_agents-0.3.0-py3-none-any.whl

  • SHA256: b5fb2f9aec0b2ecb0283a29ea7c223af6598273a64926533a30c44fbd93291e1
  • MD5: ead591cdfc2ca03b0e3a9693532eed5b
  • BLAKE2b-256: 12ebbf2b1d20a2277183f0daf5eb3ab9c0833ce35461804aa8b6dae49baec95c

See more details on using hashes here.

Provenance

The following attestation bundles were made for design_research_agents-0.3.0-py3-none-any.whl:

Publisher: workflow.yml on cmudrc/design-research-agents

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.
