
StructAgent

A small, simple, and structured AI agent with reasoning capabilities and tool integration. Built with modern Python patterns and designed for extensibility.

Features

  • ReAct Agent: Think-Act-Observe loop with structured reasoning steps
  • Tool Integration: Modular tool system with pluggable capabilities
  • Multiple LLM Providers: Support for OpenRouter and other providers via Instructor
  • Mathematical Operations: Arithmetic and expression evaluation tools
  • Web Search: SearXNG integration for meta-search capabilities
  • Vector Indexing: LEANN integration for semantic search and RAG
  • Type Safety: Full Pydantic validation and structured outputs

Installation

Prerequisites

  • Python 3.12+
  • Poetry (for dependency management)
  • OpenRouter API key

Setup

  1. Install dependencies

    poetry install
    
  2. Configure environment

    cp .env.example .env
    # Edit .env with your API keys
    
  3. Set required environment variables

    OPENROUTER_API_KEY="your-api-key"
    OPENROUTER_MODEL_ID="qwen/qwen3-next-80b-a3b-thinking"
    OPENROUTER_BASE_URL="https://openrouter.ai/api/v1"
    ...
    

Configuration

Required Environment Variables

# OpenRouter (primary LLM provider)
OPENROUTER_API_KEY=your-openrouter-api-key
OPENROUTER_MODEL_ID=qwen/qwen3-next-80b-a3b-thinking
OPENROUTER_BASE_URL=https://openrouter.ai/api/v1

# Optional: LM Studio (local models)
USE_LMSTUDIO=false
LMSTUDIO_MODEL_ID=qwen/qwen3-next-80b-a3b-thinking
LMSTUDIO_BASE_URL=http://localhost:1234/v1
LMSTUDIO_API_KEY=your-lmstudio-key

# Tools
SEARXNG_BASE_URL=http://localhost:8080
LEANN_INDEX_PATH=/path/to/your/leann_index
LEANN_CHAT_MODEL=qwen/qwen3-next-80b-a3b-thinking
LEANN_CHAT_BASE_URL=https://openrouter.ai/api/v1
LEANN_CHAT_API_KEY=your-openrouter-api-key
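
Since `build_client` reads these variables at startup, a missing one tends to surface only later as a confusing error. A minimal, stdlib-only preflight check you could run first (`check_env` is a hypothetical helper, not part of struct_agent):

```python
import os

# The three variables the OpenRouter path requires (see above)
REQUIRED_VARS = ["OPENROUTER_API_KEY", "OPENROUTER_MODEL_ID", "OPENROUTER_BASE_URL"]

def check_env(required=REQUIRED_VARS):
    """Return the names of required variables that are missing or empty."""
    return [name for name in required if not os.environ.get(name)]
```

Call `check_env()` before `build_client()` and fail fast if it returns a non-empty list.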

Supported Models

The agent supports various models through OpenRouter:

  • qwen/qwen3-next-80b-a3b-thinking (recommended for reasoning)
  • anthropic/claude-3.5-sonnet
  • openai/gpt-4o
  • And many more

It also supports any model you host locally with LM Studio, for example:

  • ibm/granite-4-h-tiny

Quick Start

Run With a Query

from struct_agent import build_client, run_react_loop

# Initialize client
# Client parameters are taken from .env
client = build_client()

# Run the agent with a query
query = "What is the weather in the capital of France, and what is that city known for?"

response = run_react_loop(query, client)

print(f"Answer: {response}")

Use a Custom Configuration

Possible parameters:

config = {
   "max_steps": int,        # Maximum number of ReAct loop steps allowed
   "tools": list[ToolSpec], # Tools available to the agent (by default the SearXNG, LEANN, and maths tools are included)
   "verbose": bool,         # Print each ReAct step if True
}

Usage example:

from struct_agent import build_client, run_react_loop, make_searxng_search_tool

client = build_client()

query = "Research recent advances in quantum computing"

# Custom configuration
config = {
   "max_steps": 15,
   "tools": [make_searxng_search_tool()],
   "verbose": True
}

response = run_react_loop(query, client, config)

Available Tools

MathsToolkit

  • Arithmetic operations
  • Expression evaluation
  • Mathematical calculations

MetaSearchToolkit

  • Web search via SearXNG
  • Meta-search capabilities
  • Information retrieval

VectorIndexToolkit

  • Semantic search via LEANN
  • Vector indexing
  • RAG capabilities

Adding a Custom Tool

First, you'll need to define the response model for your tool; this forces the LLM to reply with a correctly structured payload.

from pydantic import BaseModel, Field

class CustomTool(BaseModel):
   """Inputs for the custom tool."""
   message: str = Field(default="", description="Some input message") # Inputs can be of any type (int, list, dict, etc.)
   # list_of_things: list = Field(default=[], description="A list of things") # A model can have as many fields as you need
   # ...
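
This is what the validation buys you: Pydantic rejects arguments that don't match the declared types before your handler ever runs. A quick self-contained demonstration (assumes Pydantic v2 is installed):

```python
from pydantic import BaseModel, Field, ValidationError

class CustomTool(BaseModel):
    """Inputs for the custom tool."""
    message: str = Field(default="", description="Some input message")

ok = CustomTool(**{"message": "hello"})   # valid arguments parse cleanly

try:
    CustomTool(**{"message": [1, 2, 3]})  # a list is not a str -> rejected
    rejected = False
except ValidationError:
    rejected = True
```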

Second, you'll need to define a factory function that returns the ToolSpec, bundling the argument information for the LLM with the handler (the actual functionality).

from typing import Any, Dict

from struct_agent import ToolSpec

def make_custom_tool() -> ToolSpec:
    """Create a custom tool."""

    def handler(args: Dict[str, Any]) -> Dict[str, Any]:
        """Handle the custom tool call."""
        parsed_args = CustomTool(**args)  # Validate the raw arguments against the model above
        message = parsed_args.message
        # Some functionality...
        return {"result": "function result"}

    return ToolSpec(
        name="custom_tool",
        description="A custom tool that does something.",
        args_model=CustomTool,
        handler=handler,
        parameters={
            "message": "Some input message"
        }
    )

Then, simply add it to the config that you pass to the agent. Note that overriding config['tools'] removes the default tools, so you'll have to add them back manually.

from struct_agent import build_client, run_react_loop, create_toolspecs_from_toolkits, MathsToolkit, MetaSearchToolkit, VectorIndexToolkit

client = build_client()
query = "What is 2 + 2?"

# Add default tools
toolkits = [MathsToolkit, MetaSearchToolkit, VectorIndexToolkit]
all_tools = create_toolspecs_from_toolkits(toolkits) + [make_custom_tool()] # Add custom tool

# Add tools to the agent
config = {
   "tools": all_tools
}

# Run
response = run_react_loop(query, client, config)

Project Structure

src/struct_agent/
├── __init__.py                 # Main package exports
├── instructor_based/           # LLM client and agent logic
│   ├── __init__.py
│   ├── new_agent.py           # Main ReAct agent implementation
│   ├── client_manager.py      # LLM client management
│   ├── prompt_manager.py      # System prompts
│   ├── tool_manager.py        # Tool registry and execution
│   ├── reasoning_modules.py   # Reasoning data models
│   └── utils.py               # Utility functions
├── tools/                      # Modular tool system
│   ├── __init__.py
│   ├── maths_tools.py         # Mathematical operations
│   ├── searxng_tools.py       # Web search integration
│   ├── leann_tools.py         # Vector indexing
│   ├── blank_tool.py          # No-op tool
│   └── toolkits.py            # Pre-configured tool bundles
└── README.md                  # Package documentation

Core Components

ReAct Agent (new_agent.py)

Implements the Reasoning + Acting loop:

  • Think: Generate reasoning steps
  • Act: Select and execute tools
  • Observe: Process tool results
  • Repeat: Until final answer is reached
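
The loop above can be sketched in plain Python with a scripted stand-in for the model (illustrative only; the real implementation in new_agent.py drives the loop with structured Instructor outputs):

```python
from typing import Any, Callable, Dict, List

def react_loop(model: Callable[[List[str]], Dict[str, Any]],
               tools: Dict[str, Callable[[Dict[str, Any]], Any]],
               max_steps: int = 10) -> Any:
    """Think-Act-Observe until the model produces a final answer."""
    history: List[str] = []
    for _ in range(max_steps):
        step = model(history)                               # Think: model proposes the next step
        if "final_answer" in step:
            return step["final_answer"]
        observation = tools[step["tool"]](step["args"])     # Act: run the chosen tool
        history.append(f"{step['tool']} -> {observation}")  # Observe: record the result
    return None

# Scripted stand-in for an LLM: call a tool once, then answer from the observation
def scripted_model(history):
    if not history:
        return {"tool": "add", "args": {"a": 2, "b": 2}}
    return {"final_answer": history[-1].split(" -> ")[1]}

answer = react_loop(scripted_model, {"add": lambda a: a["a"] + a["b"]})
```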

Client Manager (client_manager.py)

Handles LLM provider setup and model resolution through Instructor for structured outputs.

Tool System

  • Modular toolkits with standardized interfaces
  • Pydantic models for type-safe tool arguments
  • Easy extensibility for custom tools
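
The pattern behind these toolkits can be sketched without any dependencies: each tool bundles a name, a description, and a handler, and a registry dispatches calls by name (`Tool` and `make_add_tool` here are illustrative, not the actual struct_agent API):

```python
from dataclasses import dataclass
from typing import Any, Callable, Dict

@dataclass
class Tool:
    name: str
    description: str
    handler: Callable[[Dict[str, Any]], Dict[str, Any]]

def make_add_tool() -> Tool:
    """A maths-style tool: add two numbers."""
    def handler(args: Dict[str, Any]) -> Dict[str, Any]:
        return {"result": args["a"] + args["b"]}
    return Tool("add", "Add two numbers.", handler)

# A registry maps tool names to tools, so the agent can dispatch by name
registry = {tool.name: tool for tool in [make_add_tool()]}
result = registry["add"].handler({"a": 2, "b": 3})
```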

Dependencies

  • instructor>=1.3: Structured outputs for LLMs
  • openai>=1.30: OpenAI API client
  • pydantic>=2.7: Data validation and serialization
  • leann>=0.3.4,<0.4.0: Vector indexing and search

Troubleshooting

Common Issues

  1. ModuleNotFoundError: Ensure PYTHONPATH includes the src directory

    export PYTHONPATH=~/projects/struct_agent/src
    
  2. Missing API Keys: Verify .env file exists and contains required keys

    cp .env.example .env
    # Edit .env with your actual API keys
    
  3. SearXNG Connection: Ensure SearXNG is running on configured port

    SEARXNG_BASE_URL=http://localhost:8080
    # Update SEARXNG_BASE_URL if using different instance
    
  4. LEANN Index: Set correct absolute path to your LEANN index

    # .env
    LEANN_INDEX_PATH="/absolute/path/to/your/leann_index"
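
For the SearXNG check in particular, you can probe the instance directly via its /search endpoint with format=json (standard SearXNG behaviour when the JSON output format is enabled in its settings; `searxng_query_url` is a hypothetical helper, not part of struct_agent):

```python
from urllib.parse import urlencode

def searxng_query_url(base_url: str, query: str) -> str:
    """Build a SearXNG search URL requesting JSON output."""
    return f"{base_url.rstrip('/')}/search?" + urlencode({"q": query, "format": "json"})

url = searxng_query_url("http://localhost:8080", "quantum computing")
```

Fetching this URL (e.g. with `urllib.request.urlopen(url, timeout=5)`) should return JSON if the instance is up.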
    

License

This project is open source and available under the MIT License.

Download files

Source Distribution

struct_agent-1.0.2.tar.gz (18.5 kB)

Built Distribution

struct_agent-1.0.2-py3-none-any.whl (24.7 kB)

File details

Details for the file struct_agent-1.0.2.tar.gz.

File metadata

  • Filename: struct_agent-1.0.2.tar.gz
  • Size: 18.5 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: poetry/2.2.0 CPython/3.11.13 Darwin/25.0.0

File hashes

  • SHA256: 74508b9424c6fefb7ace5c59528f22efb94cc4d7523c67d259a43a13abcf1ada
  • MD5: af4b7fc64f97cbb91e425b43b205f475
  • BLAKE2b-256: d91e35f755197955e0f287ee339b9a079f27218e2d171fccdabe001e6af18f1f

File details

Details for the file struct_agent-1.0.2-py3-none-any.whl.

File metadata

  • Filename: struct_agent-1.0.2-py3-none-any.whl
  • Size: 24.7 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: poetry/2.2.0 CPython/3.11.13 Darwin/25.0.0

File hashes

  • SHA256: e509be8ef4d31cf1e8e7e74612031421e5b88413c99b4f5ea56b5369463711c0
  • MD5: 8cb8a1a54a1a9ec90def7625f33610a7
  • BLAKE2b-256: 58fdebd170a8213142175df30a1e075658ebe41a261dc65e5c44ef13c697891b
