
Agentic AI out of the box


Cogency (Python)

Multi-step reasoning agents with clean architecture

Installation

pip install cogency

Quick Start

from cogency.agent import Agent
from cogency.llm import GeminiLLM
from cogency.tools import CalculatorTool, WebSearchTool, FileManagerTool

# Create agent with multiple tools
llm = GeminiLLM(api_key="your-key")
agent = Agent(
    name="MyAgent", 
    llm=llm, 
    tools=[
        CalculatorTool(), 
        WebSearchTool(), 
        FileManagerTool()
    ]
)

# Execute with tracing
result = agent.run("What is 15 * 23?", enable_trace=True, print_trace=True)
print(result["response"])

Core Architecture

Cogency uses a clean 5-step reasoning loop:

  1. Plan - Decide on a strategy and whether tools are needed
  2. Reason - Select tools and prepare inputs
  3. Act - Execute tools with validation
  4. Reflect - Evaluate results and decide next steps
  5. Respond - Format clean answer for user

This separation enables emergent reasoning behavior: agents adapt their tool usage based on intermediate results, without that behavior being explicitly programmed.
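The five steps can be sketched as a plain-Python loop. This is a minimal illustration of the pattern only; the step names mirror the trace output below, but the function, tool interface, and dict shapes here are assumptions, not Cogency's actual implementation:

```python
# Minimal sketch of a Plan -> Reason -> Act -> Reflect -> Respond loop.
# Illustrative only; Cogency's real agent loop differs.

class EchoTool:
    name = "echo"

    def run(self, text):
        return {"echo": text}

def run_loop(query, tools):
    trace = []
    # 1. Plan: decide whether a tool is needed (here: always pick the first)
    tool = tools[0]
    trace.append(("PLAN", f"use tool: {tool.name}"))
    # 2. Reason: prepare the tool's input from the query
    args = {"text": query}
    trace.append(("REASON", f"{tool.name}({args})"))
    # 3. Act: execute the tool
    result = tool.run(**args)
    trace.append(("ACT", f"{tool.name} -> {result}"))
    # 4. Reflect: evaluate the result and decide whether to continue
    done = "echo" in result
    trace.append(("REFLECT", f"done: {done}"))
    # 5. Respond: format a clean answer for the user
    response = f"Tool said: {result['echo']}"
    trace.append(("RESPOND", response))
    return {"response": response, "execution_trace": trace}

out = run_loop("hello", [EchoTool()])
print(out["response"])  # Tool said: hello
```

Because each step only consumes the previous step's output, swapping in a different tool or a better reflection policy changes behavior without touching the loop itself.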

Built-in Tools

  • CalculatorTool - Basic arithmetic operations
  • WebSearchTool - Web search using DuckDuckGo
  • FileManagerTool - File system operations

Adding Custom Tools

Create a new tool by extending the BaseTool class:

from cogency.tools.base import BaseTool

class WeatherTool(BaseTool):
    def __init__(self):
        super().__init__(
            name="weather",
            description="Get current weather for a location"
        )
    
    def run(self, location: str) -> dict:
        # Your implementation here
        return {"temperature": 72, "condition": "sunny"}

Tools are automatically discovered and available to agents.
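One common way such automatic discovery is implemented is a subclass registry, where defining a tool class is enough to register it. The sketch below shows that pattern in isolation; it is a hypothetical illustration, not Cogency's actual discovery mechanism:

```python
# Hypothetical sketch of tool auto-discovery via a subclass registry.
# Cogency's real discovery mechanism may work differently.

class BaseTool:
    registry = []

    def __init_subclass__(cls, **kwargs):
        super().__init_subclass__(**kwargs)
        # Every subclass definition registers itself automatically
        BaseTool.registry.append(cls)

class WeatherTool(BaseTool):
    name = "weather"

class NewsTool(BaseTool):
    name = "news"

# Agents can now enumerate all defined tools without explicit wiring
print([t.name for t in BaseTool.registry])  # ['weather', 'news']
```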

LLM Support

Currently supports Google Gemini:

from cogency.llm import GeminiLLM

# Simple usage
llm = GeminiLLM(api_key="your-key")

# With key rotation
from cogency.llm import KeyRotator
keys = ["key1", "key2", "key3"]
llm = GeminiLLM(key_rotator=KeyRotator(keys))
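The rotation behavior can be pictured as a simple round-robin over the key list. This is a sketch of the idea only; the class name and method below are assumptions, not KeyRotator's actual API:

```python
import itertools

# Round-robin key rotation sketch; KeyRotator's real interface may differ.
class RoundRobinKeys:
    def __init__(self, keys):
        self._cycle = itertools.cycle(keys)

    def next_key(self):
        # Each call hands out the next key, wrapping around at the end
        return next(self._cycle)

rotator = RoundRobinKeys(["key1", "key2", "key3"])
print([rotator.next_key() for _ in range(4)])  # ['key1', 'key2', 'key3', 'key1']
```

Rotating across several keys spreads requests out, which helps when a single API key is rate-limited.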

Execution Tracing

Enable detailed tracing to see your agent's reasoning:

# Simple trace viewing
result = agent.run("Complex task", enable_trace=True, print_trace=True)

# Or capture trace data
result = agent.run("Complex task", enable_trace=True)
trace_data = result["execution_trace"]

Example trace output:

--- Execution Trace (ID: abc123) ---
PLAN     | Need to calculate and then search for information
REASON   | TOOL_CALL: calculator(operation='multiply', num1=15, num2=23)
ACT      | calculator -> {'result': 345}
REFLECT  | Calculation completed, now need to search
REASON   | TOOL_CALL: web_search(query='AI developments 2025')
ACT      | web_search -> {'results': [...]}
REFLECT  | Found relevant search results
RESPOND  | 15 multiplied by 23 equals 345. Recent AI developments include...
--- End Trace ---
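A captured trace can also be post-processed programmatically, for example to pull out only the tool executions. The entry shape below (a step-name/message pair) is inferred from the printed trace above and is an assumption; check the structure of execution_trace in your installed version:

```python
# Hypothetical post-processing of captured trace data.
# Assumes each entry is a (step, message) pair; the real structure may differ.
trace_data = [
    ("PLAN", "Need to calculate and then search for information"),
    ("REASON", "TOOL_CALL: calculator(operation='multiply', num1=15, num2=23)"),
    ("ACT", "calculator -> {'result': 345}"),
    ("REFLECT", "Calculation completed, now need to search"),
    ("RESPOND", "15 multiplied by 23 equals 345."),
]

# Keep only the tool-execution steps
acts = [msg for step, msg in trace_data if step == "ACT"]
print(acts)  # ["calculator -> {'result': 345}"]
```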

Error Handling

All tools include built-in validation and graceful error handling:

# Invalid operations are caught and handled
result = agent.run("Calculate abc + def")
# Agent will respond with helpful error message instead of crashing
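At the tool level, graceful handling typically means validating inputs and returning an error payload instead of raising, so the agent can reflect on the failure. A sketch of that pattern follows; the class and error-dict shape are hypothetical, not Cogency's built-in validation contract:

```python
# Sketch of tool-level validation: return an error payload instead of raising.
# The error-dict shape here is an assumption, not Cogency's actual contract.
class SafeCalculator:
    name = "calculator"

    def run(self, num1, num2):
        # Validate inputs before doing any work
        if not all(isinstance(n, (int, float)) for n in (num1, num2)):
            return {"error": "num1 and num2 must be numbers"}
        return {"result": num1 + num2}

calc = SafeCalculator()
print(calc.run(15, 23))        # {'result': 38}
print(calc.run("abc", "def"))  # {'error': 'num1 and num2 must be numbers'}
```

Returning a structured error gives the Reflect step something concrete to react to, instead of the whole run aborting with an exception.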

CLI Usage

Run examples from the command line:

cd python
python examples/basic_usage.py

Development

Running Tests

pytest

Project Structure

cogency/
├── agent.py          # Core agent implementation
├── llm/              # LLM integrations
├── tools/            # Built-in tools
├── utils/            # Utilities and formatting
└── tests/            # Test suite (115+ tests)

Emergent Behavior

The key insight behind Cogency is that clean architectural separation enables emergent reasoning. When agents fail with one tool, they automatically reflect and try different approaches:

# Agent fails with poor search query, reflects, and tries again
result = agent.run("Tell me about recent AI developments")

# Trace shows:
# 1. Initial search with generic query
# 2. Poor results returned
# 3. Agent reflects on failure
# 4. Adapts query strategy
# 5. Succeeds with better results

This behavior emerges from the Plan → Reason → Act → Reflect → Respond loop, not from explicit programming.

License

MIT License - see LICENSE for details.

