
Agentic AI out of the box

Project description

Cogency (Python)

Multi-step reasoning agents with clean architecture

Installation

pip install cogency

Quick Start

from cogency.agent import Agent
from cogency.llm import GeminiLLM
from cogency.tools import CalculatorTool, WebSearchTool, FileManagerTool

# Create agent with multiple tools
llm = GeminiLLM(api_key="your-key")
agent = Agent(
    name="MyAgent", 
    llm=llm, 
    tools=[
        CalculatorTool(), 
        WebSearchTool(), 
        FileManagerTool()
    ]
)

# Execute with tracing
result = agent.run("What is 15 * 23?", enable_trace=True, print_trace=True)
print(result["response"])

Core Architecture

Cogency uses a clean 5-step reasoning loop:

  1. Plan - Decide strategy and whether tools are needed
  2. Reason - Select tools and prepare inputs
  3. Act - Execute tools with validation
  4. Reflect - Evaluate results and decide next steps
  5. Respond - Format clean answer for user

This separation enables emergent reasoning behavior - agents adapt their tool usage based on results without explicit programming.
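The loop above can be sketched in miniature. This is an illustrative, self-contained model of the five phases, not Cogency's internals; the `run_loop` and `Calculator` names here are hypothetical.

```python
import re

# Minimal sketch of the Plan -> Reason -> Act -> Reflect -> Respond loop.
def run_loop(question, tools, max_steps=5):
    """Drive a question through the five phases, recording each step."""
    plan = "use a tool" if any(t.matches(question) for t in tools) else "answer directly"
    history = [("PLAN", plan)]
    answer = None
    for _ in range(max_steps):
        tool = next((t for t in tools if t.matches(question)), None)  # Reason
        if tool is None:
            break
        history.append(("REASON", f"call {tool.name}"))
        result = tool.run(question)                                   # Act
        history.append(("ACT", repr(result)))
        if result is not None:                                        # Reflect
            history.append(("REFLECT", "result looks usable"))
            answer = f"The answer is {result}."                       # Respond
            history.append(("RESPOND", answer))
            break
    return answer, history

class Calculator:
    name = "calculator"
    def matches(self, q):
        return any(ch.isdigit() for ch in q)
    def run(self, q):
        nums = [int(n) for n in re.findall(r"\d+", q)]
        return nums[0] * nums[1] if "*" in q and len(nums) == 2 else None

answer, trace = run_loop("What is 15 * 23?", [Calculator()])
```

Each phase appends to the trace exactly once on the happy path, which is why the trace output later in this page reads as a clean PLAN/REASON/ACT/REFLECT/RESPOND sequence.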

Built-in Tools

  • CalculatorTool - Basic arithmetic operations
  • WebSearchTool - Web search using DuckDuckGo
  • FileManagerTool - File system operations

Adding Custom Tools

Create a new tool by extending the BaseTool class:

from cogency.tools.base import BaseTool

class WeatherTool(BaseTool):
    def __init__(self):
        super().__init__(
            name="weather",
            description="Get current weather for a location"
        )
    
    def run(self, location: str) -> dict:
        # Your implementation here
        return {"temperature": 72, "condition": "sunny"}

Tools are automatically discovered and available to agents.
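The README doesn't show how discovery works internally; one common way to implement "automatically discovered" is a subclass registry, sketched here with a stand-in `BaseTool` rather than Cogency's actual class.

```python
# Illustrative sketch of automatic tool discovery via a subclass registry.
class BaseTool:
    registry = {}

    def __init_subclass__(cls, **kwargs):
        # Every subclass registers itself the moment its class body is executed.
        super().__init_subclass__(**kwargs)
        BaseTool.registry[cls.__name__] = cls

    def __init__(self, name, description):
        self.name = name
        self.description = description

class WeatherTool(BaseTool):
    def __init__(self):
        super().__init__(name="weather", description="Get current weather for a location")

    def run(self, location: str) -> dict:
        # Stubbed response for illustration.
        return {"temperature": 72, "condition": "sunny"}

# An agent can now enumerate every defined tool without an explicit list:
available = [cls().name for cls in BaseTool.registry.values()]
```

With this pattern, merely importing the module that defines a tool is enough to make it visible to agents.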

LLM Support

Currently supports Google Gemini:

from cogency.llm import GeminiLLM

# Simple usage
llm = GeminiLLM(api_key="your-key")

# With key rotation
from cogency.llm import KeyRotator
keys = ["key1", "key2", "key3"]
llm = GeminiLLM(key_rotator=KeyRotator(keys))
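KeyRotator's behavior isn't documented on this page; a plausible reading is simple round-robin rotation across keys, which can be sketched with `itertools.cycle`. The `RoundRobinRotator` name here is hypothetical.

```python
from itertools import cycle

# Sketch of round-robin API-key rotation (assumed behavior, not Cogency's code).
class RoundRobinRotator:
    def __init__(self, keys):
        if not keys:
            raise ValueError("at least one API key is required")
        self._keys = cycle(keys)

    def next_key(self):
        # Each call hands out the next key, wrapping around at the end.
        return next(self._keys)

rotator = RoundRobinRotator(["key1", "key2", "key3"])
picks = [rotator.next_key() for _ in range(4)]  # key1, key2, key3, key1
```

Rotating keys this way spreads requests across quota pools, which is the usual motivation for passing a rotator instead of a single key.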

Execution Tracing

Enable detailed tracing to see your agent's reasoning:

# Simple trace viewing
result = agent.run("Complex task", enable_trace=True, print_trace=True)

# Or capture trace data
result = agent.run("Complex task", enable_trace=True)
trace_data = result["execution_trace"]

Example trace output:

--- Execution Trace (ID: abc123) ---
PLAN     | Need to calculate and then search for information
REASON   | TOOL_CALL: calculator(operation='multiply', num1=15, num2=23)
ACT      | calculator -> {'result': 345}
REFLECT  | Calculation completed, now need to search
REASON   | TOOL_CALL: web_search(query='AI developments 2025')
ACT      | web_search -> {'results': [...]}
REFLECT  | Found relevant search results
RESPOND  | 15 multiplied by 23 equals 345. Recent AI developments include...
--- End Trace ---
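The exact shape of `result["execution_trace"]` isn't documented here; assuming each entry carries a step name and a message (an assumption, mirrored in the sample data below), the captured trace can be summarized programmatically:

```python
from collections import Counter

# Hypothetical trace structure: one dict per step, matching the printed output above.
trace_data = [
    {"step": "PLAN", "message": "Need to calculate and then search for information"},
    {"step": "REASON", "message": "TOOL_CALL: calculator(operation='multiply', num1=15, num2=23)"},
    {"step": "ACT", "message": "calculator -> {'result': 345}"},
    {"step": "REFLECT", "message": "Calculation completed, now need to search"},
    {"step": "RESPOND", "message": "15 multiplied by 23 equals 345."},
]

# Count how often each phase fired and pull out the tool calls.
step_counts = Counter(entry["step"] for entry in trace_data)
tool_calls = [e["message"] for e in trace_data if e["step"] == "REASON"]
```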

Error Handling

All tools include built-in validation and graceful error handling:

# Invalid operations are caught and handled
result = agent.run("Calculate abc + def")
# Agent will respond with helpful error message instead of crashing
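One common pattern behind this kind of graceful handling is to wrap each tool invocation so failures become structured error payloads the agent can reflect on, instead of exceptions that crash the run. The `safe_run` helper below is a sketch of that pattern, not Cogency's implementation.

```python
def safe_run(tool_fn, **kwargs):
    """Invoke a tool, converting any exception into an error payload."""
    try:
        return {"ok": True, "result": tool_fn(**kwargs)}
    except Exception as exc:  # the agent sees the error and can choose a new approach
        return {"ok": False, "error": f"{type(exc).__name__}: {exc}"}

def add(num1, num2):
    return int(num1) + int(num2)

good = safe_run(add, num1=15, num2=23)       # succeeds with a result
bad = safe_run(add, num1="abc", num2="def")  # ValueError captured as an error payload
```

Because the Reflect step receives the error payload as ordinary data, "Calculate abc + def" produces a helpful message rather than a traceback.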

CLI Usage

Run examples from the command line:

cd python
python examples/basic_usage.py

Development

Running Tests

pytest

Project Structure

cogency/
├── agent.py          # Core agent implementation
├── llm/              # LLM integrations
├── tools/            # Built-in tools
├── utils/            # Utilities and formatting
└── tests/            # Test suite (115+ tests)

Emergent Behavior

The key insight behind Cogency is that clean architectural separation enables emergent reasoning. When agents fail with one tool, they automatically reflect and try different approaches:

# Agent fails with poor search query, reflects, and tries again
result = agent.run("Tell me about recent AI developments")

# Trace shows:
# 1. Initial search with generic query
# 2. Poor results returned
# 3. Agent reflects on failure
# 4. Adapts query strategy
# 5. Succeeds with better results

This behavior emerges from the Plan → Reason → Act → Reflect → Respond loop, not from explicit programming.

License

MIT License - see LICENSE for details.



Download files

Download the file for your platform.

Source Distribution

cogency-0.2.1.tar.gz (18.5 kB)


Built Distribution


cogency-0.2.1-py3-none-any.whl (25.1 kB)


File details

Details for the file cogency-0.2.1.tar.gz.

File metadata

  • Download URL: cogency-0.2.1.tar.gz
  • Size: 18.5 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: poetry/2.1.3 CPython/3.12.10 Darwin/24.5.0

File hashes

Hashes for cogency-0.2.1.tar.gz:

  • SHA256: c54e31a21d341b22aa578f3e580cb1701a3bb815307661a46e69c313e5ffabb4
  • MD5: ed76691fca0d056fd62f24e93d53e7df
  • BLAKE2b-256: 913a584d4e79e9bfe632d13f65e6c2b2fd2414ea3dc30a8ecd7f498f2ed70030


File details

Details for the file cogency-0.2.1-py3-none-any.whl.

File metadata

  • Download URL: cogency-0.2.1-py3-none-any.whl
  • Size: 25.1 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: poetry/2.1.3 CPython/3.12.10 Darwin/24.5.0

File hashes

Hashes for cogency-0.2.1-py3-none-any.whl:

  • SHA256: cef8631d630179318800399c04157b6575dbd619ee15d22e44e9d06abd3ed577
  • MD5: c791362d611a4626a1d7a2fc74d29f6d
  • BLAKE2b-256: 732654fe38eb887ca75dde9cc55c0f2d699b452be5f14dfe3c45f874dc750c1d

