
Axi-EasyAgent

Chinese Documentation | English Documentation

A lightweight Python AI agent framework with conversation management, tool calling, and memory persistence capabilities.

How lightweight is it?

(Package size comparison image: 001.png)

Why Axi-EasyAgent?

Do you know how much disk space a typical AI library takes these days? Up to 200 MB! That's larger than a web browser!
If all you want is for an AI model to call your functions, how much of that 200 MB do you actually need? The answer is right here.

Features

  • 🤖 Smart Conversations: Streaming conversation support based on OpenAI-compatible APIs
  • 🔧 Tool Calling: Automatically convert Python functions into AI-callable tools
  • 💾 Memory Management: Built-in conversation memory system with persistent storage and auto-compression
  • ⚡ Async Processing: Full async programming support for improved response efficiency
  • 🔄 Streaming Output: Real-time streaming responses for enhanced user experience

Installation

pip install axi-easyagent

Quick Start

1. Environment Configuration

Set the following environment variables, or hardcode the values in your code (not recommended):

OPENAI_BASE_URL=https://dashscope.aliyuncs.com/compatible-mode/v1
OPENAI_API_KEY=your-api-key-here

Note: easyagent does not automatically read .env files. Please load them yourself.
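Since the library does not read .env files for you, you can use a package such as python-dotenv, or load the file with a few lines of standard-library code. A minimal sketch (it handles only simple `KEY=VALUE` lines and `#` comments, with no quoting or multiline support):

```python
import os

def load_env(path: str = ".env") -> None:
    """Minimal .env loader: KEY=VALUE lines, '#' comments, no quoting rules."""
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            # Don't overwrite variables already set in the real environment.
            os.environ.setdefault(key.strip(), value.strip())

if os.path.exists(".env"):
    load_env()  # Agent then picks up OPENAI_BASE_URL / OPENAI_API_KEY
```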

2. Quick Start Example

import asyncio
from typing import Annotated
from easyagent import Agent

async def get_weather(city: str) -> str:
    """Get weather information"""
    return f"The weather in {city} is sunny"

def get_weather_detail(city: Annotated[str, "Can be precise to district, e.g.: Shanghai/Qingpu District"]) -> str:
    """Get detailed weather information"""
    return f"Detailed weather info for {city}: Temperature 25°C, Humidity 60%"

async def main():
    agent = Agent(
        "deepseek-v4-flash",
        tools=[get_weather, get_weather_detail],
        prompt="Keep responses brief"
    )
    
    while (msg := input("You: ")) != "q":
        async for output in agent.chat(msg):
            print(output, end="")
        print()

if __name__ == "__main__":
    asyncio.run(main())

Step-by-Step Guide

Step 1: Create Memory

Memory is used to store conversation history. You can create a new memory or load existing memory:

from easyagent import Memory

# Create new memory
memory = Memory()
memory.store_turn("Hello!", "Hi! How can I help you?")

# Or load from file
import os

if os.path.exists("./memory.json"):
    memory = Memory.load("./memory.json")

Memory features:

  • Automatically compresses when exceeding length limit (default: 70 messages)
  • Supports saving/loading JSON files
  • Can inherit and customize memory management via IMemory interface

Step 2: Create Agent

Create an AI assistant with various configuration options:

from easyagent import Agent

# Basic creation
agent = Agent("deepseek-v4-flash")

# With memory
agent = Agent("deepseek-v4-flash", memory=memory)

# With system prompt
agent = Agent("deepseek-v4-flash", prompt="You are a helpful assistant")

# With tools
async def get_weather(city: str) -> str:
    """Get weather information"""
    return f"The weather in {city} is sunny"

agent = Agent("deepseek-v4-flash", tools=[get_weather])

# Full configuration
agent = Agent(
    model="deepseek-v4-flash",
    base_url="https://api.example.com/v1",
    api_key="your-api-key",
    memory=memory,
    prompt="Keep responses brief",
    tools=[get_weather],
    max_tool_call=20  # Maximum tool call limit
)

Step 3: Use chat Method

The chat method is the simplest way to interact with the agent, returning only the final output content:

import asyncio
from easyagent import Agent

async def main():
    agent = Agent("deepseek-v4-flash", prompt="Keep responses brief")
    
    while (msg := input("You: ")) != "q":
        async for output in agent.chat(msg):
            print(output, end="")
        print()

if __name__ == "__main__":
    asyncio.run(main())

Features:

  • Returns streaming text output
  • Automatically handles tool calls in the background
  • Suitable for simple conversation scenarios
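Because chat is an async generator, the streamed chunks can be collected into a full reply instead of printed directly. A self-contained sketch of the consumption pattern, with a stand-in generator in place of agent.chat (the chunked strings are made up for illustration):

```python
import asyncio
from typing import AsyncIterator

async def fake_chat(message: str) -> AsyncIterator[str]:
    # Stand-in for agent.chat: yields the reply in small chunks.
    for chunk in ("Hello", ", ", "world!"):
        yield chunk

async def collect(message: str) -> str:
    parts = []
    async for output in fake_chat(message):
        parts.append(output)  # or print(output, end="") for live streaming
    return "".join(parts)

reply = asyncio.run(collect("hi"))
print(reply)  # Hello, world!
```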

Step 4: Use execute Method

The execute method provides detailed control over the entire response process, returning AgentEvent objects for each step:

import asyncio
from easyagent import Agent, AgentEvent, StepType

async def main():
    agent = Agent("deepseek-v4-flash", prompt="Keep responses brief")
    
    while (msg := input("You: ")) != "q":
        last_type = None
        async for step in agent.execute(msg):
            # Handle reasoning content
            if step.type == StepType.REASONING:
                if last_type != StepType.REASONING:
                    print()
                    print("Thinking: ", end="")
                print(step.reasoning, end="")
            
            # Handle output content
            elif step.type == StepType.CONTENT:
                if last_type != StepType.CONTENT:
                    print()
                    print("Output: ", end="")
                print(step.content, end="")
            
            # Handle tool call
            elif step.type == StepType.TOOL_CALL:
                print()
                print(f"Tool Call: {step.func.__name__}({step.args})", end="")
            
            # Handle tool result
            elif step.type == StepType.TOOL_RESULT:
                if step.error:
                    print(f" - Error: {step.error}")
                else:
                    print(f" - Result: {step.result}")
            
            last_type = step.type
        print()

if __name__ == "__main__":
    asyncio.run(main())

Event types (StepType):

  • REASONING: Model thinking process
  • CONTENT: Model output content
  • TOOL_CALL: Tool being called
  • TOOL_RESULT: Tool execution result (success or error)

Advantages:

  • Real-time display of thinking process
  • Monitor tool call details
  • Handle errors gracefully
  • Suitable for complex scenarios requiring detailed control

Core Components

Agent Class

The core agent class responsible for managing conversations, tool calls, and memory.

Parameters:

  • model (str): Model name
  • base_url (str, optional): API base URL
  • api_key (str, optional): API key
  • memory (IMemory, optional): Memory instance (defaults to Memory())
  • prompt (str, optional): System prompt
  • client (httpx.AsyncClient, optional): HTTP client
  • tools (list[Callable | dict], optional): Available tools list
  • other_params (dict, optional): Additional request parameters
  • max_tool_call (int): Maximum tool call limit (default: 20)

Main Methods:

  • chat(message, *, tool_choice="auto"): Async generator that yields content strings
  • execute(message, *, tool_choice="auto", save_memory=True): Async generator that yields AgentEvent objects with detailed execution information

AgentEvent Class

A dataclass representing a single event in the model's response process.

Attributes:

  • type (StepType): Event type (REASONING, TOOL_CALL, TOOL_RESULT, CONTENT)
  • reasoning (str | None): Model's thinking/reasoning content
  • content (str | None): Model's output content
  • func (Callable | None): The tool being called
  • args (dict | None): Arguments passed to the tool
  • result (Any | None): Result from tool execution
  • error (Exception | None): Error from tool execution

StepType Enum

Enumeration of event types in the response process:

  • REASONING: Model is thinking/reasoning
  • TOOL_CALL: Tool is being called
  • TOOL_RESULT: Tool execution completed (with result or error)
  • CONTENT: Model output content

Memory Class

Conversation memory management class that inherits from list, supporting message CRUD operations and persistence. If you want to customize memory management, you can inherit from the IMemory interface and implement relevant methods.

Main Methods:

  • store_turn(user: str, assistant: str): Add a user-assistant message pair
  • build_context(query: str, system: str): Build context for the model
  • store(context: IContext): Store context messages into memory
  • compress(): Compress memory by removing reasoning content and tool call records
  • save(file: str): Save memory to JSON file
  • load(file: str): Load memory from JSON file (class method)
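To illustrate the shape of this interface, here is a simplified stand-in (not the library's implementation) that mirrors store_turn/save/load on top of a plain list of message dicts. The JSON layout is an assumption for illustration, not the library's actual on-disk format:

```python
import json

class TinyMemory(list):
    """Illustrative stand-in for easyagent's Memory: a list of message dicts."""

    def store_turn(self, user: str, assistant: str) -> None:
        # One turn = a user message followed by the assistant's reply.
        self.append({"role": "user", "content": user})
        self.append({"role": "assistant", "content": assistant})

    def save(self, file: str) -> None:
        with open(file, "w", encoding="utf-8") as f:
            json.dump(list(self), f, ensure_ascii=False, indent=2)

    @classmethod
    def load(cls, file: str) -> "TinyMemory":
        with open(file, encoding="utf-8") as f:
            return cls(json.load(f))
```

The real Memory additionally builds model context and auto-compresses long histories; this sketch only shows why inheriting from list makes message CRUD and JSON persistence straightforward.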

Exception Classes

  • MaxToolCallError: Raised when the maximum tool call limit is exceeded. Contains the context for potential recovery.
  • ModelResponseError: Raised when the model returns an invalid response. Contains the response, payload, and error message.

Utility Functions

  • build_tool(func: Callable): Converts a Python function to OpenAI API tool format, automatically extracting function signature, type hints, and docstring.
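As a rough illustration of what such a conversion produces, here is a simplified re-implementation sketch (not the library's build_tool; the real type mapping and Annotated-metadata handling are more involved):

```python
import inspect

# Naive Python-type -> JSON-Schema-type mapping; anything else falls back to "string".
_JSON_TYPES = {str: "string", int: "integer", float: "number", bool: "boolean"}

def sketch_build_tool(func) -> dict:
    """Build an OpenAI-style tool schema from a function's signature and docstring."""
    props = {}
    required = []
    for name, param in inspect.signature(func).parameters.items():
        props[name] = {"type": _JSON_TYPES.get(param.annotation, "string")}
        if param.default is inspect.Parameter.empty:
            required.append(name)  # no default value -> required argument
    return {
        "type": "function",
        "function": {
            "name": func.__name__,
            "description": inspect.getdoc(func) or "",
            "parameters": {"type": "object", "properties": props, "required": required},
        },
    }

def get_weather(city: str) -> str:
    """Get weather information"""
    return f"The weather in {city} is sunny"

schema = sketch_build_tool(get_weather)
```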
