
Simple AI Agents

Create simple multi-agent workflows using any LLMs - easy to experiment, use and deploy.

The package extends Simple AI Chat by adding support for 100+ LLM providers, structured responses and multiple agents, similar to Autogen. With out-of-the-box handling of LLM requests, sessions and structured response generation, multi-agent conversations can be easily orchestrated in Python, which manages the control flow. The result is code that is easy to understand and extend, with minimal dependencies.

Note: The package is in active development and the API is subject to change. Feedback and contributions are welcome!

Features

  • Mix and match LLM providers (OpenAI, Hugging Face, Ollama, Anthropic and more!).
  • Create and run chats with only a few lines of code!
  • Integrates with instructor to provide structured responses for almost all models.
  • Supports tool usage in models from OpenAI, Azure, Anthropic, Bedrock, Vertex AI, Grok and Cerebras, as well as selected GitHub, Together AI and Ollama models.
  • Run multiple independent chats at once or create Autogen-like multi-agent conversations.
  • Minimal codebase: no deep code dives needed to figure out what's going on under the hood!
  • Async and streaming for text response and structured response generation.
  • Interactive CLI.

Getting Started

Install the package using pip:

pip install simple-ai-agents

Set up the environment variables for the LLM providers you want to use. Refer to .env.example for an example of the variables to add:

OPENAI_API_KEY=<your openai api key>
OPENAI_ORGANIZATION=<your openai organization>
HUGGINGFACE_API_KEY=<your huggingfacehub api token>
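
If you keep these variables in a local .env file, one convenient way to load them into the process environment is python-dotenv. This is a sketch of an assumed setup, not a requirement of the library:

# Assumes python-dotenv is installed: pip install python-dotenv
from dotenv import load_dotenv

# Load variables from a local .env file into the process environment,
# where the underlying provider clients look for API keys.
load_dotenv()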

Use the client library to easily call various LLM providers. The simplest way to get started is to create a ChatLLMSession. You can configure various options with the LLMOptions typed dictionary:

from simple_ai_agents.chat_session import ChatLLMSession
from simple_ai_agents.models import LLMOptions

openai = LLMOptions(model="gpt-4o-mini", temperature=0.7)
sess = ChatLLMSession(llm_options=openai)
prompt = "Why is the sky blue?"
response = sess.gen(prompt)

Overview of the main methods (a short usage sketch follows the list):

  • gen: Generate a response synchronously. Supports passing in tools for tool usage.
  • gen_async: Asynchronous version of gen.
  • gen_model: Generate a structured response. Selects the best option for the provider and model.
  • gen_model_async: Asynchronous version of gen_model.
  • stream: Stream the response. Supports passing in tools for tool usage.
  • stream_async: Asynchronous version of stream.
  • stream_model: Stream the structured response. Selects the best option for the provider and model.
  • stream_model_async: Asynchronous version of stream_model.
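
As a quick illustration of the streaming and async variants, here is a minimal sketch building on the session above. The exact chunk format yielded by stream is an assumption here; adjust the printing to match what your provider returns.

import asyncio

from simple_ai_agents.chat_session import ChatLLMSession
from simple_ai_agents.models import LLMOptions

options = LLMOptions(model="gpt-4o-mini", temperature=0.7)
sess = ChatLLMSession(llm_options=options)

# Stream the reply as it is generated (assumes stream yields text chunks).
for chunk in sess.stream("Why is the sky blue?"):
    print(chunk, end="", flush=True)

# The async variants mirror their sync counterparts and can be awaited.
async def main() -> None:
    reply = await sess.gen_async("And why are sunsets red?")
    print(reply)

asyncio.run(main())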

Creating Agents

ChatAgent extends ChatLLMSession by adding simple session handling capabilities, console printing and some syntactic glue that lets you call the object directly as a proxy for the gen method. This makes it easy to create chatbots and multi-agent conversations.

from simple_ai_agents.chat_agent import ChatAgent

chatbot = ChatAgent(system="You are a helpful assistant")
chatbot("Generate 2 random numbers between 0 to 100", console_output=True)
chatbot("Which of the two numbers is bigger?", console_output=True)

console_output provides a convenient way to print the chatbot's response to the console. By default, the chatbot uses the openai provider. To use a different provider, pass the llm_options argument to the ChatAgent constructor. For example, to use the mistral model from ollama:

from simple_ai_agents.models import LLMOptions

mistral: LLMOptions = {
    "model": "ollama/mistral",
    "temperature": 0.7,
    "api_base": "http://localhost:11434",
}
chatbot = ChatAgent(system="You are a helpful assistant", llm_options=mistral)
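
Because each ChatAgent keeps its own session, an Autogen-style exchange is just plain Python managing the control flow between agents. A minimal two-agent sketch (the system prompts and fixed round count are illustrative, and calling an agent is assumed to return the generated text, as gen does):

from simple_ai_agents.chat_agent import ChatAgent

writer = ChatAgent(system="You are a concise technical writer.")
critic = ChatAgent(system="You review text and suggest one concrete improvement.")

draft = writer("Write a one-sentence summary of why the sky is blue.")
for _ in range(2):  # a fixed number of rounds keeps the control flow simple
    feedback = critic(f"Review this summary:\n{draft}")
    draft = writer(f"Revise the summary using this feedback:\n{feedback}")
print(draft)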

The CLI offers an easy way to start a local chatbot session similar to Simple AI Chat or Ollama but with support for almost all LLM providers.

See the examples folder for other use cases.

CLI

Ensure that you have the necessary environment variables set up. Usage:

aichat [OPTIONS] [PROMPT]

The CLI supports the following options (a combined example follows the list):

  • --prime: Prime the chatbot with a prompt before starting the chat.
  • --character: The name of the chat agent.
  • --model: Specify the LLM model, e.g. gpt-4o-mini or ollama/mistral. Defaults to gpt-4o-mini.
  • --temperature: Specify the temperature for the LLM model. Defaults to 0.7.
  • --system: System prompt.
  • --help: Show usage information.
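
Options can be combined; for example (the prompt and values here are illustrative):

aichat --model ollama/mistral --temperature 0.3 --system "You are a succinct assistant" "Explain what a context window is"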

Interactive open-ended chat

aichat --prime

Pass in prompts as arguments

Uses a local instance of the mistral model from ollama to summarize the README file.

cat README.md | aichat --model ollama/mistral "Summarize this file"

Looking for an option that is not available? Open an issue or submit a PR!

Structured responses

To generate a structured response, define a Pydantic model and pass it to the gen_model method:

from pydantic import BaseModel, Field

from simple_ai_agents.chat_agent import ChatAgent
from simple_ai_agents.models import LLMOptions

class Person(BaseModel):
    name: str = Field(description="Name of the person")
    age: int = Field(description="Age of the person")

openai = LLMOptions(model="gpt-4o-mini", temperature=0.7)
chatbot = ChatAgent(llm_options=openai)
parsed = chatbot.gen_model(
    "Extract `My name is John and I am 18 years old` into JSON",
    response_model=Person
)

The package automatically selects the best mode to generate JSON for a given provider and model. For the highest quality and reliability of structured responses, choose a model that supports tool usage.

Tool usage is currently supported in OpenAI, Azure, Anthropic, Bedrock, Vertex AI and Grok models. Selected Ollama and Together AI models also support structured response generation. For other providers and models, the structured response is obtained by parsing the returned message, which may reduce its quality and accuracy.
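
The async variant accepts the same arguments; a minimal sketch reusing the Person model and chatbot from above:

import asyncio

async def extract() -> None:
    # gen_model_async mirrors gen_model, per the method overview above.
    person = await chatbot.gen_model_async(
        "Extract `Jane is 42 years old` into JSON",
        response_model=Person,
    )
    print(person)

asyncio.run(extract())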


Development

Poetry

Package management is handled by poetry. Install it by following the official poetry installation instructions.

Installing packages

After installing poetry, install the project packages by running:

poetry install

Setting up pre-commit hooks

Pre-commit hooks automatically process your code prior to commit, enforcing consistency across the codebase without the need for manual intervention. Currently, these hooks are used:

  • trailing-whitespace: trims trailing whitespace
  • requirements-txt-fixer: sorts entries in requirements.txt
  • black: to format the code consistently
  • isort: to sort all imports consistently
  • flake8: as a linter

All commits should pass the hooks above before being pushed.

# Install the configured hooks
poetry run pre-commit install

# Run the hooks on all files in the repo
poetry run pre-commit run -a

If there are any failures, resolve them first, then stage and commit as usual. Committing will automatically trigger these hooks, and the commit will fail if there are unresolved errors.

Run the tests

poetry run pytest
