
Agent-Assembly-Line

Build AI agents in Python. A library for developers.


Agent-Assembly-Line is a framework for building AI agents that can be easily embedded into existing software stacks. It includes ready-to-use components for task-based and conversational agents.

Agent-Assembly-Line offers components that simplify the setup of agents and multi-agent chains, and it works with both local and cloud-based LLMs.

Agent-Assembly-Line supports:

  • Task-based Agents (Functional Agents)
  • Conversational Agents
  • Local memory
  • RAG for local documents or remote endpoints
  • Websites, RSS, JSON, PDF, and more
  • Local LLMs as well as cloud-based LLMs: Ollama and ChatGPT
  • Streaming mode and regular runs
  • Inline context
  • Micros: small, single task agents that handle distinct functionalities and can be chained
  • cli-agents: agents that can be chained on the command line
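The RAG bullet above can be illustrated with a toy sketch (plain Python, not the library's implementation): retrieve the most relevant document for a question, then inline it into the prompt that is sent to the LLM.

```python
def retrieve(question, documents):
    """Score each document by word overlap with the question; return the best match.
    Real RAG uses embeddings and a vectorstore, but the retrieval idea is the same."""
    q_words = set(question.lower().split())
    return max(documents, key=lambda d: len(q_words & set(d.lower().split())))

def build_prompt(question, context):
    # Inline the retrieved text so the LLM can answer from it.
    return f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"

documents = [
    "Helsinki is the capital of Finland.",
    "Ollama runs large language models locally.",
]
question = "What is the capital of Finland?"
prompt = build_prompt(question, retrieve(question, documents))
```

The prompt produced this way is what would be handed to the model; the library additionally supports storing such texts in a vectorstore instead of inlining them.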

Agent-Assembly-Line comes with examples such as Semantic Unittests and Diff Analysis, a demo chat app, and tests.


Example

Create an agent that fetches the weather for Helsinki:

from agent_assembly_line import FmiWeatherAgent

agent = FmiWeatherAgent("Helsinki", forecast_hours=24, mode="local")
result = agent.run()

Output: "The rest of today in Helsinki will be sunny and mild with temperatures around 4 degrees Celsius. Expect clear skies throughout the evening and overnight."

Getting Started

Install Python Environment

/usr/bin/python3 -m venv .venv
source .venv/bin/activate
pip install -r requirements.txt

Tested with Python 3.9.6.

Install Service

sudo cp agent-assembly-line-service.service /etc/systemd/system/agent-assembly-line.service
sudo systemctl enable agent-assembly-line.service
systemctl is-enabled agent-assembly-line.service


Check the service logs with:

journalctl -u agent-assembly-line.service

Tests

Run all tests:

make test

or

python -m unittest tests/test.py

Or run a single test:

python -m unittest tests.async.test_memory.TestMemory.test_save_messages

Build and Run the Demo App

The demo app provides a UI that talks to the REST API and can handle chat-based conversations, functional agents, memory and summaries, file uploads, and URLs.

cd app/
npm run electron

Setup an LLM

You can use a local LLM as well as cloud-based LLMs. Currently supported are Ollama and OpenAI, with more to come.

Note:

Choosing between a local or cloud LLM depends on your specific needs: local LLMs offer greater control, privacy, and potentially lower costs for frequent use, while cloud LLMs provide easy scalability, access to powerful models, and reduced maintenance overhead. Consider your requirements for data security, performance, and budget when making your decision.

Ollama

To use an Ollama LLM, use the ollama identifier:

ollama:gemma2:latest
ollama:codegemma:latest

Make sure you have Ollama installed on your machine:

Download Ollama

Then run it once in your console; it will download the model:

ollama run gemma2

Important: you also need to pull the embeddings model:

ollama pull nomic-embed-text

You might also want to set the OLLAMA_HOST environment variable in case your Ollama instance isn't listening on the default 127.0.0.1:11434.
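For example, if your Ollama server runs on another machine (the host and port below are placeholders):

```shell
# Point the client at a non-default Ollama server (placeholder address)
export OLLAMA_HOST=192.168.1.50:11434
```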

ChatGPT/OpenAI

You can use ChatGPT as an LLM by using the openai identifier:

openai:gpt-3.5-turbo-instruct
openai:gpt-4o

You need to set your OpenAI API Key before running:

export OPENAI_API_KEY=<your key here>

Note:

Using the OpenAI API may incur costs. Please refer to the OpenAI pricing page for more details.

Usage

Simple API

Create an agent:

agent = Agent("aethelland-demo")
question = "How many people live in the country?"
text = agent.run(question)

Agent objects can either read their configuration from a YAML config file, as in the example, or take a dictionary:

config = Config()
config.load_conf_dict({
    "name": "my-demo",
    "llm": {
        "model-identifier": "ollama:gemma:latest",
        "embeddings": "nomic-embed-text"
    },
})
agent = Agent(config=config)
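A YAML config file with the same fields might look like this (a sketch mirroring the dictionary keys above; the full schema may contain more fields):

```yaml
name: my-demo
llm:
  model-identifier: "ollama:gemma:latest"
  embeddings: "nomic-embed-text"
```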

The agent supports both streaming and synchronous runs and can store a history, which is useful for chat-based applications. Texts from documents, URLs, or strings can be stored in vectorstores or used as inline context. Inline context directly provides the text to the LLM prompt but is limited by the LLM's context window.
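The difference between the two run modes can be sketched in plain Python (a mock, not the library's actual API): a synchronous run blocks until the whole answer is ready, while a streaming run yields chunks as they are produced, which lets a chat UI render the answer incrementally.

```python
def run_sync(prompt):
    # Synchronous run: the caller waits for the complete answer.
    return "The sky is blue."

def run_stream(prompt):
    # Streaming run: chunks are yielded as soon as they are generated.
    for chunk in ["The ", "sky ", "is ", "blue."]:
        yield chunk

full = run_sync("Why is the sky blue?")
streamed = "".join(run_stream("Why is the sky blue?"))
# Both modes produce the same final text; only the delivery differs.
```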

Micros

Micros are functional agents that each serve one particular job and can be used in pipes or chains. There are currently agents for analyzing diffs, semantic unittest validation, text summarization, and handling websites.

Website Summary

agent = WebsiteSummaryAgent(url)
summary = agent.run()

Semantic Unittests

class TestTextValidator(SemanticTestCase):
    def test_semantic(self):
        self.assertSemanticallyEqual("Blue is the sky.", "The sky is blue.")

Diff Analysis

This example first creates a detailed textual summary; in a second step, it produces a shorter summary that can be used, for example, in commit messages.

agent = DiffDetailsAgent(diff_text)
detailed_answer = agent.run()

sum_agent = DiffSumAgent(detailed_answer)
sum_answer = sum_agent.run()
Or on the command line:

git diff HEAD | cli_agents/diff_details

Text Summary

Generic text summarization. You can choose a local LLM or a cloud-based one (ChatGPT).

agent = SumAgent(text, mode='local')
result = agent.run()

To learn more about how to use Agent-Assembly-Line, read the tests, the demo app, and the examples.

The demo app also shows how the library can be used in a chat application. See also Build and run the demo app.

Multi-Agent Chains

Micros can be combined to build complex multi-agent chains.

Example:

git diff --cached | examples/diff_analysis.py | examples/summarize_text.py

Or in Python:

agent = DiffDetailsAgent(diff_text)
detailed_answer = agent.run()

sum_agent = DiffSumAgent(detailed_answer)
sum_answer = sum_agent.run("Please summarize these code changes in 2-3 sentences. The context is only for the bigger picture.")

sum_agent = DiffSumAgent(sum_answer)
sum_answer = sum_agent.run()

Contributing

We welcome contributions to the Agent-Assembly-Line project! To contribute, please follow these steps:

  • Create a new branch for your feature or bugfix.
  • Make your changes and commit them to the branch.
  • Push your changes and create a PR.
  • Discuss if needed.

Reporting Issues

If you encounter any issues or have feature requests, please open an issue on GitHub. Provide as much detail as possible to help us understand and resolve the issue quickly.

License

This project is licensed under the Apache 2 License. See the LICENSE file for details.
