
Nagato-AI is an intuitive AI Agent library that works across multiple LLMs

Project description


Nagato-AI

Official Nagato AI Poster

Nagato-AI is an intuitive AI Agent library that works across multiple LLMs.

Currently it supports OpenAI's GPT and Anthropic's Claude LLMs. You can create agents from any of these model families and combine them to build the most effective AI Agent system you desire.

The name Nagato is inspired by the popular anime Naruto. In Naruto, Nagato is a very powerful ninja who possesses special eyes (Rinnegan) that give him immense powers. Nagato's powers enable him to control multiple bodies endowed with different abilities. Nagato is also able to see through the eyes of all the bodies he controls, thereby minimising blindspots that opponents may want to exploit.

Therefore, you can think of Nagato as the linchpin that summons and coordinates AI Agents which have a specific mission to complete.

Note that from now on I will use the terms Nagato and Nagato-AI interchangeably to refer to this library.

How to use Nagato-AI

Installing Poetry dependency management

First, ensure that you have the Poetry packaging/dependency manager installed on your machine. Once Poetry is installed, simply run the following command in your terminal (from the root folder of the nagato code base) to install all required dependencies:

poetry install
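
Alternatively, if you just want to use Nagato as a dependency of your own project rather than work on its code base, the package published on PyPI is named nagatoai_core (see the files listed further down this page), so you should be able to add it like any other dependency, for example:

poetry add nagatoai_core

Or, with pip:

pip install nagatoai_core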

Running Python code

Assuming your program's entrypoint is defined in a file called main.py, you can run it by typing the following command:

poetry run python main.py

Examples of AI Agent configuration

Nagato is built with flexibility at its core, so you can program it using your paradigm of choice. However, these are some of the ways I've seen people use Nagato so far.

Coordinator, worker, and critic agents

In this configuration we have the following:

  • 🎯 Coordinator: breaks down a problem statement (from stdin) into an objective and suggests tasks
  • 📚 Researcher: works on a task by performing research
  • ✅ Critic: evaluates whether the task was completed

An example setup for this configuration could look like this:

openai_client = OpenAI(
    organization="<org-id>",
    api_key="<api-key>",
)

anthropic_api_key = "<api-key>"
anthropic_client = Anthropic(api_key=anthropic_api_key)

coordinator_agent: Agent = AnthropicAgent(
    anthropic_client,
    "claude-3-opus-20240229",
    "Coordinator",
    COORDINATOR_SYSTEM_PROMPT,
    "Coordinator Agent",
)

researcher_agent: Agent = AnthropicAgent(
    anthropic_client,
    "claude-3-sonnet-20240229",
    "Researcher",
    RESEARCHER_SYSTEM_PROMPT,
    "Researcher Agent",
)

critic_agent: Agent = AnthropicAgent(
    anthropic_client,
    "claude-3-opus-20240229",
    "Critic",
    CRITIC_SYSTEM_PROMPT,
    "Critic Agent",
)

...

The full-blown example is available here
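
To give a rough idea of how the flow might continue after the agents are created (the full example linked above is the actual reference), here is a purely illustrative sketch. Only the AnthropicAgent setup above and the chat(prompt, task, temperature, max_tokens) call shown in the next configuration come from this page; the None task argument and the parse_tasks helper are hypothetical.

# Purely illustrative sketch -- not the library's documented orchestration code.
# Read the problem statement from stdin and hand it to the coordinator.
problem_statement = input("What problem should the agents solve? ")

# The coordinator's system prompt (COORDINATOR_SYSTEM_PROMPT) instructs it to
# break the problem down into an objective and tasks. Passing None as the task
# argument here is an assumption made for illustration.
coordinator_exchange = coordinator_agent.chat(problem_statement, None, 0.7, 2000)

# parse_tasks is a hypothetical helper that turns the coordinator's reply into
# Task objects; the researcher and critic then process them as shown in the
# next configuration below.
task_list: List[Task] = parse_tasks(coordinator_exchange)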

Worker and critic agents

In this configuration we directly submit as input an objective and a set of tasks needed to complete it. Therefore, we can skip the coordinator agent and have the worker agent(s) work on the tasks, while the critic agent evaluates whether the work carried out meets the requirements originally specified.

task_list: List[Task] = [
    Task(
        goal="Fetch last 100 user tweets",
        description="Fetch the tweets from the user using the Twitter API. Limit the number of tweets fetched to 100 only.",
    ),
    Task(
        goal="Perform sentiment analysis on the tweets",
        description="Feed the tweets to the AI Agent to analyze the overall sentiment across tweets. Possible sentiment values are: Positive, Negative, or Neutral.",
    ),
]

openai_client = OpenAI(
    organization="<org-id>",
    api_key="<api-key>",
)

anthropic_api_key = "<api-key>"
anthropic_client = Anthropic(api_key=anthropic_api_key)

researcher_agent: Agent = AnthropicAgent(
    anthropic_client,
    "claude-3-sonnet-20240229",
    "Researcher",
    RESEARCHER_SYSTEM_PROMPT,
    "Researcher Agent",
)

critic_agent: Agent = AnthropicAgent(
    anthropic_client,
    "claude-3-opus-20240229",
    "Critic",
    CRITIC_SYSTEM_PROMPT,
    "Critic Agent",
)

for task in task_list:
    # Insert the task into the prompt
    worker_prompt = ...

    worker_exchange = researcher_agent.chat(worker_prompt, task, 0.7, 2000)

    # insert the response from the agent into prompt for the critic
    critic_prompt = ...

    critic_exchange = critic_agent.chat(critic_prompt, task, 0.7, 2000)

    # Evaluate whether the task was completed based on the answer from the critic agent
    ...
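
For that last evaluation step, one simple and purely illustrative option is to have the critic include an explicit marker in its verdict (e.g. via CRITIC_SYSTEM_PROMPT) and check for it. How you read the critic's reply out of critic_exchange depends on the library's exchange type, so the str() call below is only a stand-in.

    # (continuing inside the task loop above)
    # str(critic_exchange) is a stand-in: the real way to extract the critic's
    # reply depends on the object returned by chat().
    critic_reply = str(critic_exchange)

    if "TASK COMPLETED" in critic_reply.upper():
        print(f"Task '{task.goal}' accepted by the critic")
    else:
        print(f"Task '{task.goal}' needs another pass; consider feeding the critic's feedback back to the researcher")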

What's next

Nagato is still in its very early development phase. This means that I am likely to introduce breaking changes over the next iterations of the library.

Moreover, there is a lot of functionality currently missing from Nagato. I will remedy this over time. There is no official roadmap per se, but I plan to add the following capabilities to Nagato:

  • introduction of tools (e.g. surfing the web)
  • support for function calling (complement to adding tools)
  • support for other LLMs beyond OpenAI's and Anthropic's
  • short/long-term memory for agents
  • LLMOps instrumentation

How you can support

I'd be grateful if you could do some of the following to support this project:

  • star this repository on GitHub
  • follow me on X/Twitter
  • raise GitHub issues if you've come across any bugs while using Nagato or would like a feature to be added to Nagato

Project details


Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

nagatoai_core-0.1.0.tar.gz (15.0 kB)

Uploaded Source

Built Distribution

nagatoai_core-0.1.0-py3-none-any.whl (16.1 kB)

Uploaded Python 3

File details

Details for the file nagatoai_core-0.1.0.tar.gz.

File metadata

  • Download URL: nagatoai_core-0.1.0.tar.gz
  • Size: 15.0 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: poetry/1.8.0 CPython/3.12.0 Darwin/23.1.0

File hashes

Hashes for nagatoai_core-0.1.0.tar.gz

  • SHA256: f92d01aa9c897500ed53181040c9acc732389984dff0b322731f228c2a4ef8bf
  • MD5: 7fd5eb23ea752f07a7c922efdaccc78d
  • BLAKE2b-256: dae70cb3139715dfc48f712340c3888db7e46f8f7d8769a04a16e0829fceec7f

See more details on using hashes here.
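
For example, here is a quick way to check the SHA256 digest of the downloaded source archive from Python; the file name and expected digest are the ones listed above.

import hashlib

# Expected SHA256 digest published above for the source distribution
expected = "f92d01aa9c897500ed53181040c9acc732389984dff0b322731f228c2a4ef8bf"

with open("nagatoai_core-0.1.0.tar.gz", "rb") as f:
    digest = hashlib.sha256(f.read()).hexdigest()

print("OK" if digest == expected else "MISMATCH")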

File details

Details for the file nagatoai_core-0.1.0-py3-none-any.whl.

File metadata

  • Download URL: nagatoai_core-0.1.0-py3-none-any.whl
  • Size: 16.1 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: poetry/1.8.0 CPython/3.12.0 Darwin/23.1.0

File hashes

Hashes for nagatoai_core-0.1.0-py3-none-any.whl

  • SHA256: 8d118942e49a973f4f4c309156a5db40790ec00ec88c203f242e3d8dbb61ffcf
  • MD5: 3384eb7742814481a585a32f9ce232f0
  • BLAKE2b-256: 3ea60e039bbade0a581bb775b5fb4f27741a1031e5fed9d2e38c9cb1c664c494

See more details on using hashes here.
