
Python SDK for interacting with Naptha nodes and building distributed AI modules

Project description

Visit naptha.ai · Discord

             █▀█                  
          ▄▄▄▀█▀            
          █▄█ █    █▀█        
       █▀█ █  █ ▄▄▄▀█▀      
    ▄▄▄▀█▀ █  █ █▄█ █ ▄▄▄       
    █▄█ █  █  █  █  █ █▄█        ███╗   ██╗ █████╗ ██████╗ ████████╗██╗  ██╗ █████╗ 
 ▄▄▄ █  █  █  █  █  █  █ ▄▄▄     ████╗  ██║██╔══██╗██╔══██╗╚══██╔══╝██║  ██║██╔══██╗
 █▄█ █  █  █  █▄█▀  █  █ █▄█     ██╔██╗ ██║███████║██████╔╝   ██║   ███████║███████║
  █  █   ▀█▀  █▀▀  ▄█  █  █      ██║╚██╗██║██╔══██║██╔═══╝    ██║   ██╔══██║██╔══██║
  █  ▀█▄  ▀█▄ █ ▄█▀▀ ▄█▀  █      ██║ ╚████║██║  ██║██║        ██║   ██║  ██║██║  ██║
   ▀█▄ ▀▀█  █ █ █ ▄██▀ ▄█▀       ╚═╝  ╚═══╝╚═╝  ╚═╝╚═╝        ╚═╝   ╚═╝  ╚═╝╚═╝  ╚═╝
     ▀█▄ █  █ █ █ █  ▄█▀                             Orchestrating the Web of Agents
        ▀█  █ █ █ █ ▌▀                                                 www.naptha.ai
          ▀▀█ █ ██▀▀                                                    

Naptha Python SDK

Naptha is a framework and infrastructure for developing and running multi-agent systems at scale with heterogeneous models, architectures and data.

Naptha Modules are the building blocks of multi-agent systems. They are designed to be framework-agnostic, allowing developers to implement modules using different agent frameworks. There are currently seven types of modules: Agents, Tools, Knowledge Bases, Memories, Orchestrators, Environments, and Personas. Modules can also run on separate devices, while still interacting with each other over the network.

The Naptha SDK is used within Naptha Modules to facilitate interactions with other modules, and to access model inference and storage (e.g. of knowledge, memories, etc.). The Naptha SDK also acts as a client for interacting with the Naptha Hub (like HuggingFace but for multi-agent apps), and Naptha Nodes (the infrastructure that runs modules).
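The examples below use a handful of SDK classes; for orientation, these are the main entry points (all of these imports appear verbatim in the module examples later in this README):

from naptha_sdk.inference import InferenceClient              # model inference on a Naptha Node
from naptha_sdk.storage.storage_client import StorageClient   # knowledge/memory storage on a node
from naptha_sdk.modules.agent import Agent                    # call Agent modules from other modules
from naptha_sdk.modules.kb import KnowledgeBase               # call Knowledge Base modules
from naptha_sdk.modules.tool import Tool                      # call Tool modules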

You can find more information on Naptha Modules, the Naptha SDK and Naptha Nodes in the docs.

If you find this repo useful, please don't forget to star ⭐!

🧩 Installing the SDK

Set up a Virtual Environment

It is good practice to install the SDK in a dedicated virtual environment. We recommend using Poetry to manage your dependencies.

If you don't already have a poetry virtual environment, create a new one:

poetry init --python ">=3.10,<3.13"

Then install the SDK:

poetry add naptha-sdk
source .venv/bin/activate

Alternatively, you can use Python's built-in venv module:

python -m venv .venv
source .venv/bin/activate
pip install naptha-sdk

🔥 Creating Your Naptha Identity

Your Naptha account is your identity on the Naptha platform. It allows you to:

  • Deploy and run agents, tools, environments and other modules on Naptha Nodes (via a public/private keypair)
  • Access and interact with the Naptha Hub's features and services (via a username and password)

The simplest way to create a new account is through the interactive CLI. Run the following command:

naptha signup

Or, if you have already set up an identity, edit your .env file with your credentials:

# .env file
HUB_USERNAME=your_username
HUB_PASSWORD=your_password
PRIVATE_KEY=your_private_key  # Optional - will be generated if not provided

⚙️ Configuring your .env file

Choose whether you want to interact with a local or hosted Naptha node. For a local node, set NODE_URL=http://localhost:7001 in the .env file. To use a hosted node, set e.g. NODE_URL=https://node.naptha.ai or NODE_URL=https://node2.naptha.ai in the .env file.
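For reference, a script or module can read these values the same way; a minimal sketch, assuming the python-dotenv package is installed (the naptha CLI reads your .env file for you):

import os
from dotenv import load_dotenv  # assumes python-dotenv is installed

load_dotenv()  # read variables from the .env file in the current directory
node_url = os.getenv("NODE_URL", "http://localhost:7001")
private_key = os.getenv("PRIVATE_KEY")  # may be None if you let Naptha generate one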

🌐 Interacting with the Naptha Hub

You can use the CLI to see a list of available nodes:

naptha nodes

To see a list of existing agents on the hub you can run:

naptha agents

or naptha tools, naptha kbs, naptha memories, naptha orchestrators, naptha environments, and naptha personas for other types of modules. For each module, you will see a module URL where you can check out the code.

For instructions on registering a new module on the hub, or updating and deleting modules see the docs.

🚀 Running Modules

Once you've found a module you want to run and configured where it should run (on a hosted node or locally), you can run it using the CLI.

🤖 Run an Agent

The Hello World Agent is the simplest example of an agent; it just prints a hello message:

# usage: naptha run <agent_name> <agent args>
naptha run agent:hello_world_agent -p "firstname=sam surname=altman"

Try running the Simple Chat Agent that uses the local LLM running on your node:

naptha run agent:simple_chat_agent -p "tool_name='chat' tool_input_data='what is an ai agent?'"

You can check out the module code to see how to access model inference via the Inference API of the Naptha Node. The llm_configs.json file in the configs folder of the module contains the model configurations:

[
    {
        "config_name": "open",
        "client": "ollama",
        "model": "hermes3:8b",
        "temperature": 0.7,
        "max_tokens": 1000,
        "api_base": "http://localhost:11434"
    },
    {
        "config_name": "closed",
        "client": "openai",
        "model": "gpt-4o-mini",
        "temperature": 0.7,
        "max_tokens": 1000,
        "api_base": "https://api.openai.com/v1"
    }
]
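Since llm_configs.json is a plain JSON list, a module can look up a configuration by its config_name; a hypothetical helper (load_llm_config is not part of the SDK, just an illustration):

import json

def load_llm_config(path: str, config_name: str) -> dict:
    """Hypothetical helper: return the config with the given config_name."""
    with open(path) as f:
        configs = json.load(f)
    return next(c for c in configs if c["config_name"] == config_name)

config = load_llm_config("configs/llm_configs.json", "open")
print(config["model"])  # hermes3:8b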

The main code for the agent is contained in the run.py file, which imports the InferenceClient class and calls the run_inference method:

from naptha_sdk.inference import InferenceClient
from naptha_sdk.schemas import AgentDeployment

class SimpleChatAgent:
    def __init__(self, deployment: AgentDeployment):
        self.deployment = deployment
        ...
        # the arg is loaded from configs/deployment.json
        self.node = InferenceClient(self.deployment.node)
        ...

    async def chat(self, inputs: InputSchema):  # InputSchema is defined in the module's own schemas.py
        ...
        response = await self.node.run_inference({
            "model": self.deployment.config.llm_config.model,
            "messages": messages,
            "temperature": self.deployment.config.llm_config.temperature,
            "max_tokens": self.deployment.config.llm_config.max_tokens,
        })

🎭 Run an Agent with a Persona

Below are examples of running the Simple Chat Agent with a twitter/X persona, generated from exported X data:

naptha run agent:simple_chat_agent -p "tool_name='chat' tool_input_data='who are you?'" --config '{"persona_module": {"name": "interstellarninja_twitter"}}'

and from a synthetically generated market persona based on census data:

naptha run agent:simple_chat_agent -p "tool_name='chat' tool_input_data='who are you?'" --config '{"persona_module": {"name": "marketagents_aileenmay"}}'

🛠️ Run a Tool

The Generate Image Tool is a simple example of a Tool module. It is intended to demonstrate how agents can interact with a Tool module that allows them to generate images. You can run the tool module using:

# usage: naptha run <tool_name> -p "<tool args>"
naptha run tool:generate_image_tool -p "tool_name='generate_image_tool' prompt='A beautiful image of a cat'"

🔧 Run an Agent that uses a Tool

The Generate Image Agent is an example of an Agent module that interacts with the Generate Image Tool. You can run the agent module using:

naptha run agent:generate_image_agent -p "tool_name='generate_image_tool' prompt='A beautiful image of a cat'" --tool_nodes "node.naptha.ai"

The name of the tool subdeployment that the agent uses is specified in configs/deployment.json, and the full details of that tool subdeployment are loaded from the deployment with the same name in the configs/tool_deployments.json file.

# AgentDeployment in configs/deployment.json file
[
    {
        "node": {"name": "node.naptha.ai"},
        "module": {"name": "generate_image_agent"},
        "config": ...,
        "tool_deployments": [{"name": "tool_deployment_1"}],
        ...
    }
]

# ToolDeployment in tool_deployments.json file
[
    {
        "name": "tool_deployment_1",
        "module": {"name": "generate_image_tool"},
        "node": {"ip": "node.naptha.ai"},
        "config": {
            "config_name": "tool_config_1",
            "llm_config": {"config_name": "model_1"}
        }
    }
]
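Conceptually, the name field is the join key between the two files. A hypothetical illustration of the resolution step, run against the real files (without the elisions shown above); the actual loading is done by the SDK/node:

import json

with open("configs/deployment.json") as f:
    agent_deployment = json.load(f)[0]
with open("configs/tool_deployments.json") as f:
    tool_deployments = json.load(f)

ref = agent_deployment["tool_deployments"][0]["name"]   # "tool_deployment_1"
tool_deployment = next(d for d in tool_deployments if d["name"] == ref)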

There is a GenerateImageAgent class in the run.py file, which imports the Tool class and calls the Tool.run method:

import os

from naptha_sdk.modules.tool import Tool
from naptha_sdk.schemas import AgentDeployment, AgentRunInput, ToolRunInput
from naptha_sdk.user import sign_consumer_id
# SystemPromptSchema is defined in the module's own schemas.py

class GenerateImageAgent:
    async def create(self, deployment: AgentDeployment, *args, **kwargs):
        self.deployment = deployment
        self.tool = Tool()
        # the arg below is loaded from configs/tool_deployments.json
        tool_deployment = await self.tool.create(deployment=deployment.tool_deployments[0])
        self.system_prompt = SystemPromptSchema(role=self.deployment.config.system_prompt["role"])

    async def run(self, module_run: AgentRunInput, *args, **kwargs):
        tool_run_input = ToolRunInput(
            consumer_id=module_run.consumer_id,
            inputs=module_run.inputs,
            deployment=self.deployment.tool_deployments[0],
            signature=sign_consumer_id(module_run.consumer_id, os.getenv("PRIVATE_KEY"))
        )
        tool_response = await self.tool.run(tool_run_input)
        return tool_response.results

📚 Run a Knowledge Base

The Wikipedia Knowledge Base Module is a simple example of a Knowledge Base module. It is intended to demonstrate how agents can interact with a Knowledge Base that looks like Wikipedia.

The configuration of a knowledge base module is specified in the deployment.json file in the configs folder of the module.

# KnowledgeBaseConfig in deployment.json file 
[
    {
        ...
        "config": {
            "llm_config": {"config_name": "model_1"},
            "storage_config": {
                "storage_type": "db",
                "path": "wikipedia_kb",
                "options": {
                    "query_col": "title",
                    "answer_col": "text"
                },
                "storage_schema": {
                    "id": {"type": "INTEGER", "primary_key": true},
                    "url": {"type": "TEXT"},
                    "title": {"type": "TEXT"},
                    "text": {"type": "TEXT"}
                }
            }
        }
    }
]
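For intuition, the storage_schema maps directly onto a table definition; a hypothetical rendering of the schema above as SQL:

# Hypothetical: render the storage_schema above as a CREATE TABLE statement.
schema = {
    "id": {"type": "INTEGER", "primary_key": True},
    "url": {"type": "TEXT"},
    "title": {"type": "TEXT"},
    "text": {"type": "TEXT"},
}
columns = ", ".join(
    f"{name} {spec['type']}" + (" PRIMARY KEY" if spec.get("primary_key") else "")
    for name, spec in schema.items()
)
print(f"CREATE TABLE wikipedia_kb ({columns});")
# CREATE TABLE wikipedia_kb (id INTEGER PRIMARY KEY, url TEXT, title TEXT, text TEXT);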

There is a WikipediaKB class in the run.py file that has a number of methods. You can think of these methods as endpoints of the Knowledge Base, which will be called using the run command below. For example, you can initialize the content in the Knowledge Base using:

naptha run kb:wikipedia_kb -p "func_name='init'"

You can list content in the Knowledge Base using:

naptha run kb:wikipedia_kb -p '{
    "func_name": "list_rows",
    "func_input_data": {
        "limit": "10"
    }
}'

You can add to the Knowledge Base using:

naptha run kb:wikipedia_kb -p '{
    "func_name": "add_data",
    "func_input_data": {
        "url": "https://en.wikipedia.org/wiki/Socrates",
        "title": "Socrates",
        "text": "Socrates was a Greek philosopher from Athens who is credited as the founder of Western philosophy and as among the first moral philosophers of the ethical tradition of thought."
    }
}'

You can query the Knowledge Base using:

naptha run kb:wikipedia_kb -p '{
    "func_name": "run_query",
    "func_input_data": {
        "query": "Elon Musk"
    }
}'

You can delete a row from the Knowledge Base using:

naptha run kb:wikipedia_kb -p '{
    "func_name": "delete_row",
    "func_input_data": {
        "condition": {
            "title": "Elon Musk"
        }
    }
}'

You can delete the entire Knowledge Base using:

naptha run kb:wikipedia_kb -p '{
    "func_name": "delete_table",
    "func_input_data": {
        "table_name": "wikipedia_kb"
    }
}'

The Wikipedia KB also instantiates the StorageClient class and calls the execute method with CreateStorageRequest, ReadStorageRequest, DeleteStorageRequest, ListStorageRequest and UpdateStorageRequest objects:

from typing import Any, Dict

from naptha_sdk.schemas import KBDeployment
from naptha_sdk.storage.schemas import ReadStorageRequest
from naptha_sdk.storage.storage_client import StorageClient

class WikipediaKB:
    def __init__(self, deployment: KBDeployment):
        self.deployment = deployment
        self.config = self.deployment.config
        ...
        # the arg is loaded from configs/deployment.json
        self.storage_client = StorageClient(self.deployment.node)
        self.storage_type = self.config.storage_config.storage_type
        self.table_name = self.config.storage_config.path
        self.schema = self.config.storage_config.storage_schema

    async def run_query(self, input_data: Dict[str, Any], *args, **kwargs):
        read_storage_request = ReadStorageRequest(
            storage_type=self.storage_type,
            path=self.table_name,
            options={"condition": {"title": input_data["query"]}}
        )

        read_result = await self.storage_client.execute(read_storage_request)
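The init endpoint presumably creates the table with a CreateStorageRequest in the same style; a sketch under that assumption (the field names beyond storage_type and path are guesses, not confirmed here):

from naptha_sdk.storage.schemas import CreateStorageRequest

class WikipediaKB:  # continued from above
    async def init(self, *args, **kwargs):
        # Assumed request shape: create the table using the configured schema.
        create_storage_request = CreateStorageRequest(
            storage_type=self.storage_type,
            path=self.table_name,
            data={"schema": self.schema}  # field name is an assumption
        )
        await self.storage_client.execute(create_storage_request)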

🧠 Run an Agent that uses a Knowledge Base

You can run an Agent that interacts with the Knowledge Base using:

naptha run agent:wikipedia_agent -p "func_name='run_query' query='Elon Musk' question='Who is Elon Musk?'" --kb_nodes "node.naptha.ai"

The name of the KB subdeployment that the agent uses is specified in configs/deployment.json, and the full details of that KB subdeployment are loaded from the deployment with the same name in the configs/kb_deployments.json file.

# AgentDeployment in configs/deployment.json file 
[
    {
        "node": {"name": "node.naptha.ai"},
        "module": {"name": "wikipedia_agent"},
        "config": ...,
        "kb_deployments": [{"name": "kb_deployment_1"}],
        ...
    }
]

# KBDeployment in configs/kb_deployments.json file
[
    {
        "name": "kb_deployment_1",
        "module": {"name": "wikipedia_kb"},
        "node": {"ip": "node.naptha.ai"},
        "config": {
            "llm_config": {"config_name": "model_1"},
            "storage_config": ...
        }
    }
]

There is a WikipediaAgent class in the run.py file, which imports the KnowledgeBase class and calls the KnowledgeBase.run method:

import os

from naptha_sdk.inference import InferenceClient
from naptha_sdk.modules.kb import KnowledgeBase
from naptha_sdk.schemas import AgentDeployment, AgentRunInput, KBRunInput
from naptha_sdk.user import sign_consumer_id
# SystemPromptSchema is defined in the module's own schemas.py

class WikipediaAgent:
    async def create(self, deployment: AgentDeployment, *args, **kwargs):
        self.deployment = deployment
        self.wikipedia_kb = KnowledgeBase()
        # the arg below is loaded from configs/kb_deployments.json
        kb_deployment = await self.wikipedia_kb.create(deployment=self.deployment.kb_deployments[0])
        self.system_prompt = SystemPromptSchema(role=self.deployment.config.system_prompt["role"])
        self.inference_client = InferenceClient(self.deployment.node)

    async def run(self, module_run: AgentRunInput, *args, **kwargs):
        ...
        kb_run_input = KBRunInput(
            consumer_id=module_run.consumer_id,
            inputs={"func_name": "run_query", "func_input_data": {"query": module_run.inputs.query}},
            deployment=self.deployment.kb_deployments[0],
            signature=sign_consumer_id(module_run.consumer_id, os.getenv("PRIVATE_KEY"))
        )
        page = await self.wikipedia_kb.run(kb_run_input)
        ...
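The elided tail of run typically feeds the retrieved page and the question back to the LLM, mirroring the SimpleChatAgent example above; a sketch under that assumption (not the module's actual code):

        # Sketch: combine the retrieved page with the user's question and query the LLM.
        messages = [
            {"role": "system", "content": self.system_prompt.role},
            {"role": "user", "content": f"Context: {page}\n\nQuestion: {module_run.inputs.question}"},
        ]
        response = await self.inference_client.run_inference({
            "model": self.deployment.config.llm_config.model,
            "messages": messages,
            "temperature": self.deployment.config.llm_config.temperature,
            "max_tokens": self.deployment.config.llm_config.max_tokens,
        })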

💭 Run a Memory Module

The Cognitive Memory module is a simple example of a Memory module. It is intended to demonstrate how agents can interact with a Memory module that allows them to store and retrieve cognitive steps such as reflections.

The configuration of a memory module is specified in the deployment.json file in the configs folder of the module.

# MemoryConfig in configs/deployment.json file 
[
    {
        ...
        "config": {
            "storage_config": {
                "storage_type": "db",
                "path": "cognitive_memory",
                "storage_schema": {
                    "memory_id": {"type": "INTEGER", "primary_key": true},
                    "cognitive_step": {"type": "TEXT"},
                    "content": {"type": "TEXT"},
                    "created_at": {"type": "TEXT"},
                    "metadata": {"type": "jsonb"}
                },
                "options": {
                    "query_col": "title",
                    "answer_col": "text"
                }
            }
        }
    }
]

There is a CognitiveMemory class in the run.py file that has a number of methods. You can think of these methods as endpoints of the Memory, which will be called using the run command below. For example, you can initialize the table in Memory using:

naptha run memory:cognitive_memory -p "func_name='init'"

You can add to the memory table using:

naptha run memory:cognitive_memory -p '{
    "func_name": "store_cognitive_item",
    "func_input_data": {
        "cognitive_step": "reflection",
        "content": "I am reflecting."
    }
}'

You can query the memory table using:

naptha run memory:cognitive_memory -p '{
    "func_name": "get_cognitive_items",
    "func_input_data": {
        "cognitive_step": "reflection"
    }
}'

You can delete a row in the memory table using:

naptha run memory:cognitive_memory -p '{
    "func_name": "delete_cognitive_items",
    "func_input_data": {
        "condition": {"cognitive_step": "reflection"}
    }
}'
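Agents can also call these endpoints programmatically, following the same pattern as the Tool and Knowledge Base examples above. A sketch, assuming the SDK exposes a Memory wrapper and MemoryRunInput schema analogous to KnowledgeBase and KBRunInput (both names are assumptions here, not confirmed by this README):

import os

from naptha_sdk.user import sign_consumer_id
# Assumed by analogy with naptha_sdk.modules.kb.KnowledgeBase:
from naptha_sdk.modules.memory import Memory
from naptha_sdk.schemas import MemoryRunInput

async def store_reflection(agent_deployment, consumer_id):
    memory = Memory()
    await memory.create(deployment=agent_deployment.memory_deployments[0])
    run_input = MemoryRunInput(
        consumer_id=consumer_id,
        inputs={"func_name": "store_cognitive_item",
                "func_input_data": {"cognitive_step": "reflection", "content": "I am reflecting."}},
        deployment=agent_deployment.memory_deployments[0],
        signature=sign_consumer_id(consumer_id, os.getenv("PRIVATE_KEY"))
    )
    return await memory.run(run_input)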

🎮 Run an Orchestrator

The Multiagent Chat Orchestrator is an example of an Orchestrator module that interacts with simple chat Agent modules and a groupchat Knowledge Base module. The orchestrator, agents and knowledge base can all run on different nodes.

The names of the Agent and KB subdeployments that the orchestrator uses are specified in the configs/deployment.json, and the full details of those subdeployments are loaded from the deployments with the same name in the configs/agent_deployments.json and configs/kb_deployments.json files.

# OrchestratorDeployment in configs/deployment.json file 
[
    {
        "node": {"name": "node.naptha.ai"},
        "module": {"name": "multiagent_chat"},
        "config": ...,
        "agent_deployments": [
            {"name": "agent_deployment_1"},
            {"name": "agent_deployment_2"}
        ],
        "kb_deployments": [{"name": "groupchat_kb_deployment_1"}]
        ...
    }
]

# AgentDeployments in configs/agent_deployments.json file
[
    {
        "name": "agent_deployment_1",
        "module": {"name": "simple_chat_agent"},
        "node": {"ip": "node.naptha.ai"},
        "config": {
            "config_name": "agent_config_1",
            "llm_config": {"config_name": "model_1"},
            "system_prompt": ...
        }
    },
    {
        "name": "agent_deployment_2",
        "module": {"name": "simple_chat_agent"},
        "node": {"ip": "node.naptha.ai"},
        "config": {
            "config_name": "agent_config_2",
            "llm_config": {"config_name": "model_2"},
            "system_prompt": ...
        }
    }
]

# KBDeployment in configs/kb_deployments.json file
[
    {
        "name": "groupchat_kb_deployment_1",
        "module": {"name": "groupchat_kb"},
        "node": {"ip": "node.naptha.ai"},
        "config": {
            "storage_config": ...
        }
    }
]

There is a MultiAgentChat class in the run.py file, which imports the Agent and KnowledgeBase classes and calls the Agent.run and KnowledgeBase.run methods:

import os

from naptha_sdk.modules.agent import Agent
from naptha_sdk.modules.kb import KnowledgeBase
from naptha_sdk.schemas import OrchestratorRunInput, OrchestratorDeployment, KBRunInput, AgentRunInput
from naptha_sdk.user import sign_consumer_id

class MultiAgentChat:
    async def create(self, deployment: OrchestratorDeployment, *args, **kwargs):
        self.deployment = deployment
        self.agent_deployments = self.deployment.agent_deployments
        self.agents = [Agent(), Agent()]
        # the args below are loaded from configs/agent_deployments.json
        agent_deployments = [await agent.create(deployment=self.agent_deployments[i], *args, **kwargs) for i, agent in enumerate(self.agents)]
        self.groupchat_kb = KnowledgeBase()
        # the arg below is loaded from configs/kb_deployments.json
        kb_deployment = await self.groupchat_kb.create(deployment=self.deployment.kb_deployments[0], *args, **kwargs)

    async def run(self, module_run: OrchestratorRunInput, *args, **kwargs):
        ...
        for round_num in range(self.deployment.config.max_rounds):
            for agent_num, agent in enumerate(self.agents):
                agent_run_input = AgentRunInput(
                    consumer_id=module_run.consumer_id,
                    inputs={"tool_name": "chat", "tool_input_data": messages},
                    deployment=self.agent_deployments[agent_num],
                    signature=sign_consumer_id(module_run.consumer_id, os.getenv("PRIVATE_KEY"))
                )
                response = await agent.run(agent_run_input)

You can run the orchestrator module using the command below (note that the --agent_nodes and --kb_nodes flags override the values in the deployment JSON files):

# usage: naptha run <orchestrator_name> -p "<orchestrator args>" --agent_nodes "<agent nodes>" --kb_nodes "<kb nodes>"
naptha run orchestrator:multiagent_chat -p "prompt='i would like to count up to ten, one number at a time. ill start. one.'" --agent_nodes "localhost,localhost" --kb_nodes "localhost" --config '{"max_rounds": 5}'

🔑 Deploy Secrets

The deploy-secrets command allows you to securely store and manage secrets such as API keys and tokens for use within your Naptha modules. These secrets are encrypted before being stored on the Naptha Hub.

Using the CLI

You can store secrets in the Naptha Hub by running the following command:

naptha deploy-secrets

This will prompt you to enter the name of the secret you want to add, and then the value of the secret.

Using Environment Variables

You can add secrets from your environment variables:

naptha deploy-secrets -e

This will read all variables from your .env file and securely store them on the node.

Override Existing Secrets

To update existing secrets with new values from your environment file:

naptha deploy-secrets -e -o

All modules can now access these secrets by referencing them from the os.environ dictionary.
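For example, inside a module (the secret name below is purely illustrative):

import os

# Read a previously deployed secret at runtime; "MY_API_KEY" is a
# hypothetical name used here for illustration.
api_key = os.environ.get("MY_API_KEY")
if api_key is None:
    raise RuntimeError("Secret MY_API_KEY not found on this node")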

Security Notes

  • Secrets are encrypted using RSA encryption before being stored
  • Each secret is encrypted with the node's public key before being sent to the node
  • Only the modules running on the node can access the secrets
  • Secrets are never logged or exposed in plaintext

Note: During our initial audit of the deploy-secrets command, we found that while your secrets are not currently visible to other users, this method is not fully secure. We are actively working on implementing a more secure method for storing and accessing secrets in the future.

✨ Creating your own Module

Follow the guide in our docs for creating your first agent. This involves cloning the base module template. You can check out other examples of agents and other modules at https://github.com/NapthaAI.

💻 Running Agents locally on your own Naptha Node

You can run your own Naptha node, and earn rewards for running agents and other modules. Follow the instructions at https://github.com/NapthaAI/naptha-node.

👥 Community


💰 Bounties and Microgrants

Have an idea for a cool use case to build with our SDK? Get in touch at team@naptha.ai.

Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

naptha_sdk-1.0.5.tar.gz (53.3 kB)

Uploaded Source

Built Distribution

If you're not sure about the file name format, learn more about wheel file names.

naptha_sdk-1.0.5-py3-none-any.whl (55.9 kB)

Uploaded Python 3

File details

Details for the file naptha_sdk-1.0.5.tar.gz.

File metadata

  • Download URL: naptha_sdk-1.0.5.tar.gz
  • Size: 53.3 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.12.9

File hashes

Hashes for naptha_sdk-1.0.5.tar.gz:

  • SHA256: 72cde988ce016dad664ffcb988f3ac24a439fe488c60e6af483e1ee3ad457cfb
  • MD5: c416be03a3bdd234e69ad1a0d49b32e3
  • BLAKE2b-256: d953623eb54521ce3e688ef1af0ab9ae4da5a2c080c5f301026390370fe88c27

See more details on using hashes here.

Provenance

The following attestation bundles were made for naptha_sdk-1.0.5.tar.gz:

Publisher: python-publish.yml on NapthaAI/naptha-sdk

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.

File details

Details for the file naptha_sdk-1.0.5-py3-none-any.whl.

File metadata

  • Download URL: naptha_sdk-1.0.5-py3-none-any.whl
  • Size: 55.9 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.12.9

File hashes

Hashes for naptha_sdk-1.0.5-py3-none-any.whl:

  • SHA256: 73f551b8cba417783bf23e3e0d4aa554bfcb6317254d7fb45c25cd5cde1d9605
  • MD5: fec559e3fead09f5aad16b0eb8e56d46
  • BLAKE2b-256: 37169c9e821c16d9b535a7f99dc9b3a4c52c35942844381b340a023c280e18ec

See more details on using hashes here.

Provenance

The following attestation bundles were made for naptha_sdk-1.0.5-py3-none-any.whl:

Publisher: python-publish.yml on NapthaAI/naptha-sdk

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.
