
Neuronum SDK



Getting Started with the Neuronum SDK

In this brief getting started guide, you will:

  • Install the Neuronum SDK and create your Neuronum Cell
  • Deploy your AI model as an agentic backend using Neuronum Server
  • Call your agent with Neuronum Transmitters (TX)
  • Create, update, and delete Tools that extend your agent


About the Neuronum SDK

The Neuronum SDK is a ready-to-use, end-to-end encrypted (E2EE) data infrastructure for self-hosting your favorite AI model as an agentic backend for the Neuronum client (kybercell) and your own custom clients.

Requirements

  • Python >= 3.8
  • A Community or Business Cell (secure identity)

Connect To Neuronum

Installation

pip install neuronum

Create a Neuronum Cell
The Neuronum Cell is your secure identity for interacting with the network.

neuronum create-cell

Connect your Cell

neuronum connect-cell

Deploy with Neuronum Server

Neuronum Server is an agent wrapper that turns your model into an agentic backend server able to interact with the Neuronum client (download kybercell) and your own custom clients.

Requirements

  • Python 3.8+
  • CUDA-compatible GPU (required for vLLM)
  • CUDA Toolkit installed

Quick Start (Recommended)

The easiest way to set up and run an agent with Neuronum Server is via the CLI:

neuronum serve-agent

This interactive command will:

  • Clone the neuronum-server repository
  • Configure the agent with your Cell mnemonic
  • Let you choose the LLM model
  • Optionally configure advanced settings
  • Create a virtual environment
  • Install all dependencies
  • Start the vLLM server in the background
  • Launch the agent

Manual Setup

Alternatively, you can set up the agent manually:

  1. Clone the Agent Repository

git clone https://github.com/neuronumcybernetics/neuronum-server.git
cd neuronum-server

  2. Configure the Agent

Edit the server.config file and set your Cell mnemonic:

MNEMONIC = "your twelve word mnemonic phrase here"

You can also customize other settings like:

  • VLLM_MODEL_NAME - LLM model to use (default: Qwen/Qwen2.5-3B-Instruct)
  • MODEL_MAX_TOKENS, MODEL_TEMPERATURE, MODEL_TOP_P - LLM parameters
  • DB_PATH - SQLite database location for memory and knowledge storage
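
Assuming server.config keeps the key = value style shown for MNEMONIC, a filled-in file might look like this (only the model name is the documented default; the other values are illustrative):

```ini
MNEMONIC = "your twelve word mnemonic phrase here"

# LLM model to serve via vLLM (documented default)
VLLM_MODEL_NAME = "Qwen/Qwen2.5-3B-Instruct"

# Generation parameters (illustrative values)
MODEL_MAX_TOKENS = 512
MODEL_TEMPERATURE = 0.7
MODEL_TOP_P = 0.9

# SQLite database for memory and knowledge storage (illustrative path)
DB_PATH = "agent.db"
```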
  3. Choose Your Setup Method

Option A: Automated Setup Script (Recommended)

Run the automated setup script:

./setup.sh

This script will automatically:

  • Create a Python virtual environment
  • Install all dependencies
  • Start the vLLM server in the background
  • Launch the agent in the background
  • Allow you to safely close your terminal session

Option B: Step-by-Step Manual Setup

If you prefer to run each step manually:

  1. Create and activate a virtual environment:
python3 -m venv venv
source venv/bin/activate
  2. Install dependencies:
pip install -r requirements.txt
  3. Start the vLLM server in the background:
nohup python start_vllm_server.py > vllm_server.log 2>&1 &
echo $! > .vllm_pid
  4. Run the agent in the background:
nohup python server.py > server.log 2>&1 &
echo $! > .server_pid

What the Agent Does

Once running, the agent will:

  • Connect to the Neuronum network using your Cell credentials
  • Initialize a local SQLite database for conversation memory and knowledge storage
  • Auto-discover and launch any MCP servers placed in the tools/ directory
  • Start processing messages from the network
  • Execute scheduled tasks defined in the tasks/ directory

Stopping the Agent

To gracefully stop the agent and vLLM server:

neuronum stop-agent

This command will:

  • Find and stop the running server.py process
  • Stop the vLLM server running in the background
  • Clean up PID files
  • Allow you to confirm before stopping each process

Call your Agent

Communicate with your agent by sending Neuronum Transmitters (TX) with different message types:

import asyncio
from neuronum import Cell

async def main():

    async with Cell() as cell:

        # ============================================
        # Example 1: Send a prompt to your Agent
        # ============================================
        prompt_data = {
            "type": "prompt",
            "prompt": "Explain what a black hole is in one sentence"
        }
        tx_response = await cell.activate_tx(prompt_data)
        print(tx_response)

        # ============================================
        # Example 2: Call a Tool with natural language
        # ============================================
        tool_call_data = {
            "type": "call_tool",
            "tool_id": "your-tool-id",  # The tool you want to use
            "prompt": "Send an email to john@example.com with subject 'Meeting' and body 'See you at 3pm'"
        }
        tx_response = await cell.activate_tx(tool_call_data)
        print(tx_response)

        # ============================================
        # Example 3: Knowledge Management
        # ============================================

        # Add knowledge to agent's database
        add_knowledge_data = {
            "type": "add_knowledge",
            "knowledge_topic": "Company Policy",
            "knowledge_data": "Our company operates from 9 AM to 5 PM Monday through Friday."
        }
        tx_response = await cell.activate_tx(add_knowledge_data)

        # Update existing knowledge
        update_knowledge_data = {
            "type": "update_knowledge",
            "knowledge_id": "12345",  # ID from previous add
            "knowledge_data": "Updated: Company operates 8 AM to 6 PM Monday through Friday."
        }
        tx_response = await cell.activate_tx(update_knowledge_data)

        # Fetch all knowledge
        fetch_data = {"type": "fetch_all_knowledge"}
        knowledge_list = await cell.activate_tx(fetch_data)
        print(knowledge_list)

        # Delete knowledge
        delete_knowledge_data = {
            "type": "delete_knowledge",
            "knowledge_id": "12345"
        }
        tx_response = await cell.activate_tx(delete_knowledge_data)

        # ============================================
        # Example 4: Tool Management
        # ============================================

        # Get all installed tools and tasks
        get_tools_data = {"type": "get_tools"}
        tools_info = await cell.activate_tx(get_tools_data)
        print(tools_info)

        # Add a tool (requires tool to be published)
        # Use stream() instead of activate_tx() to listen for agent restart
        add_tool_data = {
            "type": "add_tool",
            "tool_id": "019ac60e-cccc-7af5-b087-f6fcf1ba1299"
        }
        await cell.stream(cell.host, add_tool_data)
        # Agent will restart and send "ping" when ready

        # Delete a tool
        delete_tool_data = {
            "type": "delete_tool",
            "tool_id": "019ac60e-cccc-7af5-b087-f6fcf1ba1299"
        }
        await cell.stream(cell.host, delete_tool_data)

        # ============================================
        # Example 5: Task Scheduling (Automated Workflows)
        # ============================================

        # Add a scheduled task
        add_task_data = {
            "type": "add_task",
            "name": "Daily Report",
            "description": "Send daily summary email",
            "tool_id": "email-tool-id",
            "function_name": "send_email",
            "input_type": "prompt",  # or "static"
            "input_data": "Send daily summary to manager@company.com",
            "schedule": "weekdays@1704067200,1704153600"  # Days@Unix timestamps
        }
        await cell.stream(cell.host, add_task_data)

        # Delete a task
        delete_task_data = {
            "type": "delete_task",
            "task_id": "task-uuid-here"
        }
        await cell.stream(cell.host, delete_task_data)

        # ============================================
        # Example 6: Agent Status & Logs
        # ============================================

        # Check if agent is running
        status_data = {"type": "get_agent_status"}
        status = await cell.activate_tx(status_data)
        print(status)  # Returns: {"json": "agent running"}

        # Download agent logs
        log_data = {"type": "download_log"}
        logs = await cell.activate_tx(log_data)
        print(logs["json"]["log"])  # Full log content

if __name__ == '__main__':
    asyncio.run(main())
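
The schedule field in the add_task example uses a Days@Unix-timestamps format (e.g. weekdays@1704067200,1704153600). A small stdlib helper can build such strings; only the format comes from the example above, the helper name is ours:

```python
from datetime import datetime, timezone

def make_schedule(days: str, *run_times: datetime) -> str:
    """Build a schedule string in the "Days@Unix timestamps" format
    used by the add_task example (helper name is illustrative)."""
    stamps = ",".join(str(int(t.timestamp())) for t in run_times)
    return f"{days}@{stamps}"

# Two run times on weekdays, given as timezone-aware datetimes
schedule = make_schedule(
    "weekdays",
    datetime(2024, 1, 1, tzinfo=timezone.utc),
    datetime(2024, 1, 2, tzinfo=timezone.utc),
)
print(schedule)  # weekdays@1704067200,1704153600
```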

Create a Tool

Neuronum Tools are MCP-compliant (Model Context Protocol) plugins that extend your Agent's functionality, enabling it to interact with external data sources and systems.

Initialize a Tool

neuronum init-tool

You will be prompted to enter a tool name and description (e.g., "Test Tool" and "A simple test tool"). This creates a new folder named in the format Tool Name_ToolID (e.g., Test Tool_019ac60e-cccc-7af5-b087-f6fcf1ba1299).

The folder contains two files:

  1. tool.config - Configuration and metadata for your tool
  2. tool.py - Your Tool/MCP server implementation

Example tool.config:

{
  "tool_meta": {
    "tool_id": "019ac60e-cccc-7af5-b087-f6fcf1ba1299",
    "version": "1.0.0",
    "name": "Test Tool",
    "description": "A simple test tool",
    "audience": "private",
    "logo": "https://neuronum.net/static/logo_new.png"
  },
  "legals": {
    "terms": "https://url_to_your/terms",
    "privacy_policy": "https://url_to_your/privacy_policy"
  },
  "requirements": [],
  "variables": []
}
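
Since tool.config is plain JSON, you can sanity-check it with the standard library before submitting an update. This sketch parses the example config above and verifies the keys this guide uses; the helper name is ours:

```python
import json

# The example tool.config from above, inlined as a string for the sketch;
# in practice you would read the file from the tool's folder.
config_text = '''
{
  "tool_meta": {
    "tool_id": "019ac60e-cccc-7af5-b087-f6fcf1ba1299",
    "version": "1.0.0",
    "name": "Test Tool",
    "description": "A simple test tool",
    "audience": "private",
    "logo": "https://neuronum.net/static/logo_new.png"
  },
  "legals": {
    "terms": "https://url_to_your/terms",
    "privacy_policy": "https://url_to_your/privacy_policy"
  },
  "requirements": [],
  "variables": []
}
'''

def check_tool_config(text: str) -> dict:
    """Parse a tool.config and verify the keys shown in this guide."""
    config = json.loads(text)
    for section in ("tool_meta", "legals", "requirements", "variables"):
        assert section in config, f"missing section: {section}"
    for key in ("tool_id", "version", "name", "description", "audience"):
        assert key in config["tool_meta"], f"missing tool_meta.{key}"
    return config

config = check_tool_config(config_text)
print(config["tool_meta"]["tool_id"])  # 019ac60e-cccc-7af5-b087-f6fcf1ba1299
```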

Example tool.py:

from mcp.server.fastmcp import FastMCP

# Create server instance
mcp = FastMCP("simple-example")

@mcp.tool()
def echo(message: str) -> str:
    """Echo back a message"""
    return f"Echo: {message}"

if __name__ == "__main__":
    mcp.run()

Update a Tool

After modifying your tool.config or tool.py files, submit the updates using:

neuronum update-tool

Delete a Tool

neuronum delete-tool
