
Neuronum SDK



About the Neuronum SDK

The Neuronum SDK provides everything you need to set up your favorite AI model as a self-hosted agentic backend. It includes the Neuronum Server, an open-source agent-wrapper that transforms your model into an executable assistant that can be managed and called with the Neuronum Client API, and the Neuronum Tools CLI to develop and publish MCP-compliant Tools that can be installed locally on your Neuronum Server.

https://github.com/user-attachments/assets/0e1df98a-e0e9-465f-a602-3b58a82a61ca

Protocol Note: The Neuronum SDK is powered by an end-to-end encrypted communication protocol based on public/private key pairs derived from a randomly generated 12-word mnemonic. All data is relayed through neuronum.net, providing secure communication without the need to set up public web servers or expose your infrastructure to the public internet.
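For intuition, deriving keys from a mnemonic typically follows a BIP-39-style pattern: the word list is stretched into a fixed-size seed, and the key pair is derived from that seed. The sketch below illustrates only the seed-stretching step using the standard library; the function name, iteration count, and sample mnemonic are illustrative and do not reflect Neuronum's actual scheme.

```python
import hashlib

def mnemonic_to_seed(mnemonic: str, passphrase: str = "") -> bytes:
    """Stretch a 12-word mnemonic into a 64-byte seed (BIP-39-style PBKDF2)."""
    return hashlib.pbkdf2_hmac(
        "sha512",
        mnemonic.encode("utf-8"),
        ("mnemonic" + passphrase).encode("utf-8"),
        2048,  # iteration count used by BIP-39
        dklen=64,
    )

# Illustrative mnemonic only -- never reuse a published one for real keys
words = "legal winner thank year wave sausage worth useful legal winner thank yellow"
seed = mnemonic_to_seed(words)
print(len(seed))  # 64 bytes of key material for the public/private key pair
```

The resulting seed is what a library would feed into its key-generation routine; the same mnemonic always yields the same key pair.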

⚠️ Development Status: The Neuronum SDK is currently in early stages of development and is not production-ready. It is intended for development, testing, and experimental purposes only. Do not use in production environments or for critical applications.


Requirements

  • Python >= 3.8
  • CUDA-compatible GPU (for Neuronum Server)
  • CUDA Toolkit (for Neuronum Server)

Table of Contents

In this programmer's guide, you will learn how to:

  • Connect To Neuronum
  • Neuronum Server
  • Neuronum Client API
  • Neuronum Tools CLI


Connect To Neuronum

Installation

Create and activate a virtual environment:

python3 -m venv ~/neuronum-venv
source ~/neuronum-venv/bin/activate

Install the Neuronum SDK:

pip install neuronum==2025.12.0.dev12

Note: Always activate this virtual environment (source ~/neuronum-venv/bin/activate) before running any neuronum commands.

Create a Neuronum Cell (secure identity)

neuronum create-cell

Connect your Cell

neuronum connect-cell

Neuronum Server

Neuronum Server is an agent-wrapper that transforms your model into an agentic backend server that can interact with the Neuronum Client API and installed tools.


Start the Server

neuronum start-server

This command will:

  • Clone the neuronum-server repository (if not already present)
  • Create a Python virtual environment
  • Install all dependencies (vLLM, PyTorch, etc.)
  • Start the vLLM server in the background
  • Launch the Neuronum Server

Check Server Status

neuronum status

This will show if the Neuronum Server and vLLM Server are currently running with their PIDs.

Viewing Logs

tail -f neuronum-server/server.log
tail -f neuronum-server/vllm_server.log

Stopping the Server

neuronum stop-server

What the Server Does

Once running, the server will:

  • Connect to the Neuronum network using your Cell credentials
  • Initialize a local SQLite database for conversation memory and knowledge storage
  • Auto-discover and launch any MCP servers in the tools/ directory
  • Process messages from clients via the Neuronum network
  • Execute scheduled tasks defined in the tasks/ directory

Server Configuration

The server can be customized by editing the neuronum-server/server.config file. Here are the available options:

File Paths:

LOG_FILE = "server.log"              # Server log file location
DB_PATH = "agent_memory.db"          # SQLite database for conversations and knowledge
TASKS_DIR = "./tasks"                # Directory for scheduled tasks

Model Configuration:

MODEL_MAX_TOKENS = 512               # Maximum tokens in responses (higher = longer answers)
MODEL_TEMPERATURE = 0.3              # Creativity (0.0 = deterministic, 1.0 = creative)
MODEL_TOP_P = 0.85                   # Nucleus sampling (lower = more predictable)

vLLM Server:

VLLM_MODEL_NAME = "Qwen/Qwen2.5-3B-Instruct"  # Model to load
                                               # Examples: "Qwen/Qwen2.5-1.5B-Instruct",
                                               #           "meta-llama/Llama-3.2-3B-Instruct"
VLLM_HOST = "127.0.0.1"              # Server host (127.0.0.1 = local only)
VLLM_PORT = 8000                     # Server port
VLLM_API_BASE = "http://127.0.0.1:8000/v1"  # Full API URL
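Since vLLM exposes an OpenAI-compatible API at VLLM_API_BASE, a request built from the values above would look roughly like the following. This only constructs the request (the endpoint shape follows the OpenAI chat-completions convention); actually sending it requires the vLLM server to be running:

```python
import json
import urllib.request

VLLM_API_BASE = "http://127.0.0.1:8000/v1"

# An OpenAI-compatible chat request assembled from the config values above
payload = {
    "model": "Qwen/Qwen2.5-3B-Instruct",
    "messages": [{"role": "user", "content": "Explain what a black hole is in one sentence"}],
    "max_tokens": 512,   # MODEL_MAX_TOKENS
    "temperature": 0.3,  # MODEL_TEMPERATURE
    "top_p": 0.85,       # MODEL_TOP_P
}

request = urllib.request.Request(
    f"{VLLM_API_BASE}/chat/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
# urllib.request.urlopen(request) would return the completion once the server is up
```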

Conversation & Knowledge:

CONVERSATION_HISTORY_LIMIT = 10      # Recent messages to include in context
KNOWLEDGE_RETRIEVAL_LIMIT = 5        # Max knowledge chunks to retrieve
FTS5_STOPWORDS = {...}               # Words to exclude from knowledge search
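CONVERSATION_HISTORY_LIMIT simply caps how many prior messages are replayed into the model's context. A trimming step along these lines (illustrative, not the server's actual code) captures the idea:

```python
CONVERSATION_HISTORY_LIMIT = 10

def build_context(history: list, new_message: dict) -> list:
    """Keep only the most recent messages, then append the new one."""
    return history[-CONVERSATION_HISTORY_LIMIT:] + [new_message]

history = [{"role": "user", "content": f"message {i}"} for i in range(25)]
context = build_context(history, {"role": "user", "content": "latest"})
print(len(context))  # 11: the 10 most recent messages plus the new one
```

Raising the limit gives the model more conversational memory at the cost of a larger prompt on every turn.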

After modifying the configuration, restart the server for changes to take effect:

neuronum stop-server
neuronum start-server

Neuronum Client API

Manage and call your Agent with "kybercell" (the official Neuronum Client), or build your own custom client using the Neuronum Client API.

Python API

import asyncio
from neuronum import Cell

async def main():

    async with Cell() as cell:

        # ============================================
        # Target Cell ID
        # ============================================
        cell_id = "id::cell"

        # ============================================
        # Core Methods
        # ============================================
        # cell.activate_tx(cell_id, data)  - Send request and wait for response
        # cell.stream(cell_id, data)       - Send request via WebSocket (no response)
        # cell.sync()                      - Receive incoming requests
        # cell.tx_response(transmitter_id, data, public_key)  - Send response to a request

        # ============================================
        # Example 1: Send a prompt to your Agent
        # ============================================
        # The agent will answer questions using its knowledge base
        # and can execute tools conversationally when needed
        prompt_data = {
            "type": "prompt",
            "prompt": "Explain what a black hole is in one sentence"
        }
        tx_response = await cell.activate_tx(cell_id, prompt_data)
        print(tx_response)

        # ============================================
        # Example 2: Action Approval Flow
        # ============================================
        # When the agent suggests a tool action, it returns an action_id
        # The client can then approve or decline the action

        # Approve a pending action
        approve_data = {
            "type": "approve",
            "action_id": 123  # ID returned from prompt response
        }
        tx_response = await cell.activate_tx(cell_id, approve_data)
        print(tx_response)

        # Decline a pending action
        decline_data = {
            "type": "decline",
            "action_id": 123
        }
        tx_response = await cell.activate_tx(cell_id, decline_data)
        print(tx_response)

        # ============================================
        # Example 3: Knowledge Management
        # ============================================

        # Add knowledge to agent's database
        upload_knowledge_data = {
            "type": "upload_knowledge",
            "knowledge_topic": "Company Policy",
            "knowledge_data": "Our company operates from 9 AM to 5 PM Monday through Friday."
        }
        tx_response = await cell.activate_tx(cell_id, upload_knowledge_data)

        # Update existing knowledge
        update_knowledge_data = {
            "type": "update_knowledge",
            "knowledge_id": "abc123...",  # SHA256 hash ID from previous add
            "knowledge_data": "Updated: Company operates 8 AM to 6 PM Monday through Friday."
        }
        tx_response = await cell.activate_tx(cell_id, update_knowledge_data)

        # Fetch all knowledge
        get_knowledge_data = {"type": "get_knowledge"}
        knowledge_list = await cell.activate_tx(cell_id, get_knowledge_data)
        print(knowledge_list)
        # Returns: [{"knowledge_id": "...", "topic": "...", "content": "..."}, ...]

        # Delete knowledge
        delete_knowledge_data = {
            "type": "delete_knowledge",
            "knowledge_id": "abc123..."
        }
        tx_response = await cell.activate_tx(cell_id, delete_knowledge_data)

        # ============================================
        # Example 4: Icebreaker (Welcome Message)
        # ============================================

        # Get the icebreaker/welcome message
        get_icebreaker_data = {"type": "get_icebreaker"}
        icebreaker = await cell.activate_tx(cell_id, get_icebreaker_data)
        print(icebreaker)

        # Update the icebreaker message
        update_icebreaker_data = {
            "type": "update_icebreaker",
            "icebreaker": "Welcome! How can I help you today?"
        }
        tx_response = await cell.activate_tx(cell_id, update_icebreaker_data)

        # ============================================
        # Example 5: Tool Management
        # ============================================

        # List all available tools on Neuronum network
        available_tools = await cell.list_tools()
        print(available_tools)
        # Returns list of tools with metadata: [{"tool_id": "...", "name": "...", "description": "..."}, ...]

        # Get all installed tools on your agent
        get_tools_data = {"type": "get_tools"}
        tools_info = await cell.activate_tx(cell_id, get_tools_data)
        print(tools_info)
        # Returns: {"tools": {"tool_id": {config_data}, ...}}

        # Install a tool (requires tool to be published)
        # Use stream() instead of activate_tx() to listen for agent restart
        install_tool_data = {
            "type": "install_tool",
            "tool_id": "019ac60e-cccc-7af5-b087-f6fcf1ba1299",
            "variables": {"API_TOKEN": "your-token"}  # Optional: tool variables
        }
        await cell.stream(cell_id, install_tool_data)
        # Agent will restart and send "ping" when ready

        # Delete a tool
        delete_tool_data = {
            "type": "delete_tool",
            "tool_id": "019ac60e-cccc-7af5-b087-f6fcf1ba1299"
        }
        await cell.stream(cell_id, delete_tool_data)
        # Agent will restart after deletion

        # ============================================
        # Example 6: Actions Audit Log
        # ============================================

        # Get all actions (audit log)
        get_actions_data = {"type": "get_actions"}
        actions = await cell.activate_tx(cell_id, get_actions_data)
        print(actions)
        # Returns list of actions with status, tool info, timestamps, etc.

        # ============================================
        # Example 7: Agent Status
        # ============================================

        # Check if agent is running
        status_data = {"type": "get_agent_status"}
        status = await cell.activate_tx(cell_id, status_data)
        print(status)  # Returns: {"json": "running"}

        # ============================================
        # Example 8: Receiving Requests (Server-side)
        # ============================================

        # Listen for incoming requests using sync()
        async for transmitter in cell.sync():
            data = transmitter.get("data", {})
            message_type = data.get("type")

            # Send encrypted response back to the client
            await cell.tx_response(
                transmitter_id=transmitter.get("transmitter_id"),
                data={"json": "Response message"},
                client_public_key_str=data.get("public_key", "")
            )

if __name__ == '__main__':
    asyncio.run(main())

Neuronum Tools CLI

Neuronum Tools are MCP-compliant (Model Context Protocol) plugins that can be installed on the Neuronum Server and extend your Agent's functionality, enabling it to interact with external data sources and your system.

Tools Note: Tools are not stored encrypted on neuronum.net. Do not include credentials, API keys, secure tokens, passwords, or any sensitive data directly in your tool code. Use environment variables or the variables configuration field (when available) to handle sensitive information securely.

Initialize a Tool

neuronum init-tool

You will be prompted to enter a tool name and description (e.g., "Test Tool" and "A simple test tool"). This will create a new folder named using the format: Tool Name_ToolID (e.g., Test Tool_019ac60e-cccc-7af5-b087-f6fcf1ba1299)

This folder will contain 2 files:

  1. tool.config - Configuration and metadata for your tool
  2. tool.py - Your Tool/MCP server implementation

Example tool.config:

{
  "tool_meta": {
    "tool_id": "019ac60e-cccc-7af5-b087-f6fcf1ba1299",
    "version": "1.0.0",
    "name": "Test Tool",
    "description": "A simple test tool",
    "audience": "private",
    "logo": "https://neuronum.net/static/logo_new.png"
  },
  "legals": {
    "terms": "https://url_to_your/terms",
    "privacy_policy": "https://url_to_your/privacy_policy"
  },
  "requirements": [],
  "variables": []
}

Example tool.py:

from mcp.server.fastmcp import FastMCP

# Create server instance
mcp = FastMCP("simple-example")

@mcp.tool()
def echo(message: str) -> str:
    """Echo back a message"""
    return f"Echo: {message}"

if __name__ == "__main__":
    mcp.run()

Tool Configuration Fields

audience

  • Controls who can install and use your tool
  • Options:
    • "private" - Only you can use this tool
    • "public" - Anyone on the Neuronum network can install this tool
    • "id::cell" - Share with specific cells (comma-separated list)

Examples:

"audience": "private"
"audience": "public"
"audience": "acme::cell, community::cell, business::cell"

requirements

  • List of Python packages your tool needs
  • Automatically installed by the Neuronum Server when the tool is added
  • Use the same format as pip requirements (e.g., "requests", "pandas>=2.0.0")

Example:

"requirements": [
  "requests",
  "pandas>=2.0.0",
  "openai==1.12.0"
]

variables

  • List of variable names that users need to provide when installing your tool
  • When installing the tool, users are prompted to manually set each variable one by one
  • Values are sent encrypted to the server and automatically placed into your tool.py code
  • Important: You don't need to add lines like API_TOKEN = "value" to your tool.py - the server automatically sets these variables based on user inputs

Example in tool.config:

"variables": [
  "API_TOKEN",
  "DB_PASSWORD",
  "SERVICE_URL"
]

How to use variables in your tool.py:

Wrong - Don't hardcode sensitive values:

from mcp.server.fastmcp import FastMCP
import requests

mcp = FastMCP("api-tool")

# DON'T DO THIS - Never hardcode credentials!
API_TOKEN = "sk-1234567890abcdef"  # This will be exposed in your tool code!

@mcp.tool()
def call_api(endpoint: str) -> str:
    """Call external API"""
    response = requests.get(f"https://api.example.com/{endpoint}",
                           headers={"Authorization": f"Bearer {API_TOKEN}"})
    return response.text

Correct - Use variables (server auto-injects values):

First, declare the variable in your tool.config:

{
  ...
  "requirements": ["requests"],
  "variables": ["API_TOKEN"]
}

Then use it in your tool.py without defining it:

from mcp.server.fastmcp import FastMCP
import requests

mcp = FastMCP("api-tool")

# The server automatically sets API_TOKEN based on user input during installation
# You just use it directly - no need to define it!

@mcp.tool()
def call_api(endpoint: str) -> str:
    """Call external API"""
    response = requests.get(f"https://api.example.com/{endpoint}",
                           headers={"Authorization": f"Bearer {API_TOKEN}"})
    return response.text

Note: This feature is only available when using the official Neuronum client.
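One plausible mechanism for this injection, sketched here purely as a guess and not Neuronum's documented implementation, is to prepend assignments to the tool source before launching it:

```python
def inject_variables(tool_source: str, variables: dict) -> str:
    """Prepend variable assignments so the tool code can use them directly."""
    assignments = "\n".join(f"{name} = {value!r}" for name, value in variables.items())
    return assignments + "\n\n" + tool_source

# Hypothetical tool source that references an undeclared API_TOKEN
source = 'print("token is", API_TOKEN)'
patched = inject_variables(source, {"API_TOKEN": "user-provided-value"})
print(patched.splitlines()[0])  # API_TOKEN = 'user-provided-value'
```

Whatever the exact mechanism, the effect is the same: your tool.py references the declared names directly, and the values never appear in the published tool code.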

Update a Tool

After modifying your tool.config or tool.py files, submit the updates using:

neuronum update-tool

Delete a Tool

neuronum delete-tool
