Turnstone

AI chat client with tool use, agent tools, and persistent memory.

Multi-node AI orchestration platform. Deploy tool-using AI agents across a cluster of servers, driven by message queues or interactive interfaces.

Named after the Ruddy Turnstone — a bird that flips rocks to expose what's hiding underneath.

What it does

Turnstone gives LLMs tools — shell, files, search, web, planning — and orchestrates multi-turn conversations where the model investigates, acts, and reports. It runs as:

  • Interactive sessions — terminal CLI or browser UI with parallel workstreams
  • Queue-driven agents — trigger workstreams via message queue, stream progress, approve or auto-approve tool use
  • Multi-node clusters — generic work load-balances across nodes, directed work routes to a specific server
  • Cluster dashboard — real-time view of all nodes, workstreams, and resource utilization
  • Cluster simulator — test the stack at scale (up to 1000 nodes) without an LLM backend
External System → Message Queue → Bridge (per node) → Turnstone Server → LLM + Tools
                                      ↓
                                 Pub/Sub → Progress Events → External System
                                      ↓
                                 turnstone-console → Cluster Dashboard (browser)

Quickstart

Interactive (terminal)

pip install turnstone
turnstone --base-url http://localhost:8000/v1

Interactive (browser)

turnstone-server --port 8080 --base-url http://localhost:8000/v1

Queue-driven (programmatic)

pip install turnstone[mq]
turnstone-bridge --server-url http://localhost:8080 --redis-host localhost

from turnstone.mq import TurnstoneClient

with TurnstoneClient() as client:
    # Generic — any available node picks it up
    result = client.send_and_wait("Analyze the error logs", auto_approve=True)
    print(result.content)

    # Directed — must run on a specific server
    result = client.send_and_wait(
        "Check disk I/O on this server",
        target_node="server-12",
        auto_approve=True,
    )

Cluster dashboard

pip install turnstone[console]
turnstone-console --redis-host localhost --port 8090

Then open http://localhost:8090 for the cluster-wide dashboard.

Docker

cp .env.example .env  # edit LLM_BASE_URL, OPENAI_API_KEY, etc.
docker compose up     # starts redis + server + bridge + console

Console dashboard at http://localhost:8090. See docs/docker.md for configuration, scaling, and profiles.

Simulator

Test the multi-node stack at scale without an LLM backend:

docker compose --profile sim up redis console sim

Or standalone:

pip install turnstone[sim]
turnstone-sim --nodes 100 --scenario steady --duration 60 --mps 10

See docs/simulator.md for scenarios, CLI reference, and metrics.

All frontends connect to any OpenAI-compatible API (vLLM, NVIDIA NIM/NGC, llama.cpp, OpenAI, etc.) and auto-detect the model.

Architecture

turnstone/
├── core/              # UI-agnostic engine
│   ├── session.py     # ChatSession — multi-turn loop, tool dispatch, agents
│   ├── tools.py       # Tool definitions (auto-loaded from JSON)
│   ├── workstream.py  # WorkstreamManager — parallel independent sessions
│   ├── config.py      # Unified TOML config (~/.config/turnstone/config.toml)
│   ├── memory.py      # SQLite persistence (memories, conversations, FTS5)
│   ├── metrics.py     # Prometheus-compatible metrics collector
│   ├── edit.py        # File editing (fuzzy match, indentation)
│   ├── safety.py      # Path validation, sandbox checks
│   ├── sandbox.py     # Command sandboxing
│   └── web.py         # Web fetch/search helpers
├── mq/                # Message queue integration
│   ├── protocol.py    # Typed message dataclasses (JSON serialization)
│   ├── broker.py      # Abstract MessageBroker + RedisBroker
│   ├── bridge.py      # Bridge service (queue ↔ HTTP API, multi-node routing)
│   └── client.py      # TurnstoneClient — Python API for external systems
├── console/           # Cluster dashboard
│   ├── collector.py   # ClusterCollector — aggregates all nodes via Redis + HTTP
│   ├── server.py      # Dashboard HTTP server + SSE
│   └── static/        # Cluster dashboard web UI
├── tools/             # Tool schemas (one JSON file per tool)
├── ui/                # Frontend assets and terminal rendering
│   └── static/        # Web UI (HTML, CSS, JS)
├── sim/               # Cluster simulator
│   ├── cluster.py     # SimCluster — orchestrates N nodes + dispatchers
│   ├── node.py        # SimNode + SimWorkstream — protocol-compatible node
│   ├── engine.py      # LLM + tool execution simulation
│   ├── scenario.py    # 5 workload scenarios (steady, burst, node_failure, …)
│   ├── metrics.py     # Latency, throughput, utilization collection
│   └── cli.py         # CLI entry point (turnstone-sim)
├── cli.py             # Terminal frontend (+ /cluster commands for console)
├── server.py          # Web frontend (HTTP + SSE)
└── eval.py            # Evaluation and prompt optimization harness
docs/
├── architecture.md    # System architecture and threading model
├── api-reference.md   # Web server API and SSE event reference
├── console.md         # Cluster dashboard service (turnstone-console)
├── docker.md          # Docker Compose deployment and configuration
├── simulator.md       # Cluster simulator usage and scenarios
├── tools.md           # Tool schemas, execution pipeline, approval flow
└── eval.md            # Evaluation harness internals

Multi-node routing

Each Turnstone server runs a bridge process. Bridges share a Redis instance for coordination:

Redis Key                      Purpose
turnstone:inbound              Shared work queue — generic tasks, any node
turnstone:inbound:{node_id}    Per-node queue — directed tasks
turnstone:ws:{ws_id}           Workstream ownership — auto-routes follow-ups
turnstone:node:{node_id}       Node heartbeat + metadata for discovery
turnstone:events:{ws_id}       Per-workstream event pub/sub
turnstone:events:global        Global event pub/sub
turnstone:events:cluster       Cluster-wide state changes (for turnstone-console)

Routing rules:

  1. Message has target_node → routes to that node's queue
  2. Message has ws_id → looks up owner, routes to owning node
  3. Neither → shared queue, next available bridge picks it up

Bridges BLPOP from their per-node queue first, then from the shared queue, so directed work always takes precedence.
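The routing rules above can be sketched as a small function. This is illustrative only, not the actual bridge implementation; the ws_owners dict stands in for the turnstone:ws:{ws_id} ownership lookup in Redis.

```python
def route_key(message: dict, ws_owners: dict) -> str:
    """Return the Redis list a message should be pushed onto."""
    # Rule 1: an explicit target_node always wins.
    node = message.get("target_node")
    if node:
        return f"turnstone:inbound:{node}"
    # Rule 2: follow-ups route to the node that owns the workstream.
    ws_id = message.get("ws_id")
    if ws_id and ws_id in ws_owners:
        return f"turnstone:inbound:{ws_owners[ws_id]}"
    # Rule 3: anything else goes to the shared queue.
    return "turnstone:inbound"
```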

Tools

14 built-in tools, two of which (task and plan) are agent tools:

Tool         Description                       Auto-approved
bash         Execute shell commands            no
read_file    Read file contents                yes
write_file   Write/create files                no
edit_file    Fuzzy-match file editing          no
search       Search files by name/content      yes
math         Sandboxed Python evaluation       no
man          Read man pages                    yes
web_fetch    Fetch URL content                 no
web_search   Search via Tavily API             no
remember     Save persistent facts             yes
recall       Search memories and history       yes
forget       Remove a memory                   yes
task         Spawn autonomous sub-agent        no
plan         Explore codebase, write .plan.md  no
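Tool schemas live as one JSON file per tool under tools/ (see the layout in Architecture). The exact file format is not documented here; as a purely hypothetical illustration, a read_file schema in the common OpenAI function-calling shape might look like:

```json
{
  "name": "read_file",
  "description": "Read file contents",
  "parameters": {
    "type": "object",
    "properties": {
      "path": {
        "type": "string",
        "description": "Path of the file to read"
      }
    },
    "required": ["path"]
  }
}
```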

Configuration

All entry points read ~/.config/turnstone/config.toml. CLI flags override config values.

[api]
base_url = "http://localhost:8000/v1"
api_key = ""
tavily_key = ""

[model]
name = ""              # empty = auto-detect
temperature = 0.5
reasoning_effort = "medium"

[tools]
timeout = 30
skip_permissions = false

[server]
host = "0.0.0.0"
port = 8080

[redis]
host = "localhost"
port = 6379
password = ""

[bridge]
server_url = "http://localhost:8080"
node_id = ""           # empty = hostname_xxxx

[console]
host = "0.0.0.0"
port = 8090
url = "http://localhost:8090"  # used by CLI /cluster commands
poll_interval = 10

Precedence: CLI args > environment variables > config.toml > defaults.
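That precedence chain amounts to "first non-missing source wins." A minimal sketch, assuming made-up option and environment-variable names (not Turnstone's actual ones):

```python
import os

def resolve(cli_value, env_var, config_value, default):
    """First non-missing source wins: CLI > environment > config > default."""
    if cli_value is not None:
        return cli_value
    env = os.environ.get(env_var)
    if env is not None:
        return env
    if config_value is not None:
        return config_value
    return default

# e.g. resolving the server port from whichever sources are set
port = resolve(None, "TURNSTONE_PORT", 8080, 8000)
```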

Workstreams

Parallel independent conversations, each with its own session and state:

Symbol   State       Meaning
·        idle        Waiting for input
         thinking    Model is generating
         running     Tool execution in progress
         attention   Waiting for approval
         error       Something went wrong

Idle workstreams are automatically cleaned up after 2 hours (configurable). In multi-node deployments, workstream ownership is tracked in Redis — follow-up messages auto-route to the owning node.

Monitoring

/metrics endpoint exposes Prometheus-format metrics:

  • turnstone_tokens_total{direction} — prompt/completion token counters
  • turnstone_tool_calls_total{tool} — per-tool invocation counts
  • turnstone_workstream_context_ratio{ws_id} — per-workstream context utilization
  • turnstone_http_request_duration_seconds — request latency histogram
  • turnstone_workstreams_by_state{state} — workstream state gauges

Per-workstream metrics are labeled by ws_id; label cardinality is bounded by the 10-workstream maximum.
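Since the endpoint is standard Prometheus text format, a plain scrape job is enough to collect it. A minimal example, assuming /metrics is served by the web frontend on its default port 8080 (adjust targets to your deployment):

```yaml
scrape_configs:
  - job_name: turnstone
    static_configs:
      - targets: ["localhost:8080"]
```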

Requirements

  • Python 3.11+
  • An OpenAI-compatible API endpoint (vLLM, NVIDIA NIM, llama.cpp, etc.)
  • Redis (for message queue bridge — pip install turnstone[mq])

License

Business Source License 1.1 — free for all use except hosting as a managed service. Converts to Apache 2.0 on 2030-03-01.
