C++ graph agent engine — Python bindings (LangGraph-equivalent semantics, microsecond engine overhead)

Project description

NeoGraph

A C++ Graph Agent Engine — with Python bindings
Microsecond tail latency under 10k concurrent requests on 512 MB.
LangGraph's semantics, without the Python runtime tax — and now reachable from Python too.

Concepts · Quick Start · Python Binding · C++ Examples · Python Examples · Troubleshooting · API Reference · Doxygen · vs LangGraph · Benchmarks


What is NeoGraph?

NeoGraph is a C++20 graph-based agent orchestration engine that brings LangGraph-level capabilities to C++. Define agent workflows as JSON, execute them with parallel fan-out, checkpoint state for time-travel debugging, and integrate any LLM provider — all without Python.

#include <neograph/neograph.h>
#include <neograph/llm/openai_provider.h>
#include <neograph/graph/react_graph.h>

// assumes a `tools` vector (Tool implementations) and a `json` alias
// are in scope; see the full examples below
auto provider = neograph::llm::OpenAIProvider::create({
    .api_key = "sk-...", .default_model = "gpt-4o-mini"
});
auto engine = neograph::graph::create_react_graph(provider, std::move(tools));

neograph::graph::RunConfig config;
config.input = {{"messages", json::array({{{"role","user"},{"content","Hello!"}}})}};
auto result = engine->run(config);

Why NeoGraph?

| Python + LangGraph | C++ + NeoGraph (measured) |
|---|---|
| ~500 MB runtime (Python + deps) | 1.2 MB static binary (stripped, example_plan_executor) |
| ~300 MB steady RSS | 2.9 MB peak RSS (Plan & Executor run) |
| 2–8 s import / cold start | < 250 ms cold start (full crash + resume demo: ~720 ms) |
| GIL-limited parallelism | asio::thread_pool fan-out + lock-free RequestQueue |
| Cloud / server only | Raspberry Pi Zero 2W, Jetson, drones, IoT, edge |

All figures are from example_plan_executor on x86_64 Linux built with CMAKE_BUILD_TYPE=MinSizeRel, -ffunction-sections -fdata-sections, -static-libstdc++ -static-libgcc -Wl,--gc-sections, then stripped. Only runtime dependency is libc.so.6. See the Benchmarks section below for the reproduction command.

NeoGraph is the only graph agent engine for C++. If you're building agents in robotics, embedded systems, games, high-frequency trading, or anywhere Python isn't an option — this is it.

Python Binding

NeoGraph also ships as a pip-installable Python package, so the same C++ engine can drive a LangGraph-style workflow from a Jupyter notebook, a Gradio app, or a FastAPI service:

pip install neograph-engine

Five-second demo (no API key)

The shortest thing that proves the install worked — one decorator-defined node, run it, read the output:

import neograph_engine as ng

@ng.node("greet")
def greet(state):
    return [ng.ChannelWrite("messages",
        [{"role": "assistant", "content": f"Hello, {state.get('name')}!"}])]

definition = {
    "name": "demo",
    "channels": {"name":     {"reducer": "overwrite"},
                 "messages": {"reducer": "append"}},
    "nodes":    {"greet": {"type": "greet"}},
    "edges":    [{"from": ng.START_NODE, "to": "greet"},
                 {"from": "greet",       "to": ng.END_NODE}],
}

engine = ng.GraphEngine.compile(definition, ng.NodeContext())
result = engine.run(ng.RunConfig(thread_id="t1", input={"name": "NeoGraph"}))

print(result.output["channels"]["messages"]["value"])
# [{'role': 'assistant', 'content': 'Hello, NeoGraph!'}]

ReAct agent with a real LLM

import neograph_engine as ng
from neograph_engine.llm import OpenAIProvider

class CalcTool(ng.Tool):
    def get_name(self):       return "calc"
    def get_definition(self): return ng.ChatTool(name="calc", description="multiply by 2",
        parameters={"type":"object","properties":{"x":{"type":"number"}}})
    def execute(self, args):  return str(args["x"] * 2)

ctx = ng.NodeContext(
    provider=OpenAIProvider(api_key="sk-..."),
    tools=[CalcTool()],
    instructions="Use `calc` for arithmetic.",
)

definition = {
    "name": "react",
    "channels": {"messages": {"reducer": "append"}},
    "nodes":    {"llm": {"type": "llm_call"}, "dispatch": {"type": "tool_dispatch"}},
    "edges":    [{"from": ng.START_NODE, "to": "llm"}, {"from": "dispatch", "to": "llm"}],
    "conditional_edges": [{"from": "llm", "condition": "has_tool_calls",
                           "routes": {"true": "dispatch", "false": ng.END_NODE}}],
}
engine = ng.GraphEngine.compile(definition, ctx)
result = engine.run(ng.RunConfig(thread_id="t1",
    input={"messages": [{"role": "user", "content": "What is 21 * 2?"}]},
    max_steps=10))

Reading the output

engine.run(...) returns a RunResult with these fields:

| Field | Type | Meaning |
|---|---|---|
| output | dict | Final state — {"channels": {...}, "global_version": int}. Use output["channels"][name]["value"] to read a channel. |
| interrupted | bool | True if the run paused at an interrupt_before / interrupt_after / NodeInterrupt. |
| interrupt_node | str | Name of the node that triggered the interrupt (when interrupted). |
| interrupt_value | dict | Diagnostic payload — {"reason": ...} or {"message": ...}. |
| checkpoint_id | str | ID of the latest checkpoint saved during the run. Pass to engine.resume_async(checkpoint_id=...) to continue. |
| execution_trace | list[str] | Node names in the order they executed — useful for debugging routing. |
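For example, a run that pauses at an interrupt can be inspected and continued from these fields. A sketch, reusing engine and ng from the demo above (the graph is assumed to declare an interrupt as in the HITL examples; resume_async returns an asyncio.Future, so the resume line belongs inside an async context):

result = engine.run(ng.RunConfig(thread_id="t1", input={"name": "NeoGraph"}))

if result.interrupted:
    print("paused at", result.interrupt_node, result.interrupt_value)
    # continue from the saved checkpoint (from within a running event loop):
    # result = await engine.resume_async(checkpoint_id=result.checkpoint_id)
else:
    print("trace:", result.execution_trace)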

RunConfig mirrors the LangGraph RunnableConfig idea:

| Field | Default | Meaning |
|---|---|---|
| thread_id | required | Conversation / session identifier — keeps checkpoint streams separate. |
| input | {} | Initial channel values — keys must match the graph's channels definition. |
| max_steps | 25 | Super-step ceiling; ReAct loops typically need 10+. |
| stream_mode | StreamMode.OFF | Bitmask: EVENTS \| TOKENS \| DEBUG \| VALUES \| UPDATES \| ALL. Only consulted by run_stream / run_stream_async. |
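A streaming sketch, hedged: the binding notes below say stream callbacks are hopped onto the asyncio loop, so this assumes run_stream accepts a callback alongside the config (the exact signature may differ):

def on_event(event):
    # receives the events / tokens selected by stream_mode
    print(event)

cfg = ng.RunConfig(
    thread_id="t1",
    input={"messages": [{"role": "user", "content": "hi"}]},
    stream_mode=ng.StreamMode.EVENTS | ng.StreamMode.TOKENS)
engine.run_stream(cfg, on_event)   # callback argument is an assumption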

Built-in reducers

Channels need a reducer — how new writes combine with existing values. Two built-ins ship today:

| Reducer | Behavior | Typical use |
|---|---|---|
| "overwrite" | New value replaces old. | Single-value channels: name, current_question, intermediate scratch. |
| "append" | New list concatenated to the existing list. | Conversation history, intermediate results, anything you want to accumulate across nodes. |

Custom reducers register from Python (since v0.1.9):

ng.ReducerRegistry.register_reducer("sum",
    lambda current, incoming: (current or 0) + incoming)

# Now `"reducer": "sum"` works in your channel definitions.

Same pattern for conditional routing — ng.ConditionRegistry.register_condition("name", fn) where fn(state) -> str returns one of the route keys.
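A minimal condition sketch (pending_tool_calls is a hypothetical channel name; the returned string must match one of the route keys declared on the conditional edge):

ng.ConditionRegistry.register_condition(
    "needs_tools",
    lambda state: "true" if state.get("pending_tool_calls") else "false")

# Then: {"from": "llm", "condition": "needs_tools",
#        "routes": {"true": "dispatch", "false": ng.END_NODE}}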

What's covered by the binding

  • Engine surface — GraphEngine.compile / run / run_stream / run_async / run_stream_async / resume_async / get_state / update_state / fork, RunConfig, RunResult, set_worker_count, set_checkpoint_store, set_node_cache_enabled.
  • Custom Python nodes — subclass neograph_engine.GraphNode, register via NodeFactory.register_type or the @neograph_engine.node decorator. The engine dispatches under proper GIL handling, including from fan-out worker threads.
  • Custom Python tools — subclass neograph_engine.Tool, pass into NodeContext(tools=[...]). The engine takes ownership at compile time.
  • Async — every *_async binding returns an asyncio.Future bound to the calling thread's running loop. Stream callbacks are hopped to the loop thread via loop.call_soon_threadsafe so callbacks run where asyncio expects (see the sketch after this list).
  • Checkpoints — InMemoryCheckpointStore always; PostgresCheckpointStore when the binding is built from source with -DNEOGRAPH_BUILD_POSTGRES=ON (libpq bundling for the PyPI wheel is pending).
  • OpenAI Responses over WebSocket — SchemaProvider(schema="openai_responses", use_websocket=True).
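A minimal async sketch, reusing the engine from the five-second demo: run_async returns an asyncio.Future bound to the running loop, so it can be awaited directly.

import asyncio

async def main():
    result = await engine.run_async(
        ng.RunConfig(thread_id="t2", input={"name": "async"}))
    print(result.output["channels"]["messages"]["value"])

asyncio.run(main())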

Wheels: Linux x86_64 (manylinux_2_34), Linux aarch64 (manylinux_2_34), macOS arm64 (14+), Windows x64 (MSVC), for Python 3.9 → 3.13 — 20 wheels + sdist per release via cibuildwheel.

See bindings/python/examples/ for the full example index — minimal graph, ReAct, HITL, intent routing, async, multi-agent debate, JSON graph round-trip, and a Gradio chat with a deep-research subgraph (Crawl4AI + Postgres optional).

The agent runtime that fits in L3 cache

NeoGraph's hot code path is small enough that N concurrent agents share one L3-resident working set. We measured this with Valgrind cachegrind on a Ryzen 7 5800X (Zen 3: 32 KB L1i/d 8-way, 32 MB L3 16-way), sweeping N = 1 → 10,000 concurrent requests through benchmarks/concurrent/bench_concurrent_neograph:

| N | I refs | L3 instruction misses | L3i miss rate | Native p50 |
|---|---|---|---|---|
| 1 | 5.3 M | 4,313 | 0.08% | 17 µs |
| 10 | 5.9 M | 4,304 | 0.07% | 16 µs |
| 100 | 11.8 M | 4,320 | 0.04% | 6 µs |
| 1,000 | 69.7 M | 4,327 | 0.01% | 6 µs |
| 10,000 | 648 M | 4,329 | 0.00% | 5 µs |

L3 instruction misses stay flat at ~4,320 across four orders of magnitude of N. The unique hot code working set is roughly 4,330 × 64 B = 277 KB — 0.85 % of the 32 MB L3. At N = 10,000 we processed 648 million instructions and only 4,329 of them reached DRAM (≈ 1 miss per 150,000 instructions).

Native per-request latency drops from 17 µs (cold) to 5 µs (warm) as N grows — the 3.4× improvement is pure I-cache warming. Throughput at N = 10,000 is ~1.1 M req/s on the single thread pool, with 5.2 MB peak RSS (≈ 100 B / agent marginal cost).

Why this matters: DRAM access on Zen 3 is ~250 cycles vs ~46 for an L3 hit — roughly 5.5× slower per access. If NeoGraph's working set had overflowed L3 (as Python interpreters + dict-heavy state typically do), the same N = 10,000 sweep would have paid +420 to +840 ms in memory stalls instead of the measured 9 ms total wall time — 47–94× slower depending on how much of the miss chain reaches DRAM. The whole L3 stays available for your workload (conversation history, embeddings, tool responses): the engine itself is a rounding error.

Reproduce:

g++ -std=c++20 -O2 -DNDEBUG -Iinclude -Ideps -Ideps/yyjson -Ideps/asio/include \
    -DASIO_STANDALONE benchmarks/concurrent/bench_concurrent_neograph.cpp \
    build-release/libneograph_core.a build-release/libyyjson.a -pthread -o bench_ng

valgrind --tool=cachegrind --cache-sim=yes \
    --I1=32768,8,64 --D1=32768,8,64 --LL=33554432,16,64 ./bench_ng 10000

Holds end-to-end with a real LLM in the loop

The L3 story survives full-stack production: we point NeoGraph at a locally-hosted Gemma-4 E2B (Q4_K_M, 4.65 B params, 2.9 GB GGUF) served by TransformerCPP's OpenAI-compatible HTTP endpoint — zero NeoGraph code changes, just OpenAIProvider::Config::base_url = "http://localhost:8090". See examples/31_local_transformer.cpp.

| | Pure NeoGraph | NeoGraph + local Gemma (HTTP) |
|---|---|---|
| L3 instruction misses | 4,320 | 7,262 |
| Hot code working set | 277 KB | 465 KB (1.42% of L3) |
| Per-request TTFT | n/a | 25–27 ms (curl baseline 9–10 ms → ~15 ms NeoGraph overhead) |
| Per-request total | n/a | 146–213 ms @ 19–27 tokens (~130 tok/s) |
| NeoGraph agent RSS | 5.2 MB | 7.6 MB (+2.4 MB for httplib + JSON streaming) |
| Gemma server RSS | n/a | 2.45 GB (mmap GGUF) |
| VRAM (RTX 4070 Ti) | n/a | 3.06 GB |

The inference process lives in a separate address space, so its 2.5 GB of model weights never touch NeoGraph's L3 cache lines. The agent's 465 KB working set stays L3-resident regardless of how large the model is. That's the architectural payoff of the two-process split: you can swap in a 70 B model without inflating the agent.

Burst-tested with 5 concurrent NeoGraph agents against the same server: aggregate wall 1.58 s / 5 requests (2.65× speedup from coroutine overlap). Per-agent throughput drops under queue pressure because the Gemma server doesn't implement continuous batching — that's a TransformerCPP concern, not an agent one. NeoGraph dispatched all 5 cleanly with no resource pressure and the RSS stayed flat at ~7 MB.

Quick Start

Requirements

  • C++20 compiler — coroutines are on the public API surface as of 2.0.0. Verified toolchains:
    • GCC 13.3 — core + all tests green. The OpenAI Responses built-in-tools demo (example_openai_responses_ws_tools) is skipped because GCC 13 trips a coroutine-cleanup ICE (build_special_member_call at cp/call.cc:11096); the rest of the project is unaffected and the skip is automatic.
    • GCC 14.2+ — everything including the tools demo.
    • Clang 18+ — everything including the tools demo.
    • MSVC 2022 — core builds + non-Postgres tests in CI; runtime not yet load-tested.
  • CMake 3.16+.
  • OpenSSL (HTTPS), libpq (optional, Postgres checkpoint), SQLite3 (optional, SQLite checkpoint).

Platform support

| Platform | Tier | Notes |
|---|---|---|
| Linux x86_64 (Ubuntu 24.04, GCC 13) | GA | Reference — 356/356 ctest green, 310/310 ASan+UBSan, Valgrind clean on coroutine subset |
| macOS (Apple Silicon, Clang) | beta | CI builds + non-Postgres tests; runtime differences (coroutine scheduling, SIGPIPE) not yet exercised in production |
| Linux ARM64 (Ubuntu 24.04, GCC 13) | alpha | 306/306 ctest green via docker buildx --platform linux/arm64 under QEMU emulation — see Dockerfile.arm64-smoke. Native ARM64 hardware validation pending (Raspberry Pi, Graviton, Apple Silicon Linux). Stripped binary 0.81–0.88 MB. |
| Windows (MSVC 2022, x64) | alpha | CI builds + non-Postgres tests; MCP stdio (named-pipe overlapped) + PG async socket wrap written against MSDN spec but unvalidated under load |

CI matrix (GitHub Actions): build-and-test (Ubuntu, full with PG service), build-macos, build-windows, bench-regression (3 committed floors). See CHANGELOG.md for the full stability rationale per platform.

Build

git clone https://github.com/fox1245/NeoGraph.git
cd NeoGraph
mkdir build && cd build
cmake ..
make -j$(nproc)

Run an example (no API key needed)

./example_custom_graph      # Mock ReAct agent
./example_parallel_fanout   # Parallel fan-out/fan-in (3 researchers run concurrently)
./example_send_command      # Dynamic Send + Command routing

Integration

FetchContent (recommended):

include(FetchContent)
FetchContent_Declare(neograph
  GIT_REPOSITORY https://github.com/fox1245/NeoGraph.git
  GIT_TAG main)
FetchContent_MakeAvailable(neograph)

target_link_libraries(my_app PRIVATE neograph::core neograph::llm)

add_subdirectory:

add_subdirectory(deps/neograph)
target_link_libraries(my_app PRIVATE neograph::core neograph::llm)

Features

Core Engine (neograph::core)

  • JSON-defined graphs — No recompilation to change agent workflows
  • Super-step execution — Pregel BSP model with cycle support
  • Parallel fan-out/fan-in — asio::experimental::make_parallel_group on the engine's executor; opt-in asio::thread_pool for CPU-bound branches via set_worker_count(N)
  • Send (dynamic fan-out) — Nodes spawn N parallel tasks at runtime
  • Command (routing override) — Nodes control routing + state in one return
  • Checkpointing — Full state snapshots at every super-step
  • HITL (Human-in-the-Loop) — interrupt_before / interrupt_after + resume()
  • State management — get_state(), update_state(), fork(), time-travel
  • Dynamic breakpoints — throw NodeInterrupt("reason") from any node
  • Retry policies — Per-node exponential backoff with configurable limits
  • Stream modes — EVENTS | TOKENS | VALUES | UPDATES | DEBUG bitflags
  • Subgraphs — Hierarchical composition via JSON (Supervisor pattern)
  • Intent routing — LLM-based classification + dynamic routing
  • Cross-thread Store — Namespace-based shared memory across threads
  • Custom nodes — Register via NodeFactory with zero framework changes

LLM Providers (neograph::llm)

  • OpenAIProvider — OpenAI, Groq, Together, vLLM, Ollama (any OpenAI-compatible API)
  • SchemaProvider — Claude, Gemini, and any custom provider via JSON schema
  • Built-in schemas — "openai", "claude", "gemini" embedded at build time
  • Agent — ReAct loop with streaming support

MCP Client (neograph::mcp)

  • HTTP transport — JSON-RPC 2.0 over Streamable HTTP, session-aware
  • stdio transport — MCPClient({"python", "server.py"}) spawns the MCP server as a child subprocess and exchanges newline-delimited JSON-RPC over its stdin / stdout; subprocess lifetime is tied to the last MCPTool that references it
  • Tool discovery — get_tools() auto-discovers tools from either transport; returned MCPTools plug straight into Agent / GraphEngine

Utilities (neograph::util)

  • RequestQueue — Lock-free worker pool with backpressure (moodycamel::ConcurrentQueue)

Examples

| # | Example | Description | API Key |
|---|---|---|---|
| 01 | react_agent | Basic ReAct agent with calculator tool | Required |
| 02 | custom_graph | JSON-defined graph with mock provider | No |
| 03 | mcp_agent | Real MCP server tool integration | Required |
| 04 | checkpoint_hitl | Checkpointing + Human-in-the-Loop (interrupt/resume) | No |
| 05 | parallel_fanout | Parallel fan-out/fan-in via make_parallel_group (3 workers) | No |
| 06 | subgraph | Hierarchical graph composition (Supervisor pattern) | No |
| 07 | intent_routing | Intent classification + expert routing | No |
| 08 | state_management | get_state / update_state / fork / time-travel | No |
| 09 | all_features | All 6 advanced features in one demo | No |
| 10 | send_command | Dynamic Send fan-out + Command routing override | No |
| 11 | clay_chatbot | Multi-turn chatbot UI (Clay + Raylib) | Optional |
| 12 | rag_agent | RAG agent with in-memory vector search (CLI) | Required (OpenAI) |
| 13 | openai_responses | ReAct via OpenAI /v1/responses through SchemaProvider | Required (OpenAI) |
| 14 | plan_executor | Plan & Executor: 5-way Send + crash/resume via pending_writes | No |
| 15 | reflexion | Self-critique loop until acceptance (Anthropic) | Required (Anthropic) |
| 16 | tree_of_thoughts | BFS over LLM thought branches, top-k pruning | Required (Anthropic) |
| 17 | self_ask | Follow-up decomposition across multiple hops | Required (Anthropic) |
| 18 | multi_agent_debate | Proponent / opponent / judge pattern | Required (Anthropic) |
| 19 | rewoo | Reasoning WithOut Observation — plan once, fan out, synthesize | Required (Anthropic) |
| 20 | mcp_hitl | MCP + checkpoint HITL (interrupt_before tool dispatch, resume after approval) | Required (OpenAI) |
| 21 | mcp_fanout | Parallel MCP tool calls via Send fan-out inside one super-step | No |
| 22 | mcp_stdio | MCP over stdio transport — subprocess MCP server spawned by the client | Required (OpenAI) |
| 23 | mcp_multi | One agent routing tools across two MCP servers (HTTP + stdio) | Required (OpenAI) |
| 24 | mcp_feedback | Human-feedback loop — draft answer, operator pushes back, agent revises | Required (OpenAI) |
| 25 | deep_research | open_deep_research-style multi-step web research loop (Crawl4AI + Anthropic) | Required (Anthropic) |
| 26 | postgres_react_hitl | ReAct + Postgres-backed checkpoint HITL — survives process restart | Required (Anthropic + Postgres) |
| 27 | async_concurrent_runs | Hosting many concurrent agent runs on one shared asio::io_context | No |
| 28 | corrective_rag | Corrective RAG (arXiv:2401.15884) — retrieve → evaluator routes to refine / web / both → generate, all over /v1/responses | Required (OpenAI) |
| 29 | responses_envelope | Wire-level dump of /v1/responses's output[] envelope — debug/pedagogy aid for understanding tool-calling shape before SchemaProvider flattens it | Required (OpenAI) |
| 30 | reasoning_effort | Same prompt at reasoning.effort ∈ {none, low, medium, high} on a reasoning model — compares wall, hidden-CoT tokens, and answer | Required (OpenAI, reasoning model) |

Every API-using example above auto-loads .env from the cwd or any parent directory via the bundled cppdotenv, so the recipe is just echo 'OPENAI_API_KEY=...' > .env && ./example_*. Process-environment values still take precedence if both are set.

Run with a real LLM

# Set your API key (auto-loaded by every API-using example via cppdotenv)
echo "OPENAI_API_KEY=sk-..." > .env

# ReAct agent with OpenAI
./example_react_agent

# MCP agent (start demo server first: python examples/demo_mcp_server.py)
./example_mcp_agent http://localhost:8000 "What time is it?"

# Visual chatbot
cmake .. -DNEOGRAPH_BUILD_CLAY_EXAMPLE=ON && make example_clay_chatbot
./example_clay_chatbot --live

Architecture

[Architecture diagram: core / llm / mcp / util with internal class breakdown]

GraphEngine is a thin super-step orchestrator that delegates to four purpose-built classes extracted in the 0.1 refactor:

  • GraphCompiler — pure JSON → CompiledGraph parser.
  • Scheduler — signal-dispatch routing plus barrier accumulation.
  • NodeExecutor — retry loop (async-native with timer-based backoff), parallel fan-out via asio::experimental::make_parallel_group, Send dispatch.
  • CheckpointCoordinator — save / resume / pending-writes lifecycle behind a (store, thread_id) façade.

Each class has a dedicated unit-test suite so engine behaviour is verifiable without spinning up a full run. See docs/reference-en.md §7b for the full API surface.

Dependency Isolation

| Link target | What gets pulled in |
|---|---|
| neograph::core | yyjson (compiled, bundled), asio (header-only, standalone) |
| neograph::core + llm | + OpenSSL (httplib stays PRIVATE) |
| neograph::core + mcp | + OpenSSL (httplib stays PRIVATE) |
| neograph::util | + moodycamel::ConcurrentQueue (header-only) |

httplib is never exposed to your code. core has zero network dependencies. Taskflow was removed in 3.0 — parallel fan-out now runs on asio's coroutine primitives (see Features).

Concurrency & Async

NeoGraph supports two concurrency models out of the box — pick the one that fits your hosting pattern:

  • Thread-per-agent (sync) — run() / run_stream() / resume() dispatched onto any executor you already use. Safe up to roughly a thousand concurrent agents; ~5 µs engine overhead per call on a Release -O3 -DNDEBUG build (the super-step loop routes through run_sync(execute_graph_async) so both entry points share one coroutine path). Detailed below.
  • Coroutine-based async — run_async() / run_stream_async() / resume_async() returning asio::awaitable<RunResult>. One asio::io_context hosts thousands of concurrent agents without a thread per run; all Provider / MCP / checkpoint I/O points are non-blocking co_await under the hood. Short intro below; full migration guide in docs/ASYNC_GUIDE.md.

Async (Stage 3)

#include <asio/co_spawn.hpp>
#include <asio/detached.hpp>
#include <asio/io_context.hpp>

asio::io_context io;
for (const auto& user : users) {
    asio::co_spawn(
        io,
        [&, user]() -> asio::awaitable<void> {
            RunConfig cfg;
            cfg.thread_id = user.session_id;
            cfg.input     = {{"messages", user.history}};
            auto result = co_await engine->run_async(cfg);
            handle(result);
        },
        asio::detached);
}
io.run();  // drives all agents on this thread

Stage 4 reality: engine->run_async() stays on the caller's executor end-to-end — every super-step suspension point (node dispatch, checkpoint I/O, parallel fan-out, retry backoff) is a real co_await. Three agents whose steps each block for 50 ms therefore overlap on one io_context thread, and the wall time lands at ~50 ms, not 3 × 50 ms. One thread, N concurrent agents. For CPU-bound fan-out across cores, switch the driver to a shared asio::thread_pool — that's the pattern in benchmarks/concurrent/CONCURRENT.md where N = 10,000 finishes in 52 ms. Within a single run, the make_parallel_group fan-out overlaps too: three parallel-fanout researchers collapse from 370 ms sequential to 150 ms.

Custom nodes join the async path by overriding execute_async instead of execute:

class FetchNode : public GraphNode {
  public:
    asio::awaitable<std::vector<ChannelWrite>>
    execute_async(const GraphState& state) override {
        auto ex = co_await asio::this_coro::executor;
        auto res = co_await neograph::async::async_post(ex, /*...*/);
        co_return std::vector<ChannelWrite>{/*...*/};
    }
    std::string get_name() const override { return "fetch"; }
};

Async-shaped tools derive from AsyncTool:

class FetchTool : public neograph::AsyncTool {
  public:
    asio::awaitable<std::string>
    execute_async(const json& args) override { /* co_await HTTP */ }
    // sync execute() is final, routes through run_sync automatically.
};

See examples/27_async_concurrent_runs.cpp for the multi-agent pattern and examples/05_parallel_fanout.cpp for fan-out within one run.

Sync (thread-per-agent)

NeoGraph does not ship its own async runtime — it exposes synchronous run() / run_stream() / resume() and lets you pick the executor. A single compiled GraphEngine is safe to share across threads that invoke run() concurrently with distinct thread_ids, so hosting multi-tenant agent workloads is a matter of dispatching onto whatever executor you already use.

// One engine, many concurrent sessions — no external runtime required.
auto engine = GraphEngine::compile(def, ctx, std::make_shared<InMemoryCheckpointStore>());

std::vector<std::future<RunResult>> sessions;
for (const auto& user : users) {
    sessions.push_back(std::async(std::launch::async, [&engine, user]() {
        RunConfig cfg;
        cfg.thread_id = user.session_id;
        cfg.input = {{"messages", user.history}};
        return engine->run(cfg);
    }));
}
for (auto& f : sessions) handle(f.get());

Works the same way with an asio::thread_pool, a std::async-backed task system, or your web framework's worker pool — NeoGraph stays out of the executor decision. If you need CPU-parallel fan-out inside a single sync run() call (rather than N sync run()s on N threads), call engine->set_worker_count(N) once after compile() to install an engine-owned asio::thread_pool that run_parallel_async and the multi-Send branch dispatch onto.
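The Python binding exposes the same knob (set_worker_count is part of the engine surface listed in the binding section); a one-line sketch:

engine = ng.GraphEngine.compile(definition, ng.NodeContext())
engine.set_worker_count(4)   # engine-owned pool for parallel fan-out branches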

Using the bundled RequestQueue

For multi-tenant servers that want a fixed worker pool with backpressure (rejecting new sessions when the queue is saturated instead of unbounded memory growth), link neograph::util and use the built-in lock-free queue — no external executor needed:

#include <neograph/util/request_queue.h>
using namespace neograph::util;

RequestQueue pool(16, 1000);           // 16 workers, max 1000 pending sessions
auto engine = GraphEngine::compile(def, ctx,
                                   std::make_shared<InMemoryCheckpointStore>());

std::vector<RunResult>          results(users.size());
std::vector<std::future<void>>  futs;

for (size_t i = 0; i < users.size(); ++i) {
    auto [accepted, fut] = pool.submit([&, i]() {
        RunConfig cfg;
        cfg.thread_id = users[i].session_id;
        cfg.input     = {{"messages", users[i].history}};
        results[i]    = engine->run(cfg);
    });
    if (!accepted) {
        // Backpressure: queue is full — shed load, return 503, retry later, …
        reject(users[i]);
        continue;
    }
    futs.push_back(std::move(fut));
}

for (auto& f : futs) f.get();           // propagates exceptions from run()

auto s = pool.stats();
log("pending={} active={} completed={} rejected={}",
    s.pending, s.active, s.completed, s.rejected);

submit() returns {accepted, std::future<void>}: capture the RunResult via a shared output slot (as above) or a per-task std::promise<RunResult>. The queue is backed by moodycamel::ConcurrentQueue (lock-free) and workers park on a condvar when idle — no busy-spin.

Rules for safe concurrent use:

  • Configuration mutators (set_retry_policy, set_checkpoint_store, set_store, own_tools, …) must be called before any concurrent run(). Treat the engine as frozen after the first dispatch.
  • Concurrent run() calls sharing the same thread_id do not crash but produce unspecified checkpoint interleaving. Serialize per-session access yourself if you need deterministic history.
  • Custom GraphNode subclasses must be stateless or self-synchronized. Node instances are owned by the engine and reused across every run on every thread — per-run scratch data belongs in graph channels, not in node member variables.
  • User-supplied CheckpointStore, Store, Provider, and Tool implementations must be thread-safe. The bundled InMemoryCheckpointStore and InMemoryStore already are.
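The same rules carry over to Python nodes registered through the binding: keep per-run scratch in channels, e.g.:

@ng.node("tally")
def tally(state):
    # stateless: derive everything from the incoming state and write the
    # result back through a channel, never through module-level globals
    count = len(state.get("messages", []))
    return [ng.ChannelWrite("messages",
        [{"role": "assistant", "content": f"{count} messages so far"}])]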

Persistent checkpointing with PostgreSQL

For multi-process deployments or when checkpoints must survive a restart, link neograph::postgres and swap InMemoryCheckpointStore for PostgresCheckpointStore:

#include <neograph/graph/postgres_checkpoint.h>

auto store = std::make_shared<PostgresCheckpointStore>(
    "postgresql://user:pass@host:5432/dbname");
auto engine = GraphEngine::compile(def, ctx, store);

The schema mirrors LangGraph's PostgresSaver (three tables prefixed neograph_* to coexist with LangGraph state in the same database) and deduplicates channel values by (thread_id, channel, version). A 1000-step session that touches one channel per super-step costs roughly O(steps + channels) blob rows instead of O(steps × channels).

Build flag: -DNEOGRAPH_BUILD_POSTGRES=ON (default). Requires libpqxx-dev (apt) / libpqxx-devel (rpm). Set the flag OFF to skip the dependency entirely.
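From the Python binding the same swap is one call; a sketch assuming the constructor mirrors the C++ signature (per the binding section, PostgresCheckpointStore requires a from-source build):

store = ng.PostgresCheckpointStore(          # constructor signature assumed
    "postgresql://user:pass@host:5432/dbname")
engine = ng.GraphEngine.compile(definition, ng.NodeContext())
engine.set_checkpoint_store(store)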

Running the integration tests: spin up a throwaway local PG and point the test binary at it:

docker run -d --rm --name neograph-pg-test \
    -e POSTGRES_PASSWORD=test -e POSTGRES_DB=neograph_test \
    -p 55432:5432 postgres:16-alpine

NEOGRAPH_TEST_POSTGRES_URL='postgresql://postgres:test@localhost:55432/neograph_test' \
    ctest --test-dir build -R PostgresCheckpoint --output-on-failure

Without the env var the 19 PG tests are GTEST_SKIP'd so the rest of the suite stays green on machines without a Postgres handy.

Coverage: tests/test_graph_engine.cpp contains ConcurrentRunDifferentThreadIds (16 threads × 25 runs = 400 parallel executions, validates per-session output + checkpoint isolation) and ConcurrentRunSameThreadIdNoCrash (8 threads × 50 runs on one shared thread_id, validates crash-free behavior).

JSON Graph Definition

{
  "name": "research_agent",
  "channels": {
    "messages": {"reducer": "append"},
    "findings": {"reducer": "append"},
    "__route__": {"reducer": "overwrite"}
  },
  "nodes": {
    "planner":    {"type": "llm_call"},
    "researcher": {"type": "tool_dispatch"},
    "classifier": {
      "type": "intent_classifier",
      "routes": ["deep_dive", "summarize"]
    },
    "inner_agent": {
      "type": "subgraph",
      "definition": { "...nested graph..." }
    }
  },
  "edges": [
    {"from": "__start__", "to": "planner"},
    {"from": "planner", "condition": "has_tool_calls",
     "routes": {"true": "researcher", "false": "classifier"}},
    {"from": "researcher", "to": "planner"},
    {"from": "classifier", "condition": "route_channel",
     "routes": {"deep_dive": "inner_agent", "summarize": "__end__"}}
  ],
  "interrupt_before": ["researcher"]
}
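Because the definition is plain JSON, the same graph can be loaded and compiled unchanged from the Python binding (a sketch; the llm_call nodes need a provider in NodeContext, as in the ReAct example above):

import json
import neograph_engine as ng
from neograph_engine.llm import OpenAIProvider

with open("research_agent.json") as f:
    definition = json.load(f)

ctx = ng.NodeContext(provider=OpenAIProvider(api_key="sk-..."), tools=[])
engine = ng.GraphEngine.compile(definition, ctx)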

Comparison with LangGraph

| Feature | LangGraph (Python) | NeoGraph (C++) |
|---|---|---|
| Graph engine | StateGraph | GraphEngine |
| Checkpointing | MemorySaver + Postgres/SQLite/Redis | CheckpointStore (interface) + InMemory + Postgres |
| HITL | interrupt_before/after | interrupt_before/after + NodeInterrupt |
| get_state / update_state | Yes | Yes |
| Fork | Yes | Yes |
| Time travel | get_state_history | get_state_history |
| Subgraphs | CompiledGraph as node | SubgraphNode (JSON inline) |
| Parallel fan-out | Static | make_parallel_group (+ opt-in asio::thread_pool) |
| Send (dynamic fan-out) | Send() | NodeResult::sends → parallel_group fan-out |
| Command (routing+state) | Command(goto, update) | NodeResult::command |
| Retry policy | RetryPolicy | RetryPolicy + exponential backoff |
| Stream modes | values/updates/messages | EVENTS/TOKENS/VALUES/UPDATES/DEBUG |
| Cross-thread Store | Store (Postgres) | Store (interface) + InMemory |
| Multi-LLM | LangChain required | SchemaProvider built-in (3 vendors) |
| MCP support | None (separate impl) | MCPClient built-in |
| Performance | Python (GIL) | C++20 coroutines + asio |
| Memory footprint | ~300MB+ | ~10MB |
| Edge/embedded | Not possible | Raspberry Pi, Jetson, IoT |

Project Structure

NeoGraph/
├── include/neograph/
│   ├── neograph.h              # Convenience header
│   ├── types.h                 # ChatMessage, ToolCall, ChatCompletion
│   ├── provider.h              # Provider interface (abstract)
│   ├── tool.h                  # Tool interface (abstract)
│   ├── graph/
│   │   ├── types.h             # Channel, Edge, NodeContext, GraphEvent,
│   │   │                       # NodeInterrupt, Send, Command, RetryPolicy, StreamMode
│   │   ├── state.h             # GraphState (thread-safe channels)
│   │   ├── node.h              # GraphNode, LLMCallNode, ToolDispatchNode,
│   │   │                       # IntentClassifierNode, SubgraphNode
│   │   ├── engine.h            # GraphEngine, RunConfig, RunResult
│   │   ├── checkpoint.h        # CheckpointStore, InMemoryCheckpointStore
│   │   ├── store.h             # Store, InMemoryStore (cross-thread memory)
│   │   ├── loader.h            # NodeFactory, ReducerRegistry, ConditionRegistry
│   │   └── react_graph.h       # create_react_graph() convenience
│   ├── llm/
│   │   ├── openai_provider.h   # OpenAI-compatible provider
│   │   ├── schema_provider.h   # Multi-vendor LLM (JSON schema driven)
│   │   ├── agent.h             # ReAct agent loop
│   │   └── json_path.h         # JSON dot-path utilities
│   ├── mcp/
│   │   └── client.h            # MCP client + tool wrapper
│   └── util/
│       └── request_queue.h     # Lock-free worker pool
├── src/
│   ├── core/                   # 13 source files (engine + compiler/scheduler/executor/coordinator split)
│   ├── llm/                    # 3 source files
│   └── mcp/                    # 1 source file
├── schemas/                    # Built-in LLM provider schemas
│   ├── openai.json
│   ├── claude.json
│   └── gemini.json
├── deps/                       # Vendored dependencies
│   ├── yyjson/                 # Compiled C JSON library (yyjson.c + yyjson.h)
│   ├── asio/                   # Standalone asio (header-only, C++20 coroutines)
│   ├── httplib.h               # cpp-httplib (PRIVATE to llm/mcp)
│   ├── concurrentqueue.h       # moodycamel lock-free queue
│   ├── cppdotenv/              # .env loader (example 13)
│   ├── clay.h                  # Clay UI layout
│   └── clay_renderer_raylib.c  # Clay + raylib renderer glue (example 11)
├── benchmarks/                 # NeoGraph vs LangGraph engine-overhead bench
├── examples/                   # 30+ runnable examples + Clay chatbot
└── scripts/
    └── embed_schemas.py        # Build-time schema embedding

CMake Targets

| Target | Description | Dependencies |
|---|---|---|
| neograph::core | Graph engine + types | yyjson (bundled), asio (header-only), Threads |
| neograph::llm | LLM providers + Agent | core + OpenSSL (httplib PRIVATE) |
| neograph::mcp | MCP client | core + OpenSSL (httplib PRIVATE) |
| neograph::util | RequestQueue | core + concurrentqueue |
| neograph::async | asio HTTP/SSE helpers | core + OpenSSL |

Build Options

| Option | Default | Description |
|---|---|---|
| NEOGRAPH_BUILD_LLM | ON | Build LLM provider module |
| NEOGRAPH_BUILD_MCP | ON | Build MCP client module |
| NEOGRAPH_BUILD_UTIL | ON | Build utility module |
| NEOGRAPH_BUILD_POSTGRES | ON | Build PostgresCheckpointStore (libpq) |
| NEOGRAPH_BUILD_SQLITE | ON | Build SqliteCheckpointStore (libsqlite3) |
| NEOGRAPH_BUILD_EXAMPLES | ON | Build example programs |
| NEOGRAPH_BUILD_CLAY_EXAMPLE | OFF | Build Clay+Raylib chatbot (fetches Raylib) |
| BUILD_SHARED_LIBS | OFF | Build neograph_* as .so/.dylib instead of .a (Linux/macOS — Windows DLL exports not yet wired) |

Shared library mode

Pass -DBUILD_SHARED_LIBS=ON at configure time to ship libneograph_core.so, libneograph_llm.so, libneograph_mcp.so, libneograph_async.so, and libneograph_sqlite.so instead of static archives. Build-tree binaries get an $ORIGIN-relative RPATH so they find the libraries beside themselves with no LD_LIBRARY_PATH gymnastics.

Trade-offs (Linux, stripped, measured 2026-04-25):

| Configuration | Single agent binary | N agents on same host |
|---|---|---|
| Static (default) | ~2.2 MB per agent | N × 2.2 MB |
| Shared | ~0.25 MB per agent | N × 0.25 MB + 13.1 MB shared .so set (one-time) |

Crossover at N≈7 agents. For deployments shipping multiple NeoGraph agents on the same host (or for staged-rollout scenarios where one subsystem like the LLM provider is patched independently of the rest) shared mode is strictly better. For a single-agent embedded edge deployment, static keeps everything in one self-contained binary.
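The crossover follows directly from the table: shared mode wins once N × 0.25 MB + 13.1 MB < N × 2.2 MB, i.e. at N > 13.1 / (2.2 - 0.25) ≈ 6.7 agents.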

Patch-update size example: replacing libneograph_llm.so (one subsystem, ~4 MB) updates every agent on the host without rebuilding or redeploying any of them.

Windows: BUILD_SHARED_LIBS=ON will warn at configure time and fail at link with undefined symbols — the public-symbol __declspec annotations have not yet been added to the headers. Use STATIC on Windows until that work lands.

Benchmarks

Engine overhead vs. Python graph/pipeline frameworks

Matched-topology, zero-I/O workloads: graph compiled once, invoked in a hot loop. Measures what the engine itself costs (dispatch, state writes, reducer calls) — no LLM, no sleep, no network.

[Chart: NeoGraph vs Python frameworks — per-iteration latency and peak RSS]

Per-iteration engine overhead (µs, lower is better). All rows measured 2026-04-22 on the same x86_64 Linux host. NeoGraph built with Release -O3 -DNDEBUG (10-run median); Python rows are 3-run median through CPython 3.12.3.

| Framework | seq (3-node chain) | par (fan-out 5 + join) | seq vs. NeoGraph |
|---|---|---|---|
| NeoGraph 3.0 | 5.0 µs | 11.8 µs | 1× (baseline) |
| Haystack 2.28.0 | 144.1 µs | 290.0 µs | 28.8× |
| pydantic-graph 1.85.1 | 235.9 µs | 286.1 µs¹ | 47.2× |
| LangGraph 1.1.9 | 656.7 µs | 2,348.7 µs | 131.3× |
| LlamaIndex Workflow 0.14.21 | 1,780.3 µs | 4,683.5 µs | 356.1× |
| AutoGen GraphFlow 0.7.5 | 3,209.2 µs | 7,292.7 µs | 641.8× |

¹ pydantic-graph is a single-next-node state machine and cannot fan out; par is a serial 6-node emulation.

Whole-process metrics (warm-up + both workloads, 10k seq + 5k par iters):

| | NeoGraph 3.0 | best Python (Haystack) | worst (AutoGen) |
|---|---|---|---|
| Total elapsed | ~0.16 s | 2.91 s | 68.29 s |
| Peak RSS | 4.8 MB | 80.3 MB | 52.4 MB² |
| Parallel fan-out executor | asio::experimental::make_parallel_group | single-thread asyncio (GIL) | single-thread asyncio (GIL) |

² AutoGen has a smaller RSS than LlamaIndex but its per-iter cost is 64× higher — different tradeoff axes. Full matrix in benchmarks/README.md.

Engine overhead disappears under LLM latency. A 500 ms OpenAI round trip swamps every engine; the per-iter gap only shows up in non-LLM nodes (data transforms, routing decisions, pure-compute tool calls) and in dense agent orchestration. Where it does show up, it shows up big: on a Raspberry Pi 4 / Jetson Nano / any SBC-class target, a 10–20× RAM delta is the difference between "fits" and "swap thrash."

Reproduction and methodology: benchmarks/README.md.

Burst concurrency (1 CPU / 512 MB sandbox)

What happens under thousands of simultaneous requests? Burst test: N requests submitted at t=0 to each engine, all-in / all-wait, inside a Docker cgroup limited to 1 CPU and 512 MB RAM — roughly a Raspberry Pi 4 process budget.

[Charts: P99 tail latency per request · throughput under concurrent load · peak resident memory]

At N=10,000 concurrent requests in asyncio mode (the default deployment shape for every Python framework):

| Engine | Wall | P99 latency | Peak RSS | Status |
|---|---|---|---|---|
| NeoGraph 3.0 | 52 ms | 7 µs | 5.5 MB | ✅ 10000 / 0 |
| pydantic-graph | 886 ms | 158 µs | 42.6 MB | ✅ 10000 / 0 |
| Haystack | 3.1 s | 2.9 s | 130.7 MB | ✅ 10000 / 0 |
| LangGraph | 23.4 s | 23.0 s | 416.2 MB | ✅ 10000 / 0 |
| LlamaIndex | — | — | — | OOM killed |
| AutoGen | — | — | — | OOM killed |

Two frameworks don't complete — LlamaIndex Workflow and AutoGen GraphFlow exhaust the 512 MB cgroup and get OOM-killed before 10k concurrent coroutines can drain. The remaining Python frameworks degrade rather than die, but their P99 latency grows linearly with N because the CPython GIL serializes every coroutine's CPU work. This is not a LangGraph-specific pathology — it shows up in every Python asyncio runtime.

NeoGraph 3.0 beats every Python asyncio runtime on throughput, tail latency, and RSS: 7 µs P99 at N=10k, ~76× lower RSS than LangGraph at the same load, and 3 orders of magnitude ahead of the GIL-serialized Python curves. Even pydantic-graph — the leanest Python state-machine — sits at 158 µs P99 and ~8× NeoGraph's RSS.

multiprocessing.Pool mode bypasses the GIL across worker processes but saturates at pool size and pays fork + pickle overhead; full numbers and the mp-mode story are in benchmarks/concurrent/CONCURRENT.md.

Size & cold-start footprint (Plan & Executor demo)

All numbers below were measured on x86_64 Linux (GCC 13) using example_plan_executor — a self-contained Plan & Executor demo that runs a 5-way Send fan-out, crashes sub-topic #2 on the first run, and resumes with the failure cleared. No LLM calls, no API keys, no network.

Binary size (MinSizeRel + static libstdc++ + strip)

| Build configuration | Size |
|---|---|
| MinSizeRel -Os, static libstdc++, --gc-sections, stripped | 1,203 KB (1.2 MB) |

The MinSizeRel binary's only dynamic dependency is libc.so.6 — libstdc++ and libgcc_s are linked in statically. Drop it onto any Linux host with a matching libc and it runs. 3.0 is ~80 KB larger than 2.0 because asio's coroutine machinery (steady_timer, make_parallel_group, use_future) is pulled into the engine path; Taskflow was header-only and --gc-sections stripped most of it anyway, so its removal doesn't offset the coroutine growth.

Runtime footprint

| Metric | Value |
|---|---|
| Peak RSS (full Plan & Executor run, crash + resume included) | 2.9 MB |
| Wall-clock (cold start → both phases complete) | ~720 ms |
| Dynamic dependencies | libc.so.6 only |

example_plan_executor sleeps 120 ms per Send target to simulate an LLM call; the 5-way fan-out runs serially on the default single-threaded super-step loop, so these simulated sleeps dominate the ~720 ms wall time. Call engine->set_worker_count(N) after compile() to get the 2.x-style multi-threaded fan-out (cuts this demo's wall time roughly in half on a 2-core host). Steady-state footprint (RSS) is unchanged between 2.0 and 3.0.

Reproduction

git clone https://github.com/fox1245/NeoGraph.git
cd NeoGraph

cmake -B build-minsize -S . \
    -DCMAKE_BUILD_TYPE=MinSizeRel \
    -DNEOGRAPH_BUILD_MCP=OFF \
    -DNEOGRAPH_BUILD_TESTS=OFF \
    -DCMAKE_CXX_FLAGS="-ffunction-sections -fdata-sections" \
    -DCMAKE_EXE_LINKER_FLAGS="-Wl,--gc-sections -static-libstdc++ -static-libgcc"

cmake --build build-minsize --target example_plan_executor -j$(nproc)

strip --strip-all build-minsize/example_plan_executor
ls -la    build-minsize/example_plan_executor        # binary size
ldd       build-minsize/example_plan_executor        # dynamic deps (libc only)
/usr/bin/time -v build-minsize/example_plan_executor  # peak RSS + wall time

What the numbers mean for embedded / robotics

  • 1.2 MB static binary fits a Docker scratch image at ~1.2 MB, fits on-board flash of a Pixhawk companion computer, fits comfortably in a Jetson Orin boot partition. Python + LangGraph does not.
  • 2.9 MB RSS means you can host 100+ concurrent agent sessions on an RPi Zero 2W (512 MB RAM) by sharing one compiled engine across threads — the Concurrency & Async section covers the pattern.
  • < 250 ms cold start fits inside a drone watchdog reset window; a Python LangGraph process still hasn't finished import by then.
  • libc.so.6 only makes cross-compilation trivial: pick glibc or musl and link — no transitive dependency hell.

Acknowledgments

Inspired by:

  • LangGraph — Graph agent orchestration for Python
  • agent.cpp — Local LLM agent framework for C++
  • asio — Cross-platform C++ networking and coroutine primitives (the 3.0 engine runtime)
  • Clay — High-performance UI layout library

Previously (2.x): also built on Taskflow for parallel fan-out. 3.0 replaced that path with asio::experimental::make_parallel_group to unify sync and async execution on one coroutine runtime.

License

MIT License. See LICENSE for details.

Third-party licenses: THIRD_PARTY_LICENSES.md

Download files

Download the file for your platform.

Source Distribution

neograph_engine-0.2.0.tar.gz (2.2 MB, source)

Built Distributions

20 wheels, covering CPython 3.9 through 3.13:

| Platform | Wheel tag | Size (per wheel) |
|---|---|---|
| Windows x86-64 | win_amd64 | 6.4 MB |
| Linux x86-64 (glibc 2.34+) | manylinux_2_34_x86_64 | 7.7 MB |
| Linux ARM64 (glibc 2.34+) | manylinux_2_34_aarch64 | 7.3 MB |
| macOS 14.0+ ARM64 | macosx_14_0_arm64 | 4.6 MB |

