
SemanticEmbed SDK


Structural risk for directed graphs — especially AI agent pipelines. Six numbers per node. Sub-millisecond.

SemanticEmbed computes a 6-dimensional structural encoding for every node in a directed graph. From a bare edge list — no runtime telemetry, no historical data, no tuning — it produces six independent measurements that fully describe each node's structural role.

Designed for the topologies traditional observability misses:

  • AI agent pipelines — vendor concentration, gateway bottlenecks, guardrail SPOFs in LangGraph / CrewAI / AutoGen workflows
  • Microservices — SPOFs, amplification cascades, convergence sinks across compose / k8s / Istio
  • CI/CD and data pipelines — build graph fragility, ETL bottlenecks, drift gates

Live demos:

  • Hugging Face Space — paste a LangGraph / CrewAI / AutoGen file, get the encoding + risk findings (zero install, ML-researcher framing).
  • Demo dashboard — Vercel-hosted demo with the 4 reference apps + interactive 6D explorer (auth required).

Validated against production incidents. In a blind test against a live production Dynatrace environment (108 services, 569 topology-relevant incidents over 30 days), 79.6% of incidents (453/569) occurred on nodes that 6D structural analysis had flagged as risky — from the call graph alone, before any incident occurred. See validation methodology.


Why 6D?

Observability tools tell you what broke. SemanticEmbed tells you what will break — from topology alone.

  • No agents, no instrumentation — just an edge list
  • Sub-millisecond — encodes 100+ node graphs in <1ms
  • Works on any directed graph — AI agent pipelines, microservices, data workflows, CI/CD
  • Complementary structural axes — six dimensions, each captures risk signals the others cannot
  • 14 deterministic edge parsers + 3 live connectors — go from real infra to encoded result in 2 lines

Install

pip install semanticembed              # core
pip install 'semanticembed[extract]'   # adds pyyaml for k8s/CFN/CDK parsing
pip install 'semanticembed[agent-claude]'  # adds Claude agent CLI

Free tier: up to 50 nodes per graph, no signup. See CHANGELOG for what's new.
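
The 50-node cap is easy to check up front. A stdlib sketch (not an SDK function) that counts distinct nodes in an edge list before you call `encode()`:

```python
def node_count(edges):
    # Count distinct nodes in a (source, target) edge list.
    # Handy for checking the 50-node free-tier limit; not part of the SDK.
    return len({node for edge in edges for node in edge})

edges = [("frontend", "auth"), ("frontend", "orders"), ("orders", "db")]
count = node_count(edges)   # 4 distinct nodes
```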


Quick Start — from real infra to risk in 2 lines

import semanticembed as se

# Auto-discover edges from any directory: docker-compose, k8s, terraform,
# CloudFormation, AWS CDK, Pulumi, GitHub Actions, package.json,
# pyproject.toml, OTel traces, Python imports, LangGraph, CrewAI, AutoGen.
edges, sources = se.extract.from_directory(".")
print(f"Found {len(edges)} edges from {sources}")

# 6D encode + structural risk analysis (sub-millisecond on the server side).
result = se.encode(edges)
print(result.table)
print(se.report(result))

Output:

STRUCTURAL RISK REPORT
======================

AMPLIFICATION RISKS (high fanout, high criticality):
  - api-gateway    | fanout=0.667 | criticality=0.556

CONVERGENCE SINKS (low independence, many upstream callers):
  - database       | independence=0.000

STRUCTURAL SPOF (low independence, high upstream dependency):
  - api-gateway    | independence=0.000 | every request flows through this node

Or try it without installing: open the Quickstart in Google Colab.


What It Finds That Other Tools Miss

| Your current tools | SemanticEmbed |
|---|---|
| "This service has high latency" | "This service is on 89% of all paths" (structural SPOF) |
| "This service had 5 errors" | "This service fans out to 12 downstream services" (amplification risk) |
| "This service is healthy" | "This service has zero lateral redundancy" (convergence sink) |

Runtime monitoring tells you what is slow now. Structural analysis tells you what will cause cascading failures regardless of current load.


The Six Dimensions

Every node gets six independent structural measurements:

| Dimension | What It Measures | Risk Signal |
|---|---|---|
| Depth | Position in the execution pipeline (0.0 = entry, 1.0 = deepest) | Deep nodes accumulate upstream latency |
| Independence | Lateral redundancy at the same pipeline stage | Low independence = structural chokepoint |
| Hierarchy | Module or group membership | Cross-module dependencies = blast radius |
| Throughput | Fraction of total traffic flowing through the node | High throughput + low independence = hidden bottleneck |
| Criticality | Fraction of end-to-end paths depending on this node | High criticality = SPOF |
| Fanout | Broadcaster (1.0) vs aggregator (0.0) | High fanout = amplification risk |

These six properties capture complementary structural information. Each dimension provides risk signals the others cannot.

See docs/dimensions.md for the full reference.
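
For intuition only, the fanout axis can be crudely approximated with normalized out-degrees. This toy sketch is not the SDK's server-side formula, just a way to see why a broadcaster scores near 1.0 and a sink near 0.0:

```python
from collections import defaultdict

def approx_fanout(edges):
    # Toy fanout proxy: out-degree divided by the largest out-degree.
    # Illustrative only -- NOT the SDK's proprietary scoring.
    out = defaultdict(int)
    nodes = set()
    for src, dst in edges:
        out[src] += 1
        nodes.update((src, dst))
    peak = max(out.values(), default=0)
    return {n: (out[n] / peak if peak else 0.0) for n in nodes}

edges = [("api-gateway", "auth"), ("api-gateway", "orders"),
         ("api-gateway", "catalog"), ("orders", "database"),
         ("auth", "database")]
scores = approx_fanout(edges)
# api-gateway broadcasts to three services, so it gets the top score (1.0)
```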


Use Cases

Microservice architectures -- Find SPOFs, amplification cascades, and convergence bottlenecks in any service mesh. Works with Kubernetes, Istio, OTel traces, or static architecture diagrams.

AI agent pipelines -- Identify vendor concentration risk, gateway bottlenecks, and guardrail single points of failure in LLM orchestration graphs.

CI/CD and data pipelines -- Detect structural fragility in build graphs, ETL workflows, and deployment pipelines before they cause cascading failures.

Architecture drift monitoring -- Compare structural fingerprints across releases. Know exactly which services changed structural role and by how much.


What's new in v0.7

  • live.from_dynatrace / from_honeycomb / from_datadog — pull real call edges from running infra (v0.5–v0.7)
  • OpenTelemetry trace ingestion — auto-detects OTLP / Jaeger / Zipkin (v0.3)
  • AI agent framework parsers — from_langgraph, from_crewai, from_autogen; AST-only, no need to install the framework (v0.4)
  • IaC parsers — CloudFormation, AWS CDK (Python), Pulumi (Python) (v0.6)
  • Async surface — await aencode(...); aencode_diff() runs both encodes in parallel (v0.7.1)
  • encode(cache=True) — skip the HTTP round trip on repeat calls (v0.4.1)
  • dedupe_edges — canonicalize names when blending multiple sources (v0.3)
  • One-retry-on-5xx — every connector handles transient failures (v0.7.2)
  • semanticembed-agent console script — interactive shell for non-programmer users (v0.5.1)
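
The one-retry-on-5xx behavior can be pictured as a small wrapper around a connector call. This is a sketch of the pattern only, not the connectors' actual code:

```python
import urllib.error

def retry_once_on_5xx(call):
    # Retry exactly once when the server returns a 5xx; re-raise anything
    # else. Mirrors the changelog's described policy, as an illustration.
    def wrapper(*args, **kwargs):
        try:
            return call(*args, **kwargs)
        except urllib.error.HTTPError as err:
            if 500 <= err.code < 600:
                return call(*args, **kwargs)   # one retry, then propagate
            raise
    return wrapper
```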

Full details in the CHANGELOG.


Notebooks

Step-by-step Colab notebooks. Click to open, run in your browser.

| Notebook | Use Case | What You Learn |
|---|---|---|
| 01 - Quickstart | Getting started | Install, encode a graph, read the risk report |
| 02 - Dimensions Deep Dive | Understanding 6D | What each dimension means, with worked examples |
| 03 - Drift Detection | Architecture drift | Compare graph versions, detect structural changes |
| 04 - Bring Your Own Graph | Any graph | Load from JSON, OTel traces, or Kubernetes |
| 05 - AI Agent Pipelines | AI/LLM agents | Vendor concentration, gateway bottlenecks, guardrail SPOFs |
| 06 - CI/CD & Data Pipelines | CI/CD & ETL | Build graph fragility, pipeline bottlenecks, drift gates |
| 07 - OpenTelemetry | OTel traces | Extract edges from traces, structural analysis, CI/CD gates |
| 08 - Qwen Compression | LLM compression | Structural pruning of Qwen2.5-7B, 10% speedup at Grade A |

Extract Edges from Infrastructure

Don't have an edge list? The extract module parses common infrastructure files automatically.

import semanticembed as se

# From Docker Compose
edges = se.extract.from_docker_compose("docker-compose.yml")

# From Kubernetes manifests
edges = se.extract.from_kubernetes("k8s/")

# From GitHub Actions workflows
edges = se.extract.from_github_actions(".github/workflows")

# From Terraform
edges = se.extract.from_terraform("infra/")

# From CloudFormation (YAML or JSON)
edges = se.extract.from_cloudformation("template.yaml")

# From AWS CDK (Python)
edges = se.extract.from_aws_cdk("app.py")

# From Pulumi (Python)
edges = se.extract.from_pulumi("__main__.py")

# From Python imports (module dependency graph)
edges = se.extract.from_python_imports("src/")

# From Node.js monorepo (inter-package dependencies)
edges = se.extract.from_package_json_workspaces(".")

# From OpenTelemetry traces (OTLP / Jaeger / Zipkin JSON)
edges = se.extract.from_otel_traces("traces.json")

# From AI agent frameworks (AST-only — no need to install the framework)
edges = se.extract.from_langgraph("workflow.py")   # StateGraph.add_edge / add_conditional_edges / set_entry_point
edges = se.extract.from_crewai("crew.py")          # Task(agent=...) / Task(context=...) / Crew(manager_agent=...)
edges = se.extract.from_autogen("agents.py")       # GroupChat(agents=...) / initiate_chat(...)

# Auto-detect everything in a directory
edges, sources = se.extract.from_directory(".")
print(f"Found {len(edges)} edges from {sources}")

# Then encode as usual
result = se.encode(edges)
print(result.table)

Requires pyyaml for YAML parsing: pip install 'semanticembed[extract]'

Trace ingestion (highest-fidelity edges)

Compose / k8s / Terraform describe deployment, not actual call edges. Real runtime traces are the only source with the actual call graph. v0.3.0 ships a deterministic parser for the three common JSON formats:

  • OTLP (OpenTelemetry Collector / SDK exports): {"resourceSpans": [...]}
  • Jaeger (jaeger-query API, jaeger-cli): {"data": [{"spans": [...]}]}
  • Zipkin (Zipkin v2 API): top-level array with localEndpoint.serviceName

Edges are emitted at the service level — same-service spans roll up. Place a traces.json (or otel.json / jaeger.json / zipkin.json) at your repo root and from_directory() will pick it up.
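
The service-level rollup can be sketched for the Jaeger shape above. This illustrative parser assumes the standard jaeger-query layout (spans with a processID, CHILD_OF references, and a top-level processes map); the shipped parser also covers OTLP and Zipkin and is more defensive:

```python
def jaeger_service_edges(doc):
    # Roll span parent/child links up to service-level edges, dropping
    # same-service pairs. A simplified sketch, not the SDK's parser.
    edges = set()
    for trace in doc.get("data", []):
        procs = trace.get("processes", {})
        svc_of = {s["spanID"]: procs.get(s["processID"], {}).get("serviceName")
                  for s in trace.get("spans", [])}
        for span in trace.get("spans", []):
            child = svc_of[span["spanID"]]
            for ref in span.get("references", []):
                if ref.get("refType") != "CHILD_OF":
                    continue
                parent = svc_of.get(ref.get("spanID"))
                if parent and child and parent != child:  # same-service rolls up
                    edges.add((parent, child))
    return sorted(edges)

doc = {"data": [{
    "spans": [
        {"spanID": "1", "processID": "p1", "references": []},
        {"spanID": "2", "processID": "p2",
         "references": [{"refType": "CHILD_OF", "spanID": "1"}]},
        {"spanID": "3", "processID": "p2",
         "references": [{"refType": "CHILD_OF", "spanID": "2"}]},
    ],
    "processes": {"p1": {"serviceName": "frontend"},
                  "p2": {"serviceName": "auth"}},
}]}
edges = jaeger_service_edges(doc)   # span 3 is auth-to-auth, so it rolls up
```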

Live observability connectors

Static analysis is great for repos. For running infra, pull traces directly:

from semanticembed import live

# Dynatrace — Smartscape services + call relationships
edges = live.from_dynatrace(
    env_url="https://abc12345.live.dynatrace.com",
    api_token=os.environ["DYNATRACE_API_TOKEN"],
)

# Honeycomb — Query API over a dataset
edges = live.from_honeycomb(
    dataset="my-app-prod",
    api_key=os.environ["HONEYCOMB_API_KEY"],
    lookback_seconds=900,
)

# Datadog — Spans Search API
edges = live.from_datadog(
    api_key=os.environ["DD_API_KEY"],
    app_key=os.environ["DD_APP_KEY"],
    env="prod",
    lookback="now-30m",
)

AI agent frameworks

The three popular Python agent frameworks each have an explicit graph-building API. Static AST parsing extracts the actual call graph the framework will run. The SDK does not import or run the framework — you don't need pip install langgraph to extract from a LangGraph script.

LangGraph — g.add_edge, g.add_conditional_edges (with explicit path_map), g.set_entry_point, g.set_finish_point. The sentinels START and END are emitted as literal node names.

CrewAI — Task(agent=researcher) produces researcher -> task_var; Task(context=[t1, t2]) produces t1 -> task_var / t2 -> task_var; Crew(manager_agent=mgr) adds a mgr -> agent fan-out.

AutoGen — GroupChat(agents=[a, b, c]) with an explicit GroupChatManager produces a star (manager -> a, -> b, -> c). Without a manager, it's fully connected. x.initiate_chat(y) always produces x -> y.
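
The AutoGen rules above are simple enough to state as a function. A sketch of the documented topology rules only, not the SDK's AST parser:

```python
from itertools import permutations

def groupchat_edges(agents, manager=None):
    # With a GroupChatManager the graph is a star (manager -> each agent);
    # without one the chat is fully connected. Illustration of the rules.
    if manager is not None:
        return [(manager, a) for a in agents]
    return list(permutations(agents, 2))

star = groupchat_edges(["a", "b", "c"], manager="mgr")
mesh = groupchat_edges(["a", "b", "c"])   # 6 directed edges, all pairs
```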

from_directory() auto-detects these by scanning Python files for the relevant import statements and only running the matching parser on those files (cheap and accurate vs. walking the whole tree).
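
The AST-only approach can be illustrated with the stdlib ast module. This toy extractor handles only two-argument, string-literal add_edge calls; the real from_langgraph covers the fuller surface described above:

```python
import ast

def toy_langgraph_edges(source):
    # Find g.add_edge("a", "b") calls by walking the AST -- the framework
    # itself is never imported or run. Simplified sketch, not the SDK parser.
    edges = []
    for node in ast.walk(ast.parse(source)):
        if (isinstance(node, ast.Call)
                and isinstance(node.func, ast.Attribute)
                and node.func.attr == "add_edge"
                and len(node.args) == 2
                and all(isinstance(a, ast.Constant) and isinstance(a.value, str)
                        for a in node.args)):
            edges.append((node.args[0].value, node.args[1].value))
    return edges

src = '''
g = StateGraph(State)
g.add_edge("research", "draft")
g.add_edge("draft", "review")
'''
# parses fine even though langgraph is not installed
```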

Blending sources cleanly

Combining traces + compose + Python imports usually produces the same logical service under several names (auth-svc, auth_svc, AuthService). Use dedupe_edges to canonicalize:

compose_edges, _ = se.extract.from_directory(".")
trace_edges = se.extract.from_otel_traces("traces.json")

edges = se.dedupe_edges(
    list(compose_edges) + trace_edges,
    normalize="snake",                          # AuthService -> auth_service
    aliases={"auth_svc": "auth_service"},       # explicit overrides
)
result = se.encode(edges)

Modes: "none" (default), "snake", "lower", "kebab". Self-loops are dropped by default.
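
The canonicalization described here can be sketched in a few lines. An illustrative re-implementation of the "snake" mode with aliases and self-loop dropping, not the SDK's dedupe_edges:

```python
import re

def dedupe_edges_sketch(edges, aliases=None):
    # "snake" mode sketch: CamelCase and kebab-case collapse to snake_case,
    # aliases apply afterwards, self-loops and duplicates are dropped.
    aliases = aliases or {}
    def snake(name):
        name = re.sub(r"(?<=[a-z0-9])(?=[A-Z])", "_", name)  # AuthService -> Auth_Service
        return name.replace("-", "_").lower()
    out, seen = [], set()
    for src, dst in edges:
        s, d = snake(src), snake(dst)
        s, d = aliases.get(s, s), aliases.get(d, d)
        if s != d and (s, d) not in seen:
            seen.add((s, d))
            out.append((s, d))
    return out

edges = [("AuthService", "UserDB"), ("auth-svc", "UserDB"), ("user_db", "user_db")]
canon = dedupe_edges_sketch(edges, aliases={"auth_svc": "auth_service"})
# -> [("auth_service", "user_db")]: names merged, duplicate and self-loop dropped
```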


LLM-Powered Analysis

Get plain-language explanations and actionable recommendations using your own LLM key.

import semanticembed as se

result = se.encode(edges)

# One-shot analysis (OpenAI, Anthropic, or local Ollama)
print(se.explain(result, model="gpt-4o-mini", api_key="sk-..."))
print(se.explain(result, model="claude-sonnet-4-5", api_key="sk-ant-..."))
print(se.explain(result, model="ollama/llama3"))  # local, no key needed

# Follow-up questions
answer = se.ask(result, "What happens if the database goes down?",
                model="gpt-4o-mini", api_key="sk-...")

The LLM sees only the encoding output (6D vectors, risk report) -- never the algorithm.


Structural Diff

Compare two graph versions in one call:

changes = se.encode_diff(edges_v1, edges_v2)
for node, deltas in changes.items():
    for dim, info in deltas.items():
        print(f"{node}.{dim}: {info['before']:.3f} -> {info['after']:.3f}")

Agent

An autonomous agent that scans your repo, extracts edges, encodes, and explains results interactively. Choose your LLM backend:

# Claude agent (installs the agent code + the Anthropic agent SDK)
pip install 'semanticembed[agent-claude]'
export ANTHROPIC_API_KEY=sk-ant-...
semanticembed-agent              # interactive
semanticembed-agent --ask "What is my biggest SPOF?"

# Gemini agent
pip install 'semanticembed[agent-gemini]'
export GOOGLE_API_KEY=...
semanticembed-gemini-agent

Both commands are also reachable as python -m semanticembed.agent / python -m semanticembed.agent.gemini_agent.

The agent has 7 tools: scan; three extractors (docker-compose, k8s, Python imports); encode; diff; and simulate architecture changes. See src/semanticembed/agent/README.md for full docs.

What gets sent where

Be explicit about data egress before pointing the agent at private architecture:

  • Claude agent (semanticembed-agent / python -m semanticembed.agent): the LLM reads tool outputs as conversation context, so the contents of docker-compose.yml, Kubernetes manifests, Terraform .tf files, Python source, and package.json files in your project go to Anthropic's API along with your prompts. Conversation history is governed by Anthropic's data-use policies.
  • Gemini agent (semanticembed-gemini-agent / python -m semanticembed.agent.gemini_agent): same data flow, sent to Google's API instead.
  • Claude Code skill (skill/analyze.py): runs inside Claude Code — uses the parent agent for any natural-language extraction, the SDK for the deterministic scan + encoding. No second LLM, no Ollama dependency.
  • Cloud encode() call: only the edge list (node names, e.g. ["frontend", "auth"]) goes to the SemanticEmbed Railway endpoint. File contents are never sent.

If your topology is sensitive, pre-extract edges deterministically with se.extract.from_directory() and call se.encode() directly — that path sends only the edge list.


Example Graphs

The examples/ directory contains ready-to-encode edge lists and parsable framework files. None of the .py examples need to be runnable — the SDK parses them via AST without importing the framework.

Edge-list JSON — load with se.encode_file(path):

| File | Application | Nodes | Edges |
|---|---|---|---|
| google_online_boutique.json | Google Online Boutique (microservices) | 11 | 15 |
| weaveworks_sock_shop.json | Weaveworks Sock Shop (microservices) | 14 | 15 |
| ai_agent_pipeline.json | Multi-agent LLM orchestration | 12 | 15 |
| cicd_pipeline.json | CI/CD build pipeline | 12 | 17 |
| sample_pipeline.json | Minimal 7-node starter | 7 | 8 |

AI-framework Python sources — parse with the matching extractor:

| File | Extractor | Edges |
|---|---|---|
| langgraph_research_agent.py | from_langgraph | 6 |
| crewai_content_pipeline.py | from_crewai | 11 |
| autogen_codereview.py | from_autogen | 5 |

React Components

Drop-in React components for rendering SDK results. See examples/react/ for the full source.

| Component | What it renders |
|---|---|
| useSemanticEmbed.ts | React hook — call encode() from your frontend |
| RiskTable.tsx | Sortable risk table with severity badges |
| RadarChart.tsx | 6D radar chart comparing node profiles |
| TopologySummary.tsx | KPI cards + risk breakdown |

import { useSemanticEmbed } from './useSemanticEmbed';
import { RiskTable } from './RiskTable';

function App() {
  const { result, loading, encode } = useSemanticEmbed();
  return (
    <>
      <button onClick={() => encode([["A","B"],["B","C"],["C","D"]])}>Analyze</button>
      {result && <RiskTable risks={result.risks} />}
    </>
  );
}

Input Format

SemanticEmbed accepts any directed graph as an edge list.

# Python tuples
edges = [("A", "B"), ("B", "C")]
result = encode(edges)

# JSON file
result = encode_file("my_graph.json")

JSON format:

{
  "edges": [
    {"source": "A", "target": "B"},
    {"source": "B", "target": "C"}
  ]
}
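
A minimal reader for this JSON shape (a sketch; se.encode_file() is the supported loader and also validates input):

```python
import json

def load_edges(text):
    # Parse the documented edge-list JSON shape into (source, target) tuples.
    doc = json.loads(text)
    return [(e["source"], e["target"]) for e in doc["edges"]]

raw = '{"edges": [{"source": "A", "target": "B"}, {"source": "B", "target": "C"}]}'
pairs = load_edges(raw)   # [("A", "B"), ("B", "C")]
```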

See docs/input_format.md for the full spec.


Documentation

| Document | Description |
|---|---|
| docs/getting_started.md | Install, encode, read results, export -- one page |
| docs/api_reference.md | Every function, class, parameter, and return type |
| docs/dimensions.md | The six structural dimensions -- full reference |
| docs/input_format.md | Edge list input specification |
| docs/output_format.md | Encoding output and risk report format |

License

SemanticEmbed SDK is proprietary software with public source code — the same model Stripe, Snowflake, and Anthropic use for their SDKs. Free tier covers graphs up to 50 nodes; paid tier unlocks larger graphs and continuous monitoring. See LICENSE and LICENSE-FAQ for terms and common questions.

Patent pending. Application #63/994,075.


Contact

Built by Jeff Murray (@jmurray10).

For algorithm / encoding / scoring questions (server-side, not in this repo): same email — please put [encoding] in the subject line.



Download files

Download the file for your platform.

Source Distribution

semanticembed-0.7.3.tar.gz (144.0 kB)

Uploaded Source

Built Distribution


semanticembed-0.7.3-py3-none-any.whl (59.1 kB)

Uploaded Python 3

File details

Details for the file semanticembed-0.7.3.tar.gz.

File metadata

  • Download URL: semanticembed-0.7.3.tar.gz
  • Upload date:
  • Size: 144.0 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.12

File hashes

Hashes for semanticembed-0.7.3.tar.gz
| Algorithm | Hash digest |
|---|---|
| SHA256 | b04386e83a5bd775a4813e63905b99f14bf9a90bd61d3937fc2a2dc7b7cb70e2 |
| MD5 | 7d2b34360a2e6b3a655a2e2b4b46ed64 |
| BLAKE2b-256 | 69b2819fec5ca8c5b407126dc5753de0cb3e86ac7fab1d3f9a318298b5daab5d |


Provenance

The following attestation bundles were made for semanticembed-0.7.3.tar.gz:

Publisher: publish.yml on jmurray10/semanticembed-sdk

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.

File details

Details for the file semanticembed-0.7.3-py3-none-any.whl.

File metadata

  • Download URL: semanticembed-0.7.3-py3-none-any.whl
  • Upload date:
  • Size: 59.1 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.12

File hashes

Hashes for semanticembed-0.7.3-py3-none-any.whl
| Algorithm | Hash digest |
|---|---|
| SHA256 | b66bba35e2c2b55393abafc0d59c0df4ac9a5072a1bbffcbcc1a6699e2b1b636 |
| MD5 | bdad48806223ffd0c3e62049f92b3626 |
| BLAKE2b-256 | 1f5209de73eaa151258a448f0e5cd37ab1f406ba5624e6bd603e6b7ddd393bc5 |


Provenance

The following attestation bundles were made for semanticembed-0.7.3-py3-none-any.whl:

Publisher: publish.yml on jmurray10/semanticembed-sdk

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.
