
SemanticEmbed SDK

Structural intelligence for directed graphs. Six numbers per node. Sub-millisecond.

SemanticEmbed computes a 6-dimensional structural encoding for every node in a directed graph. From a bare edge list -- no runtime telemetry, no historical data, no tuning -- it produces six independent measurements that fully describe each node's structural role.

Validated against production incidents. In a blind test against a live production environment (100+ services, 2,500+ incidents over 30 days), the majority of topology-relevant incidents occurred on nodes that 6D structural analysis had flagged as risky -- from the call graph alone, before any incident occurred.


Why 6D?

Observability tools tell you what broke. SemanticEmbed tells you what will break -- from topology alone.

  • No agents, no instrumentation -- just an edge list
  • Sub-millisecond -- encodes 100+ node graphs in <1ms
  • Works on any directed graph -- microservices, AI agent pipelines, data workflows, CI/CD
  • Complementary structural axes -- six dimensions, each captures risk signals the others cannot

Try It Now

Open the Interactive Demo in Google Colab -- runs in your browser, nothing to install locally.


Install

pip install semanticembed

Free tier: Up to 50 nodes per graph. No signup required.


Quick Start

from semanticembed import encode, report

# Any directed graph as an edge list
edges = [
    ("frontend", "api-gateway"),
    ("api-gateway", "order-service"),
    ("api-gateway", "user-service"),
    ("order-service", "payment-service"),
    ("order-service", "inventory-service"),
    ("payment-service", "database"),
]

# Compute the 6D encoding (sub-millisecond)
result = encode(edges)

# Six structural measurements per node
for node, vector in result.vectors.items():
    print(f"{node}: {vector}")

# Structural risk report
print(report(result))

Output:

STRUCTURAL RISK REPORT
======================

AMPLIFICATION RISKS (high fanout, high criticality):
  - api-gateway    | fanout=0.667 | criticality=0.556

CONVERGENCE SINKS (low independence, many upstream callers):
  - database       | independence=0.000

STRUCTURAL SPOF (low independence, high upstream dependency):
  - api-gateway    | independence=0.000 | every request flows through this node

What It Finds That Other Tools Miss

Your current tools | SemanticEmbed
------------------ | -------------
This service has high latency | This service is on 89% of all paths (structural SPOF)
This service had 5 errors | This service fans out to 12 downstream services (amplification risk)
This service is healthy | This service has zero lateral redundancy (convergence sink)

Runtime monitoring tells you what is slow now. Structural analysis tells you what will cause cascading failures regardless of current load.


The Six Dimensions

Every node gets six independent structural measurements:

Dimension | What It Measures | Risk Signal
--------- | ---------------- | -----------
Depth | Position in the execution pipeline (0.0 = entry, 1.0 = deepest) | Deep nodes accumulate upstream latency
Independence | Lateral redundancy at the same pipeline stage | Low independence = structural chokepoint
Hierarchy | Module or group membership | Cross-module dependencies = blast radius
Throughput | Fraction of total traffic flowing through the node | High throughput + low independence = hidden bottleneck
Criticality | Fraction of end-to-end paths depending on this node | High criticality = SPOF
Fanout | Broadcaster (1.0) vs aggregator (0.0) | High fanout = amplification risk

These six properties capture complementary structural information. Each dimension provides risk signals the others cannot.
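
Because each dimension occupies a fixed slot in the per-node vector, threshold rules can be written directly against it. A minimal sketch, assuming result.vectors stores values in the table's order (depth, independence, hierarchy, throughput, criticality, fanout) -- see docs/dimensions.md for the authoritative layout; the thresholds below are illustrative, not SDK defaults:

DIMS = ("depth", "independence", "hierarchy", "throughput", "criticality", "fanout")

def spof_candidates(result, crit_min=0.5, indep_max=0.1):
    """Flag nodes that combine high criticality with low independence."""
    flagged = []
    for node, vector in result.vectors.items():
        dims = dict(zip(DIMS, vector))
        if dims["criticality"] >= crit_min and dims["independence"] <= indep_max:
            flagged.append(node)
    return flagged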

See docs/dimensions.md for the full reference.


Use Cases

Microservice architectures -- Find SPOFs, amplification cascades, and convergence bottlenecks in any service mesh. Works with Kubernetes, Istio, OTel traces, or static architecture diagrams.

AI agent pipelines -- Identify vendor concentration risk, gateway bottlenecks, and guardrail single points of failure in LLM orchestration graphs.

CI/CD and data pipelines -- Detect structural fragility in build graphs, ETL workflows, and deployment pipelines before they cause cascading failures.

Architecture drift monitoring -- Compare structural fingerprints across releases. Know exactly which services changed structural role and by how much.


Notebooks

Step-by-step Colab notebooks. Click to open, run in your browser.

Notebook | Use Case | What You Learn
-------- | -------- | --------------
01 - Quickstart | Getting started | Install, encode a graph, read the risk report
02 - Dimensions Deep Dive | Understanding 6D | What each dimension means, with worked examples
03 - Drift Detection | Architecture drift | Compare graph versions, detect structural changes
04 - Bring Your Own Graph | Any graph | Load from JSON, OTel traces, or Kubernetes
05 - AI Agent Pipelines | AI/LLM agents | Vendor concentration, gateway bottlenecks, guardrail SPOFs
06 - CI/CD & Data Pipelines | CI/CD & ETL | Build graph fragility, pipeline bottlenecks, drift gates
07 - OpenTelemetry | OTel traces | Extract edges from traces, structural analysis, CI/CD gates
08 - Qwen Compression | LLM compression | Structural pruning of Qwen2.5-7B, 10% speedup at Grade A

Extract Edges from Infrastructure

Don't have an edge list? The extract module parses common infrastructure files automatically.

import semanticembed as se

# From Docker Compose
edges = se.extract.from_docker_compose("docker-compose.yml")

# From Kubernetes manifests
edges = se.extract.from_kubernetes("k8s/")

# From GitHub Actions workflows
edges = se.extract.from_github_actions(".github/workflows")

# From Terraform
edges = se.extract.from_terraform("infra/")

# From CloudFormation (YAML or JSON)
edges = se.extract.from_cloudformation("template.yaml")

# From AWS CDK (Python)
edges = se.extract.from_aws_cdk("app.py")

# From Pulumi (Python)
edges = se.extract.from_pulumi("__main__.py")

# From Python imports (module dependency graph)
edges = se.extract.from_python_imports("src/")

# From Node.js monorepo (inter-package dependencies)
edges = se.extract.from_package_json_workspaces(".")

# From OpenTelemetry traces (OTLP / Jaeger / Zipkin JSON)
edges = se.extract.from_otel_traces("traces.json")

# From AI agent frameworks (AST-only — no need to install the framework)
edges = se.extract.from_langgraph("workflow.py")   # StateGraph.add_edge / add_conditional_edges / set_entry_point
edges = se.extract.from_crewai("crew.py")          # Task(agent=...) / Task(context=...) / Crew(manager_agent=...)
edges = se.extract.from_autogen("agents.py")       # GroupChat(agents=...) / initiate_chat(...)

# Auto-detect everything in a directory
edges, sources = se.extract.from_directory(".")
print(f"Found {len(edges)} edges from {sources}")

# Then encode as usual
result = se.encode(edges)
print(result.table)

Requires pyyaml for YAML parsing: pip install 'semanticembed[extract]'

Trace ingestion (highest-fidelity edges)

Compose / k8s / Terraform files describe deployment topology, not actual call edges; runtime traces are the only source that captures the real call graph. v0.3.0 ships a deterministic parser for the three common trace JSON formats:

  • OTLP (OpenTelemetry Collector / SDK exports): {"resourceSpans": [...]}
  • Jaeger (jaeger-query API, jaeger-cli): {"data": [{"spans": [...]}]}
  • Zipkin (Zipkin v2 API): top-level array with localEndpoint.serviceName

Edges are emitted at the service level — same-service spans roll up. Place a traces.json (or otel.json / jaeger.json / zipkin.json) at your repo root and from_directory() will pick it up.
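
As a concrete illustration of the Zipkin shape, here is a minimal hypothetical two-span trace and its extraction. The field set is illustrative (real exports carry timestamps, durations, and more):

import json
import semanticembed as se

# Zipkin v2 shape: a top-level array with localEndpoint.serviceName.
spans = [
    {"traceId": "t1", "id": "a", "name": "GET /",
     "localEndpoint": {"serviceName": "frontend"}},
    {"traceId": "t1", "id": "b", "parentId": "a", "name": "GET /orders",
     "localEndpoint": {"serviceName": "api-gateway"}},
]
with open("zipkin.json", "w") as f:
    json.dump(spans, f)

edges = se.extract.from_otel_traces("zipkin.json")
# expected service-level edge: ("frontend", "api-gateway")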

Live observability connectors

Static analysis is great for repos. For running infra, pull traces directly:

import os

from semanticembed import live

# Dynatrace — Smartscape services + call relationships
edges = live.from_dynatrace(
    env_url="https://abc12345.live.dynatrace.com",
    api_token=os.environ["DYNATRACE_API_TOKEN"],
)

# Honeycomb — Query API over a dataset
edges = live.from_honeycomb(
    dataset="my-app-prod",
    api_key=os.environ["HONEYCOMB_API_KEY"],
    lookback_seconds=900,
)

# Datadog — Spans Search API
edges = live.from_datadog(
    api_key=os.environ["DD_API_KEY"],
    app_key=os.environ["DD_APP_KEY"],
    env="prod",
    lookback="now-30m",
)

AI agent frameworks

The three popular Python agent frameworks each have an explicit graph-building API. Static AST parsing extracts the actual call graph the framework will run. The SDK does not import or run the framework — you don't need pip install langgraph to extract from a LangGraph script.

LangGraph -- g.add_edge, g.add_conditional_edges (with explicit path_map), g.set_entry_point, g.set_finish_point. The sentinels START and END are emitted as literal node names.

CrewAI -- Task(agent=researcher) produces researcher -> task_var; Task(context=[t1, t2]) produces t1 -> task_var / t2 -> task_var; Crew(manager_agent=mgr) adds a mgr -> agent fan-out.

AutoGen -- GroupChat(agents=[a, b, c]) with an explicit GroupChatManager produces a star (manager -> a, -> b, -> c). Without a manager, the agents are fully connected. x.initiate_chat(y) always produces x -> y.
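
For example, given a minimal LangGraph-style script (node names are hypothetical), from_langgraph recovers the graph by AST alone -- per the rules above, START and END appear as literal node names:

# workflow.py -- parsed statically; the SDK never imports langgraph
from langgraph.graph import StateGraph, START, END

g = StateGraph(dict)
g.add_node("plan", lambda state: state)
g.add_node("act", lambda state: state)
g.add_edge(START, "plan")
g.add_edge("plan", "act")
g.add_edge("act", END)

# elsewhere:
import semanticembed as se

edges = se.extract.from_langgraph("workflow.py")
# expected: [("START", "plan"), ("plan", "act"), ("act", "END")]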

from_directory() auto-detects these by scanning Python files for the relevant import statements and only running the matching parser on those files (cheap and accurate vs. walking the whole tree).

Blending sources cleanly

Combining traces + compose + Python imports usually produces the same logical service under several names (auth-svc, auth_svc, AuthService). Use dedupe_edges to canonicalize:

compose_edges, _ = se.extract.from_directory(".")
trace_edges = se.extract.from_otel_traces("traces.json")

edges = se.dedupe_edges(
    list(compose_edges) + trace_edges,
    normalize="snake",                          # AuthService -> auth_service
    aliases={"auth_svc": "auth_service"},       # explicit overrides
)
result = se.encode(edges)

Modes: "none" (default), "snake", "lower", "kebab". Self-loops are dropped by default.


LLM-Powered Analysis

Get plain-language explanations and actionable recommendations using your own LLM key.

import semanticembed as se

result = se.encode(edges)

# One-shot analysis (OpenAI, Anthropic, or local Ollama)
print(se.explain(result, model="gpt-4o-mini", api_key="sk-..."))
print(se.explain(result, model="claude-sonnet-4-5", api_key="sk-ant-..."))
print(se.explain(result, model="ollama/llama3"))  # local, no key needed

# Follow-up questions
answer = se.ask(result, "What happens if the database goes down?",
                model="gpt-4o-mini", api_key="sk-...")

The LLM sees only the encoding output (6D vectors, risk report) -- never the algorithm.


Structural Diff

Compare two graph versions in one call:

changes = se.encode_diff(edges_v1, edges_v2)
for node, deltas in changes.items():
    for dim, info in deltas.items():
        print(f"{node}.{dim}: {info['before']:.3f} -> {info['after']:.3f}")

Agent

An autonomous agent that scans your repo, extracts edges, encodes, and explains results interactively. Choose your LLM backend:

# Claude agent (installs the agent code + the Anthropic agent SDK)
pip install 'semanticembed[agent-claude]'
export ANTHROPIC_API_KEY=sk-ant-...
semanticembed-agent              # interactive
semanticembed-agent --ask "What is my biggest SPOF?"

# Gemini agent
pip install 'semanticembed[agent-gemini]'
export GOOGLE_API_KEY=...
semanticembed-gemini-agent

Both binaries are also reachable as python -m semanticembed.agent / python -m semanticembed.agent.gemini_agent.

The agent has 7 tools: scan, extract (docker-compose, k8s, Python imports), encode, diff, and simulate architecture changes. See src/semanticembed/agent/README.md for full docs.

What gets sent where

Be explicit about data egress before pointing the agent at private architecture:

  • Claude agent (python -m semanticembed.agent): the LLM reads tool outputs as conversation context, so the contents of docker-compose.yml, Kubernetes manifests, Terraform .tf files, Python source, and package.json files in your project go to Anthropic's API along with your prompts. Conversation history is governed by Anthropic's data-use policies.
  • Gemini agent (python -m semanticembed.agent.gemini_agent): same data flow, sent to Google's API instead.
  • Skill (skill/analyze.py): runs Ollama on your machine. Raw input never leaves localhost unless you set SEMBED_OLLAMA_URL to a remote host.
  • Cloud encode() call: only the edge list (node names, e.g. ["frontend", "auth"]) goes to the SemanticEmbed Railway endpoint. File contents are never sent.

If your topology is sensitive, prefer the skill (local Ollama) or pre-extract edges deterministically with se.extract.from_directory() and call se.encode() directly — that path sends only the edge list.
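
That privacy-preserving path, end to end:

import semanticembed as se

# Extraction runs entirely locally; only node-name pairs leave the machine.
edges, sources = se.extract.from_directory(".")
result = se.encode(edges)   # sends the edge list only -- never file contents
print(se.report(result))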


Example Graphs

The examples/ directory contains edge lists for well-known architectures:

File | Application | Nodes | Edges
---- | ----------- | ----- | -----
google_online_boutique.json | Google Online Boutique (microservices) | 11 | 15
weaveworks_sock_shop.json | Weaveworks Sock Shop (microservices) | 15 | 15
ai_agent_pipeline.json | Multi-agent LLM orchestration | 12 | 15
cicd_pipeline.json | CI/CD build pipeline | 13 | 17
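
Each file uses the standard JSON edge-list format, so encode_file loads it directly (the path below assumes you run from the repo root):

from semanticembed import encode_file, report

result = encode_file("examples/google_online_boutique.json")
print(report(result))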

React Components

Drop-in React components for rendering SDK results. See examples/react/ for the full source.

Component | What it renders
--------- | ---------------
useSemanticEmbed.ts | React hook -- call encode() from your frontend
RiskTable.tsx | Sortable risk table with severity badges
RadarChart.tsx | 6D radar chart comparing node profiles
TopologySummary.tsx | KPI cards + risk breakdown

import { useSemanticEmbed } from './useSemanticEmbed';
import { RiskTable } from './RiskTable';

function App() {
  const { result, loading, encode } = useSemanticEmbed();
  return (
    <>
      <button onClick={() => encode([["A","B"],["B","C"],["C","D"]])}>Analyze</button>
      {result && <RiskTable risks={result.risks} />}
    </>
  );
}

Input Format

SemanticEmbed accepts any directed graph as an edge list.

# Python tuples
edges = [("A", "B"), ("B", "C")]
result = encode(edges)

# JSON file
result = encode_file("my_graph.json")

JSON format:

{
  "edges": [
    {"source": "A", "target": "B"},
    {"source": "B", "target": "C"}
  ]
}
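
Generating this format from Python tuples takes a few lines:

import json

edges = [("A", "B"), ("B", "C")]
doc = {"edges": [{"source": s, "target": t} for s, t in edges]}
with open("my_graph.json", "w") as f:
    json.dump(doc, f, indent=2)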

See docs/input_format.md for the full spec.


Documentation

Document | Description
-------- | -----------
docs/getting_started.md | Install, encode, read results, export -- one page
docs/api_reference.md | Every function, class, parameter, and return type
docs/dimensions.md | The six structural dimensions -- full reference
docs/input_format.md | Edge list input specification
docs/output_format.md | Encoding output and risk report format

License

SemanticEmbed SDK is proprietary software distributed as a compiled package. Free tier available for graphs up to 50 nodes. See LICENSE for terms.

Patent pending. Application #63/994,075.


Contact

Email jeffmurr@seas.upenn.edu
