Semvec

Constant-cost semantic memory for LLM applications.

Semvec replaces unbounded conversation history with a fixed-size semantic state plus a tiered, content-aware memory. The cost of every LLM call stays constant — turn 10 and turn 10 000 carry the same input footprint — and the agent still has structured access to decisions, invariants, error patterns, and prior context across sessions.

Patent-pending technology (EP 25 188 105.8, novelty acknowledged).


What you get

| Capability | What it solves |
|---|---|
| Constant-size compressed context | Per-call LLM input cost stops growing with conversation length. ~76 % token reduction on 48-turn runs. |
| Tiered memory with selective forgetting | Three tiers (short / medium / long term) with retention scoring — frequently-accessed older memories outlive never-touched newer ones. |
| Domain anchors + resonance triggers | Bias retrieval toward known domains or specific keywords without re-training. Lifts precision@3 from 86 % → 91.7 % on mixed-domain workloads. |
| Drop-in chat proxy | Wrap any OpenAI-compatible LLM and get compressed context for free. Works with vLLM, LiteLLM, OpenRouter, Ollama out of the box. |
| Multi-agent coordination (Cortex) | Run several agents that share an aggregated view, vote on proposals, and exchange checksummed state vectors. |
| Coding-agent compaction | Persistent memory across coding sessions — design decisions, invariants, error patterns, code-pointer index, anti-resonance checks. MCP server for Claude Code & Cursor included. |
| REST API server | semvec serve exposes the full surface over FastAPI: sessions, clusters, regions, observer, network, literal cache, Prometheus metrics. |
| Compliance pack | Append-only event store, deterministic replay, GDPR Art. 17 forget with signed certificates, HMAC request signing, RS256 user JWTs. |
| Bring-your-own embedder | Anything exposing get_embedding(text) → np.ndarray and get_dimension() → int works. SentenceTransformers, OpenAI, ONNX int8 — see the embedders guide. |
| One wheel, all platforms | Python 3.10–3.14 via stable ABI. Pre-built wheels for Linux (x86_64 + aarch64), macOS (x86_64 + arm64), Windows (x86_64). |

Installation

# Core only
pip install semvec

# With multi-agent coordination
pip install "semvec[cortex]"

# With coding-agent compaction (FastMCP server, Claude Code hooks)
pip install "semvec[coding]"

# Compliance pack (event store, retention, DSGVO forget, HMAC, RS256)
pip install "semvec[compliance]"
# When you also want the FastAPI compliance routes + middleware:
pip install "semvec[api,compliance]"

# REST API server
pip install "semvec[api]"
semvec serve --host 0.0.0.0 --port 8080

# Benchmark runners + optional Mem0 baseline
pip install "semvec[benchmarks,mem0]"

# Everything the developers use
pip install "semvec[cortex,coding,api,benchmarks,dev]"
| Extra | Pulls in | When you need it |
|---|---|---|
| [cortex] | (marker only) | multi-agent coordination — primitives are always available; the extra marks intent |
| [coding] | fastmcp>=2.0 | MCP server + Claude Code lifecycle hooks |
| [compliance] | cryptography>=42 | Event store, retention sweeper, deletion certificate signer, HMAC + RS256 signing. FastAPI routes need [api] on top. See the Compliance guide. |
| [api] | fastapi, uvicorn[standard], slowapi, sqlalchemy, prometheus-client, pydantic | REST API server (semvec serve) |
| [benchmarks] | sentence-transformers>=3.0, datasets>=2.14, psutil>=5.9 | running the LongMemEval harness or other benchmarks |
| [mem0] | mem0ai>=0.1, faiss-cpu>=1.7 | head-to-head Mem0 comparison |
| [dev] | ruff, mypy, pre-commit, pytest, httpx | contributing |

Embedder requirement

Semvec is embedder-agnostic and refuses silent hash-based fallbacks — you bring your own. Any object exposing get_embedding(text) → np.ndarray and get_dimension() → int works.

pip install sentence-transformers

Choose the embedder dimension carefully — Semvec's retrieval quality is bounded by what the embedder can separate. Measured on 80 mixed-domain notes:

| Embedder | Dimension | precision@3 | Usable for |
|---|---|---|---|
| all-MiniLM-L6-v2 | 384 | 66.67 % | English-only, tight-domain prototypes only |
| paraphrase-multilingual-mpnet-base-v2 | 768 | 86.11 % | German / multilingual mixed-domain (recommended) |

The 384-dim MiniLM is the easy default, but on multilingual or domain-mixed text it confuses generic terms (e.g. "filter" → coffee filter vs. data filter). For German content, mixed-domain corpora, or anything where you need ≥ 80 % precision@3, use the 768-d multilingual mpnet as a minimum.

from sentence_transformers import SentenceTransformer

embedder = SentenceTransformer(
    "sentence-transformers/paraphrase-multilingual-mpnet-base-v2"
)
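
Wrapping that model in the interface Semvec expects takes only a few lines. The adapter below is a sketch rather than a class Semvec ships; the only contract is the two methods named above, and the later examples refer to the result as my_embedder:

import numpy as np

class STEmbedder:
    """Minimal adapter exposing the two methods Semvec requires."""

    def __init__(self, model):
        self._model = model

    def get_embedding(self, text: str) -> np.ndarray:
        return self._model.encode(text, normalize_embeddings=True)

    def get_dimension(self) -> int:
        return self._model.get_sentence_embedding_dimension()

my_embedder = STEmbedder(embedder)  # wraps the SentenceTransformer loaded above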

Choose your use case

| You want to… | Jump to |
|---|---|
| Compress conversation history for any LLM | Token-reduced LLM context |
| Drop-in replacement for openai.chat.completions | Drop-in chat proxy |
| Coordinate many agents (analyst + planner + critic …) | Multi-agent coordination |
| Give Claude Code / Cursor persistent memory across sessions | Coding-agent compaction |
| Run as a service, talk to it over HTTP | REST API server |
| Process regulated data (GDPR, audit, retention) | Compliance pack |

Token-reduced LLM context

The single most-used path: produce a compact system-prompt block from any conversation, regardless of length.

import openai  # openai>=1.0; reads OPENAI_API_KEY from the environment
from semvec import SemvecState, SemvecConfig
from semvec.token_reduction import SemvecStateSerializer

state = SemvecState(config=SemvecConfig(dimension=768))

# conversation: your existing iterable of (text, embedding) pairs
for text, embedding in conversation:
    state.update(embedding, text)

serializer = SemvecStateSerializer()
context = serializer.serialize(state, query_text="what did we decide about auth?")

response = openai.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": context},
        {"role": "user",   "content": "what did we decide about auth?"},
    ],
)

Compared to raw history concatenation, the compressed context does not grow with conversation length — input cost converges to a constant. The serializer fits prior context into a 150–350-token block sized for a system prompt.

Lift retrieval quality with anchors and triggers

The passive ingest above gives you retrieval that already beats sliding-window concatenation. To bias retrieval toward known domains or specific cues, register anchors and resonance triggers:

from semvec import SemvecState, SemvecConfig

state = SemvecState(config=SemvecConfig(
    dimension=768,
    enable_topic_switch=True,
    auto_anchor_on_topic_switch=True,   # opt-in (default off)
))

# Anchors — bias retrieval toward your known domains.
# embed() stands in for your embedder's get_embedding().
for prototype in [
    "SAP Business One Service Layer OData REST API",
    "Python MCP Model Context Protocol Server",
    "italienische Kueche Kochen Pasta Pizza",
    "Kaffee Espresso Roesterei Brewing",
]:
    state.add_anchor(embed(prototype))

# Triggers — boost memories on a keyword OR vector match.
state.create_resonance_trigger(
    keyword="security review",
    embedding=embed("security audit threat model"),
    threshold=0.7,
)

for text, vec in conversation:
    state.update(vec, text)

# Retrieval is now anchor-biased: candidates aligned with one of
# your domain anchors win the tie-break against generic phrases.
top = state.memory.get_relevant_memories(embed("OData filter syntax"), top_k=3)

What each piece adds (measured on mpnet 768 d, 80 mixed German notes):

| Variant | precision@3 |
|---|---|
| passive update() only | 86.11 % |
| + 4 domain anchors | 91.67 % (+5.56 pp) |
| + 4 resonance triggers | 86.11 % |
| anchors + triggers | 91.67 % |

Without anchors, the retrieval boost is a no-op — flipping these features on costs nothing if you do not need them. Anchors and triggers compete for the same boost slot (max(...), not addition), so redundant signals do not double-count.

Tuning rule of thumb: keep anchor_retrieval_boost ≥ trigger_retrieval_boost, both in the [0.1, 0.6] range. Pushing either past 0.7 mostly stops moving the needle — spend your budget on better anchor prototypes or sharper trigger thresholds rather than dialling the boosts higher.
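
As a config sketch, that rule of thumb looks like the snippet below. The two boost values are the knobs named above; treating them as SemvecConfig fields is an assumption here, so check the configuration reference for where they actually live:

from semvec import SemvecConfig

config = SemvecConfig(
    dimension=768,
    anchor_retrieval_boost=0.4,   # keep this >= the trigger boost
    trigger_retrieval_boost=0.3,  # both inside the [0.1, 0.6] band
)
# Effective boost per candidate is max(anchor, trigger), not their sum.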


Drop-in chat proxy

SemvecChatProxy wraps any callable LLM behind compressed context and tracks both compressed and full-history token counts per turn:

from semvec.token_reduction import SemvecChatProxy, create_llm_client

llm = create_llm_client("openai")  # reads OPENAI_BASE_URL/MODEL/API_KEY from env
proxy = SemvecChatProxy(
    llm_call=llm,
    system_prompt="You are a helpful assistant.",
    embedding_service=my_embedder,
)

for question in ["summarise Q3", "compare with Q2", "biggest miss?"]:
    result = proxy.chat(question)
    print(f"turn {result.turn_number}: {result.response}")
    print(f"  compressed tokens: {result.tokens.compressed}")
    print(f"  full-history tokens: {result.tokens.full_history}")

print(proxy.get_summary())

Built-in clients: OpenAIClient (works with the OpenAI API and any compatible endpoint such as vLLM, LiteLLM, OpenRouter), OllamaClient. You can pass any callable (list[ChatMessage]) -> str.
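
A minimal sketch of the custom-callable path, with assumed shapes only: the stub below ignores its messages and returns a constant string, which is enough to satisfy the (list[ChatMessage]) -> str contract:

from semvec.token_reduction import SemvecChatProxy

def my_llm(messages):
    # messages is a list[ChatMessage]; call your own model here.
    # Any string return value satisfies the contract.
    return "stub response"

proxy = SemvecChatProxy(
    llm_call=my_llm,
    system_prompt="You are a helpful assistant.",
    embedding_service=my_embedder,
)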

Break-even is around ten turns. The compressed prompt carries a constant ~110-token header. For very short conversations (≤ 5 turns) plain history concatenation is cheaper; from ~10 turns onward the proxy undercuts naive concatenation, and the gap widens linearly with conversation length. Measured on a 48-turn run: ~76 % token reduction vs. full-history.


Multi-agent coordination

Run several agents (analyst, planner, critic, …) that share an aggregated view, vote on proposals, and exchange checksummed state vectors.

from semvec.cortex import SemvecAgentNetwork, AttentionAggregation

network = SemvecAgentNetwork(
    aggregation_strategy=AttentionAggregation(dimension=768),
    dimension=768,
)
network.add_local_instance("analyst")
network.add_local_instance("planner")

network.process_input("analyst", "quarterly revenue is up 23%")
network.process_input("planner", "we should redirect Q4 spend to retention")

state = network.get_network_state()
print(f"active agents: {state['active_instances']}/{state['total_instances']}")

# Pull per-agent feedback for the next turn (consensus-aware)
feedback = network.get_feedback_for_agent("analyst")

Aggregation strategies: WeightedAverageAggregation, AttentionAggregation. ConsensusEngine adds proposal voting with five levels (SIMPLE_MAJORITY, QUALIFIED_MAJORITY, UNANIMOUS, WEIGHTED_VOTE, ADAPTIVE_THRESHOLD); quorum is measured against the registered voter pool, not just votes-cast-so-far. StateVectorPacket round-trips bit-exactly via serialize()/deserialize() and verify_integrity() confirms byte equality.
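
A sketch of the packet round-trip: serialize(), deserialize(), and verify_integrity() are the calls named above, while how a packet is obtained from the network (export_state_packet) and whether deserialize is a classmethod are assumptions, so treat this as shape, not gospel:

from semvec.cortex import StateVectorPacket

packet = network.export_state_packet("analyst")  # hypothetical accessor
wire = packet.serialize()                        # bytes for your transport

received = StateVectorPacket.deserialize(wire)   # assumed classmethod
assert received.verify_integrity()               # checksummed byte-equality check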

See the Cortex API reference for the full surface.


Coding-agent compaction

Persistent memory across coding sessions for Claude Code, Cursor, Aider — code pointers, anti-resonance error patterns, structured handoff context.

from semvec.coding import CodingEngine

engine = CodingEngine(state_dir="~/.semvec/project-x", embedder=my_embedder)
engine.ingest_transcript("path/to/claude_code_session.jsonl")

context = engine.get_compacted_context(
    "implement password reset flow",
    invariants=["never log plaintext passwords"],
)

Multi-session memory via LiteralCache

Below the high-level CodingEngine, state.literal_cache is a structured memory of design decisions, error patterns, invariants, and per-checkpoint test results — anything you want to survive across sessions verbatim:

import semvec

state = semvec.SemvecState(semvec.SemvecConfig(dimension=768))
cache = state.literal_cache

cache.record_decision("Use mpnet 768d for German content", checkpoint=1)
cache.record_error_pattern(
    pattern="catastrophic recency bias on blocked-domain ingest",
    example="500-note 4-domain blocked sequence",
    fix="raise long_term_size and use tier weights 1.0/0.95/0.9",
    checkpoint=1,
)
cache.add_invariant("State must round-trip via to_dict/from_dict")
cache.record_test_results(
    checkpoint=1,
    passed_tests=["test_a", "test_b", "test_c"],
    failed_tests=[],
)

# Build the LLM hand-off context for the next session
ctx = cache.build_handoff_context(next_checkpoint=2)
# ### INVARIANTS — Do NOT break these:
# - State must round-trip via to_dict/from_dict
#
# ### Test Status (CP1: 100%, 3/3)
#
# ### Known Error Patterns
# - `catastrophic recency bias on blocked-domain ingest` (x1): raise long_term_size...
#
# ### Design Decisions
# - [CP1] Use mpnet 768d for German content

# Persist + restore — round-trip preserves decisions, error_patterns,
# invariants, test_history, code_structures.
blob = state.to_bytes()
restored = semvec.SemvecState.from_bytes(blob)
assert restored.literal_cache.build_handoff_context(2) == ctx

build_handoff_context() produces a Markdown block ready for the system prompt of the next session. See the Coding API reference for the full surface.
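
In practice the block goes straight into the next session's system prompt; a usage sketch with the OpenAI client from the earlier example:

response = openai.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": ctx},  # handoff block built above
        {"role": "user",   "content": "continue with checkpoint 2"},
    ],
)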

Claude Code integration (MCP + hooks)

Wire it directly into Claude Code via the bundled FastMCP server and two lifecycle hooks. Add to .claude/settings.json:

{
  "mcpServers": {
    "semvec": {
      "command": "python",
      "args": ["-m", "semvec.coding.mcp_server"],
      "env": {
        "SEMVEC_STATE_DIR": ".semvec",
        "SEMVEC_EMBED_MODEL": "all-MiniLM-L6-v2"
      }
    }
  },
  "hooks": {
    "PreCompact":  [{"command": "python -m semvec.coding.hooks.pre_compact",  "timeout": 30000}],
    "SessionStart":[{"command": "python -m semvec.coding.hooks.session_start", "timeout": 10000}]
  }
}

The MCP server exposes six tools — pss_get_context, pss_update, pss_check_anti_resonance, pss_register_code, pss_record_error, pss_save. FastMCP is installed automatically via the [coding] extra.

The same FastMCP server plugs into Cursor via .cursor/mcp.json plus a Cursor Rule that replaces Claude Code's lifecycle hooks. Full step-by-step in the Cursor guide.
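
A .cursor/mcp.json sketch mirroring the Claude Code block above (the Cursor guide is authoritative; the accompanying Cursor Rule is not shown here):

{
  "mcpServers": {
    "semvec": {
      "command": "python",
      "args": ["-m", "semvec.coding.mcp_server"],
      "env": {
        "SEMVEC_STATE_DIR": ".semvec",
        "SEMVEC_EMBED_MODEL": "all-MiniLM-L6-v2"
      }
    }
  }
}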


REST API server

pip install "semvec[api]"

# Dev mode — anonymous community-tier auth, in-memory SQLite
SEMVEC_ALLOW_ANONYMOUS=1 semvec serve --host 0.0.0.0 --port 8080

# Production — license JWT required, Postgres-backed metadata
export SEMVEC_LICENSE_KEY="eyJhbGciOiJFZERTQSI..."
export DATABASE_URL="postgresql://user:pw@host/semvec"
semvec serve --host 0.0.0.0 --port 8080

Talk HTTP:

# Health check (no auth)
curl http://localhost:8080/v1/health

# Single turn
curl -X POST http://localhost:8080/v1/run \
  -H "Authorization: Bearer $SEMVEC_LICENSE_KEY" \
  -H "Content-Type: application/json" \
  -d '{"session_id": "demo", "query": "what was the Q3 miss?"}'

# Retrieve compressed context
curl "http://localhost:8080/v1/state/context?session_id=demo&top_k=5" \
  -H "Authorization: Bearer $SEMVEC_LICENSE_KEY"

Endpoint groups: sessions (CRUD + run/store/context), session-control (resonance triggers, anchors, isolation, export/import/verify), clusters, regions (consensus-driven realignment), global observer (anomaly detection across regions), network (state transfer, user partitioning, trust-based consensus), literal cache, Prometheus /metrics.

Auth is via Authorization: Bearer <jwt> or X-API-Key: <jwt> — same Ed25519-signed JWT as the in-process licensing system.
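
The same call from Python, using either header; the endpoint and payload mirror the curl examples above:

import os
import requests

headers = {"X-API-Key": os.environ["SEMVEC_LICENSE_KEY"]}  # or Authorization: Bearer <jwt>

resp = requests.post(
    "http://localhost:8080/v1/run",
    headers=headers,
    json={"session_id": "demo", "query": "what was the Q3 miss?"},
    timeout=30,
)
resp.raise_for_status()
print(resp.json())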

See the REST API reference for every endpoint and the CLI reference for semvec serve flags.


Persistence

state.to_dict() is a JSON-safe checkpoint with embedded SHA-256 checksum — best when the snapshot has to round-trip through systems that only speak JSON.

state.to_bytes(compress=True) is the compact binary equivalent (gzip-compressed JSON, magic header, SHA-256 corruption check) — best for cold-storage checkpoints. state.to_bytes(compress=False) is the speed-optimised variant: same byte footprint as JSON, but kept as a self-describing binary blob with corruption check — best for hot-path persistence. Both paths preserve the full state on round-trip:

  • the semantic state and its rolling histories
  • all three memory tiers
  • domain anchors and topic-switch history
  • the complete LiteralCache: entities, decisions, error patterns, invariants, test history, code structures

Restore with SemvecState.from_bytes(blob); the version byte distinguishes the two to_bytes modes automatically.
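
Both modes in a few lines (the sizing table below shows when each one pays off):

import semvec

state = semvec.SemvecState(semvec.SemvecConfig(dimension=768))
# ... ingest turns ...

cold = state.to_bytes(compress=True)   # smallest blob; pay the gzip cost once
hot = state.to_bytes(compress=False)   # larger but faster; still checksummed

restored = semvec.SemvecState.from_bytes(cold)  # version byte selects the mode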

Practical sizing on mpnet 768 d:

| Memories | JSON | to_bytes(compress=True) | to_bytes(compress=False) |
|---|---|---|---|
| 110 (small) | 18 ms / 8.8 kB per memory | 157 ms / 3.7 kB per memory | 36 ms / 8.8 kB per memory |
| 1 000 (extrapolated) | ~0.2 s / 9 MB | ~1.4 s / 3.7 MB | ~0.3 s / 9 MB |
| 100 000 | ~17 s / 1.7 GB | ~2.5 min / 400 MB | ~30 s / 1.7 GB |

Pick the variant by use case:

  • Cold-storage checkpoint (occasional, durability matters) → compress=True. ~ 2.4× smaller than JSON; pay the gzip cost once.
  • Hot-path persistence (every-turn or per-request) → compress=False. Same size as JSON, only ~ 1.9× slower than json.dumps, but kept as a self-describing binary blob with corruption check.

For very large footprints (> 100 k memories) wrap your own NPZ/Parquet around the embedding payload to save another factor.


Configuration & environment variables

| Variable | Default | Used by |
|---|---|---|
| SEMVEC_LICENSE_KEY | — | Pro/Enterprise gates; REST API auth |
| SEMVEC_ALLOW_ANONYMOUS | unset | REST API: bypass auth (dev only) |
| SEMVEC_STATE_DIR | .semvec | CodingEngine state persistence |
| SEMVEC_EMBED_MODEL | all-MiniLM-L6-v2 | MCP server / hooks default embedder (consider overriding to paraphrase-multilingual-mpnet-base-v2 for German/multilingual) |
| SEMVEC_EMBED_DEVICE | cpu | MCP server / hooks: cpu or cuda |
| DATABASE_URL | sqlite:///semvec.db | REST API persistence (also accepts postgresql://…) |
| METRICS_USER / METRICS_PASSWORD | — | Basic Auth on Prometheus /metrics |
| OPENAI_BASE_URL, OPENAI_API_KEY, OPENAI_MODEL | — | OpenAIClient |
| OLLAMA_BASE_URL, OLLAMA_MODEL | http://localhost:11434, — | OllamaClient |

Error handling

import logging
import time

from semvec import RateLimitError, LicenseExpiredError, ConfigurationError

logger = logging.getLogger(__name__)

try:
    result = state.update(embedding, text)
except RateLimitError as e:
    # e.retry_after is a datetime.timedelta; e.upgrade_url is set
    time.sleep(e.retry_after.total_seconds())
    result = state.update(embedding, text)
except LicenseExpiredError as e:
    # Hard fail — re-import won't help. Renew at e.upgrade_url.
    logger.error("semvec license expired — renew at %s", e.upgrade_url)
    raise
except ConfigurationError as e:
    # Wrong dimension, missing embedder, malformed config, etc.
    raise

All Semvec exceptions inherit from SemvecError. RateLimitError and LicenseExpiredError are subclasses of LicenseError, which itself derives from SemvecError.


Licensing

Three tiers; Community works without a key, Pro and Enterprise require a signed Ed25519 JWT:

| Tier | Rate limit | Retrieval modes |
|---|---|---|
| Community (no key) | 5 QPS sustained / 50 burst | Base retrieval |
| Pro | 200 / 2000 QPS | Extended |
| Enterprise | Unthrottled | All |

JWTs have a 30-day TTL. Expiry is a hard fail — the next gated call raises LicenseExpiredError with the renewal URL in the message. Rate-limit exhaustion raises RateLimitError with a retry_after (a datetime.timedelta) and the upgrade URL.

export SEMVEC_LICENSE_KEY="eyJhbGciOiJFZERTQSI..."

Limitations & non-goals

Honest list of what Semvec does not do:

  • Not a vector database. Long-term memory is bounded; if you need recall over a million documents, run a dedicated vector store and treat Semvec as a conversational compressor on top.
  • Not a drop-in for stateless completion. The whole point is persistent state; if you only do single-shot prompts, you do not need Semvec.
  • No silent embedder fallback. If you do not pass an embedder, methods that need one raise a descriptive RuntimeError. Intentional — silent hash fallbacks gave surprising failure modes in earlier iterations.
  • License gate is a licensing feature, not a hard security boundary. Use it to enforce subscription tiers, not to keep determined adversaries out.
  • No mobile / WASM build today. abi3-py310 Linux/macOS/Windows only.
  • REST API persistence is metadata-only. Hot semantic state lives in-memory per process; only session/cluster/member/region/audit metadata is persisted. Plan accordingly for restarts.
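
On the first bullet: a sketch of the composition pattern, with retrieve() as a stand-in for your own vector-store query helper returning text snippets:

# Semvec compresses the conversation; a separate vector DB handles documents.
docs = retrieve("password reset flow", top_k=3)          # hypothetical helper

context = serializer.serialize(state, query_text="password reset flow")

response = openai.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": context + "\n\nRelevant documents:\n" + "\n".join(docs)},
        {"role": "user", "content": "implement the password reset flow"},
    ],
)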

FAQ

Is this RAG? Not in the usual sense. RAG retrieves documents at query time. Semvec compresses the conversation itself into a fixed-size state. They compose well — many users run Semvec for conversational signal + a vector DB for document retrieval.

Does the state ever grow? No, the state vector itself is fixed-size. The associated memory tiers are bounded by configured capacities — when full, the lowest-scoring entry is evicted (not the oldest).

Can I run it offline / air-gapped? Yes for Community tier. Pro/Enterprise tiers verify Ed25519 JWT signatures locally — no network call to a license server at runtime. Contact support@versino.de for offline-issued JWTs with custom TTLs.

How fast is it? Per-turn update() is sub-millisecond on a recent x86_64 CPU at dimension 384, dominated by NumPy/Rust matrix ops, not Python overhead. The whole point of the Rust port was to keep the math out of the GIL.

Is the source available? Compiled wheels are public on PyPI; the Rust source is held closed. Source access for Enterprise terms — contact support@versino.de.

GPU support? Embedders run on whatever device you configure (cuda, mps, cpu); the Semvec core itself is CPU-only — the math is small enough that GPU offload would lose more in transfer than it gains.


Telemetry

Semvec sends one anonymous init ping per Python process — and nothing else. No heartbeat, no per-call event, no inference data, no licensing JWT contents. Default-on; opt out with SEMVEC_TELEMETRY=0.

The ping contains:

  • the semvec version
  • a pseudonymous machine identifier (no IP, no hostname)
  • OS, architecture, Python version

The full schema and retention policy are documented at https://www.semvec.io/privacy.

| Variable | Effect |
|---|---|
| (unset) | Telemetry is on, one ping on first import, stderr notice prints once |
| SEMVEC_TELEMETRY=0 | Telemetry is off, no ping, no notice |
| SEMVEC_TELEMETRY_QUIET=1 | Keep telemetry on but silence the stderr notice |
| SEMVEC_TELEMETRY_ENDPOINT=https://your.host/init | Route the ping to a self-hosted endpoint (air-gapped enterprise) |

Support

  • Documentation: https://semvec-docs.pages.dev
  • Pricing & licensing: https://www.semvec.io
  • Pro / Enterprise support: support@versino.de (priority response)
  • Security disclosures: security@versino.de — please do not open public issues for vulnerabilities; coordinated disclosure with 48 h acknowledgement, fix-or-mitigation in 30 days for high-severity issues

License

Proprietary — all rights reserved. Commercial use requires a Pro or Enterprise license. The full license text ships inside the wheel as LICENSE; for procurement, see https://www.semvec.io.

Copyright © 2026 Michael Neuberger · Versino PsiOmega.
