# mem7

High-performance AI memory engine — LLM-powered long-term memory with a Rust core and multi-language bindings.

Deeply inspired by Mem0, mem7 reimplements the core memory pipeline in Rust and adds an Ebbinghaus forgetting curve — stale memories naturally decay over time while frequently recalled facts grow stronger, just like human memory.

mem7 extracts factual statements from conversations, deduplicates them against existing memories, and stores the results in vector and graph databases with a full audit history.
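The write path can be pictured as a small decision loop. The sketch below is illustrative only, not mem7's actual API: `extract_facts` and `llm_decide` are hypothetical stand-ins for the engine's internal LLM calls.

```python
# Illustrative extract -> dedup -> store loop (not mem7's real internals).

def extract_facts(conversation: str) -> list[str]:
    # In mem7 an LLM pulls factual statements out of the conversation;
    # here we just treat the whole message as one fact.
    return [conversation.strip()]

def llm_decide(new_fact: str, similar: list[str]) -> str:
    # The real engine asks an LLM to choose an operation (e.g. ADD / UPDATE /
    # NONE) against retrieved neighbors; we use a trivial exact-match rule.
    return "NONE" if new_fact in similar else "ADD"

def add(store: list[str], conversation: str) -> list[tuple[str, str]]:
    events = []
    for fact in extract_facts(conversation):
        decision = llm_decide(fact, store)   # dedup against existing memories
        if decision == "ADD":
            store.append(fact)               # real engine also writes vector, graph, history
        events.append((decision, fact))
    return events

store: list[str] = []
print(add(store, "Alice plays tennis"))   # [('ADD', 'Alice plays tennis')]
print(add(store, "Alice plays tennis"))   # [('NONE', 'Alice plays tennis')]
```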
## Install

```bash
pip install mem7            # Python
npm install @mem7ai/mem7    # Node.js / TypeScript
cargo add mem7              # Rust
```
## Architecture

```text
Python / TypeScript / Rust API
        │  PyO3 (sync + async) / napi-rs / native
        ▼
Rust Core (tokio async runtime)
 ├── mem7-llm        — OpenAI-compatible LLM client
 ├── mem7-embedding  — Embedding client (OpenAI-compatible / FastEmbed)
 ├── mem7-vector     — Vector index (FlatIndex / Upstash)
 ├── mem7-graph      — Graph store (FlatGraph / Kuzu / Neo4j)
 ├── mem7-history    — SQLite audit trail
 ├── mem7-dedup      — LLM-driven memory deduplication
 ├── mem7-reranker   — Search reranking (Cohere / LLM-based)
 ├── mem7-telemetry  — OpenTelemetry tracing (OTLP export)
 └── mem7-store      — Pipeline orchestrator (MemoryEngine)
```
## Quick Start (Python — Sync)

```python
from mem7 import Memory
from mem7.config import MemoryConfig, LlmConfig, EmbeddingConfig

config = MemoryConfig(
    llm=LlmConfig(
        base_url="http://localhost:11434/v1",
        api_key="ollama",
        model="qwen2.5:7b",
    ),
    embedding=EmbeddingConfig(
        base_url="http://localhost:11434/v1",
        api_key="ollama",
        model="mxbai-embed-large",
        dims=1024,
    ),
)

m = Memory(config=config)
m.add("I love playing tennis and my coach is Sarah.", user_id="alice")
results = m.search("What sports does Alice play?", user_id="alice")
```
## Quick Start (Python — Async)

```python
import asyncio

from mem7 import AsyncMemory
from mem7.config import MemoryConfig, LlmConfig, EmbeddingConfig

async def main():
    config = MemoryConfig(
        llm=LlmConfig(
            base_url="http://localhost:11434/v1",
            api_key="ollama",
            model="qwen2.5:7b",
        ),
        embedding=EmbeddingConfig(
            base_url="http://localhost:11434/v1",
            api_key="ollama",
            model="mxbai-embed-large",
            dims=1024,
        ),
    )

    m = await AsyncMemory.create(config=config)
    await m.add("I love playing tennis and my coach is Sarah.", user_id="alice")
    results = await m.search("What sports does Alice play?", user_id="alice")

asyncio.run(main())
```
## Quick Start (TypeScript)

```typescript
import { MemoryEngine } from "@mem7ai/mem7";

const engine = await MemoryEngine.create(JSON.stringify({
  llm: { base_url: "http://localhost:11434/v1", api_key: "ollama", model: "qwen2.5:7b" },
  embedding: { base_url: "http://localhost:11434/v1", api_key: "ollama", model: "mxbai-embed-large", dims: 1024 },
}));

await engine.add([{ role: "user", content: "I love playing tennis and my coach is Sarah." }], "alice");
const results = await engine.search("What sports does Alice play?", "alice");
```
## Supported Providers

mem7 uses a single OpenAI-compatible client for both LLMs and embeddings, which covers any service that exposes the OpenAI API format. This includes most major providers out of the box.
### LLMs
| Provider | Status | Notes |
|---|---|---|
| OpenAI | :white_check_mark: | Native support |
| Ollama | :white_check_mark: | Via OpenAI-compatible API |
| vLLM | :white_check_mark: | Via OpenAI-compatible API |
| Groq | :white_check_mark: | Via OpenAI-compatible API |
| Together | :white_check_mark: | Via OpenAI-compatible API |
| DeepSeek | :white_check_mark: | Via OpenAI-compatible API |
| xAI (Grok) | :white_check_mark: | Via OpenAI-compatible API |
| LM Studio | :white_check_mark: | Via OpenAI-compatible API |
| Azure OpenAI | :white_check_mark: | Via OpenAI-compatible API |
| Anthropic | :x: | Requires native SDK |
| Gemini | :x: | Requires native SDK |
| Vertex AI | :x: | Requires native SDK |
| AWS Bedrock | :x: | Requires native SDK |
| LiteLLM | :x: | Python proxy |
| Sarvam | :x: | Requires native SDK |
| LangChain | :x: | Python framework |
### Embeddings
| Provider | Status | Notes |
|---|---|---|
| OpenAI | :white_check_mark: | Native support |
| Ollama | :white_check_mark: | Via OpenAI-compatible API |
| Together | :white_check_mark: | Via OpenAI-compatible API |
| LM Studio | :white_check_mark: | Via OpenAI-compatible API |
| Azure OpenAI | :white_check_mark: | Via OpenAI-compatible API |
| FastEmbed | :white_check_mark: | Local ONNX inference (feature flag fastembed) |
| Hugging Face | :x: | Requires native SDK |
| Gemini | :x: | Requires native SDK |
| Vertex AI | :x: | Requires native SDK |
| AWS Bedrock | :x: | Requires native SDK |
| LangChain | :x: | Python framework |
### Vector Stores
| Provider | Status | Notes |
|---|---|---|
| In-memory (FlatIndex) | :white_check_mark: | Built-in, good for dev |
| Upstash Vector | :white_check_mark: | REST API, serverless |
| Qdrant | :x: | |
| Chroma | :x: | |
| pgvector | :x: | |
| Milvus | :x: | |
| Pinecone | :x: | |
| Redis | :x: | |
| Weaviate | :x: | |
| Elasticsearch | :x: | |
| OpenSearch | :x: | |
| FAISS | :x: | |
| MongoDB | :x: | |
| Supabase | :x: | |
| Azure AI Search | :x: | |
| Vertex AI Vector Search | :x: | |
| Databricks | :x: | |
| Cassandra | :x: | |
| S3 Vectors | :x: | |
| Baidu | :x: | |
| Neptune | :x: | |
| Valkey | :x: | |
| LangChain | :x: | |
### Rerankers
| Provider | Status | Notes |
|---|---|---|
| Cohere | :white_check_mark: | Cohere v2 rerank API |
| LLM-based | :white_check_mark: | Any OpenAI-compatible LLM |
| Jina AI | :x: | Planned |
| Cross-encoder | :x: | Planned |
### Graph Stores
| Provider | Status | Notes |
|---|---|---|
| In-memory (FlatGraph) | :white_check_mark: | Built-in, good for dev/testing |
| Kuzu (embedded) | :white_check_mark: | Cypher-based, no server needed (feature flag kuzu) |
| Neo4j | :white_check_mark: | Production-grade, Bolt protocol |
| Memgraph | :x: | Planned |
| Amazon Neptune | :x: | Planned |
### Language Bindings

| Language | Status |
|---|---|
| Python (sync + async) | :white_check_mark: PyPI: `pip install mem7` |
| TypeScript / Node.js | :white_check_mark: npm: `npm install @mem7ai/mem7` |
| Rust | :white_check_mark: crates.io: `cargo add mem7` |
| Go | Planned |
## Vector Store Backends

**FlatIndex** (built-in, default) — in-memory brute-force search, good for development:

```python
from mem7.config import VectorConfig

VectorConfig(provider="flat", dims=1024)
```

**Upstash Vector** — managed cloud vector database:

```python
VectorConfig(
    provider="upstash",
    collection_name="my-namespace",
    dims=1024,
    upstash_url="https://your-index.upstash.io",
    upstash_token="your-token",
)
```
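Conceptually, a flat index is just brute-force cosine similarity over every stored vector. The following is a from-scratch sketch of that idea, not mem7's internals:

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def flat_search(index: list[tuple[str, list[float]]],
                query: list[float], top_k: int = 2) -> list[tuple[str, float]]:
    # Score every stored vector and sort descending — O(n) per query,
    # which is exactly why a flat index is fine for dev and small datasets.
    scored = [(mem_id, cosine(vec, query)) for mem_id, vec in index]
    return sorted(scored, key=lambda p: p[1], reverse=True)[:top_k]

index = [("m1", [1.0, 0.0]), ("m2", [0.0, 1.0]), ("m3", [0.7, 0.7])]
print(flat_search(index, [1.0, 0.1]))  # "m1" ranks first
```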
## Local Embedding (FastEmbed)

mem7 supports fully local embedding via FastEmbed (ONNX Runtime). No API calls needed — models are downloaded and run locally.

Requires the `fastembed` feature flag:

```toml
# Cargo.toml
mem7 = { version = "0.2", features = ["fastembed"] }
```

```python
from mem7 import Memory
from mem7.config import MemoryConfig, LlmConfig, EmbeddingConfig

config = MemoryConfig(
    llm=LlmConfig(base_url="http://localhost:11434/v1", api_key="ollama", model="qwen2.5:7b"),
    embedding=EmbeddingConfig(
        provider="fastembed",
        model="AllMiniLML6V2",  # or "BGEBaseENV15", "NomicEmbedTextV15", etc.
        dims=384,
    ),
)

m = Memory(config=config)  # model downloaded on first use
```

Supported models include AllMiniLML6V2, BGEBaseENV15, BGESmallENV15, NomicEmbedTextV1, MxbaiEmbedLargeV1, GTEBaseENV15, and their quantized variants.
## Graph Memory (Dual-Path Recall)

When `graph` is configured, mem7 runs dual-path recall: vector search and graph search execute concurrently via `tokio::join!`, returning both factual memories and entity relations.

On `add()`, the engine extracts entities and relations from conversations using the LLM (JSON mode) and stores them in the graph alongside the vector memories.
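The Rust core joins the two searches with `tokio::join!`; in Python terms, the shape of dual-path recall is roughly the following (the two `*_search` stubs are hypothetical stand-ins, not mem7's API):

```python
import asyncio

async def vector_search(query: str) -> list[str]:
    await asyncio.sleep(0)  # stands in for embedding + vector index lookup
    return ["Alice loves playing tennis"]

async def graph_search(query: str) -> list[str]:
    await asyncio.sleep(0)  # stands in for an entity/relation traversal
    return ["USER -[loves_playing]-> tennis"]

async def dual_path_recall(query: str) -> dict:
    # Both paths run concurrently, mirroring tokio::join! in the Rust core;
    # the result carries factual memories and entity relations side by side.
    memories, relations = await asyncio.gather(
        vector_search(query), graph_search(query)
    )
    return {"memories": memories, "relations": relations}

result = asyncio.run(dual_path_recall("What sports does Alice play?"))
print(result["relations"])  # ['USER -[loves_playing]-> tennis']
```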
**FlatGraph** (in-memory, for development):

```python
from mem7 import Memory
from mem7.config import MemoryConfig, LlmConfig, EmbeddingConfig, GraphConfig

config = MemoryConfig(
    llm=LlmConfig(base_url="http://localhost:11434/v1", api_key="ollama", model="qwen2.5:7b"),
    embedding=EmbeddingConfig(base_url="http://localhost:11434/v1", api_key="ollama", model="mxbai-embed-large", dims=1024),
    graph=GraphConfig(provider="flat"),
)

m = Memory(config=config)
m.add("I love playing tennis and my coach is Sarah.", user_id="alice")
results = m.search("What sports does Alice play?", user_id="alice")
# results["memories"]  -> vector search results
# results["relations"] -> graph relations (e.g. USER -[loves_playing]-> tennis)
```
**Neo4j** (production):

```python
GraphConfig(
    provider="neo4j",
    neo4j_url="bolt://localhost:7687",
    neo4j_username="neo4j",
    neo4j_password="password",
)
```

**Kuzu** (embedded, requires the `kuzu` feature flag):

```python
GraphConfig(provider="kuzu", kuzu_db_path="./my_graph.kuzu")
```

The graph LLM can be configured separately (e.g. use a cheaper model for extraction):

```python
GraphConfig(
    provider="flat",
    llm=LlmConfig(base_url="http://localhost:11434/v1", api_key="ollama", model="qwen2.5:3b"),
)
```
## Memory Decay (Forgetting Curve)

mem7 implements an Ebbinghaus-inspired forgetting curve that deprioritizes stale memories over time while automatically strengthening memories that are frequently recalled — just like human memory.

When enabled, every memory carries two extra metadata fields: `last_accessed_at` (the last time it was written or retrieved) and `access_count` (how many times it has been retrieved). These are used to compute a retention score that modulates the raw similarity score during search and dedup:
$$S = S_0 \cdot \bigl(1 + \alpha \cdot \ln(1 + n)\bigr)$$

$$R(t) = \exp\!\Bigl(-\Bigl(\frac{t - \tau}{S}\Bigr)^{\!\gamma}\Bigr)$$

$$\widetilde{R}(t) = \rho + (1 - \rho) \cdot R(t)$$

$$\text{score}_{\text{final}} = \text{sim}_{\text{raw}} \times \widetilde{R}(t)$$
where $S_0$ = base half-life, $\alpha$ = rehearsal factor, $n$ = access count, $\tau$ = last accessed time, $\gamma$ = decay shape, $\rho$ = min retention floor.
- **Decay over time:** memories you haven't touched in weeks get deprioritized, but never disappear (the `min_retention` floor ensures a minimum retention of 10% by default).
- **Rehearsal strengthening:** each time a memory is successfully retrieved via `search()`, its `access_count` is incremented and `last_accessed_at` is reset asynchronously — making it harder to forget next time.
- **Cue-dependent retrieval:** a highly relevant query naturally "wakes up" old memories because `raw_similarity` is high, even if the retention score is low. No separate sigmoid gate is needed — the multiplicative structure handles it.
- **Write-path aware:** decay is also applied during the dedup phase of `add()`, so stale memories appear less "close" to new facts and are more likely to be updated or replaced.
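The formulas translate directly to code. This is an independent re-implementation for illustration (using the documented defaults), not mem7's source:

```python
import math

def retention(age_secs: float, access_count: int,
              base_half_life_secs: float = 604800.0,  # 7 days
              decay_shape: float = 0.8,
              min_retention: float = 0.1,
              rehearsal_factor: float = 0.5) -> float:
    # S = S0 * (1 + alpha * ln(1 + n)): each retrieval raises stability.
    stability = base_half_life_secs * (1 + rehearsal_factor * math.log(1 + access_count))
    # R(t) = exp(-((t - tau) / S) ** gamma): stretched-exponential decay.
    r = math.exp(-((age_secs / stability) ** decay_shape))
    # R~(t) = rho + (1 - rho) * R(t): floor so no memory fully vanishes.
    return min_retention + (1 - min_retention) * r

def final_score(raw_similarity: float, age_secs: float, access_count: int) -> float:
    # score_final = sim_raw * R~(t): retention modulates the raw similarity.
    return raw_similarity * retention(age_secs, access_count)

week = 604800.0
fresh = final_score(0.9, age_secs=0.0, access_count=0)          # full score
stale = final_score(0.9, age_secs=4 * week, access_count=0)      # heavily decayed
rehearsed = final_score(0.9, age_secs=4 * week, access_count=10) # partly rescued
print(fresh > rehearsed > stale)  # True
```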
### Enabling Decay

Decay is off by default. Enable it via config:

Python:

```python
from mem7.config import MemoryConfig, DecayConfig

config = MemoryConfig(
    # ... llm, embedding, etc.
    decay=DecayConfig(enabled=True),
)
```

TypeScript:

```typescript
const engine = await MemoryEngine.create(JSON.stringify({
  // ... llm, embedding, etc.
  decay: { enabled: true },
}));
```

Rust:

```rust
use mem7_config::{MemoryEngineConfig, DecayConfig};

let config = MemoryEngineConfig {
    decay: Some(DecayConfig { enabled: true, ..Default::default() }),
    ..Default::default()
};
```
### Tuning Parameters

| Parameter | Default | Description |
|---|---|---|
| `base_half_life_secs` | `604800.0` | Base stability in seconds (7 days) before any rehearsal bonus |
| `decay_shape` | `0.8` | Stretched-exponential shape (0 < gamma <= 1); lower = slower initial decay |
| `min_retention` | `0.1` | Floor so no memory fully vanishes |
| `rehearsal_factor` | `0.5` | How much each retrieval increases stability |
### Backward Compatibility

- Old memories without `last_accessed_at` or `access_count` gracefully degrade: age falls back to `updated_at`, then `created_at`, and the access count defaults to 0.
- No migration needed — the new fields are written on the next `add()` or `update()` call.
- When decay is disabled (the default), scoring behavior is identical to previous versions.
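The fallback rule is simple to state in code — a sketch of the behavior described above, with an assumed dict-shaped memory record rather than mem7's actual types:

```python
import time

def effective_age_secs(memory: dict, now: float) -> float:
    # Prefer last_accessed_at; fall back to updated_at, then created_at.
    last = (memory.get("last_accessed_at")
            or memory.get("updated_at")
            or memory.get("created_at"))
    return now - last

def effective_access_count(memory: dict) -> int:
    # Memories written before decay existed count as never-retrieved.
    return memory.get("access_count", 0)

now = time.time()
legacy = {"created_at": now - 3600.0}   # pre-decay record: only created_at
print(effective_age_secs(legacy, now))  # roughly 3600.0
print(effective_access_count(legacy))   # 0
```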
## OpenClaw Plugin

mem7 ships an official OpenClaw memory plugin that replaces the built-in memory backend with LLM-powered fact extraction, graph relations, dedup, and the forgetting curve — all driven by mem7's Rust core.
### Install

```bash
openclaw plugins install @mem7ai/openclaw-mem7
```
### Activate

In `~/.openclaw/openclaw.json`:

```json
{
  "plugins": {
    "slots": { "memory": "openclaw-mem7" },
    "entries": {
      "openclaw-mem7": {
        "enabled": true,
        "config": {
          "llm": { "base_url": "http://localhost:11434/v1", "api_key": "ollama", "model": "qwen2.5:7b" },
          "embedding": { "base_url": "http://localhost:11434/v1", "api_key": "ollama", "model": "mxbai-embed-large", "dims": 1024 },
          "graph": { "provider": "flat" },
          "decay": { "enabled": true }
        }
      }
    }
  }
}
```
### What it does

- **Auto-recall** (`before_prompt_build`): before each agent turn, the plugin searches mem7 for relevant memories and injects them into the system prompt.
- **Auto-capture** (`agent_end`): after each turn, the user and assistant messages are sent through mem7's fact extraction pipeline, automatically storing new facts and deduplicating against existing ones.
- **Tools:** the plugin registers `memory_search`, `memory_get`, and `memory_store` tools that the agent can call explicitly.
- **Forgetting curve:** decay is enabled by default, so stale facts naturally fade while frequently recalled memories stay strong.

See `packages/openclaw-mem7/` for full documentation.
## Observability (OpenTelemetry)

mem7 integrates with OpenTelemetry via `tracing-opentelemetry`. When enabled, every `add()`, `search()`, `get()`, `update()`, and `delete()` call emits a trace span that is exported via OTLP/gRPC to any compatible collector (Jaeger, Grafana Tempo, Datadog, etc.).
Python:

```python
from mem7 import Memory, init_telemetry, shutdown_telemetry

init_telemetry(otlp_endpoint="http://localhost:4317", service_name="my-app")

m = Memory(config=config)
m.add("I love playing tennis.", user_id="alice")
# spans are exported automatically

shutdown_telemetry()  # flush before exit
```

TypeScript:

```typescript
import { MemoryEngine, initTelemetry, shutdownTelemetry } from "@mem7ai/mem7";

initTelemetry(JSON.stringify({ otlp_endpoint: "http://localhost:4317", service_name: "my-app" }));

const engine = await MemoryEngine.create(configJson);
await engine.add([{ role: "user", content: "I love tennis." }], "alice");

shutdownTelemetry();
```

Rust (requires the `otel` feature):

```rust
// Cargo.toml: mem7 = { version = "0.2", features = ["otel"] }
use mem7::{TelemetryConfig, telemetry};

telemetry::init(&TelemetryConfig::default())?;
// ... use MemoryEngine as usual ...
telemetry::shutdown();
```
## Examples

See the [examples/](examples/) directory:

- [mem7_demo.ipynb](examples/mem7_demo.ipynb) — Python notebook demo
- [mem7_demo.ts](examples/mem7_demo.ts) — TypeScript demo
## Development

### Prerequisites

- Rust 1.85+ (stable)
- Python 3.10+
- Node.js 22+
- maturin

### Build

```bash
python -m venv .venv && source .venv/bin/activate
pip install maturin pydantic

# Development build (debug, fast iteration)
maturin develop

# Release build
maturin develop --release
```

### Test

```bash
# Rust tests
cargo test --workspace

# Clippy
cargo clippy --workspace --all-targets -- -D warnings
```
## License

Apache-2.0
## Project details (PyPI)

Release: mem7 0.3.1. All distribution files were uploaded via Trusted Publishing (twine/6.1.0, CPython/3.13.7), with PyPI attestations published by the `release.yml` workflow on `mem7ai/mem7` at commit `c63f073aa9b3263c6fbf684e2a463d4bf919ef88` (tag `refs/tags/v0.3.1`, GitHub-hosted runner, triggered by push).

| File | Size | Tags | SHA256 |
|---|---|---|---|
| mem7-0.3.1.tar.gz | 107.1 kB | Source | `b733d6f802d8c1283f83909ea4800470f94ada7e37e6cb0d259978ee558d6116` |
| mem7-0.3.1-cp310-abi3-win_amd64.whl | 5.2 MB | CPython 3.10+, Windows x86-64 | `1ca0611382ab7898b78706bd58a8a563ee63e144402c4baba89d8e399113df0d` |
| mem7-0.3.1-cp310-abi3-manylinux_2_24_aarch64.whl | 6.1 MB | CPython 3.10+, manylinux glibc 2.24+ ARM64 | `ac1be9336f40e18b75304b0c0610eb998ea8a8d9df23e01c75dfefdd72f33cb2` |
| mem7-0.3.1-cp310-abi3-macosx_11_0_arm64.whl | 5.6 MB | CPython 3.10+, macOS 11.0+ ARM64 | `013566e07fb33308bdaac923571048a11a339d3c4a4619e2541584c72987730f` |
| mem7-0.3.1-cp310-abi3-macosx_10_12_x86_64.whl | 5.8 MB | CPython 3.10+, macOS 10.12+ x86-64 | `f43db6e804a18ff6e18eb56bce68f63a35ee62396156ac0b440084888514f09e` |
| mem7-0.3.1-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl | 6.2 MB | CPython 3.8, manylinux glibc 2.17+ x86-64 | `637e6031df513de8a799d7f55babbb8bb99fa30e35b0dcadec55add9eaca02c7` |

Attestation statement type: https://in-toto.io/Statement/v1; predicate type: https://docs.pypi.org/attestations/publish/v1.