mem7
High-performance AI memory engine: LLM-powered long-term memory with a Rust core and multi-language bindings.
Deeply inspired by Mem0, mem7 reimplements the core memory pipeline in Rust and goes further with two capabilities Mem0 doesn't have:
- Ebbinghaus forgetting curve — stale memories naturally decay over time while frequently recalled facts grow stronger, just like human memory.
- Session-aware recall — each memory is typed (factual / preference / procedural / episodic) and each query is auto-classified by task intent, so irrelevant memories (e.g. design preferences during bug-fixing) are demoted before they reach the agent.
mem7 extracts factual statements from conversations, deduplicates them against existing memories, and stores the results in vector + graph databases with full audit history.
Install
pip install mem7 # Python
npm install @mem7ai/mem7 # Node.js / TypeScript
cargo add mem7 # Rust
Architecture
Python / TypeScript / Rust API
│ PyO3 (sync + async) / napi-rs / native
▼
Rust Core (tokio async runtime)
├── mem7-llm — OpenAI-compatible LLM client
├── mem7-embedding — Embedding client (OpenAI-compatible / FastEmbed)
├── mem7-vector — Vector index (FlatIndex / Upstash)
├── mem7-graph — Graph store (FlatGraph / Kuzu / Neo4j)
├── mem7-history — SQLite audit trail
├── mem7-dedup — LLM-driven memory deduplication
├── mem7-reranker — Search reranking (Cohere / LLM-based)
├── mem7-telemetry — OpenTelemetry tracing (OTLP export)
└── mem7-store — Pipeline orchestrator (MemoryEngine)
Write Path — add()
flowchart LR
A[Conversation] --> B["LLM: extract facts\n+ memory_type"]
A --> C["LLM: extract\ngraph relations"]
B --> D[Embed facts]
D --> E["Search existing\nmemories"]
E --> F["LLM: dedup\n(ADD / UPDATE / DELETE)"]
F --> G[(Vector Index)]
C --> H[(Graph Store)]
F --> I[(SQLite History)]
Read Path — search()
flowchart LR
Q[Query] --> E[Embed query]
Q --> CL["LLM: classify\ntask_type"]
E --> V["Vector search"]
E --> G["Graph search"]
V --> RR["Rerank\n(optional)"]
RR --> DC["× decay"]
DC --> CT["× context_coeff\n(memory_type, task_type)"]
CL -.-> CT
G --> CT
CT --> TH["Threshold\nfilter"]
TH --> R[Ranked results]
Quick Start (Python — Sync)
from mem7 import Memory
from mem7.config import MemoryConfig, LlmConfig, EmbeddingConfig
config = MemoryConfig(
    llm=LlmConfig(
        base_url="http://localhost:11434/v1",
        api_key="ollama",
        model="qwen2.5:7b",
    ),
    embedding=EmbeddingConfig(
        base_url="http://localhost:11434/v1",
        api_key="ollama",
        model="mxbai-embed-large",
        dims=1024,
    ),
)
m = Memory(config=config)
m.add("I love playing tennis and my coach is Sarah.", user_id="alice")
results = m.search("What sports does Alice play?", user_id="alice")
Quick Start (Python — Async)
import asyncio
from mem7 import AsyncMemory
from mem7.config import MemoryConfig, LlmConfig, EmbeddingConfig
async def main():
    config = MemoryConfig(
        llm=LlmConfig(
            base_url="http://localhost:11434/v1",
            api_key="ollama",
            model="qwen2.5:7b",
        ),
        embedding=EmbeddingConfig(
            base_url="http://localhost:11434/v1",
            api_key="ollama",
            model="mxbai-embed-large",
            dims=1024,
        ),
    )
    m = await AsyncMemory.create(config=config)
    await m.add("I love playing tennis and my coach is Sarah.", user_id="alice")
    results = await m.search("What sports does Alice play?", user_id="alice")

asyncio.run(main())
Quick Start (TypeScript)
import { MemoryEngine } from "@mem7ai/mem7";
const engine = await MemoryEngine.create(JSON.stringify({
  llm: { base_url: "http://localhost:11434/v1", api_key: "ollama", model: "qwen2.5:7b" },
  embedding: { base_url: "http://localhost:11434/v1", api_key: "ollama", model: "mxbai-embed-large", dims: 1024 },
}));
await engine.add([{ role: "user", content: "I love playing tennis and my coach is Sarah." }], "alice");
const results = await engine.search("What sports does Alice play?", "alice");
Supported Providers
mem7 uses a single OpenAI-compatible client for both LLM and Embedding, which covers any service that exposes the OpenAI API format. This includes most major providers out of the box.
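Because only the endpoint changes, switching providers is a config edit. A hypothetical illustration, pointing the same LlmConfig at Groq's OpenAI-compatible endpoint (the model name here is an example, not a recommendation):

```python
from mem7.config import LlmConfig

# Only base_url / api_key / model change between OpenAI-compatible providers.
llm = LlmConfig(
    base_url="https://api.groq.com/openai/v1",  # Groq's OpenAI-compatible API
    api_key="your-groq-api-key",
    model="llama-3.1-8b-instant",  # illustrative model name
)
```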
LLMs
| Provider | Status | Notes |
|---|---|---|
| OpenAI | :white_check_mark: | Native support |
| Ollama | :white_check_mark: | Via OpenAI-compatible API |
| vLLM | :white_check_mark: | Via OpenAI-compatible API |
| Groq | :white_check_mark: | Via OpenAI-compatible API |
| Together | :white_check_mark: | Via OpenAI-compatible API |
| DeepSeek | :white_check_mark: | Via OpenAI-compatible API |
| xAI (Grok) | :white_check_mark: | Via OpenAI-compatible API |
| LM Studio | :white_check_mark: | Via OpenAI-compatible API |
| Azure OpenAI | :white_check_mark: | Via OpenAI-compatible API |
| Anthropic | :x: | Requires native SDK |
| Gemini | :x: | Requires native SDK |
| Vertex AI | :x: | Requires native SDK |
| AWS Bedrock | :x: | Requires native SDK |
| LiteLLM | :x: | Python proxy |
| Sarvam | :x: | Requires native SDK |
| LangChain | :x: | Python framework |
Embeddings
| Provider | Status | Notes |
|---|---|---|
| OpenAI | :white_check_mark: | Native support |
| Ollama | :white_check_mark: | Via OpenAI-compatible API |
| Together | :white_check_mark: | Via OpenAI-compatible API |
| LM Studio | :white_check_mark: | Via OpenAI-compatible API |
| Azure OpenAI | :white_check_mark: | Via OpenAI-compatible API |
| FastEmbed | :white_check_mark: | Local ONNX inference (feature flag fastembed) |
| Hugging Face | :x: | Requires native SDK |
| Gemini | :x: | Requires native SDK |
| Vertex AI | :x: | Requires native SDK |
| AWS Bedrock | :x: | Requires native SDK |
| LangChain | :x: | Python framework |
Vector Stores
| Provider | Status | Notes |
|---|---|---|
| In-memory (FlatIndex) | :white_check_mark: | Built-in, good for dev |
| Upstash Vector | :white_check_mark: | REST API, serverless |
| Qdrant | :x: | |
| Chroma | :x: | |
| pgvector | :x: | |
| Milvus | :x: | |
| Pinecone | :x: | |
| Redis | :x: | |
| Weaviate | :x: | |
| Elasticsearch | :x: | |
| OpenSearch | :x: | |
| FAISS | :x: | |
| MongoDB | :x: | |
| Supabase | :x: | |
| Azure AI Search | :x: | |
| Vertex AI Vector Search | :x: | |
| Databricks | :x: | |
| Cassandra | :x: | |
| S3 Vectors | :x: | |
| Baidu | :x: | |
| Neptune | :x: | |
| Valkey | :x: | |
| LangChain | :x: | Python framework |
Rerankers
| Provider | Status | Notes |
|---|---|---|
| Cohere | :white_check_mark: | Cohere v2 rerank API |
| LLM-based | :white_check_mark: | Any OpenAI-compatible LLM |
| Jina AI | :x: | Planned |
| Cross-encoder | :x: | Planned |
Graph Stores
| Provider | Status | Notes |
|---|---|---|
| In-memory (FlatGraph) | :white_check_mark: | Built-in, good for dev/testing |
| Kuzu (embedded) | :white_check_mark: | Cypher-based, no server needed (feature flag kuzu) |
| Neo4j | :white_check_mark: | Production-grade, Bolt protocol |
| Memgraph | :x: | Planned |
| Amazon Neptune | :x: | Planned |
Language Bindings
| Language | Status |
|---|---|
| Python (sync + async) | :white_check_mark: PyPI: pip install mem7 |
| TypeScript / Node.js | :white_check_mark: npm: npm install @mem7ai/mem7 |
| Rust | :white_check_mark: crates.io: cargo add mem7 |
| Go | Planned |
Vector Store Backends
Built-in FlatIndex (default) — in-memory brute-force, good for development:
from mem7.config import VectorConfig
VectorConfig(provider="flat", dims=1024)
Upstash Vector — managed cloud vector database:
VectorConfig(
    provider="upstash",
    collection_name="my-namespace",
    dims=1024,
    upstash_url="https://your-index.upstash.io",
    upstash_token="your-token",
)
Local Embedding (FastEmbed)
mem7 supports fully local embedding via FastEmbed (ONNX Runtime). No API calls needed — models are downloaded and run locally.
The Rust crate requires the fastembed feature flag:
# Cargo.toml
mem7 = { version = "0.3.3", features = ["fastembed"] }
Python:
from mem7 import Memory
from mem7.config import MemoryConfig, LlmConfig, EmbeddingConfig
config = MemoryConfig(
    llm=LlmConfig(base_url="http://localhost:11434/v1", api_key="ollama", model="qwen2.5:7b"),
    embedding=EmbeddingConfig(
        provider="fastembed",
        model="AllMiniLML6V2",  # or "BGEBaseENV15", "NomicEmbedTextV15", etc.
        dims=384,
    ),
)
m = Memory(config=config) # model downloaded on first use
Supported models include AllMiniLML6V2, BGEBaseENV15, BGESmallENV15, NomicEmbedTextV1, MxbaiEmbedLargeV1, GTEBaseENV15, and their quantized variants.
Graph Memory (Dual-Path Recall)
When graph is configured, mem7 runs dual-path recall: vector search and graph search execute concurrently via tokio::join!, returning both factual memories and entity relations.
On add(), the engine extracts entities and relations from conversations using LLM (JSON mode) and stores them in the graph alongside the vector memories.
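The dual-path fan-out can be sketched conceptually in Python with asyncio standing in for the engine's tokio::join! (the search functions below are stubs, illustrative only):

```python
import asyncio

async def vector_search(query: str) -> list[str]:
    await asyncio.sleep(0)  # stands in for the real index lookup
    return ["alice loves playing tennis"]

async def graph_search(query: str) -> list[str]:
    await asyncio.sleep(0)  # stands in for the real graph traversal
    return ["USER -[loves_playing]-> tennis"]

async def dual_path_recall(query: str) -> dict:
    # Both paths run concurrently; results come back side by side,
    # mirroring the {"memories": ..., "relations": ...} result shape.
    memories, relations = await asyncio.gather(
        vector_search(query), graph_search(query)
    )
    return {"memories": memories, "relations": relations}

result = asyncio.run(dual_path_recall("What sports does Alice play?"))
```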
FlatGraph (in-memory, for development):
from mem7 import Memory
from mem7.config import MemoryConfig, LlmConfig, EmbeddingConfig, GraphConfig
config = MemoryConfig(
    llm=LlmConfig(base_url="http://localhost:11434/v1", api_key="ollama", model="qwen2.5:7b"),
    embedding=EmbeddingConfig(base_url="http://localhost:11434/v1", api_key="ollama", model="mxbai-embed-large", dims=1024),
    graph=GraphConfig(provider="flat"),
)
m = Memory(config=config)
m.add("I love playing tennis and my coach is Sarah.", user_id="alice")
results = m.search("What sports does Alice play?", user_id="alice")
# results["memories"] -> vector search results
# results["relations"] -> graph relations (e.g. USER -[loves_playing]-> tennis)
Neo4j (production):
GraphConfig(
    provider="neo4j",
    neo4j_url="bolt://localhost:7687",
    neo4j_username="neo4j",
    neo4j_password="password",
)
Kuzu (embedded, requires kuzu feature flag):
GraphConfig(provider="kuzu", kuzu_db_path="./my_graph.kuzu")
The graph LLM can be configured separately (e.g. use a cheaper model for extraction):
GraphConfig(
    provider="flat",
    llm=LlmConfig(base_url="http://localhost:11434/v1", api_key="ollama", model="qwen2.5:3b"),
)
Memory Decay (Forgetting Curve)
mem7 implements an Ebbinghaus-inspired forgetting curve that deprioritizes stale memories over time while automatically strengthening memories that are frequently recalled — just like human memory.
When enabled, every memory carries two extra metadata fields: last_accessed_at (the last time it was written or retrieved) and access_count (how many times it has been retrieved). These are used to compute a retention score that modulates the raw similarity score during search and dedup:
$$S = S_0 \cdot \bigl(1 + \alpha \cdot \ln(1 + n)\bigr)$$
$$R(t) = \exp\!\Bigl(-\Bigl(\frac{t - \tau}{S}\Bigr)^{\gamma}\Bigr)$$
$$\widetilde{R}(t) = \rho + (1 - \rho) \cdot R(t)$$
$$\text{score}_{\text{final}} = \text{sim}_{\text{raw}} \times \widetilde{R}(t)$$
where $S_0$ = base half-life, $\alpha$ = rehearsal factor, $n$ = access count, $\tau$ = last accessed time, $\gamma$ = decay shape, $\rho$ = min retention floor.
- Decay over time: memories you haven't touched in weeks get deprioritized, but never disappear (the floor parameter ensures a minimum retention of 10% by default).
- Rehearsal strengthening: each time a memory is successfully retrieved via search(), its access_count is incremented and last_accessed_at is reset asynchronously — making it harder to forget next time.
- Cue-dependent retrieval: a highly relevant query naturally "wakes up" old memories because raw_similarity is high, even if the retention score is low. No separate sigmoid gate is needed — the multiplicative structure handles it.
- Write-path aware: decay is also applied during the dedup phase of add(), so stale memories appear less "close" to new facts and are more likely to be updated or replaced.
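The retention math above is simple enough to sketch standalone. A minimal illustration of the formulas (not the engine's actual Rust implementation; defaults taken from the tuning table):

```python
import math

def retention(age_secs: float, access_count: int,
              base_half_life_secs: float = 604800.0,  # 7 days
              rehearsal_factor: float = 0.5,
              decay_shape: float = 0.8,
              min_retention: float = 0.1) -> float:
    """Stretched-exponential retention with rehearsal strengthening."""
    # Stability grows logarithmically with recalls: S = S0 * (1 + alpha * ln(1 + n))
    stability = base_half_life_secs * (1.0 + rehearsal_factor * math.log(1.0 + access_count))
    # R(t) = exp(-((t - tau) / S) ** gamma); age_secs plays the role of t - tau
    raw = math.exp(-((age_secs / stability) ** decay_shape))
    # Floor so no memory fully vanishes
    return min_retention + (1.0 - min_retention) * raw

fresh = retention(age_secs=0, access_count=0)            # 1.0
stale = retention(age_secs=30 * 86400, access_count=0)   # decayed toward the floor
rehearsed = retention(age_secs=30 * 86400, access_count=10)  # same age, retains more
```

The final score is then raw similarity times retention, so a highly relevant query can still surface a heavily decayed memory.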
Enabling Decay
Decay is off by default. Enable it via config:
Python:
from mem7.config import MemoryConfig, DecayConfig
config = MemoryConfig(
    # ... llm, embedding, etc.
    decay=DecayConfig(enabled=True),
)
TypeScript:
const engine = await MemoryEngine.create(JSON.stringify({
  // ... llm, embedding, etc.
  decay: { enabled: true },
}));
Rust:
use mem7_config::{MemoryEngineConfig, DecayConfig};
let config = MemoryEngineConfig {
    decay: Some(DecayConfig { enabled: true, ..Default::default() }),
    ..Default::default()
};
Tuning Parameters
| Parameter | Default | Description |
|---|---|---|
| base_half_life_secs | 604800.0 | Base stability in seconds (7 days) before any rehearsal bonus |
| decay_shape | 0.8 | Stretched-exponential shape (0 < gamma <= 1); lower = slower initial decay |
| min_retention | 0.1 | Floor so no memory fully vanishes |
| rehearsal_factor | 0.5 | How much each retrieval increases stability |
Backward Compatibility
- Old memories without last_accessed_at or access_count gracefully degrade: age falls back to updated_at, then created_at, and access count defaults to 0.
- No migration needed — new fields are written on the next add() or update() call.
- When decay is disabled (the default), scoring behavior is identical to previous versions.
Context-Aware Scoring (Session-Aware Recall)
Pure embedding similarity can conflate semantic closeness with contextual relevance — for example, a design preference like "always investigate root cause first" may score high when searching "fix Chrome CDP bug" because both relate to debugging. With context-aware scoring, mem7 automatically classifies queries and memories to boost what's relevant and demote what isn't.
How It Works
- Write path — each extracted fact is tagged with a memory_type (factual, preference, procedural, episodic) during LLM fact extraction.
- Read path — each search query is classified into a task_type (troubleshooting, design, factual_lookup, planning, general) via a lightweight LLM call that runs in parallel with embedding, adding zero sequential latency.
- A context coefficient is looked up from a (memory_type, task_type) weight matrix and multiplied into the score:
$$\text{score}_{\text{final}} = \text{similarity} \times \text{decay} \times \text{context coeff}$$
Default Weight Matrix
| | troubleshooting | design | factual_lookup | planning | general |
|---|---|---|---|---|---|
| factual | 1.0 | 0.5 | 1.0 | 0.7 | 1.0 |
| preference | 0.3 | 1.0 | 0.3 | 0.8 | 0.8 |
| procedural | 0.8 | 0.5 | 0.5 | 1.0 | 0.7 |
| episodic | 0.5 | 0.5 | 0.5 | 0.5 | 0.7 |
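The lookup-and-multiply step can be sketched standalone (an illustration of the mechanism, not the engine's internal code; weights copied from the default matrix above):

```python
# Default (memory_type, task_type) weight matrix.
DEFAULT_WEIGHTS = {
    "factual":    {"troubleshooting": 1.0, "design": 0.5, "factual_lookup": 1.0, "planning": 0.7, "general": 1.0},
    "preference": {"troubleshooting": 0.3, "design": 1.0, "factual_lookup": 0.3, "planning": 0.8, "general": 0.8},
    "procedural": {"troubleshooting": 0.8, "design": 0.5, "factual_lookup": 0.5, "planning": 1.0, "general": 0.7},
    "episodic":   {"troubleshooting": 0.5, "design": 0.5, "factual_lookup": 0.5, "planning": 0.5, "general": 0.7},
}

def context_coeff(memory_type: str, task_type: str, weights=DEFAULT_WEIGHTS) -> float:
    # Unknown memory types fall back to "factual" (the safe default);
    # unknown task types fall back to "general".
    row = weights.get(memory_type, weights["factual"])
    return row.get(task_type, row["general"])

def final_score(similarity: float, decay: float, memory_type: str, task_type: str) -> float:
    # score_final = similarity * decay * context_coeff
    return similarity * decay * context_coeff(memory_type, task_type)

# A design preference is demoted during troubleshooting:
final_score(0.9, 1.0, "preference", "troubleshooting")  # ~0.27
```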
Enabling Context-Aware Scoring
Context scoring is off by default. Enable it via config:
Python:
from mem7.config import MemoryConfig, ContextConfig
config = MemoryConfig(
    # ... llm, embedding, etc.
    context=ContextConfig(enabled=True),
)
TypeScript:
const engine = await MemoryEngine.create(JSON.stringify({
  // ... llm, embedding, etc.
  context: { enabled: true },
}));
Rust:
use mem7_config::{MemoryEngineConfig, ContextConfig};
let config = MemoryEngineConfig {
    context: Some(ContextConfig { enabled: true, ..Default::default() }),
    ..Default::default()
};
You can also provide custom weights to override the defaults:
ContextConfig(
    enabled=True,
    weights={
        "preference": {"troubleshooting": 0.1, "design": 1.0},
    },
)
Overriding Task Type
If the caller already knows the task context, it can pass task_type directly to skip the LLM classification call:
results = m.search("fix Chrome CDP timeout", user_id="alice", task_type="troubleshooting")
Backward Compatibility
- Context scoring defaults to disabled — zero impact on existing users.
- Old memories without memory_type are treated as "factual" (a safe default).
- When context is disabled, the scoring pipeline is identical to previous versions.
OpenClaw Plugin
mem7 ships an official OpenClaw memory plugin that replaces the built-in memory backend with LLM-powered fact extraction, graph relations, dedup, and the forgetting curve — all driven by mem7's Rust core.
Install
openclaw plugins install @mem7ai/openclaw-mem7
Activate
In ~/.openclaw/openclaw.json:
{
  "plugins": {
    "slots": { "memory": "openclaw-mem7" },
    "entries": {
      "openclaw-mem7": {
        "enabled": true,
        "config": {
          "llm": { "base_url": "http://localhost:11434/v1", "api_key": "ollama", "model": "qwen2.5:7b" },
          "embedding": { "base_url": "http://localhost:11434/v1", "api_key": "ollama", "model": "mxbai-embed-large", "dims": 1024 },
          "graph": { "provider": "flat" },
          "decay": { "enabled": true }
        }
      }
    }
  }
}
What it does
- Auto-recall (before_prompt_build / before_agent_start): before each agent turn, the plugin searches both session and long-term scopes, merges the results, and injects them into the system prompt.
- Auto-capture (agent_end): after each turn, the user + assistant messages are sent through mem7's fact extraction pipeline, automatically storing new facts and deduplicating against existing ones.
- Tools: the plugin registers memory_search, memory_get, memory_list, memory_store, and memory_forget for explicit memory operations.
- Scope model: tools support session, long-term, and merged all reads, with sessionKey automatically mapped onto runId and optional agentId.
- Forgetting curve: decay is enabled by default so stale facts naturally fade, while frequently recalled memories stay strong.
See packages/openclaw-mem7/ for full documentation.
Observability (OpenTelemetry)
mem7 integrates with OpenTelemetry via tracing-opentelemetry. When enabled, every add(), search(), get(), update(), delete() call emits a trace span that is exported via OTLP/gRPC to any compatible collector (Jaeger, Grafana Tempo, Datadog, etc.).
Python:
from mem7 import Memory, init_telemetry, shutdown_telemetry
init_telemetry(otlp_endpoint="http://localhost:4317", service_name="my-app")
m = Memory(config=config)
m.add("I love playing tennis.", user_id="alice")
# spans are exported automatically
shutdown_telemetry() # flush before exit
TypeScript:
import { MemoryEngine, initTelemetry, shutdownTelemetry } from "@mem7ai/mem7";
initTelemetry(JSON.stringify({ otlp_endpoint: "http://localhost:4317", service_name: "my-app" }));
const engine = await MemoryEngine.create(configJson);
await engine.add([{ role: "user", content: "I love tennis." }], "alice");
shutdownTelemetry();
Rust (requires otel feature):
// Cargo.toml: mem7 = { version = "0.3.3", features = ["otel"] }
use mem7::{TelemetryConfig, telemetry};
telemetry::init(&TelemetryConfig::default())?;
// ... use MemoryEngine as usual ...
telemetry::shutdown();
Examples
See the examples/ directory:
- mem7_demo.ipynb — Python notebook demo
- mem7_demo.ts — TypeScript demo
Development
Prerequisites
Build
python -m venv .venv && source .venv/bin/activate
pip install maturin pydantic
# Development build (debug, fast iteration)
just dev
# Release build
just build
# OpenClaw plugin build
just openclaw-build
Test
# Full validation suite
just check
# Common individual tasks
just fmt
just fmt-check
just clippy
just lint
just typecheck
just test
License
Apache-2.0
Download files
File details
Details for the file mem7-0.3.3.tar.gz.
File metadata
- Download URL: mem7-0.3.3.tar.gz
- Upload date:
- Size: 119.6 kB
- Tags: Source
- Uploaded using Trusted Publishing? Yes
- Uploaded via: twine/6.1.0 CPython/3.13.7
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 5d2d20acf51b38c7d78457bbf34435bf981370b214be2a2fe76d5e8efa171bfa |
| MD5 | 5b395434d4807fa18045baecc9d48fae |
| BLAKE2b-256 | 8aace05e3f2d83c23fd6fc3b8af9cf305c3ab7a7bc1aaf47797926e292b70eed |
Provenance
The following attestation bundles were made for mem7-0.3.3.tar.gz:
Publisher: release.yml on mem7ai/mem7
- Statement:
  - Statement type: https://in-toto.io/Statement/v1
  - Predicate type: https://docs.pypi.org/attestations/publish/v1
  - Subject name: mem7-0.3.3.tar.gz
  - Subject digest: 5d2d20acf51b38c7d78457bbf34435bf981370b214be2a2fe76d5e8efa171bfa
- Sigstore transparency entry: 1186124039
- Sigstore integration time:
- Permalink: mem7ai/mem7@389b6802d570190b3130c61267e99fe2019d8e2e
- Branch / Tag: refs/tags/v0.3.3
- Owner: https://github.com/mem7ai
- Access: public
- Token Issuer: https://token.actions.githubusercontent.com
- Runner Environment: github-hosted
- Publication workflow: release.yml@389b6802d570190b3130c61267e99fe2019d8e2e
- Trigger Event: push
File details
Details for the file mem7-0.3.3-cp310-abi3-win_amd64.whl.
File metadata
- Download URL: mem7-0.3.3-cp310-abi3-win_amd64.whl
- Upload date:
- Size: 5.3 MB
- Tags: CPython 3.10+, Windows x86-64
- Uploaded using Trusted Publishing? Yes
- Uploaded via: twine/6.1.0 CPython/3.13.7
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | d102dc48ba41625f78874a5d8bb3b4adafc8b42aa2006b918ec326f54a324d94 |
| MD5 | df2c9c581d81ccc6bf625c2edd859ba4 |
| BLAKE2b-256 | 7d2b02b6f6bac5f7d765e235ab9025db6ffc1e5643b561adf2e59645164b0c18 |
File details
Details for the file mem7-0.3.3-cp310-abi3-manylinux_2_24_aarch64.whl.
File metadata
- Download URL: mem7-0.3.3-cp310-abi3-manylinux_2_24_aarch64.whl
- Upload date:
- Size: 6.2 MB
- Tags: CPython 3.10+, manylinux: glibc 2.24+ ARM64
- Uploaded using Trusted Publishing? Yes
- Uploaded via: twine/6.1.0 CPython/3.13.7
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 68467cb84dbc5228726b290f2c1443eeceaec3f47c459d755f093326bff2fae2 |
| MD5 | 815e8c1226c8f9b66f50f5314f2ba0bd |
| BLAKE2b-256 | 73cbca56a0d9c17c47e607cfb41ced4acc3f397b0b0bdc896009ce28d41a114a |
File details
Details for the file mem7-0.3.3-cp310-abi3-macosx_11_0_arm64.whl.
File metadata
- Download URL: mem7-0.3.3-cp310-abi3-macosx_11_0_arm64.whl
- Upload date:
- Size: 5.7 MB
- Tags: CPython 3.10+, macOS 11.0+ ARM64
- Uploaded using Trusted Publishing? Yes
- Uploaded via: twine/6.1.0 CPython/3.13.7
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | ebcfa2f8113e343337028f65d49e48b00088471d9905d96f84e6e6856a080194 |
| MD5 | de5d1bacbe67a5e47cdd0c40a8946d8a |
| BLAKE2b-256 | 0d7783bc06949e6b9c59dabbb410eede29e690ba5d66624d5e0597735f3cb677 |
File details
Details for the file mem7-0.3.3-cp310-abi3-macosx_10_12_x86_64.whl.
File metadata
- Download URL: mem7-0.3.3-cp310-abi3-macosx_10_12_x86_64.whl
- Upload date:
- Size: 5.9 MB
- Tags: CPython 3.10+, macOS 10.12+ x86-64
- Uploaded using Trusted Publishing? Yes
- Uploaded via: twine/6.1.0 CPython/3.13.7
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 6604bcbf353d4289a2adc7f641f6e4218bbe998dabb371011d823a1475c84c8d |
| MD5 | 1caef81217a07476a8b1a172fed3808c |
| BLAKE2b-256 | 6c25be82c330f19205aae06cdc48d3862d350de872b8ae5185c23f105e899a61 |
File details
Details for the file mem7-0.3.3-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.
File metadata
- Download URL: mem7-0.3.3-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl
- Upload date:
- Size: 6.3 MB
- Tags: CPython 3.8, manylinux: glibc 2.17+ x86-64
- Uploaded using Trusted Publishing? Yes
- Uploaded via: twine/6.1.0 CPython/3.13.7
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 168817006966db2feed8515ad910f7cf13671bd4adaebe5bd871971ee12ea6e0 |
| MD5 | b810fd36d090cf2438ded4014b193115 |
| BLAKE2b-256 | c196b2c2ff6b7aeb79f08bf66c0a1d9ed7521a614fb149bf543c1db91d37b3e2 |