
TurboQuantDB


An embedded vector database written in Rust with Python bindings, implementing the TurboQuant algorithm (arXiv:2504.19874) — zero training time, 2–4 bit compression, and provably unbiased inner product estimation.

Goal: make massive embedding datasets practical on lightweight hardware. A 100k-vector, 1536-dim collection that would occupy 586 MB as raw float32 fits in 108 MB on disk with TQDB b=4, or just 59 MB with b=2 — enabling laptop-scale RAG over millions of documents without a dedicated server.
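The arithmetic behind those figures is easy to check. A back-of-envelope sketch (the on-disk sizes are the measured values from the benchmarks below, which include QJL bits and storage overhead on top of the b-bit codes):

```python
# Back-of-envelope check of the sizes above (MiB = 2**20 bytes).
n, d = 100_000, 1536
raw_mib = n * d * 4 / 2**20            # float32: 4 bytes per coordinate
print(f"raw float32: {raw_mib:.0f} MiB")        # ~586

# Measured on-disk sizes (b-bit codes + QJL bits + overhead):
for bits, disk_mib in [(4, 108), (2, 59)]:
    print(f"b={bits}: {disk_mib} MiB -> {raw_mib / disk_mib:.1f}x smaller")
```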

Two deployment modes:

  • Embedded — tqdb Python package (pip install tqdb), runs in-process (no daemon)
  • Server — Axum HTTP service in server/, with multi-tenancy, RBAC, quotas, and async jobs

Key Properties

  • Zero training — No train() step. Vectors are quantized and stored immediately on insert.
  • 5–10× compression — b=4 reduces 1536-dim float32 embeddings from 586 MB to 108 MB (5.4×); b=2 reaches 59 MB (9.9×) at 100k vectors.
  • Unbiased scoring — QJL transform guarantees unbiased inner product estimation.
  • Optional ANN index — Build an HNSW graph after loading data for fast approximate search.
  • Metadata filtering — MongoDB-style filter operators on any metadata field.
  • Crash recovery — Write-ahead log (WAL) ensures durability without explicit flushing.
  • Python native — Built with PyO3 and Maturin; no server or sidecar required.

Installation

Prerequisites

  • Rust stable toolchain
  • Python 3.10+
  • C++ compiler: Visual Studio Build Tools (Windows) · xcode-select --install (macOS) · build-essential (Linux)

Build from source

python -m venv venv
source venv/bin/activate        # Windows: .\venv\Scripts\activate
pip install maturin
maturin develop --release

Install pre-built wheel

pip install tqdb

Recommended Setup

Three presets covering the main use cases — pick one and you're ready:

from tqdb import Database

# High Quality — best recall, exact reranking
db = Database.open(path, dimension=DIM, bits=4, rerank=True, rerank_precision="f16")
db.create_index(max_degree=32, ef_construction=200, n_refinements=8)
results = db.search(query, top_k=10, ann_search_list_size=200)
# ~100% Recall@10 at 100k×1536  |  401 MB disk  |  38ms p50 (brute-force)

# Balanced — recommended default (dequant reranking, zero extra disk)
db = Database.open(path, dimension=DIM, bits=4, rerank=True)
db.create_index(max_degree=32, ef_construction=200, n_refinements=5)
results = db.search(query, top_k=10, ann_search_list_size=200)
# ~99.4% Recall@5 at 100k×1536  |  117 MB disk  |  59ms rerank / 8ms no-rerank

# Fast ANN — lowest latency, good recall
db = Database.open(path, dimension=DIM, bits=4, rerank=False)
db.create_index(max_degree=32, ef_construction=200, n_refinements=5)
results = db.search(query, top_k=10, ann_search_list_size=200)
# ~96% Recall@10 at 100k×1536  |  117 MB disk  |  8ms p50

Full parameter reference: docs/PYTHON_API.md


Quick Start

import numpy as np
from tqdb import Database

db = Database.open("./my_db", dimension=1536, bits=4, metric="ip", rerank=True)

db.insert("doc-1", np.random.randn(1536).astype("f4"), metadata={"topic": "ml"}, document="Machine learning intro")
db.insert("doc-2", np.random.randn(1536).astype("f4"), metadata={"topic": "systems"}, document="Rust memory model")

results = db.search(np.random.randn(1536).astype("f4"), top_k=5)
for r in results:
    print(r["id"], r["score"], r["document"])

Python API

Full reference: docs/PYTHON_API.md

# Open / create
db = Database.open(path, dimension, bits=4, seed=42, metric="ip",
                   rerank=True, fast_mode=False, rerank_precision=None,
                   collection=None)   # collection= → opens path/collection/

# Write
db.insert(id, vector, metadata=None, document=None)
db.insert_batch(ids, vectors, metadatas=None, documents=None, mode="insert")  # "insert"|"upsert"|"update"
db.upsert(id, vector, metadata=None, document=None)
db.update(id, vector, metadata=None, document=None)        # RuntimeError if not found
db.update_metadata(id, metadata=None, document=None)       # RuntimeError if not found

# Delete & retrieve
db.delete(id)                        # → bool
db.delete_batch(ids)                 # → int (count deleted)
db.get(id)                           # → {id, metadata, document} | None
db.get_many(ids)                     # → list[dict | None]
db.list_all()                        # → list[str]
db.list_ids(where_filter=None, limit=None, offset=0)       # paginated
db.count(filter=None)                # → int
db.stats()                           # → dict
len(db) / "id" in db                 # container protocol

# Search
results = db.search(query, top_k=10, filter=None, _use_ann=True,
                    ann_search_list_size=None, include=None)
# include: list of "id"|"score"|"metadata"|"document" (default all)

all_results = db.query(query_embeddings, n_results=10, where_filter=None)
# query_embeddings: np.ndarray (N, D) — returns list[list[dict]]

# Index
db.create_index(max_degree=32, ef_construction=200, n_refinements=5,
                search_list_size=128, alpha=1.2)

# Metadata filter operators
# $eq $ne $gt $gte $lt $lte $in $nin $exists $contains $and $or
db.search(query, top_k=5, filter={"year": {"$gte": 2023}})
db.search(query, top_k=5, filter={"$and": [{"topic": "ml"}, {"year": {"$gte": 2023}}]})
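As an illustration of the operator semantics, here is a toy evaluator in pure Python — a sketch of how these filters behave, not TQDB's actual (Rust-side) filter engine, which may differ in edge cases such as missing fields:

```python
# Toy evaluator for the MongoDB-style filter grammar above (illustration only).
OPS = {
    "$eq":  lambda v, a: v == a,
    "$ne":  lambda v, a: v != a,
    "$gt":  lambda v, a: v is not None and v > a,
    "$gte": lambda v, a: v is not None and v >= a,
    "$lt":  lambda v, a: v is not None and v < a,
    "$lte": lambda v, a: v is not None and v <= a,
    "$in":  lambda v, a: v in a,
    "$nin": lambda v, a: v not in a,
    "$contains": lambda v, a: v is not None and a in v,
}

def matches(meta: dict, flt: dict) -> bool:
    for key, cond in flt.items():
        if key == "$and":
            if not all(matches(meta, c) for c in cond): return False
        elif key == "$or":
            if not any(matches(meta, c) for c in cond): return False
        elif isinstance(cond, dict):
            for op, arg in cond.items():
                ok = (key in meta) == arg if op == "$exists" else OPS[op](meta.get(key), arg)
                if not ok: return False
        elif meta.get(key) != cond:       # bare value is shorthand for $eq
            return False
    return True

assert matches({"topic": "ml", "year": 2024},
               {"$and": [{"topic": "ml"}, {"year": {"$gte": 2023}}]})
assert not matches({"topic": "ml", "year": 2022}, {"year": {"$gte": 2023}})
```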

Recommended Presets

High Quality — exact reranking

db = Database.open(path, dimension=DIM, bits=4, rerank=True, rerank_precision="f16")
db.create_index(max_degree=32, ef_construction=200, n_refinements=8)
results = db.search(query, top_k=10, ann_search_list_size=200)
# 100% Recall@10 at 100k×1536  |  38ms p50 (brute-force)  |  401 MB disk

Balanced — default recommendation

db = Database.open(path, dimension=DIM, bits=4, rerank=True)
db.create_index(max_degree=32, ef_construction=200, n_refinements=5)
results = db.search(query, top_k=10, ann_search_list_size=200)
# 99.4% Recall@5, 96% Recall@10 at 100k×1536  |  117 MB disk  |  8ms (ANN) / 45ms (brute+dequant)

Minimum Disk — compress aggressively

db = Database.open(path, dimension=DIM, bits=2, rerank=True)
db.create_index(max_degree=32, ef_construction=200, n_refinements=5)
results = db.search(query, top_k=10, ann_search_list_size=200)
# 96.4% Recall@10 at 100k×1536  |  68 MB disk (8.7× smaller than float32)  |  7ms p50

Benchmarks

Measured on DBpedia OpenAI3 embeddings (Qdrant/dbpedia-entities-openai3-text-embedding-3-large-1536-1M) — real 1536-dim embeddings, n=100k vectors, 500 queries, recall 1@k (the fraction of queries whose true nearest neighbor appears in the top k results). HNSW uses M=32, ef_construction=200.

Algorithm validation (reproducing paper Section 4.4)

Brute-force recall across all three datasets from arXiv:2504.19874 Figure 5 — n=100k vectors, paper values read visually from plots (approximate). Full script: benchmarks/paper_recall_bench.py.

GloVe-200 (d=200, 100k corpus, 10k queries, brute-force)

Method @k=1 @k=2 @k=4 @k=8 @k=16 @k=32 @k=64
TurboQuant 2-bit (paper Fig. 5a) ≈55.0% ≈70.0% ≈83.0% ≈91.0% ≈96.0% ≈99.0% ≈100%
TQDB b=2 37.1% 50.0% 62.0% 73.0% 82.1% 88.9% 93.5%
TurboQuant 4-bit (paper Fig. 5a) ≈86.0% ≈96.0% ≈99.0% ≈100% 100% 100% 100%
TQDB b=4 73.9% 88.3% 96.4% 99.2% 99.9% 100% 100%

DBpedia OpenAI3 d=1536 (d=1536, 100k corpus, 1k queries, brute-force)

Method @k=1 @k=2 @k=4 @k=8 @k=16 @k=32 @k=64
TurboQuant 2-bit (paper Fig. 5b) ≈89.5% ≈98.0% ≈99.5% ≈100% 100% 100% 100%
TQDB b=2 79.7% 93.3% 98.3% 99.7% 99.9% 100% 100%
TurboQuant 4-bit (paper Fig. 5b) ≈97.0% ≈100% 100% 100% 100% 100% 100%
TQDB b=4 92.6% 99.1% 99.9% 100% 100% 100% 100%

DBpedia OpenAI3 d=3072 (d=3072, 100k corpus, 1k queries, brute-force)

Method @k=1 @k=2 @k=4 @k=8 @k=16 @k=32 @k=64
TurboQuant 2-bit (paper Fig. 5c) ≈90.5% ≈98.5% ≈99.5% ≈100% 100% 100% 100%
TQDB b=2 84.6% 95.1% 99.0% 100% 100% 100% 100%
TurboQuant 4-bit (paper Fig. 5c) ≈97.5% ≈100% 100% 100% 100% 100% 100%
TQDB b=4 94.8% 99.1% 100% 100% 100% 100% 100%

The GloVe gap (~12–18% at k=1) is expected: d=200 is the hardest case (fewest bits per dimension), and we evaluate on the first 100k vectors from a 1.18M corpus while the paper used a random sample. From k=4 onward the gap is ≤2.6% on GloVe and ≤1% on DBpedia. For high-dimensional embeddings (d≥1536), TQDB matches the paper within ~5% at k=1 and within 1% from k=4. The paper also reports TurboQuant quantization time 0.001 s versus Product Quantization 240 s at d=1536 — TQDB inherits the same zero-training-time property.

Full results — n=100k × 1536-dim, recall 1@k per mode:

Mode Recall@1 Recall@10 Disk Compression p50 latency
b=4 brute, no-rerank 93.6% 100% 108 MB 5.4× 54ms
b=4 brute, dequant-rerank 93.2% 100% 108 MB 5.4× 45ms
b=4 brute, f16-rerank 100% 100% 401 MB 1.5× 38ms
b=2 brute, no-rerank 81.4% 100% 59 MB 9.9× 41ms
b=2 brute, dequant-rerank 84.4% 100% 59 MB 9.9× 46ms
b=4 HNSW M=32, no-rerank 89.8% 96.0% 117 MB 5.0× 8ms
b=4 HNSW M=32, dequant-rerank 92.6% 99.4% 117 MB 5.0× 59ms
b=2 HNSW M=32, no-rerank 78.6% 96.4% 68 MB 8.7× 7ms
float32 brute (reference) 100% 100% 586 MB 1× ~120ms

dequant-rerank = rerank from codebook (zero extra disk). f16-rerank = store float16 raw vectors (+293 MB).
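The f16 figure is simple to verify (sizes as in the tables above; MiB = 2**20 bytes):

```python
# Where the +293 MB comes from: one float16 copy of every raw vector,
# and why the High Quality preset reports 401 MB total on disk.
n, d = 100_000, 1536
f16_mib = n * d * 2 / 2**20          # float16: 2 bytes per coordinate
print(round(f16_mib))                # 293
print(108 + round(f16_mib))          # 401 = b=4 codes + f16 raw vectors
```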

Reproduction: build the extension with maturin develop --release (or pip install tqdb), install the dataset dependency with pip install datasets, then run python benchmarks/run_recall_bench.py.


RAG Integration

from tqdb.rag import TurboQuantRetriever

retriever = TurboQuantRetriever(db_path="./rag_db", dimension=1536, bits=4)
retriever.add_texts(texts=texts, embeddings=embeddings, metadatas=metadatas)

results = retriever.similarity_search(query_embedding=query_vec, k=5)
for r in results:
    print(r["score"], r["text"])

Architecture

TurboQuantDB is an embedded database — it runs in-process with no daemon.

./my_db/
├── manifest.json        — DB config (dimension, bits, seed, metric)
├── quantizer.bin        — Serialized quantizer state
├── live_codes.bin       — Memory-mapped quantized vectors (hot path)
├── live_vectors.bin     — Raw vectors for exact reranking (only if rerank_precision="f16" or "f32")
├── wal.log              — Write-ahead log
├── metadata.bin         — Per-vector metadata and documents
├── live_ids.bin         — ID → slot index
├── graph.bin            — HNSW adjacency list (if index built)
└── seg-XXXXXXXX.bin     — Immutable flushed segment files

Write path: insert() → quantize (QR rotation → MSE → Gaussian QJL) → WAL → live_codes.bin → flush to segment

Search (brute-force): query → precompute lookup tables → score all live vectors → top-k

Search (ANN): query → HNSW beam search → rerank → top-k
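The brute-force scorer's core idea can be illustrated with a toy asymmetric LUT — a sketch of the technique, not TQDB's actual SIMD implementation. Each coordinate is stored as a b-bit codebook index; at query time we precompute query_coord × centroid for every (coordinate, centroid) pair, so scoring a vector becomes d table lookups instead of d multiplications:

```python
import numpy as np

# Toy LUT-based asymmetric scoring over a shared per-coordinate codebook
# (a uniform grid here; the real quantizer uses a Lloyd-Max codebook).
rng = np.random.default_rng(0)
d, b = 8, 2
centroids = np.linspace(-1.5, 1.5, 2**b)

x = rng.standard_normal(d)
codes = np.abs(x[:, None] - centroids[None, :]).argmin(axis=1)  # quantize: nearest centroid

q = rng.standard_normal(d)
lut = q[:, None] * centroids[None, :]       # (d, 2**b) table, built once per query
est = lut[np.arange(d), codes].sum()        # score = d lookups, no multiplies
exact = q @ centroids[codes]                # same estimate, computed directly
assert np.isclose(est, exact)
```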

Quantization: Two-stage pipeline:

  1. MSE — QR rotation + Lloyd-Max scalar quantization to b bits per coordinate
  2. QJL — Dense Gaussian projection, 1-bit quantized, bit-packed

The combination gives unbiased inner product estimates with near-optimal distortion, requiring no training data.
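A minimal sketch of the two stages — illustrative assumptions only (uniform grid instead of Lloyd-Max, arbitrary toy dimensions, QJL applied to the quantization residual), not TQDB's internals:

```python
import numpy as np

rng = np.random.default_rng(42)
d, b, m = 64, 4, 128                               # vector dim, bits/coord, QJL projections

# Stage 1: random orthogonal rotation (QR of a Gaussian), then b-bit scalar quantization.
Q, _ = np.linalg.qr(rng.standard_normal((d, d)))
x = rng.standard_normal(d); x /= np.linalg.norm(x)
rx = Q @ x
levels = 2**b
lo, hi = rx.min(), rx.max()
codes = np.round((rx - lo) / (hi - lo) * (levels - 1)).astype(np.uint8)  # uniform grid here
deq = lo + codes / (levels - 1) * (hi - lo)        # dequantized approximation

# Stage 2: dense Gaussian projection, keep only signs, bit-packed (1-bit QJL code).
G = rng.standard_normal((m, d))
signs = (G @ (rx - deq)) > 0
packed = np.packbits(signs)
print(len(packed), "bytes for the QJL code")       # m / 8 = 16
```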

What comes from the paper vs. what is added here

The TurboQuant paper contributes the quantization algorithm — how to compress vectors and estimate inner products accurately. Its experiments use flat (exhaustive) search: all database vectors are scored against every query using the LUT-based asymmetric scorer. The paper's "indexing time virtually zero" claim refers to the quantizer requiring no training data, not to graph construction.

From the paper: two-stage MSE + QJL quantization, QR rotation, Lloyd-Max codebook, asymmetric LUT scoring, unbiased inner product estimation.

Added by TurboQuantDB (not in the paper): WAL persistence, memory-mapped storage, metadata/documents, HNSW graph index, reranking, Python bindings, and the HTTP server.

The brute-force search path (_use_ann=False) is the paper-conformant mode — it scores all vectors using TurboQuant's LUT scorer, matching the paper's experimental setup exactly. The HNSW index is a practical engineering addition that reduces the candidate set before scoring, enabling sub-linear search at the cost of approximate recall.

Module Map

Path Responsibility
src/python/mod.rs Database class — Python-facing API
src/storage/engine.rs TurboQuantEngine — insert/search/delete orchestration
src/storage/wal.rs Write-ahead log
src/storage/segment.rs Immutable append-only segments
src/storage/live_codes.rs Memory-mapped hot vector cache
src/storage/graph.rs HNSW graph index
src/quantizer/prod.rs ProdQuantizer — MSE + QJL orchestrator
src/quantizer/mse.rs MseQuantizer — QR rotation + Lloyd-Max codebook
src/quantizer/qjl.rs QjlQuantizer — 1-bit Gaussian projection, bit-packed
python/tqdb/rag.py TurboQuantRetriever — LangChain-style wrapper
server/ Optional Axum HTTP service (separate Cargo workspace)

Server Mode

Status: experimental. The server crate compiles and the core endpoints work, but it has not been hardened for production use. The embedded library (tqdb Python package, from tqdb import Database) is the primary supported interface.

An optional Axum-based HTTP server is available in server/ for multi-tenant deployments. It adds API key authentication, quota enforcement, and async job management (compaction, index building, snapshots).

cd server && cargo build --release
TQ_SERVER_ADDR=0.0.0.0:8080 TQ_LOCAL_ROOT=./data ./target/release/tqdb-server

See server/README.md for the full endpoint reference. Key env vars:

Variable Default Description
TQ_SERVER_ADDR 127.0.0.1:8080 Bind address
TQ_LOCAL_ROOT ./data Storage root
TQ_JOB_WORKERS 2 Async job thread count

Performance Roadmap

The current implementation already uses AVX2 SIMD for FWHT, the MSE centroid scan, and the QJL bit-unpack inner product.

GPU acceleration — batch ingest would benefit from cuBLAS GEMM (~3–5× for large batches on high-end cards). The ANN search path is memory-bound, not compute-bound, so GPU benefit there is minimal; the bottleneck is random cache misses during HNSW graph traversal rather than floating-point throughput.

AVX-512 codebook scan — on modern Intel CPUs the MSE centroid lookup can be vectorised 2× wider with AVX-512, potentially halving scoring latency per batch.

Persistent HNSW — incremental graph updates (no full rebuild after each ingest batch) would allow streaming use cases without periodic create_index() calls.


Research Basis

This is an independent implementation of ideas from the TurboQuant paper. The algorithm itself was authored by the original researchers.

Zandieh, A., Daliri, M., Hadian, M., & Mirrokni, V. (2025). TurboQuant: Online Vector Quantization with Near-optimal Distortion Rate. arXiv:2504.19874

@article{zandieh2025turboquant,
  title={TurboQuant: Online Vector Quantization with Near-optimal Distortion Rate},
  author={Zandieh, Amir and Daliri, Majid and Hadian, Majid and Mirrokni, Vahab},
  journal={arXiv preprint arXiv:2504.19874},
  year={2025}
}

License

Apache License 2.0 — see LICENSE.
