TurboQuantDB

An embedded vector database written in Rust with Python bindings, implementing the TurboQuant algorithm (arXiv:2504.19874) — zero training time, 2–4 bit compression, and provably unbiased inner product estimation.

Goal: make massive embedding datasets practical on lightweight hardware. A 100k-vector, 1536-dim collection that would occupy 586 MB as raw float32 fits in 108 MB on disk with TQDB b=4, or just 59 MB with b=2 — enabling laptop-scale RAG over millions of documents without a dedicated server.
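
The arithmetic behind those figures, as a back-of-envelope sketch (sizes in MiB; the real files also hold the QJL sketch, IDs, and metadata, which is why they exceed the raw code bytes):

n, d = 100_000, 1536
raw_mib = n * d * 4 / 2**20         # float32, 4 bytes/dim -> ~586 MiB
mse_b4 = n * d * 4 / 8 / 2**20      # 4-bit MSE codes alone -> ~73 MiB
print(raw_mib / 108, raw_mib / 59)  # ~5.4x and ~9.9x, the quoted ratios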

Two deployment modes:

  • Embedded — tqdb Python package (pip install tqdb), runs in-process (no daemon)
  • Server — Axum HTTP service in server/, with multi-tenancy, RBAC, quotas, and async jobs

Key Properties

  • Zero training — No train() step. Vectors are quantized and stored immediately on insert.
  • 5–10× compression — b=4 reduces 1536-dim float32 embeddings from 586 MB to 108 MB (5.4×); b=2 reaches 59 MB (9.9×) at 100k vectors.
  • Unbiased scoring — QJL transform guarantees unbiased inner product estimation.
  • Optional ANN index — Build an HNSW graph after loading data for fast approximate search.
  • Metadata filtering — MongoDB-style filter operators on any metadata field.
  • Crash recovery — Write-ahead log (WAL) ensures durability without explicit flushing.
  • Python native — Built with PyO3 and Maturin; no server or sidecar required.

Installation

Prerequisites

  • Rust stable toolchain
  • Python 3.10+
  • C++ compiler: Visual Studio Build Tools (Windows) · xcode-select --install (macOS) · build-essential (Linux)

Build from source

python -m venv venv
source venv/bin/activate        # Windows: .\venv\Scripts\activate
pip install maturin
maturin develop --release

Install pre-built wheel

pip install tqdb

Recommended Setup

Two presets cover most use cases — no indexing required to get started:

from tqdb import Database

# Recommended — brute-force with dequantization reranking
db = Database.open(path, dimension=DIM, bits=4, rerank=True)
results = db.search(query, top_k=10)
# 95.5% Recall@1, 100% Recall@4 at 100k×1536  |  108 MB disk  |  ~50ms p50

# Minimum disk — 9.9× compression, still excellent recall
db = Database.open(path, dimension=DIM, bits=2, rerank=True)
results = db.search(query, top_k=10)
# 86.8% Recall@1, 99.3% Recall@4 at 100k×1536  |  60 MB disk  |  ~43ms p50

# Optional: build an HNSW index after bulk load for sub-10ms queries
db.create_index()
results = db.search(query, top_k=10, _use_ann=True, ann_search_list_size=200)  # recall scales with search list size

Full parameter reference: docs/PYTHON_API.md


Quick Start

import numpy as np
from tqdb import Database

db = Database.open("./my_db", dimension=1536, bits=4, metric="ip", rerank=True)

db.insert("doc-1", np.random.randn(1536).astype("f4"), metadata={"topic": "ml"}, document="Machine learning intro")
db.insert("doc-2", np.random.randn(1536).astype("f4"), metadata={"topic": "systems"}, document="Rust memory model")

results = db.search(np.random.randn(1536).astype("f4"), top_k=5)
for r in results:
    print(r["id"], r["score"], r["document"])

Python API

Full reference: docs/PYTHON_API.md

# Open / create
db = Database.open(path, dimension, bits=4, seed=42, metric="ip",
                   rerank=True, fast_mode=False, rerank_precision=None,
                   collection=None, wal_flush_threshold=None)  # wal_flush_threshold default=5000; set higher for bulk loads

# Write
db.insert(id, vector, metadata=None, document=None)
db.insert_batch(ids, vectors, metadatas=None, documents=None, mode="insert")  # "insert"|"upsert"|"update"
db.upsert(id, vector, metadata=None, document=None)
db.update(id, vector, metadata=None, document=None)        # RuntimeError if not found
db.update_metadata(id, metadata=None, document=None)       # RuntimeError if not found

# Delete & retrieve
db.delete(id)                        # → bool
db.delete_batch(ids)                 # → int (count deleted)
db.get(id)                           # → {id, metadata, document} | None
db.get_many(ids)                     # → list[dict | None]
db.list_all()                        # → list[str]
db.list_ids(where_filter=None, limit=None, offset=0)       # paginated
db.count(filter=None)                # → int
db.stats()                           # → dict
len(db) / "id" in db                 # container protocol

# Search — brute-force by default; pass _use_ann=True to use HNSW index
results = db.search(query, top_k=10, filter=None, _use_ann=False,
                    ann_search_list_size=None, include=None)
# include: list of "id"|"score"|"metadata"|"document" (default all)
# ann_search_list_size: HNSW ef_search override (only used when _use_ann=True)

all_results = db.query(query_embeddings, n_results=10, where_filter=None)
# query_embeddings: np.ndarray (N, D) — returns list[list[dict]]
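# Illustrative (shapes assumed): three queries in one call
# qs = np.random.randn(3, 1536).astype("f4")
# per_query = db.query(qs, n_results=5)    # len(per_query) == 3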

# Index
db.create_index(max_degree=32, ef_construction=200, n_refinements=5,
                search_list_size=128, alpha=1.2)

# Metadata filter operators
# $eq $ne $gt $gte $lt $lte $in $nin $exists $and $or
db.search(query, top_k=5, filter={"year": {"$gte": 2023}})
db.search(query, top_k=5, filter={"$and": [{"topic": "ml"}, {"year": {"$gte": 2023}}]})

Benchmarks

Three datasets from arXiv:2504.19874 — n=100k vectors each. Full script: benchmarks/paper_recall_bench.py.

Algorithm Validation — Recall vs Paper

[Figure: benchmark recall curves — TQDB vs paper]

Brute-force recall across all three datasets from arXiv:2504.19874 Figure 5 — n=100k vectors, paper values read visually from plots (approximate).

GloVe-200 (d=200, 100,000 corpus, 10,000 queries)

| Config | @k=1 | @k=2 | @k=4 | @k=8 | @k=16 | @k=32 | @k=64 |
|--------|------|------|------|------|-------|-------|-------|
| TurboQuant 2-bit (paper Fig. 5a) | ≈55.0% | ≈70.0% | ≈83.0% | ≈91.0% | ≈96.0% | ≈99.0% | ≈100.0% |
| TQDB b=2 rerank=F | 37.1% | 50.0% | 62.0% | 73.0% | 82.0% | 88.9% | 93.5% |
| TQDB b=2 rerank=T | 52.8% | 68.4% | 81.1% | 90.3% | 95.5% | 98.4% | 99.5% |
| TurboQuant 4-bit (paper Fig. 5a) | ≈86.0% | ≈96.0% | ≈99.0% | ≈100.0% | ≈100.0% | ≈100.0% | ≈100.0% |
| TQDB b=4 rerank=F | 73.9% | 88.3% | 96.4% | 99.2% | 99.9% | 100.0% | 100.0% |
| TQDB b=4 rerank=T | 82.6% | 94.2% | 98.7% | 99.9% | 100.0% | 100.0% | 100.0% |

DBpedia OpenAI3 d=1536 (d=1536, 100,000 corpus, 1,000 queries)

| Config | @k=1 | @k=2 | @k=4 | @k=8 | @k=16 | @k=32 | @k=64 |
|--------|------|------|------|------|-------|-------|-------|
| TurboQuant 2-bit (paper Fig. 5b) | ≈89.5% | ≈98.0% | ≈99.5% | ≈100.0% | ≈100.0% | ≈100.0% | ≈100.0% |
| TQDB b=2 rerank=F | 79.7% | 93.3% | 98.3% | 99.7% | 99.9% | 100.0% | 100.0% |
| TQDB b=2 rerank=T | 86.8% | 96.2% | 99.3% | 99.9% | 100.0% | 100.0% | 100.0% |
| TurboQuant 4-bit (paper Fig. 5b) | ≈97.0% | ≈100.0% | ≈100.0% | ≈100.0% | ≈100.0% | ≈100.0% | ≈100.0% |
| TQDB b=4 rerank=F | 92.6% | 99.1% | 99.9% | 100.0% | 100.0% | 100.0% | 100.0% |
| TQDB b=4 rerank=T | 95.5% | 99.5% | 100.0% | 100.0% | 100.0% | 100.0% | 100.0% |

DBpedia OpenAI3 d=3072 (d=3072, 100,000 corpus, 1,000 queries)

| Config | @k=1 | @k=2 | @k=4 | @k=8 | @k=16 | @k=32 | @k=64 |
|--------|------|------|------|------|-------|-------|-------|
| TurboQuant 2-bit (paper Fig. 5c) | ≈90.5% | ≈98.5% | ≈99.5% | ≈100.0% | ≈100.0% | ≈100.0% | ≈100.0% |
| TQDB b=2 rerank=F | 84.6% | 95.1% | 99.0% | 100.0% | 100.0% | 100.0% | 100.0% |
| TQDB b=2 rerank=T | 89.2% | 98.6% | 99.8% | 100.0% | 100.0% | 100.0% | 100.0% |
| TurboQuant 4-bit (paper Fig. 5c) | ≈97.5% | ≈100.0% | ≈100.0% | ≈100.0% | ≈100.0% | ≈100.0% | ≈100.0% |
| TQDB b=4 rerank=F | 94.8% | 99.1% | 100.0% | 100.0% | 100.0% | 100.0% | 100.0% |
| TQDB b=4 rerank=T | 96.0% | 99.8% | 100.0% | 100.0% | 100.0% | 100.0% | 100.0% |

The GloVe gap (~12–18% at k=1) is expected: d=200 is the hardest case (the fewest total bits per vector), and we evaluate on the first 100k vectors of a 1.18M corpus, whereas the paper used a random sample. From k=4 onward the gap is ≤2.6% on GloVe and ≤1% on DBpedia. For high-dimensional embeddings (d≥1536), TQDB matches the paper within ~5% at k=1 and within 1% from k=4.
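
Rough arithmetic behind "fewest total bits per vector" (MSE codes only, d * b bits, ignoring the QJL sketch):

# GloVe-200,    b=2:  200 * 2 =  400 bits (~50 B)  vs  800 B raw float32
# DBpedia-1536, b=2: 1536 * 2 = 3072 bits (~384 B) vs ~6 KiB raw float32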

Performance & Config Trade-offs

[Figure: config trade-off overview — latency, disk, RAM, CPU]

All 8 configs — brute-force and ANN (HNSW md=32, ef=128). Disk MB for ANN includes graph.bin. RAM = peak RSS during query phase. Index = HNSW build time (ANN only).

GloVe-200 (d=200, 100,000 corpus, 10,000 queries)

| Config | Mode | Ingest | Index | Disk MB | RAM MB | p50 ms | p99 ms | R@1 | MRR |
|--------|------|--------|-------|---------|--------|--------|--------|-----|-----|
| b=2 rerank=F | Brute | 1.1s | — | 16.8 | 206 | 13.97 | 18.73 | 37.1% | 0.502 |
| b=2 rerank=T | Brute | 1.4s | — | 16.8 | 209 | 18.89 | 21.49 | 52.8% | 0.666 |
| b=4 rerank=F | Brute | 1.9s | — | 22.9 | 214 | 17.60 | 38.56 | 73.9% | 0.842 |
| b=4 rerank=T | Brute | 2.5s | — | 22.9 | 216 | 20.72 | 42.72 | 82.6% | 0.900 |
| b=2 rerank=F | ANN | 1.9s | 19.3s | 25.4 | 234 | 7.82 | 28.38 | 21.5% | 0.282 |
| b=2 rerank=T | ANN | 1.3s | 19.5s | 25.4 | 240 | 11.39 | 32.27 | 37.4% | 0.460 |
| b=4 rerank=F | ANN | 1.8s | 18.1s | 31.5 | 246 | 6.59 | 23.02 | 45.1% | 0.501 |
| b=4 rerank=T | ANN | 2.3s | 17.6s | 31.5 | 245 | 11.54 | 34.35 | 61.2% | 0.658 |

DBpedia OpenAI3 d=1536 (d=1536, 100,000 corpus, 1,000 queries)

| Config | Mode | Ingest | Index | Disk MB | RAM MB | p50 ms | p99 ms | R@1 | MRR |
|--------|------|--------|-------|---------|--------|--------|--------|-----|-----|
| b=2 rerank=F | Brute | 4.5s | — | 59.5 | 759 | 39.98 | 45.60 | 79.7% | 0.882 |
| b=2 rerank=T | Brute | 7.5s | — | 59.5 | 812 | 46.81 | 72.57 | 86.8% | 0.926 |
| b=4 rerank=F | Brute | 8.0s | — | 108.3 | 811 | 46.15 | 50.99 | 92.6% | 0.961 |
| b=4 rerank=T | Brute | 8.8s | — | 108.3 | 861 | 53.25 | 58.44 | 95.5% | 0.977 |
| b=2 rerank=F | ANN | 6.4s | 69.1s | 68.1 | 775 | 11.62 | 14.88 | 75.0% | 0.827 |
| b=2 rerank=T | ANN | 6.7s | 68.9s | 68.1 | 776 | 35.38 | 50.56 | 83.9% | 0.893 |
| b=4 rerank=F | ANN | 7.8s | 69.3s | 116.9 | 824 | 11.75 | 16.25 | 88.4% | 0.915 |
| b=4 rerank=T | ANN | 9.1s | 70.2s | 116.9 | 824 | 39.84 | 56.82 | 93.7% | 0.958 |

DBpedia OpenAI3 d=3072 (d=3072, 100,000 corpus, 1,000 queries)

| Config | Mode | Ingest | Index | Disk MB | RAM MB | p50 ms | p99 ms | R@1 | MRR |
|--------|------|--------|-------|---------|--------|--------|--------|-----|-----|
| b=2 rerank=F | Brute | 8.1s | — | 108.3 | 1401 | 76.44 | 87.06 | 84.6% | 0.913 |
| b=2 rerank=T | Brute | 11.4s | — | 108.3 | 1419 | 86.75 | 99.18 | 89.2% | 0.943 |
| b=4 rerank=F | Brute | 16.8s | — | 206.0 | 1497 | 89.86 | 101.15 | 94.8% | 0.972 |
| b=4 rerank=T | Brute | 17.7s | — | 206.0 | 1518 | 101.23 | 113.31 | 96.0% | 0.980 |
| b=2 rerank=F | ANN | 11.3s | 128.0s | 117.0 | 1416 | 15.94 | 22.14 | 80.7% | 0.867 |
| b=2 rerank=T | ANN | 10.1s | 128.4s | 116.9 | 1418 | 63.59 | 92.24 | 87.2% | 0.921 |
| b=4 rerank=F | ANN | 17.6s | 127.5s | 214.6 | 1514 | 17.60 | 26.17 | 90.3% | 0.926 |
| b=4 rerank=T | ANN | 17.4s | 129.3s | 214.6 | 1516 | 72.50 | 106.91 | 94.8% | 0.967 |

Reproduction (requires pip install datasets psutil matplotlib):

maturin develop --release
python benchmarks/paper_recall_bench.py --update-readme --track


RAG Integration

from tqdb.rag import TurboQuantRetriever

retriever = TurboQuantRetriever(db_path="./rag_db", dimension=1536, bits=4)
retriever.add_texts(texts=texts, embeddings=embeddings, metadatas=metadatas)

results = retriever.similarity_search(query_embedding=query_vec, k=5)
for r in results:
    print(r["score"], r["text"])

Architecture

TurboQuantDB is an embedded database — it runs in-process with no daemon.

./my_db/
├── manifest.json        — DB config (dimension, bits, seed, metric)
├── quantizer.bin        — Serialized quantizer state
├── live_codes.bin       — Memory-mapped quantized vectors (hot path)
├── live_vectors.bin     — Raw vectors for exact reranking (only if rerank_precision="f16" or "f32")
├── wal.log              — Write-ahead log
├── metadata.bin         — Per-vector metadata and documents
├── live_ids.bin         — ID → slot index
├── graph.bin            — HNSW adjacency list (if index built)
└── seg-XXXXXXXX.bin     — Immutable flushed segment files

Write path: insert() → quantize (QR rotation → MSE → Gaussian QJL) → WAL → live_codes.bin → flush to segment

Search (brute-force): query → precompute lookup tables → score all live vectors → top-k

Search (ANN): query → HNSW beam search → rerank → top-k
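
To make the brute-force scorer concrete, here is a minimal NumPy sketch of asymmetric LUT scoring over the MSE codes (names and shapes are assumptions for illustration; the real hot path is SIMD Rust and also folds in the QJL term):

import numpy as np

def build_luts(query_rot, codebook):
    # query_rot: (D,) rotated query; codebook: (2**b,) Lloyd-Max levels.
    # luts[d, c] = query_rot[d] * codebook[c], the per-coordinate contribution.
    return np.outer(query_rot, codebook)

def score_all(codes, luts):
    # codes: (N, D) uint8 codebook indices. One gather plus a row sum
    # estimates <query, x> for every stored vector without dequantizing.
    D = codes.shape[1]
    return luts[np.arange(D), codes].sum(axis=1)

# scores = score_all(codes, build_luts(R @ query, codebook))
# top10 = np.argpartition(-scores, 10)[:10]   # then optionally rerank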

Quantization: Two-stage pipeline:

  1. MSE — QR rotation + Lloyd-Max scalar quantization to b bits per coordinate
  2. QJL — Dense Gaussian projection, 1-bit quantized, bit-packed

The combination gives unbiased inner product estimates with near-optimal distortion, requiring no training data.
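
A minimal sketch of that encode path, assuming a seeded random orthogonal rotation and a precomputed Lloyd-Max codebook (the actual quantizer lives in src/quantizer/ and differs in detail):

import numpy as np

rng = np.random.default_rng(42)      # the stored seed pins down the rotation

def rotation(d):
    # random orthogonal matrix via QR decomposition of a Gaussian matrix
    q, _ = np.linalg.qr(rng.standard_normal((d, d)))
    return q

def mse_encode(x, R, codebook):
    z = R @ x                        # stage 1: rotate
    # nearest codebook level per coordinate, b bits each
    return np.abs(z[:, None] - codebook).argmin(axis=1).astype(np.uint8)

def qjl_encode(x, G):
    # stage 2: dense Gaussian projection, kept only as packed sign bits
    return np.packbits(G @ x > 0)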

What comes from the paper vs. what is added here

The TurboQuant paper contributes the quantization algorithm — how to compress vectors and estimate inner products accurately. Its experiments use flat (exhaustive) search: all database vectors are scored against every query using the LUT-based asymmetric scorer. The paper's "indexing time virtually zero" claim refers to the quantizer requiring no training data, not to graph construction.

From the paper: two-stage MSE + QJL quantization, QR rotation, Lloyd-Max codebook, asymmetric LUT scoring, unbiased inner product estimation.

Added by TurboQuantDB (not in the paper): WAL persistence, memory-mapped storage, metadata/documents, HNSW graph index, reranking, Python bindings, and the HTTP server.

The brute-force search path (_use_ann=False, the default) is the paper-conformant mode — it scores all vectors using TurboQuant's LUT scorer, matching the paper's experimental setup exactly. The HNSW index is a practical engineering addition that reduces the candidate set before scoring, enabling sub-linear search at the cost of approximate recall. Pass _use_ann=True to engage the HNSW index (requires create_index() to have been called first).

Module Map

| Path | Responsibility |
|------|----------------|
| src/python/mod.rs | Database class — Python-facing API |
| src/storage/engine.rs | TurboQuantEngine — insert/search/delete orchestration |
| src/storage/wal.rs | Write-ahead log |
| src/storage/segment.rs | Immutable append-only segments |
| src/storage/live_codes.rs | Memory-mapped hot vector cache |
| src/storage/graph.rs | HNSW graph index |
| src/quantizer/prod.rs | ProdQuantizer — MSE + QJL orchestrator |
| src/quantizer/mse.rs | MseQuantizer — QR rotation + Lloyd-Max codebook |
| src/quantizer/qjl.rs | QjlQuantizer — 1-bit Gaussian projection, bit-packed |
| python/tqdb/rag.py | TurboQuantRetriever — LangChain-style wrapper |
| server/ | Optional Axum HTTP service (separate Cargo workspace) |

Server Mode

Status: experimental. The server crate compiles and the core endpoints work, but it has not been hardened for production use. The embedded library (tqdb Python package, from tqdb import Database) is the primary supported interface.

An optional Axum-based HTTP server is available in server/ for multi-tenant deployments. It adds API key authentication, quota enforcement, and async job management (compaction, index building, snapshots).

cd server && cargo build --release
TQ_SERVER_ADDR=0.0.0.0:8080 TQ_LOCAL_ROOT=./data ./target/release/tqdb-server

See server/README.md for the full endpoint reference. Key env vars:

| Variable | Default | Description |
|----------|---------|-------------|
| TQ_SERVER_ADDR | 127.0.0.1:8080 | Bind address |
| TQ_LOCAL_ROOT | ./data | Storage root |
| TQ_JOB_WORKERS | 2 | Async job thread count |

Performance Roadmap

The current implementation uses SIMD-accelerated scoring (AVX2) for the brute-force search inner loop, the MSE centroid scan, and the QJL bit-unpack inner product. The FWHT transform (legacy SRHT path) also has an AVX2 fast path.

GPU acceleration — batch ingest would benefit from cuBLAS GEMM (~3–5× for large batches on high-end cards). The ANN search path is memory-bound, not compute-bound, so GPU benefit there is minimal; the bottleneck is random cache misses during HNSW graph traversal rather than floating-point throughput.

AVX-512 codebook scan — on modern Intel CPUs the MSE centroid lookup can be vectorised 2× wider with AVX-512, potentially halving scoring latency per batch.

Persistent HNSW — incremental graph updates (no full rebuild after each ingest batch) would allow streaming use cases without periodic create_index() calls.


Research Basis

This is an independent implementation of ideas from the TurboQuant paper. The algorithm itself was authored by the original researchers.

Zandieh, A., Daliri, M., Hadian, M., & Mirrokni, V. (2025). TurboQuant: Online Vector Quantization with Near-optimal Distortion Rate. arXiv:2504.19874

@article{zandieh2025turboquant,
  title={TurboQuant: Online Vector Quantization with Near-optimal Distortion Rate},
  author={Zandieh, Amir and Daliri, Majid and Hadian, Majid and Mirrokni, Vahab},
  journal={arXiv preprint arXiv:2504.19874},
  year={2025}
}

License

Apache License 2.0 — see LICENSE.
