
TurboQuantDB


An embedded vector database written in Rust with Python bindings, implementing the TurboQuant algorithm (arXiv:2504.19874) — zero training time, 2–4 bit compression, and provably unbiased inner product estimation.

Goal: make massive embedding datasets practical on lightweight hardware. A 100k-vector, 1536-dim collection that would occupy 586 MB as raw float32 fits in 108 MB on disk with TQDB b=4, or just 59 MB with b=2 — enabling laptop-scale RAG over millions of documents without a dedicated server.
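The storage arithmetic behind those figures can be checked directly (a back-of-envelope sketch; the measured on-disk sizes above are somewhat larger than the raw b-bit codes because they also include quantizer state, QJL bits, IDs, and metadata):

```python
# Back-of-envelope storage math for 100k vectors at 1536 dims.
n, d = 100_000, 1536
raw_mb = n * d * 4 / 2**20                     # float32: 4 bytes per coordinate
print(f"raw float32: {raw_mb:.0f} MiB")        # → 586 MiB
# b-bit MSE codes alone; on-disk totals add QJL bits and metadata:
for b in (4, 2):
    print(f"b={b} codes: {n * d * b / 8 / 2**20:.0f} MiB")
```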

Two deployment modes:

  • Embedded — tqdb Python package (pip install tqdb), runs in-process (no daemon)
  • Server — Axum HTTP service in server/, with multi-tenancy, RBAC, quotas, and async jobs

Key Properties

  • Zero training — No train() step. Vectors are quantized and stored immediately on insert.
  • 5–10× compression — b=4 reduces 1536-dim float32 embeddings from 586 MB to 108 MB (5.4×); b=2 reaches 59 MB (9.9×) at 100k vectors.
  • Unbiased scoring — QJL transform guarantees unbiased inner product estimation.
  • Optional ANN index — Build an HNSW graph after loading data for fast approximate search.
  • Metadata filtering — MongoDB-style filter operators on any metadata field.
  • Crash recovery — Write-ahead log (WAL) ensures durability without explicit flushing.
  • Python native — Built with PyO3 and Maturin; no server or sidecar required.

Installation

Prerequisites

  • Rust stable toolchain
  • Python 3.10+
  • C++ compiler: Visual Studio Build Tools (Windows) · xcode-select --install (macOS) · build-essential (Linux)

Build from source

python -m venv venv
source venv/bin/activate        # Windows: .\venv\Scripts\activate
pip install maturin
maturin develop --release

Install pre-built wheel

pip install tqdb

Recommended Setup

Two presets cover most use cases — no indexing required to get started:

from tqdb import Database

# Recommended — brute-force with dequantization reranking
db = Database.open(path, dimension=DIM, bits=4, rerank=True)
results = db.search(query, top_k=10)
# 95.5% Recall@1, 100% Recall@4 at 100k×1536  |  108 MB disk  |  ~50ms p50

# Minimum disk — 9.9× compression, still excellent recall
db = Database.open(path, dimension=DIM, bits=2, rerank=True)
results = db.search(query, top_k=10)
# 86.8% Recall@1, 99.3% Recall@4 at 100k×1536  |  59 MB disk  |  ~43ms p50

# Optional: build an HNSW index after bulk load for sub-10ms queries
db.create_index()
results = db.search(query, top_k=10, _use_ann=True)

Full parameter reference: docs/PYTHON_API.md


Quick Start

import numpy as np
from tqdb import Database

db = Database.open("./my_db", dimension=1536, bits=4, metric="ip", rerank=True)

db.insert("doc-1", np.random.randn(1536).astype("f4"), metadata={"topic": "ml"}, document="Machine learning intro")
db.insert("doc-2", np.random.randn(1536).astype("f4"), metadata={"topic": "systems"}, document="Rust memory model")

results = db.search(np.random.randn(1536).astype("f4"), top_k=5)
for r in results:
    print(r["id"], r["score"], r["document"])

Python API

Full reference: docs/PYTHON_API.md

# Open / create
db = Database.open(path, dimension, bits=4, seed=42, metric="ip",
                   rerank=True, fast_mode=False, rerank_precision=None,
                   collection=None, wal_flush_threshold=None)  # wal_flush_threshold default=5000; set higher for bulk loads

# Write
db.insert(id, vector, metadata=None, document=None)
db.insert_batch(ids, vectors, metadatas=None, documents=None, mode="insert")  # "insert"|"upsert"|"update"
db.upsert(id, vector, metadata=None, document=None)
db.update(id, vector, metadata=None, document=None)        # RuntimeError if not found
db.update_metadata(id, metadata=None, document=None)       # RuntimeError if not found

# Delete & retrieve
db.delete(id)                        # → bool
db.delete_batch(ids)                 # → int (count deleted)
db.get(id)                           # → {id, metadata, document} | None
db.get_many(ids)                     # → list[dict | None]
db.list_all()                        # → list[str]
db.list_ids(where_filter=None, limit=None, offset=0)       # paginated
db.count(filter=None)                # → int
db.stats()                           # → dict
len(db) / "id" in db                 # container protocol

# Search — brute-force by default; pass _use_ann=True to use HNSW index
results = db.search(query, top_k=10, filter=None, _use_ann=False,
                    ann_search_list_size=None, include=None)
# include: list of "id"|"score"|"metadata"|"document" (default all)
# ann_search_list_size: HNSW ef_search override (only used when _use_ann=True)

all_results = db.query(query_embeddings, n_results=10, where_filter=None)
# query_embeddings: np.ndarray (N, D) — returns list[list[dict]]

# Index
db.create_index(max_degree=32, ef_construction=200, n_refinements=5,
                search_list_size=128, alpha=1.2)

# Metadata filter operators
# $eq $ne $gt $gte $lt $lte $in $nin $exists $and $or
db.search(query, top_k=5, filter={"year": {"$gte": 2023}})
db.search(query, top_k=5, filter={"$and": [{"topic": "ml"}, {"year": {"$gte": 2023}}]})
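The filter semantics can be pictured with a small standalone evaluator (an illustrative sketch of the MongoDB-style operator semantics, not TQDB's actual Rust implementation):

```python
def matches(meta: dict, flt: dict) -> bool:
    """Evaluate a MongoDB-style filter against one metadata dict."""
    for key, cond in flt.items():
        if key == "$and":
            if not all(matches(meta, f) for f in cond):
                return False
        elif key == "$or":
            if not any(matches(meta, f) for f in cond):
                return False
        elif isinstance(cond, dict):            # {"field": {"$op": value}}
            v = meta.get(key)
            for op, arg in cond.items():
                ok = {
                    "$eq":  lambda: v == arg,
                    "$ne":  lambda: v != arg,
                    "$gt":  lambda: v is not None and v > arg,
                    "$gte": lambda: v is not None and v >= arg,
                    "$lt":  lambda: v is not None and v < arg,
                    "$lte": lambda: v is not None and v <= arg,
                    "$in":  lambda: v in arg,
                    "$nin": lambda: v not in arg,
                    "$exists": lambda: (key in meta) == arg,
                }[op]()
                if not ok:
                    return False
        else:                                    # {"field": value} shorthand for $eq
            if meta.get(key) != cond:
                return False
    return True

doc = {"topic": "ml", "year": 2024}
print(matches(doc, {"$and": [{"topic": "ml"}, {"year": {"$gte": 2023}}]}))  # → True
```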

Recommended Presets

Recommended — brute-force + reranking

db = Database.open(path, dimension=DIM, bits=4, rerank=True)
results = db.search(query, top_k=10, _use_ann=False)
# 95.5% Recall@1, 100% Recall@4 at 100k×1536  |  108 MB disk  |  ~50ms p50 (brute-force)

Minimum Disk — compress aggressively

db = Database.open(path, dimension=DIM, bits=2, rerank=True)
results = db.search(query, top_k=10, _use_ann=False)
# 86.8% Recall@1, 99.3% Recall@4 at 100k×1536  |  59 MB disk (9.9× smaller)  |  ~43ms p50

Optional — ANN index for lower latency

# Build once after inserting data; recall scales with ann_search_list_size
db.create_index()
results = db.search(query, top_k=10, _use_ann=True, ann_search_list_size=200)

Benchmarks

Headline numbers are measured on DBpedia OpenAI3 embeddings (Qdrant/dbpedia-entities-openai3-text-embedding-3-large-1536-1M) — real 1536-dim embeddings, n=100k vectors. Corpus and query counts for each dataset are listed with its results below. Recall@1@k counts a query as a hit when its true nearest neighbor appears among the top k returned results. HNSW uses M=32, ef_construction=200.

Algorithm validation (reproducing paper Section 4.4)

Brute-force recall across all three datasets from arXiv:2504.19874 Figure 5 — n=100k vectors. Paper values are read visually from the plots and are therefore approximate. Full script: benchmarks/paper_recall_bench.py.

Benchmark recall curves — TQDB vs paper

GloVe-200 (d=200, 100,000 corpus, 10,000 queries, metric=ip)

Recall@1@k — brute-force:

Config @k=1 @k=2 @k=4 @k=8 @k=16 @k=32 @k=64
TurboQuant 2-bit (paper Fig. 5a) ≈55.0% ≈70.0% ≈83.0% ≈91.0% ≈96.0% ≈99.0% ≈100.0%
TQDB b=2 rerank=F 37.1% 50.0% 62.0% 73.0% 82.0% 88.9% 93.5%
TQDB b=2 rerank=T 52.8% 68.4% 81.1% 90.3% 95.5% 98.4% 99.5%
TurboQuant 4-bit (paper Fig. 5a) ≈86.0% ≈96.0% ≈99.0% ≈100.0% ≈100.0% ≈100.0% ≈100.0%
TQDB b=4 rerank=F 73.9% 88.3% 96.4% 99.2% 99.9% 100.0% 100.0%
TQDB b=4 rerank=T 82.6% 94.2% 98.7% 99.9% 100.0% 100.0% 100.0%

Performance — brute-force:

Config  Throughput (vec/s)  Ingest (s)  Disk (MB)  ΔRSS (MB)  p50 (ms)  p99 (ms)  MRR
b=2 rerank=F 86,333 1.2s 16.4 34 12.72 14.29 0.502
b=2 rerank=T 78,762 1.3s 16.4 19 14.71 16.32 0.666
b=4 rerank=F 50,066 2.0s 22.5 25 12.96 14.43 0.842
b=4 rerank=T 45,659 2.2s 22.5 31 14.99 24.54 0.900

ANN configs — GloVe-200 (extra info)

Recall@1@k — ANN (HNSW):

Config @k=1 @k=2 @k=4 @k=8 @k=16 @k=32 @k=64
TQDB b=2 rerank=F ANN 9.5% 12.7% 15.3% 17.2% 18.6% 19.3% 19.6%
TQDB b=2 rerank=T ANN 21.2% 26.4% 30.1% 32.4% 33.3% 33.4% 33.4%
TQDB b=4 rerank=F ANN 23.1% 26.8% 28.2% 28.6% 28.7% 28.7% 28.7%
TQDB b=4 rerank=T ANN 36.2% 40.1% 41.3% 41.5% 41.5% 41.5% 41.5%

Performance — ANN:

Config  Throughput (vec/s)  Ingest (s)  Index (s)  Disk (MB)  ΔRSS (MB)  p50 (ms)  p99 (ms)  MRR
b=2 rerank=F ANN 87,290 1.1s 5.6s 16.4 25 0.40 0.99 0.124
b=2 rerank=T ANN 71,475 1.4s 5.2s 16.4 16 2.77 5.26 0.254
b=4 rerank=F ANN 53,778 1.9s 4.5s 22.5 27 0.43 0.99 0.255
b=4 rerank=T ANN 63,549 1.6s 4.6s 22.5 16 2.63 4.36 0.386

DBpedia OpenAI3 d=1536 (d=1536, 100,000 corpus, 1,000 queries, metric=ip)

Recall@1@k — brute-force:

Config @k=1 @k=2 @k=4 @k=8 @k=16 @k=32 @k=64
TurboQuant 2-bit (paper Fig. 5b) ≈89.5% ≈98.0% ≈99.5% ≈100.0% ≈100.0% ≈100.0% ≈100.0%
TQDB b=2 rerank=F 79.7% 93.3% 98.3% 99.7% 99.9% 100.0% 100.0%
TQDB b=2 rerank=T 86.8% 96.2% 99.3% 99.9% 100.0% 100.0% 100.0%
TurboQuant 4-bit (paper Fig. 5b) ≈97.0% ≈100.0% ≈100.0% ≈100.0% ≈100.0% ≈100.0% ≈100.0%
TQDB b=4 rerank=F 92.6% 99.1% 99.9% 100.0% 100.0% 100.0% 100.0%
TQDB b=4 rerank=T 95.5% 99.5% 100.0% 100.0% 100.0% 100.0% 100.0%

Performance — brute-force:

Config  Throughput (vec/s)  Ingest (s)  Disk (MB)  ΔRSS (MB)  p50 (ms)  p99 (ms)  MRR
b=2 rerank=F 24,014 4.2s 59.1 79 41.49 50.05 0.882
b=2 rerank=T 22,985 4.4s 59.1 118 49.85 61.78 0.926
b=4 rerank=F 9,043 11.1s 108.0 178 49.82 60.09 0.961
b=4 rerank=T 13,013 7.7s 108.0 180 56.52 65.36 0.977

ANN configs — DBpedia OpenAI3 d=1536 (extra info)

Recall@1@k — ANN (HNSW):

Config @k=1 @k=2 @k=4 @k=8 @k=16 @k=32 @k=64
TQDB b=2 rerank=F ANN 58.8% 66.0% 69.1% 69.5% 69.7% 69.7% 69.7%
TQDB b=2 rerank=T ANN 73.3% 79.9% 82.0% 82.4% 82.4% 82.4% 82.4%
TQDB b=4 rerank=F ANN 69.2% 72.7% 73.0% 73.1% 73.1% 73.1% 73.1%
TQDB b=4 rerank=T ANN 78.9% 81.9% 82.1% 82.1% 82.1% 82.1% 82.1%

Performance — ANN:

Config  Throughput (vec/s)  Ingest (s)  Index (s)  Disk (MB)  ΔRSS (MB)  p50 (ms)  p99 (ms)  MRR
b=2 rerank=F ANN 22,026 4.5s 27.4s 59.1 69 2.31 4.09 0.634
b=2 rerank=T ANN 22,873 4.4s 27.6s 59.1 68 18.60 29.26 0.773
b=4 rerank=F ANN 11,346 8.8s 26.1s 108.0 181 2.67 4.97 0.711
b=4 rerank=T ANN 10,996 9.1s 27.0s 108.0 167 20.34 32.57 0.805

DBpedia OpenAI3 d=3072 (d=3072, 100,000 corpus, 1,000 queries, metric=ip)

Recall@1@k — brute-force:

Config @k=1 @k=2 @k=4 @k=8 @k=16 @k=32 @k=64
TurboQuant 2-bit (paper Fig. 5c) ≈90.5% ≈98.5% ≈99.5% ≈100.0% ≈100.0% ≈100.0% ≈100.0%
TQDB b=2 rerank=F 84.6% 95.1% 99.0% 100.0% 100.0% 100.0% 100.0%
TQDB b=2 rerank=T 89.2% 98.6% 99.8% 100.0% 100.0% 100.0% 100.0%
TurboQuant 4-bit (paper Fig. 5c) ≈97.5% ≈100.0% ≈100.0% ≈100.0% ≈100.0% ≈100.0% ≈100.0%
TQDB b=4 rerank=F 94.8% 99.1% 100.0% 100.0% 100.0% 100.0% 100.0%
TQDB b=4 rerank=T 96.0% 99.8% 100.0% 100.0% 100.0% 100.0% 100.0%

Performance — brute-force:

Config  Throughput (vec/s)  Ingest (s)  Disk (MB)  ΔRSS (MB)  p50 (ms)  p99 (ms)  MRR
b=2 rerank=F 8,729 11.5s 108.0 154 73.72 86.39 0.913
b=2 rerank=T 11,799 8.5s 108.0 203 83.57 94.50 0.943
b=4 rerank=F 6,256 16.0s 205.6 320 85.62 98.89 0.972
b=4 rerank=T 5,689 17.6s 205.6 308 95.61 109.04 0.980

ANN configs — DBpedia OpenAI3 d=3072 (extra info)

Recall@1@k — ANN (HNSW):

Config @k=1 @k=2 @k=4 @k=8 @k=16 @k=32 @k=64
TQDB b=2 rerank=F ANN 60.9% 67.5% 69.9% 70.3% 70.3% 70.3% 70.3%
TQDB b=2 rerank=T ANN 76.3% 83.4% 84.0% 84.2% 84.2% 84.2% 84.2%
TQDB b=4 rerank=F ANN 67.9% 70.6% 71.0% 71.0% 71.0% 71.0% 71.0%
TQDB b=4 rerank=T ANN 80.9% 83.9% 83.9% 83.9% 83.9% 83.9% 83.9%

Performance — ANN:

Config  Throughput (vec/s)  Ingest (s)  Index (s)  Disk (MB)  ΔRSS (MB)  p50 (ms)  p99 (ms)  MRR
b=2 rerank=F ANN 12,305 8.1s 44.8s 108.0 196 4.22 7.60 0.650
b=2 rerank=T ANN 12,269 8.2s 44.0s 108.0 204 33.07 48.23 0.801
b=4 rerank=F ANN 5,968 16.8s 41.9s 205.6 306 4.81 9.30 0.694
b=4 rerank=T ANN 6,375 15.7s 43.9s 205.6 305 39.70 70.07 0.824

The GloVe gap (~12–18% at k=1 without reranking, ~2–4% with) is expected: d=200 is the hardest case (fewest bits per dimension), and we evaluate on the first 100k vectors of a 1.18M corpus while the paper used a random sample. From k=4 onward the gap is ≤2.6% on GloVe and ≤1% on DBpedia. For high-dimensional embeddings (d≥1536), TQDB matches the paper within ~5% at k=1 and within 1% from k=4.

Reproduction: maturin develop --release && python benchmarks/paper_recall_bench.py --update-readme --track (requires pip install datasets psutil matplotlib)


RAG Integration

from tqdb.rag import TurboQuantRetriever

retriever = TurboQuantRetriever(db_path="./rag_db", dimension=1536, bits=4)
retriever.add_texts(texts=texts, embeddings=embeddings, metadatas=metadatas)

results = retriever.similarity_search(query_embedding=query_vec, k=5)
for r in results:
    print(r["score"], r["text"])

Architecture

TurboQuantDB is an embedded database — it runs in-process with no daemon.

./my_db/
├── manifest.json        — DB config (dimension, bits, seed, metric)
├── quantizer.bin        — Serialized quantizer state
├── live_codes.bin       — Memory-mapped quantized vectors (hot path)
├── live_vectors.bin     — Raw vectors for exact reranking (only if rerank_precision="f16" or "f32")
├── wal.log              — Write-ahead log
├── metadata.bin         — Per-vector metadata and documents
├── live_ids.bin         — ID → slot index
├── graph.bin            — HNSW adjacency list (if index built)
└── seg-XXXXXXXX.bin     — Immutable flushed segment files

Write path: insert() → quantize (QR rotation → MSE → Gaussian QJL) → WAL → live_codes.bin → flush to segment

Search (brute-force): query → precompute lookup tables → score all live vectors → top-k
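The lookup-table step can be sketched in NumPy (illustrative only: per-coordinate scalar codes against a shared toy codebook; real TQDB rotates vectors first, uses a Lloyd-Max codebook, and fuses this with the QJL term):

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, b = 1000, 8, 4
levels = np.linspace(-2.0, 2.0, 2**b)     # shared scalar codebook (toy stand-in)

data = rng.standard_normal((n, d)).astype(np.float32)
codes = np.abs(data[:, :, None] - levels).argmin(axis=2)   # (n, d) code indices

query = rng.standard_normal(d).astype(np.float32)
# Asymmetric scoring: precompute query_j * level_c once per (dim, code) pair,
# then score each vector with one table lookup per coordinate.
lut = query[:, None] * levels[None, :]                     # (d, 2**b)
scores = lut[np.arange(d), codes].sum(axis=1)              # (n,)

# Equivalent to dotting the query with the dequantized vectors:
assert np.allclose(scores, (levels[codes] * query).sum(axis=1), atol=1e-4)
```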

Search (ANN): query → HNSW beam search → rerank → top-k
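The beam-search loop can be illustrated on a toy single-layer proximity graph (a sketch of the search loop only; TQDB's HNSW has multiple layers and scores candidates with the quantized LUT scorer rather than exact distances):

```python
import heapq
import numpy as np

def beam_search(graph, vecs, query, entry, ef):
    """Best-first search over a neighbor graph, keeping an ef-sized beam."""
    dist = lambda i: float(np.linalg.norm(vecs[i] - query))
    visited = {entry}
    candidates = [(dist(entry), entry)]        # min-heap: frontier to expand
    best = [(-dist(entry), entry)]             # max-heap (negated): top-ef found
    while candidates:
        dc, node = heapq.heappop(candidates)
        if len(best) >= ef and dc > -best[0][0]:
            break                              # frontier worse than the beam: stop
        for nb in graph[node]:
            if nb in visited:
                continue
            visited.add(nb)
            dn = dist(nb)
            if len(best) < ef or dn < -best[0][0]:
                heapq.heappush(candidates, (dn, nb))
                heapq.heappush(best, (-dn, nb))
                if len(best) > ef:
                    heapq.heappop(best)        # evict current worst
    return sorted((-dneg, i) for dneg, i in best)

rng = np.random.default_rng(1)
vecs = rng.standard_normal((50, 8))
# Toy graph: connect each point to its 5 exact nearest neighbors.
sq = ((vecs[:, None] - vecs[None]) ** 2).sum(-1)
graph = {i: list(np.argsort(sq[i])[1:6]) for i in range(50)}
q = rng.standard_normal(8)
print(beam_search(graph, vecs, q, entry=0, ef=10)[:3])   # closest hits found
```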

Quantization is a two-stage pipeline:

  1. MSE — QR rotation + Lloyd-Max scalar quantization at b bits per coordinate
  2. QJL — Dense Gaussian projection, 1-bit quantized, bit-packed

The combination gives unbiased inner product estimates with near-optimal distortion, requiring no training data.
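The unbiasedness of sign-quantized Gaussian projections can be checked numerically: for s ~ N(0, I), E[⟨s,q⟩·sign(⟨s,x⟩)] = √(2/π)·⟨q, x/‖x‖⟩, so keeping only the sign bit of each projection still yields an unbiased inner-product estimate after rescaling. A numerical sanity check of this standard identity (not TQDB's code, which bit-packs the signs):

```python
import numpy as np

rng = np.random.default_rng(0)
d, m = 64, 50_000
q = rng.standard_normal(d)
x = rng.standard_normal(d)

S = rng.standard_normal((m, d))        # Gaussian projection
bits = np.sign(S @ x)                  # 1-bit codes stored for x
# Rescale the sign correlation back to an inner-product estimate:
est = np.sqrt(np.pi / 2) / m * ((S @ q) @ bits) * np.linalg.norm(x)

exact = q @ x
print(exact, est)   # close to each other; exact values depend on the seed
```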

What comes from the paper vs. what is added here

The TurboQuant paper contributes the quantization algorithm — how to compress vectors and estimate inner products accurately. Its experiments use flat (exhaustive) search: all database vectors are scored against every query using the LUT-based asymmetric scorer. The paper's "indexing time virtually zero" claim refers to the quantizer requiring no training data, not to graph construction.

From the paper: two-stage MSE + QJL quantization, QR rotation, Lloyd-Max codebook, asymmetric LUT scoring, unbiased inner product estimation.

Added by TurboQuantDB (not in the paper): WAL persistence, memory-mapped storage, metadata/documents, HNSW graph index, reranking, Python bindings, and the HTTP server.

The brute-force search path (_use_ann=False, the default) is the paper-conformant mode — it scores all vectors using TurboQuant's LUT scorer, matching the paper's experimental setup exactly. The HNSW index is a practical engineering addition that reduces the candidate set before scoring, enabling sub-linear search at the cost of approximate recall. Pass _use_ann=True to engage the HNSW index (requires create_index() to have been called first).

Module Map

Path Responsibility
src/python/mod.rs Database class — Python-facing API
src/storage/engine.rs TurboQuantEngine — insert/search/delete orchestration
src/storage/wal.rs Write-ahead log
src/storage/segment.rs Immutable append-only segments
src/storage/live_codes.rs Memory-mapped hot vector cache
src/storage/graph.rs HNSW graph index
src/quantizer/prod.rs ProdQuantizer — MSE + QJL orchestrator
src/quantizer/mse.rs MseQuantizer — QR rotation + Lloyd-Max codebook
src/quantizer/qjl.rs QjlQuantizer — 1-bit Gaussian projection, bit-packed
python/tqdb/rag.py TurboQuantRetriever — LangChain-style wrapper
server/ Optional Axum HTTP service (separate Cargo workspace)

Server Mode

Status: experimental. The server crate compiles and the core endpoints work, but it has not been hardened for production use. The embedded library (tqdb Python package, from tqdb import Database) is the primary supported interface.

An optional Axum-based HTTP server is available in server/ for multi-tenant deployments. It adds API key authentication, quota enforcement, and async job management (compaction, index building, snapshots).

cd server && cargo build --release
TQ_SERVER_ADDR=0.0.0.0:8080 TQ_LOCAL_ROOT=./data ./target/release/tqdb-server

See server/README.md for the full endpoint reference. Key env vars:

Variable Default Description
TQ_SERVER_ADDR 127.0.0.1:8080 Bind address
TQ_LOCAL_ROOT ./data Storage root
TQ_JOB_WORKERS 2 Async job thread count

Performance Roadmap

The current implementation uses SIMD-accelerated (AVX2) scoring for the brute-force search inner loop, the MSE centroid scan, and the QJL bit-unpack inner product. The fast Walsh–Hadamard transform (legacy SRHT path) also has an AVX2 fast path.

GPU acceleration — batch ingest would benefit from cuBLAS GEMM (~3–5× for large batches on high-end cards). The ANN search path is memory-bound, not compute-bound, so GPU benefit there is minimal; the bottleneck is random cache misses during HNSW graph traversal rather than floating-point throughput.

AVX-512 codebook scan — on modern Intel CPUs the MSE centroid lookup can be vectorized 2× wider with AVX-512, potentially halving per-batch scoring latency.

Persistent HNSW — incremental graph updates (no full rebuild after each ingest batch) would allow streaming use cases without periodic create_index() calls.


Research Basis

This is an independent implementation of ideas from the TurboQuant paper. The algorithm itself was authored by the original researchers.

Zandieh, A., Daliri, M., Hadian, M., & Mirrokni, V. (2025). TurboQuant: Online Vector Quantization with Near-optimal Distortion Rate. arXiv:2504.19874

@article{zandieh2025turboquant,
  title={TurboQuant: Online Vector Quantization with Near-optimal Distortion Rate},
  author={Zandieh, Amir and Daliri, Majid and Hadian, Majid and Mirrokni, Vahab},
  journal={arXiv preprint arXiv:2504.19874},
  year={2025}
}

License

Apache License 2.0 — see LICENSE.
