
TurboQuantDB


An embedded vector database written in Rust with Python bindings, implementing the TurboQuant algorithm (arXiv:2504.19874) — zero training time, 2–4 bit compression, and provably unbiased inner product estimation.

Goal: make massive embedding datasets practical on lightweight hardware. A 100k-vector, 1536-dim collection that would occupy 586 MB as raw float32 fits in 108 MB on disk with TQDB b=4, or just 59 MB with b=2 — enabling laptop-scale RAG over millions of documents without a dedicated server.
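The raw-float32 figure is plain arithmetic, and the quantized sizes can be sanity-checked the same way; the on-disk numbers above exceed the MSE code bytes alone because they also include QJL codes, IDs, and metadata:

```python
# Raw float32 footprint: n vectors x d dims x 4 bytes each.
n, d = 100_000, 1536
raw_mib = n * d * 4 / 2**20
print(f"raw float32: {raw_mib:.0f} MiB")        # 586 MiB, matching the figure above

# MSE code bytes alone at b bits per coordinate (the on-disk totals add
# QJL codes, IDs, and metadata on top of this):
for b in (2, 4):
    print(f"b={b}: {n * d * b / 8 / 2**20:.1f} MiB of codes")
```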

Two deployment modes:

  • Embedded — tqdb Python package (pip install tqdb), runs in-process (no daemon)
  • Server — Axum HTTP service in server/, with multi-tenancy, RBAC, quotas, and async jobs

Key Properties

  • Zero training — No train() step. Vectors are quantized and stored immediately on insert.
  • 5–10× compression — b=4 reduces 1536-dim float32 embeddings from 586 MB to 108 MB (5.4×); b=2 reaches 59 MB (9.9×) at 100k vectors.
  • Unbiased scoring — QJL transform guarantees unbiased inner product estimation.
  • Optional ANN index — Build an HNSW graph after loading data for fast approximate search.
  • Metadata filtering — MongoDB-style filter operators on any metadata field.
  • Crash recovery — Write-ahead log (WAL) ensures durability without explicit flushing.
  • Python native — Built with PyO3 and Maturin; no server or sidecar required.

Installation

Prerequisites

  • Rust stable toolchain
  • Python 3.10+
  • C++ compiler: Visual Studio Build Tools (Windows) · xcode-select --install (macOS) · build-essential (Linux)

Build from source

python -m venv venv
source venv/bin/activate        # Windows: .\venv\Scripts\activate
pip install maturin
maturin develop --release

Install pre-built wheel

pip install tqdb

Recommended Setup

Three presets covering the main use cases — pick one and you're ready:

from turboquantdb import Database

# High Quality — best recall, exact reranking
db = Database.open(path, dimension=DIM, bits=4, rerank=True, rerank_precision="f16")
db.create_index(max_degree=32, ef_construction=200, n_refinements=8)
results = db.search(query, top_k=10, ann_search_list_size=200)
# ~100% Recall@10 at 100k×1536  |  401 MB disk  |  38ms p50 (brute-force)

# Balanced — recommended default (dequant reranking, zero extra disk)
db = Database.open(path, dimension=DIM, bits=4, rerank=True)
db.create_index(max_degree=32, ef_construction=200, n_refinements=5)
results = db.search(query, top_k=10, ann_search_list_size=200)
# ~99.4% Recall@5 at 100k×1536  |  117 MB disk  |  59ms rerank / 8ms no-rerank

# Fast ANN — lowest latency, good recall
db = Database.open(path, dimension=DIM, bits=4, rerank=False)
db.create_index(max_degree=32, ef_construction=200, n_refinements=5)
results = db.search(query, top_k=10, ann_search_list_size=200)
# ~96% Recall@10 at 100k×1536  |  117 MB disk  |  8ms p50

Full parameter reference: docs/PYTHON_API.md


Quick Start

import numpy as np
from turboquantdb import Database

db = Database.open("./my_db", dimension=1536, bits=4, metric="ip", rerank=True)

db.insert("doc-1", np.random.randn(1536).astype("f4"), metadata={"topic": "ml"}, document="Machine learning intro")
db.insert("doc-2", np.random.randn(1536).astype("f4"), metadata={"topic": "systems"}, document="Rust memory model")

results = db.search(np.random.randn(1536).astype("f4"), top_k=5)
for r in results:
    print(r["id"], r["score"], r["document"])

Python API

Full reference: docs/PYTHON_API.md

# Open / create
db = Database.open(path, dimension, bits=4, seed=42, metric="ip",
                   rerank=True, fast_mode=False, rerank_precision=None,
                   collection=None)   # collection= → opens path/collection/

# Write
db.insert(id, vector, metadata=None, document=None)
db.insert_batch(ids, vectors, metadatas=None, documents=None, mode="insert")  # "insert"|"upsert"|"update"
db.upsert(id, vector, metadata=None, document=None)
db.update(id, vector, metadata=None, document=None)        # RuntimeError if not found
db.update_metadata(id, metadata=None, document=None)       # RuntimeError if not found

# Delete & retrieve
db.delete(id)                        # → bool
db.delete_batch(ids)                 # → int (count deleted)
db.get(id)                           # → {id, metadata, document} | None
db.get_many(ids)                     # → list[dict | None]
db.list_all()                        # → list[str]
db.list_ids(where_filter=None, limit=None, offset=0)       # paginated
db.count(filter=None)                # → int
db.stats()                           # → dict
len(db) / "id" in db                 # container protocol

# Search
results = db.search(query, top_k=10, filter=None, _use_ann=True,
                    ann_search_list_size=None, include=None)
# include: list of "id"|"score"|"metadata"|"document" (default all)

all_results = db.query(query_embeddings, n_results=10, where_filter=None)
# query_embeddings: np.ndarray (N, D) — returns list[list[dict]]

# Index
db.create_index(max_degree=32, ef_construction=200, n_refinements=5,
                search_list_size=128, alpha=1.2)

# Metadata filter operators
# $eq $ne $gt $gte $lt $lte $in $nin $exists $contains $and $or
db.search(query, top_k=5, filter={"year": {"$gte": 2023}})
db.search(query, top_k=5, filter={"$and": [{"topic": "ml"}, {"year": {"$gte": 2023}}]})
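The operator semantics can be sketched with a small evaluator. This is not TurboQuantDB's implementation, only an illustration of what each operator matches, assuming flat metadata dicts:

```python
# Illustrative evaluator for the MongoDB-style filter operators listed above.
# NOT the library's code -- just a sketch of the matching semantics.

def matches(meta: dict, flt: dict) -> bool:
    for key, cond in flt.items():
        if key == "$and":
            if not all(matches(meta, c) for c in cond):
                return False
        elif key == "$or":
            if not any(matches(meta, c) for c in cond):
                return False
        elif isinstance(cond, dict):                  # {"field": {"$op": arg}}
            val = meta.get(key)
            for op, arg in cond.items():
                ok = {
                    "$eq":  lambda: val == arg,
                    "$ne":  lambda: val != arg,
                    "$gt":  lambda: val is not None and val > arg,
                    "$gte": lambda: val is not None and val >= arg,
                    "$lt":  lambda: val is not None and val < arg,
                    "$lte": lambda: val is not None and val <= arg,
                    "$in":  lambda: val in arg,
                    "$nin": lambda: val not in arg,
                    "$exists":   lambda: (key in meta) == arg,
                    "$contains": lambda: val is not None and arg in val,
                }[op]()
                if not ok:
                    return False
        elif meta.get(key) != cond:                   # bare value means $eq
            return False
    return True

doc = {"topic": "ml", "year": 2024}
print(matches(doc, {"year": {"$gte": 2023}}))                               # True
print(matches(doc, {"$and": [{"topic": "ml"}, {"year": {"$lt": 2023}}]}))   # False
```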

Minimum Disk — compress aggressively

The High Quality and Balanced presets above cover most workloads; when disk is the binding constraint, b=2 roughly halves the footprint again at a modest recall cost:

db = Database.open(path, dimension=DIM, bits=2, rerank=True)
db.create_index(max_degree=32, ef_construction=200, n_refinements=5)
results = db.search(query, top_k=10, ann_search_list_size=200)
# 96.4% Recall@10 at 100k×1536  |  68 MB disk (8.7× smaller than float32)  |  7ms p50

Benchmarks

Measured on DBpedia OpenAI3 embeddings (Qdrant/dbpedia-entities-openai3-text-embedding-3-large-1536-1M) — real 1536-dim embeddings, n=100k vectors, 500 queries, reporting Recall@1 and Recall@10. HNSW uses M=32, ef_construction=200.

[Benchmark plots]

Recall@1 and Recall@10 at n=100k × 1536-dim, all modes:

Mode                           Recall@1   Recall@10   Disk     Compression   p50 latency
b=4 brute, no-rerank           93.6%      100%        108 MB   5.4×          54ms
b=4 brute, dequant-rerank      93.2%      100%        108 MB   5.4×          45ms
b=4 brute, f16-rerank          100%       100%        401 MB   1.5×          38ms
b=2 brute, no-rerank           81.4%      100%        59 MB    9.9×          41ms
b=2 brute, dequant-rerank      84.4%      100%        59 MB    9.9×          46ms
b=4 HNSW M=32, no-rerank       89.8%      96.0%       117 MB   5.0×          8ms
b=4 HNSW M=32, dequant-rerank  92.6%      99.4%       117 MB   5.0×          59ms
b=2 HNSW M=32, no-rerank       78.6%      96.4%       68 MB    8.7×          7ms
float32 brute (reference)      100%       100%        586 MB   1.0×          ~120ms

dequant-rerank = rerank from codebook (zero extra disk). f16-rerank = store float16 raw vectors (+293 MB).
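The +293 MB quoted for f16-rerank is simply the raw vectors kept at float16 precision, 2 bytes per coordinate:

```python
# Extra disk needed when rerank_precision="f16" stores the raw vectors.
n, d = 100_000, 1536
f16_mib = n * d * 2 / 2**20        # float16: 2 bytes per coordinate
print(f"{f16_mib:.0f} MiB extra")  # 293 MiB
```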

Reproduction: build the extension with maturin develop --release, then run python benchmarks/run_recall_bench.py — requires pip install datasets tqdb


RAG Integration

from turboquantdb.rag import TurboQuantRetriever

retriever = TurboQuantRetriever(db_path="./rag_db", dimension=1536, bits=4)
retriever.add_texts(texts=texts, embeddings=embeddings, metadatas=metadatas)

results = retriever.similarity_search(query_embedding=query_vec, k=5)
for r in results:
    print(r["score"], r["text"])

Architecture

TurboQuantDB is an embedded database — it runs in-process with no daemon.

./my_db/
├── manifest.json        — DB config (dimension, bits, seed, metric)
├── quantizer.bin        — Serialized quantizer state
├── live_codes.bin       — Memory-mapped quantized vectors (hot path)
├── live_vectors.bin     — Raw vectors for exact reranking (only if rerank_precision="f16" or "f32")
├── wal.log              — Write-ahead log
├── metadata.bin         — Per-vector metadata and documents
├── live_ids.bin         — ID → slot index
├── graph.bin            — HNSW adjacency list (if index built)
└── seg-XXXXXXXX.bin     — Immutable flushed segment files

Write path: insert() → quantize (QR rotation → MSE → Gaussian QJL) → WAL → live_codes.bin → flush to segment

Search (brute-force): query → precompute lookup tables → score all live vectors → top-k

Search (ANN): query → HNSW beam search → rerank → top-k

Quantization is a two-stage pipeline:

  1. MSE — QR rotation + Lloyd-Max scalar quantization at b bits per coordinate
  2. QJL — dense Gaussian projection, 1-bit quantized, bit-packed

The combination gives unbiased inner product estimates with near-optimal distortion, requiring no training data.
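The unbiasedness of the QJL half can be illustrated in a few lines of NumPy. This is a simplified toy (unit-norm vectors, no bit-packing, no MSE stage), not the library's implementation: only the signs of a Gaussian sketch of the stored vector are kept, and rescaling the sign/query correlation recovers the inner product without bias.

```python
import numpy as np

rng = np.random.default_rng(0)
d, m = 64, 20_000                  # vector dimension, number of Gaussian projections

x = rng.standard_normal(d); x /= np.linalg.norm(x)   # stored vector (unit norm)
q = rng.standard_normal(d); q /= np.linalg.norm(q)   # query, kept at full precision

G = rng.standard_normal((m, d))
codes = np.sign(G @ x)             # only 1 bit per projection is stored for x

# For g ~ N(0, I) and unit-norm x:  E[sign(g@x) * (g@q)] = sqrt(2/pi) * (x@q),
# so rescaling by sqrt(pi/2)/m gives an unbiased inner-product estimate.
est = np.sqrt(np.pi / 2) / m * (codes @ (G @ q))

print(x @ q, est)                  # estimate tracks the true inner product
```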

What comes from the paper vs. what is added here

The TurboQuant paper contributes the quantization algorithm — how to compress vectors and estimate inner products accurately. Its experiments use flat (exhaustive) search: all database vectors are scored against every query using the LUT-based asymmetric scorer. The paper's "indexing time virtually zero" claim refers to the quantizer requiring no training data, not to graph construction.

From the paper: two-stage MSE + QJL quantization, QR rotation, Lloyd-Max codebook, asymmetric LUT scoring, unbiased inner product estimation.

Added by TurboQuantDB (not in the paper): WAL persistence, memory-mapped storage, metadata/documents, HNSW graph index, reranking, Python bindings, and the HTTP server.

The brute-force search path (_use_ann=False) is the paper-conformant mode — it scores all vectors using TurboQuant's LUT scorer, matching the paper's experimental setup exactly. The HNSW index is a practical engineering addition that reduces the candidate set before scoring, enabling sub-linear search at the cost of approximate recall.
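The LUT-based asymmetric scoring used on that path can be sketched as follows. This is an illustrative simplification (a uniform scalar codebook standing in for the Lloyd-Max one, no rotation or bit-packing): each query precomputes a (dimension × levels) table of partial products, and every stored vector is then scored by table lookups instead of decoding to floats.

```python
import numpy as np

rng = np.random.default_rng(1)
n, d, b = 1000, 32, 4
levels = 2 ** b

# Toy per-dimension uniform codebook standing in for the Lloyd-Max one.
codebook = np.linspace(-3, 3, levels)                                    # (levels,)

X = rng.standard_normal((n, d))
codes = np.abs(X[:, :, None] - codebook[None, None, :]).argmin(axis=2)   # (n, d) small ints

q = rng.standard_normal(d)

# Asymmetric LUT scoring: one (d x levels) table of partial products per query;
# each stored vector is then scored by d table lookups plus a sum.
lut = q[:, None] * codebook[None, :]                     # (d, levels)
scores = lut[np.arange(d)[None, :], codes].sum(axis=1)   # (n,)

# Identical to decoding each vector and taking a dot product:
assert np.allclose(scores, codebook[codes] @ q)
```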

Module Map

Path                          Responsibility
src/python/mod.rs             Database class — Python-facing API
src/storage/engine.rs         TurboQuantEngine — insert/search/delete orchestration
src/storage/wal.rs            Write-ahead log
src/storage/segment.rs        Immutable append-only segments
src/storage/live_codes.rs     Memory-mapped hot vector cache
src/storage/graph.rs          HNSW graph index
src/quantizer/prod.rs         ProdQuantizer — MSE + QJL orchestrator
src/quantizer/mse.rs          MseQuantizer — QR rotation + Lloyd-Max codebook
src/quantizer/qjl.rs          QjlQuantizer — 1-bit Gaussian projection, bit-packed
python/turboquantdb/rag.py    TurboQuantRetriever — LangChain-style wrapper
server/                       Optional Axum HTTP service (separate Cargo workspace)

Server Mode

Status: experimental. The server crate compiles and the core endpoints work, but it has not been hardened for production use. The embedded library (tqdb Python package, from turboquantdb import Database) is the primary supported interface.

An optional Axum-based HTTP server is available in server/ for multi-tenant deployments. It adds API key authentication, quota enforcement, and async job management (compaction, index building, snapshots).

cd server && cargo build --release
TQ_SERVER_ADDR=0.0.0.0:8080 TQ_LOCAL_ROOT=./data ./target/release/turboquantdb-server

See server/README.md for the full endpoint reference. Key env vars:

Variable         Default          Description
TQ_SERVER_ADDR   127.0.0.1:8080   Bind address
TQ_LOCAL_ROOT    ./data           Storage root
TQ_JOB_WORKERS   2                Async job thread count

Performance Roadmap

The current implementation already uses AVX2 SIMD for FWHT, the MSE centroid scan, and the QJL bit-unpack inner product.

GPU acceleration — batch ingest would benefit from cuBLAS GEMM (~3–5× for large batches on high-end cards). The ANN search path is memory-bound, not compute-bound, so GPU benefit there is minimal; the bottleneck is random cache misses during HNSW graph traversal rather than floating-point throughput.

AVX-512 codebook scan — on modern Intel CPUs the MSE centroid lookup can be vectorized 2× wider with AVX-512, potentially halving scoring latency per batch.

Persistent HNSW — incremental graph updates (no full rebuild after each ingest batch) would allow streaming use cases without periodic create_index() calls.


Research Basis

This is an independent implementation of ideas from the TurboQuant paper. The algorithm itself was authored by the original researchers.

Zandieh, A., Daliri, M., Hadian, M., & Mirrokni, V. (2025). TurboQuant: Online Vector Quantization with Near-optimal Distortion Rate. arXiv:2504.19874

@article{zandieh2025turboquant,
  title={TurboQuant: Online Vector Quantization with Near-optimal Distortion Rate},
  author={Zandieh, Amir and Daliri, Majid and Hadian, Majid and Mirrokni, Vahab},
  journal={arXiv preprint arXiv:2504.19874},
  year={2025}
}

License

Apache License 2.0 — see LICENSE.
