
Embedded vector database using the TurboQuant algorithm (arXiv:2504.19874) — zero training, 2-4 bit compression, fast inner-product search

Project description

TurboQuantDB


An embedded vector database with a Python API, built around the TurboQuant algorithm (arXiv:2504.19874) — two-stage quantization that achieves near-optimal vector compression with zero training time.

Goal: make massive embedding datasets practical on lightweight hardware. A 100k-vector, 1536-dim collection that would occupy 586 MB as raw float32 fits in 108 MB on disk with TQDB b=4, or just 59 MB with b=2 — enabling laptop-scale RAG over millions of documents without a dedicated server.
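
The arithmetic behind that claim is easy to check: raw float32 costs n · d · 4 bytes, while b-bit codes need n · d · b / 8 bytes; the on-disk figures above are somewhat larger than the bare code size because each vector also carries per-vector scalars and IDs. A quick sanity check:

```python
# Raw float32 footprint vs. nominal b-bit code size for 100k x 1536-dim vectors.
n, d = 100_000, 1536

raw_bytes = n * d * 4          # float32 = 4 bytes per component
raw_mib = raw_bytes / 2**20
print(f"raw float32: {raw_mib:.0f} MiB")       # ~586 MiB

for b in (4, 2):
    code_bytes = n * d * b // 8  # b bits per component
    print(f"b={b} codes alone: {code_bytes / 2**20:.0f} MiB")

# The reported on-disk sizes (108 MB at b=4, 59 MB at b=2) exceed the bare
# code size because of per-vector overhead (scales/norms, IDs, metadata).
```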

Two deployment modes:

  • Embedded — tqdb Python package (pip install tqdb), runs in-process (no daemon)
  • Server — Axum HTTP service in server/, with multi-tenancy, RBAC, quotas, and async jobs

Key Properties

  • Zero training — No train() step. Vectors are quantized and stored immediately on insert.
  • 5–10× compression — b=4 reduces 1536-dim float32 embeddings from 586 MB to 108 MB (5.4×); b=2 reaches 59 MB (9.9×) at 100k vectors.
  • Two quantizer modes — default (dense, best recall) and a faster ingest variant (srht) for streaming/high-d workloads. See docs/QUANTIZER_MODES.md for a full breakdown.
  • Optional ANN index — Build an HNSW graph after loading data for fast approximate search.
  • Hybrid retrieval — Built-in BM25 keyword index fuses with dense search via RRF (db.search(..., hybrid={"text": "..."})). Pure-dense behaviour is unchanged when the kwarg is omitted.
  • Multi-vector / ColBERT — MultiVectorStore for late-interaction retrieval with N token vectors per document and MaxSim scoring (Σ_i max_j <q_i, d_j>). See docs/MULTI_VECTOR.md.
  • Framework integrations — native VectorStore for LangChain v2 and LlamaIndex, plus an asyncio-friendly AsyncDatabase.
  • Drop-in migration — python -m tqdb.migrate {chroma|lancedb} <src> <dst> imports an existing collection in one command. See docs/MIGRATION.md.
  • Metadata filtering — MongoDB-style filter operators on any metadata field.
  • Crash recovery — Write-ahead log (WAL) ensures durability without explicit flushing.
  • Python native — pip install tqdb; no server or sidecar required.
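
The MaxSim rule in the multi-vector bullet above, Σ_i max_j <q_i, d_j>, is compact enough to sketch in plain Python. This illustrates the scoring rule only, not the store's actual implementation:

```python
# MaxSim late-interaction scoring: for each query token vector, take its
# best inner product over all document token vectors, then sum.
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def maxsim(query_tokens, doc_tokens):
    return sum(max(dot(q, d) for d in doc_tokens) for q in query_tokens)

q = [[1.0, 0.0], [0.0, 1.0]]                 # 2 query token vectors
doc = [[0.9, 0.1], [0.2, 0.8], [0.5, 0.5]]   # 3 document token vectors
print(round(maxsim(q, doc), 3))  # 1.7 = best match for q1 (0.9) + best for q2 (0.8)
```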

Installation

pip install tqdb

Building from source (Rust toolchain required): see DEVELOPMENT.md.


Config Advisor

The interactive Config Advisor selects the best configuration for your embedding dimension and use case (RAG, search-at-scale, edge deployment, etc.), scored against real benchmark data with adjustable priority weights for recall, compression, and speed.



Recommended Setup

rerank=True stores raw INT8 vectors alongside compressed codes for exact second-pass rescoring. fast_mode=True (default) uses MSE-only quantization — optimal for d < 1536.

from tqdb import Database

# Best recall, any dimension — brute-force
db = Database.open(path, dimension=DIM, bits=4, rerank=True)   # INT8 rerank storage
results = db.search(query, top_k=10)
# GloVe-200 (d=200):     R@1 ≈ 1.00  |  ~30 MB disk
# arXiv-768 (d=768):     R@1 ≈ 0.98  |  ~116 MB disk
# DBpedia-1536 (d=1536): R@1 ≈ 0.95  |  ~231 MB disk

# Best recall, high-d (d ≥ 1536) — also enable QJL residuals
db = Database.open(path, dimension=1536, bits=4, rerank=True, fast_mode=False)

# Minimum disk — MSE codes only (library default, no extra vector storage)
db = Database.open(path, dimension=DIM, bits=4)

# Low latency at N ≥ 100k — HNSW index
db = Database.open(path, dimension=DIM, bits=4, rerank=True)
db.create_index()
results = db.search(query, top_k=10, _use_ann=True)       # p50 < 10ms

# Tune rerank oversampling at query time (default 10×)
results = db.search(query, top_k=10, rerank_factor=20)    # higher recall, higher latency

Full configuration guide: docs/CONFIGURATION.md | Python API: docs/PYTHON_API.md
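
For intuition, per-vector-scaled INT8 storage of the kind rerank=True describes can be sketched as follows. This is a plausible illustration of the scheme, not the library's actual on-disk format:

```python
# Per-vector scaled INT8 quantization: store one float scale per vector plus
# one signed byte per component; rescoring dequantizes on the fly.
def int8_encode(vec):
    scale = max(abs(x) for x in vec) / 127.0 or 1.0
    codes = [round(x / scale) for x in vec]   # each code fits in [-127, 127]
    return scale, codes

def int8_decode(scale, codes):
    return [c * scale for c in codes]

v = [0.12, -0.53, 0.97, -0.08]
scale, codes = int8_encode(v)
approx = int8_decode(scale, codes)
err = max(abs(a - b) for a, b in zip(v, approx))
print(err <= scale / 2 + 1e-12)  # True: error is at most half a quantization step
```

Because the scale adapts per vector, a second rescoring pass over these codes is far more accurate than the 2-4 bit search codes, which is what makes rerank oversampling effective.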


Quick Start

import numpy as np
from tqdb import Database

db = Database.open("./my_db", dimension=1536, bits=4, metric="ip", rerank=True)

db.insert("doc-1", np.random.randn(1536).astype("f4"), metadata={"topic": "ml"}, document="Machine learning intro")
db.insert("doc-2", np.random.randn(1536).astype("f4"), metadata={"topic": "systems"}, document="Rust memory model")

results = db.search(np.random.randn(1536).astype("f4"), top_k=5)
for r in results:
    print(r["id"], r["score"], r["document"])

Python API

Full reference: docs/PYTHON_API.md

# Open / create
db = Database.open(path, dimension, bits=4, seed=42, metric="ip",
                   rerank=True, fast_mode=False, rerank_precision=None,
                   collection=None, wal_flush_threshold=None,
                   quantizer_type=None)  # None/"dense" = default (Haar QR + Gaussian); "srht" = fast O(d log d) ingest
# NOTE: rerank=True with rerank_precision=None uses per-vector-scaled INT8 reranking (default),
#       which is approximate. Use rerank_precision="f16" or "f32" for higher-precision rescoring.
#       rerank_factor (default 10× brute / 20× ANN) controls oversampling.

# Write
db.insert(id, vector, metadata=None, document=None)
db.insert_batch(ids, vectors, metadatas=None, documents=None, mode="insert")  # "insert"|"upsert"|"update"
db.upsert(id, vector, metadata=None, document=None)
db.update(id, vector, metadata=None, document=None)        # RuntimeError if not found
db.update_metadata(id, metadata=None, document=None)       # RuntimeError if not found

# Delete & retrieve
db.delete(id)                        # → bool
db.delete_batch(ids)                 # → int (count deleted)
db.get(id)                           # → {id, metadata, document} | None
db.get_many(ids)                     # → list[dict | None]
db.list_all()                        # → list[str]
db.list_ids(where_filter=None, limit=None, offset=0)       # paginated
db.count(filter=None)                # → int
db.stats()                           # → dict
len(db) / "id" in db                 # container protocol

# Search — brute-force by default; pass _use_ann=True to use HNSW index
results = db.search(query, top_k=10, filter=None, _use_ann=False,
                    ann_search_list_size=None, rerank_factor=None, include=None,
                    nprobe=None,        # nprobe=N activates IVF routing (see create_coarse_index)
                    hybrid=None)        # hybrid={"text": "...", "weight": 0.5} = sparse+dense via RRF
# include: list of "id"|"score"|"metadata"|"document" (default all)
# ann_search_list_size: HNSW ef_search override (only used when _use_ann=True)
# rerank_factor: candidate oversampling multiplier (default 10 brute / 20 ANN)

# Hybrid (sparse BM25 + dense) — recovers keyword/exact-match queries dense alone misses
results = db.search(query, top_k=10,
                    hybrid={"text": "user query string", "weight": 0.3, "rrf_k": 60})

all_results = db.query(query_embeddings, n_results=10, where_filter=None,
                       rerank_factor=None, include=None,
                       hybrid=None)  # also accepts hybrid={"texts": [str], ...} for per-row text
# query_embeddings: np.ndarray (N, D) — returns list[list[dict]]

# Manual maintenance checkpoint (WAL flush + segment compaction)
db.checkpoint()

# Index
db.create_index(max_degree=32, ef_construction=200, n_refinements=5,
                search_list_size=128, alpha=1.2)

# IVF coarse routing (fast approximate search at large N)
db.create_coarse_index(n_clusters=256)          # build once after loading data
results = db.search(query, top_k=10, nprobe=16) # score ~6% of corpus

# Metadata filter operators — $in/$nin/$or use index fast paths (O(1) per field)
# $eq $ne $gt $gte $lt $lte $in $nin $exists $and $or $contains
db.search(query, top_k=5, filter={"year": {"$gte": 2023}})
db.search(query, top_k=5, filter={"$and": [{"topic": "ml"}, {"year": {"$gte": 2023}}]})
db.search(query, top_k=5, filter={"topic": {"$in": ["ml", "systems"]}})    # O(1) indexed
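
Reciprocal-rank fusion, which the hybrid path uses to merge BM25 and dense rankings, can be sketched in a few lines. This is illustrative only; the exact weight/rrf_k semantics here are assumptions, see docs for the real behaviour:

```python
# RRF: each ranked list contributes 1 / (rrf_k + rank) per document; weighted
# contributions are summed and documents re-sorted by fused score.
def rrf_fuse(dense_ids, sparse_ids, weight=0.5, rrf_k=60):
    scores = {}
    for rank, doc in enumerate(dense_ids, start=1):
        scores[doc] = scores.get(doc, 0.0) + (1 - weight) / (rrf_k + rank)
    for rank, doc in enumerate(sparse_ids, start=1):
        scores[doc] = scores.get(doc, 0.0) + weight / (rrf_k + rank)
    return sorted(scores, key=scores.get, reverse=True)

dense = ["a", "b", "c"]    # ids ranked by dense (vector) search
sparse = ["c", "a", "d"]   # ids ranked by BM25
print(rrf_fuse(dense, sparse))  # documents present in both lists rise to the top
```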

Dataset Recovery (WAL)

TurboQuantDB replays wal.log automatically on reopen. To recover manually after a local crash or power loss:

  1. Stop all writers to the DB directory.
  2. Make a copy of the DB folder (manifest.json, live_codes.bin, live_ids.bin, wal.log, etc.).
  3. Reopen the DB normally:
    db = Database.open("./my_db")
    
  4. Validate state:
    • db.stats()["vector_count"]
    • sample db.get(...) / db.search(...)
  5. Persist a clean post-recovery state:
    db.checkpoint()   # flush WAL + compact
    db.close()
    

If files are corrupted beyond WAL replay, restore from a snapshot/backup copy (server mode also supports snapshot/restore jobs; see docs/SERVER_API.md).


Benchmarks

Three datasets, 100k vectors each, matching arXiv:2504.19874 Figure 5. Benchmark config: quantizer_type=None (dense), fast_mode=True, rerank=True (MSE-only, matching paper Figure 5 bit allocation).

Benchmark recall curves — TQDB vs paper

Key results at 100k × d=1536 (DBpedia), brute-force, b=4, rerank=True:

| Metric | Value |
| --- | --- |
| Recall@1 | 92.2% |
| Recall@4 | 99.9% |
| Disk | 108 MB (5.4× compression) |
| p50 latency | ~51 ms |

Full tables (all 8 configs × 3 datasets), ANN guidance, and reproduction steps: docs/BENCHMARKS.md

Rerank unlocks recall at any bit depth

bits=2, rerank=True matches bits=4, rerank=True recall while using ~10% less disk, and outperforms bits=4, rerank=False at lower disk cost. (bit_sweep, n=10k, brute-force, fast_mode=True)

| Dataset | b=2, no rerank | b=4, no rerank | b=2 + rerank | b=4 + rerank |
| --- | --- | --- | --- | --- |
| GloVe-200 (d=200) | 0.528 (1.8 MB) | 0.822 (2.3 MB) | 0.992 (3.8 MB) | 0.992 (4.2 MB) |
| arXiv-768 (d=768) | 0.426 (7.4 MB) | 0.696 (9.2 MB) | 0.978 (14.7 MB) | 0.978 (16.6 MB) |
| GIST-960 (d=960) | 0.294 (10.4 MB) | 0.566 (12.7 MB) | 0.974 (19.6 MB) | 0.974 (21.9 MB) |
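
The mechanism behind those numbers is two-pass search: score everything with the compressed codes, keep top_k × rerank_factor candidates, then rescore only those with the stored higher-precision vectors. A minimal pure-Python sketch of that pattern (illustrative, not the library's internals):

```python
# Two-pass rerank: cheap approximate scores select a candidate pool,
# exact scores decide the final top-k order.
def search_with_rerank(approx_scores, exact_scores, top_k=2, rerank_factor=3):
    # Pass 1: rank all ids by the approximate (quantized-code) score.
    pool = sorted(approx_scores, key=approx_scores.get, reverse=True)
    candidates = pool[: top_k * rerank_factor]
    # Pass 2: rescore only the candidates with the higher-precision score.
    return sorted(candidates, key=exact_scores.get, reverse=True)[:top_k]

approx = {"a": 0.9, "b": 0.8, "c": 0.7, "d": 0.6, "e": 0.1}
exact  = {"a": 0.7, "b": 0.9, "c": 0.95, "d": 0.6, "e": 0.2}
print(search_with_rerank(approx, exact))  # ['c', 'b'] — pass 2 fixes pass-1 ordering
```

This is why low-bit codes plus rerank recover recall: the codes only need to keep the true neighbours inside the candidate pool, not rank them perfectly.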

Coverage across d=65–3072

R@1 ≥ 0.87 across all 9 benchmark datasets at b=4, rerank=True, brute-force, fast_mode=True, n=10k:

| Dataset | d | R@1 | Disk | p50 |
| --- | --- | --- | --- | --- |
| lastfm-64 | 65 | 0.874 | 2.0 MB | 1.1 ms |
| deep-96 | 96 | 0.980 | 2.5 MB | 1.2 ms |
| glove-100 | 100 | 0.990 | 2.6 MB | 1.4 ms |
| glove-200 | 200 | 0.992 | 4.2 MB | 1.7 ms |
| nytimes-256 | 256 | 0.992 | 5.2 MB | 2.0 ms |
| arXiv-768 | 768 | 0.978 | 16.6 MB | 7.6 ms |
| GIST-960 | 960 | 0.974 | 21.9 MB | 7.3 ms |
| DBpedia-1536 | 1536 | 0.998 | 41.1 MB | 10.3 ms |
| DBpedia-3072 | 3072 | 1.000 | 117.0 MB | 46.8 ms |

RAG Integration

from tqdb.rag import TurboQuantRetriever

retriever = TurboQuantRetriever(db_path="./rag_db", dimension=1536, bits=4)
retriever.add_texts(texts=texts, embeddings=embeddings, metadatas=metadatas)

results = retriever.similarity_search(query_embedding=query_vec, k=5)
for r in results:
    print(r["score"], r["text"])

Server Mode

An optional Axum HTTP server in server/ adds multi-tenancy, RBAC, and async jobs. See docs/SERVER_API.md for setup, launch, and the full API reference.

For disaster recovery beyond local WAL replay, see Server Recovery Runbook (Snapshot/Restore) in docs/SERVER_API.md.


Research Basis

This is an independent implementation of ideas from the TurboQuant paper. The algorithm itself was authored by the original researchers.

Zandieh, A., Daliri, M., Hadian, M., & Mirrokni, V. (2025). TurboQuant: Online Vector Quantization with Near-optimal Distortion Rate. arXiv:2504.19874

@article{zandieh2025turboquant,
  title={TurboQuant: Online Vector Quantization with Near-optimal Distortion Rate},
  author={Zandieh, Amir and Daliri, Majid and Hadian, Majid and Mirrokni, Vahab},
  journal={arXiv preprint arXiv:2504.19874},
  year={2025}
}

License

Apache License 2.0 — see LICENSE.



Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

tqdb-0.8.0.tar.gz (1.2 MB)

Uploaded: Source

Built Distributions

If you're not sure about the file name format, learn more about wheel file names.

tqdb-0.8.0-cp313-cp313-win_amd64.whl (4.3 MB)

Uploaded: CPython 3.13 · Windows x86-64

tqdb-0.8.0-cp313-cp313-manylinux_2_28_aarch64.whl (1.5 MB)

Uploaded: CPython 3.13 · manylinux: glibc 2.28+ ARM64

tqdb-0.8.0-cp313-cp313-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (4.7 MB)

Uploaded: CPython 3.13 · manylinux: glibc 2.17+ x86-64

tqdb-0.8.0-cp313-cp313-macosx_10_12_x86_64.macosx_11_0_arm64.macosx_10_12_universal2.whl (5.7 MB)

Uploaded: CPython 3.13 · macOS 10.12+ universal2 (ARM64, x86-64) · macOS 10.12+ x86-64 · macOS 11.0+ ARM64

tqdb-0.8.0-cp312-cp312-win_amd64.whl (4.3 MB)

Uploaded: CPython 3.12 · Windows x86-64

tqdb-0.8.0-cp312-cp312-manylinux_2_28_aarch64.whl (1.5 MB)

Uploaded: CPython 3.12 · manylinux: glibc 2.28+ ARM64

tqdb-0.8.0-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (4.7 MB)

Uploaded: CPython 3.12 · manylinux: glibc 2.17+ x86-64

tqdb-0.8.0-cp312-cp312-macosx_10_12_x86_64.macosx_11_0_arm64.macosx_10_12_universal2.whl (5.7 MB)

Uploaded: CPython 3.12 · macOS 10.12+ universal2 (ARM64, x86-64) · macOS 10.12+ x86-64 · macOS 11.0+ ARM64

tqdb-0.8.0-cp311-cp311-win_amd64.whl (4.3 MB)

Uploaded: CPython 3.11 · Windows x86-64

tqdb-0.8.0-cp311-cp311-manylinux_2_28_aarch64.whl (1.5 MB)

Uploaded: CPython 3.11 · manylinux: glibc 2.28+ ARM64

tqdb-0.8.0-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (4.7 MB)

Uploaded: CPython 3.11 · manylinux: glibc 2.17+ x86-64

tqdb-0.8.0-cp311-cp311-macosx_10_12_x86_64.macosx_11_0_arm64.macosx_10_12_universal2.whl (5.7 MB)

Uploaded: CPython 3.11 · macOS 10.12+ universal2 (ARM64, x86-64) · macOS 10.12+ x86-64 · macOS 11.0+ ARM64

tqdb-0.8.0-cp310-cp310-win_amd64.whl (4.3 MB)

Uploaded: CPython 3.10 · Windows x86-64

tqdb-0.8.0-cp310-cp310-manylinux_2_28_aarch64.whl (1.5 MB)

Uploaded: CPython 3.10 · manylinux: glibc 2.28+ ARM64

tqdb-0.8.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (4.7 MB)

Uploaded: CPython 3.10 · manylinux: glibc 2.17+ x86-64

tqdb-0.8.0-cp310-cp310-macosx_10_12_x86_64.macosx_11_0_arm64.macosx_10_12_universal2.whl (5.7 MB)

Uploaded: CPython 3.10 · macOS 10.12+ universal2 (ARM64, x86-64) · macOS 10.12+ x86-64 · macOS 11.0+ ARM64

File details

Details for the file tqdb-0.8.0.tar.gz.

File metadata

  • Download URL: tqdb-0.8.0.tar.gz
  • Upload date:
  • Size: 1.2 MB
  • Tags: Source
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: maturin/1.13.1

File hashes

Hashes for tqdb-0.8.0.tar.gz
Algorithm Hash digest
SHA256 9f4738a75eca45456f51897679251d76dfb7b0de0ca296f0fe4b5661d931e215
MD5 c0b4aa49b45fa6f5a5c9fa5e73393692
BLAKE2b-256 4879bef01c7f0144930a2706ab4c6298f8379a9ec07863aaa4540e8daff8fd30


File details

Details for the file tqdb-0.8.0-cp313-cp313-win_amd64.whl.

File metadata

  • Download URL: tqdb-0.8.0-cp313-cp313-win_amd64.whl
  • Upload date:
  • Size: 4.3 MB
  • Tags: CPython 3.13, Windows x86-64
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: maturin/1.13.1

File hashes

Hashes for tqdb-0.8.0-cp313-cp313-win_amd64.whl
Algorithm Hash digest
SHA256 df225f74b817ba097d1f0a6c346039876cfdac259739f5eb9fb8ffc52f007118
MD5 76533ac25452c6ce50213caf9b21ca02
BLAKE2b-256 b3f5457eada600dd830e4f94607ff901ba7398553e00d4e1b60a4e79fa04bfb1


File details

Details for the file tqdb-0.8.0-cp313-cp313-manylinux_2_28_aarch64.whl.

File metadata

File hashes

Hashes for tqdb-0.8.0-cp313-cp313-manylinux_2_28_aarch64.whl
Algorithm Hash digest
SHA256 7ef2419a9937f0af81621a1cd46c040bccaafa9b6e4b522cfdd02671aea875fe
MD5 1b5d6820c3147074a8c362b19da5730d
BLAKE2b-256 5144236881ed02cb0d201b1a2649db4234098a173cf61d26e7bfb1ce362a5d2f


File details

Details for the file tqdb-0.8.0-cp313-cp313-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.

File metadata

File hashes

Hashes for tqdb-0.8.0-cp313-cp313-manylinux_2_17_x86_64.manylinux2014_x86_64.whl
Algorithm Hash digest
SHA256 dbb6b1316af60c6661d9e649aa4214b2b683e7c128aaa2d01591eeb97dab8e36
MD5 e0f22178151bc625ea1257cf0f2c02d3
BLAKE2b-256 156cc8042218d393eb45d1af53cd78ed7bfd14a9affac81358feef9604e8bd27


File details

Details for the file tqdb-0.8.0-cp313-cp313-macosx_10_12_x86_64.macosx_11_0_arm64.macosx_10_12_universal2.whl.

File metadata

File hashes

Hashes for tqdb-0.8.0-cp313-cp313-macosx_10_12_x86_64.macosx_11_0_arm64.macosx_10_12_universal2.whl
Algorithm Hash digest
SHA256 4c8aafedb5ba8ab4151d61efb9c861f648c9d524216dd3e6fd1a5c2192cf3a26
MD5 aa302d69150887488db73b0dbb1787d2
BLAKE2b-256 c687bd37fd040076d8c8dfc1742a984940e14101a0eec5b8627aac732471da07


File details

Details for the file tqdb-0.8.0-cp312-cp312-win_amd64.whl.

File metadata

  • Download URL: tqdb-0.8.0-cp312-cp312-win_amd64.whl
  • Upload date:
  • Size: 4.3 MB
  • Tags: CPython 3.12, Windows x86-64
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: maturin/1.13.1

File hashes

Hashes for tqdb-0.8.0-cp312-cp312-win_amd64.whl
Algorithm Hash digest
SHA256 99d3b63fec70ac7a3ec041370dfe5e753714eba268abf0bf86e4444567ae5369
MD5 8d1bc1f31119f86ab22124187f91be70
BLAKE2b-256 de7a4fe1dc2b39fbb3ece39d075a043b3537cf203858ea6654fa23a6c4d00fcf


File details

Details for the file tqdb-0.8.0-cp312-cp312-manylinux_2_28_aarch64.whl.

File metadata

File hashes

Hashes for tqdb-0.8.0-cp312-cp312-manylinux_2_28_aarch64.whl
Algorithm Hash digest
SHA256 b6e7b98f85e5adebf455814464c8debee61caabccfef2f0438e9226966b9d1f9
MD5 fc5c7a380afd3eaf10a440ddff8c5f61
BLAKE2b-256 836948f14f66177c0d56ced6faa67a28a4ab4e4668fb35c7f3d75a18beb629a0


File details

Details for the file tqdb-0.8.0-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.

File metadata

File hashes

Hashes for tqdb-0.8.0-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl
Algorithm Hash digest
SHA256 6b3931fbdc16499a9b33e0a4b510475b9a025ff48024d6f35e77c585979d0c12
MD5 332f0ab8183e6984fef3500870dfa097
BLAKE2b-256 94cc5b38c5ebe9bd47c9cd1f0cd77342839889963ede198b571652290845f943


File details

Details for the file tqdb-0.8.0-cp312-cp312-macosx_10_12_x86_64.macosx_11_0_arm64.macosx_10_12_universal2.whl.

File metadata

File hashes

Hashes for tqdb-0.8.0-cp312-cp312-macosx_10_12_x86_64.macosx_11_0_arm64.macosx_10_12_universal2.whl
Algorithm Hash digest
SHA256 ea60c6ab2f5aa6b466ddef45a1b6b005995d0e61251e6392e674e612369f4b34
MD5 aa05423d0f6045d935dbe106500a6cca
BLAKE2b-256 a339ceb8303d87a9a5c4ad37d837e29f5442e6da6e3f1d69c24baab8d141f6cf


File details

Details for the file tqdb-0.8.0-cp311-cp311-win_amd64.whl.

File metadata

  • Download URL: tqdb-0.8.0-cp311-cp311-win_amd64.whl
  • Upload date:
  • Size: 4.3 MB
  • Tags: CPython 3.11, Windows x86-64
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: maturin/1.13.1

File hashes

Hashes for tqdb-0.8.0-cp311-cp311-win_amd64.whl
Algorithm Hash digest
SHA256 531b1c4573f35214a866c228f2206022a9721a6ec53af46a33e3a8b1fa0a30b4
MD5 16f71520ceede1fb0df26a1dc9d37260
BLAKE2b-256 595819e8c335a8ad15a1e69290ee96bfe868b0c78cadfb074d40e73835555716


File details

Details for the file tqdb-0.8.0-cp311-cp311-manylinux_2_28_aarch64.whl.

File metadata

File hashes

Hashes for tqdb-0.8.0-cp311-cp311-manylinux_2_28_aarch64.whl
Algorithm Hash digest
SHA256 6f7da387f110b508ac954cad01be8921597213878d1297fe431caa7581430504
MD5 b2b7c4bcd16ed0143268aa166c18ad78
BLAKE2b-256 c2902b05c0f99b48ff724b7d9155d49d590afa2ebd7471dce26f47615bf6ede3


File details

Details for the file tqdb-0.8.0-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.

File metadata

File hashes

Hashes for tqdb-0.8.0-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl
Algorithm Hash digest
SHA256 84aa8aac900946b65918d508fca8bd52db13ea0f2702a50d4d1f833801171c62
MD5 3d0a2f4c33a236edcce805c8a017fbc9
BLAKE2b-256 a7dcfea8075640849ee9a4a8b113dd9537238e540b2669ce695bf3d2aaea476c


File details

Details for the file tqdb-0.8.0-cp311-cp311-macosx_10_12_x86_64.macosx_11_0_arm64.macosx_10_12_universal2.whl.

File metadata

File hashes

Hashes for tqdb-0.8.0-cp311-cp311-macosx_10_12_x86_64.macosx_11_0_arm64.macosx_10_12_universal2.whl
Algorithm Hash digest
SHA256 1d2a07de378631d9526c425c1199e13122fca69ee9b1ab1db2b7673932369b4a
MD5 681a7e3c06b4af88dab210c92afc2dfd
BLAKE2b-256 6da8a5c5ca73da14e169a7a085a547bfcc60b15ed894f63ec8b666589ed3cdfc


File details

Details for the file tqdb-0.8.0-cp310-cp310-win_amd64.whl.

File metadata

  • Download URL: tqdb-0.8.0-cp310-cp310-win_amd64.whl
  • Upload date:
  • Size: 4.3 MB
  • Tags: CPython 3.10, Windows x86-64
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: maturin/1.13.1

File hashes

Hashes for tqdb-0.8.0-cp310-cp310-win_amd64.whl
Algorithm Hash digest
SHA256 48a7c935db9e5d574518e4cba3bae6dc8541c23260191ecbcd091c9d466b80d4
MD5 f09f00c305b3d16d750d80c545ec74dc
BLAKE2b-256 07004a2a35bf28b7c09b5230f7736a32d470ee364545f1a9babe6ba226331672


File details

Details for the file tqdb-0.8.0-cp310-cp310-manylinux_2_28_aarch64.whl.

File metadata

File hashes

Hashes for tqdb-0.8.0-cp310-cp310-manylinux_2_28_aarch64.whl
Algorithm Hash digest
SHA256 b5b627c6668baad7cae9d42b0edb1c19ec25e8f24d18e7f33209eeb21580ee8f
MD5 3db8bebe5ee90931f0943eea6d0a4f8f
BLAKE2b-256 4e70dc43da637d2122d8bca024a55225a6b7dbc143b37e04858e372cd2e56982


File details

Details for the file tqdb-0.8.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.

File metadata

File hashes

Hashes for tqdb-0.8.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl
Algorithm Hash digest
SHA256 a41070eacf84801f21fb9b88ed793453799ffde207e2c824968699d39d570f0a
MD5 d54e352e4d4a48bd2723039c6c98ebd8
BLAKE2b-256 48395e69fef524befd09a1a3ebd76deb49af0f6dd99825b0040343cb6a63dd04


File details

Details for the file tqdb-0.8.0-cp310-cp310-macosx_10_12_x86_64.macosx_11_0_arm64.macosx_10_12_universal2.whl.

File metadata

File hashes

Hashes for tqdb-0.8.0-cp310-cp310-macosx_10_12_x86_64.macosx_11_0_arm64.macosx_10_12_universal2.whl
Algorithm Hash digest
SHA256 e6e4796e2ae027270ca8ec47a11df44d77e963decb808b9e8591274ce96f827f
MD5 302a94c84fc7a8739ef3df41e4b3c830
BLAKE2b-256 8142e2148955584b9c81169c297b53b8d3f986b30091dde0017f1e51762381a0

