
Embedded vector database using the TurboQuant algorithm (arXiv:2504.19874) — zero training, 2-4 bit compression, fast inner-product search


TurboQuantDB


An embedded vector database with a Python API, built around the TurboQuant algorithm (arXiv:2504.19874) — two-stage quantization that achieves near-optimal vector compression with zero training time.

Goal: make massive embedding datasets practical on lightweight hardware. A 100k-vector, 1536-dim collection that would occupy 586 MB as raw float32 fits in 108 MB on disk with TQDB b=4, or just 59 MB with b=2 — enabling laptop-scale RAG over millions of documents without a dedicated server.
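The arithmetic behind those figures can be checked in a few lines. This is a back-of-the-envelope sketch: the code bytes alone come out smaller than the published on-disk sizes, which presumably also include IDs, manifest/WAL data, and optional rerank storage.

```python
# Raw float32 footprint of a 100k x 1536-dim collection
n, d = 100_000, 1536
raw_mib = n * d * 4 / 2**20          # 4 bytes per float32 component

# Nominal size of the b-bit quantized codes alone (no overhead)
code_mib = {b: n * d * b / 8 / 2**20 for b in (4, 2)}

print(round(raw_mib))                # ~586 MiB raw
print({b: round(m, 1) for b, m in code_mib.items()})
```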

Two deployment modes:

  • Embedded — tqdb Python package (pip install tqdb), runs in-process (no daemon)
  • Server — Axum HTTP service in server/, with multi-tenancy, RBAC, quotas, and async jobs

Key Properties

  • Zero training — No train() step. Vectors are quantized and stored immediately on insert.
  • 5–10× compression — b=4 reduces 1536-dim float32 embeddings from 586 MB to 108 MB (5.4×); b=2 reaches 59 MB (9.9×) at 100k vectors.
  • Two quantizer modes — default (dense, best recall) and a faster ingest variant (srht) for streaming/high-d workloads. See docs/QUANTIZER_MODES.md for a full breakdown.
  • Optional ANN index — Build an HNSW graph after loading data for fast approximate search.
  • Hybrid retrieval — Built-in BM25 keyword index fuses with dense search via RRF (db.search(..., hybrid={"text": "..."})). Pure-dense behaviour is unchanged when the kwarg is omitted.
  • Metadata filtering — MongoDB-style filter operators on any metadata field.
  • Crash recovery — Write-ahead log (WAL) ensures durability without explicit flushing.
  • Python native — pip install tqdb; no server or sidecar required.
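The RRF fusion behind hybrid retrieval can be illustrated with the standard reciprocal-rank-fusion formula. This is a generic sketch of the technique, not tqdb's internal implementation; the `k` and `weight` parameters mirror the `rrf_k` and `weight` options shown below.

```python
def rrf_fuse(dense_ids, sparse_ids, k=60, weight=0.5):
    """Fuse two ranked ID lists with weighted Reciprocal Rank Fusion.

    score(doc) = weight / (k + rank_dense) + (1 - weight) / (k + rank_sparse)
    """
    scores = {}
    for rank, doc in enumerate(dense_ids, start=1):
        scores[doc] = scores.get(doc, 0.0) + weight / (k + rank)
    for rank, doc in enumerate(sparse_ids, start=1):
        scores[doc] = scores.get(doc, 0.0) + (1 - weight) / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

# "b" appears high in both rankings, so it wins the fused ordering
fused = rrf_fuse(["a", "b", "c"], ["b", "d"], weight=0.5)
```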

Installation

pip install tqdb

Building from source (Rust toolchain required): see DEVELOPMENT.md.


Config Advisor

The interactive Config Advisor selects the best configuration for your embedding dimension and use case (RAG, search-at-scale, edge deployment, etc.), scored against real benchmark data with adjustable priority weights for recall, compression, and speed.
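The weighted scoring idea can be sketched as a simple linear combination over normalized metrics. Everything here is illustrative: the config names, metric values, and weights are made up, not the advisor's actual benchmark data or scoring function.

```python
def score(cfg, w_recall=0.5, w_compress=0.3, w_speed=0.2):
    # Each metric assumed pre-normalized to [0, 1]; higher is better.
    return (w_recall * cfg["recall"]
            + w_compress * cfg["compression"]
            + w_speed * cfg["speed"])

configs = {
    "b=4 + rerank":  {"recall": 0.95, "compression": 0.55, "speed": 0.6},
    "b=2 + rerank":  {"recall": 0.94, "compression": 0.90, "speed": 0.6},
    "b=4 no rerank": {"recall": 0.80, "compression": 0.75, "speed": 0.9},
}
best = max(configs, key=lambda name: score(configs[name]))
```

Shifting the weights toward compression or speed changes the winner, which is exactly the trade-off the advisor exposes.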



Recommended Setup

rerank=True stores raw INT8 vectors alongside compressed codes for exact second-pass rescoring. fast_mode=True (default) uses MSE-only quantization — optimal for d < 1536.

from tqdb import Database

# Best recall, any dimension — brute-force
db = Database.open(path, dimension=DIM, bits=4, rerank=True)   # INT8 rerank storage
results = db.search(query, top_k=10)
# GloVe-200 (d=200):     R@1 ≈ 1.00  |  ~30 MB disk
# arXiv-768 (d=768):     R@1 ≈ 0.98  |  ~116 MB disk
# DBpedia-1536 (d=1536): R@1 ≈ 0.95  |  ~231 MB disk

# Best recall, high-d (d ≥ 1536) — also enable QJL residuals
db = Database.open(path, dimension=1536, bits=4, rerank=True, fast_mode=False)

# Minimum disk — MSE codes only (library default, no extra vector storage)
db = Database.open(path, dimension=DIM, bits=4)

# Low latency at N ≥ 100k — HNSW index
db = Database.open(path, dimension=DIM, bits=4, rerank=True)
db.create_index()
results = db.search(query, top_k=10, _use_ann=True)       # p50 < 10ms

# Tune rerank oversampling at query time (default 10×)
results = db.search(query, top_k=10, rerank_factor=20)    # higher recall, higher latency

Full configuration guide: docs/CONFIGURATION.md | Python API: docs/PYTHON_API.md


Quick Start

import numpy as np
from tqdb import Database

db = Database.open("./my_db", dimension=1536, bits=4, metric="ip", rerank=True)

db.insert("doc-1", np.random.randn(1536).astype("f4"), metadata={"topic": "ml"}, document="Machine learning intro")
db.insert("doc-2", np.random.randn(1536).astype("f4"), metadata={"topic": "systems"}, document="Rust memory model")

results = db.search(np.random.randn(1536).astype("f4"), top_k=5)
for r in results:
    print(r["id"], r["score"], r["document"])

Python API

Full reference: docs/PYTHON_API.md

# Open / create
db = Database.open(path, dimension, bits=4, seed=42, metric="ip",
                   rerank=True, fast_mode=False, rerank_precision=None,
                   collection=None, wal_flush_threshold=None,
                   quantizer_type=None)  # None/"dense" = default (Haar QR + Gaussian); "srht" = fast O(d log d) ingest
# NOTE: rerank=True with rerank_precision=None uses per-vector-scaled INT8 reranking (default),
#       which is approximate. Use rerank_precision="f16" or "f32" for higher-precision rescoring.
#       rerank_factor (default 10× brute / 20× ANN) controls oversampling.

# Write
db.insert(id, vector, metadata=None, document=None)
db.insert_batch(ids, vectors, metadatas=None, documents=None, mode="insert")  # "insert"|"upsert"|"update"
db.upsert(id, vector, metadata=None, document=None)
db.update(id, vector, metadata=None, document=None)        # RuntimeError if not found
db.update_metadata(id, metadata=None, document=None)       # RuntimeError if not found

# Delete & retrieve
db.delete(id)                        # → bool
db.delete_batch(ids)                 # → int (count deleted)
db.get(id)                           # → {id, metadata, document} | None
db.get_many(ids)                     # → list[dict | None]
db.list_all()                        # → list[str]
db.list_ids(where_filter=None, limit=None, offset=0)       # paginated
db.count(filter=None)                # → int
db.stats()                           # → dict
len(db) / "id" in db                 # container protocol

# Search — brute-force by default; pass _use_ann=True to use HNSW index
results = db.search(query, top_k=10, filter=None, _use_ann=False,
                    ann_search_list_size=None, rerank_factor=None, include=None,
                    nprobe=None,        # nprobe=N activates IVF routing (see create_coarse_index)
                    hybrid=None)        # hybrid={"text": "...", "weight": 0.5} = sparse+dense via RRF
# include: list of "id"|"score"|"metadata"|"document" (default all)
# ann_search_list_size: HNSW ef_search override (only used when _use_ann=True)
# rerank_factor: candidate oversampling multiplier (default 10 brute / 20 ANN)

# Hybrid (sparse BM25 + dense) — recovers keyword/exact-match queries dense alone misses
results = db.search(query, top_k=10,
                    hybrid={"text": "user query string", "weight": 0.3, "rrf_k": 60})

all_results = db.query(query_embeddings, n_results=10, where_filter=None,
                       rerank_factor=None, include=None,
                       hybrid=None)  # also accepts hybrid={"texts": [str], ...} for per-row text
# query_embeddings: np.ndarray (N, D) — returns list[list[dict]]

# Manual maintenance checkpoint (WAL flush + segment compaction)
db.checkpoint()

# Index
db.create_index(max_degree=32, ef_construction=200, n_refinements=5,
                search_list_size=128, alpha=1.2)

# IVF coarse routing (fast approximate search at large N)
db.create_coarse_index(n_clusters=256)          # build once after loading data
results = db.search(query, top_k=10, nprobe=16) # score ~6% of corpus

# Metadata filter operators — $in/$nin/$or use index fast paths (O(1) per field)
# $eq $ne $gt $gte $lt $lte $in $nin $exists $and $or $contains
db.search(query, top_k=5, filter={"year": {"$gte": 2023}})
db.search(query, top_k=5, filter={"$and": [{"topic": "ml"}, {"year": {"$gte": 2023}}]})
db.search(query, top_k=5, filter={"topic": {"$in": ["ml", "systems"]}})    # O(1) indexed
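For intuition, the filter semantics can be modeled as a recursive predicate over each record's metadata dict. This is a simplified sketch covering a subset of the operators, not tqdb's engine (which additionally uses per-field indexes for $in/$nin/$or):

```python
def matches(meta, flt):
    """Evaluate a subset of the MongoDB-style filter against a metadata dict."""
    for key, cond in flt.items():
        if key == "$and":
            if not all(matches(meta, c) for c in cond):
                return False
        elif key == "$or":
            if not any(matches(meta, c) for c in cond):
                return False
        elif isinstance(cond, dict):
            val = meta.get(key)
            ops = {"$eq":  lambda v, x: v == x,
                   "$ne":  lambda v, x: v != x,
                   "$gt":  lambda v, x: v is not None and v > x,
                   "$gte": lambda v, x: v is not None and v >= x,
                   "$lt":  lambda v, x: v is not None and v < x,
                   "$lte": lambda v, x: v is not None and v <= x,
                   "$in":  lambda v, x: v in x,
                   "$nin": lambda v, x: v not in x,
                   "$exists": lambda v, x: (key in meta) == x}
            if not all(ops[op](val, arg) for op, arg in cond.items()):
                return False
        else:  # bare value means equality
            if meta.get(key) != cond:
                return False
    return True

ok = matches({"topic": "ml", "year": 2024},
             {"$and": [{"topic": "ml"}, {"year": {"$gte": 2023}}]})
```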

Dataset Recovery (WAL)

TurboQuantDB replays wal.log automatically on reopen. To recover locally from a crash or power loss:

  1. Stop all writers to the DB directory.
  2. Make a copy of the DB folder (manifest.json, live_codes.bin, live_ids.bin, wal.log, etc.).
  3. Reopen the DB normally:
    db = Database.open("./my_db")
    
  4. Validate state:
    • db.stats()["vector_count"]
    • sample db.get(...) / db.search(...)
  5. Persist a clean post-recovery state:
    db.checkpoint()   # flush WAL + compact
    db.close()
    

If files are corrupted beyond WAL replay, restore from a snapshot/backup copy (server mode also supports snapshot/restore jobs; see docs/SERVER_API.md).


Benchmarks

Three datasets, 100k vectors each, matching arXiv:2504.19874 Figure 5. Benchmark config: quantizer_type=None (dense), fast_mode=True, rerank=True (MSE-only, matching paper Figure 5 bit allocation).

Benchmark recall curves — TQDB vs paper

Key results at 100k × d=1536 (DBpedia), brute-force, b=4, rerank=True:

Metric        Value
Recall@1      92.2%
Recall@4      99.9%
Disk          108 MB (5.4× compression)
p50 latency   ~51 ms

Full tables (all 8 configs × 3 datasets), ANN guidance, and reproduction steps: docs/BENCHMARKS.md

Rerank unlocks recall at any bit depth

bits=2, rerank=True matches bits=4, rerank=True recall while using ~10% less disk, and outperforms bits=4, rerank=False at lower disk cost. (bit_sweep, n=10k, brute-force, fast_mode=True)

Dataset             b=2, no rerank    b=4, no rerank    b=2 + rerank      b=4 + rerank
GloVe-200 (d=200)   0.528 (1.8 MB)    0.822 (2.3 MB)    0.992 (3.8 MB)    0.992 (4.2 MB)
arXiv-768 (d=768)   0.426 (7.4 MB)    0.696 (9.2 MB)    0.978 (14.7 MB)   0.978 (16.6 MB)
GIST-960 (d=960)    0.294 (10.4 MB)   0.566 (12.7 MB)   0.974 (19.6 MB)   0.974 (21.9 MB)

Coverage across d=65–3072

R@1 ≥ 0.87 across all 9 benchmark datasets at b=4, rerank=True, brute-force, fast_mode=True, n=10k:

Dataset        d      R@1     Disk       p50
lastfm-64      65     0.874   2.0 MB     1.1 ms
deep-96        96     0.980   2.5 MB     1.2 ms
glove-100      100    0.990   2.6 MB     1.4 ms
glove-200      200    0.992   4.2 MB     1.7 ms
nytimes-256    256    0.992   5.2 MB     2.0 ms
arXiv-768      768    0.978   16.6 MB    7.6 ms
GIST-960       960    0.974   21.9 MB    7.3 ms
DBpedia-1536   1536   0.998   41.1 MB    10.3 ms
DBpedia-3072   3072   1.000   117.0 MB   46.8 ms

RAG Integration

from tqdb.rag import TurboQuantRetriever

retriever = TurboQuantRetriever(db_path="./rag_db", dimension=1536, bits=4)
retriever.add_texts(texts=texts, embeddings=embeddings, metadatas=metadatas)

results = retriever.similarity_search(query_embedding=query_vec, k=5)
for r in results:
    print(r["score"], r["text"])

Server Mode

An optional Axum HTTP server in server/ adds multi-tenancy, RBAC, and async jobs. See docs/SERVER_API.md for setup, launch, and the full API reference.

For disaster recovery beyond local WAL replay, see Server Recovery Runbook (Snapshot/Restore) in docs/SERVER_API.md.


Research Basis

This is an independent implementation of ideas from the TurboQuant paper; credit for the algorithm itself belongs to its original authors.

Zandieh, A., Daliri, M., Hadian, M., & Mirrokni, V. (2025). TurboQuant: Online Vector Quantization with Near-optimal Distortion Rate. arXiv:2504.19874

@article{zandieh2025turboquant,
  title={TurboQuant: Online Vector Quantization with Near-optimal Distortion Rate},
  author={Zandieh, Amir and Daliri, Majid and Hadian, Majid and Mirrokni, Vahab},
  journal={arXiv preprint arXiv:2504.19874},
  year={2025}
}

License

Apache License 2.0 — see LICENSE.


Download files

Download the file for your platform.

Source Distribution

  • tqdb-0.7.0.tar.gz (1.2 MB) — Source

Built Distributions

  • tqdb-0.7.0-cp313-cp313-win_amd64.whl (4.2 MB) — CPython 3.13, Windows x86-64
  • tqdb-0.7.0-cp313-cp313-manylinux_2_28_aarch64.whl (1.5 MB) — CPython 3.13, manylinux (glibc 2.28+), ARM64
  • tqdb-0.7.0-cp313-cp313-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (4.7 MB) — CPython 3.13, manylinux (glibc 2.17+), x86-64
  • tqdb-0.7.0-cp313-cp313-macosx_10_12_x86_64.macosx_11_0_arm64.macosx_10_12_universal2.whl (5.7 MB) — CPython 3.13, macOS 10.12+ universal2 (ARM64, x86-64)
  • tqdb-0.7.0-cp312-cp312-win_amd64.whl (4.2 MB) — CPython 3.12, Windows x86-64
  • tqdb-0.7.0-cp312-cp312-manylinux_2_28_aarch64.whl (1.5 MB) — CPython 3.12, manylinux (glibc 2.28+), ARM64
  • tqdb-0.7.0-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (4.7 MB) — CPython 3.12, manylinux (glibc 2.17+), x86-64
  • tqdb-0.7.0-cp312-cp312-macosx_10_12_x86_64.macosx_11_0_arm64.macosx_10_12_universal2.whl (5.7 MB) — CPython 3.12, macOS 10.12+ universal2 (ARM64, x86-64)
  • tqdb-0.7.0-cp311-cp311-win_amd64.whl (4.2 MB) — CPython 3.11, Windows x86-64
  • tqdb-0.7.0-cp311-cp311-manylinux_2_28_aarch64.whl (1.5 MB) — CPython 3.11, manylinux (glibc 2.28+), ARM64
  • tqdb-0.7.0-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (4.7 MB) — CPython 3.11, manylinux (glibc 2.17+), x86-64
  • tqdb-0.7.0-cp311-cp311-macosx_10_12_x86_64.macosx_11_0_arm64.macosx_10_12_universal2.whl (5.7 MB) — CPython 3.11, macOS 10.12+ universal2 (ARM64, x86-64)
  • tqdb-0.7.0-cp310-cp310-win_amd64.whl (4.2 MB) — CPython 3.10, Windows x86-64
  • tqdb-0.7.0-cp310-cp310-manylinux_2_28_aarch64.whl (1.5 MB) — CPython 3.10, manylinux (glibc 2.28+), ARM64
  • tqdb-0.7.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (4.7 MB) — CPython 3.10, manylinux (glibc 2.17+), x86-64
  • tqdb-0.7.0-cp310-cp310-macosx_10_12_x86_64.macosx_11_0_arm64.macosx_10_12_universal2.whl (5.7 MB) — CPython 3.10, macOS 10.12+ universal2 (ARM64, x86-64)

File details

Details for the file tqdb-0.7.0.tar.gz.

File metadata

  • Download URL: tqdb-0.7.0.tar.gz
  • Size: 1.2 MB
  • Tags: Source
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: maturin/1.13.1

File hashes

Hashes for tqdb-0.7.0.tar.gz
Algorithm     Hash digest
SHA256        44494d443421de9bbd95f3764cc35b9f14cd6bf3b6930511f31568f7bf1206bd
MD5           ab9946145aacba458a6da5305626d2f2
BLAKE2b-256   45ffbf9e0bcd8ead097549f7848ca6b09ba7613d70a0fac5efd80da7a30cc3b2

