
"Disk-based Redis" - built around LMDB and Rust, with sharding for maximum throughput

Project description

⚡ lightning-disk-kv

This project is an absurdly fast, sharded Key-Value storage engine designed for high-throughput Python applications.

It is a drop-in solution for machine learning pipelines that need to store millions of embedding vectors (or other data samples) efficiently. It works around the Global Interpreter Lock (GIL) bottleneck by offloading hashing, serialization, and disk I/O to parallel Rust threads.

🚀 Key Features

  • True Parallelism: Writes to multiple LMDB shards simultaneously using all CPU cores.
  • Zero-Copy Vectors: Specialized "Fast Path" for NumPy arrays that writes raw bytes to disk (no pickling).
  • Generic Storage: Capable of storing arbitrary Python objects (Strings, Dicts, Lists) via optimized parallel pickling.
  • Crash Safe: Based on LMDB (Lightning Memory-Mapped Database), offering proven reliability.
  • Redis Compatible: Includes a wrapper that mimics the redis-py API for easy integration.
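
The sharding idea behind the first feature is simple: each key is routed to one of N independent LMDB environments, so writes to different shards never contend. The exact routing scheme lives in the Rust backend; the sketch below is a hypothetical illustration of the principle, using a stable hash modulo the shard count:

```python
import hashlib

def shard_for_key(key: int, num_shards: int) -> int:
    """Map a key to a shard index with a stable hash.

    A stable hash (not Python's per-process randomized hash()) is used
    so the same key always lands on the same shard across runs.
    """
    digest = hashlib.blake2b(str(key).encode(), digest_size=8).digest()
    return int.from_bytes(digest, "little") % num_shards

# Keys spread across 5 shards; writes to distinct shards can proceed in parallel.
buckets = {}
for key in range(1000):
    buckets.setdefault(shard_for_key(key, 5), []).append(key)
```

Because the mapping is deterministic, reads can locate the right shard without any central index.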

📦 Installation

Option A: Install via Pip (Recommended)

pip install lightning_disk_kv

Option B: Build from Source

If you are modifying the Rust code or building for a specific architecture:

# Requires Rust and Maturin
maturin develop --release

⚡ Usage Guide

1. Initialization

Initialize the database by specifying a base directory. The storage engine automatically handles sharding (splitting data across multiple files) to maximize write speed.

from lightning_disk_kv import LDKV

# Initialize with 5 shards.
# 'map_size' is the maximum virtual memory size. 
# It does NOT consume this amount of RAM immediately.
# Default is ~1TB, which is safe for 64-bit systems.
db = LDKV(
    base_path="./my_database", 
    num_shards=5, 
    map_size=100 * 1024**3  # 100 GB limit
)

2. Storing Vectors (The "Fast Path")

Use store_vectors when dealing with NumPy embeddings. This bypasses Python's serialization overhead entirely by reading the array's memory buffer directly through C pointers.

Requirement: Data must be np.float32.

import numpy as np

# Create dummy data
ids = [1, 2, 3]
vectors = np.random.rand(3, 128).astype(np.float32)

# Store in parallel
db.store_vectors(vectors, ids)

# Retrieve
# Returns a list of numpy arrays, or None if the ID doesn't exist
results = db.get_vectors([1, 999])

print(results[0].shape)  # (128,)
print(results[1])        # None

3. Storing Objects (The "Generic Path")

Use store_data for strings, dictionaries, images, or lists. While this uses pickle internally, the serialization and disk writing happen in parallel threads, making it significantly faster than standard loops.

ids = [100, 101]
data = [
    "A simple string", 
    {"key": "value", "meta": [1, 2, 3]}
]

db.store_data(data, ids)

results = db.get_data([100])
print(results[0]) # "A simple string"

4. Redis Compatibility API

We provide a redis-py compatible wrapper. This allows you to use lightning-disk-kv as an embedded, persistent Redis replacement without running a separate server process.

from lightning_redis import LDKV_RedisCompat

# Initialize (replaces host/port with a file path)
r = LDKV_RedisCompat(base_path="./redis_data", decode_responses=True)

# Basic Key-Value
r.set('foo', 'bar')
print(r.get('foo'))  # 'bar'

# TTL (Time To Live) - key automatically removed after 5 seconds
r.set('temp_key', 'hidden', ex=5)

# Atomic Counters
r.incr('visitor_count', amount=1)

# Hash Maps
r.hset('user:100', mapping={'name': 'Alice', 'role': 'admin'})
print(r.hgetall('user:100')) # {'name': 'Alice', 'role': 'admin'}
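
The TTL behavior shown above (ex=5) is typically implemented in embedded stores as lazy expiry: an expiry timestamp is stored next to the value and checked on read. The sketch below illustrates that pattern in pure Python; it is not the library's actual internals:

```python
import time

class TTLStore:
    """Minimal lazy-expiry key-value store: an expired key is
    dropped the next time it is read."""

    def __init__(self):
        self._data = {}  # key -> (value, expires_at or None)

    def set(self, key, value, ex=None):
        # ex is a lifetime in seconds, like redis-py's `ex` argument
        expires_at = time.monotonic() + ex if ex is not None else None
        self._data[key] = (value, expires_at)

    def get(self, key):
        item = self._data.get(key)
        if item is None:
            return None
        value, expires_at = item
        if expires_at is not None and time.monotonic() >= expires_at:
            del self._data[key]  # lazily remove the expired entry
            return None
        return value

store = TTLStore()
store.set("temp_key", "hidden", ex=5)  # visible for ~5 seconds, then gone
```

Lazy expiry avoids a background timer thread; the trade-off is that expired entries occupy space until they are next touched.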

5. Management & Syncing

# Check total number of items across all shards
count = db.get_data_count()
print(f"Total items: {count}")

# Delete items
db.delete_data([1, 100])

# Force flush to disk
# The engine uses OS buffers for maximum speed. 
# Call .sync() to ensure data is physically written to the drive.
db.sync()

⚠️ Configuration & Safety

Understanding map_size

LMDB uses a memory map. You must set map_size larger than the total amount of data you ever intend to store.

  • Don't worry about RAM: Setting this to 1TB does not use 1TB of RAM. It simply reserves virtual address space.
  • Error handling: If you exceed this limit, you will get a MapFull error.
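
Choosing map_size is simple arithmetic: estimate the total payload and add generous headroom, since the reservation costs virtual address space, not RAM. For example, sizing for 10 million 128-dimensional float32 embeddings:

```python
# 10 million embeddings, 128 dims, 4 bytes per float32
num_items = 10_000_000
bytes_per_item = 128 * 4               # 512 B payload per vector
payload = num_items * bytes_per_item   # ~5.1 GB of raw vector data
map_size = payload * 4                 # 4x headroom for keys, B-tree pages, fragmentation
print(f"{map_size / 1024**3:.1f} GiB")  # 19.1 GiB reserved
```

The 4x multiplier is a conservative rule of thumb, not a library requirement; erring high is cheap because the space is only reserved, never eagerly allocated.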

Durability vs. Speed

To achieve maximum throughput, lightning_disk_kv sets the MDB_NOSYNC flag by default.

  • Application Crash: Data is safe.
  • OS Crash / Power Cut: Data currently in the OS buffer (last few seconds) might be lost.
  • Best Practice: If data durability is critical (e.g., you can't re-generate the data), call db.sync() periodically or after a large bulk insert.
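
The best practice above can be wrapped in a small batching pattern: write in chunks and sync after each chunk, trading a little throughput for a bounded loss window. The helper below is a hypothetical sketch; store and sync stand in for the real db.store_data and db.sync calls:

```python
def bulk_insert(store, sync, items, ids, batch_size=10_000):
    """Write (data, id) pairs in batches, forcing a disk sync after
    each batch so at most one batch is at risk on power loss."""
    batches = 0
    for start in range(0, len(items), batch_size):
        store(items[start:start + batch_size], ids[start:start + batch_size])
        sync()  # e.g. db.sync() with lightning_disk_kv
        batches += 1
    return batches

# Usage with lightning_disk_kv would look like:
#   bulk_insert(db.store_data, db.sync, data, ids)
```

Larger batch sizes recover more of the NOSYNC throughput; smaller ones shrink the window of data that an OS crash could lose.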

🛠 Building from Source (Advanced)

If you cannot install via pip, you must compile the Rust backend manually.

  1. Install Rust:
    curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
    
  2. Install the builder:
    pip install maturin
    
  3. Compile: Navigate to the project root and run:
    maturin develop --release
    


Download files

Download the file for your platform.

Source Distribution

  • lightning_disk_kv-1.0.0.tar.gz (19.4 kB): Source

Built Distributions

  • lightning_disk_kv-1.0.0-cp38-abi3-win_amd64.whl (275.5 kB): CPython 3.8+, Windows x86-64
  • lightning_disk_kv-1.0.0-cp38-abi3-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (456.2 kB): CPython 3.8+, manylinux (glibc 2.17+), x86-64
  • lightning_disk_kv-1.0.0-cp38-abi3-macosx_10_12_x86_64.whl (405.8 kB): CPython 3.8+, macOS 10.12+, x86-64

File hashes

lightning_disk_kv-1.0.0.tar.gz (uploaded via maturin/1.10.2)
  • SHA256: 89b8fa145a994e3b59d10b137e860df754b96b5c42d64ea2cb677e30bbc9b65c
  • MD5: bb96a27cc3fd1d0ed3905ec5bf68c9a6
  • BLAKE2b-256: 19c8eb75eff60627490c398159138e7410bfb208880da37272bd1e29a2ea5787

lightning_disk_kv-1.0.0-cp38-abi3-win_amd64.whl
  • SHA256: 951f840597b4087da7f500f5ae5d673b812479db3fa62c8b74b85f1aac0855ec
  • MD5: 5589d9837e1926804a2f38db606f8f7e
  • BLAKE2b-256: ccd477f21d42581aa726d28e4c53f4fbd322158246bb4d63175c18998b27a867

lightning_disk_kv-1.0.0-cp38-abi3-manylinux_2_17_x86_64.manylinux2014_x86_64.whl
  • SHA256: 4e4534580551584de99e280c3503d38ba8fe95e4dfafd7bc014254160c987901
  • MD5: 27faf97657c0c6b07ee879e5a4f50edb
  • BLAKE2b-256: 7b3a352f445308f2e7b7c39f52b5b09c4c7d3ac78625f460a22bba2c710a8ccd

lightning_disk_kv-1.0.0-cp38-abi3-macosx_10_12_x86_64.whl
  • SHA256: 9417c53987a7fb8c347cc22bb8b485d79a323bd26b6f424d2c99bd55abd9b93d
  • MD5: f8fd43ffb2b3e64f6ffea0a14a478929
  • BLAKE2b-256: 2386853c0316950cc816825428cc0faa45e766fa1a4e5c826028a4ba93b9b7cc
