
"Disk-based Redis" - built around LMDB and Rust, with sharding for maximum throughput

Project description

⚡ lightning-disk-kv

This project is an absurdly fast, sharded Key-Value storage engine designed for high-throughput Python applications.

It is a drop-in solution for machine learning pipelines that need to store millions of embedding samples (or other data types) efficiently. It solves the Global Interpreter Lock (GIL) bottleneck by offloading hashing, serialization, and disk I/O to parallel Rust threads.

🚀 Key Features

  • True Parallelism: Writes to multiple LMDB shards simultaneously using all CPU cores.
  • Zero-Copy Vectors: Specialized "Fast Path" for numpy arrays that writes raw bytes to disk (no pickling).
  • Generic Storage: Capable of storing arbitrary Python objects (Strings, Dicts, Lists) via optimized parallel pickling.
  • Crash Safe: Based on LMDB (Lightning Memory-Mapped Database), offering proven reliability.
  • Redis Compatible: Includes a wrapper that mimics the redis-py API for easy integration.

📦 Installation

Option A: Install via Pip (Recommended)

pip install lightning_disk_kv

Option B: Build from Source

If you are modifying the Rust code or building for a specific architecture:

# Requires Rust and Maturin
maturin develop --release

⚡ Usage Guide

1. Initialization

Initialize the database by specifying a base directory. The storage engine automatically handles sharding (splitting data across multiple files) to maximize write speed.

from lightning_disk_kv import LDKV

# Initialize with 5 shards.
# 'map_size' is the maximum virtual memory size. 
# It does NOT consume this amount of RAM immediately.
# Default is ~1TB, which is safe for 64-bit systems.
db = LDKV(
    base_path="./my_database", 
    num_shards=5, 
    map_size=100 * 1024**3  # 100 GB limit
)
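The sharding mentioned above is typically hash-based. As a conceptual sketch only (the real routing happens inside the Rust backend and may differ; `shard_for` is a hypothetical helper, not part of the library), each key could be mapped to a shard like this:

```python
import hashlib

# Sketch of hash-based shard routing: a stable hash of the key selects
# one of num_shards LMDB environments. Stability matters: reads must
# look in the same shard that the write went to.
def shard_for(key: int, num_shards: int = 5) -> int:
    digest = hashlib.blake2b(str(key).encode(), digest_size=8).digest()
    return int.from_bytes(digest, "big") % num_shards
```

Because every shard is an independent LMDB environment, writes to different shards can proceed on different CPU cores without contending for a single write lock.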

2. Storing Vectors (The "Fast Path")

Use store_vectors when dealing with NumPy embeddings. This bypasses Python's serialization overhead entirely by reading the array's memory directly through C pointers.

Requirement: Data must be np.float32.

import numpy as np

# Create dummy data
ids = [1, 2, 3]
vectors = np.random.rand(3, 128).astype(np.float32)

# Store in parallel
db.store_vectors(vectors, ids)

# Retrieve
# Returns a list of numpy arrays, or None if the ID doesn't exist
results = db.get_vectors([1, 999])

print(results[0].shape)  # (128,)
print(results[1])        # None

3. Storing Objects (The "Generic Path")

Use store_data for strings, dictionaries, images, or lists. While this uses pickle internally, the serialization and disk writing happen in parallel threads, making it significantly faster than standard loops.

ids = [100, 101]
data = [
    "A simple string", 
    {"key": "value", "meta": [1, 2, 3]}
]

db.store_data(data, ids)

results = db.get_data([100])
print(results[0]) # "A simple string"

4. Redis Compatibility API

We provide a redis-py compatible wrapper. This allows you to use lightning-disk-kv as an embedded, persistent Redis replacement without running a separate server process.

from lightning_redis import LDKV_RedisCompat

# Initialize (replaces host/port with a file path)
r = LDKV_RedisCompat(base_path="./redis_data", decode_responses=True)

# Basic Key-Value
r.set('foo', 'bar')
print(r.get('foo'))  # 'bar'

# TTL (Time To Live) - key automatically removed after 5 seconds
r.set('temp_key', 'hidden', ex=5)

# Atomic Counters
r.incr('visitor_count', amount=1)

# Hash Maps
r.hset('user:100', mapping={'name': 'Alice', 'role': 'admin'})
print(r.hgetall('user:100')) # {'name': 'Alice', 'role': 'admin'}

5. Management & Syncing

# Check total number of items across all shards
count = db.get_data_count()
print(f"Total items: {count}")

# Delete items
db.delete_data([1, 100])

# Force flush to disk
# The engine uses OS buffers for maximum speed. 
# Call .sync() to ensure data is physically written to the drive.
db.sync()

⚠️ Configuration & Safety

Understanding map_size

LMDB uses a memory map. You must set map_size larger than the maximum amount of data you ever intend to store.

  • Don't worry about RAM: Setting this to 1TB does not use 1TB of RAM. It simply reserves virtual address space.
  • Error handling: If you exceed this limit, you will get a MapFull error.
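Since over-reserving virtual address space is cheap, it can help to pick map_size from a back-of-envelope estimate of your payload. A minimal sketch (`estimate_map_size` is a hypothetical helper, not part of the library):

```python
# Rough sizing helper: raw payload bytes times a safety factor to cover
# keys, page alignment, and B-tree overhead inside LMDB.
def estimate_map_size(num_vectors: int, dim: int,
                      bytes_per_elem: int = 4, overhead: float = 2.0) -> int:
    raw = num_vectors * dim * bytes_per_elem  # float32 -> 4 bytes/element
    return int(raw * overhead)

# 10 million 768-dim float32 embeddings: ~30.7 GB raw, so reserve ~61 GB
map_size = estimate_map_size(10_000_000, 768)
```

The result can then be passed as the map_size argument when constructing the database, as in the initialization example above.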

Durability vs. Speed

To achieve maximum throughput, lightning_disk_kv sets the MDB_NOSYNC flag by default.

  • Application Crash: Data is safe.
  • OS Crash / Power Cut: Data currently in the OS buffer (last few seconds) might be lost.
  • Best Practice: If data durability is critical (e.g., you can't re-generate the data), call db.sync() periodically or after a large bulk insert.
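The periodic-sync advice can be wrapped in a small helper. A sketch, assuming only the store_vectors() and sync() methods shown above (`bulk_store` itself is not part of the library):

```python
def bulk_store(db, vectors, ids, batch_size=10_000, sync_every=10):
    """Write in batches and flush to disk every `sync_every` batches."""
    for batch_idx, start in enumerate(range(0, len(ids), batch_size)):
        end = start + batch_size
        db.store_vectors(vectors[start:end], ids[start:end])
        if (batch_idx + 1) % sync_every == 0:
            db.sync()  # bound the window a power cut could lose
    db.sync()  # final flush so nothing lingers in OS buffers
```

Tune sync_every to trade throughput for durability: larger values sync less often and write faster, at the cost of a larger window of at-risk data.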

🛠 Building from Source (Advanced)

If you cannot install via pip, you must compile the Rust backend manually.

  1. Install Rust:
    curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
    
  2. Install the builder:
    pip install maturin
    
  3. Compile: Navigate to the project root and run:
    maturin develop --release
    



Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

lightning_disk_kv-1.1.1.tar.gz (20.8 kB)

Uploaded: Source

Built Distributions

If you're not sure about the file name format, learn more about wheel file names.

lightning_disk_kv-1.1.1-cp38-abi3-win_amd64.whl (310.0 kB)

Uploaded: CPython 3.8+, Windows x86-64

lightning_disk_kv-1.1.1-cp38-abi3-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (497.4 kB)

Uploaded: CPython 3.8+, manylinux: glibc 2.17+ x86-64

lightning_disk_kv-1.1.1-cp38-abi3-macosx_11_0_arm64.whl (432.4 kB)

Uploaded: CPython 3.8+, macOS 11.0+ ARM64

File details

Details for the file lightning_disk_kv-1.1.1.tar.gz.

File metadata

  • Download URL: lightning_disk_kv-1.1.1.tar.gz
  • Upload date:
  • Size: 20.8 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: maturin/1.10.2

File hashes

Hashes for lightning_disk_kv-1.1.1.tar.gz

  • SHA256: b2c2bf84f056a6e94c16f54371a9865370b8d5bf0a11bfeab1c995e36de14a6d
  • MD5: a58d0202337093086a39572431040194
  • BLAKE2b-256: 15c46659b3a8a6df9e90ca4da3e0352869a80bff720d0a2824cb29127e59123d

See more details on using hashes here.

File details

Details for the file lightning_disk_kv-1.1.1-cp38-abi3-win_amd64.whl.

File metadata

File hashes

Hashes for lightning_disk_kv-1.1.1-cp38-abi3-win_amd64.whl

  • SHA256: 77eb9d3741de1e4b70bdbe43d2e0307e842e04d5bfbfafb64f2ae325d8104577
  • MD5: cc707b4cc9334f94d6f981fadaefc2ec
  • BLAKE2b-256: 401e81b582eabb58dc0e26683820460cb03ad77d8a15f5cb20de9d6c1e473f56


File details

Details for the file lightning_disk_kv-1.1.1-cp38-abi3-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.

File metadata

File hashes

Hashes for lightning_disk_kv-1.1.1-cp38-abi3-manylinux_2_17_x86_64.manylinux2014_x86_64.whl

  • SHA256: 722176275a59088cdb7740395da59c88b3c991a78363358eaa52eeb9a27d7755
  • MD5: bf4614ac469549b4611efa9a6a1b7074
  • BLAKE2b-256: 1cc8b1d034593ecee1cda0235f4f329987e99f499fd50015f147b0d7cf57ead5


File details

Details for the file lightning_disk_kv-1.1.1-cp38-abi3-macosx_11_0_arm64.whl.

File metadata

File hashes

Hashes for lightning_disk_kv-1.1.1-cp38-abi3-macosx_11_0_arm64.whl

  • SHA256: 0ab652af122eacb1e6403f9b61513a822fa66aae883a01b0e7e533aa03f0a91b
  • MD5: 5d0ff0ac21ba2c548daf3c6c91fd07fc
  • BLAKE2b-256: 84f9044fbd3ed8232d6cea3968369461bd3d128313494b9090bbb04eb76381a4

