
Map With Tree

A Python library for reading and writing compressed, sorted key-value stores with efficient lookup using a B-tree-like structure.

Features

  • Efficient Storage: Data is compressed using zstandard (zstd) with configurable compression levels
  • Fast Lookups: Built-in B-tree index for O(log n) key lookups
  • Fast Iteration: Sequential block iteration without tree traversal for O(n) scanning
  • Sorted Keys: Keys are automatically sorted during finalization for efficient range queries
  • Data Integrity: MD5 hash of all entries is computed and stored for verification
  • Flexible Types: Support for multiple key and value types including bytes, strings, integers, floats, and JSON
  • Memory Efficient: Block-based compression and caching minimize memory usage
  • Simple API: Pythonic interface with context managers and dict-like operations

Installation

pip install map_with_tree

Requires Python >= 3.8 and zstd >= 1.5.

Quick Start

Writing Data

import map_with_tree

# Create a new map file with string values
with map_with_tree.open("data.mwt", "w", values_type="string") as writer:
    writer.add_entry(b"key_1", "value_1")
    writer.add_entry(b"key_2", "value_2")
    writer.add_entry(b"key_3", "value_3")

Reading Data

import map_with_tree

# Open and read from a map file
with map_with_tree.open("data.mwt") as reader:
    # Get a value by key
    value = reader[b"key_1"]
    
    # Check if key exists
    if b"key_2" in reader:
        print("Key exists!")
    
    # Get with default value
    value = reader.get(b"key_99", default="not found")
    
    # Iterate over all entries
    for key, value in reader:
        print(f"{key}: {value}")
    
    # Get file metadata
    print(f"Total entries: {len(reader)}")
    print(f"Header: {reader.header}")

API Reference

Opening Files

map_with_tree.open(path, mode="r", **kwargs)
  • path: File path for the map file
  • mode: "r" for reading, "w" for writing
  • **kwargs: Additional options for writing (see Writer Options)

Writer Options

MapWithTreeWriter(
    path,
    header=None,              # Custom header metadata (dict)
    keys_type="bytes",        # Type for keys
    values_type="bytes",      # Type for values
    keys_per_node=128,        # Number of keys per B-tree node
    block_size=64*1024,       # Block size for compression (64KB default)
    compression_level=3       # zstd compression level (0-22)
)

Supported Types

  • bytes: Raw bytes (default)
  • string or str: UTF-8 encoded strings
  • int or i64: 64-bit signed integer
  • uint or u64: 64-bit unsigned integer
  • i8, i16, i32: Signed integers (8, 16, 32 bit)
  • u8, u16, u32: Unsigned integers (8, 16, 32 bit)
  • float or f64: 64-bit float
  • f32: 32-bit float
  • json: JSON-serializable objects
  • struct:format: Custom struct format (e.g., "struct:<IIf" for two unsigned ints and a float)
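The `struct:format` codes follow the syntax of Python's standard `struct` module, so the example format above can be checked directly with the stdlib. This sketch only demonstrates the format string itself, not the library's on-disk encoding:

```python
import struct

# "<IIf": little-endian, two 32-bit unsigned ints and one 32-bit float
fmt = "<IIf"
packed = struct.pack(fmt, 100, 200, 3.14)

# The payload is fixed-size: 4 + 4 + 4 = 12 bytes
assert struct.calcsize(fmt) == 12

count1, count2, ratio = struct.unpack(fmt, packed)
print(count1, count2, round(ratio, 2))  # 100 200 3.14
```

Note that `ratio` comes back as a 32-bit float, so it only approximates 3.14; compare with a tolerance rather than exact equality.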

Writer Methods

writer.add_entry(key, value)  # Add a key-value pair
writer.finalize()             # Finalize the file (called automatically on context exit)
writer.close()                # Close file handles

Reader Methods

reader[key]                   # Get value by key (raises KeyError if not found)
reader.get(key, default=None) # Get value with default
key in reader                 # Check if key exists
len(reader)                   # Get number of entries
iter(reader)                  # Iterate over (key, value) pairs in insertion order
reader.sorted_keys()          # Iterate over keys in sorted order (tree traversal)
reader.header                 # Access header metadata
reader.close()                # Close file handle

File Format

Map With Tree (.mwt) files consist of:

  1. Magic Header: 8-byte signature (mwt\0\0\0\0\1)
  2. Header Offset: 8-byte pointer to compressed header
  3. Data Blocks: Compressed blocks containing key-value pairs with zstd
  4. B-tree Index: Tree structure pointing into blocks for efficient key lookups
  5. Compressed Header: JSON metadata with file statistics
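As a sketch of how the fixed-size prefix could be parsed, assuming the magic bytes above and, as an unverified assumption, a little-endian header offset. This runs against an in-memory buffer rather than the library's actual reader:

```python
import struct

MAGIC = b"mwt\x00\x00\x00\x00\x01"  # 8-byte signature from the layout above

# Build a fake file prefix: magic + 8-byte header offset
# (little-endian is an assumption for illustration)
fake_prefix = MAGIC + struct.pack("<Q", 4096)

def parse_prefix(buf: bytes) -> int:
    """Return the header offset if the magic matches, else raise ValueError."""
    if buf[:8] != MAGIC:
        raise ValueError("not a .mwt file")
    (header_offset,) = struct.unpack_from("<Q", buf, 8)
    return header_offset

print(parse_prefix(fake_prefix))  # 4096
```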

Dual Access Modes

The format supports two efficient access patterns:

  • Random Access: Use the B-tree index for O(log n) lookup of specific keys
  • Sequential Iteration: Read blocks sequentially for O(n) full scans without tree traversal

This design enables:

  • Sequential writes for optimal I/O performance during creation
  • Minimal memory usage during both reading and writing
  • Fast random access through the B-tree index
  • Fast sequential iteration by reading blocks in order
  • Efficient compression with block-level granularity

Examples

Large Dataset

import map_with_tree
import uuid

# Write 100,000 entries
with map_with_tree.open("large.mwt", "w", values_type="string") as writer:
    for i in range(100000):
        key = f"key_{i:06d}".encode()
        value = uuid.uuid4().hex
        writer.add_entry(key, value)

# Read and check compression
with map_with_tree.open("large.mwt") as reader:
    print(f"Entries: {len(reader)}")
    uncompressed = reader.header["uncompressed_size"]
    compressed = reader.header["compressed_size"]
    print(f"Compression ratio: {compressed / uncompressed:.2%}")

Structured Data with JSON

import map_with_tree

with map_with_tree.open("users.mwt", "w", keys_type="string", values_type="json") as writer:
    writer.add_entry("user_1", {"name": "Alice", "age": 30, "city": "NYC"})
    writer.add_entry("user_2", {"name": "Bob", "age": 25, "city": "SF"})

with map_with_tree.open("users.mwt") as reader:
    user = reader[b"user_1"]
    print(f"{user['name']} is {user['age']} years old")

Custom Struct Types

import map_with_tree

# Values are tuples of (unsigned int, unsigned int, float)
with map_with_tree.open("metrics.mwt", "w", values_type="struct:<IIf") as writer:
    writer.add_entry(b"metric_1", (100, 200, 3.14))
    writer.add_entry(b"metric_2", (150, 250, 2.71))

with map_with_tree.open("metrics.mwt") as reader:
    count1, count2, ratio = reader[b"metric_1"]
    print(f"Counts: {count1}, {count2}, Ratio: {ratio}")

Efficient Iteration

The format stores both keys and values in the data blocks, enabling fast sequential iteration without traversing the B-tree:

import map_with_tree

# Iterate through all entries efficiently
with map_with_tree.open("data.mwt") as reader:
    # Sequential iteration reads blocks in order
    for key, value in reader:
        process(key, value)
    
    # This is faster than tree traversal for full scans
    # because it only decompresses blocks without tree lookups

The iteration order is the insertion order (before sorting), which allows you to:

  • Process all entries efficiently in O(n) time
  • Scan through data without random access overhead
  • Stream process large datasets with minimal memory

Sorted Key Iteration

If you need to iterate over keys in sorted order, use the sorted_keys() method which traverses the B-tree:

import map_with_tree

with map_with_tree.open("data.mwt") as reader:
    # Get keys in sorted order via tree traversal
    for key in reader.sorted_keys():
        value = reader[key]
        process(key, value)
    
    # Or just get the sorted keys
    all_sorted_keys = list(reader.sorted_keys())

Key differences between iteration modes:

  • iter(reader): Returns (key, value) pairs in insertion order, reads blocks sequentially
  • sorted_keys(): Returns keys only in sorted order, traverses the B-tree structure
  • Use regular iteration for full scans, use sorted_keys() for sorted access or range queries
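Because sorted_keys() yields keys in order, a range query reduces to a filtered scan. The pattern is library-independent; here a plain sorted list stands in for sorted_keys(), and itertools.takewhile stops at the upper bound without scanning the remaining keys:

```python
from itertools import dropwhile, takewhile

# Stand-in for reader.sorted_keys(): any iterable of keys in sorted order
sorted_keys = [b"key_01", b"key_02", b"key_03", b"key_07", b"key_09"]

def range_query(keys, lo, hi):
    """Yield keys with lo <= key < hi from an already-sorted iterable."""
    in_range = dropwhile(lambda k: k < lo, keys)
    return list(takewhile(lambda k: k < hi, in_range))

print(range_query(sorted_keys, b"key_02", b"key_08"))
# [b'key_02', b'key_03', b'key_07']
```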

Performance Tips

  1. Adjust block size: Larger blocks (e.g., 256KB) improve compression but use more memory
  2. Tune compression level: Lower levels (1-3) for speed, higher (10-22) for size
  3. Choose appropriate types: Use native types (int, float) instead of strings when possible
  4. Batch writes: Add all entries before finalizing to ensure optimal tree structure
  5. Keys per node: Increase for larger datasets to reduce tree height
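To get a feel for tip 5, the tree height for a given keys_per_node can be estimated with a back-of-envelope logarithm. This is the generic B-tree height bound with fanout f, not a measurement of this library's actual index:

```python
import math

def est_tree_height(n_entries: int, keys_per_node: int) -> int:
    """Rough upper bound on B-tree height: ceil(log_f(n)) with fanout f."""
    if n_entries <= 1:
        return 1
    return max(1, math.ceil(math.log(n_entries, keys_per_node)))

# With the default 128 keys per node, 100,000 entries need about 3 levels;
# a larger fanout trims a level at the cost of bigger nodes.
print(est_tree_height(100_000, 128))  # 3
print(est_tree_height(100_000, 512))  # 2
```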
