
Python bindings for Unum's UStore.

Project description

UStore

Modular¹ Multi-Modal² Transactional³ Database
For Artificial Intelligence⁴ and Semantic Search⁵



1. supports: RocksDB • LevelDB • UDisk • UCSet backends
2. can store: Blobs • Documents • Graphs • 🔜 Features • 🔜 Texts
3. guarantees: Atomicity • Consistency • Isolation • Durability
4. comes with: Pandas and NetworkX APIs and 🔜 PyTorch data-loaders
5. brings: vector-search integrated with USearch and UForm

drivers: Python • C • C++ • GoLang • Java
packages: PyPI • CMake • Docker Hub

Youtube intro • Discord chat • Full documentation


Quickstart

Installing UStore is a breeze, and the usage is about as simple as a Python dict.

$ pip install ukv
$ python

from ukv import umem

db = umem.DataBase()
db.main[42] = 'Hi'

We have just created an in-memory embedded transactional database and added one entry to its main collection. Would you prefer that data on disk? Change one line.

from ukv import rocksdb

db = rocksdb.DataBase('/some-folder/')

Would you prefer to connect to a remote UStore server? UStore comes with an Apache Arrow Flight RPC interface!

from ukv import flight_client

db = flight_client.DataBase('grpc://0.0.0.0:38709')

Are you storing NetworkX-like MultiDiGraph? Or Pandas-like DataFrame?

import pandas as pd

db = rocksdb.DataBase()

users_table = db['users'].table
users_table.merge(pd.DataFrame([
    {'id': 1, 'name': 'Lex', 'lastname': 'Fridman'},
    {'id': 2, 'name': 'Joe', 'lastname': 'Rogan'},
]))

friends_graph = db['friends'].graph
friends_graph.add_edge(1, 2)

assert friends_graph.has_edge(1, 2) and \
    friends_graph.has_node(1) and \
    friends_graph.number_of_edges(1, 2) == 1

Function calls may look identical, but the underlying implementation can be addressing hundreds of terabytes of data placed somewhere in persistent memory on a remote machine.


Is someone else concurrently updating those collections? Bundle your operations to guarantee consistency!

db = rocksdb.DataBase()
with db.transact() as txn:
    txn['users'].table.merge(...)
    txn['friends'].graph.add_edge(1, 2)

So far we have only covered the tip of the iceberg. You may use UStore to...

  1. Get C99, Python, GoLang, or Java wrappers for RocksDB or LevelDB.
  2. Serve them via Apache Arrow Flight RPC to Spark, Kafka, or PyTorch.
  3. Store Documents and Graphs in an embedded DB, avoiding networking overheads.
  4. Tier DBMS between in-memory and persistent backends under one API.

But UStore can do more. Here is the map:


## Basic Usage

UStore is intended not just as a database, but as a "build your own database" toolkit and an open standard for NoSQL, potentially-transactional databases. It defines zero-copy binary interfaces for "Create, Read, Update, Delete" operations, or CRUD for short.

A few simple C99 headers can link almost any underlying storage engine to numerous high-level language drivers, extending their support from binary string values to graphs, flexible-schema documents, and other modalities. The aim is to replace MongoDB, Neo4J, Pinecone, and ElasticSearch with a single ACID-transactional system.

UStore: Small Map

Redis, for example, provides RediSearch, RedisJSON, and RedisGraph with similar objectives. UStore aims to do it better, allowing you to plug in your favorite Key-Value Store (KVS), whether embedded, standalone, or sharded (such as FoundationDB), multiplying its functionality.

Modalities

Blobs

Binary Large Objects can be placed inside UStore. Performance will vary vastly depending on the underlying technology. The in-memory UCSet will be the fastest, but the least suited for larger objects. The persistent UDisk, when properly configured, can entirely bypass the Linux kernel, including the filesystem layer, directly addressing block devices.

Binary Processing Performance Chart for UDisk and RocksDB

Modern persistent IO on high-end servers can exceed 100 GB/s per socket when built on user-space drivers like SPDK. This is close to the real-world throughput of high-end RAM and unlocks new use cases uncommon to databases. One may now put a Gigabyte-sized video file in an ACID-transactional database, right next to its metadata, instead of using a separate object store like MinIO.

Documents

JSON is the most commonly used document format these days. UStore document collections support JSON, as well as MessagePack and BSON, the format used by MongoDB.

Documents Processing Performance Chart for UStore and MongoDB

UStore doesn't scale horizontally yet, but it provides much higher single-node performance and almost linear vertical scalability on many-core systems, thanks to the open-source simdjson and yyjson libraries. Moreover, you don't need a custom query language like MQL to interact with your data. Instead, we prioritize open RFC standards to truly avoid vendor lock-in.

Graphs

Modern Graph databases, like Neo4J, struggle with large workloads. They require too much RAM, and their algorithms observe data one entry at a time. We optimize on both fronts:

  • Using delta-coding to compress inverted indexes.
  • Updating classical graph algorithms for high-latency storage to process graphs in Batch-like or Edge-centric fashion.
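To make the first point concrete, here is a minimal, self-contained sketch of delta-coding a sorted postings list; it illustrates the idea only and is unrelated to UStore's actual implementation:

```python
def delta_encode(sorted_ids):
    """Store differences between consecutive IDs instead of absolute values.
    Small deltas compress far better than large 64-bit integers."""
    prev, deltas = 0, []
    for x in sorted_ids:
        deltas.append(x - prev)
        prev = x
    return deltas


def delta_decode(deltas):
    """Rebuild the original sorted IDs by prefix-summing the deltas."""
    total, out = 0, []
    for d in deltas:
        total += d
        out.append(total)
    return out


neighbors = [5, 6, 9, 1000, 1001]
assert delta_encode(neighbors) == [5, 1, 3, 991, 1]
assert delta_decode(delta_encode(neighbors)) == neighbors
```

A real index would additionally pack the small deltas into variable-length byte codes, but the round-trip above is the core of the technique.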

Vectors

Feature Stores and Vector Databases, like Pinecone, Milvus, and USearch, provide standalone indexes for vector search. UStore implements vector search as a separate modality, on par with Documents and Graphs. Features:

  • 8-bit integer quantization.
  • 16-bit floating-point quantization.
  • Cosine, Inner Product, and Euclidean metrics.
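To illustrate what 8-bit quantization means here, a toy pure-Python sketch (not UStore's code): float vectors are squashed into signed 8-bit integers, and cosine similarity is computed over the quantized values.

```python
def quantize_i8(vector):
    """Map floats in [-1, 1] to signed 8-bit integers.
    Trades a little precision for 4x smaller storage than float32."""
    return [max(-127, min(127, round(x * 127))) for x in vector]


def cosine(a, b):
    """Cosine similarity; works on the quantized integers directly."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = lambda v: sum(x * x for x in v) ** 0.5
    return dot / (norm(a) * norm(b))


q1 = quantize_i8([0.1, 0.9, -0.4])
q2 = quantize_i8([0.1, 0.8, -0.5])
assert cosine(q1, q2) > 0.95  # near-duplicates stay close after quantization
```

The 16-bit floating-point variant follows the same pattern with a wider type, keeping more precision at half the savings.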

Drivers

UStore for Python and UStore for C++ look very different. Our Python SDK mimics other Python libraries, like Pandas and NetworkX. Similarly, the C++ library provides the interface C++ developers expect.

UStore: Frontends

As we know, people use different languages for different purposes. Some C-level functionality isn't implemented for every language, either because there was no demand for it or because we haven't gotten to it yet.

| Name | Transact | Collections | Batches | Docs | Graphs | Copies |
| :--- | :---: | :---: | :---: | :---: | :---: | :---: |
| C99 Standard | | | | | | 0 |
| C++ SDK | | | | | | 0 |
| Python SDK | | | | | | 0-1 |
| GoLang SDK | | | | | | 1 |
| Java SDK | | | | | | 1 |
| Arrow Flight API | | | | | | 0-2 |

Some frontends here have entire ecosystems around them! Apache Arrow Flight API, for instance, has its own drivers for C, C++, C#, Go, Java, JavaScript, Julia, MATLAB, Python, R, Ruby and Rust.


Frequently Questioned Answers

  • Keys are 64-bit integers, by default. Why?
  • Values are binary strings under 4 GB long. Why?

Frequently Asked Questions

Advanced Usage

Engines

The following engines can be used almost interchangeably. Historically, LevelDB came first. RocksDB then improved on its functionality and performance, and now serves as the foundation for half of the DBMS startups.

| | LevelDB | RocksDB | UDisk | UCSet |
| :--- | :---: | :---: | :---: | :---: |
| Speed | 1x | 2x | 10x | 30x |
| Persistent | | | | |
| Transactional | | | | |
| Block Device Support | | | | |
| Encryption | | | | |
| Watches | | | | |
| Snapshots | | | | |
| Random Sampling | | | | |
| Bulk Enumeration | | | | |
| Named Collections | | | | |
| Open-Source | | | | |
| Compatibility | Any | Any | Linux | Any |
| Maintainer | Google | Facebook | Unum | Unum |

UCSet and UDisk are both designed and maintained by Unum. Both are feature-complete, and the most crucial feature they provide over the alternatives is performance. Being fast in memory is easy; the core logic of UCSet can be found in the templated header-only ucset library.

Designing UDisk was a much more challenging, seven-year-long endeavour. It included inventing new tree-like structures, implementing partial kernel bypass with io_uring, complete bypass with SPDK, CUDA GPU acceleration, and even a custom internal filesystem. UDisk is the first engine designed from scratch with parallel architectures and kernel bypass in mind.

Transactions

Atomicity

Atomicity is always guaranteed: even on non-transactional writes, either all updates pass or all fail.

Consistency

Consistency is implemented in the strictest possible form, "Strict Serializability". The default behavior, however, can be tweaked at the level of specific operations: ::ustore_option_transaction_dont_watch_k can be passed to ustore_transaction_init() or to any transactional read/write operation to relax the consistency checks during staging.

| | Reads | Writes |
| :--- | :---: | :---: |
| Head | Strict Serial | Strict Serial |
| Transactions over Snapshots | Serial | Strict Serial |
| Transactions w/out Snapshots | Strict Serial | Strict Serial |
| Transactions w/out Watches | Strict Serial | Sequential |

If this topic is new to you, please check out the Jepsen.io blog on consistency.
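The role of watches can be sketched with a toy optimistic-concurrency model: a transaction records the version of every key it reads, and the commit aborts if any watched key changed in the meantime. Everything below is illustrative Python, not the UStore API:

```python
class ToyStore:
    """Versioned key-value map: transactions watch versions at read time
    and abort at commit if any watched key has changed since."""

    def __init__(self):
        self.data, self.versions = {}, {}

    def read(self, key):
        # Return the value together with its version, to be "watched".
        return self.data.get(key), self.versions.get(key, 0)

    def commit(self, watched, writes):
        # Abort if any watched key was modified concurrently.
        for key, seen_version in watched.items():
            if self.versions.get(key, 0) != seen_version:
                return False
        for key, value in writes.items():
            self.data[key] = value
            self.versions[key] = self.versions.get(key, 0) + 1
        return True


store = ToyStore()
_, v = store.read('answer')           # transaction A watches 'answer'
store.commit({}, {'answer': 41})      # a concurrent writer slips in
assert store.commit({'answer': v}, {'answer': 42}) is False  # A aborts
```

Skipping the watch (an empty `watched` dict above) corresponds to the relaxed, sequential behavior in the last table row: faster, but blind to concurrent updates.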

Isolation

| | Reads | Writes |
| :--- | :---: | :---: |
| Transactions over Snapshots | | |
| Transactions w/out Snapshots | | |

Durability

Durability doesn't apply to in-memory systems by definition. In hybrid or persistent systems we prefer to disable it by default, as almost every DBMS built on top of a KVS implements its own durability mechanism. Even more so in distributed databases, where three separate Write-Ahead Logs may exist:

  • in KVS,
  • in DBMS,
  • in Distributed Consensus implementation.

If you still need durability, flush writes on commits with an optional flag. In the C driver you would call ustore_transaction_commit() with the ::ustore_option_write_flush_k flag.

Containers and Cloud Deployments

The entire DBMS fits into a sub-100 MB Docker image. Run the following command to pull and run the container, exposing the Apache Arrow Flight server on port 38709. Client SDKs communicate through that same port by default.

docker run -d --rm --name ustore-test -p 38709:38709 unum/ustore

The default configuration file can be retrieved with:

cat /var/lib/ustore/config.json

The simplest way to connect and test would be the following command:

python ...

Pre-packaged UStore images are available on multiple platforms:

  • Docker Hub image: v0.7.
  • RedHat OpenShift operator: v0.7.
  • Amazon AWS Marketplace images:
    • Free Community Edition: v0.4.
    • In-Memory Edition: 🔜
    • Performance Edition: 🔜

Don't hesitate to commercialize and redistribute UStore.

Configuration

Tuning databases is as much art as it is science. Projects like RocksDB provide dozens of knobs to optimize the behavior. We allow forwarding specialized configuration files to the underlying engine.

{
    "version": "1.0",
    "directory": "./tmp/"
}

The simple config above is enough for roughly 80% of users. It can be extended to utilize multiple devices or directories, or to forward a specialized engine config:

{
    "version": "1.0",
    "directory": "/var/lib/ustore",
    "data_directories": [
        {
            "path": "/dev/nvme0p0/",
            "max_size": "100GB"
        },
        {
            "path": "/dev/nvme1p0/",
            "max_size": "100GB"
        }
    ],
    "engine": {
        "config_file_path": "./engine_rocksdb.ini"
    }
}

Database collections can also be configured with JSON files.

Key Sizes

As of the current version, keys are 64-bit signed integers, allowing unique keys in the range [0, 2^63). 128-bit builds with UUIDs are coming, but variable-length keys are highly discouraged. Why so?

Using variable-length keys forces numerous limitations onto the design of a Key-Value Store. Firstly, it implies slow character-wise comparisons, a performance killer on modern superscalar CPUs. Secondly, it forces keys and values to be stored together on disk to minimize the metadata needed for navigation. Lastly, it violates our simple logical view of a KVS as a "persistent memory allocator", putting a lot more responsibility on it.


The recommended approach to dealing with string keys is:

  1. Choose a mechanism to generate unique integer keys (UID). Ex: monotonically increasing values.
  2. Use the "paths" modality to build up a persistent hash map of strings to UIDs.
  3. Use those UIDs to address the rest of the data in binary, document and graph modalities.

This results in a single conversion point from string to integer representations, keeping most of the system snappy and the C-level interfaces simpler than they otherwise would be.
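That pattern can be sketched with a plain dict standing in for the "paths" collection and a monotonic counter as the UID generator; none of these names come from the UStore API, this is only an illustration of the single conversion point:

```python
import itertools


class PathRegistry:
    """Toy single conversion point from string keys to integer UIDs."""

    def __init__(self):
        self._uids = {}                      # stand-in for the "paths" modality
        self._counter = itertools.count(1)   # monotonically increasing UIDs

    def uid(self, path: str) -> int:
        # Assign a fresh UID the first time a string is seen, reuse it after.
        if path not in self._uids:
            self._uids[path] = next(self._counter)
        return self._uids[path]


registry = PathRegistry()
lex = registry.uid('users/lex')
assert registry.uid('users/lex') == lex   # stable: same string, same UID
assert registry.uid('users/joe') != lex   # unique per string
# Everywhere else in the system, only the integer `lex` is used as the key.
```

From this point on, the binary, document, and graph modalities only ever see fixed-width integer keys.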

Value Sizes

We can only address values of 4 GB or smaller at the moment. Why? Key-Value Stores are generally intended for high-frequency operations, and accessing or modifying files of 4 GB and larger thousands of times each second is impossible on modern hardware. So we stick to smaller length types, which makes using the Apache Arrow representation slightly easier and allows the KVS to compress indexes better.
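If a value ever has to exceed that limit, the usual application-side workaround is to shard it across several keys. A minimal sketch with hypothetical helper names and a deliberately tiny chunk size:

```python
CHUNK = 1024  # illustrative; in reality you would stay under the 4 GB ceiling


def split_blob(blob: bytes, chunk: int = CHUNK) -> dict:
    """Split an oversized value into under-limit chunks, keyed by index."""
    return {i: blob[off:off + chunk]
            for i, off in enumerate(range(0, len(blob), chunk))}


def join_blob(chunks: dict) -> bytes:
    """Reassemble the original value by concatenating chunks in key order."""
    return b''.join(chunks[i] for i in sorted(chunks))


payload = bytes(range(256)) * 10        # 2560 bytes -> 3 chunks
parts = split_blob(payload)
assert len(parts) == 3
assert join_blob(parts) == payload
```

Each chunk would then be stored under its own 64-bit key, e.g. derived from the object's UID plus the chunk index.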

Roadmap

Our development roadmap is public and is hosted within the GitHub repository. Upcoming tasks include:

  • Builds for Arm, macOS.
  • Persistent Snapshots.
  • Continuous Replication.
  • Document-schema validation.
  • Richer drivers for GoLang, Java, JavaScript.
  • Improved Vector Search.
  • Collection-level configuration.
  • Owning and non-owning C++ wrappers.
  • Horizontal Scaling.

Read the full roadmap in our docs.

Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distributions

No source distribution files are available for this release.

Built Distributions

  • `ustore-0.13.12-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl` (46.6 MB): CPython 3.10, manylinux glibc 2.17+, x86-64
  • `ustore-0.13.12-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl` (44.8 MB): CPython 3.10, manylinux glibc 2.17+, ARM64
  • `ustore-0.13.12-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl` (46.6 MB): CPython 3.9, manylinux glibc 2.17+, x86-64
  • `ustore-0.13.12-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl` (44.8 MB): CPython 3.9, manylinux glibc 2.17+, ARM64

File details

Hashes for `ustore-0.13.12-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl`:

| Algorithm | Hash digest |
| :--- | :--- |
| SHA256 | `d2a92990c635cf06c4dfa6f24af05721a38f4574b2461cb607cbf90eb1c00a74` |
| MD5 | `ff0d5a1145d45316607b8ea02f280f5e` |
| BLAKE2b-256 | `0f82d79a5e8dae56db2de32711485c76a8bf02c4e03f8221342a3dd85a75cbfe` |

File details

Hashes for `ustore-0.13.12-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl`:

| Algorithm | Hash digest |
| :--- | :--- |
| SHA256 | `d4f10f57e6ae9ecfe9fe5b84206710138c23a65814ac72f3fb986ac607a0a031` |
| MD5 | `413461934be006b38b791e4b6e057584` |
| BLAKE2b-256 | `2190c9008bc2408e498ca6b520eb7e44d15dc948d836d1a7c32afce85f8c2db7` |

File details

Hashes for `ustore-0.13.12-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl`:

| Algorithm | Hash digest |
| :--- | :--- |
| SHA256 | `6b69040b38d7ee126ad3256d3a519e5b5b1272bda3e2dec5dcff29b085ca4640` |
| MD5 | `58e085beaf6d93024a5f4a44829a355a` |
| BLAKE2b-256 | `2bf77c2b27468efd907bc86ad3c3525f8c16191ba7f0f555abc0cd286f88628b` |

File details

Hashes for `ustore-0.13.12-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl`:

| Algorithm | Hash digest |
| :--- | :--- |
| SHA256 | `e22ef3a600f95450404fcc090bf026fef50435446e872f46b1e078d34f042cb3` |
| MD5 | `6d80c85a2ecbecb3765528b4dd2876c2` |
| BLAKE2b-256 | `79e74d317e63ff3634a6c7a24b3b9d65062bd1f6879dfe5c51e5dbedcd499ca7` |
