
Python bindings for Unum's Universal Key-Value store.

Project description

UKV

Universal Binary Interface
For the Fastest Databases in Existence
From Unum with ❤️

RocksDB • LevelDB • UDisk • UMem
Blobs • Documents • Graphs • Texts • Vectors
C • C++ • Python • Java • GoLang • Apache Arrow


What is UKV?

UKV is an open C-layer binary standard for "Create, Read, Update, Delete" operations, or CRUD for short.

Many databases exist today, providing similar functionality and performance under different interfaces. This is a problem for DBMS users, introducing vendor lock-in and extra friction in adopting newer and better technologies.

If databases do similar things, let's standardize the interface and compete for the best implementation. That way, migrations are more manageable, and the system can be modularized into parts, giving users absolute flexibility in choosing between different:

  • Forms of data, or Modalities,
  • Key-Value Store implementations, or Engines,
  • Distribution forms, or Packaging,
  • Client SDKs, or Frontends for higher-level languages.
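
To make that concrete, here is a minimal sketch of binary CRUD through the Python frontend. The module and attribute names (ukv.umem, DataBase, .main) follow the project's published examples but should be checked against the docs; swapping the engine should only require changing the import.

# A minimal sketch, assuming the embedded in-memory engine is exposed as `ukv.umem`
# and that collections behave like dict-style mappings from integer keys to bytes.
import ukv.umem as ukv                    # swap for another engine module, e.g. ukv.rocksdb

db = ukv.DataBase()                       # open an embedded database
main = db.main                            # the default, unnamed collection

main[42] = 'purpose of life'.encode()     # Create / Update
value = main[42]                          # Read; expect a bytes-like value back
assert bytes(value) == b'purpose of life'
assert 42 in main                         # existence check
del main[42]                              # Delete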

The BLAS of CRUD

Generic standards have existed in computing since 1979. The best-known is BLAS, the locomotive of numerical methods across all disciplines for the past 50 years. Every deep-learning framework you use relies on BLAS. What is the standard your DBMS can be built around?

UKV: Small Map

We have yet to pass the test of time, as BLAS has, but we can surpass it in modularity and provide a better reference implementation. Today, Intel, Nvidia, AMD, GraphCore, Cerebras, and many others ship optimized implementations of BLAS for their hardware. Similarly, we ship proprietary, heavily-tested, and extensively-benchmarked implementations of UKV to our customers, but even the provided FOSS reference design aims to be better than whatever OLTP DBMS you are using today.

Why not use the LevelDB or RocksDB interface directly?


Features

We love to think of UKV as a "Transactional 1 Zoned 2 Memory Allocator 3" that can abstract both persistent and volatile memory. Every DBMS needs this abstraction. But UKV has more to offer!

  • ACID Transactions across collections
  • Persistent Snapshots
  • Operation-level WATCH-es
  • BSON, JSON, MessagePack documents support
  • JSON Patches & Merge-Patches
  • JSON Pointers Addressing
  • Native Apache Arrow format support
  • Apache Arrow Flight server implementation
  • Bulk Scans, Random Sampling
  • Pandas Tabular interface
  • NetworkX Graph interface
  • PyTorch & TensorFlow Data-Loaders

1: For ACID. 2: For having named collections. 3: Because keys are integers, like uintptr_t, and values are virtually contiguous memory blocks of variable length.
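
As an illustration of the first feature, here is a hedged sketch of a transaction spanning two named collections through the Python frontend. The db.transact() context manager and the module name are assumptions based on the project's examples, so verify the exact call in the docs.

# Hedged sketch of an ACID transaction across two named collections.
import ukv.umem as ukv                     # assumed module name for the embedded engine

db = ukv.DataBase()
with db.transact() as txn:                 # assumed context-manager entry point
    txn['accounts'][1] = b'{"balance": 100}'
    txn['audit-log'][1] = b'opened account 1'
# Both writes become visible atomically on exit, or neither does.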

HTAP: OLAP + OLTP

Thanks to the modular architecture, you can take only the pieces you need. The most apparent combinations cover the most immediate use cases:

  1. Getting a Python, GoLang, or Java wrapper for vanilla RocksDB or LevelDB (see the sketch after this list).
  2. Serving them via Apache Arrow Flight RPC to Spark, Kafka, or PyTorch.
  3. Embedded Document and GraphDB that will avoid networking overheads.
  4. Tiering DBMS deployment between UMem and persistent memory.
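
For the first combination, here is a hedged sketch of a thin Python wrapper over a persistent engine. The ukv.rocksdb module name and the default-constructed DataBase are assumptions drawn from the project's examples; the real constructor may take a path or config argument.

# Hedged sketch: Python wrapper over the RocksDB-backed engine.
import ukv.rocksdb as ukv                  # assumed module name

db = ukv.DataBase()                        # may require a path/config argument in practice
db.main[7] = b'persisted value'
assert bytes(db.main[7]) == b'persisted value'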

But that is just the tip of the iceberg. Let's dive under the surface of our data lake.

🐦🐦 + 🪨 → ☠️☠️

It is normal these days to have hundreds of databases for one project, at least one for every kind of workload. So if you are building an online marketplace, it may look like this:

  • Postgres for account details and payments,
  • MongoDB for product descriptions and reviews,
  • Neo4J to link related products,
  • AWS S3 to store product pictures and videos.

When the data is updated, you have to apply changes across all those instances, manually rolling them back if one of the parts fails. Every system has a different API, different guarantees, and runtime constraints. Already sounds like too many wasted engineering hours.

UKV: HTAP

Since it is not one store, different teams end up working on different versions of the data. Some of those datasets can quickly become irrelevant, especially in:

  • Fraud Detection, where parties constantly adapt,
  • Recommender Systems, if new products and content appear every day,
  • Real-Time Pricing, where market conditions dictate the cost of services.

By the time row-major OLTP data is exported into the column-major OLAP store, it might be too late. Every data scientist knows: "Garbage In, Garbage Out". Outdated input will give you an obsolete result, and the business will lose money.

But if you have just 1 Hybrid Store, the pain is gone. And the engineering teams can spend time doing something productive rather than packaging and versioning endless Parquet files around your system.

One Data Lake to Serve Them All


Backend

A backend is a composition of just 2-3 parts: an Engine, a key-value store for the serialized representation; an implementation of Modalities, the various serialization and indexing approaches for structured data; and a Distribution form, such as the implementation of some web protocol for communication with the outside world.

UKV: Backend

Our horizontal scaling capabilities aren't ideal yet. Sharding graphs and vector collections is hard. We can already do it at least as well as the others do, but we don't want to stop there. More news to come.

Our vertical scaling, however, is the best in the industry. Even with just a single node, in a 2U chassis in 2022, we can quickly get 24x 16 TB of NVMe storage connected to 2x CPU sockets, totaling 384 TB of space and capable of yielding ~120 GB/s of read throughput, of which our in-house engine can already sustain ~70 GB/s. With NVMe-oF, this can scale horizontally to Petabytes of low-latency storage served through 200 GBit/s Mellanox Infiniband fibers! Combining it with the numerous features above and GPU acceleration, one can get an all-in-one Data Lake with the feel of Pandas, RAPIDS speed, Hadoop scale, and Postgres consistency.

Engines

The following engines can be used almost interchangeably. Historically, LevelDB was the first one. RocksDB then improved on functionality and performance. Now it serves as the foundation for half of the DBMS startups.

|                      | LevelDB | RocksDB  | UDisk | UMem |
| -------------------- | ------- | -------- | ----- | ---- |
| Speed                | 1x      | 2x       | 10x   | 30x  |
| Persistent           |         |          |       |      |
| Transactional        |         |          |       |      |
| Block Device Support |         |          |       |      |
| Encryption           |         |          |       |      |
| Watches              |         |          |       |      |
| Snapshots            |         |          |       |      |
| Random Sampling      |         |          |       |      |
| Bulk Enumeration     |         |          |       |      |
| Named Collections    |         |          |       |      |
| Open-Source          |         |          |       |      |
| Compatibility        | Any     | Any      | Linux | Any  |
| Maintainer           | Google  | Facebook | Unum  | Unum |

UMem and UDisk are both designed and maintained by Unum. Both are feature-complete, but the most crucial feature our infrastructure provides is performance. Being fast in memory is easy. The core logic of UMem can be found in the templated header-only consistent_set library.

UCSB 10 TB Results

Designing UDisk was a much more challenging 7-year long endeavour. It included inventing new tree-like structures, implementing partial kernel bypass with io_uring, complete bypass with SPDK, GPU acceleration, and even a custom internal filesystem. UDisk is the first engine to be designed from scratch with parallel architectures and kernel-bypass in mind.

Jump to Benchmarks.

Modalities

The same DBMS can contain multiple collections. Each collection can store BLOBs or any modality of structured data. Data of different modalities can't be stored in the same collection. ACID transactions across modalities are supported.

|                           | Documents                                           | Graphs                                 | Vectors                                               |
| ------------------------- | --------------------------------------------------- | -------------------------------------- | ----------------------------------------------------- |
| Values                    | JSON-like Hierarchical Objects                      | Labeled Directed Relations             | High-Dimensional Embeddings                           |
| Specialized Functionality | JSON ⇔ BSON ⇔ MessagePack, Sub-Document Operations  | Gather Neighbors, Count Vertex Degrees | Quantization, K-Approximate Nearest-Neighbors Search  |
| Examples                  | MongoDB, Postgres, MySQL                            | Neo4J, TigerGraph                      | Elastic Search, Pinecone                              |

One of our core objectives was to select the minimal core set of functions for each modality, so that implementing them is easy for any passionate developer. If the low-level interfaces are flexible, making the high-level interfaces rich is easy.
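
A hedged sketch of the documents modality through the Python frontend; the .docs accessor and the dict-like, field-level reads are assumptions based on the project's examples.

# Hedged sketch of storing and querying a JSON-like document.
import ukv.umem as ukv                     # assumed module name for the embedded engine

db = ukv.DataBase()
people = db['people'].docs                 # assumed accessor for the documents modality

people[10] = {'name': 'Lex', 'lastname': 'Fridman'}   # stored as a hierarchical document
assert people[10]['name'] == 'Lex'                    # sub-document (field-level) read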

Frontend: SDK ∨ API

UKV for Python and for C++ look very different. Our Python SDK mimics other Python libraries - Pandas and NetworkX. Similarly, the C++ library provides the interface C++ developers expect.
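
For example, the graph frontend is meant to feel like NetworkX. Below is a hedged sketch, assuming a .graph accessor and NetworkX-style method names as shown in the project's examples.

# Hedged sketch of the NetworkX-flavored graph frontend.
import ukv.umem as ukv                     # assumed module name for the embedded engine

db = ukv.DataBase()
net = db['links'].graph                    # assumed accessor for the graph modality

net.add_edge(1, 2)                         # NetworkX-style mutators
net.add_edge(1, 3)
assert net.has_edge(1, 2)                  # NetworkX-style queries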

UKV: Frontends

As we know, people use different languages for different purposes. Some C-level functionality isn't implemented for certain languages, either because there was no demand for it or because we haven't gotten to it yet.

| Name             | Transact | Collections | Batches | Docs | Graphs | Copies |
| ---------------- | -------- | ----------- | ------- | ---- | ------ | ------ |
| C Standard       |          |             |         |      |        | 0      |
| C++ SDK          |          |             |         |      |        | 0      |
| Python SDK       |          |             |         |      |        | 0-1    |
| GoLang SDK       |          |             |         |      |        | 1      |
| Java SDK         |          |             |         |      |        | 1      |
| Arrow Flight API |          |             |         |      |        | 1-2    |

Some frontends here have entire ecosystems around them! Apache Arrow Flight API, for instance, has its own bindings for C, C++, C#, Go, Java, JavaScript, Julia, MATLAB, Python, R, Ruby and Rust.



Documentation

For guidance on installation, development, deployment, and administration, see our documentation.

Installation

The entire DBMS fits into a sub-100 MB Docker image. Run the following command to pull and run the container, exposing the Apache Arrow Flight API server on port 38709. Client SDKs will communicate through that same port.

docker run --rm --name TestUKV -p 38709:38709 unum/ukv
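
Once the container is running, any Apache Arrow Flight client can reach it on that port. Below is a hedged sketch using the official pyarrow client; which Flight actions the UKV server actually exposes is an assumption to verify against the docs, and the UKV client SDKs wrap this protocol for you.

import pyarrow.flight as flight

# grpc://localhost:38709 matches the port exposed by the container above.
client = flight.connect('grpc://localhost:38709')

# Enumerate the actions the server advertises; the exact action set is
# implementation-defined, so treat this as a connectivity check.
for action in client.list_actions():
    print(action.type)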

For C/C++ clients and for the embedded distribution of UKV, CMake is the default form of installation. It may require installing Arrow separately.

# Fetch UKV sources and make its CMake targets available
include(FetchContent)
FetchContent_Declare(
    ukv
    GIT_REPOSITORY https://github.com/unum-cloud/UKV.git
    GIT_SHALLOW TRUE
)
FetchContent_MakeAvailable(ukv)
include_directories(${ukv_SOURCE_DIR}/include)

After that, you only need to choose a linking target, such as ukv_rocksdb, ukv_umem, or ukv_flight_client. For Conan users, a shorter alternative for C/C++ is available:

conan install ukv

For Python users, it is the classical:

pip install ukv

Read the full installation guide in our docs.

Getting Started

Testing

We split tests into 4 categories:

  1. Compilation: Validate meta-programming.
  2. API: Prevent passing incompatible function arguments.
  3. Unit: Short tests that cover most of the functionality.
  4. Stress: Very long, multithreaded tests.

All unit tests are packed into a single executable to simplify running it during development. Every backend produces one such executable. The in-memory embedded variant is generally used for debugging any non-engine level logic.

The stress tests, on the other hand, can run for days and simulate millions of concurrent transactions, ensuring the data remains intact. Any additions, especially to the stress tests, will be highly welcomed!

Read the full testing guide in our docs.

Benchmarks

It is always best to implement an application-specific benchmark, as every use case is different. Still, for the binary-layer logic, we have built a dedicated project to evaluate persistent data structures - UCSB. It doesn't depend on UKV and uses the native interfaces of all the engines to put them on an equal footing.

All engines were benchmarked for weeks using UCSB. We have already published the results for BLOB-layer abstractions on 10 TB and, previously, 1 TB collections.

For more advanced modality-specific workloads, we have the following benchmarks provided in this repo:

  • Twitter. It takes the .ndjson dump of their GET statuses/sample API and imports it into the Documents collection. We then measure the speed of random gathers at the document level, at the field level, and for multi-field tabular exports. We also construct a graph from the same data in a separate collection and evaluate graph construction time and traversals from random starting points.
  • Tabular. Similar to the previous benchmark, but generalizes it to arbitrary datasets with some additional context. It supports Parquet and CSV input files.
  • Vector. Given a memory-mapped file with a big matrix, it builds an Approximate Nearest-Neighbors Search index from the rows of that matrix and evaluates both construction and query time.

We are working hard to prepare a comprehensive overview of different parts of UKV compared to industry-standard tools, both on our hardware and on the most common instances across public clouds.

Read the full benchmarking guide in our docs.

Tooling

Tools are built on top of the UKV interface and are unaware of the underlying backend implementation. They are meant to simplify DevOps and DBMS management. The following tools are currently in the works:

  • Bulk dataset imports and exports for industry-standard Parquet, NDJSON and CSV files.
  • Rolling backups and replication.
  • Visualization tools and dashboards.

Read the full tooling guide in our docs.

Development & Licensing

This is an umbrella project for a number of FOSS libraries. Sometimes a couple of libraries can be used together for the same task to complement each other's functionality. For example, simdjson is used for document validation and sampling, while yyjson is used for updates.

A full list of dependencies is available at the link below, together with detailed descriptions of reference implementations. Licensing depends on which parts you are using.

Read the full development and contribution guide in our docs.

Presets, Limitations and FAQ

  • Keys are 64-bit integers by default. Why?
  • Values are binary strings under 4 GB long. Why?
  • Transactions are ACI(D) by default. What does that mean?
  • Why not use the LevelDB or RocksDB interface? Answered
  • Why not use SQL, MQL, or Cypher? Answered



Download files


Source Distributions

No source distribution files are available for this release.

Built Distribution

ukv-0.3.0.1670765169560169952-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (28.8 MB)

Uploaded for CPython 3.8 on manylinux (glibc 2.17+), x86-64.

File details


Hashes for ukv-0.3.0.1670765169560169952-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl

| Algorithm   | Hash digest                                                      |
| ----------- | ---------------------------------------------------------------- |
| SHA256      | f170a4d40854a691fc614c892a8800435042b96890794bd9889233fcbcbf0eb2 |
| MD5         | f70dd563f79fe8f077e5d5f314214272                                 |
| BLAKE2b-256 | 10401a9adc7bf81b9dda7db61d847a4a65385e3577203ec5e5a1d7319b139c8c |

