Python bindings for Unum's Universal Key-Value store.
UKV
Universal Binary Interface
For the Fastest Databases in Existence
From Unum with ❤️
RocksDB • LevelDB • UDisk • UMem
Blobs • Documents • Graphs • Texts • Vectors
C • C++ • Python • Java • GoLang • Apache Arrow
What is UKV?
UKV is an open C-layer binary standard for "Create, Read, Update, Delete" operations, or CRUD for short.
Many databases exist today, providing similar functionality and performance under different interfaces. It is a problem for DBMS users, introducing vendor locks and additional friction for adopting newer and better technologies.
If databases do similar things, let's standardize the interface and compete for the best implementation. That way, migrations are more manageable, and the system can be modularized into parts, giving users absolute flexibility in choosing between different:
- Forms of data, or Modalities,
- Key-Value Store implementations, or Engines,
- Distribution forms, or Packaging,
- Client SDKs, or Frontends for higher-level languages.
The BLAS of CRUD
Generic computing standards have existed since 1979. The best-known is BLAS, the locomotive of numerical methods across all disciplines for the past 50 years. Every deep-learning framework you use relies on BLAS. So what is the standard your DBMS can be built around?
We have yet to pass the test of time, like BLAS, but we can surpass them in modularity and provide a better reference implementation. Today, Intel, Nvidia, AMD, GraphCore, Cerebras, and many others ship optimized implementations of BLAS for their hardware. Similarly, we ship proprietary heavily-tested and extensively-benchmarked implementations of UKV to our customers. Still, we want the Community Edition to be a major improvement over modern OLTP DBMS solutions of today.
Features
We love to think of UKV as a "Transactional¹ Zoned² Memory Allocator³" that can abstract both persistent and volatile memory. Every DBMS needs this abstraction, but UKV has more to offer!

1: For ACID. 2: For having named collections. 3: Because keys are integers, like `uintptr_t`, and values are virtually continuous memory blocks of variable length.
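To make the "transactional zoned memory allocator" framing concrete, here is a minimal sketch in plain Python. This is illustrative only, not the real UKV API or data layout: zones stand in for named collections, keys are integers, values are opaque byte strings, and a toy all-or-nothing transaction restores previous values on failure.

```python
# Illustrative sketch, NOT the real UKV API: a "transactional, zoned
# memory allocator" in miniature. Zones play the role of named
# collections, keys are integers, values are opaque byte strings.
class MiniStore:
    def __init__(self):
        self.zones = {}  # collection name -> {int key: bytes value}

    def collection(self, name):
        return self.zones.setdefault(name, {})

    def transaction(self, changes):
        """Apply {(zone, key): bytes} atomically: all or nothing."""
        touched = []
        try:
            for (zone, key), value in changes.items():
                col = self.collection(zone)
                touched.append(((zone, key), col.get(key)))
                col[key] = value
        except Exception:
            # Roll back in reverse order on failure.
            for (zone, key), old in reversed(touched):
                col = self.collection(zone)
                if old is None:
                    col.pop(key, None)
                else:
                    col[key] = old
            raise

db = MiniStore()
db.transaction({("graphs", 42): b"edges", ("docs", 42): b"{}"})
print(db.collection("docs")[42])  # b'{}'
```

A real engine adds isolation, durability, and concurrency on top of this shape; the point here is only the interface: zones, integer keys, binary values, atomic batches.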
HTAP: OLAP + OLTP
Thanks to modular architecture, you can take the pieces you need. The most apparent combinations would cover the most immediate use cases like:
- Getting a Python, GoLang, or Java wrapper for vanilla RocksDB or LevelDB.
- Serving them via Apache Arrow Flight RPC to Spark, Kafka, or PyTorch.
- Embedded Document and GraphDB that will avoid networking overheads.
- Tiering DBMS deployment between UMem and persistent memory.
But that is just the tip of the iceberg. Let's dive under the surface of our data lake.
🐦🐦 + 🪨 → ☠️☠️
It is normal these days to have hundreds of databases for one project - at least one for every kind of workload. So if you are building an online marketplace, it may look like this:

- Postgres for account details and payments,
- MongoDB for product descriptions and reviews,
- Neo4J to link related products,
- AWS S3 to store product pictures and videos,
- ClickHouse for activity logs,
- Redis for real-time session state.
When the data is updated, you have to apply changes across all those instances, manually rolling them back if one of the parts fails. Every system has a different API, different guarantees, and different runtime constraints. To us, that sounded like too many wasted engineering hours.
... So we have spent years fixing that 😅
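The multi-store pain described above can be sketched in a few lines. This is a deliberately naive illustration with plain dicts standing in for Postgres, MongoDB, and Redis; the store names and update steps are hypothetical, and the manual compensation loop is exactly the kind of fragile glue code a single hybrid store removes.

```python
# Naive sketch of updating one logical record across several independent
# stores. The dicts stand in for Postgres, MongoDB, and Redis; the manual
# rollback loop is the glue code a single hybrid store makes unnecessary.
accounts, products, sessions = {}, {}, {}

def update_everywhere(user_id, payment, review, session):
    applied = []
    steps = [(accounts, payment), (products, review), (sessions, session)]
    try:
        for store, value in steps:
            old = store.get(user_id)
            store[user_id] = value
            applied.append((store, old))  # remember for compensation
    except Exception:
        # Manually undo partial work if any step fails.
        for store, old in reversed(applied):
            if old is None:
                store.pop(user_id, None)
            else:
                store[user_id] = old
        raise

update_everywhere(7, "paid", "5 stars", "active")
```

With real databases, each step also has its own connection handling, retry semantics, and failure modes, so the actual glue code is far worse than this sketch.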
As the data lives in more than one store, different teams end up working on different versions of it. Some of those datasets can quickly become irrelevant, especially in:
- Fraud Detection, where parties constantly adapt,
- Recommender Systems, if new products and content appear every day,
- Real-Time Pricing, where market conditions dictate the cost of services.
By the time row-major OLTP data is exported into the column-major OLAP store, it might be too late. Every data scientist knows: "garbage in, garbage out". Outdated input will give you an obsolete result, and the business will lose money.
But if you have just 1 Hybrid Store, the pain is gone. And the engineering teams can spend time doing something productive rather than packaging and versioning endless Parquet files around your system.
One Data Lake to Serve Them All
Backend
A backend is a composition of just 2-3 parts:

- An Engine: a key-value store for the serialized representation.
- An implementation of Modalities: various serialization and indexing approaches for structured data.
- A Distribution form: such as the implementation of some web protocol for communication with the outside world.
Our horizontal scaling capabilities aren't ideal yet. Sharding graphs and vector collections is hard. We can do it at least as well as others do, but we don't want to stop there. More news to come.
Our vertical scaling, however, is the best in the industry. Even with a single node in a 2U chassis, in 2022 we can quickly get 24x 16 TB of NVMe storage connected to 2x CPU sockets, totaling 384 TB of space and capable of yielding ~120 GB/s of read throughput, of which our in-house engine can already sustain ~70 GB/s. With NVMe-oF, this can scale horizontally to petabytes of low-latency storage served through 200 GBit/s Mellanox InfiniBand fibers! Combined with the numerous features above and GPU acceleration, one can get an all-in-one Data Lake with the feel of Pandas, the speed of Rapids, the scale of Hadoop, and the consistency of Postgres.
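The capacity arithmetic above is easy to check back-of-the-envelope; this tiny snippet just spells out the numbers quoted in the paragraph.

```python
# Back-of-the-envelope check of the single-node numbers quoted above.
drives, drive_tb = 24, 16
total_tb = drives * drive_tb             # 384 TB per 2U node
raw_read_gbps, sustained_gbps = 120, 70  # hardware vs. engine throughput
utilization = sustained_gbps / raw_read_gbps
print(total_tb, f"{utilization:.0%}")    # 384 58%
```

So the in-house engine already sustains roughly 58% of the raw NVMe read bandwidth on such a node.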
Engines
The following engines can be used almost interchangeably. Historically, LevelDB came first. RocksDB then improved on its functionality and performance, and now serves as the foundation for half of the DBMS startups.
| | LevelDB | RocksDB | UDisk | UMem |
|---|---|---|---|---|
| Speed | 1x | 2x | 10x | 30x |
| Persistent | ✓ | ✓ | ✓ | ✗ |
| Transactional | ✗ | ✓ | ✓ | ✓ |
| Block Device Support | ✗ | ✗ | ✓ | ✗ |
| Encryption | ✗ | ✗ | ✓ | ✗ |
| Watches | ✗ | ✓ | ✓ | ✓ |
| Snapshots | ✓ | ✓ | ✓ | ✗ |
| Random Sampling | ✗ | ✗ | ✓ | ✓ |
| Bulk Enumeration | ✗ | ✗ | ✓ | ✓ |
| Named Collections | ✗ | ✓ | ✓ | ✓ |
| Open-Source | ✓ | ✓ | ✗ | ✓ |
| Compatibility | Any | Any | Linux | Any |
| Maintainer | | | Unum | Unum |
UMem and UDisk are both designed and maintained by Unum. Both are feature-complete, but the most crucial feature our infrastructure provides is performance. Being fast in memory is easy: the core logic of UMem can be found in the templated header-only `consistent_set` library.

Designing UDisk was a much more challenging seven-year-long endeavour. It included inventing new tree-like structures, implementing partial kernel bypass with `io_uring`, complete bypass with `SPDK`, GPU acceleration, and even a custom internal filesystem. UDisk is the first engine to be designed from scratch with parallel architectures and kernel bypass in mind.
Modalities
The same DBMS can contain multiple collections. Each collection can store BLOBs or any modality of structured data. Data of different modalities can't be stored in the same collection. ACID transactions across modalities are supported.
| | Documents | Graphs | Vectors |
|---|---|---|---|
| Values | JSON-like Hierarchical Objects | Labeled Directed Relations | High-Dimensional Embeddings |
| Specialized Functionality | JSON ⇔ BSON ⇔ MessagePack, Sub-Document Operations | Gather Neighbors, Count Vertex Degrees | Quantization, K-Approximate Nearest-Neighbors Search |
| Examples | MongoDB, Postgres, MySQL | Neo4J, TigerGraph | Elastic Search, Pinecone |
One of our core objectives was to select a minimal core set of functions for each modality, so that any passionate developer can implement them. If the low-level interfaces are flexible, making the high-level interfaces rich is easy.
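To illustrate how a modality can sit on top of a plain integer-keyed KV store, here is a hedged sketch of the graph modality's "gather neighbors" and "count vertex degrees" operations. The layout is hypothetical, not UKV's actual on-disk format: each vertex key maps to a packed adjacency list of 64-bit neighbor IDs.

```python
# Illustrative layout, NOT UKV's actual format: a graph modality over an
# integer-keyed KV store. Each vertex key maps to its neighbors packed as
# little-endian 64-bit integers, so gathers and degree counts are single
# lookups over the binary value.
import struct

kv = {}  # int vertex id -> packed neighbor ids (bytes)

def add_edges(vertex, neighbors):
    kv[vertex] = struct.pack(f"<{len(neighbors)}q", *neighbors)

def gather_neighbors(vertex):
    blob = kv.get(vertex, b"")
    return list(struct.unpack(f"<{len(blob) // 8}q", blob))

def vertex_degree(vertex):
    # Degree falls out of the value length alone: 8 bytes per neighbor.
    return len(kv.get(vertex, b"")) // 8

add_edges(1, [2, 3, 5])
print(gather_neighbors(1), vertex_degree(1))  # [2, 3, 5] 3
```

Because the values are just binary strings, the same KV engine can serve this graph layout, serialized documents, or packed embedding vectors without knowing the difference.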
Frontend: SDK ∨ API
UKV for Python and UKV for C++ look very different. Our Python SDK mimics other Python libraries - Pandas and NetworkX. Similarly, the C++ SDK provides the interface C++ developers expect.

People use different languages for different purposes, so some C-level functionality isn't implemented for every frontend - either because there was no demand for it, or because we haven't gotten to it yet.
Name | Transact | Collections | Batches | Docs | Graphs | Copies |
---|---|---|---|---|---|---|
C Standard | ✓ | ✓ | ✓ | ✓ | ✓ | 0 |
C++ SDK | ✓ | ✓ | ✓ | ✓ | ✓ | 0 |
Python SDK | ✓ | ✓ | ✓ | ✓ | ✓ | 0-1 |
GoLang SDK | ✓ | ✓ | ✓ | ✗ | ✗ | 1 |
Java SDK | ✓ | ✓ | ✗ | ✗ | ✗ | 1 |
Arrow Flight API | ✓ | ✓ | ✓ | ✓ | ✓ | 1-2 |
Some frontends here have entire ecosystems around them! Apache Arrow Flight API, for instance, has its own bindings for C, C++, C#, Go, Java, JavaScript, Julia, MATLAB, Python, R, Ruby and Rust.
Documentation
For guidance on installation, development, deployment, and administration, see our documentation.
Installation
The entire DBMS fits into a sub-100 MB Docker image. Run the following command to pull and run the container, exposing the Apache Arrow Flight API server on port `38709`. Client SDKs will also communicate through that same port by default.

```sh
docker run --rm --name TestUKV -p 38709:38709 unum/ukv
```
For C/C++ clients and for the embedded distribution of UKV, CMake is the default form of installation. It may require installing Arrow separately.

```cmake
include(FetchContent)
FetchContent_Declare(
    ukv
    GIT_REPOSITORY https://github.com/unum-cloud/UKV.git
    GIT_SHALLOW TRUE
)
FetchContent_MakeAvailable(ukv)
include_directories(${ukv_SOURCE_DIR}/include)
```
After that, you only need to choose a linking target, such as `ukv_rocksdb`, `ukv_umem`, `ukv_flight_client`, or something else.
For Python users, it is the classical:

```sh
pip install ukv
```

which will bring all the libraries packed into a single wheel: `ukv.umem`, `ukv.rocksdb`, `ukv.leveldb`, `ukv.flight_client`.
Getting Started
- Using the C Standard Directly
- Most Flexible!
- Most Performant!
- Comparatively verbose.
- Using C++ SDK
- Using Python SDK
- Using Java SDK
- Using GoLang SDK
- Using Arrow Flight API
Testing
We split tests into 4 categories:
- Compilation: Validate meta-programming.
- API: Prevent passing incompatible function arguments.
- Unit: Short and cover most of the functionality.
- Stress: Very long and multithreaded.
All unit tests are packed into a single executable to simplify running them during development. Every backend produces one such executable. The in-memory embedded variant is generally used for debugging any non-engine-level logic.
The stress tests, on the other hand, can run for days and simulate millions of concurrent transactions, ensuring the data remains intact. Any additions, especially to the stress tests, will be highly welcomed!
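The invariant-checking idea behind such stress tests can be shown in miniature. This is a toy sketch, not UKV's test harness: many threads apply transfers between two keys, a lock stands in for the transactional engine, and the test asserts that the conserved total survives the concurrency.

```python
# Toy version of a transactional stress test: concurrent transfers
# between two keys must preserve the total balance. A lock stands in
# for the engine's transaction mechanism; real stress tests run
# millions of such transactions for days.
import threading

store = {"a": 1000, "b": 1000}
lock = threading.Lock()

def transfer(n):
    for _ in range(n):
        with lock:  # atomic read-modify-write, like a tiny transaction
            store["a"] -= 1
            store["b"] += 1

threads = [threading.Thread(target=transfer, args=(1000,)) for _ in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# The conserved quantity must be intact after all concurrent updates.
assert store["a"] + store["b"] == 2000
```

Remove the lock and the assertion starts failing intermittently, which is exactly the class of bug these tests exist to catch.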
Benchmarks
It is always best to implement an application-specific benchmark, as every use case is different. Still, for the binary layer logic, we have built a dedicated project to evaluate persistent data structures - UCSB. It doesn't depend on UKV and uses the native interfaces of all the engines to put them on an equal footing.
All engines were benchmarked for weeks using UCSB. We have already published the results for BLOB-layer abstractions for 10 TB, and, previously, 1 TB collections.
For more advanced modality-specific workloads, we have the following benchmarks provided in this repo:
- Twitter. Takes the `.ndjson` dump of Twitter's `GET statuses/sample` API and imports it into the Documents collection. We then measure the speed of random gathers at document level, field level, and in multi-field tabular exports. We also construct a graph from the same data in a separate collection, and evaluate graph construction time and traversals from random starting points.
- Tabular. Similar to the previous benchmark, but generalizes it to arbitrary datasets with some additional context. It supports Parquet and CSV input files.
- Vector. Given a memory-mapped file with a big matrix, builds an Approximate Nearest Neighbors Search index from the rows of that matrix. Evaluates both construction and query time.
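The shape of the Twitter benchmark's import step can be sketched with the standard library. This is a hedged illustration, not the benchmark's code: the field names (`id`, `user`, `mentions`) are hypothetical placeholders, and plain dicts stand in for the Documents and Graph collections.

```python
# Hedged sketch of the Twitter benchmark's import shape: parse an
# .ndjson dump, store each record in a documents collection, and build
# a graph collection (author -> mentioned users) from the same data.
# Field names are hypothetical, not Twitter's real schema.
import json

ndjson = "\n".join([
    json.dumps({"id": 1, "user": 10, "mentions": [11, 12]}),
    json.dumps({"id": 2, "user": 11, "mentions": [10]}),
])

docs, graph = {}, {}
for line in ndjson.splitlines():
    record = json.loads(line)
    docs[record["id"]] = record  # documents collection: whole record
    # graph collection: one adjacency list per author
    graph.setdefault(record["user"], []).extend(record["mentions"])

print(docs[1]["user"], graph[10])  # 10 [11, 12]
```

The benchmark then times exactly these two paths at scale: random gathers against `docs` and traversals over `graph`.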
We are working hard to prepare a comprehensive overview of different parts of UKV compared to industry-standard tools. On both our hardware and most common instances across public clouds.
Tooling
Tools are built on top of the UKV interface and aren't aware of the underlying backend implementation. They are meant to simplify DevOps and DBMS management. The following tools are currently in the works:
- Bulk dataset imports and exports for industry-standard Parquet, NDJSON and CSV files.
- Rolling backups and replication.
- Visualization tools and dashboards.
Development & Licensing
This is an umbrella project for a number of FOSS libraries.
Sometimes a couple of libraries are used together for the same task, complementing each other's functionality. For example, `simdjson` is used for document validation and sampling, while `yyjson` is used for updates.
A full list of dependencies is available at the link below, together with detailed descriptions of reference implementations. Licensing depends on which parts you are using.
Read full development and contribution guide in our docs here.
Presets, Limitations and FAQ
- Keys are 64-bit integers, by default. Why?
- Values are binary strings under 4 GB long. Why?
- Transactions are ACI(D) by-default. What does it mean?
- Why not use LevelDB or RocksDB interface? Answered
- Why not use SQL, MQL or Cypher? Answered
File details
Details for the file `ukv-0.3.0.1670848302842692794-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl`.
File metadata
- Download URL: ukv-0.3.0.1670848302842692794-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl
- Upload date:
- Size: 28.8 MB
- Tags: CPython 3.10, manylinux: glibc 2.17+ x86-64
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/4.0.1 CPython/3.11.1
File hashes

Algorithm | Hash digest
---|---
SHA256 | `6d866a1a1eb585fb76bf1da7bae2eb1c578a339f77fc1d14c661072f34447a02`
MD5 | `a899a02ad5818e8e0086d963acd546f7`
BLAKE2b-256 | `4acb0c7158db0f0e0777fe11f22f2e5e289b591a4b6234acd50f8ca554307777`