
High-performance Qlik QVD file reader/writer with Parquet/Arrow/DataFusion support — Rust-powered Python bindings (PyArrow, pandas, Polars)

Project description

qvd

Crates.io PyPI License: MIT

High-performance Rust library for reading, writing, and converting Qlik QVD files, with Parquet/Arrow interop, DataFusion SQL, a streaming reader, a CLI tool, and Python bindings (PyArrow, pandas, Polars).

First and only QVD crate on crates.io.

Features

  • Read/Write QVD — byte-identical roundtrip, zero-copy where possible
  • Parquet ↔ QVD — convert in both directions with compression support (snappy, zstd, gzip, lz4)
  • Arrow RecordBatch — convert QVD to/from Arrow for integration with DataFusion, DuckDB, Polars
  • DataFusion SQL — register QVD files as tables and query them with SQL
  • DuckDB integration — use QVD data in DuckDB via Arrow bridge (Rust and Python)
  • Streaming reader — read QVD files in chunks without loading everything into memory
  • EXISTS() index — O(1) hash lookups, like Qlik's EXISTS() function; powers streaming filtered reads up to 2.5x faster than Qlik Sense
  • CLI tool — qvd-cli convert, inspect, head, schema, filter
  • Python bindings — PyArrow, pandas, Polars support via zero-copy Arrow bridge. 20-35x faster than PyQvd
  • Zero dependencies for core QVD read/write (Parquet/Arrow/DataFusion/Python are optional features)

Performance

Tested on 399 real QVD files (11 KB to 2.8 GB) — every file roundtrips byte-identically (MD5 match).

Selected benchmarks:

File               Size     Rows        Columns  Read   Write
sample_tiny.qvd    11 KB    12          5        0.0s   0.0s
sample_small.qvd   418 KB   2,746       8        0.0s   0.0s
sample_medium.qvd  41 MB    465,810     12       0.5s   0.0s
sample_large.qvd   587 MB   5,458,618   15       6.1s   0.4s
sample_xlarge.qvd  1.7 GB   87,617,047  8        23.6s  1.6s
sample_huge.qvd    2.8 GB   11,907,648  42       24.3s  2.4s
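
If you want to reproduce the byte-identical check on your own files, a minimal sketch using only the public read/write API and the standard library (comparing raw bytes, which is equivalent to comparing MD5 digests) could look like this; the file names are placeholders:

use qvd::{read_qvd_file, write_qvd_file};

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Read the original file and write it back out unchanged
    let table = read_qvd_file("sample_medium.qvd")?;
    write_qvd_file(&table, "roundtrip.qvd")?;

    // Compare the raw bytes of both files — they should be identical
    let original = std::fs::read("sample_medium.qvd")?;
    let rewritten = std::fs::read("roundtrip.qvd")?;
    assert_eq!(original, rewritten, "roundtrip is not byte-identical");
    Ok(())
}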

vs PyQvd (Pure Python)

File              PyQvd    qvd (Rust)  Speedup
10 MB, 1.4M rows  5.0s     0.17s       29x
41 MB, 466K rows  8.5s     0.5s        16x
480 MB, 12M rows  79.4s    2.3s        35x
1.7 GB, 87M rows  >10 min  29.6s       >20x

Streaming EXISTS() filter — vs Qlik Sense

Filtered read with EXISTS() + column selection — 2.5x faster than Qlik Sense.

The streaming reader loads only symbol tables (small, unique values) into memory, then scans the index table in chunks. For each row, only the filter column is decoded first. If the row matches, the selected columns are decoded. Non-matching rows are skipped entirely — no memory allocated.
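
Conceptually, the per-chunk scan looks like the sketch below. This is illustrative only — the types and the filter_chunk helper are simplified stand-ins, not the crate's internal API; in the real reader the per-row symbol indices are unpacked from the bit-stuffed index table chunk by chunk.

use std::collections::HashSet;

// Sketch: each row is a list of symbol indices, one per column; the symbol
// tables (unique values per column) are already in memory.
fn filter_chunk(
    rows: &[Vec<usize>],
    symbols: &[Vec<String>],
    filter_col: usize,
    select_cols: &[usize],
    exists: &HashSet<String>,
    out: &mut Vec<Vec<String>>,
) {
    for row in rows {
        // Decode only the filter column first.
        let value = &symbols[filter_col][row[filter_col]];
        if !exists.contains(value) {
            continue; // non-matching row: no other column is decoded, nothing is allocated
        }
        // Only matching rows decode (and allocate) the selected columns.
        let decoded = select_cols
            .iter()
            .map(|&c| symbols[c][row[c]].clone())
            .collect();
        out.push(decoded);
    }
}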

Benchmark: 1.7 GB QVD, 87.6M rows × 8 columns → filter by 2 values, select 3 columns → 20.4M rows × 3 columns output

Qlik Sense script:

types:
LOAD * INLINE [%Type_ID
7
9];

filtered:
LOAD %Key_ID, DateField_BK, %Type_ID
FROM [lib://data/large_table.qvd](qvd)
WHERE EXISTS(%Type_ID);

STORE filtered INTO [lib://data/result.qvd](qvd);
DROP TABLE filtered;

qvdrs CLI equivalent:

qvd-cli filter large_table.qvd result.qvd \
    --column %Type_ID --values 7,9 \
    --select "%Key_ID,DateField_BK,%Type_ID"

                     Qlik Sense  qvdrs (streaming)
Read + filter        ~28s        7.1s
Total (→ QVD)        ~28s        11.4s
Total (→ Parquet)    —           15.5s
Speedup                          2.5× (QVD) / 1.8× (Parquet)

Recommendation: For large QVD files, always use read_filtered() (or qvd-cli filter) instead of loading the full file and filtering afterwards. The streaming approach uses dramatically less memory (only matched rows are held) and is significantly faster because non-matching rows are never fully decoded.
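
In code, the two approaches compare like this, using only the public API shown elsewhere in this README (read_filtered is covered in the Rust Quick Start below); the column name and values mirror the benchmark above:

use qvd::{read_qvd_file, filter_rows_by_exists_fast, open_qvd_stream, ExistsIndex};

let index = ExistsIndex::from_values(&["7", "9"]);

// Full load, then filter: the entire file is decoded into memory first
let table = read_qvd_file("large_table.qvd")?;
let col_idx = 0; // index of the %Type_ID column
let matching_rows = filter_rows_by_exists_fast(&table, col_idx, &index);

// Streaming filtered read: only the symbol tables plus the matched rows are held in memory
let mut stream = open_qvd_stream("large_table.qvd")?;
let filtered = stream.read_filtered("%Type_ID", &index, None, 65536)?;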

Installation

Rust

# Core QVD read/write (zero dependencies)
[dependencies]
qvd = "0.4"

# With Parquet/Arrow support
[dependencies]
qvd = { version = "0.4", features = ["parquet_support"] }

# With DataFusion SQL support
[dependencies]
qvd = { version = "0.4", features = ["datafusion_support"] }

CLI

cargo install qvd --features cli

Python

pip install qvdrs

Or with uv:

uv pip install qvdrs

Quick Start — Rust

Read/Write QVD

use qvd::{read_qvd_file, write_qvd_file};

let table = read_qvd_file("data.qvd")?;
println!("Rows: {}, Cols: {}", table.num_rows(), table.num_cols());

// Byte-identical roundtrip
write_qvd_file(&table, "output.qvd")?;

Convert Parquet ↔ QVD

use qvd::{convert_parquet_to_qvd, convert_qvd_to_parquet, ParquetCompression};

// Parquet → QVD
convert_parquet_to_qvd("input.parquet", "output.qvd")?;

// QVD → Parquet (with zstd compression)
convert_qvd_to_parquet("input.qvd", "output.parquet", ParquetCompression::Zstd)?;

Arrow RecordBatch

use qvd::{read_qvd_file, qvd_to_record_batch, record_batch_to_qvd};

let table = read_qvd_file("data.qvd")?;
let batch = qvd_to_record_batch(&table)?;
// Use with DataFusion, DuckDB, Polars, etc.

// Arrow → QVD
let qvd_table = record_batch_to_qvd(&batch, "my_table")?;

DataFusion SQL (feature datafusion_support)

use datafusion::prelude::*;
use qvd::register_qvd;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let ctx = SessionContext::new();

    // Register QVD file as a table
    register_qvd(&ctx, "sales", "sales.qvd")?;

    // Run SQL queries directly on QVD data
    let df = ctx.sql("SELECT Region, SUM(Amount) as total
                      FROM sales
                      GROUP BY Region
                      ORDER BY total DESC").await?;
    df.show().await?;

    Ok(())
}

You can also register multiple QVD files and JOIN them:

register_qvd(&ctx, "orders", "orders.qvd")?;
register_qvd(&ctx, "customers", "customers.qvd")?;

let df = ctx.sql("SELECT c.Name, COUNT(o.OrderID) as order_count
                   FROM orders o
                   JOIN customers c ON o.CustomerID = c.CustomerID
                   GROUP BY c.Name").await?;

DuckDB via Arrow (Rust)

DuckDB can ingest Arrow RecordBatches directly — no file conversion needed:

use qvd::{read_qvd_file, qvd_to_record_batch};

let table = read_qvd_file("data.qvd")?;
let batch = qvd_to_record_batch(&table)?;

// Pass the Arrow RecordBatch to DuckDB via its Arrow interface
// See: https://docs.rs/duckdb/latest/duckdb/
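
One possible way to complete the bridge — a sketch, assuming the duckdb crate is built with its vtab-arrow feature, uses an arrow version compatible with qvd's, and that its arrow_recordbatch_to_query_params helper is available (that helper belongs to duckdb-rs, not to this crate):

use duckdb::{arrow_recordbatch_to_query_params, Connection};
use qvd::{qvd_to_record_batch, read_qvd_file};

fn main() -> Result<(), Box<dyn std::error::Error>> {
    let table = read_qvd_file("data.qvd")?;
    let batch = qvd_to_record_batch(&table)?;

    // Expose the RecordBatch to DuckDB's arrow(...) table function and query it with SQL
    let conn = Connection::open_in_memory()?;
    let params = arrow_recordbatch_to_query_params(batch);
    let mut stmt = conn.prepare("SELECT * FROM arrow(?, ?) LIMIT 5")?;
    let batches: Vec<_> = stmt.query_arrow(params)?.collect();
    println!("Fetched {} result batch(es)", batches.len());
    Ok(())
}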

Streaming Reader

use qvd::open_qvd_stream;

let mut reader = open_qvd_stream("huge_file.qvd")?;
println!("Total rows: {}", reader.total_rows());

while let Some(chunk) = reader.next_chunk(65536)? {
    // Process 65K rows at a time
    println!("Chunk: {} rows starting at {}", chunk.num_rows, chunk.start_row);
}

EXISTS() — O(1) Lookup

Like Qlik's EXISTS() function — build an index of unique values from one table and use it to check or filter another table in O(1) per row.

use qvd::{read_qvd_file, ExistsIndex, filter_rows_by_exists_fast};

// Build index from the "clients" table
let clients = read_qvd_file("clients.qvd")?;
let index = ExistsIndex::from_column(&clients, "ClientID").unwrap();

// O(1) lookup — does this value exist?
assert!(index.exists("12345"));
println!("Unique clients: {}", index.len());

// Filter another table — get row indices where ClientID exists in the clients table
let facts = read_qvd_file("facts.qvd")?;
let col_idx = 0; // index of "ClientID" column in facts table
let matching_rows = filter_rows_by_exists_fast(&facts, col_idx, &index);
println!("Matching rows: {}", matching_rows.len());

Streaming EXISTS() — Filtered Read (recommended for large files)

For large QVD files, use streaming read_filtered() instead of loading everything into memory. Only matching rows are loaded — 2.5x faster than Qlik Sense, uses dramatically less memory.

use qvd::{open_qvd_stream, ExistsIndex, write_qvd_file};

// 1. Build EXISTS index — from another table or from explicit values
let index = ExistsIndex::from_values(&["7", "9"]);

// 2. Open streaming reader (loads only symbol tables, not the full index table)
let mut stream = open_qvd_stream("large_table.qvd")?;

// 3. Stream + filter + select columns — only matching rows loaded into memory
let filtered = stream.read_filtered(
    "%Type_ID",                                     // filter column
    &index,                                         // EXISTS index
    Some(&["%Key_ID", "DateField_BK", "%Type_ID"]), // select columns (None = all)
    65536,                                          // chunk size
)?;
println!("Matched: {} rows x {} cols", filtered.num_rows(), filtered.num_cols());

// 4. Save result
write_qvd_file(&filtered, "output.qvd")?;

You can also build an EXISTS index from another QVD table's column:

let clients = read_qvd_file("clients.qvd")?;
let index = ExistsIndex::from_column(&clients, "ClientID").unwrap();
drop(clients); // free memory before opening the large file

let mut stream = open_qvd_stream("transactions.qvd")?;
let filtered = stream.read_filtered("ClientID", &index, None, 65536)?;

Quick Start — Python

Basic usage

import qvd

# Read QVD
table = qvd.read_qvd("data.qvd")
print(table.columns, table.num_rows)
print(table.head(5))

# Save QVD
table.save("output.qvd")

# Parquet ↔ QVD
qvd.convert_parquet_to_qvd("input.parquet", "output.qvd")
qvd.convert_qvd_to_parquet("input.qvd", "output.parquet", compression="zstd")

# Load Parquet as QvdTable
table = qvd.QvdTable.from_parquet("input.parquet")
table.save("output.qvd")
table.save_as_parquet("output.parquet", compression="snappy")

# EXISTS — O(1) lookup (like Qlik's EXISTS() function)
clients = qvd.read_qvd("clients.qvd")
idx = qvd.ExistsIndex(clients, "ClientID")

# Check if a value exists
print("12345" in idx)           # True/False
print(idx.exists("12345"))      # same thing
print(len(idx))                 # number of unique values

# Check multiple values at once
results = idx.exists_many(["12345", "67890", "99999"])
print(results)  # [True, True, False]

# Filter rows from another table — returns list of matching row indices
facts = qvd.read_qvd("facts.qvd")
matching_rows = qvd.filter_exists(facts, "ClientID", idx)
print(f"Matched {len(matching_rows)} rows out of {facts.num_rows}")

PyArrow

import qvd

# QVD → PyArrow RecordBatch (zero-copy via Arrow C Data Interface)
table = qvd.read_qvd("data.qvd")
batch = table.to_arrow()

# Or directly:
batch = qvd.read_qvd_to_arrow("data.qvd")

# PyArrow → QVD
table = qvd.QvdTable.from_arrow(batch, table_name="my_table")
table.save("output.qvd")

pandas

import qvd

# QVD → pandas DataFrame (via Arrow, zero-copy where possible)
df = qvd.read_qvd("data.qvd").to_pandas()

# Or directly:
df = qvd.read_qvd_to_pandas("data.qvd")

# pandas → QVD (via PyArrow round-trip)
import pyarrow as pa
batch = pa.RecordBatch.from_pandas(df)
table = qvd.QvdTable.from_arrow(batch, table_name="my_table")
table.save("output.qvd")

Polars

import qvd

# QVD → Polars DataFrame
df = qvd.read_qvd("data.qvd").to_polars()

# Or directly:
df = qvd.read_qvd_to_polars("data.qvd")

# Polars → QVD (via PyArrow round-trip)
batch = df.to_arrow()
table = qvd.QvdTable.from_arrow(batch, table_name="my_table")
table.save("output.qvd")

DuckDB (Python)

import qvd
import duckdb

# QVD → DuckDB (via Arrow, zero-copy)
batch = qvd.read_qvd_to_arrow("data.qvd")
result = duckdb.sql("SELECT * FROM batch WHERE amount > 100")

# Or query multiple QVD files:
sales = qvd.read_qvd_to_arrow("sales.qvd")
customers = qvd.read_qvd_to_arrow("customers.qvd")
result = duckdb.sql("""
    SELECT c.Name, SUM(s.Amount) as total
    FROM sales s
    JOIN customers c ON s.CustomerID = c.CustomerID
    GROUP BY c.Name
""")

CLI

Install:

cargo install qvd --features cli

Convert between formats

# Parquet → QVD
qvd-cli convert input.parquet output.qvd

# QVD → Parquet (default compression: snappy)
qvd-cli convert input.qvd output.parquet

# QVD → Parquet with specific compression
qvd-cli convert input.qvd output.parquet --compression zstd
qvd-cli convert input.qvd output.parquet --compression gzip
qvd-cli convert input.qvd output.parquet --compression lz4
qvd-cli convert input.qvd output.parquet --compression none

# Rewrite QVD (re-generate from internal representation)
qvd-cli convert input.qvd output.qvd

# Recompress Parquet
qvd-cli convert input.parquet output.parquet --compression zstd

Inspect QVD metadata

qvd-cli inspect data.qvd

Output example:

File:       data.qvd
Size:       41.3 MB
Table:      SalesData
Rows:       465,810
Columns:    12
Created:    2024-01-15 10:30:00
Build:      14.0
RecordSize: 89 bytes
Read time:  0.50s

Column                         Symbols BitWidth   Bias FmtType  Tags
--------------------------------------------------------------------------------
OrderID                         465810        20      0      0  $numeric, $integer
CustomerID                       12500        14      0      0  $numeric, $integer
Region                               5         3      0      0  $text
Amount                          389201        19      0      2  $numeric

Preview rows

# Show first 10 rows (default)
qvd-cli head data.qvd

# Show first 50 rows
qvd-cli head data.qvd --rows 50

Filter rows with EXISTS() (streaming)

# Filter by column value(s) — streaming, memory-efficient
qvd-cli filter large.qvd output.qvd --column %Type_ID --values 7,9

# Filter + select only specific columns
qvd-cli filter large.qvd output.qvd --column %Type_ID --values 7,9 \
    --select "%Key_ID,DateField_BK,%Type_ID"

# Filter and save as Parquet
qvd-cli filter large.qvd output.parquet --column %Type_ID --values 7,9 \
    --select "%Key_ID,DateField_BK,%Type_ID" --compression zstd

Show Arrow schema

qvd-cli schema data.qvd

Output example:

Arrow Schema for 'data.qvd':

  OrderID                        Int64
  CustomerID                     Int64
  Region                         Utf8
  Amount                         Float64 (nullable)
  OrderDate                      Date32

Architecture

src/
├── lib.rs          — public API, re-exports
├── error.rs        — error types (QvdError, QvdResult)
├── header.rs       — XML header parser/writer (custom, zero-dep)
├── value.rs        — QVD data types (QvdSymbol, QvdValue)
├── symbol.rs       — symbol table binary reader/writer
├── index.rs        — index table bit-stuffing reader/writer
├── reader.rs       — high-level QVD reader
├── writer.rs       — high-level QVD writer + QvdTableBuilder
├── exists.rs       — ExistsIndex with HashSet + filter functions
├── streaming.rs    — streaming chunk-based QVD reader
├── parquet.rs      — Parquet/Arrow ↔ QVD conversion (optional)
├── datafusion.rs   — DataFusion TableProvider for SQL on QVD (optional)
├── python.rs       — PyO3 bindings with PyArrow/pandas/Polars (optional)
└── bin/qvd.rs      — CLI binary (optional)

Feature Flags

Feature             Dependencies              Description
(default)           none                      Core QVD read/write
parquet_support     arrow, parquet, chrono    Parquet/Arrow conversion
datafusion_support  + datafusion, tokio       SQL queries on QVD via DataFusion
cli                 + clap                    CLI binary
python              + pyo3, arrow/pyarrow     Python bindings with PyArrow/pandas/Polars

Author

Stanislav Chernov (@bintocher)

License

MIT — see LICENSE

