
qvd

High-performance Rust library for reading, writing, and converting Qlik QVD files, with Parquet/Arrow interop, DataFusion SQL, a streaming reader, a CLI tool, and Python bindings (PyArrow, pandas, Polars).

First and only QVD crate on crates.io.

Features

  • Read/Write QVD — byte-identical roundtrip, zero-copy where possible
  • Parquet ↔ QVD — convert in both directions with compression support (snappy, zstd, gzip, lz4)
  • Arrow RecordBatch — convert QVD to/from Arrow for integration with DataFusion, DuckDB, Polars
  • DataFusion SQL — register QVD files as tables and query them with SQL
  • DuckDB integration — use QVD data in DuckDB via Arrow bridge (Rust and Python)
  • Streaming reader — read QVD files in chunks without loading everything into memory
  • EXISTS() index — O(1) hash lookups, like Qlik's EXISTS() function; powers streaming filtered reads that are 2.5x faster than Qlik Sense
  • CLI tool — qvd-cli with the convert, inspect, head, schema, and filter subcommands
  • Python bindings — PyArrow, pandas, Polars support via zero-copy Arrow bridge
  • Zero dependencies for core QVD read/write (Parquet/Arrow/DataFusion/Python are optional features)

Performance

Tested on 399 real QVD files (11 KB to 2.8 GB); every roundtrip is byte-identical (MD5 match).

Selected benchmarks:

File               Size    Rows        Columns  Read   Write
sample_tiny.qvd    11 KB   12          5        0.0s   0.0s
sample_small.qvd   418 KB  2,746       8        0.0s   0.0s
sample_medium.qvd  41 MB   465,810     12       0.5s   0.0s
sample_large.qvd   587 MB  5,458,618   15       6.1s   0.4s
sample_xlarge.qvd  1.7 GB  87,617,047  8        23.6s  1.6s
sample_huge.qvd    2.8 GB  11,907,648  42       24.3s  2.4s
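
The byte-identity claim can be verified with a roundtrip check along these lines (a minimal sketch built on the read/write API from the Quick Start below; the output path is a placeholder):

use qvd::{read_qvd_file, write_qvd_file};

fn roundtrip_is_byte_identical(path: &str) -> Result<bool, Box<dyn std::error::Error>> {
    // Decode the QVD, then re-encode it to a temporary file.
    let table = read_qvd_file(path)?;
    write_qvd_file(&table, "roundtrip.qvd")?;
    // Comparing the raw bytes is equivalent to comparing MD5 digests.
    Ok(std::fs::read(path)? == std::fs::read("roundtrip.qvd")?)
}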

Streaming EXISTS() filter vs Qlik Sense

Filtered read with EXISTS() + column selection — 2.5x faster than Qlik Sense.

The streaming reader loads only symbol tables (small, unique values) into memory, then scans the index table in chunks. For each row, only the filter column is decoded first. If the row matches, the selected columns are decoded. Non-matching rows are skipped entirely — no memory allocated.
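
Conceptually, each chunk is scanned like the sketch below (hypothetical stand-in types for illustration: a plain Vec of symbol indices instead of the crate's bit-packed index table):

use std::collections::HashSet;

// Each row is a list of symbol-table indices, one per column
// (the real reader decodes these from bit-packed bytes).
fn scan_chunk(
    rows: &[Vec<u32>],
    filter_col: usize,
    filter_symbols: &[String], // symbol table of the filter column
    wanted: &HashSet<String>,  // the EXISTS index
    select_cols: &[usize],
) -> Vec<Vec<u32>> {
    let mut matched = Vec::new();
    for row in rows {
        // Decode only the filter column first.
        let value = &filter_symbols[row[filter_col] as usize];
        // Skip non-matching rows without touching any other column.
        if !wanted.contains(value) {
            continue;
        }
        // Only for matches: materialize the selected columns.
        matched.push(select_cols.iter().map(|&c| row[c]).collect());
    }
    matched
}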

Benchmark: 1.7 GB QVD, 87.6M rows × 8 columns → filter by 2 values, select 3 columns → 20.4M rows × 3 columns output

Qlik Sense script:

types:
LOAD * INLINE [%Type_ID
7
9];

filtered:
LOAD %Key_ID, DateField_BK, %Type_ID
FROM [lib://data/large_table.qvd](qvd)
WHERE EXISTS(%Type_ID);

STORE filtered INTO [lib://data/result.qvd](qvd);
DROP TABLE filtered;

qvdrs CLI equivalent:

qvd-cli filter large_table.qvd result.qvd \
    --column %Type_ID --values 7,9 \
    --select "%Key_ID,DateField_BK,%Type_ID"
                    Qlik Sense   qvdrs (streaming)
Read + filter       ~28s         7.1s
Total (→ QVD)       ~28s         11.4s
Total (→ Parquet)   n/a          15.5s
Speedup                          2.5× (QVD) / 1.8× (Parquet)

Recommendation: For large QVD files, always use read_filtered() (or qvd-cli filter) instead of loading the full file and filtering afterwards. The streaming approach uses dramatically less memory (only matched rows are held) and is significantly faster because non-matching rows are never fully decoded.
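
In code, the two approaches compare as follows (all calls are taken from the examples later in this README; the file name and column index 0 are placeholders):

use qvd::{read_qvd_file, open_qvd_stream, filter_rows_by_exists_fast, ExistsIndex};

let index = ExistsIndex::from_values(&["7", "9"]);

// Full load: every row is decoded before filtering.
let full = read_qvd_file("large_table.qvd")?;
let matching_rows = filter_rows_by_exists_fast(&full, 0, &index);

// Streaming: non-matching rows are skipped; only matches are materialized.
let mut stream = open_qvd_stream("large_table.qvd")?;
let filtered = stream.read_filtered("%Type_ID", &index, None, 65536)?;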

Installation

Rust

# Core QVD read/write (zero dependencies)
[dependencies]
qvd = "0.4.2"

# With Parquet/Arrow support
[dependencies]
qvd = { version = "0.4.2", features = ["parquet_support"] }

# With DataFusion SQL support
[dependencies]
qvd = { version = "0.4.2", features = ["datafusion_support"] }

CLI

Install with cargo:

cargo install qvd --features cli

Or run without installing, using uvx (requires Python; uvx fetches the qvdrs package on demand):

uvx --from qvdrs qvd-cli inspect data.qvd
uvx --from qvdrs qvd-cli convert input.qvd output.parquet
uvx --from qvdrs qvd-cli filter large.qvd output.qvd --column %Type_ID --values 7,9

Python

pip install qvdrs

Or with uv:

uv pip install qvdrs

Quick Start — Rust

Read/Write QVD

use qvd::{read_qvd_file, write_qvd_file};

let table = read_qvd_file("data.qvd")?;
println!("Rows: {}, Cols: {}", table.num_rows(), table.num_cols());

// Byte-identical roundtrip
write_qvd_file(&table, "output.qvd")?;

Convert Parquet ↔ QVD

use qvd::{convert_parquet_to_qvd, convert_qvd_to_parquet, ParquetCompression};

// Parquet → QVD
convert_parquet_to_qvd("input.parquet", "output.qvd")?;

// QVD → Parquet (with zstd compression)
convert_qvd_to_parquet("input.qvd", "output.parquet", ParquetCompression::Zstd)?;

Arrow RecordBatch

use qvd::{read_qvd_file, qvd_to_record_batch, record_batch_to_qvd};

let table = read_qvd_file("data.qvd")?;
let batch = qvd_to_record_batch(&table)?;
// Use with DataFusion, DuckDB, Polars, etc.

// Arrow → QVD
let qvd_table = record_batch_to_qvd(&batch, "my_table")?;

DataFusion SQL (feature datafusion_support)

use datafusion::prelude::*;
use qvd::register_qvd;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let ctx = SessionContext::new();

    // Register QVD file as a table
    register_qvd(&ctx, "sales", "sales.qvd")?;

    // Run SQL queries directly on QVD data
    let df = ctx.sql("SELECT Region, SUM(Amount) as total
                      FROM sales
                      GROUP BY Region
                      ORDER BY total DESC").await?;
    df.show().await?;

    Ok(())
}

You can also register multiple QVD files and JOIN them:

register_qvd(&ctx, "orders", "orders.qvd")?;
register_qvd(&ctx, "customers", "customers.qvd")?;

let df = ctx.sql("SELECT c.Name, COUNT(o.OrderID) as order_count
                   FROM orders o
                   JOIN customers c ON o.CustomerID = c.CustomerID
                   GROUP BY c.Name").await?;

DuckDB via Arrow (Rust)

DuckDB can ingest Arrow RecordBatches directly — no file conversion needed:

use qvd::{read_qvd_file, qvd_to_record_batch};

let table = read_qvd_file("data.qvd")?;
let batch = qvd_to_record_batch(&table)?;

// Pass the Arrow RecordBatch to DuckDB via its Arrow interface
// See: https://docs.rs/duckdb/latest/duckdb/
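
A minimal sketch of the DuckDB side, assuming the duckdb crate's vtab-arrow feature (which provides the arrow(?, ?) table function and arrow_recordbatch_to_query_params; check the linked docs for the current API):

use duckdb::{arrow_recordbatch_to_query_params, Connection};

let conn = Connection::open_in_memory()?;
// Expose the RecordBatch to SQL through DuckDB's arrow() table function.
let mut stmt = conn.prepare("SELECT count(*) FROM arrow(?, ?)")?;
let results: Vec<_> = stmt
    .query_arrow(arrow_recordbatch_to_query_params(batch))?
    .collect();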

Streaming Reader

use qvd::open_qvd_stream;

let mut reader = open_qvd_stream("huge_file.qvd")?;
println!("Total rows: {}", reader.total_rows());

while let Some(chunk) = reader.next_chunk(65536)? {
    // Process 65K rows at a time
    println!("Chunk: {} rows starting at {}", chunk.num_rows, chunk.start_row);
}

EXISTS() — O(1) Lookup

Like Qlik's EXISTS() function — build an index of unique values from one table and use it to check or filter another table in O(1) per row.

use qvd::{read_qvd_file, ExistsIndex, filter_rows_by_exists_fast};

// Build index from the "clients" table
let clients = read_qvd_file("clients.qvd")?;
let index = ExistsIndex::from_column(&clients, "ClientID").unwrap();

// O(1) lookup — does this value exist?
assert!(index.exists("12345"));
println!("Unique clients: {}", index.len());

// Filter another table — get row indices where ClientID exists in the clients table
let facts = read_qvd_file("facts.qvd")?;
let col_idx = 0; // index of "ClientID" column in facts table
let matching_rows = filter_rows_by_exists_fast(&facts, col_idx, &index);
println!("Matching rows: {}", matching_rows.len());

Streaming EXISTS() — Filtered Read (recommended for large files)

For large QVD files, use streaming read_filtered() instead of loading everything into memory. Only matching rows are loaded — 2.5x faster than Qlik Sense, uses dramatically less memory.

use qvd::{open_qvd_stream, ExistsIndex, write_qvd_file};

// 1. Build EXISTS index — from another table or from explicit values
let index = ExistsIndex::from_values(&["7", "9"]);

// 2. Open streaming reader (loads only symbol tables, not the full index table)
let mut stream = open_qvd_stream("large_table.qvd")?;

// 3. Stream + filter + select columns — only matching rows loaded into memory
let filtered = stream.read_filtered(
    "%Type_ID",                                     // filter column
    &index,                                         // EXISTS index
    Some(&["%Key_ID", "DateField_BK", "%Type_ID"]), // select columns (None = all)
    65536,                                          // chunk size
)?;
println!("Matched: {} rows x {} cols", filtered.num_rows(), filtered.num_cols());

// 4. Save result
write_qvd_file(&filtered, "output.qvd")?;

You can also build an EXISTS index from another QVD table's column:

let clients = read_qvd_file("clients.qvd")?;
let index = ExistsIndex::from_column(&clients, "ClientID").unwrap();
drop(clients); // free memory before opening the large file

let mut stream = open_qvd_stream("transactions.qvd")?;
let filtered = stream.read_filtered("ClientID", &index, None, 65536)?;

Quick Start — Python

Basic usage

import qvd

# Read QVD
table = qvd.read_qvd("data.qvd")
print(table.columns, table.num_rows)
print(table.head(5))

# Save QVD
table.save("output.qvd")

# Parquet ↔ QVD
qvd.convert_parquet_to_qvd("input.parquet", "output.qvd")
qvd.convert_qvd_to_parquet("input.qvd", "output.parquet", compression="zstd")

# Load Parquet as QvdTable
table = qvd.QvdTable.from_parquet("input.parquet")
table.save("output.qvd")
table.save_as_parquet("output.parquet", compression="snappy")

# EXISTS — O(1) lookup (like Qlik's EXISTS() function)
clients = qvd.read_qvd("clients.qvd")
idx = qvd.ExistsIndex(clients, "ClientID")

# Check if a value exists
print("12345" in idx)           # True/False
print(idx.exists("12345"))      # same thing
print(len(idx))                 # number of unique values

# Check multiple values at once
results = idx.exists_many(["12345", "67890", "99999"])
print(results)  # [True, True, False]

# Filter rows from another table — returns list of matching row indices
facts = qvd.read_qvd("facts.qvd")
matching_rows = qvd.filter_exists(facts, "ClientID", idx)
print(f"Matched {len(matching_rows)} rows out of {facts.num_rows}")

PyArrow

import qvd

# QVD → PyArrow RecordBatch (zero-copy via Arrow C Data Interface)
table = qvd.read_qvd("data.qvd")
batch = table.to_arrow()

# Or directly:
batch = qvd.read_qvd_to_arrow("data.qvd")

# PyArrow → QVD
table = qvd.QvdTable.from_arrow(batch, table_name="my_table")
table.save("output.qvd")

pandas

import qvd

# QVD → pandas DataFrame (via Arrow, zero-copy where possible)
df = qvd.read_qvd("data.qvd").to_pandas()

# Or directly:
df = qvd.read_qvd_to_pandas("data.qvd")

# pandas → QVD (via PyArrow round-trip)
import pyarrow as pa
batch = pa.RecordBatch.from_pandas(df)
table = qvd.QvdTable.from_arrow(batch, table_name="my_table")
table.save("output.qvd")

Polars

import qvd

# QVD → Polars DataFrame
df = qvd.read_qvd("data.qvd").to_polars()

# Or directly:
df = qvd.read_qvd_to_polars("data.qvd")

# Polars → QVD (via PyArrow round-trip)
batch = df.to_arrow()
table = qvd.QvdTable.from_arrow(batch, table_name="my_table")
table.save("output.qvd")

DuckDB (Python)

import qvd
import duckdb

# QVD → DuckDB (via Arrow, zero-copy)
batch = qvd.read_qvd_to_arrow("data.qvd")
result = duckdb.sql("SELECT * FROM batch WHERE amount > 100")

# Or query multiple QVD files:
sales = qvd.read_qvd_to_arrow("sales.qvd")
customers = qvd.read_qvd_to_arrow("customers.qvd")
result = duckdb.sql("""
    SELECT c.Name, SUM(s.Amount) as total
    FROM sales s
    JOIN customers c ON s.CustomerID = c.CustomerID
    GROUP BY c.Name
""")

CLI

Install with cargo:

cargo install qvd --features cli

Or run directly via uvx (no install needed):

uvx --from qvdrs qvd-cli <command> [args]

Convert between formats

# Parquet → QVD
qvd-cli convert input.parquet output.qvd

# QVD → Parquet (default compression: snappy)
qvd-cli convert input.qvd output.parquet

# QVD → Parquet with specific compression
qvd-cli convert input.qvd output.parquet --compression zstd
qvd-cli convert input.qvd output.parquet --compression gzip
qvd-cli convert input.qvd output.parquet --compression lz4
qvd-cli convert input.qvd output.parquet --compression none

# Rewrite QVD (re-generate from internal representation)
qvd-cli convert input.qvd output.qvd

# Recompress Parquet
qvd-cli convert input.parquet output.parquet --compression zstd

Inspect QVD metadata

qvd-cli inspect data.qvd

Output example:

File:       data.qvd
Size:       41.3 MB
Table:      SalesData
Rows:       465,810
Columns:    12
Created:    2024-01-15 10:30:00
Build:      14.0
RecordSize: 89 bytes
Read time:  0.50s

Column                         Symbols BitWidth   Bias FmtType  Tags
--------------------------------------------------------------------------------
OrderID                         465810        20      0      0  $numeric, $integer
CustomerID                       12500        14      0      0  $numeric, $integer
Region                               5         3      0      0  $text
Amount                          389201        19      0      2  $numeric

Preview rows

# Show first 10 rows (default)
qvd-cli head data.qvd

# Show first 50 rows
qvd-cli head data.qvd --rows 50

Filter rows with EXISTS() (streaming)

# Filter by column value(s) — streaming, memory-efficient
qvd-cli filter large.qvd output.qvd --column %Type_ID --values 7,9

# Filter + select only specific columns
qvd-cli filter large.qvd output.qvd --column %Type_ID --values 7,9 \
    --select "%Key_ID,DateField_BK,%Type_ID"

# Filter and save as Parquet
qvd-cli filter large.qvd output.parquet --column %Type_ID --values 7,9 \
    --select "%Key_ID,DateField_BK,%Type_ID" --compression zstd

Show Arrow schema

qvd-cli schema data.qvd

Output example:

Arrow Schema for 'data.qvd':

  OrderID                        Int64
  CustomerID                     Int64
  Region                         Utf8
  Amount                         Float64 (nullable)
  OrderDate                      Date32

Architecture

src/
├── lib.rs          — public API, re-exports
├── error.rs        — error types (QvdError, QvdResult)
├── header.rs       — XML header parser/writer (custom, zero-dep)
├── value.rs        — QVD data types (QvdSymbol, QvdValue)
├── symbol.rs       — symbol table binary reader/writer
├── index.rs        — index table bit-stuffing reader/writer
├── reader.rs       — high-level QVD reader
├── writer.rs       — high-level QVD writer + QvdTableBuilder
├── exists.rs       — ExistsIndex with HashSet + filter functions
├── streaming.rs    — streaming chunk-based QVD reader with filtered reads
├── parquet.rs      — Parquet/Arrow ↔ QVD conversion (optional)
├── datafusion.rs   — DataFusion TableProvider for SQL on QVD (optional)
├── python.rs       — PyO3 bindings with PyArrow/pandas/Polars (optional)
└── bin/qvd.rs      — CLI binary (optional)

Feature Flags

Feature             Dependencies             Description
(default)           none                     Core QVD read/write
parquet_support     arrow, parquet, chrono   Parquet/Arrow conversion
datafusion_support  + datafusion, tokio      SQL queries on QVD via DataFusion
cli                 + clap                   CLI binary
python              + pyo3, arrow/pyarrow    Python bindings with PyArrow/pandas/Polars
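
Features combine in the usual Cargo way, for example:

[dependencies]
qvd = { version = "0.4.2", features = ["parquet_support", "datafusion_support"] }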

Author

Stanislav Chernov (@bintocher)

License

MIT — see LICENSE
