Parquet Metadata Reader

Project description

rugo


rugo is a C++17- and Cython-powered file reader for Python. It delivers high-throughput reading of both Parquet files (metadata inspection and an experimental column reader) and JSON Lines files (with schema inference, projection pushdown, and SIMD optimizations). The data-reading API is evolving rapidly and will change in upcoming releases.

Key Features

  • Parquet: Fast metadata extraction backed by an optimized C++17 parser and thin Python bindings.
  • Parquet: Complete schema and row-group details, including encodings, codecs, offsets, bloom filter pointers, and custom key/value metadata.
  • Parquet: Experimental memory-based data reading for PLAIN and RLE_DICTIONARY encoded columns with UNCOMPRESSED, SNAPPY, and ZSTD codecs.
  • JSON Lines: High-performance columnar reader with schema inference, projection pushdown, and SIMD optimizations (19% faster).
  • JSON Lines: Memory-based processing for zero-copy parsing.
  • Works with file paths, byte strings, and contiguous memoryviews.
  • Optional schema conversion helpers for Orso.
  • No runtime dependencies beyond the Python standard library.

Installation

PyPI

pip install rugo

# Optional extras
pip install rugo[orso]
pip install rugo[dev]

From source

git clone https://github.com/mabel-dev/rugo.git
cd rugo
python -m venv .venv
source .venv/bin/activate
make update
make compile
pip install -e .

Requirements

  • Python 3.9 or newer
  • A C++17 compatible compiler (clang, gcc, or MSVC)
  • Cython and setuptools for source builds (installed by the commands above)
  • On x86-64 platforms, an assembler capable of compiling .S sources (bundled with modern GCC/Clang toolchains)
  • ARM/AArch64 platforms (including Apple Silicon) are fully supported with NEON SIMD optimizations

Quickstart

import rugo.parquet as parquet_meta

metadata = parquet_meta.read_metadata("example.parquet")

print(f"Rows: {metadata['num_rows']}")
print("Schema columns:")
for column in metadata["schema_columns"]:
    print(f"  {column['name']}: {column['physical_type']} ({column['logical_type']})")

first_row_group = metadata["row_groups"][0]
for column in first_row_group["columns"]:
    print(
        f"{column['name']}: codec={column['compression_codec']}, "
        f"nulls={column['null_count']}, range=({column['min']}, {column['max']})"
    )

read_metadata returns dictionaries composed of Python primitives, ready for JSON serialisation or downstream processing.
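
Because every value is a plain Python primitive, the result can go straight to json.dumps. A minimal sketch using a hypothetical metadata dictionary in the returned layout (not a real file read):

```python
import json

# Hypothetical metadata in the shape returned by read_metadata
metadata = {
    "num_rows": 3,
    "schema_columns": [
        {"name": "id", "physical_type": "INT64", "logical_type": "NONE", "nullable": False},
    ],
    "row_groups": [],
}

# All values are Python primitives, so serialisation just works
print(json.dumps(metadata, indent=2))
```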

Returned metadata layout

{
    "num_rows": int,
    "schema_columns": [
        {
            "name": str,
            "physical_type": str,
            "logical_type": str,
            "nullable": bool,
        },
        ...
    ],
    "row_groups": [
        {
            "num_rows": int,
            "total_byte_size": int,
            "columns": [
                {
                    "name": str,
                    "path_in_schema": str,
                    "physical_type": str,
                    "logical_type": str,
                    "num_values": Optional[int],
                    "total_uncompressed_size": Optional[int],
                    "total_compressed_size": Optional[int],
                    "data_page_offset": Optional[int],
                    "index_page_offset": Optional[int],
                    "dictionary_page_offset": Optional[int],
                    "min": Any,
                    "max": Any,
                    "null_count": Optional[int],
                    "distinct_count": Optional[int],
                    "bloom_offset": Optional[int],
                    "bloom_length": Optional[int],
                    "encodings": List[str],
                    "compression_codec": Optional[str],
                    "key_value_metadata": Optional[Dict[str, str]],
                },
                ...
            ],
        },
        ...
    ],
}

Fields that are not present in the source Parquet file are reported as None. Minimum and maximum values are decoded into Python types when possible; otherwise hexadecimal strings are returned.
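
When a statistic comes back as a hex string, the raw bytes can still be interpreted manually with the standard library. A sketch for a hypothetical INT32 minimum (Parquet stores plain-encoded fixed-width integers little-endian):

```python
# Hypothetical min value reported as a hex string for an INT32 column
raw = "0a000000"

value_bytes = bytes.fromhex(raw)
# Plain-encoded Parquet integers are little-endian
value = int.from_bytes(value_bytes, "little", signed=True)
print(value)  # 10
```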

Parsing options

All entry points share the same keyword arguments:

  • schema_only (default False): return only the top-level schema, without row-group details.
  • include_statistics (default True): set to False to skip min/max/num_values decoding.
  • max_row_groups (default -1): limit the number of row groups inspected; handy for very large files.

metadata = parquet_meta.read_metadata(
    "large_file.parquet",
    schema_only=False,
    include_statistics=False,
    max_row_groups=2,
)

Working with in-memory data

with open("example.parquet", "rb") as fh:
    data = fh.read()

from_bytes = parquet_meta.read_metadata_from_bytes(data)
from_view = parquet_meta.read_metadata_from_memoryview(memoryview(data))

read_metadata_from_memoryview performs zero-copy parsing when given a contiguous buffer.
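
The zero-copy path relies on the buffer being contiguous. A quick standard-library check (placeholder bytes, no rugo required):

```python
data = b"PAR1...example bytes...PAR1"  # placeholder buffer, not a real Parquet file

view = memoryview(data)
print(view.contiguous)   # True: eligible for zero-copy parsing
print(view.obj is data)  # True: the view wraps the original buffer, no copy

# Slicing a memoryview also avoids copying the underlying bytes
footer = view[-4:]
print(footer.tobytes())  # b'PAR1'
```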

Prototype Data Decoding (Experimental)

API stability: The column-reading functions are experimental and will change without notice while we expand format coverage.

rugo includes a prototype decoder for reading actual column data from Parquet files. This is a limited, experimental feature designed for simple use cases and testing.

Supported Features

  • ✅ UNCOMPRESSED, SNAPPY, and ZSTD codecs
  • ✅ PLAIN encoding
  • ✅ RLE_DICTIONARY encoding
  • ✅ int32, int64, float32, float64, boolean, and string (byte_array) types
  • ✅ Memory-based processing (load once, decode multiple times)
  • ✅ Column selection (decode only the columns you need)
  • ✅ Multi-row-group support

Unsupported Features

  • ❌ Other codecs (GZIP, LZ4, LZO, BROTLI, etc.)
  • ❌ Delta encoding, PLAIN_DICTIONARY, other advanced encodings
  • ❌ Nullable columns with definition levels > 0
  • ❌ Other types (int96, fixed_len_byte_array, date, timestamp, complex types)
  • ❌ Nested structures (lists, maps, structs)

Primary API: Memory-Based Reading

The recommended approach loads Parquet data into memory once and performs all operations on the in-memory buffer:

import rugo.parquet as rp

# Load file into memory once
with open("data.parquet", "rb") as f:
    parquet_data = f.read()

# Check if the data can be decoded
if rp.can_decode_from_memory(parquet_data):
    
    # Read ALL columns from all row groups
    table = rp.read_parquet(parquet_data)
    
    # Or read SPECIFIC columns only
    table = rp.read_parquet(parquet_data, ["name", "age", "salary"])
    
    # Access the structured data
    print(f"Columns: {table['column_names']}")
    print(f"Row groups: {len(table['row_groups'])}")
    
    # Iterate through row groups and columns
    for rg_idx, row_group in enumerate(table['row_groups']):
        print(f"Row group {rg_idx}:")
        for col_idx, column_data in enumerate(row_group):
            col_name = table['column_names'][col_idx]
            if column_data is not None:
                print(f"  {col_name}: {len(column_data)} values")
            else:
                print(f"  {col_name}: Failed to decode")

Data Structure

The read_parquet() function returns a dictionary with this structure:

{
    'success': bool,                    # True if reading succeeded
    'column_names': ['col1', 'col2'],   # List of column names
    'row_groups': [                     # List of row groups
        [col1_data, col2_data],         # Row group 0: list of columns
        [col1_data, col2_data],         # Row group 1: list of columns
        # ... more row groups
    ]
}

Each column's data is a Python list containing the decoded values.
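
Since each row group is a list of column lists, rows can be reassembled with zip. A sketch over a hypothetical result in the structure above, assuming every column decoded successfully (i.e., no None entries):

```python
# Hypothetical result in the shape returned by read_parquet()
table = {
    "success": True,
    "column_names": ["name", "age"],
    "row_groups": [
        [["Alice", "Bob"], [30, 25]],  # row group 0: one list per column
    ],
}

rows = []
for row_group in table["row_groups"]:
    # zip(*columns) transposes column-major data into row tuples
    rows.extend(zip(*row_group))

print(rows)  # [('Alice', 30), ('Bob', 25)]
```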

Performance Benefits

Traditional Approach (Multiple File I/O):

# Each operation reads the file separately
metadata = rp.read_metadata("file.parquet")       # File I/O #1
col1 = rp.decode_column("file.parquet", "col1")   # File I/O #2  
col2 = rp.decode_column("file.parquet", "col2")   # File I/O #3

Memory-Based Approach (Single File I/O):

# Load once, process multiple times
with open("file.parquet", "rb") as f:
    data = f.read()  # File I/O #1 (only)

table = rp.read_parquet(data, ["col1", "col2"])   # In-memory processing

Legacy File-Based API

For backward compatibility, file-based functions are still available:

# Check if a file can be decoded
if rp.can_decode("data.parquet"):
    # Decode a specific column from first row group only
    values = rp.decode_column("data.parquet", "column_name")
    print(values)  # e.g., [1, 2, 3, 4, 5] or ['a', 'b', 'c']

Use Cases

The memory-based API is optimized for:

  • Query engines with metadata-driven pruning
  • ETL pipelines processing multiple Parquet files
  • Data exploration where you need to examine various columns
  • High-performance scenarios minimizing I/O operations

See examples/memory_based_api_example.py and examples/optional_columns_example.py for complete demonstrations.

Note: This decoder is a prototype for educational and testing purposes. For production use with full Parquet support, use PyArrow or FastParquet.

JSON Lines Reading

rugo includes a high-performance JSON Lines reader with schema inference, projection pushdown, and SIMD optimizations.

Features

  • ✅ Fast columnar reading with C++17 implementation and SIMD optimizations
  • ✅ 19% performance improvement from SIMD optimizations (AVX2/SSE2)
  • ✅ Automatic schema inference from JSON data
  • ✅ Projection pushdown (read only needed columns)
  • ✅ Support for int64, double, string, and boolean types
  • ✅ Native null value handling
  • ✅ Memory-based processing (zero-copy parsing)
  • ✅ Orso schema conversion

Quick Example

import rugo.jsonl as rj

# Sample JSON Lines data
data = b'''{"id": 1, "name": "Alice", "age": 30, "salary": 50000.0}
{"id": 2, "name": "Bob", "age": 25, "salary": 45000.0}
{"id": 3, "name": "Charlie", "age": 35, "salary": 55000.0}'''

# Get schema
schema = rj.get_jsonl_schema(data)
print(f"Columns: {[col['name'] for col in schema]}")
# Output: Columns: ['id', 'name', 'age', 'salary']

# Read all columns
result = rj.read_jsonl(data)
print(f"Read {result['num_rows']} rows with {len(result['columns'])} columns")

# Read with projection (only specific columns)
result = rj.read_jsonl(data, columns=['name', 'salary'])
# Only reads 'name' and 'salary' - projection pushdown!

Working with Files

import rugo.jsonl as rj

# Load file into memory
with open("data.jsonl", "rb") as f:
    jsonl_data = f.read()

# Extract schema
schema = rj.get_jsonl_schema(jsonl_data, sample_size=1000)

# Read specific columns only
result = rj.read_jsonl(jsonl_data, columns=['user_id', 'email', 'score'])

# Access columnar data
for i in range(result['num_rows']):
    user_id = result['columns'][0][i]
    email = result['columns'][1][i]
    score = result['columns'][2][i]
    print(f"User {user_id}: {email} - Score: {score}")
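
The positional indexing above has to stay in sync with the requested column order. Pairing the requested names with the returned columns once makes access name-based; a sketch with a hypothetical result, assuming (as in the example above) that columns come back in the order requested:

```python
# Hypothetical projected result in the shape read_jsonl returns
requested = ["user_id", "score"]
result = {"num_rows": 2, "columns": [[1, 2], [9.5, 7.0]]}

# Pair requested names with returned columns once, then index by name
columns = dict(zip(requested, result["columns"]))

for i in range(result["num_rows"]):
    print(columns["user_id"][i], columns["score"][i])
```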

Orso Integration

import rugo.jsonl as rj
from rugo.converters.orso import jsonl_to_orso_schema

# Get JSON Lines schema
jsonl_schema = rj.get_jsonl_schema(data)

# Convert to Orso schema
orso_schema = jsonl_to_orso_schema(jsonl_schema, schema_name="my_table")
print(f"Schema: {orso_schema.name}")
for col in orso_schema.columns:
    print(f"  {col.name}: {col.type}")

Performance

The JSON Lines reader achieves approximately 109K-201K rows/second on wide tables (50 columns), with higher throughput on narrower tables. With SIMD optimizations (AVX2/SSE2), the reader delivers:

  • Full read (50 cols): ~109K rows/second
  • Projection (10 cols): ~174-191K rows/second
  • Projection (5 cols): ~181-201K rows/second
  • Performance improvement: 19% faster with SIMD optimizations

The SIMD implementation uses:

  • AVX2: Processes 32 bytes at once for newline detection and text parsing (preferred)
  • SSE2: Processes 16 bytes at once (fallback)
  • Scalar fallback: Byte-by-byte processing for non-x86 architectures
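
The scalar fallback amounts to a byte-by-byte scan for record boundaries. Conceptually, in Python (a sketch of the idea, not the actual C++ implementation):

```python
data = b'{"a":1}\n{"a":2}\n{"a":3}'

# Scalar equivalent of the SIMD newline scan: locate each record boundary
offsets, start = [], 0
while (pos := data.find(b"\n", start)) != -1:
    offsets.append(pos)
    start = pos + 1

print(offsets)  # [7, 15]: two newlines delimit three records
```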

Comparison with Opteryx

On 50-column datasets, rugo is 2.7-5.6x faster than Opteryx 0.25.1 (release):

  • Full read: 2.7-3.1x faster
  • Projection (10 cols): 3.8-5.4x faster
  • Projection (5 cols): 3.9-5.6x faster

Note: These benchmarks compare against Opteryx 0.25.1 (PyPI release) which uses a Python-based decoder with csimdjson. The main branch (0.26.0+) includes a new Cython-based fast decoder with SIMD optimizations that is expected to be significantly faster.

rugo's advantages:

  • True projection pushdown: Only parse columns you need
  • Memory-based: No file I/O overhead
  • Zero-copy design: Direct memory-to-column conversion
  • Consistent performance: Maintains throughput across dataset sizes

See PERFORMANCE_COMPARISON.md for detailed benchmark results, JSONL_SIMD_OPTIMIZATIONS.md for SIMD optimization details, and OPTERYX_DECODER_ANALYSIS.md for a technical analysis of Opteryx's Cython decoder and potential improvements.

See examples/read_jsonl.py and benchmarks/compare_opteryx_performance.py for complete demonstrations.

Optional Orso conversion

Install the optional extra (pip install rugo[orso]) to enable Orso helpers:

from rugo.converters.orso import extract_schema_only, rugo_to_orso_schema, jsonl_to_orso_schema

# Parquet to Orso
metadata = parquet_meta.read_metadata("example.parquet")
relation = rugo_to_orso_schema(metadata, "example_table")
schema_info = extract_schema_only(metadata)

# JSON Lines to Orso
import rugo.jsonl as rj
jsonl_schema = rj.get_jsonl_schema(data)
relation = jsonl_to_orso_schema(jsonl_schema, "jsonl_table")

See examples/orso_conversion.py and examples/jsonl_orso_conversion.py for complete walkthroughs.

Development

make update     # install build and test tooling (uses uv under the hood)
make compile    # rebuild the Cython extension with -O3 and C++17 flags
make test       # run pytest-based validation (includes PyArrow comparisons)
make lint       # run ruff, isort, pycln, cython-lint
make mypy       # type checking

make compile clears previous build artefacts before rebuilding the extension in-place.

Project layout

rugo/
├── rugo/__init__.py
├── rugo/parquet/
│   ├── parquet_reader.pyx
│   ├── parquet_reader.pxd
│   ├── parquet_reader.cpp
│   ├── metadata.cpp
│   ├── metadata.hpp
│   ├── bloom_filter.cpp
│   ├── decode.cpp
│   ├── decode.hpp
│   ├── compression.cpp
│   ├── compression.hpp
│   ├── thrift.hpp
│   └── vendor/
├── rugo/jsonl_src/
│   ├── jsonl.pyx
│   ├── jsonl.pxd
│   ├── jsonl_reader.cpp
│   └── jsonl_reader.hpp
├── rugo/converters/orso.py
├── examples/
│   ├── read_parquet_metadata.py
│   ├── read_parquet_data.py
│   ├── read_jsonl.py
│   ├── jsonl_orso_conversion.py
│   ├── create_test_file.py
│   └── orso_conversion.py
├── scripts/
│   ├── generate_test_parquet.py
│   └── vendor_compression_libs.py
├── tests/
│   ├── data/
│   ├── test_all_metadata_fields.py
│   ├── test_bloom_filter.py
│   ├── test_decode.py
│   ├── test_jsonl.py
│   ├── test_jsonl_performance.py
│   ├── test_logical_types.py
│   ├── test_orso_converter.py
│   ├── test_statistics.py
│   └── requirements.txt
├── Makefile
├── pyproject.toml
├── setup.py
└── README.md

Status and limitations

  • Active development status (alpha); APIs are evolving and may change between releases.
  • Parquet: Metadata APIs are largely stable. The column-reading API is experimental and will change.
  • JSON Lines: High-performance reader with SIMD optimizations (19% improvement) and basic type support (int64, double, string, boolean).
  • Requires a C++17 compiler when installing from source or editing the Cython bindings.
  • SIMD optimizations (AVX2/SSE2) are automatically enabled on x86-64 platforms.
  • Bloom filter information is exposed via offsets and lengths; higher-level helpers are planned.

License

Licensed under the Apache License 2.0. See LICENSE for full terms.

Maintainer

Created and maintained by Justin Joyce (@joocer). Contributions are welcome via issues and pull requests.

Project details


Download files

Download the file for your platform.

Source Distribution

rugo-0.1.13.tar.gz (447.9 kB)

Uploaded: Source

Built Distributions

rugo-0.1.13-cp312-cp312-musllinux_1_1_x86_64.whl (4.3 MB)

Uploaded: CPython 3.12, musllinux: musl 1.1+ x86-64

rugo-0.1.13-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (3.4 MB)

Uploaded: CPython 3.12, manylinux: glibc 2.17+ x86-64

rugo-0.1.13-cp312-cp312-macosx_11_0_arm64.whl (354.6 kB)

Uploaded: CPython 3.12, macOS 11.0+ ARM64

rugo-0.1.13-cp311-cp311-musllinux_1_1_x86_64.whl (4.3 MB)

Uploaded: CPython 3.11, musllinux: musl 1.1+ x86-64

rugo-0.1.13-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (3.5 MB)

Uploaded: CPython 3.11, manylinux: glibc 2.17+ x86-64

rugo-0.1.13-cp311-cp311-macosx_11_0_arm64.whl (354.0 kB)

Uploaded: CPython 3.11, macOS 11.0+ ARM64

rugo-0.1.13-cp310-cp310-musllinux_1_1_x86_64.whl (4.3 MB)

Uploaded: CPython 3.10, musllinux: musl 1.1+ x86-64

rugo-0.1.13-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (3.4 MB)

Uploaded: CPython 3.10, manylinux: glibc 2.17+ x86-64

rugo-0.1.13-cp310-cp310-macosx_11_0_arm64.whl (351.3 kB)

Uploaded: CPython 3.10, macOS 11.0+ ARM64

rugo-0.1.13-cp39-cp39-musllinux_1_1_x86_64.whl (4.3 MB)

Uploaded: CPython 3.9, musllinux: musl 1.1+ x86-64

rugo-0.1.13-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (3.4 MB)

Uploaded: CPython 3.9, manylinux: glibc 2.17+ x86-64

rugo-0.1.13-cp39-cp39-macosx_11_0_arm64.whl (352.1 kB)

Uploaded: CPython 3.9, macOS 11.0+ ARM64

File details

Details for the file rugo-0.1.13.tar.gz.

File metadata

  • Download URL: rugo-0.1.13.tar.gz
  • Upload date:
  • Size: 447.9 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.7

File hashes

Hashes for rugo-0.1.13.tar.gz
Algorithm Hash digest
SHA256 1682940671eca3d39129882b7f7c9f011f75cd3645f030b7a509c2f1bfbdee04
MD5 e011a91279ca3d852858f843ba96b8a6
BLAKE2b-256 e4f9f499b22eebc1536f00925670a3219bbfe454910a3e048809cc4d8310285a

Provenance

The following attestation bundles were made for rugo-0.1.13.tar.gz:

Publisher: release.yml on mabel-dev/rugo

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.

File details

Details for the file rugo-0.1.13-cp312-cp312-musllinux_1_1_x86_64.whl.

File hashes

Hashes for rugo-0.1.13-cp312-cp312-musllinux_1_1_x86_64.whl
Algorithm Hash digest
SHA256 f7657ff47af616cd466541b1e90b344d2b52eaf8152543825a2130cd5bcffeff
MD5 88babb7675df1bb14ba7b1dd06a79782
BLAKE2b-256 c7f8bd560d65e8f0f250c2857a496f123746f6405d8b2db1d3ca11a6e7b1f28f

Provenance

The following attestation bundles were made for rugo-0.1.13-cp312-cp312-musllinux_1_1_x86_64.whl:

Publisher: release.yml on mabel-dev/rugo

File details

Details for the file rugo-0.1.13-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.

File hashes

Hashes for rugo-0.1.13-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl
Algorithm Hash digest
SHA256 17ce552f6883c7deb127503aa39bb20bc49ebe42e7637e21cebf17945e4a0b3c
MD5 631f50dbd60aeb66d2eefde3cfe4c34e
BLAKE2b-256 331b621466598dd84605e5fb53ae1180956e754d1a212063d2dd4fdd0752d6db

Provenance

The following attestation bundles were made for rugo-0.1.13-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl:

Publisher: release.yml on mabel-dev/rugo

File details

Details for the file rugo-0.1.13-cp312-cp312-macosx_11_0_arm64.whl.

File hashes

Hashes for rugo-0.1.13-cp312-cp312-macosx_11_0_arm64.whl
Algorithm Hash digest
SHA256 8d736879bb7abe4d0f2807de7843fc904f1a4dd76fc1c71295fbe636bbf4b952
MD5 f4d958a3f832ea8c430365a7aa01f7c5
BLAKE2b-256 1ec277b6ae29994c8a9b0330a7124f2743feb6a16971a00d92d34163337e83be

Provenance

The following attestation bundles were made for rugo-0.1.13-cp312-cp312-macosx_11_0_arm64.whl:

Publisher: release.yml on mabel-dev/rugo

File details

Details for the file rugo-0.1.13-cp311-cp311-musllinux_1_1_x86_64.whl.

File hashes

Hashes for rugo-0.1.13-cp311-cp311-musllinux_1_1_x86_64.whl
Algorithm Hash digest
SHA256 e38df11b82a6fb32abe7bdfadd36f9c0ae5f0f04baf313bd133207740557225c
MD5 114db07a057385db7da32fcbc7f83701
BLAKE2b-256 d8b00afe9cefd11beff4d6553067380c9ab14a3caf97d1b4dead38b9dc0e8d87

Provenance

The following attestation bundles were made for rugo-0.1.13-cp311-cp311-musllinux_1_1_x86_64.whl:

Publisher: release.yml on mabel-dev/rugo

File details

Details for the file rugo-0.1.13-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.

File hashes

Hashes for rugo-0.1.13-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl
Algorithm Hash digest
SHA256 641b59b7d612cc7a7d0f650880719a2e7ed628bfc4de2ad5e25da5b234c9f0db
MD5 9a0b8a315cbe1227027237b4f31d54f2
BLAKE2b-256 058ec441d650ed66d9ddc6918d1d376c260fffe3c4f126112c7675b30c8a8496

Provenance

The following attestation bundles were made for rugo-0.1.13-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl:

Publisher: release.yml on mabel-dev/rugo

File details

Details for the file rugo-0.1.13-cp311-cp311-macosx_11_0_arm64.whl.

File hashes

Hashes for rugo-0.1.13-cp311-cp311-macosx_11_0_arm64.whl
Algorithm Hash digest
SHA256 629a4203cd21c67c39e65d55bda481d59af12f8311527de3dfad369ccf71f262
MD5 994d0dea33559b3bc0d5de2b405deebb
BLAKE2b-256 6253daca0f1c958c4e08c5d5a2cdd705319c7a5cdfce5ea81138ebcc0f511ce3

Provenance

The following attestation bundles were made for rugo-0.1.13-cp311-cp311-macosx_11_0_arm64.whl:

Publisher: release.yml on mabel-dev/rugo

File details

Details for the file rugo-0.1.13-cp310-cp310-musllinux_1_1_x86_64.whl.

File hashes

Hashes for rugo-0.1.13-cp310-cp310-musllinux_1_1_x86_64.whl
Algorithm Hash digest
SHA256 64adf9d926244a0558d5101f6b96434ff032ccc66830116bc130e9a3a7327e71
MD5 4445ef1c2ebc4f751d4989d66b1d3e9f
BLAKE2b-256 22d4a3570e85b42b1152afb642e7ffe28b4ba511080f9e525fb6e399cfbaa1f5

Provenance

The following attestation bundles were made for rugo-0.1.13-cp310-cp310-musllinux_1_1_x86_64.whl:

Publisher: release.yml on mabel-dev/rugo

File details

Details for the file rugo-0.1.13-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.

File hashes

Hashes for rugo-0.1.13-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl
Algorithm Hash digest
SHA256 9ff6a533189ae2c27e8df7db12412c5e34af6a33c692ff53a6114b4b060546c7
MD5 22fa9565e67c7b5be795645a69dc6cbd
BLAKE2b-256 cbdfa20e112bba7e5007af53974f3bcb356bc6d5ebcb3f5b62e60c00114b52cb

Provenance

The following attestation bundles were made for rugo-0.1.13-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl:

Publisher: release.yml on mabel-dev/rugo

File details

Details for the file rugo-0.1.13-cp310-cp310-macosx_11_0_arm64.whl.

File hashes

Hashes for rugo-0.1.13-cp310-cp310-macosx_11_0_arm64.whl
Algorithm Hash digest
SHA256 016eeffc641e6d7cd471b300f111bf245f508c29c693d293465415cf11c4122c
MD5 68ce42957eee677b2b564d46c31d6045
BLAKE2b-256 289eaec5d9af94f4defe64f6a1bc739b933a67a86c8390ac52b9a54e06009da1

Provenance

The following attestation bundles were made for rugo-0.1.13-cp310-cp310-macosx_11_0_arm64.whl:

Publisher: release.yml on mabel-dev/rugo

File details

Details for the file rugo-0.1.13-cp39-cp39-musllinux_1_1_x86_64.whl.

File hashes

Hashes for rugo-0.1.13-cp39-cp39-musllinux_1_1_x86_64.whl
Algorithm Hash digest
SHA256 4fc6421ca2fb7f4d4584097230fe7ab2bec2d1c5efd7373bd62ce72956250aef
MD5 f871bac3ebddc08ed7e28ee2a698a5be
BLAKE2b-256 6846f0f1296c0f887ad3f38a889fb2c4fee6181dc991fcb076079056709d0be3

Provenance

The following attestation bundles were made for rugo-0.1.13-cp39-cp39-musllinux_1_1_x86_64.whl:

Publisher: release.yml on mabel-dev/rugo

File details

Details for the file rugo-0.1.13-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.

File hashes

Hashes for rugo-0.1.13-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl
Algorithm Hash digest
SHA256 b037882a7d7ee04fa37e1f1bbb55e80e4741b31a9e79c94fa0a68dc9f3bd9b30
MD5 9825d4258712ede4cdd70fd75d08f9e6
BLAKE2b-256 d5d1d2ac8def02a30801cfed3053065641d4f2619b461a033605be21a4562131

Provenance

The following attestation bundles were made for rugo-0.1.13-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl:

Publisher: release.yml on mabel-dev/rugo

File details

Details for the file rugo-0.1.13-cp39-cp39-macosx_11_0_arm64.whl.

File metadata

  • Download URL: rugo-0.1.13-cp39-cp39-macosx_11_0_arm64.whl
  • Upload date:
  • Size: 352.1 kB
  • Tags: CPython 3.9, macOS 11.0+ ARM64
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.7

File hashes

Hashes for rugo-0.1.13-cp39-cp39-macosx_11_0_arm64.whl
Algorithm Hash digest
SHA256 da3ee70e072dd6fc91547509134da79457073e032a410858dd7a5e05a8759fa1
MD5 34e5e7470eb59e1fcf21d76410a8e114
BLAKE2b-256 1a8039955ed30fe24f7cb1629e50316dc20c66f951de6af22c329d843027ec50

Provenance

The following attestation bundles were made for rugo-0.1.13-cp39-cp39-macosx_11_0_arm64.whl:

Publisher: release.yml on mabel-dev/rugo
