
Parquet Metadata Reader

Project description

rugo


rugo is a C++17- and Cython-powered file reader for Python. It delivers high-throughput reading of both Parquet files (metadata inspection and an experimental column reader) and JSON Lines files (with schema inference, projection pushdown, and SIMD optimizations). The data-reading API is evolving rapidly and will change in upcoming releases.

Key Features

  • Parquet: Fast metadata extraction backed by an optimized C++17 parser and thin Python bindings.
  • Parquet: Complete schema and row-group details, including encodings, codecs, offsets, bloom filter pointers, and custom key/value metadata.
  • Parquet: Experimental memory-based data reading for PLAIN and RLE_DICTIONARY encoded columns with UNCOMPRESSED, SNAPPY, and ZSTD codecs.
  • JSON Lines: High-performance columnar reader with schema inference, projection pushdown, and SIMD optimizations (roughly 19% faster than the scalar path).
  • JSON Lines: Memory-based processing for zero-copy parsing.
  • Works with file paths, byte strings, and contiguous memoryviews.
  • Optional schema conversion helpers for Orso.
  • No runtime dependencies beyond the Python standard library.

Installation

PyPI

pip install rugo

# Optional extras
pip install rugo[orso]
pip install rugo[dev]

From source

git clone https://github.com/mabel-dev/rugo.git
cd rugo
python -m venv .venv
source .venv/bin/activate
make update
make compile
pip install -e .

Requirements

  • Python 3.9 or newer
  • A C++17 compatible compiler (clang, gcc, or MSVC)
  • Cython and setuptools for source builds (installed by the commands above)
  • On x86-64 platforms, an assembler capable of compiling .S sources (bundled with modern GCC/Clang toolchains)
  • ARM/AArch64 platforms (including Apple Silicon) are fully supported with NEON SIMD optimizations

Quickstart

import rugo.parquet as parquet_meta

metadata = parquet_meta.read_metadata("example.parquet")

print(f"Rows: {metadata['num_rows']}")
print("Schema columns:")
for column in metadata["schema_columns"]:
    print(f"  {column['name']}: {column['physical_type']} ({column['logical_type']})")

first_row_group = metadata["row_groups"][0]
for column in first_row_group["columns"]:
    print(
        f"{column['name']}: codec={column['compression_codec']}, "
        f"nulls={column['null_count']}, range=({column['min']}, {column['max']})"
    )

read_metadata returns dictionaries composed of Python primitives, ready for JSON serialisation or downstream processing.
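For instance, serialisation needs no custom encoder. A minimal sketch using a hand-built dictionary in the documented shape (the values are illustrative, not read from a real file):

```python
import json

# A hand-built dictionary matching the metadata layout documented below.
metadata = {
    "num_rows": 3,
    "schema_columns": [
        {"name": "id", "physical_type": "INT64", "logical_type": "NONE", "nullable": False},
    ],
    "row_groups": [],
}

# Every value is a Python primitive, so the structure serialises directly.
serialised = json.dumps(metadata, indent=2)
restored = json.loads(serialised)
```

The round trip is lossless because no custom types appear anywhere in the result.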

Returned metadata layout

{
    "num_rows": int,
    "schema_columns": [
        {
            "name": str,
            "physical_type": str,
            "logical_type": str,
            "nullable": bool,
        },
        ...
    ],
    "row_groups": [
        {
            "num_rows": int,
            "total_byte_size": int,
            "columns": [
                {
                    "name": str,
                    "path_in_schema": str,
                    "physical_type": str,
                    "logical_type": str,
                    "num_values": Optional[int],
                    "total_uncompressed_size": Optional[int],
                    "total_compressed_size": Optional[int],
                    "data_page_offset": Optional[int],
                    "index_page_offset": Optional[int],
                    "dictionary_page_offset": Optional[int],
                    "min": Any,
                    "max": Any,
                    "null_count": Optional[int],
                    "distinct_count": Optional[int],
                    "bloom_offset": Optional[int],
                    "bloom_length": Optional[int],
                    "encodings": List[str],
                    "compression_codec": Optional[str],
                    "key_value_metadata": Optional[Dict[str, str]],
                },
                ...
            ],
        },
        ...
    ],
}

Fields that are not present in the source Parquet file are reported as None. Minimum and maximum values are decoded into Python types when possible; otherwise hexadecimal strings are returned.
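Consumers should therefore treat statistics fields defensively. A small sketch (describe_range is a hypothetical helper, not part of rugo):

```python
def describe_range(column_meta):
    """Summarise min/max statistics from a rugo column-metadata dict.

    Values may be None (absent in the file) or hex strings (when rugo
    could not decode them into native Python types).
    """
    lo, hi = column_meta.get("min"), column_meta.get("max")
    if lo is None or hi is None:
        return "no statistics"
    return f"({lo!r}, {hi!r})"
```
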

Parsing options

All entry points share the same keyword arguments:

  • schema_only (default False): return only the top-level schema without row group details.
  • include_statistics (default True): skip min/max/num_values decoding when set to False.
  • max_row_groups (default -1): limit the number of row groups inspected; handy for very large files.

metadata = parquet_meta.read_metadata(
    "large_file.parquet",
    schema_only=False,
    include_statistics=False,
    max_row_groups=2,
)

Working with in-memory data

with open("example.parquet", "rb") as fh:
    data = fh.read()

from_bytes = parquet_meta.read_metadata_from_bytes(data)
from_view = parquet_meta.read_metadata_from_memoryview(memoryview(data))

read_metadata_from_memoryview performs zero-copy parsing when given a contiguous buffer.
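Since any contiguous buffer works, a memory-mapped file can also be wrapped in a memoryview and handed to read_metadata_from_memoryview without reading the whole file into a bytes object first. A minimal sketch of the buffer handling (the scratch file and its contents are placeholders; the rugo call itself appears only in a comment):

```python
import mmap
import os
import tempfile

# Write a small scratch file to map (contents are placeholder bytes).
fd, path = tempfile.mkstemp()
with os.fdopen(fd, "wb") as fh:
    fh.write(b"PAR1" + b"\x00" * 16 + b"PAR1")

with open(path, "rb") as fh:
    mm = mmap.mmap(fh.fileno(), 0, access=mmap.ACCESS_READ)
    view = memoryview(mm)
    # A memory-mapped file exposes a contiguous buffer, so `view` could be
    # passed to parquet_meta.read_metadata_from_memoryview(view) without
    # copying the file into Python memory.
    is_contiguous = view.contiguous
    view.release()
    mm.close()

os.unlink(path)
```
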

Prototype Data Decoding (Experimental)

API stability: The column-reading functions are experimental and will change without notice while we expand format coverage.

rugo includes a prototype decoder for reading actual column data from Parquet files. This is a limited, experimental feature designed for simple use cases and testing.

Supported Features

  • ✅ UNCOMPRESSED, SNAPPY, and ZSTD codecs
  • ✅ PLAIN encoding
  • ✅ RLE_DICTIONARY encoding
  • ✅ int32, int64, float32, float64, boolean, and string (byte_array) types
  • ✅ Memory-based processing (load once, decode multiple times)
  • ✅ Column selection (decode only the columns you need)
  • ✅ Multi-row-group support

Unsupported Features

  • ❌ Other codecs (GZIP, LZ4, LZO, BROTLI, etc.)
  • ❌ Delta encoding, PLAIN_DICTIONARY, other advanced encodings
  • ❌ Nullable columns with definition levels > 0
  • ❌ Other types (int96, fixed_len_byte_array, date, timestamp, complex types)
  • ❌ Nested structures (lists, maps, structs)

Primary API: Memory-Based Reading

The recommended approach loads Parquet data into memory once and performs all operations on the in-memory buffer:

import rugo.parquet as rp

# Load file into memory once
with open("data.parquet", "rb") as f:
    parquet_data = f.read()

# Check if the data can be decoded
if rp.can_decode_from_memory(parquet_data):
    
    # Read ALL columns from all row groups
    table = rp.read_parquet(parquet_data)
    
    # Or read SPECIFIC columns only
    table = rp.read_parquet(parquet_data, ["name", "age", "salary"])
    
    # Access the structured data
    print(f"Columns: {table['column_names']}")
    print(f"Row groups: {len(table['row_groups'])}")
    
    # Iterate through row groups and columns
    for rg_idx, row_group in enumerate(table['row_groups']):
        print(f"Row group {rg_idx}:")
        for col_idx, column_data in enumerate(row_group):
            col_name = table['column_names'][col_idx]
            if column_data is not None:
                print(f"  {col_name}: {len(column_data)} values")
            else:
                print(f"  {col_name}: Failed to decode")

Data Structure

The read_parquet() function returns a dictionary with this structure:

{
    'success': bool,                    # True if reading succeeded
    'column_names': ['col1', 'col2'],   # List of column names
    'row_groups': [                     # List of row groups
        [col1_data, col2_data],         # Row group 0: list of columns
        [col1_data, col2_data],         # Row group 1: list of columns
        # ... more row groups
    ]
}

Each column's data is a Python list containing the decoded values.
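The row-group-major layout can be flattened into whole-table columns with a short helper (flatten_columns is a hypothetical name, and the table below is hand-built in the documented shape):

```python
def flatten_columns(table):
    """Concatenate each column across row groups, matching the
    read_parquet() result layout: row_groups is a list of row groups,
    each a list of per-column value lists (or None on decode failure)."""
    flat = {name: [] for name in table["column_names"]}
    for row_group in table["row_groups"]:
        for name, column_data in zip(table["column_names"], row_group):
            if column_data is not None:
                flat[name].extend(column_data)
    return flat

# Illustrative input in the documented shape.
table = {
    "success": True,
    "column_names": ["id", "name"],
    "row_groups": [
        [[1, 2], ["a", "b"]],  # row group 0
        [[3], ["c"]],          # row group 1
    ],
}
flat = flatten_columns(table)  # {'id': [1, 2, 3], 'name': ['a', 'b', 'c']}
```
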

Performance Benefits

Traditional Approach (Multiple File I/O):

# Each operation reads the file separately
metadata = rp.read_metadata("file.parquet")       # File I/O #1
col1 = rp.decode_column("file.parquet", "col1")   # File I/O #2  
col2 = rp.decode_column("file.parquet", "col2")   # File I/O #3

Memory-Based Approach (Single File I/O):

# Load once, process multiple times
with open("file.parquet", "rb") as f:
    data = f.read()  # File I/O #1 (only)

table = rp.read_parquet(data, ["col1", "col2"])   # In-memory processing

Legacy File-Based API

For backward compatibility, file-based functions are still available:

# Check if a file can be decoded
if rp.can_decode("data.parquet"):
    # Decode a specific column from first row group only
    values = rp.decode_column("data.parquet", "column_name")
    print(values)  # e.g., [1, 2, 3, 4, 5] or ['a', 'b', 'c']

Use Cases

The memory-based API is optimized for:

  • Query engines with metadata-driven pruning
  • ETL pipelines processing multiple Parquet files
  • Data exploration where you need to examine various columns
  • High-performance scenarios minimizing I/O operations
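The metadata-driven pruning use case can be sketched against the metadata layout documented earlier (candidate_row_groups is a hypothetical helper; the statistics below are illustrative):

```python
def candidate_row_groups(metadata, column, value):
    """Return indices of row groups whose [min, max] range could contain value."""
    keep = []
    for idx, row_group in enumerate(metadata["row_groups"]):
        for col in row_group["columns"]:
            if col["name"] == column:
                lo, hi = col["min"], col["max"]
                # Missing statistics: cannot prune, so keep the row group.
                if lo is None or hi is None or lo <= value <= hi:
                    keep.append(idx)
                break
    return keep

# Illustrative metadata: three row groups with min/max statistics for "age".
metadata = {
    "row_groups": [
        {"columns": [{"name": "age", "min": 0, "max": 17}]},
        {"columns": [{"name": "age", "min": 18, "max": 65}]},
        {"columns": [{"name": "age", "min": None, "max": None}]},
    ]
}
kept = candidate_row_groups(metadata, "age", 30)  # [1, 2]
```

Pruned row groups are never decoded, which is where the single-load, decode-many design pays off.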

See examples/memory_based_api_example.py and examples/optional_columns_example.py for complete demonstrations.

Note: This decoder is a prototype for educational and testing purposes. For production use with full Parquet support, use PyArrow or FastParquet.

JSON Lines Reading

rugo includes a high-performance JSON Lines reader with schema inference, projection pushdown, and SIMD optimizations.

Features

  • ✅ Fast columnar reading with a C++17 implementation and SIMD optimizations (AVX2/SSE2), roughly 19% faster than the scalar path
  • ✅ Automatic schema inference from JSON data
  • ✅ Projection pushdown (read only needed columns)
  • ✅ Support for int64, double, string, and boolean types
  • ✅ Native null value handling
  • ✅ Memory-based processing (zero-copy parsing)
  • ✅ Orso schema conversion

Quick Example

import rugo.jsonl as rj

# Sample JSON Lines data
data = b'''{"id": 1, "name": "Alice", "age": 30, "salary": 50000.0}
{"id": 2, "name": "Bob", "age": 25, "salary": 45000.0}
{"id": 3, "name": "Charlie", "age": 35, "salary": 55000.0}'''

# Get schema
schema = rj.get_jsonl_schema(data)
print(f"Columns: {[col['name'] for col in schema]}")
# Output: Columns: ['id', 'name', 'age', 'salary']

# Read all columns
result = rj.read_jsonl(data)
print(f"Read {result['num_rows']} rows with {len(result['columns'])} columns")

# Read with projection (only specific columns)
result = rj.read_jsonl(data, columns=['name', 'salary'])
# Only reads 'name' and 'salary' - projection pushdown!

Working with Files

import rugo.jsonl as rj

# Load file into memory
with open("data.jsonl", "rb") as f:
    jsonl_data = f.read()

# Extract schema
schema = rj.get_jsonl_schema(jsonl_data, sample_size=1000)

# Read specific columns only
result = rj.read_jsonl(jsonl_data, columns=['user_id', 'email', 'score'])

# Access columnar data
for i in range(result['num_rows']):
    user_id = result['columns'][0][i]
    email = result['columns'][1][i]
    score = result['columns'][2][i]
    print(f"User {user_id}: {email} - Score: {score}")

Orso Integration

import rugo.jsonl as rj
from rugo.converters.orso import jsonl_to_orso_schema

# Get JSON Lines schema
jsonl_schema = rj.get_jsonl_schema(data)

# Convert to Orso schema
orso_schema = jsonl_to_orso_schema(jsonl_schema, schema_name="my_table")
print(f"Schema: {orso_schema.name}")
for col in orso_schema.columns:
    print(f"  {col.name}: {col.type}")

Performance

The JSON Lines reader achieves approximately 109K-201K rows/second on wide tables (50 columns), with higher throughput on narrower tables. With SIMD optimizations (AVX2/SSE2), the reader delivers:

  • Full read (50 cols): ~109K rows/second
  • Projection (10 cols): ~174-191K rows/second
  • Projection (5 cols): ~181-201K rows/second
  • Performance improvement: 19% faster with SIMD optimizations

The SIMD implementation uses:

  • AVX2: Processes 32 bytes at once for newline detection and text parsing (preferred)
  • SSE2: Processes 16 bytes at once (fallback)
  • Scalar fallback: Byte-by-byte processing for non-x86 architectures

Comparison with Opteryx

On 50-column datasets, rugo is 2.7-5.6x faster than Opteryx 0.25.1 (release):

  • Full read: 2.7-3.1x faster
  • Projection (10 cols): 3.8-5.4x faster
  • Projection (5 cols): 3.9-5.6x faster

Note: These benchmarks compare against Opteryx 0.25.1 (PyPI release) which uses a Python-based decoder with csimdjson. The main branch (0.26.0+) includes a new Cython-based fast decoder with SIMD optimizations that is expected to be significantly faster.

rugo's advantages:

  • True projection pushdown: Only parse columns you need
  • Memory-based: No file I/O overhead
  • Zero-copy design: Direct memory-to-column conversion
  • Consistent performance: Maintains throughput across dataset sizes

See PERFORMANCE_COMPARISON.md for detailed benchmark results, JSONL_SIMD_OPTIMIZATIONS.md for SIMD optimization details, and OPTERYX_DECODER_ANALYSIS.md for a technical analysis of Opteryx's Cython decoder and potential improvements.

See examples/read_jsonl.py and benchmarks/compare_opteryx_performance.py for complete demonstrations.

Optional Orso conversion

Install the optional extra (pip install rugo[orso]) to enable Orso helpers:

from rugo.converters.orso import extract_schema_only, rugo_to_orso_schema, jsonl_to_orso_schema

# Parquet to Orso
metadata = parquet_meta.read_metadata("example.parquet")
relation = rugo_to_orso_schema(metadata, "example_table")
schema_info = extract_schema_only(metadata)

# JSON Lines to Orso
import rugo.jsonl as rj
jsonl_schema = rj.get_jsonl_schema(data)
relation = jsonl_to_orso_schema(jsonl_schema, "jsonl_table")

See examples/orso_conversion.py and examples/jsonl_orso_conversion.py for complete walkthroughs.

Development

make update     # install build and test tooling (uses uv under the hood)
make compile    # rebuild the Cython extension with -O3 and C++17 flags
make test       # run pytest-based validation (includes PyArrow comparisons)
make lint       # run ruff, isort, pycln, cython-lint
make mypy       # type checking

make compile clears previous build artefacts before rebuilding the extension in-place.

Project layout

rugo/
├── rugo/__init__.py
├── rugo/parquet/
│   ├── parquet_reader.pyx
│   ├── parquet_reader.pxd
│   ├── parquet_reader.cpp
│   ├── metadata.cpp
│   ├── metadata.hpp
│   ├── bloom_filter.cpp
│   ├── decode.cpp
│   ├── decode.hpp
│   ├── compression.cpp
│   ├── compression.hpp
│   ├── thrift.hpp
│   └── vendor/
├── rugo/jsonl_src/
│   ├── jsonl.pyx
│   ├── jsonl.pxd
│   ├── jsonl_reader.cpp
│   └── jsonl_reader.hpp
├── rugo/converters/orso.py
├── examples/
│   ├── read_parquet_metadata.py
│   ├── read_parquet_data.py
│   ├── read_jsonl.py
│   ├── jsonl_orso_conversion.py
│   ├── create_test_file.py
│   └── orso_conversion.py
├── scripts/
│   ├── generate_test_parquet.py
│   └── vendor_compression_libs.py
├── tests/
│   ├── data/
│   ├── test_all_metadata_fields.py
│   ├── test_bloom_filter.py
│   ├── test_decode.py
│   ├── test_jsonl.py
│   ├── test_jsonl_performance.py
│   ├── test_logical_types.py
│   ├── test_orso_converter.py
│   ├── test_statistics.py
│   └── requirements.txt
├── Makefile
├── pyproject.toml
├── setup.py
└── README.md

Status and limitations

  • Active development status (alpha); APIs are evolving and may change between releases.
  • Parquet: Metadata APIs are largely stable. The column-reading API is experimental and will change.
  • JSON Lines: High-performance reader with SIMD optimizations (19% improvement) and basic type support (int64, double, string, boolean).
  • Requires a C++17 compiler when installing from source or editing the Cython bindings.
  • SIMD optimizations (AVX2/SSE2) are automatically enabled on x86-64 platforms.
  • Bloom filter information is exposed via offsets and lengths; higher-level helpers are planned.

License

Licensed under the Apache License 2.0. See LICENSE for full terms.

Maintainer

Created and maintained by Justin Joyce (@joocer). Contributions are welcome via issues and pull requests.

Download files

Download the file for your platform.

Source Distribution

  • rugo-0.1.14.tar.gz (448.5 kB): Source

Built Distributions

  • rugo-0.1.14-cp312-cp312-musllinux_1_1_x86_64.whl (4.3 MB): CPython 3.12, musllinux: musl 1.1+, x86-64
  • rugo-0.1.14-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (3.4 MB): CPython 3.12, manylinux: glibc 2.17+, x86-64
  • rugo-0.1.14-cp312-cp312-macosx_11_0_arm64.whl (354.7 kB): CPython 3.12, macOS 11.0+, ARM64
  • rugo-0.1.14-cp311-cp311-musllinux_1_1_x86_64.whl (4.3 MB): CPython 3.11, musllinux: musl 1.1+, x86-64
  • rugo-0.1.14-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (3.5 MB): CPython 3.11, manylinux: glibc 2.17+, x86-64
  • rugo-0.1.14-cp311-cp311-macosx_11_0_arm64.whl (354.1 kB): CPython 3.11, macOS 11.0+, ARM64
  • rugo-0.1.14-cp310-cp310-musllinux_1_1_x86_64.whl (4.3 MB): CPython 3.10, musllinux: musl 1.1+, x86-64
  • rugo-0.1.14-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (3.4 MB): CPython 3.10, manylinux: glibc 2.17+, x86-64
  • rugo-0.1.14-cp310-cp310-macosx_11_0_arm64.whl (351.4 kB): CPython 3.10, macOS 11.0+, ARM64
  • rugo-0.1.14-cp39-cp39-musllinux_1_1_x86_64.whl (4.3 MB): CPython 3.9, musllinux: musl 1.1+, x86-64
  • rugo-0.1.14-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (3.4 MB): CPython 3.9, manylinux: glibc 2.17+, x86-64
  • rugo-0.1.14-cp39-cp39-macosx_11_0_arm64.whl (352.2 kB): CPython 3.9, macOS 11.0+, ARM64

File details

Details for the file rugo-0.1.14.tar.gz.

File metadata

  • Download URL: rugo-0.1.14.tar.gz
  • Size: 448.5 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.7

File hashes

Hashes for rugo-0.1.14.tar.gz
Algorithm Hash digest
SHA256 b609fb3a84ff89a0b2f1558c3d93658c8d1c4392d1665b82e8087dca0d8d3ebf
MD5 59940b110eab34560a9ef275f042fef2
BLAKE2b-256 d739152f68d84b4fb58856ce81a5c4122ef5f60051d3ade24c9863077858ab4c


Provenance

The following attestation bundles were made for rugo-0.1.14.tar.gz:

Publisher: release.yml on mabel-dev/rugo

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.

