Parquet Metadata Reader

Reason this release was yanked:

missing vendored dependency

rugo

rugo is a C++17- and Cython-powered file reader for Python. It delivers high-throughput reading for both Parquet files (metadata inspection and an experimental column reader) and JSON Lines files (with schema inference, projection pushdown, and SIMD optimizations). The data-reading API is evolving rapidly and will change in upcoming releases.

Key Features

  • Parquet: Fast metadata extraction backed by an optimized C++17 parser and thin Python bindings.
  • Parquet: Complete schema and row-group details, including encodings, codecs, offsets, bloom filter pointers, and custom key/value metadata.
  • Parquet: Experimental memory-based data reading for PLAIN and RLE_DICTIONARY encoded columns with UNCOMPRESSED, SNAPPY, and ZSTD codecs.
  • JSON Lines: High-performance columnar reader with schema inference, projection pushdown, and SIMD optimizations (~19% faster than the scalar path).
  • JSON Lines: Memory-based processing for zero-copy parsing.
  • Works with file paths, byte strings, and contiguous memoryviews.
  • Optional schema conversion helpers for Orso.
  • No runtime dependencies beyond the Python standard library.

Installation

PyPI

pip install rugo

# Optional extras
pip install rugo[orso]
pip install rugo[dev]

From source

git clone https://github.com/mabel-dev/rugo.git
cd rugo
python -m venv .venv
source .venv/bin/activate
make update
make compile
pip install -e .

Requirements

  • Python 3.9 or newer
  • A C++17 compatible compiler (clang, gcc, or MSVC)
  • Cython and setuptools for source builds (installed by the commands above)
  • On x86-64 platforms, an assembler capable of compiling .S sources (bundled with modern GCC/Clang toolchains)
  • ARM/AArch64 platforms (including Apple Silicon) are fully supported with NEON SIMD optimizations

Quickstart

import rugo.parquet as parquet_meta

metadata = parquet_meta.read_metadata("example.parquet")

print(f"Rows: {metadata['num_rows']}")
print("Schema columns:")
for column in metadata["schema_columns"]:
    print(f"  {column['name']}: {column['physical_type']} ({column['logical_type']})")

first_row_group = metadata["row_groups"][0]
for column in first_row_group["columns"]:
    print(
        f"{column['name']}: codec={column['compression_codec']}, "
        f"nulls={column['null_count']}, range=({column['min']}, {column['max']})"
    )

read_metadata returns dictionaries composed of Python primitives, ready for JSON serialisation or downstream processing.
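Because the result contains only dicts, lists, and primitives, it can be dumped straight to JSON with no custom encoder. A minimal sketch; the metadata dict here is a hand-built stand-in shaped like the documented layout, not real rugo output:

```python
import json

# Hand-built stand-in with the same shape as read_metadata() output.
metadata = {
    "num_rows": 3,
    "schema_columns": [
        {"name": "id", "physical_type": "INT64", "logical_type": "NONE", "nullable": False},
    ],
    "row_groups": [],
}

# Primitives only, so this serialises directly.
as_json = json.dumps(metadata, indent=2)
print(as_json)
```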

Returned metadata layout

{
    "num_rows": int,
    "schema_columns": [
        {
            "name": str,
            "physical_type": str,
            "logical_type": str,
            "nullable": bool,
        },
        ...
    ],
    "row_groups": [
        {
            "num_rows": int,
            "total_byte_size": int,
            "columns": [
                {
                    "name": str,
                    "path_in_schema": str,
                    "physical_type": str,
                    "logical_type": str,
                    "num_values": Optional[int],
                    "total_uncompressed_size": Optional[int],
                    "total_compressed_size": Optional[int],
                    "data_page_offset": Optional[int],
                    "index_page_offset": Optional[int],
                    "dictionary_page_offset": Optional[int],
                    "min": Any,
                    "max": Any,
                    "null_count": Optional[int],
                    "distinct_count": Optional[int],
                    "bloom_offset": Optional[int],
                    "bloom_length": Optional[int],
                    "encodings": List[str],
                    "compression_codec": Optional[str],
                    "key_value_metadata": Optional[Dict[str, str]],
                },
                ...
            ],
        },
        ...
    ],
}

Fields that are not present in the source Parquet file are reported as None. Minimum and maximum values are decoded into Python types when possible; otherwise hexadecimal strings are returned.
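A small helper can normalise this in calling code: treat None as "no statistics" and otherwise print whatever was decoded, whether a Python value or a fallback hexadecimal string. This is a sketch over the layout above, not a rugo API (describe_range is a hypothetical helper):

```python
def describe_range(column):
    """Render the min/max statistics from a rugo column dict."""
    lo, hi = column.get("min"), column.get("max")
    if lo is None and hi is None:
        return "no statistics"
    return f"{lo!r} .. {hi!r}"  # decoded Python values, or hex strings

# Example column dicts shaped like the metadata layout above.
print(describe_range({"min": 1, "max": 99, "null_count": 0}))  # decoded ints
print(describe_range({"min": None, "max": None}))              # stats absent
print(describe_range({"min": "0x00AB", "max": "0x12FF"}))      # undecoded: hex
```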

Parsing options

All entry points share the same keyword arguments:

  • schema_only (default False): return only the top-level schema without row group details.
  • include_statistics (default True): set to False to skip decoding min/max/num_values statistics.
  • max_row_groups (default -1): limit the number of row groups inspected; handy for very large files.

metadata = parquet_meta.read_metadata(
    "large_file.parquet",
    schema_only=False,
    include_statistics=False,
    max_row_groups=2,
)

Working with in-memory data

with open("example.parquet", "rb") as fh:
    data = fh.read()

from_bytes = parquet_meta.read_metadata_from_bytes(data)
from_view = parquet_meta.read_metadata_from_memoryview(memoryview(data))

read_metadata_from_memoryview performs zero-copy parsing when given a contiguous buffer.
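One practical way to obtain such a buffer without copying the file into a bytes object is mmap, which gives a contiguous read-only buffer the OS pages in on demand. A sketch with standard-library pieces only; the file here is a placeholder, so the actual read_metadata_from_memoryview call is shown commented out:

```python
import mmap
import os
import tempfile

# Stand-in file; in practice this is your Parquet file on disk.
path = os.path.join(tempfile.mkdtemp(), "example.bin")
with open(path, "wb") as fh:
    fh.write(b"PAR1...placeholder...PAR1")  # not a valid Parquet file

with open(path, "rb") as fh:
    mm = mmap.mmap(fh.fileno(), 0, access=mmap.ACCESS_READ)
    view = memoryview(mm)
    is_contiguous = view.contiguous  # qualifies for zero-copy parsing
    # parquet_meta.read_metadata_from_memoryview(view)  # with a real file
    view.release()  # release the exported buffer before closing the mmap
    mm.close()

print(is_contiguous)
```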

Prototype Data Decoding (Experimental)

API stability: The column-reading functions are experimental and will change without notice while we expand format coverage.

rugo includes a prototype decoder for reading actual column data from Parquet files. This is a limited, experimental feature designed for simple use cases and testing.

Supported Features

  • ✅ UNCOMPRESSED, SNAPPY, and ZSTD codecs
  • ✅ PLAIN encoding
  • ✅ RLE_DICTIONARY encoding
  • ✅ int32, int64, float32, float64, boolean, and string (byte_array) types
  • ✅ Memory-based processing (load once, decode multiple times)
  • ✅ Column selection (decode only the columns you need)
  • ✅ Multi-row-group support

Unsupported Features

  • ❌ Other codecs (GZIP, LZ4, LZO, BROTLI, etc.)
  • ❌ Delta encoding, PLAIN_DICTIONARY, other advanced encodings
  • ❌ Nullable columns with definition levels > 0
  • ❌ Other types (int96, fixed_len_byte_array, date, timestamp, complex types)
  • ❌ Nested structures (lists, maps, structs)

Primary API: Memory-Based Reading

The recommended approach loads Parquet data into memory once and performs all operations on the in-memory buffer:

import rugo.parquet as rp

# Load file into memory once
with open("data.parquet", "rb") as f:
    parquet_data = f.read()

# Check if the data can be decoded
if rp.can_decode_from_memory(parquet_data):
    
    # Read ALL columns from all row groups
    table = rp.read_parquet(parquet_data)
    
    # Or read SPECIFIC columns only
    table = rp.read_parquet(parquet_data, ["name", "age", "salary"])
    
    # Access the structured data
    print(f"Columns: {table['column_names']}")
    print(f"Row groups: {len(table['row_groups'])}")
    
    # Iterate through row groups and columns
    for rg_idx, row_group in enumerate(table['row_groups']):
        print(f"Row group {rg_idx}:")
        for col_idx, column_data in enumerate(row_group):
            col_name = table['column_names'][col_idx]
            if column_data is not None:
                print(f"  {col_name}: {len(column_data)} values")
            else:
                print(f"  {col_name}: Failed to decode")

Data Structure

The read_parquet() function returns a dictionary with this structure:

{
    'success': bool,                    # True if reading succeeded
    'column_names': ['col1', 'col2'],   # List of column names
    'row_groups': [                     # List of row groups
        [col1_data, col2_data],         # Row group 0: list of columns
        [col1_data, col2_data],         # Row group 1: list of columns
        # ... more row groups
    ]
}

Each column's data is a Python list containing the decoded values.
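Since each row group holds a parallel list of per-column lists, stitching row groups back into whole columns is a simple concatenation. A sketch over the documented structure; concat_columns is a hypothetical helper and the table dict is a hand-built example:

```python
def concat_columns(table):
    """Merge per-row-group column lists into one list per column name."""
    merged = {name: [] for name in table["column_names"]}
    for row_group in table["row_groups"]:
        for name, data in zip(table["column_names"], row_group):
            if data is not None:  # skip columns that failed to decode
                merged[name].extend(data)
    return merged

table = {
    "success": True,
    "column_names": ["id", "name"],
    "row_groups": [
        [[1, 2], ["a", "b"]],  # row group 0
        [[3], ["c"]],          # row group 1
    ],
}
print(concat_columns(table))  # {'id': [1, 2, 3], 'name': ['a', 'b', 'c']}
```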

Performance Benefits

Traditional Approach (Multiple File I/O):

# Each operation reads the file separately
metadata = rp.read_metadata("file.parquet")       # File I/O #1
col1 = rp.decode_column("file.parquet", "col1")   # File I/O #2  
col2 = rp.decode_column("file.parquet", "col2")   # File I/O #3

Memory-Based Approach (Single File I/O):

# Load once, process multiple times
with open("file.parquet", "rb") as f:
    data = f.read()  # File I/O #1 (only)

table = rp.read_parquet(data, ["col1", "col2"])   # In-memory processing

Legacy File-Based API

For backward compatibility, file-based functions are still available:

# Check if a file can be decoded
if rp.can_decode("data.parquet"):
    # Decode a specific column from first row group only
    values = rp.decode_column("data.parquet", "column_name")
    print(values)  # e.g., [1, 2, 3, 4, 5] or ['a', 'b', 'c']

Use Cases

The memory-based API is optimized for:

  • Query engines with metadata-driven pruning
  • ETL pipelines processing multiple Parquet files
  • Data exploration where you need to examine various columns
  • High-performance scenarios minimizing I/O operations
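For metadata-driven pruning, the min/max statistics returned by read_metadata are enough to skip row groups that cannot match a predicate before decoding anything. A pure-Python sketch over the documented metadata layout; select_row_groups is a hypothetical helper and the metadata dict is a hand-built example:

```python
def select_row_groups(metadata, column, lo, hi):
    """Indices of row groups whose [min, max] overlaps [lo, hi]."""
    keep = []
    for idx, row_group in enumerate(metadata["row_groups"]):
        stats = next(c for c in row_group["columns"] if c["name"] == column)
        cmin, cmax = stats["min"], stats["max"]
        if cmin is None or cmax is None:
            keep.append(idx)  # no statistics: cannot prune safely
        elif cmax >= lo and cmin <= hi:
            keep.append(idx)  # range overlaps the predicate
    return keep

metadata = {
    "row_groups": [
        {"columns": [{"name": "age", "min": 0, "max": 17}]},
        {"columns": [{"name": "age", "min": 18, "max": 64}]},
        {"columns": [{"name": "age", "min": None, "max": None}]},
    ]
}
print(select_row_groups(metadata, "age", 21, 30))  # [1, 2]
```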

See examples/memory_based_api_example.py and examples/optional_columns_example.py for complete demonstrations.

Note: This decoder is a prototype for educational and testing purposes. For production use with full Parquet support, use PyArrow or FastParquet.

JSON Lines Reading

rugo includes a high-performance JSON Lines reader with schema inference, projection pushdown, and SIMD optimizations.

Features

  • ✅ Fast columnar reading with C++17 implementation and SIMD optimizations
  • ✅ 19% performance improvement from SIMD optimizations (AVX2/SSE2)
  • ✅ Automatic schema inference from JSON data
  • ✅ Projection pushdown (read only needed columns)
  • ✅ Support for int64, double, string, and boolean types
  • ✅ Native null value handling
  • ✅ Memory-based processing (zero-copy parsing)
  • ✅ Orso schema conversion

Quick Example

import rugo.jsonl as rj

# Sample JSON Lines data
data = b'''{"id": 1, "name": "Alice", "age": 30, "salary": 50000.0}
{"id": 2, "name": "Bob", "age": 25, "salary": 45000.0}
{"id": 3, "name": "Charlie", "age": 35, "salary": 55000.0}'''

# Get schema
schema = rj.get_jsonl_schema(data)
print(f"Columns: {[col['name'] for col in schema]}")
# Output: Columns: ['id', 'name', 'age', 'salary']

# Read all columns
result = rj.read_jsonl(data)
print(f"Read {result['num_rows']} rows with {len(result['columns'])} columns")

# Read with projection (only specific columns)
result = rj.read_jsonl(data, columns=['name', 'salary'])
# Only reads 'name' and 'salary' - projection pushdown!

Working with Files

import rugo.jsonl as rj

# Load file into memory
with open("data.jsonl", "rb") as f:
    jsonl_data = f.read()

# Extract schema
schema = rj.get_jsonl_schema(jsonl_data, sample_size=1000)

# Read specific columns only
result = rj.read_jsonl(jsonl_data, columns=['user_id', 'email', 'score'])

# Access columnar data
for i in range(result['num_rows']):
    user_id = result['columns'][0][i]
    email = result['columns'][1][i]
    score = result['columns'][2][i]
    print(f"User {user_id}: {email} - Score: {score}")
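Indexing result['columns'] by position relies on the order you requested; mapping names onto columns makes access order-independent. A sketch over the documented result shape; columns_by_name is a hypothetical helper and the result dict is a hand-built example:

```python
def columns_by_name(result, names):
    """Map requested column names onto the parallel columns list."""
    return dict(zip(names, result["columns"]))

# Hand-built result shaped like read_jsonl(..., columns=[...]) output.
result = {
    "num_rows": 2,
    "columns": [[101, 102], ["a@x.io", "b@x.io"], [0.5, 0.9]],
}
cols = columns_by_name(result, ["user_id", "email", "score"])
print(cols["email"])  # ['a@x.io', 'b@x.io']
```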

Orso Integration

import rugo.jsonl as rj
from rugo.converters.orso import jsonl_to_orso_schema

# Get JSON Lines schema
jsonl_schema = rj.get_jsonl_schema(data)

# Convert to Orso schema
orso_schema = jsonl_to_orso_schema(jsonl_schema, schema_name="my_table")
print(f"Schema: {orso_schema.name}")
for col in orso_schema.columns:
    print(f"  {col.name}: {col.type}")

Performance

The JSON Lines reader achieves approximately 109K-201K rows/second on wide tables (50 columns), depending on how many columns are projected. With SIMD optimizations (AVX2/SSE2), the reader delivers:

  • Full read (50 cols): ~109K rows/second
  • Projection (10 cols): ~174-191K rows/second
  • Projection (5 cols): ~181-201K rows/second
  • Performance improvement: 19% faster with SIMD optimizations

The SIMD implementation uses:

  • AVX2: Processes 32 bytes at once for newline detection and text parsing (preferred)
  • SSE2: Processes 16 bytes at once (fallback)
  • Scalar fallback: Byte-by-byte processing for non-x86 architectures

Comparison with Opteryx

On 50-column datasets, rugo is 2.7-5.6x faster than Opteryx 0.25.1 (release):

  • Full read: 2.7-3.1x faster
  • Projection (10 cols): 3.8-5.4x faster
  • Projection (5 cols): 3.9-5.6x faster

Note: These benchmarks compare against Opteryx 0.25.1 (PyPI release) which uses a Python-based decoder with csimdjson. The main branch (0.26.0+) includes a new Cython-based fast decoder with SIMD optimizations that is expected to be significantly faster.

rugo's advantages:

  • True projection pushdown: Only parse columns you need
  • Memory-based: No file I/O overhead
  • Zero-copy design: Direct memory-to-column conversion
  • Consistent performance: Maintains throughput across dataset sizes

See PERFORMANCE_COMPARISON.md for detailed benchmark results, JSONL_SIMD_OPTIMIZATIONS.md for SIMD optimization details, and OPTERYX_DECODER_ANALYSIS.md for a technical analysis of Opteryx's Cython decoder and potential improvements.

See examples/read_jsonl.py and benchmarks/compare_opteryx_performance.py for complete demonstrations.

Optional Orso conversion

Install the optional extra (pip install rugo[orso]) to enable Orso helpers:

from rugo.converters.orso import extract_schema_only, rugo_to_orso_schema, jsonl_to_orso_schema

# Parquet to Orso
metadata = parquet_meta.read_metadata("example.parquet")
relation = rugo_to_orso_schema(metadata, "example_table")
schema_info = extract_schema_only(metadata)

# JSON Lines to Orso
import rugo.jsonl as rj
jsonl_schema = rj.get_jsonl_schema(data)
relation = jsonl_to_orso_schema(jsonl_schema, "jsonl_table")

See examples/orso_conversion.py and examples/jsonl_orso_conversion.py for complete walkthroughs.

Development

make update     # install build and test tooling (uses uv under the hood)
make compile    # rebuild the Cython extension with -O3 and C++17 flags
make test       # run pytest-based validation (includes PyArrow comparisons)
make lint       # run ruff, isort, pycln, cython-lint
make mypy       # type checking

make compile clears previous build artefacts before rebuilding the extension in-place.

Project layout

rugo/
├── rugo/__init__.py
├── rugo/parquet/
│   ├── parquet_reader.pyx
│   ├── parquet_reader.pxd
│   ├── parquet_reader.cpp
│   ├── metadata.cpp
│   ├── metadata.hpp
│   ├── bloom_filter.cpp
│   ├── decode.cpp
│   ├── decode.hpp
│   ├── compression.cpp
│   ├── compression.hpp
│   ├── thrift.hpp
│   └── vendor/
├── rugo/jsonl_src/
│   ├── jsonl.pyx
│   ├── jsonl.pxd
│   ├── jsonl_reader.cpp
│   └── jsonl_reader.hpp
├── rugo/converters/orso.py
├── examples/
│   ├── read_parquet_metadata.py
│   ├── read_parquet_data.py
│   ├── read_jsonl.py
│   ├── jsonl_orso_conversion.py
│   ├── create_test_file.py
│   └── orso_conversion.py
├── scripts/
│   ├── generate_test_parquet.py
│   └── vendor_compression_libs.py
├── tests/
│   ├── data/
│   ├── test_all_metadata_fields.py
│   ├── test_bloom_filter.py
│   ├── test_decode.py
│   ├── test_jsonl.py
│   ├── test_jsonl_performance.py
│   ├── test_logical_types.py
│   ├── test_orso_converter.py
│   ├── test_statistics.py
│   └── requirements.txt
├── Makefile
├── pyproject.toml
├── setup.py
└── README.md

Status and limitations

  • Active development status (alpha); APIs are evolving and may change between releases.
  • Parquet: Metadata APIs are largely stable. The column-reading API is experimental and will change.
  • JSON Lines: High-performance reader with SIMD optimizations (19% improvement) and basic type support (int64, double, string, boolean).
  • Requires a C++17 compiler when installing from source or editing the Cython bindings.
  • SIMD optimizations (AVX2/SSE2) are automatically enabled on x86-64 platforms.
  • Bloom filter information is exposed via offsets and lengths; higher-level helpers are planned.

Future Format Support

rugo currently supports Parquet and JSON Lines. We are evaluating additional formats based on optimization opportunities and community demand. See FORMAT_SUPPORT_ANALYSIS.md for a detailed analysis of candidate formats (ORC, Avro, CSV, Arrow IPC) and FORMAT_ROADMAP.md for our planned roadmap. Feedback and feature requests are welcome!

License

Licensed under the Apache License 2.0. See LICENSE for full terms.

Maintainer

Created and maintained by Justin Joyce (@joocer). Contributions are welcome via issues and pull requests.
