Parquet Metadata Reader


rugo

rugo is a C++17 and Cython powered file reader for Python. It delivers high-throughput reading for both Parquet files (metadata inspection and experimental column reader) and JSON Lines files (with schema inference, projection pushdown, and SIMD optimizations). The data-reading API is evolving rapidly and will change in upcoming releases.

Key Features

  • Parquet: Fast metadata extraction backed by an optimized C++17 parser and thin Python bindings.
  • Parquet: Complete schema and row-group details, including encodings, codecs, offsets, bloom filter pointers, and custom key/value metadata.
  • Parquet: Experimental memory-based data reading for PLAIN and RLE_DICTIONARY encoded columns with UNCOMPRESSED, SNAPPY, and ZSTD codecs.
  • JSON Lines: High-performance columnar reader with schema inference, projection pushdown, and SIMD optimizations (roughly 19% faster than the scalar path).
  • JSON Lines: Memory-based processing for zero-copy parsing.
  • Works with file paths, byte strings, and contiguous memoryviews.
  • Optional schema conversion helpers for Orso.
  • No runtime dependencies beyond the Python standard library.

Installation

PyPI

pip install rugo

# Optional extras
pip install rugo[orso]
pip install rugo[dev]

From source

git clone https://github.com/mabel-dev/rugo.git
cd rugo
python -m venv .venv
source .venv/bin/activate
make update
make compile
pip install -e .

Requirements

  • Python 3.9 or newer
  • A C++17 compatible compiler (clang, gcc, or MSVC)
  • Cython and setuptools for source builds (installed by the commands above)
  • On x86-64 platforms, an assembler capable of compiling .S sources (bundled with modern GCC/Clang toolchains)
  • ARM/AArch64 platforms (including Apple Silicon) are fully supported with NEON SIMD optimizations

Quickstart

import rugo.parquet as parquet_meta

metadata = parquet_meta.read_metadata("example.parquet")

print(f"Rows: {metadata['num_rows']}")
print("Schema columns:")
for column in metadata["schema_columns"]:
    print(f"  {column['name']}: {column['physical_type']} ({column['logical_type']})")

first_row_group = metadata["row_groups"][0]
for column in first_row_group["columns"]:
    print(
        f"{column['name']}: codec={column['compression_codec']}, "
        f"nulls={column['null_count']}, range=({column['min']}, {column['max']})"
    )

read_metadata returns dictionaries composed of Python primitives, ready for JSON serialisation or downstream processing.
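
Because the result contains only Python primitives, it serialises with the standard library alone. A minimal sketch using a metadata-shaped dict with illustrative values (rugo itself is not required to run it):

```python
import json

# A dict shaped like read_metadata's output (illustrative values).
metadata = {
    "num_rows": 3,
    "schema_columns": [
        {"name": "id", "physical_type": "INT64", "logical_type": "NONE", "nullable": False},
    ],
    "row_groups": [],
}

# Primitives-only output needs no custom JSON encoders.
as_json = json.dumps(metadata, indent=2)
print(as_json)
```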

Returned metadata layout

{
    "num_rows": int,
    "schema_columns": [
        {
            "name": str,
            "physical_type": str,
            "logical_type": str,
            "nullable": bool,
        },
        ...
    ],
    "row_groups": [
        {
            "num_rows": int,
            "total_byte_size": int,
            "columns": [
                {
                    "name": str,
                    "path_in_schema": str,
                    "physical_type": str,
                    "logical_type": str,
                    "num_values": Optional[int],
                    "total_uncompressed_size": Optional[int],
                    "total_compressed_size": Optional[int],
                    "data_page_offset": Optional[int],
                    "index_page_offset": Optional[int],
                    "dictionary_page_offset": Optional[int],
                    "min": Any,
                    "max": Any,
                    "null_count": Optional[int],
                    "distinct_count": Optional[int],
                    "bloom_offset": Optional[int],
                    "bloom_length": Optional[int],
                    "encodings": List[str],
                    "compression_codec": Optional[str],
                    "key_value_metadata": Optional[Dict[str, str]],
                },
                ...
            ],
        },
        ...
    ],
}

Fields that are not present in the source Parquet file are reported as None. Minimum and maximum values are decoded into Python types when possible; otherwise hexadecimal strings are returned.
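
If a statistic comes back as a hexadecimal string and you know the column's physical type, you can decode it yourself with struct. A hedged sketch (the helper name and hex value are illustrative, not part of rugo's API):

```python
import struct

def decode_int32_stat(hex_value: str) -> int:
    """Decode a little-endian INT32 statistic returned as a hex string."""
    raw = bytes.fromhex(hex_value)
    return struct.unpack("<i", raw)[0]

# 0x0000002a stored little-endian is "2a000000"
print(decode_int32_stat("2a000000"))  # 42
```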

Parsing options

All entry points share the same keyword arguments:

  • schema_only (default False): return only the top-level schema without row group details.
  • include_statistics (default True): set to False to skip decoding min/max/num_values statistics.
  • max_row_groups (default -1): limit the number of row groups inspected; handy for very large files.

metadata = parquet_meta.read_metadata(
    "large_file.parquet",
    schema_only=False,
    include_statistics=False,
    max_row_groups=2,
)

Working with in-memory data

with open("example.parquet", "rb") as fh:
    data = fh.read()

from_bytes = parquet_meta.read_metadata_from_bytes(data)
from_view = parquet_meta.read_metadata_from_memoryview(memoryview(data))

read_metadata_from_memoryview performs zero-copy parsing when given a contiguous buffer.
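
One way to obtain a contiguous buffer without first copying the whole file into a bytes object is mmap. A sketch under that assumption (the placeholder file content is illustrative, and the actual metadata call is omitted):

```python
import mmap
import os
import tempfile

# Write a small stand-in file (placeholder bytes, not a real Parquet file).
with tempfile.NamedTemporaryFile(suffix=".parquet", delete=False) as fh:
    fh.write(b"PAR1 placeholder PAR1")
    path = fh.name

with open(path, "rb") as fh, mmap.mmap(fh.fileno(), 0, access=mmap.ACCESS_READ) as mm:
    view = memoryview(mm)
    assert view.contiguous      # the property read_metadata_from_memoryview relies on
    header = bytes(view[:4])    # slicing copies only the slice, not the file
    view.release()

os.remove(path)
print(header)  # b'PAR1'
```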

Prototype Data Decoding (Experimental)

API stability: The column-reading functions are experimental and will change without notice while we expand format coverage.

rugo includes a prototype decoder for reading actual column data from Parquet files. This is a limited, experimental feature designed for simple use cases and testing.

Supported Features

  • ✅ UNCOMPRESSED, SNAPPY, and ZSTD codecs
  • ✅ PLAIN encoding
  • ✅ RLE_DICTIONARY encoding
  • ✅ int32, int64, float32, float64, boolean, and string (byte_array) types
  • ✅ Memory-based processing (load once, decode multiple times)
  • ✅ Column selection (decode only the columns you need)
  • ✅ Multi-row-group support

Unsupported Features

  • ❌ Other codecs (GZIP, LZ4, LZO, BROTLI, etc.)
  • ❌ Delta encoding, PLAIN_DICTIONARY, other advanced encodings
  • ❌ Nullable columns with definition levels > 0
  • ❌ Other types (int96, fixed_len_byte_array, date, timestamp, complex types)
  • ❌ Nested structures (lists, maps, structs)

Primary API: Memory-Based Reading

The recommended approach loads Parquet data into memory once and performs all operations on the in-memory buffer:

import rugo.parquet as rp

# Load file into memory once
with open("data.parquet", "rb") as f:
    parquet_data = f.read()

# Check if the data can be decoded
if rp.can_decode_from_memory(parquet_data):
    
    # Read ALL columns from all row groups
    table = rp.read_parquet(parquet_data)
    
    # Or read SPECIFIC columns only
    table = rp.read_parquet(parquet_data, ["name", "age", "salary"])
    
    # Access the structured data
    print(f"Columns: {table['column_names']}")
    print(f"Row groups: {len(table['row_groups'])}")
    
    # Iterate through row groups and columns
    for rg_idx, row_group in enumerate(table['row_groups']):
        print(f"Row group {rg_idx}:")
        for col_idx, column_data in enumerate(row_group):
            col_name = table['column_names'][col_idx]
            if column_data is not None:
                print(f"  {col_name}: {len(column_data)} values")
            else:
                print(f"  {col_name}: Failed to decode")

Data Structure

The read_parquet() function returns a dictionary with this structure:

{
    'success': bool,                    # True if reading succeeded
    'column_names': ['col1', 'col2'],   # List of column names
    'row_groups': [                     # List of row groups
        [col1_data, col2_data],         # Row group 0: list of columns
        [col1_data, col2_data],         # Row group 1: list of columns
        # ... more row groups
    ]
}

Each column's data is a Python list containing the decoded values.
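
To stitch the per-row-group columns into one list per column name, a small helper along these lines works on the structure above (the helper name and sample values are illustrative):

```python
def columns_as_dict(table):
    """Concatenate each column across row groups into one list per column name.

    Columns that failed to decode (None) are skipped for that row group.
    """
    out = {name: [] for name in table["column_names"]}
    for row_group in table["row_groups"]:
        for name, column_data in zip(table["column_names"], row_group):
            if column_data is not None:
                out[name].extend(column_data)
    return out

# A dict shaped like read_parquet's output (illustrative values).
table = {
    "success": True,
    "column_names": ["id", "name"],
    "row_groups": [
        [[1, 2], ["a", "b"]],
        [[3], ["c"]],
    ],
}
print(columns_as_dict(table))  # {'id': [1, 2, 3], 'name': ['a', 'b', 'c']}
```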

Performance Benefits

Traditional Approach (Multiple File I/O):

# Each operation reads the file separately
metadata = rp.read_metadata("file.parquet")       # File I/O #1
col1 = rp.decode_column("file.parquet", "col1")   # File I/O #2  
col2 = rp.decode_column("file.parquet", "col2")   # File I/O #3

Memory-Based Approach (Single File I/O):

# Load once, process multiple times
with open("file.parquet", "rb") as f:
    data = f.read()  # File I/O #1 (only)

table = rp.read_parquet(data, ["col1", "col2"])   # In-memory processing

Legacy File-Based API

For backward compatibility, file-based functions are still available:

# Check if a file can be decoded
if rp.can_decode("data.parquet"):
    # Decode a specific column from first row group only
    values = rp.decode_column("data.parquet", "column_name")
    print(values)  # e.g., [1, 2, 3, 4, 5] or ['a', 'b', 'c']

Use Cases

The memory-based API is optimized for:

  • Query engines with metadata-driven pruning
  • ETL pipelines processing multiple Parquet files
  • Data exploration where you need to examine various columns
  • High-performance scenarios minimizing I/O operations

See examples/memory_based_api_example.py and examples/optional_columns_example.py for complete demonstrations.

Note: This decoder is a prototype for educational and testing purposes. For production use with full Parquet support, use PyArrow or FastParquet.

JSON Lines Reading

rugo includes a high-performance JSON Lines reader with schema inference, projection pushdown, and SIMD optimizations.

Features

  • ✅ Fast columnar reading with C++17 implementation and SIMD optimizations
  • ✅ ~19% performance improvement from SIMD optimizations (AVX2/SSE2)
  • ✅ Automatic schema inference from JSON data
  • ✅ Projection pushdown (read only needed columns)
  • ✅ Support for int64, double, string, and boolean types
  • ✅ Native null value handling
  • ✅ Memory-based processing (zero-copy parsing)
  • ✅ Orso schema conversion

Quick Example

import rugo.jsonl as rj

# Sample JSON Lines data
data = b'''{"id": 1, "name": "Alice", "age": 30, "salary": 50000.0}
{"id": 2, "name": "Bob", "age": 25, "salary": 45000.0}
{"id": 3, "name": "Charlie", "age": 35, "salary": 55000.0}'''

# Get schema
schema = rj.get_jsonl_schema(data)
print(f"Columns: {[col['name'] for col in schema]}")
# Output: Columns: ['id', 'name', 'age', 'salary']

# Read all columns
result = rj.read_jsonl(data)
print(f"Read {result['num_rows']} rows with {len(result['columns'])} columns")

# Read with projection (only specific columns)
result = rj.read_jsonl(data, columns=['name', 'salary'])
# Only reads 'name' and 'salary' - projection pushdown!

Working with Files

import rugo.jsonl as rj

# Load file into memory
with open("data.jsonl", "rb") as f:
    jsonl_data = f.read()

# Extract schema
schema = rj.get_jsonl_schema(jsonl_data, sample_size=1000)

# Read specific columns only
result = rj.read_jsonl(jsonl_data, columns=['user_id', 'email', 'score'])

# Access columnar data
for i in range(result['num_rows']):
    user_id = result['columns'][0][i]
    email = result['columns'][1][i]
    score = result['columns'][2][i]
    print(f"User {user_id}: {email} - Score: {score}")
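
The columnar layout above also converts easily into one dict per row. A sketch over a result-shaped dict (illustrative values, assuming the columns come back in the requested order):

```python
def rows_from_result(result, column_names):
    """Zip parallel column lists into one dict per row."""
    for i in range(result["num_rows"]):
        yield {name: result["columns"][j][i] for j, name in enumerate(column_names)}

# A dict shaped like read_jsonl's output for columns=['user_id', 'email'].
result = {"num_rows": 2, "columns": [[1, 2], ["a@x.com", "b@x.com"]]}
for row in rows_from_result(result, ["user_id", "email"]):
    print(row)
```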

Orso Integration

import rugo.jsonl as rj
from rugo.converters.orso import jsonl_to_orso_schema

# Get JSON Lines schema
jsonl_schema = rj.get_jsonl_schema(data)

# Convert to Orso schema
orso_schema = jsonl_to_orso_schema(jsonl_schema, schema_name="my_table")
print(f"Schema: {orso_schema.name}")
for col in orso_schema.columns:
    print(f"  {col.name}: {col.type}")

Performance

The JSON Lines reader achieves approximately 109K-201K rows/second on wide tables (50 columns), with higher throughput on narrower tables. With SIMD optimizations (AVX2/SSE2), the reader delivers:

  • Full read (50 cols): ~109K rows/second
  • Projection (10 cols): ~174-191K rows/second
  • Projection (5 cols): ~181-201K rows/second
  • Performance improvement: 19% faster with SIMD optimizations

The SIMD implementation uses:

  • AVX2: Processes 32 bytes at once for newline detection and text parsing (preferred)
  • SSE2: Processes 16 bytes at once (fallback)
  • Scalar fallback: Byte-by-byte processing for non-x86 architectures

Comparison with Opteryx

On 50-column datasets, rugo is 2.7-5.6x faster than Opteryx 0.25.1 (release):

  • Full read: 2.7-3.1x faster
  • Projection (10 cols): 3.8-5.4x faster
  • Projection (5 cols): 3.9-5.6x faster

Note: These benchmarks compare against Opteryx 0.25.1 (PyPI release) which uses a Python-based decoder with csimdjson. The main branch (0.26.0+) includes a new Cython-based fast decoder with SIMD optimizations that is expected to be significantly faster.

rugo's advantages:

  • True projection pushdown: Only parse columns you need
  • Memory-based: No file I/O overhead
  • Zero-copy design: Direct memory-to-column conversion
  • Consistent performance: Maintains throughput across dataset sizes

See PERFORMANCE_COMPARISON.md for detailed benchmark results, JSONL_SIMD_OPTIMIZATIONS.md for SIMD optimization details, and OPTERYX_DECODER_ANALYSIS.md for a technical analysis of Opteryx's Cython decoder and potential improvements.

See examples/read_jsonl.py and benchmarks/compare_opteryx_performance.py for complete demonstrations.

Optional Orso conversion

Install the optional extra (pip install rugo[orso]) to enable Orso helpers:

from rugo.converters.orso import extract_schema_only, rugo_to_orso_schema, jsonl_to_orso_schema

# Parquet to Orso
metadata = parquet_meta.read_metadata("example.parquet")
relation = rugo_to_orso_schema(metadata, "example_table")
schema_info = extract_schema_only(metadata)

# JSON Lines to Orso
import rugo.jsonl as rj
jsonl_schema = rj.get_jsonl_schema(data)
relation = jsonl_to_orso_schema(jsonl_schema, "jsonl_table")

See examples/orso_conversion.py and examples/jsonl_orso_conversion.py for complete walkthroughs.

Development

make update     # install build and test tooling (uses uv under the hood)
make compile    # rebuild the Cython extension with -O3 and C++17 flags
make test       # run pytest-based validation (includes PyArrow comparisons)
make lint       # run ruff, isort, pycln, cython-lint
make mypy       # type checking

make compile clears previous build artefacts before rebuilding the extension in-place.

Project layout

rugo/
├── rugo/__init__.py
├── rugo/parquet/
│   ├── parquet_reader.pyx
│   ├── parquet_reader.pxd
│   ├── parquet_reader.cpp
│   ├── metadata.cpp
│   ├── metadata.hpp
│   ├── bloom_filter.cpp
│   ├── decode.cpp
│   ├── decode.hpp
│   ├── compression.cpp
│   ├── compression.hpp
│   ├── thrift.hpp
│   └── vendor/
├── rugo/jsonl_src/
│   ├── jsonl.pyx
│   ├── jsonl.pxd
│   ├── jsonl_reader.cpp
│   └── jsonl_reader.hpp
├── rugo/converters/orso.py
├── examples/
│   ├── read_parquet_metadata.py
│   ├── read_parquet_data.py
│   ├── read_jsonl.py
│   ├── jsonl_orso_conversion.py
│   ├── create_test_file.py
│   └── orso_conversion.py
├── scripts/
│   ├── generate_test_parquet.py
│   └── vendor_compression_libs.py
├── tests/
│   ├── data/
│   ├── test_all_metadata_fields.py
│   ├── test_bloom_filter.py
│   ├── test_decode.py
│   ├── test_jsonl.py
│   ├── test_jsonl_performance.py
│   ├── test_logical_types.py
│   ├── test_orso_converter.py
│   ├── test_statistics.py
│   └── requirements.txt
├── Makefile
├── pyproject.toml
├── setup.py
└── README.md

Status and limitations

  • Active development status (alpha); APIs are evolving and may change between releases.
  • Parquet: Metadata APIs are largely stable. The column-reading API is experimental and will change.
  • JSON Lines: High-performance reader with SIMD optimizations (19% improvement) and basic type support (int64, double, string, boolean).
  • Requires a C++17 compiler when installing from source or editing the Cython bindings.
  • SIMD optimizations (AVX2/SSE2) are automatically enabled on x86-64 platforms.
  • Bloom filter information is exposed via offsets and lengths; higher-level helpers are planned.

License

Licensed under the Apache License 2.0. See LICENSE for full terms.

Maintainer

Created and maintained by Justin Joyce (@joocer). Contributions are welcome via issues and pull requests.
