ARM-native bioinformatics library with streaming architecture and evidence-based optimization
Project description
biometal
What Makes biometal Different?
Most bioinformatics tools require you to download entire datasets before analysis. biometal streams data directly from the network, enabling analysis of terabyte-scale datasets on consumer hardware without downloading.
Key Features
- Streaming Architecture (Rule 5)
  - Constant ~5 MB memory footprint regardless of dataset size
  - Analyze 5TB datasets on laptops without downloading
  - 99.5% memory reduction compared to batch processing
- ARM-Native Performance (Rule 1)
  - 16-25× speedup using ARM NEON SIMD
  - Works across Mac (Apple Silicon), AWS Graviton, Ampere, Raspberry Pi
  - Automatic fallback to scalar code on x86_64
- Network Streaming (Rule 6)
  - Stream directly from HTTP/HTTPS sources
  - SRA toolkit integration (no local copy needed)
  - Smart LRU caching minimizes network requests
  - Background prefetching hides latency
- Intelligent I/O (Rules 3-4)
  - 6.5× speedup from parallel bgzip decompression
  - Additional 2.5× from memory-mapped I/O (large files on macOS)
  - Combined 16.3× I/O speedup
- Evidence-Based Design
  - Every optimization validated with statistical rigor (N=30, 95% CI)
  - 1,357 experiments, 40,710 measurements
  - Full methodology: apple-silicon-bio-bench
Quick Start
Rust Installation
[dependencies]
biometal = "1.0"
Python Installation
# Install from PyPI
pip install biometal-rs
# Then import as 'biometal'
python -c "import biometal; print(biometal.__version__)"
Note: The package name is `biometal-rs` on PyPI (the `biometal` name was already taken), but you import it as `biometal` in your Python code. See FAQ for details.
Alternative - Build from source:
pip install maturin
git clone https://github.com/shandley/biometal
cd biometal
maturin develop --release --features python
Requirements:
- Python 3.9+ (tested on 3.14)
- Rust toolchain (for building from source)
Usage
Rust: Basic Usage
use biometal::FastqStream;
// Stream FASTQ from local file (constant memory)
let stream = FastqStream::from_path("large_dataset.fq.gz")?;
for record in stream {
let record = record?;
// Process one record at a time
// Memory stays constant at ~5 MB
}
Network Streaming
use biometal::io::DataSource;
use biometal::FastqStream;
// Stream directly from URL (no download!)
let source = DataSource::Http("https://example.com/huge_dataset.fq.gz".to_string());
let stream = FastqStream::new(source)?;
// Analyze 5TB dataset without downloading
for record in stream {
// Smart caching + prefetching in background
}
SRA Streaming (No Download!)
use biometal::io::DataSource;
use biometal::FastqStream;
// Stream directly from NCBI SRA (no local download!)
let source = DataSource::Sra("SRR390728".to_string()); // E. coli dataset
let stream = FastqStream::new(source)?;
for record in stream {
let record = record?;
// Process 40 MB dataset with only ~5 MB memory
// Background prefetching hides network latency
}
Operations with Auto-Optimization
use biometal::operations;
// ARM NEON automatically enabled on ARM platforms
let counts = operations::base_counting(&sequence)?;
let gc = operations::gc_content(&sequence)?;
// 16-25× faster on ARM, automatic scalar fallback on x86_64
Python: Basic Usage
import biometal
# Stream FASTQ from local file (constant memory)
stream = biometal.FastqStream.from_path("large_dataset.fq.gz")
for record in stream:
    # Process one record at a time
    # Memory stays constant at ~5 MB
    gc = biometal.gc_content(record.sequence)
    print(f"{record.id}: GC={gc:.2%}")
Python: ARM NEON Operations
import biometal
# ARM NEON automatically enabled on ARM platforms
# 16-25× faster on Mac ARM, automatic scalar fallback on x86_64
# GC content calculation
sequence = b"ATGCATGC"
gc = biometal.gc_content(sequence) # 20.3× speedup on ARM
# Base counting
counts = biometal.count_bases(sequence) # 16.7× speedup on ARM
print(f"A:{counts['A']}, C:{counts['C']}, G:{counts['G']}, T:{counts['T']}")
# Quality scoring (Phred+33-encoded quality bytes)
quality = b"IIIIIIII"
mean_q = biometal.mean_quality(quality) # 25.1× speedup on ARM
# K-mer extraction (for ML preprocessing)
kmers = biometal.extract_kmers(sequence, k=6)
print(f"6-mers: {kmers}")
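The mean-quality operation above assumes FASTQ's common Phred+33 encoding, where each quality character's ASCII code minus 33 gives the Phred score. A plain-Python sketch of that computation, useful for sanity-checking results (`mean_phred_quality` is a hypothetical helper for illustration, not biometal's implementation):

```python
def mean_phred_quality(quality: bytes, offset: int = 33) -> float:
    """Mean Phred score of a Phred+33-encoded quality string."""
    if not quality:
        raise ValueError("empty quality string")
    # Iterating a bytes object yields integer ASCII codes directly
    return sum(q - offset for q in quality) / len(quality)

# 'I' is ASCII 73, so its Phred+33 score is 40
print(mean_phred_quality(b"IIIIIIII"))  # → 40.0
```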
Python: Example Workflow
import biometal
# Analyze FASTQ file with streaming (constant memory)
stream = biometal.FastqStream.from_path("data.fq.gz")
total_bases = 0
total_gc = 0.0
high_quality = 0
n_records = 0
for record in stream:
    n_records += 1
    # Count bases (ARM NEON accelerated)
    counts = biometal.count_bases(record.sequence)
    total_bases += sum(counts.values())
    # Calculate GC content (ARM NEON accelerated)
    gc = biometal.gc_content(record.sequence)
    total_gc += gc
    # Check quality (ARM NEON accelerated)
    if biometal.mean_quality(record.quality) > 30.0:
        high_quality += 1
print(f"Total bases: {total_bases}")
print(f"Average GC: {total_gc/n_records:.2%}")
print(f"High quality reads: {high_quality}")
K-mer Operations (Evidence-Based)
biometal provides k-mer operations optimized based on ASBB Entry 034 findings.
Key finding: K-mer operations are data-structure-bound (hash+HashMap), not compute-bound. Unlike element-wise operations (base counting, GC content), k-mers spend 50-60% of runtime on hash computation and 30-40% on data structure operations. Therefore, NEON/GPU provide no benefit.
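To see why this is data-structure-bound, consider what a k-mer spectrum actually does: every window triggers one hash computation and one hash-map update, with almost no arithmetic in between. A minimal pure-Python sketch of the operation (illustrative only, not biometal's implementation):

```python
from collections import Counter

def kmer_spectrum(sequence: bytes, k: int) -> Counter:
    """Count every overlapping k-mer; each window costs one hash + one dict update."""
    return Counter(sequence[i:i + k] for i in range(len(sequence) - k + 1))

spectrum = kmer_spectrum(b"ATGCATGCATGC", 3)
print(spectrum[b"ATG"])  # ATG occurs at positions 0, 4, and 8 → 3
```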
Rust: K-mer Operations
use biometal::operations::kmer::{extract_kmers, extract_minimizers, kmer_spectrum, KmerExtractor};
// 1. Simple k-mer extraction (scalar-only, optimal)
let sequence = b"ATGCATGCATGC";
let kmers = extract_kmers(sequence, 6); // Returns Vec<Vec<u8>>
// 2. Minimizers (minimap2-style, scalar-only)
let minimizers = extract_minimizers(sequence, 6, 10); // k=6, w=10
for minimizer in minimizers {
println!("Position {}: {:?}", minimizer.position, minimizer.kmer);
}
// 3. K-mer spectrum (frequency counting, scalar-only)
let sequences = vec![b"ATGCAT".as_ref(), b"GCATGC".as_ref()];
let spectrum = kmer_spectrum(&sequences, 3); // HashMap<Vec<u8>, usize>
// 4. Parallel extraction (opt-in for large datasets, 2.2× speedup)
let extractor = KmerExtractor::with_parallel(4); // 4 threads (optimal per Entry 034)
let large_dataset: Vec<&[u8]> = vec![/* 10K+ sequences */];
let kmers = extractor.extract(&large_dataset, 6); // 2.2× faster
Python: K-mer Operations
import biometal
# 1. Simple k-mer extraction (scalar-only, optimal)
sequence = b"ATGCATGCATGC"
kmers = biometal.extract_kmers(sequence, k=6) # Returns list[bytes]
print(f"Extracted {len(kmers)} k-mers")
# 2. Minimizers (minimap2-style, scalar-only)
minimizers = biometal.extract_minimizers(sequence, k=6, w=10)
for m in minimizers:
    print(f"Position {m['position']}: {m['kmer']}")
# 3. K-mer spectrum (frequency counting, scalar-only)
sequences = [b"ATGCAT", b"GCATGC"]
spectrum = biometal.kmer_spectrum(sequences, k=3) # Returns dict
print(f"Unique k-mers: {len(spectrum)}")
# 4. Parallel extraction (opt-in for large datasets, 2.2× speedup)
extractor = biometal.KmerExtractor(parallel=True, threads=4)
large_dataset = [...] # 10K+ sequences
kmers = extractor.extract(large_dataset, k=6) # 2.2× faster
Evidence (Entry 034):
- Minimizers: 1.02-1.26× (NEON/Parallel) → Scalar-only
- K-mer Spectrum: 0.95-1.88× (sometimes SLOWER with parallel!) → Scalar-only
- K-mer Extraction: 2.19-2.38× (Parallel-4t) → Opt-in parallel
This validates minimap2's scalar design and identifies a 2.2× optimization opportunity for DNABert preprocessing.
Performance
Memory Efficiency
| Dataset Size | Traditional | biometal | Reduction |
|---|---|---|---|
| 100K sequences | 134 MB | 5 MB | 96.3% |
| 1M sequences | 1,344 MB | 5 MB | 99.5% |
| 5TB dataset | 5,000 GB | 5 MB | 99.9999% |
ARM NEON Speedup (Mac Apple Silicon)
Optimized for Apple Silicon - All optimizations validated on Mac M3 Max (1,357 experiments, N=30):
| Operation | Scalar | NEON | Speedup |
|---|---|---|---|
| Base counting | 315 Kseq/s | 5,254 Kseq/s | 16.7× |
| GC content | 294 Kseq/s | 5,954 Kseq/s | 20.3× |
| Quality filter | 245 Kseq/s | 6,143 Kseq/s | 25.1× |
Cross-Platform Performance (Validated Nov 2025)
| Platform | Base Counting | GC Content | Quality | Status |
|---|---|---|---|---|
| Mac M3 (target) | 16.7× | 20.3× | 25.1× | ✅ Optimized |
| AWS Graviton | 10.7× | 6.9× | 1.9× | ✅ Works (portable) |
| x86_64 Intel | 1.0× | 1.0× | 1.0× | ✅ Works (portable) |
Note: biometal is optimized for Mac ARM (consumer hardware democratization). Other platforms are supported with correct, production-ready code but not specifically optimized. See Cross-Platform Testing Results for details.
I/O Optimization
| File Size | Standard | Optimized | Speedup |
|---|---|---|---|
| Small (<50 MB) | 12.3s | 1.9s | 6.5× |
| Large (≥50 MB) | 12.3s | 0.75s | 16.3× |
Democratizing Bioinformatics
biometal addresses four barriers that lock researchers out of genomics:
1. Economic Barrier
- Problem: Most tools require $50K+ servers
- Solution: Consumer ARM laptops ($1,400) deliver production performance
- Impact: Small labs and LMIC researchers can compete
2. Environmental Barrier
- Problem: HPC clusters consume massive energy (300× excess for many workloads)
- Solution: ARM efficiency inherent in architecture
- Impact: Reduced carbon footprint for genomics research
3. Portability Barrier
- Problem: Vendor lock-in (x86-only, cloud-only tools)
- Solution: Works across ARM ecosystem (Mac, Graviton, Ampere, RPi)
- Impact: No platform dependencies, true portability
4. Data Access Barrier ⭐
- Problem: 5TB datasets require 5TB storage + days to download
- Solution: Network streaming with smart caching
- Impact: Analyze 5TB datasets on 24GB laptops without downloading
Evidence Base
biometal's design is grounded in comprehensive experimental validation:
- Experiments: 1,357 total (40,710 measurements with N=30)
- Statistical rigor: 95% confidence intervals, Cohen's d effect sizes
- Cross-platform: Mac M4 Max, AWS Graviton 3
- Lab notebook: 33 entries documenting full experimental log
See OPTIMIZATION_RULES.md for detailed evidence links.
Full methodology: apple-silicon-bio-bench
Publications (in preparation):
- DAG Framework: BMC Bioinformatics
- biometal Library: Bioinformatics (Application Note) or JOSS
- Four-Pillar Democratization: GigaScience
Platform Support
Optimization Strategy
biometal is optimized for Mac ARM (M1/M2/M3/M4) based on 1,357 experiments on Mac M3 Max. This aligns with our democratization mission: enable world-class bioinformatics on affordable consumer hardware ($1,000-2,000 MacBooks, not $50,000 servers).
Other platforms are supported with portable, correct code but not specifically optimized:
| Platform | Performance | Test Status | Strategy |
|---|---|---|---|
| Mac ARM (M1/M2/M3/M4) | 16-25× speedup | ✅ 121/121 tests pass | Optimized (target platform) |
| AWS Graviton | 6-10× speedup | ✅ 121/121 tests pass | Portable (works well) |
| Linux x86_64 | 1× (scalar) | ✅ 118/118 tests pass | Portable (fallback) |
Feature Support Matrix
| Feature | macOS ARM | Linux ARM | Linux x86_64 |
|---|---|---|---|
| ARM NEON SIMD | ✅ | ✅ | ❌ (scalar fallback) |
| Parallel Bgzip | ✅ | ✅ | ✅ |
| Smart mmap | ✅ | ⏳ | ❌ |
| Network Streaming | ✅ | ✅ | ✅ |
| Python Bindings | ✅ | ✅ | ✅ |
Validation: Cross-platform testing completed Nov 2025 on AWS Graviton 3 and x86_64. All tests pass. See results/cross_platform/FINDINGS.md for full details.
Roadmap
v1.0.0 (Released November 5, 2025) ✅
- Streaming FASTQ/FASTA parsers (constant memory)
- ARM NEON operations (16-25× speedup)
- Network streaming (HTTP/HTTPS, SRA)
- Python bindings (PyO3 0.27, Python 3.9-3.14)
- Cross-platform validation (Mac ARM, Graviton, x86_64)
- Production-grade quality (121 tests, Grade A+)
Future Considerations (Community Driven)
- Extended operation coverage (alignment, assembly)
- Additional format support (BAM/SAM, VCF)
- Publish to crates.io and PyPI
- Metal GPU acceleration (Mac-specific)
SRA Streaming: Analysis Without Downloads
One of biometal's most powerful features is direct streaming from NCBI's Sequence Read Archive (SRA) without local downloads.
Why This Matters
Traditional workflow:
- Download 5 GB SRA dataset → 30 minutes + 5 GB disk space
- Decompress → 15 GB disk space
- Process → Additional memory
- Total: 45 minutes + 20 GB resources before analysis even starts
biometal workflow:
- Start analysis immediately → 0 wait time, ~5 MB memory
- Stream directly from NCBI S3 → No disk space needed
- Background prefetching hides latency → Near-local performance
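The streaming idea itself is simple: read a fixed-size chunk, process it, discard it, so memory stays bounded no matter how large the source. A minimal sketch of the pattern over any file-like object (biometal layers caching and prefetching on top of this; the function name is illustrative):

```python
import io

def stream_chunks(source, chunk_size=64 * 1024):
    """Yield fixed-size chunks; memory use is bounded by chunk_size."""
    while True:
        chunk = source.read(chunk_size)
        if not chunk:
            break
        yield chunk

# The same loop works for a 5TB HTTP body or a small in-memory buffer
data = io.BytesIO(b"A" * 200_000)
total = sum(len(c) for c in stream_chunks(data))
print(total)  # → 200000
```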
Supported Accessions
- SRR (Run): Most common, represents a sequencing run
- SRX (Experiment): Collection of runs
- SRS (Sample): Biological sample
- SRP (Study): Collection of experiments
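These accession types share a simple shape: a three-letter prefix followed by digits. A rough Python check along those lines (a sketch of the idea only; biometal's own `is_sra_accession` may accept additional forms):

```python
import re

# Prefixes for the accession types listed above (run/experiment/sample/study)
SRA_PATTERN = re.compile(r"^(SRR|SRX|SRS|SRP)\d+$")

def looks_like_sra_accession(accession: str) -> bool:
    """Heuristic check for SRA accession strings."""
    return SRA_PATTERN.match(accession) is not None

print(looks_like_sra_accession("SRR390728"))   # → True
print(looks_like_sra_accession("not-an-acc"))  # → False
```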
Basic SRA Usage
use biometal::io::DataSource;
use biometal::operations::{count_bases, gc_content};
use biometal::FastqStream;
// Stream from SRA accession
let source = DataSource::Sra("SRR390728".to_string());
let stream = FastqStream::new(source)?;
for record in stream {
let record = record?;
// ARM NEON-optimized operations (16-25× speedup)
let bases = count_bases(&record.sequence);
let gc = gc_content(&record.sequence);
// Memory: Constant ~5 MB
}
Real-World Example: E. coli Analysis
# Run the E. coli streaming example
cargo run --example sra_ecoli --features network
# Process ~250,000 reads with only ~5 MB memory
# No download required!
See examples/sra_ecoli.rs for the complete example.
Performance Tuning
biometal automatically configures optimal settings for most use cases. For custom tuning:
use biometal::io::{HttpReader, sra_to_url};
let url = sra_to_url("SRR390728")?;
let reader = HttpReader::new(&url)?
.with_prefetch_count(8) // Prefetch 8 blocks ahead
.with_chunk_size(128 * 1024); // 128 KB chunks
// See docs/PERFORMANCE_TUNING.md for detailed guide
SRA URL Conversion
use biometal::io::{is_sra_accession, sra_to_url};
// Check if string is SRA accession
if is_sra_accession("SRR390728") {
// Convert to direct NCBI S3 URL
let url = sra_to_url("SRR390728")?;
// → https://sra-pub-run-odp.s3.amazonaws.com/sra/SRR390728/SRR390728
}
Memory Guarantees
- Streaming buffer: ~5 MB (constant)
- LRU cache: 50 MB (byte-bounded, automatic eviction)
- Prefetch: ~256 KB (4 blocks × 64 KB)
- Total: ~55 MB regardless of SRA file size
Compare to downloading a 5 GB SRA file → 99%+ memory savings
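The byte-bounded LRU cache described above can be sketched as an ordered map that evicts least-recently-used blocks once a byte budget is exceeded. This illustrates the eviction policy only, not biometal's implementation; the class and parameter names are made up for the example:

```python
from collections import OrderedDict
from typing import Optional

class ByteBoundedLru:
    """LRU cache of byte blocks, evicting oldest entries past a byte budget."""

    def __init__(self, capacity_bytes: int):
        self.capacity_bytes = capacity_bytes
        self.used = 0
        self.blocks: OrderedDict[int, bytes] = OrderedDict()

    def get(self, offset: int) -> Optional[bytes]:
        block = self.blocks.get(offset)
        if block is not None:
            self.blocks.move_to_end(offset)  # mark as recently used
        return block

    def put(self, offset: int, block: bytes) -> None:
        if offset in self.blocks:
            self.used -= len(self.blocks.pop(offset))
        self.blocks[offset] = block
        self.used += len(block)
        while self.used > self.capacity_bytes:  # automatic eviction
            _, evicted = self.blocks.popitem(last=False)
            self.used -= len(evicted)

cache = ByteBoundedLru(capacity_bytes=128)
cache.put(0, b"x" * 64)
cache.put(64, b"y" * 64)
cache.put(128, b"z" * 64)          # budget exceeded → block at offset 0 evicted
print(cache.get(0))                 # → None
print(cache.get(64) is not None)    # → True
```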
Examples
| Example | Dataset | Size | Demo |
|---|---|---|---|
| sra_streaming.rs | Demo mode | N/A | Capabilities overview |
| sra_ecoli.rs | E. coli K-12 | ~40 MB | Real SRA streaming |
| prefetch_tuning.rs | E. coli K-12 | ~40 MB | Performance tuning |
Example Use Cases
1. Large-Scale Quality Control
use biometal::{FastqStream, operations};
// Stream 5TB dataset without downloading
let stream = FastqStream::from_url("https://sra.example.com/huge.fq.gz")?;
let mut total = 0;
let mut high_quality = 0;
for record in stream {
let record = record?;
total += 1;
// ARM NEON accelerated (16-25×)
if operations::mean_quality(&record.quality) > 30.0 {
high_quality += 1;
}
}
println!("High quality: {}/{} ({:.1}%)",
high_quality, total, 100.0 * high_quality as f64 / total as f64);
2. BERT Preprocessing Pipeline (DNABert/ML)
use biometal::{FastqStream, operations::kmer};
use biometal::io::DataSource;
// Stream from SRA (no local copy!)
let source = DataSource::Sra("SRR12345678".to_string());
let stream = FastqStream::new(source)?;
// Extract k-mers for DNABert training
for record in stream {
let record = record?;
// Extract overlapping k-mers (Entry 034: scalar-only optimal)
let kmers = kmer::extract_kmers(&record.sequence, 6);
// Feed to BERT training pipeline immediately
// Constant memory even for TB-scale datasets (~5 MB)
}
Python equivalent:
import biometal
stream = biometal.FastqStream.from_path("dataset.fq.gz")
for record in stream:
    # Extract k-mers for DNABert (k=3, 4, 5, or 6 typical)
    kmers = biometal.extract_kmers(record.sequence, k=6)
    # Feed to model - constant memory!
    model.process(kmers)
For large batches (10K+ sequences), use parallel extraction:
# Opt-in parallel for 2.2× speedup (Entry 034)
extractor = biometal.KmerExtractor(parallel=True, threads=4)
sequences = [record.sequence for record in batch]
kmers = extractor.extract(sequences, k=6) # 2.2× faster
3. Metagenomics Filtering
use biometal::{FastqStream, FastqWriter, operations};
let input = FastqStream::from_path("metagen.fq.gz")?;
let mut output = FastqWriter::create("filtered.fq.gz")?;
for record in input {
    let record = record?;
    // Filter low-complexity sequences (ARM NEON accelerated)
    if operations::complexity_score(&record.sequence) > 0.5 {
        output.write(&record)?;
    }
}
// Memory: constant ~5 MB
// Speed: 16-25× faster on ARM
FAQ
Why is the package called biometal-rs on PyPI but biometal everywhere else?
The biometal name was already taken on PyPI when we published v1.0.0, so we used biometal-rs (following the Rust convention). However:
- GitHub repository: shandley/biometal
- Python import: `import biometal` (not `biometal_rs`)
- Rust crate: biometal
- PyPI package: biometal-rs (install name only)
This means you install with:
pip install biometal-rs
But use it as:
import biometal # Not biometal_rs!
This is a common pattern for Rust-based Python packages and provides the best user experience (clean import name).
What platforms are supported?
Pre-built wheels available for:
- macOS ARM (M1/M2/M3/M4) - Optimized with NEON (16-25× speedup)
- macOS x86_64 (Intel Macs) - Scalar fallback
- Linux x86_64 - Scalar fallback
Coming soon:
- Linux ARM (Graviton, Raspberry Pi) - Will be added in v1.0.1
Build from source: All other platforms can build from the source distribution (requires Rust toolchain).
Does it work on Windows?
Currently untested. Building from source may work with the Rust toolchain installed, but we haven't validated it. Community contributions for Windows support are welcome!
Why ARM-native? What about x86_64?
biometal is designed to democratize bioinformatics by enabling world-class performance on consumer hardware. Modern ARM laptops (like MacBooks with M-series chips) cost $1,400 vs $50,000+ for traditional HPC servers.
Performance philosophy:
- Mac ARM (M1/M2/M3/M4): Optimized target - 16-25× NEON speedup
- Other platforms: Correct, production-ready code with scalar fallback
The library works well on x86_64 (all tests pass); it's just not specifically optimized for it. Our mission is enabling field researchers, students, and small labs in LMICs to do cutting-edge work on affordable hardware.
How do I get support?
- Bug reports: GitHub Issues
- Questions: GitHub Discussions
- Documentation: https://docs.rs/biometal
Contributing
We welcome contributions! biometal is built on evidence-based optimization, so new features should:
- Have clear use cases
- Be validated experimentally (when adding optimizations)
- Maintain platform portability
- Follow the optimization rules in OPTIMIZATION_RULES.md
See CLAUDE.md for development guidelines.
License
Licensed under either of:
- Apache License, Version 2.0 (LICENSE-APACHE or http://www.apache.org/licenses/LICENSE-2.0)
- MIT license (LICENSE-MIT or http://opensource.org/licenses/MIT)
at your option.
Citation
If you use biometal in your research, please cite:
@software{biometal2025,
author = {Handley, Scott},
title = {biometal: ARM-native bioinformatics with streaming architecture},
year = {2025},
url = {https://github.com/shandley/biometal}
}
For the experimental methodology, see:
@misc{asbb2025,
author = {Handley, Scott},
title = {Apple Silicon Bio Bench: Systematic Hardware Characterization for Bioinformatics},
year = {2025},
url = {https://github.com/shandley/apple-silicon-bio-bench}
}
Status: v1.0.0 - Production Release 🎉
Released: November 5, 2025
Grade: A+ (rust-code-quality-reviewer)
Tests: 121 passing (87 unit + 7 integration + 27 doc)
Evidence Base: 1,357 experiments, 40,710 measurements
Mission: Democratizing bioinformatics compute
Project details
Download files
Source Distribution
Built Distributions
File details
Details for the file biometal_rs-1.1.0.tar.gz.
File metadata
- Download URL: biometal_rs-1.1.0.tar.gz
- Upload date:
- Size: 423.3 kB
- Tags: Source
- Uploaded using Trusted Publishing? Yes
- Uploaded via: twine/6.1.0 CPython/3.13.7
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | `7ce565b96433c87681b48b8813f7e49dbb480770244dc210931897c28b37c282` |
| MD5 | `665795451a9f5d2bedb551b6489ecf5d` |
| BLAKE2b-256 | `8f4992e00f2916b169a4beb9c486c7eea963f602ebcb96dd4a297be646d33e3f` |
Provenance
The following attestation bundles were made for biometal_rs-1.1.0.tar.gz:
Publisher: publish-pypi.yml on shandley/biometal
- Statement:
  - Statement type: https://in-toto.io/Statement/v1
  - Predicate type: https://docs.pypi.org/attestations/publish/v1
  - Subject name: biometal_rs-1.1.0.tar.gz
  - Subject digest: 7ce565b96433c87681b48b8813f7e49dbb480770244dc210931897c28b37c282
- Sigstore transparency entry: 678304783
- Sigstore integration time:
- Permalink: shandley/biometal@88b62879f7fc09eac50f1ce945ab7d3ad5d4fe7f
- Branch / Tag: refs/tags/v1.1.0
- Owner: https://github.com/shandley
- Access: public
- Token Issuer: https://token.actions.githubusercontent.com
- Runner Environment: github-hosted
- Publication workflow: publish-pypi.yml@88b62879f7fc09eac50f1ce945ab7d3ad5d4fe7f
- Trigger Event: release
File details
Details for the file biometal_rs-1.1.0-cp311-cp311-manylinux_2_34_x86_64.whl.
File metadata
- Download URL: biometal_rs-1.1.0-cp311-cp311-manylinux_2_34_x86_64.whl
- Upload date:
- Size: 3.4 MB
- Tags: CPython 3.11, manylinux: glibc 2.34+ x86-64
- Uploaded using Trusted Publishing? Yes
- Uploaded via: twine/6.1.0 CPython/3.13.7
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | `50a4d6fa3ee1719b72ebccb6ff5ae5a49635c5373e6ec70ba366b5ad1de079e0` |
| MD5 | `7f3ac006029a6c37f598e7b59fb8c3c7` |
| BLAKE2b-256 | `d8d06419a618b35ab81ac7b435474ffb22b863e248705c85ef9bb71ed59321b1` |
Provenance
The following attestation bundles were made for biometal_rs-1.1.0-cp311-cp311-manylinux_2_34_x86_64.whl:
Publisher: publish-pypi.yml on shandley/biometal
- Statement:
  - Statement type: https://in-toto.io/Statement/v1
  - Predicate type: https://docs.pypi.org/attestations/publish/v1
  - Subject name: biometal_rs-1.1.0-cp311-cp311-manylinux_2_34_x86_64.whl
  - Subject digest: 50a4d6fa3ee1719b72ebccb6ff5ae5a49635c5373e6ec70ba366b5ad1de079e0
- Sigstore transparency entry: 678304862
- Sigstore integration time:
- Permalink: shandley/biometal@88b62879f7fc09eac50f1ce945ab7d3ad5d4fe7f
- Branch / Tag: refs/tags/v1.1.0
- Owner: https://github.com/shandley
- Access: public
- Token Issuer: https://token.actions.githubusercontent.com
- Runner Environment: github-hosted
- Publication workflow: publish-pypi.yml@88b62879f7fc09eac50f1ce945ab7d3ad5d4fe7f
- Trigger Event: release
File details
Details for the file biometal_rs-1.1.0-cp311-cp311-macosx_11_0_arm64.whl.
File metadata
- Download URL: biometal_rs-1.1.0-cp311-cp311-macosx_11_0_arm64.whl
- Upload date:
- Size: 1.1 MB
- Tags: CPython 3.11, macOS 11.0+ ARM64
- Uploaded using Trusted Publishing? Yes
- Uploaded via: twine/6.1.0 CPython/3.13.7
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | `85ecbcf7a466063f1838fdd5c9b1284c90b9f0af3b37b5191655aa7d03627710` |
| MD5 | `f582739885f7a985cba5bd47deb2af65` |
| BLAKE2b-256 | `4756763e0da68545afb679a581dc552aaff1ba15b84f10b584c57e11d93859a2` |
Provenance
The following attestation bundles were made for biometal_rs-1.1.0-cp311-cp311-macosx_11_0_arm64.whl:
Publisher: publish-pypi.yml on shandley/biometal
- Statement:
  - Statement type: https://in-toto.io/Statement/v1
  - Predicate type: https://docs.pypi.org/attestations/publish/v1
  - Subject name: biometal_rs-1.1.0-cp311-cp311-macosx_11_0_arm64.whl
  - Subject digest: 85ecbcf7a466063f1838fdd5c9b1284c90b9f0af3b37b5191655aa7d03627710
- Sigstore transparency entry: 678304814
- Sigstore integration time:
- Permalink: shandley/biometal@88b62879f7fc09eac50f1ce945ab7d3ad5d4fe7f
- Branch / Tag: refs/tags/v1.1.0
- Owner: https://github.com/shandley
- Access: public
- Token Issuer: https://token.actions.githubusercontent.com
- Runner Environment: github-hosted
- Publication workflow: publish-pypi.yml@88b62879f7fc09eac50f1ce945ab7d3ad5d4fe7f
- Trigger Event: release
File details
Details for the file biometal_rs-1.1.0-cp311-cp311-macosx_10_12_x86_64.whl.
File metadata
- Download URL: biometal_rs-1.1.0-cp311-cp311-macosx_10_12_x86_64.whl
- Upload date:
- Size: 1.1 MB
- Tags: CPython 3.11, macOS 10.12+ x86-64
- Uploaded using Trusted Publishing? Yes
- Uploaded via: twine/6.1.0 CPython/3.13.7
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | `de94913609b1381743418fbcad92a55d530faf10030823ac3005b5787d6e4e23` |
| MD5 | `05762119d7e8f209ae59614f935396bc` |
| BLAKE2b-256 | `d96404da096b11fa1d17691b7e8e6a078a709a8e989c9f8c9d5a9442a7104316` |
Provenance
The following attestation bundles were made for biometal_rs-1.1.0-cp311-cp311-macosx_10_12_x86_64.whl:
Publisher: publish-pypi.yml on shandley/biometal
- Statement:
  - Statement type: https://in-toto.io/Statement/v1
  - Predicate type: https://docs.pypi.org/attestations/publish/v1
  - Subject name: biometal_rs-1.1.0-cp311-cp311-macosx_10_12_x86_64.whl
  - Subject digest: de94913609b1381743418fbcad92a55d530faf10030823ac3005b5787d6e4e23
- Sigstore transparency entry: 678304843
- Sigstore integration time:
- Permalink: shandley/biometal@88b62879f7fc09eac50f1ce945ab7d3ad5d4fe7f
- Branch / Tag: refs/tags/v1.1.0
- Owner: https://github.com/shandley
- Access: public
- Token Issuer: https://token.actions.githubusercontent.com
- Runner Environment: github-hosted
- Publication workflow: publish-pypi.yml@88b62879f7fc09eac50f1ce945ab7d3ad5d4fe7f
- Trigger Event: release